Amazon’s kooky new keyboard lets humans and AI write music together

This gadget from Amazon Web Services shows off the power of AI by helping developers compose everything from jazz to rock—with help from a neural network.

[Photo: courtesy of Amazon]

Amazon has come up with an entertaining but unlikely way to get developers excited about using its AI services: music.


At its AWS re:Invent event in Las Vegas on Sunday night, the company unveiled a new music keyboard called DeepComposer, which developers can use to compose music in collaboration with Amazon generative AI models running in the cloud. The keyboard plugs into a PC, where the human can use a control panel to communicate and collaborate with the AI models. Amazon calls DeepComposer “the world’s first machine learning-enabled musical keyboard.”

Generative AI creates art, text, music, or other content based on examples fed to it via machine-learning models. In this case, the AWS models can create a four-part accompaniment around a simple melody or chord progression a developer might play on the keyboard. The accompaniment can be in one of four genres: rock, pop, jazz, or classical.

Mike Miller [Photo: Mark Sullivan]
But the point isn't to make great music (DeepComposer doesn't exactly do that) but to get developers familiar with the AI models that create the tunes.

“This whole system, DeepComposer, is designed for giving developers a hands-on opportunity to learn about this new technology while at the same time having fun with music,” said AWS engineer Mike Miller, who moved from Amazon’s consumer-electronics design group at Lab126 to AWS in order to create the generative AI models for music creation.

“It walks developers through the process of using generative AI by starting with just an understanding of how the creative process works, understanding how generative models are trained,” Miller said. “And then taking that knowledge and allowing them to apply it to other domains and train their own models.”

The keyboard itself resembles the sort of cheap mini keyboards some people use to play virtual instruments within a digital audio workstation (DAW) app like GarageBand or Pro Tools. It’s plasticky and the keys are too small for any serious playing. But, again, it’s meant for developers, not musicians.


Big-data music

How does the model know how to create the original music and conform it to a specific style? By ingesting lots and lots of training data. Miller and his team used thousands of public-domain pieces of music from various genres as training data, feeding them into the model as MIDI files containing eight-measure chunks of the music. The neural network then processed the data and learned from patterns in the various types of music.
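To make the "eight-measure chunks" idea concrete, here's a toy sketch of that preprocessing step. The note representation (a list of start-beat/pitch pairs) and the 4-beats-per-measure assumption are illustrative stand-ins, not Amazon's actual MIDI pipeline:

```python
# Toy sketch: split a melody into eight-measure training chunks.
# Assumes 4 beats per measure, so one chunk spans 32 beats.
# Notes are (start_beat, midi_pitch) pairs -- a stand-in for parsed MIDI.

BEATS_PER_MEASURE = 4
MEASURES_PER_CHUNK = 8
CHUNK_BEATS = BEATS_PER_MEASURE * MEASURES_PER_CHUNK  # 32 beats

def chunk_notes(notes):
    """Group notes into eight-measure windows by their start time."""
    chunks = {}
    for start_beat, pitch in notes:
        index = int(start_beat // CHUNK_BEATS)
        chunks.setdefault(index, []).append((start_beat, pitch))
    return [chunks[i] for i in sorted(chunks)]

# A 16-measure (64-beat) melody becomes two training chunks.
melody = [(beat, 60 + beat % 12) for beat in range(0, 64, 2)]
chunks = chunk_notes(melody)
print(len(chunks))  # 2
```

In a real pipeline the notes would come from a MIDI parser and the chunking would respect tempo and time-signature metadata, but the principle is the same: the corpus is sliced into fixed-length musical windows before training.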

DeepComposer uses a generative adversarial network (GAN) to create original accompaniment based on the user’s input on the keyboard.

Hardware-wise, DeepComposer looks like a pretty typical music keyboard. [Photo: courtesy of Amazon]
Miller told me that the GAN actually contains two models. One acts as a “generator” that creates the musical notes and phrases. The other acts as a “discriminator” that continually gives feedback to the generator to push it toward creating music that’s more consistent with the genre. Miller said he thinks of the “generator” as an orchestra that makes sounds and of the “discriminator” as a conductor that’s constantly guiding and correcting the players.
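The orchestra-and-conductor loop can be sketched in a few lines. This is emphatically not Amazon's model (a real GAN trains two neural networks against each other); it's a toy where the "discriminator" scores phrases against a hard-coded stand-in for a genre (the C major scale) and the "generator" keeps only the random mutations the discriminator approves of:

```python
import random

# Toy illustration of the generator/discriminator feedback loop.
# The C major scale stands in for a "genre" the discriminator has learned.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def discriminator(phrase):
    """Score a phrase: the fraction of its notes that fit the target scale."""
    return sum(1 for p in phrase if p % 12 in C_MAJOR) / len(phrase)

def generator_step(phrase, rng):
    """Propose a random one-note mutation; keep it only if the score doesn't drop."""
    candidate = list(phrase)
    candidate[rng.randrange(len(candidate))] = rng.randrange(48, 72)
    return candidate if discriminator(candidate) >= discriminator(phrase) else phrase

rng = random.Random(0)
phrase = [rng.randrange(48, 72) for _ in range(16)]  # random starting notes
for _ in range(500):  # rounds of "conductor" feedback
    phrase = generator_step(phrase, rng)

print(discriminator(phrase))  # climbs toward 1.0 over the feedback rounds
```

The key dynamic survives the simplification: the generator's output starts out noisy and drifts toward the genre only because the discriminator keeps rejecting what doesn't fit.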

Jamming with the GAN

I visited Miller at Amazon’s offices in Palo Alto so that I could try out DeepComposer myself. When I sat down at the keyboard and gave the AI a melody to improvise around, I won’t say we made beautiful music together. But we did make interesting music, and it was fascinating to watch the AI making real musical and stylistic choices, almost in real time.

A few examples:

Jazz. After I fed a simple chord progression into the model (right now it’s only possible to create eight-measure improvisations), I asked it to create an accompaniment in the jazz genre. Even though the progression I’d played was uncomplicated, the resulting accompaniment I heard bore little relation to it. The chords I heard the piano playing were, well, bizarre. It sounded like someone pressing down as many keys as possible with both hands, then letting up, then repeating.


Pop. Miller told me the models had advanced further in learning to play in the pop and rock genres. I played a flowery pop melody on the keyboard. This time the accompaniments generated by the model were clearly related to what I had played on the keyboard. The drummer decided to dispense with the snare drum backbeat altogether and instead play fast sequences on a cymbal. The keyboard player hit a few clams.

Rock. On one song the AI surprised me by playing a half-time rhythm behind the rapid succession of notes I’d played on the keyboard. I heard a guitar play a little improvised lick that sounded original. On the other hand, I heard that same AI guitar suddenly squawk out a couple of completely random and discordant notes. With more training, the discriminator in the model might recognize those notes as errors and correct the generator.

After playing with DeepComposer for a half hour I could hear how the generator and the discriminator were working together. Miller played me the generator’s performance after progressively more rounds of feedback and correction, and I could hear the music improve, both musically and stylistically.

I also got the sense that neural networks find their path to creating pleasing, genre-faithful music very differently than humans do. The model’s mistakes are jarring and out of context, but it’s also capable of strikingly innovative flourishes. Most human musicians, in my experience, hide in the safety of derivativeness until they’ve gained enough experience to take a fledgling leap at originality.

Making AI fun

Miller told me that DeepComposer is a way to get developers to see the possibilities of using AWS’s cloud AI services in general, not to promote a specific generative AI service. AI can feel daunting to some developers because of the complexity of the models and the idea that designing them is as much art as science. AWS is trying to use fun to break down the barriers.

AWS’s DeepComposer, DeepLens, and DeepRacer. [Photo: courtesy of Amazon]
AWS has used that approach before. At last year’s re:Invent, AWS offered developers a mini race car called DeepRacer and challenged them to create reinforcement learning models that guide the car safely around a track. The previous year, it announced the DeepLens camera, on which developers could run their own computer-vision AI models to recognize objects.


Amazon will be selling the DeepComposer keyboard (with the companion cloud AI service). It’s meant for developers, but there’s nothing stopping curious non-techies from buying one. Amazon had not yet set the price at the time of this writing.