James Murphy, the prolific producer and LCD Soundsystem frontman, is known for creating rhythmic, electronic beats. His latest project, however, is yielding a completely different sound, one that’s less dance jam and more glitchy and hypnotic. That’s because instead of writing a hit song, Murphy has been tasked with creating music based on U.S. Open tennis matches.
IBM’s U.S. Open Sessions is an experimental project that addresses a simple question: can you make music from tennis data? To do so, IBM and agency Ogilvy & Mather enlisted Murphy to turn tennis data collected via the IBM cloud into real-time music with a music generator created by Tool of North America.
The resulting sounds, which stream live during each U.S. Open match until the tournament’s end on Sept. 8, are otherworldly, or, as Tool creative director Patrick Gunderson puts it, “like something Danny Elfman would write for a Tim Burton film.” Paired with a visualizer, the sessions serve as an ethereal audio-visual soundtrack of each match. After the tournament ends, the live versions will remain online and shorter versions will be highlighted. Murphy will also take the music produced for 14 of the matches and remix it into what Gunderson refers to as “something a bit more digestible than the three-hour soundscapes we created for the live matches.”
Wanting to understand how data points such as game, set, and match became beeps, boops, and beats, we spoke with Tool creative directors Gunderson and Michael Sevilla, as well as digital executive producer Chris Neff, to get the score.
Gunderson says that when the team at Ogilvy relayed their “crazy idea to create music from tennis data” the Tool team didn’t really know if it would be possible. “But we had some interesting ideas about how we could accomplish it, so after we got the brief I spent all night building a prototype. It uses some principles of generative art that I’ve been using in my visual work for some time, but combined with principles of music theory to make an initial composition.”
“We created a set of initial conditions from information that will be present at the beginning of every match: Player names and seeds, temperature, court name, etc. to determine things like instrumentation, tempo, rhythm periods, and song key, and the sequencer is set in motion,” says Gunderson. “There are six instruments in any given match, one for each player and the rest to support and fill out the sound. As the tennis match plays out we extract data from every point: game score, set score, time between points, and significant events like aces and break points to further alter the track as the game plays out.”
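Gunderson doesn’t detail the exact mapping, but the idea of deterministically deriving musical parameters from pre-match data can be sketched roughly. This is a hypothetical illustration, not the actual system: the function name, the hash-based mapping, and the temperature rule are all assumptions made for the example.

```python
import hashlib

KEYS = ["C", "D", "E", "F", "G", "A", "B"]
MODES = ["major", "minor", "dorian", "mixolydian"]

def initial_conditions(player1, player2, court, temperature_f):
    """Hypothetical sketch: map pre-match data (player names, court,
    temperature) to musical parameters, deterministically, so the same
    match always seeds the same composition."""
    digest = hashlib.sha256(f"{player1}|{player2}|{court}".encode()).digest()
    tempo = 90 + digest[0] % 60               # 90-149 BPM
    key = KEYS[digest[1] % len(KEYS)]
    mode = MODES[digest[2] % len(MODES)]
    # Invented rule for illustration: cold evening sessions lean minor.
    if temperature_f < 65:
        mode = "minor"
    return {"tempo": tempo, "key": key, "mode": mode}

conditions = initial_conditions("Player A", "Player B", "Arthur Ashe", 78)
```

Once these initial conditions are fixed, per-point events (aces, break points, score changes) would then perturb the running sequence rather than restart it.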
With James Murphy on board to interpret the raw data into music, the tool had to be easy to use, says Gunderson. His first prototype was clunky and needed to be reworked.
“If I wanted to change even the smallest setting I had to dive into the code to manually change it. During my initial conversations with James, I gained some insights into how he works. He showed me a number of the vintage synths he has in his studio, how they work, and how he uses them. At that point, I knew that if James were to have any appreciable impact on this project, he being based in NY, and me in LA, I’d have to build something more intuitive for him to use. From there I set off to the lab to build a brand new instrument for James to play. It has a familiar set of knobs and switches, so that was a start. But what those knobs did to affect the music was something entirely new.”
Gunderson says that instead of directly affecting the music beat-for-beat as one does with a traditional sequencer, his new instrument uses some principles of music theory and automatically generates the beat and pitch patterns. “James learned in less than a day how the auto-sequencer worked and how all the settings affected the output. From that point forward, James would make new groups of settings for me to build into the system and offer feedback. He and his team also put together dozens of sound-banks for the sequencer to sample from.”
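The distinction Gunderson draws — high-level settings driving automatic note generation, rather than placing notes beat-for-beat — can be sketched as follows. Everything here (the settings names, the scale constraint, the density knob) is an assumption for illustration, not the actual instrument.

```python
import random

# Semitone offsets of the major scale within one octave; constraining
# generated notes to a scale is the "music theory" part of the idea.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]

def auto_sequence(settings, length=16):
    """Hypothetical auto-sequencer sketch: knob-style settings (root note,
    density, seed) generate a bar of MIDI notes; None means a rest."""
    rng = random.Random(settings.get("seed", 0))
    root = settings["root_midi"]
    density = settings["density"]  # 0.0-1.0: chance that a step sounds
    pattern = []
    for _ in range(length):
        if rng.random() < density:
            degree = rng.choice(MAJOR_SCALE)
            octave = rng.choice([0, 12])
            pattern.append(root + degree + octave)
        else:
            pattern.append(None)
    return pattern

bar = auto_sequence({"root_midi": 60, "density": 0.6, "seed": 42})
```

A performer tweaking "density" or swapping the scale changes the character of the output without ever specifying individual notes — which is what made the instrument learnable in a day.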
For the visuals, Sevilla says he began experimenting with multiple visual directions for the site design as well as the music visualizer and the significant moments of the matches. “It’s one thing to create dynamic visuals that are synced to music; it’s an entirely more challenging and complex thing to do the same but also tell the story of each play. Aces, double faults, game points, match points, break points . . . every major moment of match data that was being translated into music needed a graphic animation to interject into the visuals and give context to the entire experience,” says Sevilla. “Since we’re creating the music note-for-note in real time the visuals you see are tied to the music and data in a much more intimate way than most music visualizers.”
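The event-to-animation relationship Sevilla describes amounts to a lookup from match events to graphic cues. A minimal sketch, with entirely invented animation names, might look like this:

```python
# Hypothetical mapping: each significant match event triggers a named
# graphic animation alongside its musical accent. The animation names
# here are made up for illustration.
EVENT_ANIMATIONS = {
    "ace": "burst",
    "double_fault": "glitch",
    "break_point": "ripple",
    "game_point": "pulse",
    "match_point": "flare",
}

def animations_for(events):
    """Translate a stream of match events into animation cues,
    ignoring events with no visual treatment."""
    return [EVENT_ANIMATIONS[e] for e in events if e in EVENT_ANIMATIONS]

print(animations_for(["ace", "rally", "break_point"]))  # ['burst', 'ripple']
```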
Working with real-time data from the IBM Cloud, everything about the U.S. Open Sessions moved quickly. “Most users probably have no idea that the data feeding our site is independent of what they would see on an ESPN or stat tracker. It is completely unique and even faster than other game tracking tools,” says Neff.
Speed is also a factor in playback. Archived matches are played back in a tenth of the time, says Neff, and pressing Q on the keyboard further speeds up playback, letting users jump forward by 50 points.
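Mechanically, that kind of compressed playback amounts to rescaling the recorded timestamps and clamping a skip-ahead cursor. A minimal sketch, assuming a simple list of per-point timestamps (the function names and data shape are invented for illustration):

```python
def compressed_schedule(point_times, factor=10):
    """Rescale real-match point timestamps (seconds) so the archived
    match plays back in 1/factor of the original time."""
    start = point_times[0]
    return [(t - start) / factor for t in point_times]

def skip_forward(index, n_points, total):
    """Advance the playback cursor by n_points (e.g. the 50-point jump
    bound to the Q key), clamped to the last point of the match."""
    return min(index + n_points, total - 1)

times = [0, 30, 75, 130]           # seconds elapsed at each point
print(compressed_schedule(times))  # [0.0, 3.0, 7.5, 13.0]
```

The events still fire in order with proportional spacing, which is why the compressed version preserves the feel of the match’s evolution.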
“The coolest part about this is that the audio and visual experience is really consistent with what you would experience during a full match, but the events hit much quicker. It is like a more intense or compact audio/visual experience with the same type of elements as the predecessor. Try watching an archived match in real speed and then quicken the pace. It gives you a really great sense of the evolution of a match. It’s also tennis 101 for those who understand the game fully.”