The machines have taught themselves to make Mario levels

A new research project pushes for better video games through cutting-edge artificial intelligence.

[Screenshot: Wikimedia Commons]

Artificial intelligence isn’t quite ready to put Shigeru Miyamoto out of a job, but it has managed to produce decent Super Mario Bros. levels with little human intervention.

Using a modern AI technique called Generative Adversarial Networks, a group of researchers devised a way to create new Mario levels by analyzing an actual one. They then figured out how to search the results for certain characteristics, such as difficulty. The research shows how AI could create games that automatically adapt to the player’s skill level, or at the very least provide inspiration to human game designers.

Computer-generated levels have been a part of video games for decades, and academics have even competed in years past to make the best Mario level generation algorithms. But in most of those cases, a programmer still had to set up all the parameters in which the computer could do its work. Over the past few years, however, some researchers have taken a different approach, creating AI that can actually learn from existing level designs to understand what a playable Mario level should look like.

“Most [previous] systems involved designing game-specific algorithms, so the twist with the current generation of research is to take a machine learning approach and train generators from example data (which might be provided by artists/designers instead of programmers),” Adam Smith, an assistant professor at University of California, Santa Cruz, who coauthored the Mario paper, says via e-mail.

The Mario project–also known as MarioGAN–is one of two recent attempts to create video game levels using Generative Adversarial Networks, a four-year-old AI technique that many scientists have regarded as a breakthrough. (The other project, as reported by The Register, generates Doom levels.)

GANs are often described as a cop vs. counterfeiter scenario: One neural network looks at a set of training data–in this case, training images derived from a single Mario level–and tries to create new samples based on the characteristics it observes. Meanwhile, a second neural network tries to distinguish between the “real” training data and the new “fake” data. In trying to fool the cop, the counterfeiter learns to make better fakes, which in this case means more realistic Mario levels. (Nintendo did not respond to Fast Company‘s request for comment.)
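For readers curious what that cop-vs-counterfeiter loop looks like in code, here is a minimal sketch in PyTorch. The tile encoding, network sizes, and training details below are placeholder assumptions for illustration, not the actual MarioGAN architecture.

```python
import torch
import torch.nn as nn

# Hypothetical encoding: each training sample is a level segment stored as a
# one-hot grid of tile types, flattened into a single vector.
TILE_TYPES, HEIGHT, WIDTH = 10, 14, 28
LEVEL_SIZE = TILE_TYPES * HEIGHT * WIDTH
LATENT_DIM = 32

generator = nn.Sequential(          # the "counterfeiter"
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, LEVEL_SIZE),
)

discriminator = nn.Sequential(      # the "cop"
    nn.Linear(LEVEL_SIZE, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_levels):
    """One adversarial round. real_levels: (batch, LEVEL_SIZE) tensor."""
    batch = real_levels.size(0)
    fake_levels = generator(torch.randn(batch, LATENT_DIM))

    # The cop learns to label real level segments 1 and generated ones 0.
    d_loss = bce(discriminator(real_levels), torch.ones(batch, 1)) + \
             bce(discriminator(fake_levels.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # The counterfeiter learns to produce segments the cop labels as real.
    g_loss = bce(discriminator(fake_levels), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

As the two losses push against each other, the generator's output drifts from noise toward level segments that share the statistics of the training data.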

[Image: courtesy of MarioGAN]
The researchers then devised a way to search the latent space of the neural network for certain characteristics, such as the number of ground tiles Mario can run and jump on, or the number of jumps required for a computer-controlled player to get through the level without issue. While the results weren’t flawless–some levels were impossible to traverse, and others had broken pipes–the researchers were able to create a level that gradually increased in difficulty. In the future, this approach could allow for an endless level that automatically gets harder over time, or one that emphasizes discovery of hard-to-reach items.
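That search can be sketched in the same spirit. The researchers evolved latent vectors and had a computer-controlled player test the results; the toy version below, which reuses the generator and constants from the earlier sketch, simply random-samples latent vectors and scores each candidate with a made-up fitness: how close its fraction of ground tiles is to a target.

```python
import torch

GROUND_TILE = 0   # assumed index of the "ground" tile type in the encoding

def fitness(level_vector, target_ground_fraction=0.3):
    """Score a generated level by how close its ground coverage is to a target."""
    tiles = level_vector.view(TILE_TYPES, HEIGHT, WIDTH).argmax(dim=0)
    ground_fraction = (tiles == GROUND_TILE).float().mean().item()
    return -abs(ground_fraction - target_ground_fraction)

best_z, best_score = None, float("-inf")
with torch.no_grad():
    for _ in range(500):                     # naive random search, not the
        z = torch.randn(1, LATENT_DIM)       # evolutionary method in the paper
        score = fitness(generator(z)[0])
        if score > best_score:
            best_z, best_score = z, score

    best_level = generator(best_z)           # decode the best latent vector found
```

Swap the fitness function for one that runs an AI agent through the decoded level, and the same loop starts selecting for playability or difficulty rather than tile counts.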

Still, the system has some notable limitations. GANs work best when there’s good sample data to work with, and both the Mario and Doom projects relied on a body of training data that exists for academic work. Although some research does exist on machine learning that doesn’t depend on direct training data, there may be little point in having game designers use AI to make levels if training it requires them to build lots of levels themselves first.

There’s also much more work to be done in getting AI to understand the full range of possible experiences that make for great level design. Optimizing for difficulty is one thing; matching the intent of someone like Miyamoto–who carefully arranged every block and goomba in Super Mario Bros.’ opening moments to elicit surprise, fear, and understanding–is another.

But perhaps that’s just another technical hurdle to overcome.

“It’s not that humans have a monopoly on this skill,” Smith says. “We just don’t have a big data set of the correct behavior to train our system on yet.”

About the author

Jared Newman covers apps and technology for Fast Company from his remote outpost in Cincinnati. He also writes for PCWorld and TechHive, and previously wrote for Time.com.
