
Unity Technologies has linked its game engine to machine learning software, in order to train better virtual characters and physical robots.

By Sean Captain | 4 minute read

For years, video game developers have used artificial intelligence to animate the characters a player encounters. But these non-playable characters, or NPCs, have been driven by sets of rules hand-coded by humans. Using the AI technology du jour, machine learning, future NPCs will program and reprogram their own rules based on what they experience in games, getting smarter the longer they play.

So says Danny Lange, the VP of AI and machine learning at Unity Technologies, a major maker of game “engine” software that handles the underlying mechanics of titles like Firewatch and ChronoBlade. Today the company announced Unity Machine Learning Agents—open-source software linking its game engine to machine learning programs such as Google’s TensorFlow. It will allow non-playable characters, through trial and error, to develop better, more creative strategies than a human could program, says Lange, using a branch of machine learning called deep reinforcement learning.

Unity’s new AI-linking tool isn’t confined to virtual characters. The software can also speed up the development of real-life robots, like self-driving cars, says Lange, by training them relentlessly in sprawling, computer-generated—but lifelike—virtual landscapes.

Unity used machine learning to devise strategies by assessing scenes from multiple angles—a bird's-eye view (left) and a first-person perspective (right)—in this unreleased tank battle game.

Unity didn’t invent these technologies, but it’s made them easier to use, says the company. Google’s DeepMind, for instance, has used deep reinforcement learning to teach AI agents to play 1980s video games like Breakout, and, in part, to master the notoriously challenging ancient Chinese game Go.

There are also many examples of training self-driving systems in game-like environments. MSC Software’s Virtual Test Drive application provides simulations for car training. Games like The Open Racing Car Simulator and Euro Truck Simulator 2 are also being used for virtual training of autonomous cars. And Nvidia’s new Isaac Lab uses rival Epic Games’ Unreal Engine to generate lifelike virtual environments for training the algorithms that control actual robots.

Lange promises that the new ML-Agents tools, now available in beta on GitHub, will eliminate days or even weeks of hacking together links between a game engine and AI software. “What we’re trying to do here is get to that point within an hour,” he says, making it easier for more people to experiment with developing a better game character or training a robot.
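The glue code Lange is describing boils down to a loop in which learning code observes the game, sends back an action, and receives a reward. A minimal sketch of that loop, with entirely hypothetical names (`GameEnv`, `reset`, and `step` are illustrative stand-ins, not Unity's API):

```python
# Hypothetical sketch of the agent/environment loop a bridge like
# ML-Agents exposes to Python-side learning code. The class and method
# names are illustrative, not Unity's actual API.
import random

class GameEnv:
    """Stand-in for a game engine exposing observations, actions, rewards."""
    def __init__(self):
        self.position = 0

    def reset(self):
        self.position = 0
        return self.position              # initial observation

    def step(self, action):
        self.position += action           # action: -1 or +1
        reward = 1 if self.position >= 3 else 0
        done = self.position >= 3         # episode ends at the goal
        return self.position, reward, done

env = GameEnv()
obs = env.reset()
total_reward = 0
for _ in range(10):                       # one episode, at most 10 steps
    action = random.choice([-1, 1])       # a learner would choose this
    obs, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
```

Once such a loop exists, any learning algorithm that speaks the observe/act/reward protocol can be plugged in on the Python side.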

Smarter Games

Unity showed an example of deep reinforcement learning’s potential earlier this year with a simplified knock-off of the Unity-based mobile game Crossy Road, itself a knock-off of 1980s hit Frogger.

A chicken has to cross an endlessly wide road, gaining a point every time it hits a gift box and losing a point every time it runs into a truck. With the mandate to maximize the score, the learning process begins.

At first, the chicken flits around like a drunken moth, going backwards and forwards and colliding with gifts and trucks in equal measure. After a few hours of trial and error, coupled with machine learning to identify the best tactics, the bird sails through the game with godlike power.
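The demo's setup can be sketched in miniature: +1 for reaching a gift, -1 for hitting a truck, and an agent that mixes random exploration with exploiting its best estimate so far. This is an illustrative epsilon-greedy sketch, not Unity's actual training code:

```python
# Toy version of the reward setup described above: the agent learns by
# trial and error which move maximizes its score. Epsilon-greedy sketch,
# invented for illustration.
import random

REWARDS = {"gift": 1, "truck": -1, "empty": 0}
LANES = ["truck", "empty", "gift"]        # what each move runs into

def train(episodes=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    value = [0.0] * len(LANES)            # running value estimate per move
    counts = [0] * len(LANES)
    for _ in range(episodes):
        if rng.random() < epsilon:        # explore: try a random move
            a = rng.randrange(len(LANES))
        else:                             # exploit: best estimate so far
            a = max(range(len(LANES)), key=lambda i: value[i])
        r = REWARDS[LANES[a]]
        counts[a] += 1
        value[a] += (r - value[a]) / counts[a]   # incremental mean
    return value

values = train()
best = max(range(len(LANES)), key=lambda i: values[i])
```

Early on the agent stumbles into trucks as often as gifts; after enough episodes its value estimates single out the rewarding move, which is the same dynamic behind the chicken's eventual "godlike" runs.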


More complex non-playable characters could be trained on subtler goals, says Lange, such as maximizing playtime for the humans in a first-person shooter game.

“It will probably develop some strategy where it’s going to show itself in surprising ways, and you’re going to chase it, but you won’t catch it, and it won’t kill you right away,” says Lange. “You open the door for more creative behavior, which you could not possibly even imagine; or it would be very, very labor intensive to implement in traditional code.”

Don’t expect such autodidact virtual opponents soon. Building NPCs with deep reinforcement learning is still a science experiment for academics and tech company research teams. But the process might speed up if Unity’s ML-Agents make it easier for its millions of registered developers, even those without big budgets, to experiment.

Smarter Robots

Video game engines like Unity and Unreal can now model real-world physics with extreme precision. From the interplay of light and landscape to the friction between a rubber tire and concrete road, games provide virtual environments that are accurate enough to train a real-world robot.

Using a process called procedural rendering, a game engine can synthesize, on the fly, essentially unlimited miles of photo-realistic road to traverse. Machine learning software analyzes the video feeds from games and learns how to accurately interpret what it sees.
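That on-the-fly synthesis can be pictured as an endless generator: instead of storing miles of road, the engine keeps producing new segments for as long as training runs. The segment features below are made up for illustration:

```python
# Schematic sketch of procedural synthesis: an infinite, seeded stream of
# road segments. The feature names (curvature, surface, lanes) are
# invented for illustration.
import itertools
import random

def road_segments(seed=0):
    """Yield an endless stream of randomized road segments."""
    rng = random.Random(seed)
    while True:
        yield {
            "curvature": rng.uniform(-0.1, 0.1),   # gentle bends
            "surface": rng.choice(["asphalt", "concrete", "gravel"]),
            "lanes": rng.randint(1, 4),
        }

# Take as many "miles" as training needs -- here, three segments.
first_three = list(itertools.islice(road_segments(), 3))
```

Seeding the generator makes runs reproducible while still giving the learner effectively unlimited variety.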


“It’s very similar to when you have a vehicle driving around in San Francisco capturing that on video,” says Lange, who was head of machine learning at Uber before he left for Unity in December 2016. “But the Uber guys, what we would have to do is go home and hire contractors to label that video data.” People have to tag every tree, car, pedestrian, sidewalk, lane divider, etc., so the learning software knows what it’s seeing and develops techniques to recognize them. In virtual training, every object in a scene is already labeled because software like Unity or Unreal generated a photo-realistic version of it.
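The labeling advantage can be sketched schematically: a synthetic renderer already knows the class of every object it draws, so each frame arrives pre-labeled, with no contractors tagging footage by hand. The scene structure here is invented for illustration:

```python
# Schematic sketch of why synthetic scenes come pre-labeled: the engine
# placed every object, so it can emit the ground-truth class alongside
# the rendered pixels. The scene format is made up for illustration.
def render_scene(scene_objects):
    """Stand-in for a game-engine frame: returns 'pixels' plus the
    ground-truth label for every object drawn."""
    frame = []
    labels = []
    for obj in scene_objects:
        frame.append(obj["pixels"])       # what a camera would see
        labels.append(obj["class"])       # known to the engine already
    return frame, labels

scene = [
    {"class": "tree", "pixels": "..."},
    {"class": "pedestrian", "pixels": "..."},
    {"class": "lane_divider", "pixels": "..."},
]
frame, labels = render_scene(scene)
# (frame, labels) pairs can feed a recognition model directly.
```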

Autonomous cars are giant tech projects right now, straining even the resources of major carmakers and Silicon Valley companies. But just as Unity made it easier for small-time game developers to get started, its ML-Agents might open the door for small-time robot and robot-car developers, too.



ABOUT THE AUTHOR

Sean Captain is a business, technology, and science journalist based in North Carolina.

