The 3D graphics card company Nvidia recently released this mind-bending bit of research, developed in conjunction with members of the University of California, Berkeley team behind the best dance video of the year. It’s a driving simulator that allows you to pilot a car through a city. Why is that a big deal? Because that city, from its buildings to its streets to its cars, is being created in real time by AI.
Like most AI that generates images, the software was trained on a large set of examples. It learned from driver camera footage of vehicles moving through several major cities. This footage was segmented, meaning each region was labeled with a category like “car” or “tree” so the AI could learn to identify details. And through sheer repetition, it gradually learned what these items should look like and where they should go. Then, as is customary in this kind of machine learning, two neural networks duked it out: one drew these items while the other tried to spot the forgeries, with the first improving until its fakes could fool the second. What resulted is the demo you see here. Nvidia was able to paint a blobby 3D world with a sharp, realistic veneer.
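For readers curious what that adversarial duel looks like in code, here is a deliberately tiny sketch, in plain NumPy, of the same idea: a generator learns to produce numbers that resemble samples from a target distribution, while a logistic discriminator learns to tell real from fake, and each trains against the other’s gradient. This is an illustrative toy only, not Nvidia’s actual model, which is a far larger conditional video-to-video network; the distributions, learning rate, and step count are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must imitate: a 1-D Gaussian centered at 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: affine map from noise z ~ N(0,1) to a sample, params (a, b).
# Discriminator: logistic classifier D(x) = sigmoid(w*x + c), params (w, c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, n = 0.01, 64

for step in range(5000):
    x_real = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    # Hand-derived gradients of the binary cross-entropy loss.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    gw = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    gc = np.mean(-(1 - d_real) + d_fake)
    w -= lr * gw
    c -= lr * gc

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    # Chain rule through x_fake = a*z + b.
    d_fake = sigmoid(w * x_fake + c)
    ga = np.mean(-(1 - d_fake) * w * z)
    gb = np.mean(-(1 - d_fake) * w)
    a -= lr * ga
    b -= lr * gb

z = rng.normal(0.0, 1.0, 10_000)
samples = a * z + b
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f}")
```

After training, the generated mean drifts from 0 toward the real mean of roughly 4: the generator never sees the real data directly, only the discriminator’s judgment of its forgeries, which is the core trick behind Nvidia’s demo, scaled down from city scenes to single numbers.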
Yes, Nvidia’s footage does have that same murky look that generative AI drawings generally have: the sort of fuzzy photorealism that designers achieved in Mortal Kombat by photographing real actors and downsampling them into 16-bit graphics. No one would look at either project closely and be fooled, but both feel pretty real anyway.
Technically, the demo is impressive for a few reasons. One, it’s a great proof of concept illustrating what’s possible: Yes, we can generate convincing city blocks through automation alone. Architects and city planners could make great use of this technology one day, rendering rapid prototypes to test theories before cementing them in real bricks and concrete. Two, the footage you see isn’t rendered over hours or even days of processing, as so many good AI constructions are today. It’s being generated by the AI model in real time, in an interactive, 3D environment. Granted, the simulation is powered by a $3,000 graphics processor. Even so, it’s rendered just like any video game on your Xbox.
Currently, constructing the virtual worlds of big video games takes countless 2D and 3D modelers and an underpaid network of overseas labor, because each object is still drawn by hand with a mouse or stylus. Nvidia has proven that AI can automate a lot of that labor, and it can almost certainly do so in real products as silicon gets faster and AI gets smarter at the task. As social networks like Facebook mull the construction of an entire secondary universe online, and as companies like Adobe pursue a similar long-term interest in AI graphics, expect to see a lot more of this type of experimentation as our real world becomes indistinguishable from digital fiction.