When Harry Potter first comes to Hogwarts School of Witchcraft and Wizardry, he is stunned to see the portraits that line the school’s storied stone walls move and talk. Thanks to artificial intelligence, that just might be a reality some day.
A new project from Samsung is helping bring portraits to life, animating the features of the Mona Lisa, Marilyn Monroe, and that Einstein poster you probably had on your wall during freshman year. The machine learning specialists at Samsung used so-called facial landmarks to pair an image of a target face (Einstein, Jennifer Lopez, etc.) with the face of an IRL human source, making the target face do what the source face does. So if you have that dorm-room picture of Einstein and pair it with the face of a human who lives in the dorm room, you can create lifelike motion thanks to machine learning. If the person yawns, Einstein could yawn, too.
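Samsung's actual system feeds those landmarks through a neural network to generate photorealistic frames, but the core "make the target face follow the source face" idea can be sketched in a few lines. The toy example below (illustrative only, not Samsung's code) represents each face as a set of landmark points and copies the source's motion onto the target, scaled to the target face's size:

```python
import numpy as np

def face_span(pts):
    # Width and height of the bounding box around a set of landmarks.
    return pts.max(axis=0) - pts.min(axis=0)

def transfer_motion(target_neutral, source_neutral, source_frame):
    """Move the target's landmarks the way the source's landmarks moved.

    Each argument is an (N, 2) array of (x, y) landmark points:
    target_neutral  - the still portrait (e.g., Einstein at rest)
    source_neutral  - the human driver's resting face
    source_frame    - the human driver mid-expression (e.g., yawning)
    """
    # How far each source landmark moved from its resting position.
    delta = source_frame - source_neutral
    # Rescale that motion to the target's face size, so a yawn on a
    # small source face becomes a proportionally sized yawn on the target.
    scale = face_span(target_neutral) / face_span(source_neutral)
    return target_neutral + delta * scale
```

A real pipeline would first detect the landmarks automatically from video frames and then render pixels from the moved points; this sketch only shows the geometric retargeting step in between.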
As TechCrunch points out, making one face do what another is doing isn’t a huge leap in the machine learning world. In the past, however, it has required a lot of data to analyze and perfect. As documented in a paper published by Samsung AI Center, which you can read here, Samsung’s researchers can produce similarly eerie results based on a single image of a person’s face. They can then take that one image and transform it into a video of that face yawning, smiling, presumably eye-rolling, mouthing along to Jennifer Lopez songs, or whatever facial expressions real humans make.
It’s not flawless, but it’s certainly one step closer to making your Einstein poster talk back without the use of (legal!) marijuana or psychedelic mushrooms (hey, Colorado!).
Watch the video below and thank the muggles who made it possible: