We’ve all lusted after a celebrity’s new hair style. Many of us have even been so bold as to tear a page from Elle or GQ, shove it in our stylist’s face, and proclaim, “make me look like George Clooney!” But what can you really tell from a single photo of someone’s hair? How do the sides and back flow? Who’s to say Clooney isn’t rocking a man bun you don’t even see?
Now, PhD student Menglei Chai, working with collaborators at Zhejiang University’s State Key Lab of CAD&CG, has demonstrated an incredible new technique that couples the latest machine learning with hair-growth simulation to create 3-D models of any hairstyle, just from a photo. It’s called AutoHair.
“These 3-D models are composed of individual hair strands grown from the head scalp, just like what real hairs are,” says Chai.
The project was recently featured on Prosthetic Knowledge, but the technique is anything but an overnight success. Go to Chai’s research page, and you’ll see how four years of scaffolding projects have gotten him to this point. His technique is really a combination of several techniques: first, the team gathered 100,000 portraits from Flickr. Then they trained the computer by manually marking hair segments in those images, basically the chunks of your hair, like parts, that dictate how the strands flow down your scalp. By modeling the growth of individual strands of hair from these segments, the researchers can make a reasonable guess at how the total head of hair looks.
The problem with stopping there, says Chai, is that “it often generates results that are too smooth and flat, especially at the back.” Hair flow isn’t synonymous with hairstyle, the cuts and curls that really make hair look fashionable. So this segmentation and hair-growth information was essentially cross-referenced against hundreds of 3-D hair models from The Sims, which gave the back of someone’s head a bit more flair.
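To get a feel for how those stages fit together, here’s a deliberately toy sketch in Python. This is our own simplification, not the researchers’ code: the real system uses a classifier trained on those 100,000 annotated portraits and a far richer style-matching step. Every function and the fabricated sample data below are hypothetical stand-ins for illustration.

```python
import math

# Stage 1: hair segmentation. Stand-in for the learned classifier trained on
# annotated portraits: here we simply threshold a fake "hair probability" map.
def segment_hair(prob_map, threshold=0.5):
    return {pix for pix, p in prob_map.items() if p >= threshold}

# Stage 2: per-pixel flow direction (an angle in radians), a crude proxy for
# the orientation field estimated from image gradients inside the hair mask.
def estimate_flow(mask, gradients):
    return {pix: math.atan2(gy, gx)
            for pix, (gx, gy) in gradients.items() if pix in mask}

# Stage 3: retrieve the database exemplar (think: a Sims hairstyle) whose
# characteristic flow angle is closest to the portrait's mean flow direction.
def match_style(flow, database):
    mean_angle = sum(flow.values()) / len(flow)
    return min(database, key=lambda name: abs(database[name] - mean_angle))

# Usage with fabricated data: a 2x2 probability map and two exemplar styles.
prob = {(0, 0): 0.9, (0, 1): 0.8, (1, 0): 0.2, (1, 1): 0.7}
grads = {(0, 0): (1.0, 0.0), (0, 1): (1.0, 0.1), (1, 1): (1.0, -0.1)}
mask = segment_hair(prob)
flow = estimate_flow(mask, grads)
style = match_style(flow, {"straight_down": 0.0, "swept_left": 1.2})
print(style)  # the exemplar closest to the estimated mean flow direction
```

The point of the sketch is the shape of the pipeline, not the math: segmentation tells you *where* the hair is, the flow field tells you *which way* it goes, and the database match fills in the 3-D structure the photo can’t show.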
AutoHair’s results are not a perfect version of the source portrait’s actual hair; that would be impossible, Chai admits. But in his words, the simulation develops “plausible” cuts. And we would agree. His models, though they seem a bit biased toward the asymmetrical cuts typical in anime and video games (it’s no surprise his database was The Sims), make a believable guesstimate at what someone’s entire head of hair looks like.
It’s easy to imagine Chai’s technology in the salon of the future. You walk in, hand over a photo, and your stylist demos what Clooney’s cut might look like on your face. And in fact, the team has had success in applying the same hairstyle to different heads.
However, in the immediate term, Chai imagines there is a lot of potential in avatar creation: you could upload a photo of yourself and have a video game model out a more believable face. And on that note, he doesn’t want to stop with just the head. “Perhaps it’s about the time to merge hair, face, and body together, to recover the whole human from images, and make it animatable!” says Chai. “That would be awesome.”