Tombroff, 46, is developing software to analyze images captured by 3-D cameras and working with EA Sports, Texas Instruments, and Panasonic to make avatars move based on users’ actions.
“What we do is reconstruct the volume, body parts, and skeleton of the user as he moves in front of the 3-D camera. We track that in real time, so we can animate an avatar or allow you to control any application with your gestures. We started working on this technology in 2003, and started entering the consumer market in 2007. Last year, when Microsoft announced Project Natal, it brought a lot of attention to the space. Having a big company put its foot in this industry has really helped us propel our technology.
“Imagine an interactive TV series that places your avatar in the scene. If you decide to become a singer, that singer will move exactly as you’re moving. In your home, you look ridiculous, but on TV, you’re Madonna.”