When I joined Sony Computer Entertainment’s U.S. R&D group in July 1999, PlayStation 2 had just been unveiled, delivering workstation-level graphics in a home video game console. But as graphics realism improved, it was becoming increasingly evident that advances were also needed in the way players could interact with games. Improving interaction technology promised two benefits: first, existing gaming experiences could be enhanced; second, completely new experiences could be created, potentially attracting a new audience to video games.
My interaction technology research at SCE focused on connecting a video camera to PlayStation 2. Previously, I had used computer vision and automatic tracking for my PhD thesis at Stanford University in the ’90s, and then for a people-tracking product at the startup Teleos Research. Based on this experience, I believed a video camera could be an excellent interaction device for games. The USB camera was plugged directly into PlayStation 2 and placed on top of the television, looking out at the player. I created several technology demos highlighting this new interaction approach and presented them publicly at numerous industry conferences. Initially, this was done using an off-the-shelf Webcam, but later SCE developed custom camera hardware designed especially for the purpose of tracking player motion: EyeToy.
Richard Marks testing a PlayStation Move prototype
The EyeToy camera launched in 2003, bundled with the game EyeToy: Play. The camera hardware was designed to deliver 60-frames-per-second video (most other Webcams at the time delivered only 30fps), and it delivered that video in a format optimized for PlayStation 2. But the biggest technological advances came from the PlayStation 2 software that was developed to process the live video in real time. Motion detection, template matching, and color segmentation algorithms were created to track the player’s motion. Around this time, martial artist Jet Li toured our SCEA offices, so I showed him several demos. As I showed him how EyeToy and color tracking could be used to control a virtual sword by moving a physical toy sword, he took the sword from my hand and proceeded to spin and twirl it insanely fast, smiling the entire time.
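To give a feel for what color segmentation tracking involves, here is a minimal sketch (not SCE’s actual code): keep only the pixels close to a target color, then report the centroid of that region as the tracked position. The `track_color` function and its tolerance parameter are illustrative inventions; real tracking code would also smooth the result across frames and work in a more lighting-robust color space.

```python
# Illustrative sketch of color segmentation tracking, in the spirit of
# EyeToy-era software (not the actual SCE implementation).
# A frame is a list of rows of (r, g, b) tuples; we keep pixels whose
# color lies within `tol` of a target color, then return the centroid
# of that region as the object's position.

def track_color(frame, target, tol=40):
    """Return the (x, y) centroid of near-target pixels, or None."""
    xs, ys = [], []
    tr, tg, tb = target
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if abs(r - tr) <= tol and abs(g - tg) <= tol and abs(b - tb) <= tol:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # target color not visible in this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Tiny 3x3 test frame: one red pixel at (x=2, y=1) on a black background.
black, red = (0, 0, 0), (255, 0, 0)
frame = [[black, black, black],
         [black, black, red],
         [black, black, black]]
print(track_color(frame, red))  # -> (2.0, 1.0)
```

Running the same segmentation at 60fps is what made a brightly colored toy sword a usable controller: a distinctive color is cheap to separate from the background, frame after frame.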
EyeToy was very successful, selling millions of units globally, and EyeToy: Play won many industry awards including two BAFTAs and two AIAS awards. In addition to its commercial and critical success, EyeToy had a broader impact on the video game industry. People who had never played video games were enticed by this new type of interaction that was both intuitive and fun. Many anecdotes appeared online about how “entire families were playing together” and it being “the first time my girlfriend ever wanted to play”. EyeToy helped to break down market barriers in both age and gender, and helped pave the way for the many other natural interaction paradigms that later became popular, such as singing, strumming a guitar, and other types of motion sensing.
The first EyeToy games revolved around “enhanced reality”, blending live video with computer graphics. The player’s image was displayed on the television, and this image directly interacted with the virtual game objects. Later games such as EyeToy: Antigrav did not display the player’s image, but instead tracked the player’s arms and face to control an avatar riding a virtual hoverboard.
When PlayStation 3 launched, its much greater computational power made possible algorithms that had previously been too “heavy” (e.g. real-time face detection, dense feature tracking). The EyeToy was upgraded to a new camera device named PlayStation Eye, with four times the resolution of the original EyeToy. PlayStation Eye also transmits raw uncompressed video to PS3, thereby avoiding compression artifacts. And it has much better low-light sensitivity: Late one night, while I was testing a PlayStation Eye prototype with all the office lights off, I was puzzled to still see live video of myself. I then realized that the prototype was so sensitive that just the light given off by my PC screen was sufficient for it to image my face! (Soon thereafter, we filed a patent for techniques to use a TV screen as a controllable light source.)
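As a hint of why real-time face detection became feasible with more compute: classic real-time detectors (such as Viola–Jones, which I mention here only as a well-known example, not necessarily what shipped) lean on the integral image, a table built in one pass over the frame that makes the sum of any rectangle cost just four lookups. This sketch shows that data structure on its own:

```python
# Illustrative sketch: the integral image (summed-area table), the
# core trick behind classic real-time face detectors. After one pass
# over the frame, any rectangular pixel sum costs four table lookups,
# which makes scanning thousands of detection windows per frame cheap.

def integral_image(img):
    """img: list of rows of ints. Returns an (h+1) x (w+1) table."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of img[y0:y1][x0:x1] in O(1) using the table."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 3, 3))  # -> 45 (sum of the whole image)
print(rect_sum(ii, 1, 1, 3, 3))  # -> 28 (sum of 5 + 6 + 8 + 9)
```

The detector itself then evaluates simple rectangle-difference features over a sliding window; the constant-time rectangle sums are what keep that affordable at video rates.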
Tracking the player’s motion using a camera generated many new experience possibilities, but it became clear that such an approach by itself had fundamental limitations. While large body movements were good for creating a feeling of engagement, many experiences needed a level of precision that just isn’t possible with a camera alone. Also, though the freedom of camera-only interaction was a great benefit, it made some experiences awkward, such as pointing, selecting, and shooting. Some experiences simply felt better with a tool in hand. These realizations led to the development of PlayStation Move.
The PlayStation Move combines all that was good about the EyeToy and PlayStation Eye experiences with the benefits of a traditional game controller. The combination of motion-based gaming and controller input means PlayStation Move can support all types of genres, from shooters to social games. The developers who are supporting PlayStation Move have some amazing ideas for games that take advantage of options such as augmented reality; some of those ideas were demonstrated at the Game Developers Conference in March, but there are plenty more that will be unveiled at E3 next week.
Next time, Richard Marks on creating the PlayStation Move.