The mad scientists at Disney Research have developed a new way to film big, sweeping scenes in a fraction of the time.
The team has reportedly built software that algorithmically stitches together footage shot simultaneously from multiple cameras, splicing it into one coherent video–maybe like a car-chase montage you’d watch in an action flick, or a cinematic football game a la Friday Night Lights but from the perspectives of the audience. This is accomplished by parsing out what each camera is focused on, and figuring out what, exactly, is the most interesting thing that everyone is watching.
Imagine you’re at a Beyoncé concert, and everyone’s eyes are looking at the same thing: Beyoncé. (Duh.) Since everyone is looking at the same thing, but from different angles, the algorithm can edit together all that footage, and approximate a final product that an editor would otherwise have to assemble piecemeal.
“Though each individual has a different view of the event, everyone is typically looking at, and therefore recording, the same activity–the most interesting activity,” Yaser Sheikh, an associate research professor of robotics at Carnegie Mellon University, tells PhysOrg. “By determining the orientation of each camera, we can calculate the gaze concurrence, or 3D joint attention, of the group. Our automated editing method uses this as a signal indicating what action is most significant at any given time.”
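The paper’s actual pipeline isn’t public in detail here, but the core idea Sheikh describes–finding the 3D point where everyone’s gaze converges–can be sketched as a least-squares intersection of the cameras’ viewing rays. The function name, camera positions, and gaze directions below are all hypothetical, purely for illustration:

```python
import numpy as np

def joint_attention_point(positions, directions):
    """Estimate the 3D point the cameras jointly look at, as the
    least-squares intersection of their viewing rays.

    positions:  (n, 3) array of camera centers
    directions: (n, 3) array of viewing directions (any length)
    """
    positions = np.asarray(positions, dtype=float)
    d = np.asarray(directions, dtype=float)
    d /= np.linalg.norm(d, axis=1, keepdims=True)  # unit directions

    # For each ray, (I - d d^T) projects onto the plane perpendicular
    # to the ray. Summing these projectors yields a linear system whose
    # solution minimizes total squared distance to all the rays.
    eye = np.eye(3)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(positions, d):
        proj = eye - np.outer(u, u)
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

# Three hypothetical spectators, all aimed at the origin (the "stage"):
cams = np.array([[10.0, 0.0, 2.0], [0.0, 10.0, 2.0], [-7.0, -7.0, 2.0]])
gaze = -cams  # each camera looks back toward the origin
print(joint_attention_point(cams, gaze))  # ≈ [0, 0, 0]
```

A real system would also have to estimate each camera’s orientation from the footage itself and weight rays by confidence, but the convergence point of the rays is, as the quote suggests, the signal that says where the action is.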