We’ve seen before how much potential depth-sensing cameras have for the future of special effects. But a new algorithm from Disney Research could make sophisticated video effects possible without accurate depth information. That may not sound exciting, but make no mistake: this is revolutionary stuff that could soon make every video you shoot with any ol’ camera phone look like it was made with Hollywood equipment and Hollywood special effects.
Depth sensors like the one in the Microsoft Kinect are great for special effects because they let a camera clearly distinguish one object in a frame from another. They work by scanning a room with invisible beams of light, building a 3-D model of the scene in front of the camera, called the ‘scene space’. With that data, depth-sensing cameras can be used to remove objects from a shot, augment reality with digital models, post-process videos after the fact, and more.
The catch is that those sensors have limited range and struggle in bright, uncontrolled lighting, which means depth-sensing cameras only make special effects easier in very controlled circumstances. But a team led by Felix Klose at Disney Research Zurich has published a paper describing an algorithm that can extrapolate a highly accurate scene space from depth data that is inaccurate, or even missing entirely.
In a demo reel, Klose and his team show off some of the possibilities. Their algorithm simulates a 3-D ‘scene space’ by tracking pixels frame by frame through an ordinary video file. By running footage of a skateboarder doing tricks through it, Disney was able to create sophisticated action-cam shots, duplicating the skateboarder and freezing him in mid-air. They also demonstrate how they can easily remove objects from a scene, make them transparent, refocus a video after the fact, deblur a moving object, denoise dimly lit footage so it appears crystal clear, and more. Magic stuff.
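The paper’s actual method is far more sophisticated, but the core intuition, that pixel motion between frames encodes depth, can be sketched with the classic parallax relation: for a camera that translates sideways by a known baseline, a scene point’s depth is the focal length times the baseline divided by how far its pixel shifts. The function name and the numbers below are purely illustrative, not anything from the Disney paper.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Estimate a scene point's depth (meters) from its pixel shift
    between two frames taken by a camera that moved sideways.

    focal_px     -- camera focal length, in pixels
    baseline_m   -- how far the camera translated, in meters
    disparity_px -- how far the pixel shifted between the frames
    """
    if disparity_px <= 0:
        # A point that doesn't shift carries no parallax information.
        raise ValueError("point must shift between frames to recover depth")
    return focal_px * baseline_m / disparity_px

# A pixel that shifts 50 px between frames shot 0.1 m apart,
# through a 1000 px focal length, sits 2 m from the camera.
print(depth_from_disparity(1000, 0.1, 50))  # -> 2.0
```

Nearby points shift a lot between frames and distant points barely move, which is why ordinary handheld footage, with no depth sensor at all, still contains enough information to reconstruct a scene space.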
In short, what we’re talking about here is an algorithm that can make any video you shoot on a smartphone look like it was shot with something closer to a Red. It’s efficient enough to run on an ordinary desktop or laptop, yet it can add sophisticated post-processing effects to any old video: the kind of work a pro would usually spend hours in Adobe After Effects to accomplish. And in their paper, the Disney researchers indicate the technique could conceivably come to smartphones eventually.
Your dog’s Snapchat may never be the same.