“Computers get faster, and then we keep pushing it.”
That was the mantra behind the creation of DreamWorks Animation’s Megamind, which opens next Friday, November 5. The 3-D superhero flick has used as much cutting-edge technology in streamlining the production pipeline as it has in rendering the story onscreen. Good thing, too–with the voices of Will Ferrell, Jonah Hill, Tina Fey, and Brad Pitt in a story about a supervillain who finds his life meaningless after conquering the hero, Megamind is the studio’s most anticipated launch of the year.
“This was a super-ambitious movie,” says Megamind director Tom McGrath. “We created an entire city built from the ground up–from repairs in the streets to fire hydrants and newsstands to reflective glass windows.” And that took a lot of bandwidth–not just in rendering, but sending the graphics back and forth between the multiple sites involved.
DreamWorks Animation developed cutting-edge technology with commercial partners Hewlett-Packard and Intel–who also got to repurpose it for their consumer products. The HP technology included DreamColor, a chip for high-end color displays and color calibration across multiple devices, and the Halo high-definition videoconferencing system.
Halo was invaluable in coordinating the 60-plus animators at DWA’s Glendale, California, headquarters and at PDI/DreamWorks, its animation production satellite in Redwood City, California. “Our process involved animators acting out scenes, so we would have to see each other,” says McGrath. “Then there was viewing and lighting dailies. The color is so nuanced. With DreamColor, you could look at any monitor and get the correct color.”
DWA’s collaboration with Intel produced InTru 3D, which creates 3-D images from the ground up and across platforms, and enables real-time rendering through extreme processing power. In addition, the studio augmented its rendering operation with more than 3,000 of Intel’s Westmere-based Xeon cores, while Intel optimization engineers helped maximize system performance. Eventually DWA plans to re-architect its software to take advantage of Intel’s next-generation chips.
“Megamind pushed our tools to the limits, hitting daily rendering peaks 50 percent higher than any previous production,” says Chief Technology Officer Ed Leonard. “We ultimately used more than 54 million render hours to render the entire film.”
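To put Leonard’s figure in perspective, here is a hedged back-of-envelope calculation. It assumes, purely for illustration, that the quoted 54 million render hours are core-hours and that they all ran on just the 3,000 augmented Xeon cores (the studio’s full render farm was larger, so the real wall-clock time was shorter):

```python
# Illustrative arithmetic only; the 3,000-core assumption understates
# DWA's total render capacity, so this is an upper bound on wall-clock time.
render_core_hours = 54_000_000   # figure quoted by CTO Ed Leonard
augmented_cores = 3_000          # the Westmere-based Xeon cores Intel added

wall_clock_hours = render_core_hours / augmented_cores
wall_clock_years = wall_clock_hours / (24 * 365)

print(round(wall_clock_hours))      # 18000 hours
print(round(wall_clock_years, 1))   # roughly 2.1 years on those cores alone
```

Even on this deliberately conservative assumption, the film represents years of continuous computation–a sense of the scale behind the “50 percent higher than any previous production” peak.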
On the storytelling side, McGrath, who last co-directed the 2-D Madagascar franchise, found 3-D both an adjustment and an enhancement. The additional spatial dimension in shots demanded slower cuts to allow the eye to adjust, and could be used to heighten sensation.
“We increased and decreased the stereoscopic depth of shots to create specific depth effects,” says Global Stereoscopic Supervisor Phil “Captain 3-D” McNally. “One example is looking over the edge of Metro Tower where we ramp up the stereo depth to enhance the sense of vertigo.”
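The ramping McNally describes amounts to animating the separation between the left- and right-eye virtual cameras over the course of a shot. The sketch below is a hypothetical illustration of that idea, not DreamWorks’ actual pipeline code; the function name, frame ranges, and separation values are all invented for the example:

```python
# Hypothetical sketch of a stereoscopic "depth ramp": ease the
# interaxial separation (distance between left/right virtual cameras)
# from a baseline up to a peak over a shot, exaggerating perceived
# depth -- e.g. to heighten vertigo looking over the edge of a tower.

def interaxial_for_frame(frame, start_frame, end_frame,
                         base_separation, peak_separation):
    """Linearly interpolate camera separation across [start, end],
    clamping outside the ramp so surrounding shots are unaffected."""
    if frame <= start_frame:
        return base_separation
    if frame >= end_frame:
        return peak_separation
    t = (frame - start_frame) / (end_frame - start_frame)
    return base_separation + t * (peak_separation - base_separation)

# Midway through a 48-frame ramp from 6.0 to 12.0 units of separation:
print(interaxial_for_frame(24, 0, 48, 6.0, 12.0))  # 9.0
```

Doubling the separation roughly doubles the on-screen parallax, which the viewer reads as deeper space; clamping at the shot boundaries keeps the effect local to the moment the filmmakers want to intensify.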
There’s a tendency with 3-D to want to capitalize on what McGrath calls “pointy sticks”–visual tricks like pointing spears at the camera to enhance the 3-D wow factor. “We tended to use those types of gimmicks for comedy, like when Metro Man is juggling babies,” he says. “But it’s the things you don’t normally attribute to 3-D that are most successful–namely, enhancing emotion. You can slow down the shots to show great moments of isolation in a huge volume of space, and create great intimacy using slightly longer lenses and crushing depth of field.”
“The superhero-style movie opens all windows and doors, with characters flying, and 3-D enables you to create an immersive experience for the viewer, rather than from an objective point of view,” adds McGrath. “But the biggest challenge with Megamind was not to shoot a conventional movie.”