
AI will destroy Hollywood as we know it

Tom Graham, founder of Metaphysic.ai, on how generative AI will upend the film industry and change our understanding of what’s ‘real.’

[Photo: Gilberto Tadday/TED]

By Jesus Diaz · 10 minute read

Tom Graham was a couple of hours away from his keynote conversation with Chris Anderson on stage at TED 2023 in Vancouver when I Zoomed him in search of some definitive answers about AI. Drowning in artificial generative quicksand, buried in amazing news week after week, I needed to grab his hand and come up for air before sinking back into the mud of the next generative artificial intelligence update.

Graham is the CEO of Metaphysic, which has developed AI technology that captures the biometric AI profile of any human in order to deepfake them in real time. (Remember the deepfake Tom Cruise TikToks and the resurrected Elvis that made the finals of America’s Got Talent? That’s the work of Metaphysic.) Graham’s predictions of what is coming in the next few years left me, once again, in awe. Even if only 10% of what he said during our conversation happens, it would still leave me 50% amazed, 50% terrified, and 100% stupefied.

Host Chris Anderson (left) and Tom Graham on stage at TED 2023 [Photo: Gilberto Tadday/TED]

I interviewed Graham last year from his home in Australia, where we discussed the future of generative AI. His testimony, along with that of other experts, informed a threat projection of how AI could unravel society in the next 10 years. The core of that projection is more or less what Graham discussed with Anderson: “It seems like we are going to have to get used to the world where we and our children won’t be able to trust the evidence of our eyes,” he said on stage at TED as his own face morphed seamlessly into Anderson’s face in real time.

During this conversation, which has been condensed and edited, I was more interested in focusing on the bright side of things. We discussed the exciting future of generative AI, its effect on the entertainment and creative industries, the way it will empower anyone with a story to tell to create freely and without economic constraints, and how we will be able to capture our life experiences with the fidelity of reality itself.

Fast Company: How’s it going, Tom?

Tom Graham: Great. It’s been very hectic. It’s just kinda beginning. I’m in the first session that kicks off at 5 p.m.

FC: Okay, first tell me about the stuff with CAA. [Earlier this year, Metaphysic signed an agreement with the world’s largest talent agency to create AI-powered biometric models of its clients.]

TG: We did this [Robert] Zemeckis film called Here. It stars Tom Hanks, Robin Wright, Paul Bettany, Kelly Reilly . . . It’s a big cast in this amazing adaptation of a graphic novel [by Richard McGuire]. We did a lot of de-aging of the characters because the movie covers their entire lifetimes. It’s both happening live on set while they’re actually acting, and then obviously it comes out in the movie and looks amazing. 

Our partnership with CAA comes from focusing on using this technology in really interesting ways at the forefront of entertainment, but it’s also focused on how we can empower individual people. In this case, it enables actors to own and control their data from the real world—their hyperreal identities [the biometric AI model made of photographic information captured in extremely high definition].

As more and more people have access to generative AI that can create hyperrealistic human performances, how do we connect that with the reality of people who need to own their own performance? We can’t infinitely use certain actors without their permission. That’s a lot of what we work on with CAA.

FC: Is the objective to help people hire Tom Hanks without actually hiring the physical Tom Hanks? Is that something that could be done? 

TG: You’d still have to hire Tom Hanks in every sense, but maybe Tom Hanks wouldn’t have to turn up on set to actually film. That’s definitely happening today, particularly in advertisements that involve sports figures, who have way less time to be in content than, say, actors. There are lots of applications in which we are beginning to decouple human performance from the physical locality and the time. 

There are also some examples where people may still be on set and acting, but then there are lots and lots of shots that are not so demanding, right? It’s just them kind of standing around. You don’t necessarily need them to be there for a lot of those. 

FC: How is that going to affect the industry?

TG: It’s really going to change the way in which we create content, period, because ultimately using generative AI is a hundred times cheaper than using 3D modeling and traditional VFX and CGI and all that. Ultimately, it’s going to be cheaper than setting up a camera.

FC: Do you think that the goal down the line is to integrate these biometric AI people into other generative AI workflows? We know that in the near future we’ll be able to create movies interactively just by giving commands. Basically anyone will be able to be a director.

TG: Yes. And it’s going to be more than that. Way, way more than that. Just think about the 2D linear format of film—a storyteller telling a story from beginning to end. Imagine these cinematic universes, like The Lord of the Rings, which was a universe before it was even in film. You can bring all of the assets and the ideas and the visuals and the storylines together into large [generative AI] models. And then you and I could create our own story within them. We could be living that ourselves. And it could be really deeply personal to us and our story. I’m an elf and you are something else. [I’m an orc, of course.] It’s The Lord of the Rings, but we are kind of co-opting it for our own purpose.

FC: They told me you’re going to be transformed into the cofounder of TED at the keynote, to show the power of this technology. [After this conversation, Graham transformed into Anderson on the TED stage, then proceeded to transform into Sunny Bates—a founding member of TED.]

TG: We have some interactive things, which are really fun, but I want to highlight the really important talking points. People don’t understand that this is not just for fun; it’s not just for memes. If this content is hyperrealistic—if it’s just like the real world—then it’s extending reality. 

[Graham also showed an incredible demo of an AI-generated Aloe Blacc singing “Wake Me Up” in multiple languages.]

FC: What’s the endgame then?

TG: What we need now is to focus on how we empower individuals to own and control their real-world data. We have agency over our bodies in the real world and our private spaces. People can’t come into our homes. We need to extend that set of rights into a future that’s powered by generative AI. We need to democratize control over reality. That’s the thing that has to happen. Because if we are creating reality, and the means of production are controlled by big tech companies, then it’s the opposite of democratic norms and institutions that we experience today [in the physical world].


FC: So, are you promising that Metaphysic will give us the tools to be the owners of our digital selves without your company owning anything?

TG: Yeah, that’s correct. [Pause] This is a lot to promise—definitely not promising anything [laughter]. But we are the people who are definitely going to be pushing this discussion forward, trying to create tools and institutions to empower individuals. There’s business, but then there is what’s really important, right? We can’t contribute to clean energy or ending world hunger any better than other people can, but we can contribute to the future where we try to use AI in a way that is good for people and good for society.

The reason we’re in that position is because we’re the leaders in creating hyperrealistic content that draws on data from the real world. It’s only just now that other people across generative AI are starting to process this, because things like Stable Diffusion are starting to pump out things that look really quite realistic. But not realistic like we do realistic. Not with video.

FC: But we’ll get to that point.

TG: Definitely. I think that there will be a real proliferation of content. I would say two years from now it will be happening on a regular basis. It’ll be super accessible and at the level of full video where you really struggle to tell the difference with reality. Today that is very difficult to do, but it’s only a matter of time. There will be lots of examples over the next two years that will look like that, but at scale, where you and I can easily do it. It’s a very short period of time for us to prepare ourselves both psychologically as individuals and as governments and regulators. 

FC: There’s no doubt that a lot of the [VFX] studios are going to have to retool completely. Many of them are going to disappear. The democratization of these tools is also going to destroy a lot of jobs. Like stock photography, to begin with.

TG: I’m not sure, you know. I think it makes perfect linear sense to say, okay, we can do all of the things that thousands of VFX artists do today, so we don’t need to hire them for movies anymore. . . . Today there are very, very few movies getting made because they are very expensive. Studios and people with money don’t have enough money to make lots of movies. But over time, these [AI] tools will lead to making more and more movies.

If you take your $50 million budget and you can make three movies instead of one movie, then actually it’s probably likely that there’ll be a shortage of certain labor, right? People who are storytellers, people who are artists, they will begin to make feature-quality films that can be really good [for less money]. They will not be gatekept by the money holders. . . .

I think that, generally, AI will have a profound impact on every kind of job that we do as humans. But inside movies specifically, I honestly think that there’ll be more people hired to create content and tell stories than there are today. What will be interesting, however, is how that works with unions and collective action. That kind of stuff is not clear.

FC: It’s not clear at all. But also there are a lot of people out there who will be out of a job completely because of generative AI very soon.

TG: I disagree. I think that the biggest category of job growth for the future of generative AI will be people who capture data from the real world and make that accessible to large AI models. If you think about what’s inside those models today, it’s not very good. We need to bring a thousandfold more data into those models to really be able to do stuff with the finesse that filmmakers want to do today. People who contribute to stock photography today will just migrate to contributing to these models in exactly the same business model.

FC: So where does it go from here?

TG: It’s beyond 2D linear, conventional media. The purpose of Metaphysic touches everything about how humans interact with technology—every screen that you look at, every single thing that you do online. We focus on how this technology is going to change our interaction with all of that, which is everything we do outside the physical world. We talk about extending reality: when it looks so real that in your mind’s eye, it just becomes reality.

It’s about entertainment but also every facet of human existence. We are beginning to decouple human experience from where and when it happens—its locality and its moment in time. You can capture data from your experiences in the real world. Maybe it’s your kid’s fifth birthday party. In the future, you can have that major event in your life in your catalog of life events, download it, render it out with AI, and fully relive that experience with exactly the same fidelity as the experience you lived the first time you were there. That’s a lot of what we’re talking about.

FC: Extending your life, forever.

TG: You know, it’s like electric light. When it was invented, suddenly our world was brighter for 50% more of the day, right? That’s a lot. We added 50% more daytime to our lives. We added more reality to our previous reality. With generative AI, we can scale reality even further. When it looks and feels so realistic that our brain processes it just like reality, we integrate it into reality. Ultimately, we can scale the human experience. That’s a lot like adding more light to our world. And if we can add a better reality, that’s also really going to change so many things.


ABOUT THE AUTHOR

Jesus Diaz is a screenwriter and producer whose latest work includes the mini-documentary series Control Z: The Future to Undo, the futurist daily Novaceno, and the book The Secrets of Lego House.

