Since deepfakes burst onto the scene a few years ago, many have worried that they represent a grave threat to our social fabric. Creators of deepfakes use artificial intelligence-based neural network algorithms to craft increasingly convincing forgeries of video, audio, and photography almost as if by magic. But this new technology doesn’t just threaten our present discourse. Soon, AI-generated synthetic media may reach into the past and sow doubt about the authenticity of historical events, potentially destroying the credibility of records left behind in our present digital era.
In an age of dwindling institutional trust, and without a firm historical context that future historians and the public can rely on to authenticate the digital media of the past, we may be looking at the dawn of a new era of civilization: post-history. We need to act now to ensure the continuity of history without stifling the creative potential of these new AI tools.
Imagine that it’s the year 2030. You load Facebook on your smartphone, and you’re confronted with a video that shows you drunk and deranged, sitting in your living room saying racist things while waving a gun. Typical AI-assisted character attack, you think. No biggie.
You scroll down the page. There’s a 1970s interview video of Neil Armstrong and Buzz Aldrin on The Dick Cavett Show declaring, “We never made it to the moon. We had to abort. The radiation belt was too strong.” 500,000 likes.
Further down, you see the video of a police officer with a knee on George Floyd’s neck. In this version, however, the officer eventually lifts his knee and Floyd stands up, unharmed. Two million likes.
Here’s a 1966 outtake from Revolver where the Beatles sing about Lee Harvey Oswald. It sounds exactly like the Fab Four in their prime. But people have been generating new Beatles songs for the past three years, so you’re skeptical.
You click a link and read an article about James Dean. There’s a realistic photo of him kissing Marilyn Monroe—something suggested in the article—but it has been generated for use by the publication. It’s clearly labeled as an illustration, but if taken out of context, it could pass for a real photo from the 1950s.
Further down your feed, there’s an ad for a new political movement growing every day: Break the Union. The group has 50 million members. Are any of them real? Members write convincing posts every day—complete with photos of their daily lives—but massive AI astroturfing campaigns have been around for some time now.
Meanwhile, riots and protests rage nonstop in cities around America. Police routinely alter body camera footage to erase evidence of wrongdoing before releasing it to the public. Inversely, protesters modify body camera and smartphone footage to make police actions appear worse than they were in reality. Each altered version of events serves only to stoke a base while further dividing the opposing factions. The same theme plays out in every contested social arena.
In 2030, most people know that it’s possible to fake any video, any voice, or any statement from a person using AI-powered tools that are freely available. People generate many thousands of media fictions online every day, and that quantity is only going to balloon in the years to come.
But in a world where information flows through social media faster than fact-checkers can process it, this disinformation sows enough doubt among those who don’t understand how the technology works (and apathy among those who do) to destroy the shared cultural underpinnings of society—and trust in history itself. Even skeptics allow false information to slip through the cracks when it conveniently reinforces their worldview.
This is the age of post-history: a new epoch of civilization where the historical record is so full of fabrication and noise that it becomes effectively meaningless. It’s as if a cultural singularity ripped a hole so deeply in history that no truth can emerge unscathed on the other side.
How deepfakes threaten public trust
Deepfakes mean more than just putting Sylvester Stallone’s face onto Macaulay Culkin’s body. Soon, people will be able to craft novel photorealistic images and video wholesale using open-source tools that utilize the power of neural networks to “hallucinate” new images where none existed before.
The technology is still in its early stages. And right now, detection is relatively easy, because many deepfakes feel “off.” But as techniques improve, it’s not a stretch to expect that amateur-produced AI-generated or -augmented content will soon be able to fool both human and machine detection in the realms of audio, video, photography, music, and even written text. At that point, anyone with a desktop PC and the right software will be able to create new media artifacts that present any reality they want, including clips that appear to have been generated in earlier eras.
The study of history requires primary source documents that historians can authenticate as being genuine—or at least genuinely created during a certain time period. They do this by placing them in a historical context.
Today, most new media artifacts are “born digital,” which means they exist only as bits stored on computer systems. The world generates untold petabytes of such artifacts every day. Given the proper technology, novel digital files can be falsified without leaving a trace. And thanks to new AI-powered tools, the barriers to undetectably synthesizing every form of digital media are potentially about to disappear.
In the future, historians will attempt to authenticate digital media just as they do now: by tracking down its provenance and building a historical context around its earliest appearances in the historical record. They can compare versions across platforms and attempt to trace an artifact back to its point of origin.
But if the past is any indication, our online archives might not survive long enough to provide the historical context necessary to allow future historians to authenticate digital artifacts of our present era. Currently the historical integrity of our online cultural spaces is atrocious. Culturally important websites disappear, blog archives break, social media sites reset, online services shut down, and comments sections that include historically valuable reactions to events vanish without warning.
Today much of the historical context of our recent digital history is held together tenuously by volunteer archivists and the nonprofit Internet Archive, although increasingly universities and libraries are joining the effort. Without the Internet Archive’s Wayback Machine, for example, we would have almost no record of the early web. Yet even with the Wayback Machine’s wide reach, many sites and social media posts have slipped through the cracks, leaving potential blind spots where synthetic media can attempt to fill in the blanks.
The peril of historical context attacks
If these weaknesses in our digital archives persist into the future, it’s possible that forgers will soon attempt to generate new historical context using AI tools, thereby justifying falsified digital artifacts.
Let’s say it’s 2045. Online, you encounter a video supposedly from the year 2001 of then-President George W. Bush meeting with Osama bin Laden. Along with it, you see screenshots of news websites at the time the video purportedly debuted. There are dozens of news articles discussing it, written (by an improved GPT-3-style algorithm) perfectly in the voices of their authors. Heck, there’s even a vintage CBS Evening News segment with Dan Rather in which he discusses the video. (It wasn’t even a secret back then!)
Trained historians fact-checking the video can point out that not one of those articles appears in the archives of the news sites mentioned, that CBS officials deny the segment ever existed, and that it’s unlikely Bush would have agreed to meet with bin Laden at that time. Of course, the person presenting the evidence claims those records were deleted to cover up the event. And let’s say that enough pages are missing in online archives that it appears plausible that some of the articles may have existed.
This hypothetical episode won’t just be one instance out of the blue that historians can pick apart at their leisure. There may be millions of similar AI-generated context attacks on the historical record published every single day around the world, and the sheer noise of it all might overwhelm any academic process that can make sense of it.
Without reliable digital primary source documents—and without an ironclad chronology in which to frame both the documents and their digital context—the future study of the history of this period will be hampered dramatically, if not completely destroyed.
Let’s say that, in the future, there’s a core group of historians still holding the torch of enlightenment through these upcoming digital dark ages. They will need a new suite of tools and cultural policies that will allow them to put digital artifacts—real and synthesized alike—in context. There won’t be black-and-white solutions. After all, deepfakes and synthesized media will be valuable historical artifacts in their own way, just as yesteryear’s dead-tree propaganda was worth collecting and preserving.
Currently some attempts are being made to solve this upcoming digital media credibility problem, but they don’t yet have the clarion call of urgency behind them that’s necessary to push the issue to the forefront of public consciousness. The death of history and breakdown of trust threatens the continuity of civilization itself, but most people are still afraid to talk in such stark terms. It’s time to start that conversation.
Here are some measures that society may take—some more practical than others:
1. Maintain better historical archives
To study the past, historians need reliable primary source materials provided by trustworthy archives. More public and private funding needs to be put into reliable, distributed digital archives of websites, news articles, social media posts, software, and more. Financial support for organizations such as the Internet Archive is paramount.
2. Train computers to spot fakes
It’s currently possible to detect some of today’s imperfect deepfakes using telltale artifacts or heuristic analysis. Microsoft recently debuted a new way to spot hiccups in synthetic media. The Defense Advanced Research Projects Agency, or DARPA, is working on a program called SemaFor whose aim is to detect semantic deficiencies in deepfakes, such as a photo of a man generated with anatomically incorrect teeth or a person with a piece of jewelry that might be culturally out of place.
But as deepfake technology improves, the tech industry will likely play a cat-and-mouse game of trying to stay one step ahead, if it’s even possible. Microsoft recently wrote of deepfakes, “. . . the fact that they’re generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology.”
That doesn’t mean that keeping up with deepfakes is impossible. New AI-based tools that detect forgeries will likely help significantly, as will automated tools that can compare digital artifacts that have been archived by different organizations and track changes in them over time. The historical noise generated by AI-powered context attacks will demand new techniques that can match the massive, automated output generated by AI media tools.
3. Call in the moderators
In the future, the impact of deepfakes on our civilization will be heavily dependent on how they are published and shared. Social media firms could decide that suspicious content coming from nontrusted sources will be aggressively moderated off their platforms. Of course, that’s not as easy as it sounds. What is suspicious? What is trusted? Which community guidelines do we uphold on a global platform composed of thousands of cultures?
Facebook has already announced a ban on deepfakes, but with hyper-realistic synthetic media in the future, that rule will be difficult to enforce without aggressive detection techniques. Eventually, social media firms could also attempt draconian new social rules to curb these techniques—say, that no one is allowed to post content depicting anyone else unless it also includes themselves, or perhaps only if every person in a video consents to its publication. But those same rules may stifle the positive aspects of AI-augmented media in the future. It will be a tough tightrope for social media firms to walk.
4. Authenticate trustworthy content
One of the highest-profile plans to counter deepfakes so far is the Content Authenticity Initiative (CAI), which is a joint effort among Adobe, Twitter, The New York Times, the BBC, and others. The CAI recently proposed a system of encrypted content attribution metadata tags that can be attached to digital media as a way to verify the creator and provenance of data. The idea is that if you can prove that the content was created by a certain source, and you trust the creator, you’ll be more likely to trust that the content is genuine. The tags will also let you know if the content has been altered.
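The core idea behind such attribution tags—binding a content fingerprint and a creator identity together with a signature, so any later alteration is detectable—can be sketched in a few lines. This is an illustrative simplification, not the CAI’s actual manifest format: the field names are invented, and a real system would use public-key certificates rather than the shared-secret signing shown here.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real provenance systems use public-key signatures.
SECRET_KEY = b"publisher-signing-key"

def make_attribution_tag(content: bytes, creator: str) -> dict:
    """Build a simplified provenance tag: a content hash plus a signature
    binding that hash to a named creator."""
    content_hash = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"creator": creator, "sha256": content_hash}, sort_keys=True)
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"creator": creator, "sha256": content_hash, "signature": signature}

def verify_attribution_tag(content: bytes, tag: dict) -> bool:
    """Check that the content still matches its hash and that the
    creator/hash pair was signed by the key holder."""
    if hashlib.sha256(content).hexdigest() != tag["sha256"]:
        return False  # content was altered after the tag was issued
    payload = json.dumps({"creator": tag["creator"], "sha256": tag["sha256"]}, sort_keys=True)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["signature"])

photo = b"...raw image bytes..."
tag = make_attribution_tag(photo, creator="BBC")
print(verify_attribution_tag(photo, tag))         # True: untouched since tagging
print(verify_attribution_tag(photo + b"x", tag))  # False: altered after tagging
```

Note that the tag only proves who vouched for the file and that it hasn’t changed since—not that the content itself is true.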
The CAI is a great step forward, but it does have weak spots. It approaches the problem from a content protection/copyright point of view. But individual authorship of creative works may become less important in an era when new media could increasingly be created on demand by AI tools.
It’s also potentially dangerous to embed personally identifiable creator information into every file we create—consider the risks it might present to those whose creations raise the ire of authoritarian regimes. Thankfully, this is optional with the CAI, but its optional nature also limits its potential to separate good content from bad. And relying on metadata tags baked into individual files might also be a mistake. If the tags are missing from a file, they can be added later after the data has been falsified, and there will be no record of the earlier data to fall back on.
5. Create a universal timestamp
To ensure the continuity of history, it would be helpful to establish an unalterable chronology of digital events. If we link an immutable timestamp to every piece of digital media, we can determine if it has been modified over time. And if we can prove that a piece of digital media existed in a certain form before the impetus to fake it arose, it is much more likely to be authentic.
The best way to do that might be by using a distributed ledger—a blockchain. You might wince at the jargon, since the term blockchain has been so overused in recent years. But it’s still a profound invention that might help secure our digital future in a world without shared trust. A blockchain is an encrypted digital ledger that is distributed across the internet. If a blockchain network is widely used and properly secured, you cannot revise an entry in the ledger once it is put in place.
Blockchain timestamps already exist, but they need to be integrated on a deep level with all media creation devices to be effective in this scenario. Here’s how an ideal history stamp solution might work.
When a device captures a photo or records a video, software would calculate a cryptographic hash of the file—a unique digital fingerprint—and submit it as an entry in the blockchain. When that piece of media is later modified, cropped, or retouched, a new hash would be created that references the older hash and would likewise be entered into the blockchain. To prevent inevitable privacy issues, entries on the blockchain wouldn’t be linked to individual authors, and the timestamped files themselves would stay private unless shared by the creator. And because cryptographic hashes are one-way, people with access to the blockchain would not be able to reverse-engineer the contents of a file from its digital fingerprint.
To verify the timestamp of a post or file, a social media user would click a button, and software would calculate its hash and use that hash to search the history blockchain. If there were a match, you’d be able to see when that hash first entered the ledger—and thus verify that the file or post was created on a certain date and had not been modified since then.
This technique wouldn’t magically allow the general populace to trust each other, and it would not verify the “truth” or veracity of content—deepfakes would be timestamped on the blockchain too. But the ledger, if maintained over time, would give future historians some hope of tracking down the actual order of historical events, and they’d be better able to gauge the authenticity of content that comes from a trusted source.
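The mechanics of such a history stamp—hash media at creation, chain edits to their parent hash, verify by lookup—can be sketched as a toy in-memory ledger. The class and field names here are invented for illustration; a real system would be a distributed, tamper-evident blockchain, not a Python dictionary.

```python
import hashlib
import time
from typing import Optional

class HistoryLedger:
    """Toy append-only ledger mapping media fingerprints to timestamps.
    Entries store only the hash (plus, optionally, the hash of the version
    a file was derived from)—never the media itself."""

    def __init__(self):
        # sha256 hex digest -> (timestamp, parent hash or None)
        self._entries = {}

    def stamp(self, media: bytes, parent_hash: Optional[str] = None) -> str:
        """Record a fingerprint for this exact file; append-only, so the
        first stamp for a given fingerprint wins."""
        digest = hashlib.sha256(media).hexdigest()
        if digest not in self._entries:
            self._entries[digest] = (time.time(), parent_hash)
        return digest

    def verify(self, media: bytes):
        """Return (timestamp, parent_hash) if this exact file was ever
        stamped, or None if it has no entry (new or altered content)."""
        return self._entries.get(hashlib.sha256(media).hexdigest())

ledger = HistoryLedger()
original = b"raw video bytes"
h1 = ledger.stamp(original)                # stamped at creation
edited = original + b" [cropped]"
h2 = ledger.stamp(edited, parent_hash=h1)  # the edit references its source

print(ledger.verify(original) is not None)       # True: existed at its stamped time
print(ledger.verify(b"never-stamped") is None)   # True: no provenance on record
```

As in the scenario above, a match proves only that a file existed in that exact form at the stamped time—a deepfake stamped on day one would verify just as cleanly.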
Of course, the implementation of this hypothetical history stamp will take far more work than what is laid out here, requiring consensus from an array of stakeholders. But a system like this would be a key first step in providing a future historical context for our digital world.
6. Restrict access to deepfake tools
At some point, it’s likely that politicians in the U.S. and Europe will call widely for deepfake tools to be made illegal (as unmarked deepfakes already are in China). But a sweeping ban would be problematic for a free society. These same AI-powered tools will empower an explosion in human creative potential and artistry, and they should not be suppressed without careful thought. That would be the equivalent of outlawing the printing press because you don’t like how it can print books that disagree with your historical narrative.
Even if some synthetic media software becomes illegal, the tools will still exist in rogue hands, so legal remedies will likely only hamstring creative professionals while driving the illicit tools underground where they can’t as easily be studied and audited by tech watchdogs and historians.
7. Build a cryptographic ark for the future
No matter the solution, we need to prepare now for a future that may be overwhelmed by synthetic media. In the short term, one important aspect of fixing the origin date of a media artifact in time with a history blockchain is that if we can prove that the media was created before a certain technology existed to falsify it, then we know it is more likely to be genuine. (Admittedly, with rapid advances in technology, this window may soon be closed.)
Still, if we had a timestamp network up and running, we could create a “cryptographic ark” for future generations that would contain the entirety of 20th-century media—films, music, books, website archives, software, periodicals, 3D scans of physical artifacts—digitized and timestamped to a date in the very near future (say, January 1, 2022) so that historians and the general public 100 years from now will be able to verify that yes, that video of Buzz Aldrin bouncing on the moon really did originate from a time before 13-year-olds could generate any variation of the film on their smartphone.
Of course, the nondigital original artifacts will continue to be stored in archives, but with public trust in institutions diminished in the future, it’s possible that people (especially those not born in the 20th century who did not witness the media events firsthand) won’t believe officials who claim those physical artifacts are genuine if they don’t have the opportunity to study them themselves.
With the cryptographic ark, anyone will be able to use the power of the history blockchain to verify the historical era of pre-internet events if they can access the timestamped versions of the digitized media artifacts from an online archive.
Thinking about all of this, it might seem like the future of history is hopeless. There are rough waters ahead, but there are actions we can take now to help the continuity of history survive this turbulent time. Chief among them, we must all know and appreciate the key role history plays in our civilization. It’s the record of what we do, what we spend, how we live—the knowledge we pass on to our children. It’s how we improve and build on the wisdom of our ancestors.
While we must not let disinformation destroy our understanding of the past, we also must not descend so far into fear that we stifle the creative tools that will power the next generation of art and entertainment. Together, we can build new tools and policies that will prevent digital barbarians from overwhelming the gates of history. And we can do it while still nourishing the civilization inside.