
TECH

Is content validation the next growth industry?

In this era of disinformation, dissembling, and deepfakes, emerging technologies that restore our trust in online content could make for booming new businesses.

[Animation: Andrey_A/iStock; Dieter_G/Pixabay]

By Ryan Holmes | 6 minute read

By now, you’ve probably seen the fake video of Barack Obama calling Donald Trump “a total and complete dipshit.” The voice may not be exactly right, but the clip—which took a team of video pros at BuzzFeed 56 hours to create—vividly illustrates the nascent threat of deepfakes, i.e., digitally altered videos that can make pretty much anyone say anything.

Deepfake technology is already being used to insert celebrity faces into pornography, and it’s not hard to see dangerous implications for politics. Putting false statements into the mouths of state actors could easily spur an international controversy, a stock market panic, or even an outright war. Far from science fiction, the threat is so real that DARPA, the U.S. defense agency responsible for emerging military technology, has already assembled an official media forensics lab to sniff out fakes.

Adulterated videos, of course, aren’t the only threat on the fake news front. The 2016 U.S. presidential election offered a convincing illustration of the power of plain old fake headlines and news stories, spread on social media, to sway the course of world events. Since then, we’ve grown accustomed to second-guessing, and occasionally falling for, dubious stories spread on our feeds. (It doesn’t help that real journalism is now being conflated by some politicians with fake news.) In the end, we’re left deeply uncertain who and what to trust—if anything at all.

Social media sits at the crux of many of these challenges. It’s the primary place people get their news these days and, sadly, one of the places most vulnerable to manipulation. As someone who has built a career in social media, I find this alarming. I have tremendous faith in the power of social channels to create connection and open up dialogue. Networks like Facebook and Twitter have become part of the plumbing of the internet and aren’t going away. But the spread of fake content—not just wacky, easily dismissed conspiracy theories but convincing videos capable of making experts do a double take—is a growing threat.

How do we restore trust and confidence in online content in this climate? To me, the way forward isn’t just an algorithm tweak or a new set of regulations. This challenge is far too complex for that. We’re talking, at root, about faith in what we see and hear online, about trusting the raw data that informs the decisions of individuals, companies, and whole countries. The time for a Band-Aid fix has long passed. Instead, we may be talking about the digital era’s next growth industry: content validation.

The burgeoning content validation industry
Interestingly, we’re already seeing a flurry of activity in this arena, as the arms race between fakers and detectives accelerates. The deepfake phenomenon, in particular, has inspired a growing technological response, outlined recently by Axios’s Kaveh Waddell. The startup Truepic, which has just attracted more than $10 million in funding from the likes of Reuters, has set its sights on sniffing out details like eye reflectivity and hair placement, which are nearly impossible to fake consistently across the thousands of frames in a video. Gfycat, the GIF-hosting platform, uses AI-powered tools that check for anomalies to identify and pull down offending clips on its site.
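The detectors these companies use are proprietary, but the general idea of checking a physical detail for frame-to-frame consistency is easy to illustrate. Here is a loose, hypothetical Python sketch; the per-frame feature values, the threshold, and the function names are all assumptions, not anyone’s actual system:

```python
import statistics

# A highly simplified sketch of the frame-consistency checks described above.
# It assumes an upstream vision model has already extracted some per-frame
# physical feature (say, the position of a specular highlight in the eye);
# real detectors are far more sophisticated.

def inconsistency_score(feature_per_frame: list[float]) -> float:
    """Measure frame-to-frame jitter in a feature that should vary smoothly."""
    diffs = [abs(b - a) for a, b in zip(feature_per_frame, feature_per_frame[1:])]
    return statistics.mean(diffs) if diffs else 0.0

def likely_manipulated(feature_per_frame: list[float], jitter_cutoff: float = 0.15) -> bool:
    """Flag clips where the feature jumps around more than real optics allow."""
    return inconsistency_score(feature_per_frame) > jitter_cutoff
```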

On the academic and research front, scientists at Los Alamos National Laboratory are building algorithms that hunt for repeated visual elements, a telltale sign of video manipulation, while SUNY Albany researchers have developed a system that monitors blinking patterns in video. DARPA and its media forensics team, meanwhile, look for inconsistencies in lighting on AI-generated faces.
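The blinking idea is worth making concrete. Humans blink roughly 15 to 20 times a minute, while early face-swap models, trained mostly on open-eyed photos, produced far fewer blinks. Below is a minimal, self-contained Python sketch of a blink-rate check; it assumes an upstream face tracker has already produced a per-frame eye-openness score, and the thresholds are illustrative rather than the SUNY Albany system’s actual values:

```python
# A toy blink-rate heuristic. Input: one "eye openness" score per frame
# (0.0 = closed, 1.0 = wide open), produced by some assumed face tracker.

def count_blinks(openness_per_frame, closed_threshold=0.2):
    """Count transitions from open to closed eyes across a frame sequence."""
    blinks = 0
    eyes_closed = False
    for openness in openness_per_frame:
        if openness < closed_threshold and not eyes_closed:
            blinks += 1          # a new blink has started
            eyes_closed = True
        elif openness >= closed_threshold:
            eyes_closed = False  # eyes reopened; ready for the next blink
    return blinks

def looks_synthetic(openness_per_frame, fps=30, min_blinks_per_minute=5):
    """Flag clips whose blink rate falls below a plausible human baseline."""
    minutes = len(openness_per_frame) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(openness_per_frame) / minutes
    return rate < min_blinks_per_minute
```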

Trickier still, however, is flagging fake and biased text-based news stories—the kind of content that’s easy to make and likeliest to find its way into our social streams. There’s little technical trickery required here, just an old-fashioned ability to tell convincing lies and use language to play on readers’ biases and emotions. Perhaps for that reason, these fake stories tend to evade machine detection and often require direct human intervention to suss out.

Facebook, for all its technical sophistication, has resorted to partnering with a growing army of human fact-checkers to vet content on its platform in the wake of the Cambridge Analytica scandal and the 2016 election. Posts flagged as false by users (or by machine learning) are forwarded to one of 25 fact-checking partners in 14 countries, including the likes of the Associated Press, PolitiFact, and Snopes. Content deemed false is, in turn, demoted by Facebook, which pushes it lower in the news feed, evidently reducing future views by more than 80%.
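Facebook’s actual ranking system is not public, but the demotion step it describes might look something like the following hypothetical sketch, where a fact-check rating simply scales down whatever score the feed-ranking model produced. The names, rating values, and multipliers are assumptions drawn only from the reported 80% effect on views:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float            # whatever the normal feed-ranking model produces
    fact_check_rating: str = ""  # e.g. "false", "mixture"; "" if unreviewed

DEMOTION_FACTORS = {
    "false": 0.2,     # reported ~80% drop in future views
    "mixture": 0.5,   # illustrative value for partly false content
}

def ranked_score(post: Post) -> float:
    """Apply a demotion multiplier when a fact-checking partner flags the post."""
    return post.base_score * DEMOTION_FACTORS.get(post.fact_check_rating, 1.0)
```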

This manual, piecemeal approach admittedly leaves a lot to be desired: Standards vary by fact-checking organization, and even patently fake stories may go viral before Facebook has a chance to demote them. Plus, the sheer scale overwhelms human intervention: Every 60 seconds on Facebook, 510,000 comments are posted and 136,000 photos are uploaded, according to one estimate; that works out to more than 700 million comments a day. It’s little wonder there aren’t enough fact-checkers to review all the false claims.


The future of trust
Is there a better way to address this problem? There has to be. Will it be easy? Nope. And this is where ingenuity and market opportunity need to come together. For instance, can we find a way to leverage domain authority (the search-engine ranking score that serves as a rough proxy for “trustworthiness”) in vetting content shared on social media? (An outlet like the New York Times, which has a domain authority of 99/100, would be ranked as highly trusted.) This approach is scalable but, admittedly, far from perfect, since domain authority is rooted largely in backlinks rather than factual correctness.
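Here’s a toy Python sketch of the idea. The scores are made-up stand-ins for a real metric like Moz’s Domain Authority, which in practice would come from a third-party API rather than a hard-coded table, and the cutoff is an assumption for illustration:

```python
from urllib.parse import urlparse

# Hypothetical authority scores; a real system would query an SEO provider.
DOMAIN_AUTHORITY = {
    "nytimes.com": 99,
    "apnews.com": 94,
    "example-clickbait.biz": 8,
}

def trust_label(url: str, trusted_cutoff: int = 70) -> str:
    """Map a shared link to a coarse trust label via its domain's authority."""
    domain = urlparse(url).netloc.removeprefix("www.")
    score = DOMAIN_AUTHORITY.get(domain, 0)  # unknown domains get no credit
    return "highly trusted" if score >= trusted_cutoff else "unverified"

print(trust_label("https://www.nytimes.com/some-story"))  # highly trusted
```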

Or could we take a cue from the HTTPS protocol? The trusted lock symbol next to the address bar in our browsers offers instant assurance that the sites we’re visiting—like banks or online stores—are secure and our sensitive data is safe. It’s not hard to imagine how useful this concept would be in the world of content: I’m picturing a nifty little badge on videos, photos, and stories that have been verified as true and factually correct, not fakes. The challenge is that HTTPS is merely an assurance of encryption. Vouching for data accuracy is a thornier issue altogether, and far more complex from both a technical and a human standpoint.
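A signature-based badge, at least, is easy to sketch. The snippet below is a hypothetical illustration using the Python cryptography library, not any existing standard: a publisher signs its content with a private key, and a platform displays the badge only if the bytes it received are exactly what the publisher signed. Like HTTPS, this proves origin and integrity, not truth:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key handling is simplified; in reality the public key would be distributed
# and trusted much like a TLS certificate.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

article = b"Full text of the article as published..."
signature = publisher_key.sign(article)  # shipped alongside the content

def show_badge(content: bytes, sig: bytes) -> bool:
    """Display the badge only if the content matches what the publisher signed."""
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(show_badge(article, signature))                 # True
print(show_badge(article + b" (edited)", signature))  # False: tampered
```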

What about blockchain? The idea of an immutable, distributed ledger tracing the origin of all content back to its source definitely sounds appealing. Users could compare versions of videos or images to check for modifications, and watermarks would serve as a badge of quality. (Indeed, this is largely the idea behind the Truepic app.) But here, too, the question is whether this can be applied to text-based content, where the intent to deceive leaves fewer technical traces.
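A minimal hash-chained ledger captures the core of the idea. This Python toy has no consensus mechanism or distribution, and all the names are hypothetical, but it shows how recording a fingerprint for each piece of content at capture time lets anyone detect a modified copy later:

```python
import hashlib
import json
import time

ledger = []  # each entry commits to the content's hash and the previous entry

def record(content: bytes, source: str) -> str:
    """Append a provenance entry and return the content's fingerprint."""
    fingerprint = hashlib.sha256(content).hexdigest()
    prev_hash = ledger[-1]["entry_hash"] if ledger else "genesis"
    entry = {"fingerprint": fingerprint, "source": source,
             "timestamp": time.time(), "prev": prev_hash}
    # Chain the entries: altering any past entry breaks every later hash.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return fingerprint

def is_original(content: bytes) -> bool:
    """Check a file against recorded fingerprints to spot modified copies."""
    digest = hashlib.sha256(content).hexdigest()
    return any(e["fingerprint"] == digest for e in ledger)

record(b"raw video bytes...", source="Reuters camera app")
print(is_original(b"raw video bytes..."))    # True
print(is_original(b"doctored video bytes"))  # False
```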

Ultimately, comprehensive content validation is far easier to imagine than to pull off in real life. Once upon a time, we had a pretty great system for this—it was called journalism. For all their limits and flaws, news outlets aspired to certain standards of accuracy, and professionals dedicated their lives to upholding them. The internet and social media have eroded the role of traditional editorial gatekeepers. Anyone can create news now. Anyone can spread false information. Anyone can cast aspersions. And as sophisticated new tools proliferate, almost anyone can create a convincing fake reality.

But I remain optimistic. The backbone of the digital economy—which, increasingly, is the only economy—is information exchange. When that information loses its validity, that’s a huge problem… and a huge market opportunity. Content validation is a void waiting to be filled, and it may just represent one of the next great digital waves.


ABOUT THE AUTHOR

Ryan Holmes is the CEO of Hootsuite, a social media management system with more than 10 million users. A college dropout, he started a paintball company and a pizza restaurant before founding Invoke Media, the company that developed Hootsuite in 2009. Today, Holmes is an authority on the social business revolution, quoted in The New York Times and The Wall Street Journal and called upon to speak at TEDx and SXSW Interactive conferences.