Early one morning in April 2017, a series of horrific photos and videos began hitting Facebook and YouTube showing civilians in a rebel-held area of northern Syria writhing on the ground and gasping for oxygen as deadly sarin-based gas—which witnesses said was dropped from the sky by the Syrian government—filled their lungs. It was one of the worst chemical attacks in the country’s nearly decade-long conflict, yet the United Nations Security Council failed to adopt a resolution to intervene. The council was stymied by Russia and its allies, who dismissed the visual evidence as staged.
“Other countries—people who wanted to deny the reality of what was going on in Syria—undermined the validity of user-generated content,” says Mounir Ibrahim, who was serving as an adviser to then UN Ambassador Nikki Haley at the time. “It was a surprisingly effective argument.” Ibrahim, a career diplomat with the U.S. State Department, was posted in Syria during the Arab Spring six years earlier and witnessed firsthand the potency of visual documentation—the way it could empower the powerless and lead to the fracturing of autocratic regimes. Now that advantage was being subverted. (Indeed, a year after the sarin attack, Syrian president Bashar al-Assad’s forces carried out a similar strike near Damascus.)
Realizing that NGOs, journalists, and others needed a way to authenticate on-the-ground footage, Ibrahim reached out to San Diego–based Truepic. The company has a free camera app that, immediately after a user pushes the shutter button, imprints photos and videos with indelible metadata, including a time stamp and geolocation information. After being transmitted to Truepic for verification, the image or video is then recorded to the blockchain and uploaded to the company’s servers, which also host an individual web page for each piece of content.
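The capture-and-verify flow described here resembles a standard content-authentication pattern: fingerprint the image bytes together with the metadata at the moment of capture, then anchor that fingerprint in an append-only record so any later edit is detectable. The sketch below is a hypothetical illustration of that general pattern, not Truepic's actual implementation; the function names and the in-memory ledger standing in for a blockchain are all invented for the example.

```python
import hashlib
import json
import time

# Hypothetical append-only ledger standing in for a blockchain record.
LEDGER = []

def capture(image_bytes: bytes, lat: float, lon: float) -> dict:
    """Imprint a photo with metadata at capture time and record its fingerprint."""
    metadata = {
        "timestamp": time.time(),  # time stamp applied at the moment of capture
        "lat": lat,                # geolocation information
        "lon": lon,
    }
    # Hash the image bytes together with the metadata so that neither
    # can be altered later without breaking verification.
    digest = hashlib.sha256(
        image_bytes + json.dumps(metadata, sort_keys=True).encode()
    ).hexdigest()
    LEDGER.append(digest)          # anchor the fingerprint in the ledger
    return {"image": image_bytes, "metadata": metadata, "digest": digest}

def verify(record: dict) -> bool:
    """Recompute the fingerprint and check it against the ledger."""
    expected = hashlib.sha256(
        record["image"] + json.dumps(record["metadata"], sort_keys=True).encode()
    ).hexdigest()
    return expected == record["digest"] and expected in LEDGER

photo = capture(b"\x89PNG...raw bytes...", 36.5, 37.1)
print(verify(photo))                           # True: untampered
tampered = dict(photo, image=b"edited bytes")
print(verify(tampered))                        # False: any edit breaks the hash
```

The key property is that the fingerprint is computed at capture, before the content ever leaves the device, so a doctored copy can never be passed off as the verified original.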
Ibrahim quit his State Department job, joined Truepic as vice president of strategic initiatives in October 2017, and immediately set to work brokering relationships between the company and his humanitarian contacts in Syria. Today, the Truepic app is used by doctors and hospital workers with the Syrian American Medical Society to document attacks on civilians, as well as by citizen journalists, including 16-year-old Muhammad Najem, whose September video plea for President Trump to “protect children like me” from Assad’s chemical attacks has been seen more than a million times. The State Department is currently training citizen journalists in the Middle East to use the app in their reporting.
Truepic was founded in 2015 by Craig Stack, a Goldman Sachs alum who saw an opportunity in making it harder for Craigslist scammers and dating-site lurkers to deceive people. “It hit me that there were all these apps that deal with image manipulation or spoofing location and time settings,” says Stack, who now serves as COO. But today the company’s primary mission is to use image-verification tools to identify and battle more formidable forms of disinformation—from the faux social media accounts that the Kremlin used to manipulate the 2016 U.S. presidential election to the doctored photos that travel the back roads of WhatsApp and catalyze violence in places like Myanmar and India.
Truepic’s latest target is deepfakes, a portmanteau of “deep learning” and “fake,” which use artificial intelligence to essentially copy and paste an individual’s face and expressions onto another’s body in eerily realistic videos. Among the most sordid examples: a Reddit post from the fall of 2017 that shared pornographic deepfakes built from Hollywood actresses’ faces, complete with a do-it-yourself instruction kit. Among the most comical, but disconcerting: a widely circulated clip, created as a call to action, that featured President Obama seemingly referring to Trump as a “dipshit.” (At the end of the video, director and comedian Jordan Peele revealed himself to be the voice.)
It’s not difficult to imagine the havoc that these videos could wreak—on politics, journalism, law enforcement, and more—as the sophisticated computing tools they require become more accessible. How might manipulated videos be deployed to embarrass or discredit people in the public eye? In a world filled with deepfakes, can bystander video still be used to hold police accountable? What happens when what’s fake appears real—and what’s real can be plausibly denied?
“If I was a campaign manager for the next election, I would be videotaping every single one of my candidate’s speeches and putting them on Truepic,” says Hany Farid, a professor of computer science at Dartmouth College who has helped detect manipulation in preexisting images for organizations including DARPA and The New York Times. He now serves as a Truepic adviser. “That way, when the question of authenticity comes up, if somebody manipulates a video, [the user] has a provable original, and there’s no debate about what happened.”
Last fall, Truepic acquired Farid’s startup, Fourandsix Technologies. Using Farid’s digital-forensics tools, Truepic will soon be able to evaluate the trustworthiness of the billions of images taken outside its camera app. In March, it plans to introduce a product called Truepic Insight that scans images and videos, compares them against both similar and known authentic ones online, and sniffs out the subtle abnormalities (mismatches in eye reflectivity; irregularities in light or color) that are the key “tells” of photo manipulation. “This system is designed to be high frequency, high speed, and data driven,” says Truepic CEO Jeffrey McGregor, who joined the company after selling his restaurant-payments startup Dash to the reservations system Reserve in 2016.
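Comparing an image against similar and known-authentic copies is commonly done with perceptual hashes, which stay stable under benign changes like recompression but shift when content is altered. The toy average-hash comparison below illustrates that idea in miniature (a deliberate simplification on tiny 4×4 grayscale grids; Truepic has not disclosed its actual forensic methods):

```python
def average_hash(pixels):
    """Perceptual hash of a small grayscale image (list of rows of 0-255 ints):
    each bit records whether a pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; a small distance means near-identical content."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
# Same scene, slightly brighter (e.g. re-encoded): the hash does not move.
recompressed = [[12, 12, 205, 205],
                [12, 12, 205, 205],
                [12, 12, 205, 205],
                [12, 12, 205, 205]]
# Content altered in one region: several bits flip.
doctored = [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [200, 200, 10, 10],
            [10, 10, 200, 200]]

h = average_hash(original)
print(hamming(h, average_hash(recompressed)))  # 0: benign re-encode
print(hamming(h, average_hash(doctored)))      # 4: content changed
```

Production systems layer many such signals—the eye-reflectivity and lighting checks mentioned above among them—but the underlying logic is the same: small distances from a trusted original are benign, large ones are suspect.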
While the camera app is free, McGregor has found commercial applications for the company’s patented technology, which he sells to enterprise customers. Jewelers Mutual Insurance now has users open a Truepic-embedded app and snap photos of valued items for coverage, vastly compressing the average underwriting and inspection process. Truepic is also used by catastrophe insurers, such as La Jolla, California’s Palomar Specialty, to reduce the time between claim submission and payment, and by banks to underwrite home-equity and small-business loans. McGregor envisions Truepic Insight speeding up these processes even further by having policyholders simply drag and drop photos onto insurers’ websites. (A journalist, meanwhile, could use the service to upload photos from an anonymous source and test their veracity within minutes.) As Truepic’s profile grows—and its pilot programs turn into contracts—McGregor is projecting $4.5 million in 2019 revenue.
He is also speaking with social media companies—including one extremely prominent one, he says—where disinformation is bred and distributed. (The Truepic app is already popular with Reddit “Ask Me Anything” Q&A subjects, who use it to snap a verified photo of themselves for the site.) His pitch: What if there were an automatic stamp that went onto every Truepic-authenticated image when it was uploaded to the social media site? Or a fraud-detection service that, upon noticing the slightest rogue pixel, immediately labeled a piece of content as falsified?
The company’s most game-changing partnership may be with Qualcomm, which makes smartphone processors used by Samsung, LG, and Xiaomi, among other companies. Qualcomm plans to optimize its chips for Truepic’s technology, allowing Truepic to verify information about the device with which an image was taken. Longer term, the collaboration would allow any device operating with a Qualcomm chip to offer users image-authentication capability within the phone’s native camera app. “We think this is a legitimate problem that needs to be solved, and we can provide an extra layer of security,” says Keith Kressin, Qualcomm’s senior VP of product management for Snapdragon, its mobile processor and platform. (Neither party will offer specifics on a timeline, but McGregor describes it as a “five-plus-year road map.”)
If Truepic succeeds, it will provide a crucial bulwark against the rising tide of disinformation. Still, the prospect of entrusting yet another private tech company with such an important societal function is somewhat unsettling. What if Truepic’s servers get hacked? Could governments use its image metadata to surveil citizens?
McGregor gets the picture. “Our team has a lot of responsibility, and I take it seriously,” he says. “We are working to find a solution [to disinformation] as quickly as we can—really for the sake of democracy as a whole. We feel the pressure. We are in this.”