From schlocky daytime TV (Did you cheat on your girlfriend?) to police interrogations (Did you shoot him?), the old-fashioned polygraph is still considered by many to be a reliable way to get to the truth. The trouble is, it isn't, and that has been shown over and over again.
The polygraph “lie detector” essentially looks for physiological changes (blood pressure, pulse) as a subject is questioned. If you’re nervous for whatever reason, guilty or not, the outcome can be skewed, which is why polygraph results face limited admissibility in court.
Controversy over polygraphs has given way to a new breed of computerized fib busters that use AI to scan for many more tell-tale signs of deception.
The U.S. Department of Homeland Security and authorities in Canada and the European Union are testing a system called AVATAR, developed by researchers at San Diego State University and the University of Arizona, that asks questions via an interactive video terminal at border crossings. While subjects answer standard questions about weapons or produce, they’re digitally monitored for lies, with suspicious travelers sent on to additional screening by human agents.
“The system can detect changes in the eyes, voice, gestures and posture to determine potential risk,” Aaron Elkins, an assistant professor of management information systems at San Diego State, has said. “It can even tell when you’re curling your toes.”
More recently, Elkins told CNBC that the system has an accuracy rate between 60% and 75%, with peaks of up to 80%. While those numbers might not seem high, they still beat humans, who, Elkins said, judge truthfulness correctly only 54% to 60% of the time.
AVATAR, which stands for Automated Virtual Agent for Truth Assessments in Real-Time, isn’t the only digital lie detection system. A Lehi, Utah, company called Converus announced last month that its EyeDetect system, which administers a 30-minute test judging truthfulness based on a computer’s observations of eye movement, would be accepted as evidence in a New Mexico court. The defendant in the case was judged “credible” by the system and had asked the court to allow the test as evidence.
“It’s a significant milestone to have EyeDetect test results admitted as evidence in court,” said Converus President and CEO Todd Mickelsen in a statement. “Attorneys with strong cases can now use EyeDetect to exonerate their clients.”
A sheriff’s office in New Mexico also uses EyeDetect to screen job candidates, according to Converus, and the company has drawn attention by administering its tests to willing politicians.
If the devices become more prevalent in court, at border crossings or anywhere else, they’ll likely face more scrutiny from scientists and civil libertarians questioning their accuracy.
“Machine learning is the technology du jour, and everybody is taking it around the track to see what it can do in every different area,” says Jay Stanley, a senior policy analyst at the American Civil Liberties Union (ACLU). “The fundamental problem with lie detection is there is really no reliable relationship between your internal mental state and any kind of external stimuli.”
The ACLU has opposed lie detection technologies dating back to the 1950s, for reasons beyond the effectiveness of the polygraph.
“We have said since the 1970s that even if the polygraph were to pass an acceptable threshold of reliability, or a more accurate lie-detection technology were to come along, we would still oppose it because of the unacceptable violation of civil liberties it represents,” Stanley wrote in a blog post in 2012. “We view techniques for peering inside the human mind as a violation of the Fourth and Fifth Amendments, as well as a fundamental affront to human dignity.”