In the list of people in line to lose their jobs to robots, some doctors are slowly edging closer to the front. Johnson & Johnson recently tried to sell an anesthesiology robot to hospitals (though hospitals didn’t buy it). Robot surgeons can sew more precisely than humans. And the job of physician assistant has an estimated 14.5% chance of being automated in the next 20 years.
But it’s likely to take longer for algorithms to do the job of an internal medicine doctor. Diagnosis might be an early skill for computers to pick up, but they aren’t there yet: A recent small study put doctors head-to-head with online symptom checkers. Faced with symptoms from an anonymous patient, doctors were far more likely to identify the right disease.
In theory, algorithms should be good at diagnosis. “You hear, in particular in the Silicon Valley world, how computers can be the way of the future in terms of diagnosing patients,” says Ateev Mehrotra, an assistant professor at Harvard Medical School and senior author of the study. “Because it’s complex pattern recognition and maybe computers can do that better.”
In a previous study, the researchers compared 23 symptom checkers (the tools that let you enter a series of symptoms and suggest what might be wrong), using example symptoms from patients with a mix of diseases, from less serious to serious, and common to uncommon. In this study, they presented the same symptoms to doctors using Human Dx, an online platform. The doctors didn’t know they weren’t dealing with real patients.
“We thought it was a reasonable comparison, because this is what people are using out there,” says Mehrotra. “These symptom checkers, sites or apps, are used more than 100 million times a year.”
Doctors got the diagnosis right 72% of the time, versus 34% for the symptom checkers. The doctors also were more likely to list the correct diagnosis among their first three possibilities (84% versus 51%).
In some cases, symptom checkers might miss a serious illness that needs attention, such as meningitis. In other cases, they might temporarily terrify people by suggesting they might have cancer.
Of course, physicians aren’t perfect either. Doctors provided the wrong diagnosis about 15% of the time, a rate similar to that found in other studies, even when doctors had access to more information, such as physical exams and tests, which wasn’t the case here.
“There’s been a lot of attention recently about the issue that unfortunately when people see a physician, relatively commonly they’re misdiagnosed,” Mehrotra says.
In the near future, as symptom checkers improve, they’re most likely to be used as tools that help doctors do better, not replace them. “You could argue that the work that we did here is a little artificial,” he says. “In the sense that when we look at how computers are being used in our lives, often it’s not an either/or. It’s rather they help us and augment what we do.”
Doctors may someday pull up an app that automatically helps diagnose patients in real time.
“It’s a big leap to do this, but maybe there’s a computer program reading all of the information that’s coming in, and on the left side of the screen it’s saying, ‘It could be this, it could be this, it could be this,’ and constantly changes it as more information is included,” he says. “It could even say, ‘Hey, I think it’s really important that you ask this question.’ That combination of computer and physician might be better than physician alone.”
Eventually, an algorithm on its own may do better than a human. “Will there ever come a point when computers will outperform physicians? I’m confident that will probably happen in the future,” says Mehrotra. “How soon in the future, I don’t know.”