AI-powered healthcare isn’t without pitfalls, but its potential is vast

Algorithms are poised to change everything about keeping people healthy, including for patients who might never have the opportunity to see a human doctor.


Anybody with even the vaguest interest in politics and economics will recognize that the provision of healthcare is one of the most important global financial problems for private citizens and for governments. On the one hand, improvements in healthcare provision over the past two centuries are probably the most important single achievement of the scientific method in the industrialized world: In 1800, life expectancy for someone in Europe would have been less than 50 years; someone born in Europe today could reasonably expect to live late into their seventies. Of course, these improvements in healthcare and life expectancy have not yet found their way to all parts of the globe, but overall, the trend is positive, and this is, of course, a cause for celebration.


But these welcome advances have created challenges. First, populations are, on average, becoming older. And older people typically require more healthcare than younger people, which means that the overall cost of healthcare has risen. Second, as we develop new drugs and treatments for diseases and other afflictions, the overall range of conditions that we can treat increases—which also leads to additional healthcare costs. And of course, a key underlying reason for the expense of healthcare is that the resources required to deliver healthcare are expensive, and people with the skill and qualifications to do so are scarce.

Because of these problems, healthcare—and more particularly funding for healthcare—is everywhere a perennial issue for politicians to wrangle with. Wouldn’t it be wonderful, then, if there were a technological fix to the healthcare problem?

The idea of AI (artificial intelligence) for healthcare is nothing new: back in the 1970s, the seminal MYCIN expert system was widely acclaimed after demonstrating better-than-human performance when diagnosing the causes of blood infections in humans. No surprise, then, that MYCIN was followed by a wave of similar healthcare-related expert systems, although it is fair to say that relatively few of these made it far from their research labs. But nowadays, interest in AI for healthcare is back with a vengeance, and this time, there are several developments that suggest it has a better chance of succeeding on a large scale.

AI on your wrist

One important new opportunity for AI-powered healthcare is what we might call personal healthcare management. Personal healthcare management is made possible by the advent of wearable technology—smartwatches like the Apple Watch, and activity/fitness trackers such as Fitbit. These devices continually monitor features of our physiology such as our heart rate and body temperature. This raises the fascinating prospect of large numbers of people generating data streams relating to their current state of health on a continual basis. These data streams can then be analyzed by AI systems, either locally (via the smartphone you carry in your pocket) or by uploading them to an AI system on the internet.

It is important not to underestimate the potential of this technology. For the first time ever, we can monitor our state of health on a continual basis. At the most basic level, our AI-based healthcare systems can then provide impartial advice on managing our health. This is, in some sense, what devices like Fitbit already do—they monitor our activity and can also set us targets.



Mass-market wearables are in their infancy, but there are plenty of indications of what is to come. In September 2018, Apple introduced the fourth generation of its Apple Watch, which included a heart monitor for the first time. Electrocardiogram apps can monitor the data provided by the heart-rate sensor and have the potential to identify the symptoms of heart disease, perhaps even calling for an ambulance on your behalf if necessary. One possibility is monitoring for the elusive signs of atrial fibrillation—an irregular heartbeat—which can be the precursor to a stroke or other circulatory emergency. An accelerometer in the watch can be used to identify the signature of someone falling, potentially calling for assistance if needed. Such systems require only fairly simple AI techniques: What makes them practicable now is the fact that we can carry a powerful computer with us, which is continually connected to the internet, and which can be linked to a wearable device equipped with a range of physiological sensors.
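To get a feel for just how simple such techniques can be, here is a minimal sketch of fall detection from accelerometer data: a fall typically shows up as a sharp spike in acceleration followed by a period of near-stillness. The thresholds and window sizes below are illustrative assumptions, not the values any real device uses, and real products combine many more signals before calling for help.

```python
import math

def acceleration_magnitude(sample):
    """Magnitude of a 3-axis accelerometer reading, in units of g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(samples, impact_threshold=2.5, still_threshold=1.1, still_window=5):
    """Flag a possible fall: a sharp impact spike followed by near-stillness.

    `samples` is a list of (x, y, z) readings. The thresholds are
    illustrative only -- not clinically validated values.
    """
    mags = [acceleration_magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m > impact_threshold:
            after = mags[i + 1 : i + 1 + still_window]
            if len(after) == still_window and all(a < still_threshold for a in after):
                return True
    return False

# At rest, the magnitude hovers around 1g (gravity alone); a fall
# shows a spike well above that, then stillness on the ground.
resting = [(0.0, 0.0, 1.0)] * 10
fall = resting + [(0.5, 0.5, 3.2)] + [(0.0, 0.0, 1.0)] * 6
print(detect_fall(resting))  # False
print(detect_fall(fall))     # True
```

The point is that no deep learning is needed here: a threshold rule over a continuous sensor stream, running on hardware we already carry, is enough to be genuinely useful.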

Some applications of personal healthcare may not even require sensors, just a standard smartphone. Colleagues of mine at the University of Oxford believe it may be possible to detect the onset of dementia simply from the way that someone uses their smartphone. Changes in the way that people use their phone or changes in patterns of behavior recorded by their phone can indicate the onset of the disease, before any other person notices these signs and long before a formal diagnosis would normally be made. Dementia is a devastating condition and presents an enormous challenge for societies with aging populations. Tools that can assist with its early diagnosis or management would be very welcome. Such work is still at the very preliminary stages, but it provides yet another indicator of what might come.

This is all very exciting, but the opportunities presented by these new technologies come with some potential pitfalls too. The most obvious of these is privacy. Wearable technology is intimate: It continually watches us, and while the data it obtains can be used to help us, it also presents boundless opportunities for misuse.

One area of immediate concern is the insurance industry. In 2016, the health insurance company Vitality started offering Apple Watches along with its insurance policies. The watches monitor your activity, and your insurance premiums are then set according to how much exercise you undertake. If, one month, you decided to be a couch potato and undertook no exercise, you might pay a full premium; but you could offset this the next month by going on a fitness frenzy, leading to a reduced premium. Perhaps there is nothing directly wrong with such a scheme, but it suggests some much more uncomfortable scenarios. For example, in September 2018, the U.S.-based insurance company John Hancock announced that in the future, it will only offer insurance policies to individuals who are prepared to wear activity-tracking technology. The announcement was widely criticized. [Editor’s note: Wearing a fitness tracker and sending information to John Hancock is an option within its Vitality plans, not a requirement.]

Taking this kind of scenario further, what if we were only able to access national healthcare schemes (or other national benefits) if we agreed to be monitored and to meet daily exercise targets? You want healthcare? Then you have to walk 10,000 steps per day! Some people see nothing wrong with such a scenario; for others, it represents a profound intrusion and an abuse of our basic human rights.


Diagnostics, automated

Automated diagnosis is another exciting potential application for AI in healthcare. The use of machine learning to analyze data from medical imaging devices such as X-ray machines and ultrasound scanners has received enormous attention over the past decade. At the time of this writing, it seems as if every single day a new scientific article is announced showing that AI systems can effectively identify abnormalities in medical images. This is a classic application of machine learning: We train the machine learning program by showing it examples of normal images and examples of abnormal images. The program learns to identify images with abnormalities.

A well-publicized example of this work came from DeepMind. In 2018, the company announced it was working with Moorfields Eye Hospital in London to develop techniques to automatically identify diseases and abnormalities from eye scans. Eye scans are a major activity for Moorfields: They typically undertake a thousand of them every working day, and analyzing these scans is a large part of the work of the hospital.

DeepMind’s system used two neural networks, the first to “segment” the scan (identifying the different components of the image), and the second for diagnosis. The first network was trained on about 900 images, which showed how a human expert would segment the image; the second network was trained on about 15,000 examples. Experimental trials indicated that the system performed at or above the level of human experts.

You don’t have to look far to find many other striking examples of how current AI techniques are being used to build systems with similar capabilities—for identifying cancerous tumors on X-rays, diagnosing heart disease from ultrasound scans, and many other examples.


Many have urged caution in the push for AI’s use in healthcare. For one thing, the healthcare profession is, above all, a human profession: Perhaps more than any other role, it requires the ability to interact with and relate to people. A GP needs to be able to “read” her patients, to understand the social context in which she is seeing them, to understand the kinds of treatment plans that are likely to work for this particular patient versus those which aren’t, and so on. All the evidence indicates that we can now build systems that can achieve human expert performance in analyzing medical data—but this is only a small part (albeit a phenomenally important part) of what human healthcare professionals do.


Another argument against AI’s use in healthcare is that some people would prefer to rely on human judgment rather than that of a machine. They would rather deal with a person. I think there are two points to make here.

First, it is hopelessly naive to hold up human judgment as some sort of gold standard. We are, all of us, flawed. Even the most experienced and diligent doctor will sometimes get tired and emotional. And however hard we try, we all fall prey to biases and prejudices, and often we just aren’t very good at rational decision-making. Machines can reliably make diagnoses that are every bit as good as those of human experts—the challenge/opportunity in healthcare is to put that capability to its best use. My belief is that AI is best used not to replace human healthcare professionals but to augment their capabilities—to free them from routine tasks and allow them to focus on the really difficult parts of their job; to allow them to focus on people rather than paperwork; and to provide another opinion—another voice in the room—to give further context for their work.

Second, the idea that we have a choice between dealing with a human physician or an AI healthcare program seems to me to be a first-world problem. For many people in other parts of the world, the choice may instead be between healthcare provided by an AI system or nothing. AI has a lot to offer here. It raises the possibility of getting healthcare expertise out to people in parts of the world who don’t have access to it at present. Of all the opportunities that AI presents us with, it is this one that may have the greatest social impact.

Excerpted from A Brief History of Artificial Intelligence. Copyright © 2021 by Michael Wooldridge. Excerpted by permission of Flatiron Books, a division of Macmillan Publishers. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.