Technology biased against black patients runs rampant in hospitals

A new study shows that a widely used algorithm for predicting which patients get additional care is disproportionately counting out black patients, and could have left tens of thousands without adequate medical care.

[Photo: Image Source/Getty Images]

By Ruth Reader

Artificial intelligence experts have been warning for years that bias in automation could cause unintentional harm in the future. But it's already happening with technology in use right now. In a recent paper published in Science, researchers lay out how a common algorithm used in hospitals to assess whether a patient needs extra care has been making significantly biased recommendations.

According to the researchers, the algorithm was recommending white patients for a continued-care program at disproportionately higher rates than black patients, even when the black patients were sicker. Only 17.7% of black patients were getting additional care, when 46.5% of them should have been, the report found. The algorithm is used across health systems and is estimated to affect about 100 million people around the country, according to both STAT and the Washington Post.

The bias appears to stem from how the system, which the Washington Post identifies as being made by the health services company Optum, quantifies what it means to be sick. Rather than using illness or biological data, the algorithm uses cost and insurance-claim information to gauge how healthy a person is. The more dollars spent, the logic goes, the sicker a person is. The data revealed that less was being spent on black patients, in part because they were receiving less care. Notably, the study points out, algorithms based on cost are widely used in health care.
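To make that mechanism concrete, here is a minimal sketch, assuming invented patients, fields, and thresholds, of how flagging by past spending can diverge from flagging by actual health. This is not the vendor's or the researchers' code; every name and cutoff below is illustrative only.

```python
# Illustrative sketch only: invented patients, fields, and thresholds.
# It shows how a cost-based proxy can miss a sick patient whose care
# simply cost less; it is not the actual algorithm from the study.

patients = [
    # (patient_id, annual_spending_usd, chronic_conditions)
    ("A", 12_000, 2),
    ("B",  3_500, 4),   # sicker, but less was spent on their care
    ("C",  9_000, 1),
]

COST_CUTOFF = 8_000       # hypothetical "high-risk" spending threshold
CONDITION_CUTOFF = 3      # hypothetical threshold on chronic conditions

# Cost-as-proxy flagging: the more dollars spent, the "sicker" the patient.
flagged_by_cost = sorted(pid for pid, cost, _ in patients if cost >= COST_CUTOFF)

# Health-based flagging: look at the patient's conditions directly.
flagged_by_health = sorted(pid for pid, _, conds in patients if conds >= CONDITION_CUTOFF)

print(flagged_by_cost)    # ['A', 'C'] -- patient B is overlooked
print(flagged_by_health)  # ['B']      -- the sickest patient is caught
```

In this toy setup, the cost-based rule overlooks the sickest patient precisely because less was spent on their care, which is the pattern the study describes.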

As a result, the algorithm was not recommending black patients for follow-up care even though they may have been just as sick as their white counterparts. Because the amount spent on a patient doesn't necessarily reflect their level of health, the system had a distorted picture of who was sick. In this case, a large portion of black patients who needed more from their doctors were being overlooked.

As part of the study, the researchers designed a new algorithm that incorporated physiological data, reducing this bias by 84%. When they tested the new algorithm, they found that the number of overlooked chronic conditions among black patients dropped from about 50,000 to 8,000.
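Those two figures line up: going from roughly 50,000 overlooked conditions to roughly 8,000 is a reduction of about 84%, as a quick back-of-the-envelope check shows (the counts here are the article's rounded values).

```python
# Back-of-the-envelope check that the reported figures are consistent,
# using the article's rounded counts of overlooked chronic conditions.
before, after = 50_000, 8_000
reduction = (before - after) / before
print(f"{reduction:.0%}")  # 84%
```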

In addition to pointing out bias in technology, the study also highlights the problem with a health care system that is structured around cost rather than data about people’s physiological conditions.

The researchers are now working to have their update rolled out. Going forward, Sendhil Mullainathan, a senior researcher on the report and a scientist at the University of Chicago's Booth School of Business, thinks that health tech companies should be thinking more critically about how they design algorithms.

Luckily, the algorithm does not exist in a vacuum. Doctors are ultimately in charge of deciding what kind of care patients need, and the algorithm just makes suggestions. Still, Mullainathan suggests that we should be looking into potential biases before technology is unleashed on the masses.

“When there’s new technology like this, it takes time for them to go from prototype to a larger scale, and some of these things don’t show up until later,” he said in an interview with STAT. “If we have a prototype, there needs to be more questions [like about racial bias] that need to be asked.”

Read the full interview with Mullainathan at STAT.



ABOUT THE AUTHOR

Ruth Reader is a writer for Fast Company. She covers the intersection of health and technology.
