There's no quick fix to find racial bias in health care algorithms

theverge.com | 12/04/2019 15:36:05 | Nicole Wetsman
Capitol at Night [by Robero Ceballos via Flickr]

Legislators in Washington, DC are taking a closer look at racial bias in health care algorithms after an October analysis found such bias in a commonly used health tool. Sens. Cory Booker (D-NJ) and Ron Wyden (D-OR) released letters on Tuesday calling for federal agencies and major health companies to describe how they're monitoring the medical algorithms they use every day for signs of racial bias.

"Unfortunately, both the people who design these complex systems, and the massive sets of data that are used, have many historical and human biases built in," they wrote to the Centers for Medicare and Medicaid Services (CMS). The senators' letters focused mostly on gathering information. They wanted to know whether the CMS, which administers Medicare and Medicaid, is collecting information from health care organizations on their use of algorithms. They asked the Federal Trade Commission whether it was investigating harm done to consumers by discriminatory algorithms, and asked health companies, including Aetna and Blue Cross Blue Shield, whether and how they audit for bias in the algorithms they use.

Equity in health care depends on identifying and rooting out bias in algorithms. But because these programs are still relatively new, there aren't yet established best practices for doing so. "These practices bubbling up are more like best efforts," says Nicol Turner-Lee, a fellow in the Center for Technology Innovation at the Brookings Institution.

There are tools designed to detect bias in algorithms, as the senators note in their missives, built by private companies like IBM and nonprofits like The Alan Turing Institute. But they're not perfect solutions, and they don't weed out all of the potential sources of bias. Running algorithms through their checklists should be done routinely, says Brian Powers, an internal medicine physician at Brigham and Women's Hospital and an author of the October racial bias paper. "But that won't get you all the way there."
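One of those tools is IBM's open-source AI Fairness 360 toolkit. The sketch below shows the kind of routine check Powers describes, run against a small, entirely hypothetical table of patients; the column names, group encodings, and data are placeholders, not anything from a real health system.

# A minimal sketch of a routine bias check using IBM's open-source
# AI Fairness 360 toolkit. The dataset, column names, and group
# encodings here are hypothetical placeholders.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical audit data: one row per patient, with the algorithm's
# high-risk flag (1 = flagged for extra care) and a race code
# (1 = white, 0 = black in this toy encoding).
df = pd.DataFrame({
    "high_risk": [1, 0, 1, 1, 0, 0, 1, 0],
    "race":      [1, 1, 1, 0, 0, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["high_risk"],
    protected_attribute_names=["race"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"race": 1}],
    unprivileged_groups=[{"race": 0}],
)

# Ratio of flag rates between groups; far from 1.0 suggests disparity.
print("Disparate impact:", metric.disparate_impact())
# Difference in flag rates; far from 0.0 suggests disparity.
print("Statistical parity difference:", metric.statistical_parity_difference())

Checks like this compare an algorithm's outputs across groups, which is exactly why, as the study below shows, they can miss bias that lives outside the model itself.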

The October study, for example, identified a bias that wouldn't be found by an automated detection system, Powers says. The algorithm it analyzed is designed to flag patients with complex health care needs, and it uses the cost of care as a proxy for how sick they are. The algorithm's cost predictions were accurate, but using cost to flag patients was biased: because white patients are given costlier treatments than equally sick black patients, black patients generate lower costs and look healthier to the algorithm than they are. Just looking at the algorithm alone wouldn't have identified the issue. "What we uncovered wasn't in the algorithm and its ability to do what it was trained to do, but in the implementation, and the choice to use cost as a proxy," Powers says.
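To see the mechanism concretely, consider a toy simulation with made-up numbers, not the study's data or model: both groups are equally sick, one group's care has historically cost less, and a cost predictor that is perfectly accurate still flags that group far less often.

# Toy simulation of the proxy problem (hypothetical numbers): two
# groups are equally sick, but group B historically incurs lower
# costs for the same sickness. A perfectly accurate cost predictor
# then flags group A patients more often.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Underlying sickness is identically distributed in both groups.
sickness_a = rng.gamma(shape=2.0, scale=1.0, size=n)
sickness_b = rng.gamma(shape=2.0, scale=1.0, size=n)

# Historical spending: group B receives ~30% less care per unit of sickness.
cost_a = 1000 * sickness_a
cost_b = 700 * sickness_b

# "Perfect" algorithm: predicts cost exactly, then flags the top 10%
# of all patients by predicted cost for extra-care programs.
all_costs = np.concatenate([cost_a, cost_b])
threshold = np.quantile(all_costs, 0.90)

flag_rate_a = np.mean(cost_a >= threshold)
flag_rate_b = np.mean(cost_b >= threshold)
print(f"Flag rate, group A: {flag_rate_a:.1%}")  # well above 10%
print(f"Flag rate, group B: {flag_rate_b:.1%}")  # well below 10%

# Flagged group B patients must be sicker than flagged group A patients
# to clear the same cost threshold.
print("Mean sickness of flagged A:", sickness_a[cost_a >= threshold].mean())
print("Mean sickness of flagged B:", sickness_b[cost_b >= threshold].mean())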

Bias identification tools can help to make algorithms and artificial intelligence tools more accurate, but they don't necessarily tackle the root causes of bias, which are larger systemic issues and inequities. An open-source tool might look for problems with an algorithm, but it would not be able to identify issues with the way it was constructed, Turner-Lee says. "The biggest challenge we have with bias in AI and machine learning systems has to do with the fact that we continue to divorce them from larger structural issues that are happening in areas like health care," she says.

Fixing the problem isn't straightforward. While quantitative tools are necessary, they're not sufficient, Powers says. "I think it's going to have to be a little more nuanced." It might be possible to generate a list of key questions, but there's no single solution that applies to every algorithm. Algorithms that use proxy measures, for example, like health costs as a stand-in for sickness, need to be examined more carefully, he says, and any bias in the proxy would have to be evaluated separately.
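One way to evaluate the proxy separately, sketched below with hypothetical data and column names, is roughly the check the October study performed: compare a direct measure of health, such as a count of chronic conditions, across racial groups among patients given the same risk score. If one group is consistently sicker at the same score, the proxy itself is biased even when the model is accurate.

# Sketch of a separate proxy audit (hypothetical data and column
# names): bin patients by the algorithm's risk score, then compare a
# direct health measure, such as the number of chronic conditions,
# across races within each bin.
import pandas as pd

def audit_proxy(df: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    """Mean chronic-condition count by race within risk-score deciles."""
    df = df.copy()
    df["score_bin"] = pd.qcut(df["risk_score"], q=n_bins, labels=False)
    return (
        df.groupby(["score_bin", "race"])["chronic_conditions"]
          .mean()
          .unstack("race")
    )

# Usage with a hypothetical audit table whose columns are
# risk_score, race, and chronic_conditions:
# report = audit_proxy(patients)
# print(report)  # a persistent gap between columns signals proxy bias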

Removing bias will require a closer partnership between users, like health care systems and doctors, and the groups developing algorithms. There should also be more attention paid to potential bias when a program is built, not just after it's available for use in hospitals. "Companies are more reactive to problems, versus being more proactive," Turner-Lee says.

The racial bias study from October drew attention to the issue of bias in algorithms, as will the letters from Sens. Booker and Wyden. Turner-Lee says that attention is encouraging, but there still needs to be more recognition of how structural, social issues affect technology in key areas like health care. "We're not as close as we could be," she says, "because we still see the technical space as being only about technology."
