6 votes

Dissecting racial bias in an algorithm used to manage the health of populations

1 comment

  1. patience_limited

    I'm posting the link for the original paper rather than The Guardian's article, because it's a relatively accessible study with explicit conclusions. From the summary and abstract (additional emphasis mine):

    Racial bias in health algorithms
    The U.S. health care system uses commercial algorithms to guide health decisions. Obermeyer et al. find evidence of racial bias in one widely used algorithm, such that Black patients assigned the same level of risk by the algorithm are sicker than White patients (see the Perspective by Benjamin). The authors estimated that this racial bias reduces the number of Black patients identified for extra care by more than half. Bias occurs because the algorithm uses health costs as a proxy for health needs. Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients. Reformulating the algorithm so that it no longer uses costs as a proxy for needs eliminates the racial bias in predicting who needs extra care.

    Abstract
    Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.

    Key statement from the study:

    Our analysis has implications beyond what we learn about this particular algorithm. First, the specific problem solved by this algorithm has analogies in many other sectors: The predicted risk of some future outcome (in our case, health care needs) is widely used to target policy interventions under the assumption that the treatment effect is monotonic in that risk, and the methods used to build the algorithm are standard. Mechanisms of bias uncovered in this study likely operate elsewhere. Second, even beyond our particular finding, we hope that this exercise illustrates the importance, and the large opportunity, of studying algorithmic bias in health care, not just as a model system but also in its own right. By any standard—e.g., number of lives affected, life-and-death consequences of the decision—health is one of the most important and widespread social sectors in which algorithms are already used at scale today, unbeknownst to many.

    Perhaps it's only in the U.S. that cost would be used as a proxy measure of actual need for health interventions, but this strikes me as a particularly dire failure to validate a model for use in human medicine. In this case, training the model to predict healthcare expenditures just reified the historical under-treatment of Black patients. What's startling here is how little change was needed to make the model more accurate: predicting actual clinical measures of health conditions rather than costs.
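    The proxy-label mechanism can be sketched in a few lines. This is only a toy simulation of the idea (the group names, spending gap, and all numbers are made up for illustration; it is not the paper's model or data): two groups with identical true need, one of which receives less spending at the same need level, and a "risk score" that ranks patients by cost.

    ```python
    import random

    random.seed(0)

    def simulate_patient(group):
        # True health need is identically distributed across groups.
        need = random.gauss(50, 10)
        # Illustrative assumption: less is spent on group "B" patients
        # at the same level of need (unequal access to care).
        spending_factor = 0.7 if group == "B" else 1.0
        cost = need * spending_factor + random.gauss(0, 2)
        return need, cost

    patients = [(g, *simulate_patient(g)) for g in ["A", "B"] * 5000]

    # A cost-trained risk score amounts to ranking patients by cost.
    # Flag the top quartile by cost for extra care.
    costs = sorted(c for _, _, c in patients)
    threshold = costs[int(0.75 * len(costs))]
    flagged = [(g, need) for g, need, c in patients if c >= threshold]

    share_b = sum(1 for g, _ in flagged if g == "B") / len(flagged)
    print(f"Share of group B among flagged patients: {share_b:.2f}")
    # Both groups are equally sick, yet group B is heavily under-flagged,
    # because the cost label absorbs the spending disparity.
    ```

    Replacing `cost` with `need` as the ranking variable in the last block flags both groups at roughly equal rates, which is the shape of the reformulation the study describes.
    
    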

    I haven't yet encountered physicians who explicitly say they'll follow system recommendations in preference to their own judgement and experience. For the most part, they treat expert systems as an adjunct to care, with a bias toward more treatment rather than less. I haven't researched Optum's products, but if they're being sold as providing an "evidence-based standard of care", providers might defer to them simply to avoid constantly fighting insurance denials.

    1 vote