Preventing Racial Bias in Health Care Algorithms

By Devin Partida, Editor-in-Chief,
Twitter: @rehackmagazine

Headlines and press releases regularly announce that artificial intelligence (AI) could prove game-changing in the health care industry. It has already facilitated progress that would otherwise have stayed out of reach.

However, racial bias in AI algorithms is a topic that deserves more attention. What if the models draw conclusions from incorrect, race-based assumptions? Anyone who uses or develops these algorithms must work to prevent that possibility. Here are some practical ways to do that.

Verify Appropriate Representation in the Datasets
People should begin by checking all applicable datasets for balanced representation. If the training data reflects only a tiny percentage of people of color, a model trained on it is more likely to reach conclusions that don’t represent real-world populations.

Getting rid of bias in algorithms requires a multi-pronged approach. However, scrutinizing the data to ensure accurate representation is an excellent first step toward tackling bias and making algorithms better overall. After identifying information gaps, the responsible parties should act decisively to correct them by obtaining the necessary data.
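As a starting point, checking representation can be as simple as comparing each group’s share of the training data against the population the model will serve. The sketch below is a minimal illustration; the group labels, counts, reference shares, and flagging threshold are all hypothetical, and a real audit would use the demographic fields and patient-population figures relevant to the deployment setting.

```python
from collections import Counter

# Hypothetical demographic labels for a 1,000-record training set.
labels = ["White"] * 850 + ["Black"] * 60 + ["Hispanic"] * 50 + ["Asian"] * 40

# Illustrative reference shares for the population the model will serve
# (placeholder numbers, not real census figures).
reference = {"White": 0.60, "Black": 0.13, "Hispanic": 0.19, "Asian": 0.06}

counts = Counter(labels)
total = len(labels)

# Flag any group represented at less than half its expected share.
underrepresented = {
    group: counts.get(group, 0) / total
    for group, expected in reference.items()
    if counts.get(group, 0) / total < 0.5 * expected
}

for group, share in underrepresented.items():
    print(f"{group}: {share:.1%} of training data vs. {reference[group]:.0%} expected")
```

A check like this only surfaces gaps; closing them still requires obtaining the missing data.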

Become Proactive About Identifying Biases
People who develop or utilize health care algorithms should also treat bias as a certainty. Humans naturally hold unconscious preferences and prejudices, so relying on their expertise alone is insufficient.

One often-utilized option is to invest in automated tools that highlight biases. Those help to a point, but they don’t target the root of the problem. Experts say that adopting a more collaborative approach between the parties who develop and benefit from the algorithm is a start. Additionally, the goal must be to find the biases during development rather than after a product arrives on the market.
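One simple check a team might build during development is comparing an error rate across patient groups on held-out data. The snippet below sketches that idea with made-up predictions; the groups, labels, and records are hypothetical, and a production tool would examine many more metrics than a single error rate.

```python
# Hypothetical held-out records: (group, true_outcome, model_prediction),
# where 1 means the patient needed follow-up care.
records = [
    ("Group A", 1, 1), ("Group A", 1, 0), ("Group A", 1, 1), ("Group A", 0, 0),
    ("Group B", 1, 0), ("Group B", 1, 0), ("Group B", 1, 1), ("Group B", 0, 0),
]

def false_negative_rate(rows):
    """Share of truly positive cases the model missed."""
    positives = [pred for _, true, pred in rows if true == 1]
    return positives.count(0) / len(positives)

rates = {}
for group in ("Group A", "Group B"):
    rates[group] = false_negative_rate([r for r in records if r[0] == group])
    print(f"{group}: false-negative rate {rates[group]:.0%}")

# A large gap between groups is a signal to investigate before release.
gap = abs(rates["Group A"] - rates["Group B"])
print(f"Gap: {gap:.0%}")
```

In this invented example, the model misses twice as many true positives in one group as in the other, exactly the kind of disparity worth catching before a product reaches the market.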

“One of the biggest things that can be done to avoid bias is to be conscious of potential biases before building the algorithm,” says Katie Bakewell, a mathematician at the AI and machine learning company NLP Logix. “If I make the same healthcare model using data from a facility in Vermont and a facility in Miami, I would expect the results to differ due to the differences in population demographics. One of the major bias issues that healthcare models face is limited exposure to patients of different ethnic backgrounds.”

“Additionally,” Bakewell says, “there needs to be a partnership between clinicians and those building the models. Machine learning algorithms are great at identifying patterns in data, but they can only utilize the data that developers make available to them. By working directly with clinicians in building the models, data around ‘tribal knowledge’ can be incorporated into the algorithm, and relationships between comorbidities, medication interactions, and other variables such as race can be highlighted and accounted for.”

Learn How the Algorithms Function
Now that health care algorithms have become more prevalent in society, concerned individuals have more questions about what those models seek to answer and how they reach those outcomes. Algorithm developers should strive for transparency through publicly accessible content that explains the factors examined. Similarly, those experts should clarify what kind of predictions the algorithms can make.

A health care professional tasked with making investments in algorithm-based technology should take care not to become dazzled by slick marketing language when researching products. They should prepare to ask company representatives the tough questions that can help them gauge whether an offering effectively limits biases. Such thorough research lets medical organizations avoid the bad press that ordinarily accompanies headlines about flawed technology.

Look Deeper to Find Accurate Correlations
People got a reality check this summer when academic researchers examined 13 algorithms used for patient care decisions and found racial biases could contribute to potentially harmful outcomes in all of them. The models analyzed everything from kidney function to a person’s likelihood of a successful vaginal delivery after a previous cesarean section. The study’s senior author, Jones, said the findings could mean that Black and Latinx patients are less likely to receive appropriate care.

He also mentioned that algorithm developers often base their work on overly simplistic correlations, such as those connecting race with a poorer medical outcome. However, the true driver of the link could be a secondary factor, such as restricted access to health care. He warned that issues often classified as purely racial are more about class and poverty. These realities mean algorithm makers must not jump to conclusions too quickly.

Make Reducing Bias an Organizational Aim
There is no single guaranteed way to cut bias out of health care algorithms. However, people are much more likely to progress with that goal if they commit to minimizing bias as part of an organizational mission.

“Some [bias] areas are easier to fix than others,” points out Fred Goldstein, President of Accountable Health, LLC. “For example, ensuring you have a good distribution of people in your sample. Others are more difficult or more hidden, as the article ‘Dissecting racial bias in an algorithm used to manage the health of populations’ by Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan uncovered. In that case, the algorithm used cost as the estimator of risk. Selecting that as the driver loaded inherent bias into the system because African Americans, due to known healthcare disparities, have lower costs at the same level of illness.”
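The cost-as-a-proxy problem Goldstein describes can be shown with two invented patients: equal illness burden, unequal historical spending. A model that ranks by cost puts them in a different order than one that ranks by illness, which is exactly how the bias enters. All numbers below are hypothetical.

```python
# Two hypothetical patients with the same illness burden but different
# historical costs, mirroring the disparity Goldstein describes.
patients = [
    {"id": "P1", "illness_score": 7, "annual_cost": 12_000},
    {"id": "P2", "illness_score": 7, "annual_cost": 8_000},
]

# Ranking by cost treats P1 as higher risk, even though a ranking by
# illness burden would treat the two patients identically.
by_cost = sorted(patients, key=lambda p: p["annual_cost"], reverse=True)

print("Cost proxy ranks first:", by_cost[0]["id"])
print("Illness scores equal:", patients[0]["illness_score"] == patients[1]["illness_score"])
```

The fix in cases like this is choosing a target variable that measures health need directly rather than a proxy that already carries disparities.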

New York’s Mount Sinai Hospital has a Racism and Bias Initiative that started in 2015. That project helps medical students recognize the historical underpinnings of racism and bias in the health care industry. On the development side, a software company might create a checklist that quality assurance professionals go through to screen for and eliminate bias.

View Progress as a Path
In closing, eliminating racial bias in medical algorithms will not be a straightforward task. People should also avoid focusing too heavily on any single solution as they try to minimize it. The ideal approach will likely combine various methods, tools and principles to make this bias less problematic.

Thus, anyone involved with medical algorithms should identify which components they can influence in the effort to remove bias. Next, they should understand that progress will occur gradually and that continuous improvement should become the overarching aim.