
Why are minority patients being left behind by AI?


Clinical practice and medical AI reportedly produce “cycles of exclusion” for minority populations, with origins in systemic racism and implicit bias. What’s going on?

Concern is growing over the future of healthcare AI after continued signs that it could seriously disadvantage minority groups and act in ways that reflect “implicit” racism and bias.

While there is still hope that AI will transform the way patients receive healthcare, recent research from the University of Michigan suggests that AI can produce “cycles of exclusion”, effectively shutting minority populations out.

The research demonstrates that in medicine, substantial disparities exist in both experiences and health outcomes for minoritised populations, with “origins in systemic racism, implicit bias, historical practice, and social determinants of health”.

It reads: “We draw on a theory of ‘exclusion cycles’ – developed in the context of nonmedical social interactions – to link known dominant-group and minoritised-group behaviors and demonstrate their self-reinforcing interactions.

Racial disparities “intractable” in medicine

“Interlinked cycles help reveal why exclusion and racial disparities are so intractable in medicine, despite efforts to reduce them on the part of physicians and health systems through strategies focused on individual parts of the cycle such as diverse workforce recruitment or implicit bias training. 

“This framework highlights particular dangers that may arise through expanding use of big data and artificial intelligence (AI)–based systems in medicine, making bias especially intractable unless tackled directly and early.”

AI “disadvantaging” women 

In July, scientists warned that AI had biases which needed “rooting out”, among them biases that disadvantage women as well as ethnic minorities. The Guardian reported that, without the right preparation, AI could “dramatically deepen” existing health inequalities in society.

Ulrik Stig Hansen and Eric Landau, co-founders of Encord, told Health Tech World that there is an “urgent need” to answer questions around AI bias.

They said: “When it comes to algorithmic bias, the problem often stems from biased or unrepresentative training data. Models learn to make predictions after training and retraining on a variety of data.

AI will make mistakes

“If this data isn’t representative of the patient population that the medical AI is going to serve, then the AI will make mistakes when it’s put into practice and encounters never-before-seen cases. 

“For instance, if medical AI is going to be deployed in a hospital that predominantly serves patients from minority communities, then the model needs to be trained on a vast amount of data collected from patients of similar demographics. 

“Then, it must be validated with similar but never-before-seen data to ensure that it will perform as expected when put to work in the real world. For a model to deliver real business value, and value for patients and clinicians, it needs to be localised for specific patient populations.”
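The validation step Hansen and Landau describe can be illustrated with a short sketch. The Python example below is a minimal illustration, not Encord’s method: it trains a classifier on synthetic data and reports accuracy separately for each demographic subgroup of a held-out set, so that a performance gap between groups becomes visible. The dataset, the feature names and the `group` column are all hypothetical.

```python
# Minimal sketch: checking a model's performance per demographic subgroup
# on held-out data. All data, columns and group labels are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "feature_a": rng.normal(size=n),
    "feature_b": rng.normal(size=n),
    # Hypothetical demographic label, used only to stratify evaluation.
    "group": rng.choice(["group_1", "group_2"], size=n, p=[0.8, 0.2]),
})
df["outcome"] = (df["feature_a"] + rng.normal(size=n) > 0).astype(int)

features = ["feature_a", "feature_b"]
# Stratify the split by group so the held-out set reflects the
# population mix the model will actually serve.
train, test = train_test_split(
    df, test_size=0.25, stratify=df["group"], random_state=0
)

model = LogisticRegression().fit(train[features], train["outcome"])

# Report accuracy separately for each subgroup of never-before-seen data:
# a large gap between groups is a signal of unrepresentative training data.
for group, subset in test.groupby("group"):
    acc = accuracy_score(subset["outcome"], model.predict(subset[features]))
    print(f"{group}: accuracy={acc:.3f} (n={len(subset)})")
```

In practice the same per-group check would use clinically meaningful metrics such as sensitivity or calibration rather than raw accuracy, but the principle matches the quote above: validate on never-before-seen data, broken down by the populations the model will serve.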
