(ORDO NEWS) — AI-powered deep learning models can determine a person’s race from their x-rays alone, a new study has found – something that is impossible for a human doctor looking at the same images.
The findings raise some troubling questions about the role of artificial intelligence in medical diagnosis, evaluation, and treatment: could racial bias be inadvertently applied by computer programs when examining such images?
An international team of health researchers from the US, Canada, and Taiwan trained their AI on hundreds of thousands of existing x-rays labeled with the patient’s race, then tested the system on x-rays the program had never seen before and had no additional information about.
The AI predicted the race of the patients in these images with remarkable accuracy, even when the images came from people of the same age and sex. On some groups of images, the system reached 90 percent accuracy.
“We set out to conduct a comprehensive assessment of the ability of AI to recognize a patient’s race from medical images,” the researchers write in the published paper.
“We have shown that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities that persisted under external validation conditions.”
The study replicates the findings of earlier research showing that artificial intelligence scanning x-rays is more likely to miss signs of illness in people of color. To prevent this, scientists first need to understand why it happens.
By its very nature, AI mimics human thinking in order to quickly spot patterns in data. However, this also means that it can unwittingly absorb the same biases. Worse, the complexity of these systems makes it difficult to untangle the biases we have woven into them.
At the moment, scientists do not know why the AI system is so good at identifying race from images that, at least at first glance, contain no such information. Even when given limited information – for example, with clues about bone density removed, or with only a small part of the body visible – the models were still surprisingly good at guessing the race recorded in the file.
Perhaps the system is detecting signs of melanin, the pigment that gives skin its color, that are as yet unknown to science.
“Our finding that AI can accurately predict race, even in damaged, cropped and noisy medical images, often in cases where clinical experts cannot do so, poses a huge risk to all models used in medical imaging,” the researchers write.
The study adds to a growing body of evidence that AI systems can often reflect people’s biases and prejudices, be it racism, sexism, or otherwise. Skewed training data can lead to skewed results, making them far less useful.
This must be balanced against the powerful potential of artificial intelligence to process far more data, far faster, than a human can – from disease detection methods to climate change models.
The study leaves many questions unanswered, but for now, it’s important to be aware of the potential for racial bias in AI systems – especially if we’re going to hand them more responsibility in the future.
“We need to take a break,” researcher and physician Leo Anthony Celi of the Massachusetts Institute of Technology told the Boston Globe.
“We can’t rush to implement algorithms in hospitals and clinics until we make sure they don’t make racist or sexist decisions.”