How “smarter” AI could help root out data bias and inequity

Noting that AI may unintentionally intensify inequities that already exist in modern healthcare, a recent panel explored how to recognize those biases in order to defeat them.
Jeff Rowe

Healthcare researchers and providers can crunch all the data they want in their search for health solutions, but if the social biases embedded in that data remain unaddressed, the result may simply be a perpetuation of existing inequities.

That widely recognized problem was the subject of a panel discussion at the recent meeting of the Radiological Society of North America (RSNA) entitled “Artificial Intelligence and Implications for Health Equity: Will AI Improve Equity or Increase Disparities?”

As Judy Wawira Gichoya, the discussion’s moderator from Emory University School of Medicine, summed up the challenge, “The data we use is collected in a social system that already has cultural and institutional biases. (…) If we just use this data without understanding the inequities, then algorithms will end up habituating, if not magnifying, our existing disparities.”

Echoing Wawira Gichoya, Ziad Obermeyer, associate professor of health policy and management at the Berkeley School of Public Health, described what has been dubbed “the pain gap phenomenon,” in which the pain of white patients is treated or investigated until a cause is found, while the pain of patients of other races may be ignored or overlooked.

"Society's most disadvantaged, non-white, low income, lower educated patients (…) are reporting severe pain much more often. An obvious explanation is that maybe they have a higher prevalence of painful conditions, but that doesn't seem to be the whole story," he said.

Obermeyer explained that listening to the patient, and not just the radiologist, could help researchers develop algorithms that predict the patient’s experience of pain. As an example, he pointed to an NIH-sponsored dataset that allowed him to experiment with a new type of algorithm, which identified more than twice as many Black patients with severe knee pain who would be eligible for surgery as before.
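To make that idea concrete, the sketch below trains a model on patient-reported pain scores rather than on radiologist severity grades. It is a minimal illustration only: the synthetic data, field names such as the Kellgren–Lawrence grade and a KOOS-style pain score, and the choice of model are assumptions for demonstration, not details of Obermeyer’s actual study or the NIH dataset.

```python
# Illustrative sketch only: training on patient-reported pain instead of a
# radiologist's severity grade. All data below is synthetic and the fields
# are hypothetical stand-ins, not the schema of the real NIH dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n = 1_000
X = rng.normal(size=(n, 16))              # features extracted from knee radiographs (stand-in)
kl_grade = rng.integers(0, 5, size=n)     # conventional label: radiologist grade (unused here)
pain_score = rng.uniform(0, 100, size=n)  # alternative label: patient-reported pain (e.g., KOOS-like)

X_tr, X_te, pain_tr, pain_te = train_test_split(X, pain_score, random_state=0)

# Train against the patient's own report of pain, the label the panel highlights.
model = GradientBoostingRegressor().fit(X_tr, pain_tr)
predicted_pain = model.predict(X_te)

# Downstream decisions (e.g., surgical eligibility thresholds) could then be
# driven by predicted pain rather than by the radiographic grade alone.
print(predicted_pain[:5])
```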

Rather than simply looking at an AI model and concluding that it is biased, Luke Oakden-Rayner, director of medical imaging research at the Royal Adelaide Hospital in Australia, suggested conducting exploratory error analysis: examining every error case to find common threads.

"Look at the cases it got right and those it got wrong,” he argued. “All the cases AI got right will have something in common and so will the ones it got wrong, then you can find out what the system is biased toward.”

Other panelists included Constance Lehman, professor of radiology at Harvard Medical School, director of breast imaging, and co-director of the Avon Comprehensive Breast Evaluation Center at Massachusetts General Hospital; and Regina Barzilay, professor in the department of electrical engineering and computer science and member of the Computer Science and AI Lab at the Massachusetts Institute of Technology.