Experts: Risk of AI bias should be “top of mind” for developers and execs

Recent research suggests AI and predictive algorithms can be less accurate for vulnerable populations and could exacerbate existing health disparities.
Jeff Rowe

One of the great promises of AI is that it can crunch an unprecedented amount of data in a very short period of time. One of the accompanying risks, however, is that the data may not accurately reflect the reality of medical conditions across the entire population.

In a recent article, tech writer Jessica Kim Cohen took a deep dive into concerns that AI could exacerbate, rather than improve, current health disparities, beginning with the growing efforts by healthcare stakeholders to alert their colleagues to the risk.

The good news is that many executives are already up to speed on the risk, with fully half of healthcare execs surveyed in a recent poll identifying potential bias as one of the biggest risks associated with AI adoption.

Cohen noted, however, that “it needs to be top-of-mind for everyone, experts say, since widely deployed algorithms inform care decisions for thousands—if not hundreds of thousands—of patients.”

“AI has a huge potential—if it’s done right,” she heard from Satish Gattadahalli, director of digital health and informatics in advisory firm Grant Thornton’s public sector business. As with the rest of medicine, that means taking the steps necessary to ensure a commitment to “do no harm.” That “needs to be baked into the strategy from the get-go,” he said.

A big part of the problem, Cohen explained, is that AI requires a massive amount of data. And not just any data, but data that’s reflective of the patients a hospital will be treating.

“To create an AI tool,” she said, “developers feed a system reams of training data, from which it learns to identify features and draw out patterns. But if that data lacks information on some populations, such as racial minorities or patients of a low socioeconomic status, insights the AI pinpoints might not be applicable to those patient groups.”
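For illustration only (this sketch is ours, not drawn from Cohen’s reporting), a pre-training representation check might look something like the following in Python. The DataFrame, the `race` column, and the reference population shares are all hypothetical.

```python
# Minimal sketch (hypothetical data and column names): before training,
# compare each subgroup's share of the training data to a reference
# patient population, flagging groups that are badly under-represented.
import pandas as pd

def representation_report(train_df: pd.DataFrame,
                          reference_shares: dict,
                          group_col: str = "race",
                          min_ratio: float = 0.5) -> pd.DataFrame:
    """Flag subgroups whose share of the training data falls well below
    their share of the patient population the model will serve."""
    train_shares = train_df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        observed = train_shares.get(group, 0.0)
        rows.append({
            "group": group,
            "expected_share": expected,
            "observed_share": round(float(observed), 3),
            "under_represented": observed < min_ratio * expected,
        })
    return pd.DataFrame(rows)

# Example usage with made-up numbers:
# report = representation_report(
#     train_df,
#     reference_shares={"Black": 0.18, "White": 0.60, "Hispanic": 0.15, "Other": 0.07},
# )
# print(report[report["under_represented"]])
```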

And according to a report last year from the federal Government Accountability Office, “that lack of diversity is one of the core problems driving bias in AI . . . since it could result in tools that are less safe and effective for some patient populations.”

In the course of her article, Cohen looked at how numerous institutions are responding to the risk.

For example, she noted that “to ensure ethical considerations like equity are thought about from the start, Mount Sinai Health System in New York City is building an AI ethics framework led by bioethics experts. . . . The framework will use the WHO’s ethics and governance report as a foundation.”

Similarly, Independence Blue Cross, a health insurer in Philadelphia, develops most of its AI tools in-house and has been working with the Center for Applied AI at the University of Chicago Booth School of Business. The center provides free feedback and support to healthcare providers, payers and technology companies that are interested in auditing specific algorithms or setting up processes to identify and mitigate algorithmic bias.

“Working with the Center for Applied AI has helped data scientists at Independence Blue Cross systematize how they think about bias and where to add in checks and balances, such as tracking what types of patients an algorithm tends to flag, and whether that matches up to what’s expected, as well as what the implications of a false positive or negative could be.”
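As a rough illustration of the kind of “checks and balances” described above, a reviewer might compare how often an algorithm flags patients in each subgroup and how its error rates differ across groups. This sketch is hypothetical, not Independence Blue Cross’s actual process; the column and function names are assumed.

```python
# Hypothetical sketch: per-subgroup flag rates plus false positive and
# false negative rates for a binary "flag this patient" model.
import pandas as pd

def subgroup_audit(df: pd.DataFrame,
                   group_col: str = "race",
                   label_col: str = "actually_high_risk",
                   pred_col: str = "flagged") -> pd.DataFrame:
    """For each subgroup, report the flag rate and the false positive /
    false negative rates, so reviewers can compare them to expectations."""
    results = []
    for group, sub in df.groupby(group_col):
        flagged = sub[pred_col].astype(bool)
        actual = sub[label_col].astype(bool)
        fp_rate = (flagged & ~actual).sum() / max((~actual).sum(), 1)
        fn_rate = (~flagged & actual).sum() / max(actual.sum(), 1)
        results.append({
            "group": group,
            "n_patients": len(sub),
            "flag_rate": round(float(flagged.mean()), 3),
            "false_positive_rate": round(float(fp_rate), 3),
            "false_negative_rate": round(float(fn_rate), 3),
        })
    return pd.DataFrame(results)
```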

Finally, Cohen noted that AI isn’t just a one-time implementation. “Healthcare organizations need to constantly monitor AI,” she cautioned, “. . . documenting the decisions it makes and refining when it isn’t working as expected.”
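A minimal sketch of what that ongoing documentation could look like in practice, again hypothetical rather than drawn from any organization named in the article: each model decision is appended to an audit log so drift or unexpected behavior can be reviewed later.

```python
# Illustrative only: log each model decision so it can be audited later.
# All names here (patient_id, model_version, flagged) are hypothetical.
import json
import datetime

def log_decision(patient_id: str, model_version: str,
                 inputs: dict, flagged: bool,
                 log_path: str = "ai_decisions.jsonl") -> None:
    """Append one model decision to a JSON-lines audit log for review."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "patient_id": patient_id,
        "model_version": model_version,
        "inputs": inputs,
        "flagged": flagged,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```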

Photo by Peter Howell/Getty Images