Effective AI implementation requires proceeding with caution

It’s critical for healthcare providers to recognize not just the unique value that AI tools can provide, but also the unique challenges that come with implementing them.
Jeff Rowe

Given that AI is still relatively new to healthcare, there are many more questions than answers when it comes to determining how well some new AI tools actually work.

So notes Liz Richardson, head of The Pew Charitable Trusts’ health care products project, in a recent commentary.

She begins by reviewing the disagreement over the effectiveness of a new, AI-driven sepsis detection tool from Epic: the EHR giant says the tool has proven highly successful in tests, but some early users aren’t so sure. She then observes that the clash of perspectives reflects “the broader challenge with AI software products: How they are retrained within a clinical setting matters just as much as how they are developed. Adapting such tools to new environments can prove difficult when patient populations, staffing, and standards for diagnosing diseases and providing care may be very different from those on which the products are based.”

In other words, while one of AI’s promises is the chance to make a range of processes more efficient and productive, getting to that point can take some work.

“Before using any AI software, hospital officials must tailor it to their clinical environment and then validate and test the program to ensure it works,” Richardson explains. “Once the product is in use, staff must monitor it on an ongoing basis to ensure safety and accuracy. These processes require significant investment and regular attention; it can take years to fine-tune the program.”
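Richardson doesn’t spell out what that ongoing monitoring looks like in practice, but a rough sketch may help make it concrete. The Python snippet below is purely illustrative, and the metric names, baselines, and thresholds are all invented rather than drawn from her commentary: it compares a deployed alert tool’s recent sensitivity and positive predictive value against the baselines a hospital established during its own local validation, and flags any slippage.

```python
# Hypothetical sketch (not from Richardson's commentary): one way a hospital
# team might monitor a deployed alert tool on an ongoing basis, by comparing
# recent sensitivity and positive predictive value (PPV) against the baselines
# established during local validation. All names and thresholds are invented.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Share of true cases the tool actually flagged."""
    total = true_pos + false_neg
    return true_pos / total if total else 0.0

def ppv(true_pos: int, false_pos: int) -> float:
    """Share of alerts that turned out to be true cases."""
    total = true_pos + false_pos
    return true_pos / total if total else 0.0

def check_drift(recent: dict, baseline: dict, tolerance: float = 0.05) -> list[str]:
    """Return a warning for any metric that slips below baseline minus tolerance."""
    warnings = []
    rec_sens = sensitivity(recent["tp"], recent["fn"])
    rec_ppv = ppv(recent["tp"], recent["fp"])
    if rec_sens < baseline["sensitivity"] - tolerance:
        warnings.append(f"sensitivity dropped to {rec_sens:.2f} "
                        f"(validated at {baseline['sensitivity']:.2f})")
    if rec_ppv < baseline["ppv"] - tolerance:
        warnings.append(f"PPV dropped to {rec_ppv:.2f} "
                        f"(validated at {baseline['ppv']:.2f})")
    return warnings

if __name__ == "__main__":
    # Baseline metrics from the hospital's own local validation (illustrative).
    baseline = {"sensitivity": 0.80, "ppv": 0.35}
    # Chart-reviewed outcomes for the most recent month (illustrative counts).
    recent_month = {"tp": 60, "fn": 30, "fp": 150}
    for warning in check_drift(recent_month, baseline):
        print("ALERT:", warning)
```

The point of the sketch is the workflow, not the numbers: local validation sets the baseline, and routine chart review feeds the comparison that tells staff when the tool’s performance has drifted in their environment.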

And then there’s the much-discussed potential for bias.

A study published in 2019, for example, “found that an AI tool used widely to help hospitals allocate resources dramatically underestimated the health care needs of Black patients. Because its algorithm used health care costs as a proxy for assessing patients’ actual health, the software perpetuated bias against Black people, who tend to spend less on health care—not because of differences in overall health, but because systemic inequities result in less access to health care and treatments.”
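The mechanism behind that finding is simple enough to reproduce in miniature. The sketch below is purely illustrative, with invented patients, numbers, and scoring functions, and is not the study’s actual model: two hypothetical patients have identical clinical burdens, but one has a lower spending history because of reduced access to care, so a tool that ranks patients by predicted cost ranks that patient as lower-need.

```python
# Purely illustrative sketch of the proxy problem the 2019 study describes,
# not the study's actual model or data. Two hypothetical patients have the
# same underlying health needs, but one has had less access to care and so
# a lower spending history; ranking by cost mistakes that gap for lower need.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # crude stand-in for true health need
    past_year_costs: float    # what the patient's care actually cost

def cost_proxy_score(p: Patient) -> float:
    # The flawed design: treat historical spending as if it measured need.
    return p.past_year_costs

def need_based_score(p: Patient) -> float:
    # A need-oriented alternative: score on clinical burden instead.
    return float(p.chronic_conditions)

patients = [
    # Identical clinical burden; the second patient faced access barriers,
    # so less was spent on their care.
    Patient("patient_a", chronic_conditions=4, past_year_costs=12_000.0),
    Patient("patient_b", chronic_conditions=4, past_year_costs=6_000.0),
]

by_cost = sorted(patients, key=cost_proxy_score, reverse=True)
print("cost proxy ranks highest-need:", by_cost[0].name)  # patient_a
print("need-based scores:", {p.name: need_based_score(p) for p in patients})
```

Run it and the cost proxy puts patient_a ahead of patient_b even though their clinical burdens are identical, which is the same pattern the study found at population scale.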

In Richardson’s view, a big problem is that “few independent resources are available to help hospitals and health systems navigate the AI terrain. To help them, professional medical societies could develop guidance for validating and monitoring AI tools related to their specialties.”

Another option she points to would be to implement “standards and routine methods for postmarket surveillance to ensure systems’ effectiveness and equity, similar to how drugs are monitored once they are on the market.”

In short, the potential of AI is clear; the tricky part is ensuring that specific tools function appropriately on a provider-by-provider basis.
