For AI to improve healthcare, providers must be able to trust it

Yes, AI can change healthcare, says one stakeholder, but only if it’s developed and implemented so that the underlying algorithms can be trusted.
Jeff Rowe

AI isn’t really a matter of technology; “it’s a matter of transparency and trust.”

That’s according to Manoj Saxena, Executive Chairman of CognitiveScale, an AI solutions provider, in a recent commentary at HITConsultant.

Not surprisingly, Saxena points straightaway to the myriad ways AI is transforming healthcare, but he’s quick to add that “(w)ithout trust, AI cannot deliver on its potential value. To trust an AI system, we must have confidence in its decisions. Reliability, fairness, interpretability, robustness, and safety will need to be the underpinnings of Health AI.”

As an example, he holds up the growing use of AI to “triage” patients at the point of intake in order to maximize the use of medical resources while, of course, ensuring appropriate care. The problem, he says, is that bias is sometimes “baked into” the AI algorithms such systems rely on.

Specifically, he cites a Wall Street Journal article about a hospital intake algorithm that “gave healthier white patients the same ranking as black patients who had much worse chronic illnesses, as well as poorer laboratory results and vital signs.”

Why? Well, says Saxena, “the algorithm used cost to rank patients for intake, (and) because spending for black patients was less than for white patients with similar medical conditions, the AI inadvertently gave preference to white patients over black patients. Put another way, the AI exacerbated racial disparities that are already present in our healthcare system.”
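To make that mechanism concrete, here is a minimal, hypothetical sketch of how a cost-based proxy can misrank patients. The patient labels, condition counts, and dollar figures below are invented for illustration, not drawn from the study:

```python
# Hypothetical sketch: ranking patients by past spending (a proxy for
# clinical need) can reproduce existing disparities. All labels and
# numbers here are invented for illustration.

patients = [
    # (label, chronic_conditions, past_annual_cost_usd)
    ("Patient A", 1, 9000),  # healthier, but historically higher spending
    ("Patient B", 4, 6000),  # sicker, but historically lower spending
]

# Proxy ranking, as in the flawed intake algorithm: higher past cost first.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)

# Ranking by actual clinical need: more chronic conditions first.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

print("cost proxy:", [p[0] for p in by_cost])  # ['Patient A', 'Patient B']
print("by need:  ", [p[0] for p in by_need])   # ['Patient B', 'Patient A']
```

The two orderings disagree: because historical spending on the sicker patient was lower, the cost proxy puts the healthier patient first, which is exactly the pattern Saxena describes.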

The good news, Saxena says, is that such outcomes aren’t inevitable. To avoid them, we have to begin by recognizing that AI isn’t like other tools. “The technology is a thinking partner, one we need to understand, and ultimately trust. That process isn’t automatic, nor is it inherently transparent. Understanding and trusting an AI is akin to understanding and trusting complex human institutions.”

Specifically, he says, for us to trust AI, it must adhere to five principles:

“1. Data rights: Do you have the rights to the data and is the data reliable?

2. Explainability: Is your AI transparent?

3. Fairness: Is your AI unbiased and fair?

4. Robustness: Is your AI robust and secure?

5. Compliance: Is your AI appropriately governed?”

In layman’s terms, he says, “by building healthcare AIs with these five principles in mind, doctors and patients have the ability to ‘look under the hood.’”

The bottom line for him is that “AI must serve humanity, not the other way around. When doctors are empowered to challenge AI, they actually maximize their benefits.”

And the key to enabling doctors to challenge AI, he says, is to educate them, as well as patients, about how AI works, “in order to avoid undesirable outcomes and build the trust needed to reap the long-term benefits of this new technology.”