Despite diagnostic advances, patients still wary of AI

Research indicates that patients are reluctant to use AI-based healthcare even when it outperforms human doctors. Why? And what can be done about it?
Jeff Rowe

According to a number of recent studies, AI can match or beat physician diagnoses when it comes to heart disease, skin cancer and eye disease, to name just a few conditions on which new AI has been tested. Yet another recent study has found that, even when presented with such evidence, patients still don't really trust AI to diagnose their health conditions.

Why?

According to the study’s authors in a recent article at Harvard Business Review, it’s because patients tend to believe that despite its demonstrated accuracy, AI can’t truly understand their particular, unique symptoms, so they’d still rather hear the diagnosis from a physician.

“(P)eople see medical care delivered by AI providers as inflexible and standardized — suited to treat an average patient but inadequate to account for the unique circumstances that apply to an individual,” the authors, Chiara Longoni and Carey K. Morewedge, both university professors of marketing, explain.

Moreover, “(w)e found that when health care was provided by AI rather than by a human care provider, patients were less likely to utilize the service and wanted to pay less for it. They also preferred having a human provider perform the service even if that meant there would be a greater risk of an inaccurate diagnosis or a surgical complication.”

On an instinctive level, this isn’t all that surprising. Who really wants to believe a machine can figure out what’s wrong with a human? But Longoni and Morewedge rightly observe that continued public mistrust will hinder AI’s potential over the long run.

So what to do?

First, they say, “providers can assuage concerns about being treated as an average or a statistic by taking actions that increase the perceived personalization of the care delivered by AI.”

In other words, describe the steps taken to ensure that the AI is targeting a particular patient’s unique situation.

Next, for specifically AI-based services (e.g., chatbot diagnoses or app-based treatments), “providers could emphasize the information gathered about patients to generate their unique profile, including their lifestyle, family history, genetic and genomic profiles, and details about their environment.”

Finally, healthcare organizations can take extra steps to spread the word about AI’s capacity for personalization — “for example, by sharing evidence with the media, explaining how the algorithms work, and sharing patients’ reviews of the service.”

Not surprisingly, especially given the improvements AI has demonstrated in studies thus far, Longoni and Morewedge note that people are comfortable utilizing medical AI “if a physician remains in charge of the ultimate decision.”

Call it the enduring value of the human touch.

AI has potential, to put it mildly, but key to realizing that potential is getting the public comfortable with having an algorithm, along with a person, involved in critical decisions about their healthcare.