Study finds consumers still largely in the dark on healthcare AI

Researchers investigating the reasons behind slow AI adoption in the healthcare sector found that users are reluctant to trust machines as much as they trust human healthcare providers.
Jeff Rowe

There’s no shortage of enthusiasm for AI in healthcare on the part of many stakeholders. But among patients themselves? Perhaps not so much.

That’s according to a new study by researchers at the Rotterdam School of Management Erasmus University and Boston University Questrom School of Business.

Not surprisingly, the main reason for hesitation seems to be a lack of understanding as to how AI works and how the new technology can change healthcare for the better.

As Dr. Romain Cadario at the Rotterdam School of Management Erasmus University explained the problem, “Artificial intelligence is revolutionising the healthcare industry, offering services at a cost and scale that makes healthcare more accessible and affordable to more patients in both developed and developing countries. Many AI-driven diagnoses can perform comparably to or even better than human specialists, as well as providing easier access to information for patients outside of clinical settings via smart devices. However, the clarity of these benefits is often matched by the opacity with which they are delivered, which presents a major barrier to their adoption.”

For the study, researchers ran five online experiments with a sample of 2,699 American citizens.

The upshot? Consumers believe they can better understand decisions made by human healthcare providers than comparable decisions made by AI-driven providers, even though they may not actually understand how either humans or algorithms arrive at medical decisions.

Said Dr. Cadario, “In one experiment, for instance, we randomly assigned a group of participants to self-assess how well they felt they understood the process by which a human provider or an algorithmic provider triages potentially cancerous skin lesions through visual inspection. We then assigned another group to take a test measuring their actual understanding of this process and compared the responses of the two groups. We found that participants consistently claimed lower subjective understanding for medical decisions made by algorithmic providers—even though their objective understanding of both AI and human practitioner decision making was the same.”

According to the researchers, consumers still largely view healthcare AI as an indecipherable “black box”, making them less likely to use such services. Perhaps more disturbingly, the researchers found, even many decision-makers within the healthcare sector hold this view, leaving many still wary of making the necessary investments.

The answer, say Dr. Cadario and his colleagues, is for AI providers and the healthcare services that use them to do a better job of providing clarity to consumers, to help boost trust and improve uptake.

“Transparency provides multiple benefits to patients, providers, healthcare systems, and firms delivering healthcare services,” Dr. Cadario noted. “Our results illustrate how transparency facilitates patient adoption and can reduce patient resistance to medical artificial intelligence, critical psychological barriers to meeting patient needs given the current surge in healthcare demand.”

Photo by Zapp2Photo/Getty Images