How developers can guide the orderly spread of healthcare AI

If the implementation science community is to facilitate the adoption of machine learning across healthcare, a team of researchers writes, issues such as privacy and algorithmic bias will need to be addressed.
Jeff Rowe

AI is coming, so “we suggest that implementation science researchers and practitioners make a commitment to more fully consider the wider range of issues that relate to its implementation, which include health system, social, and economic implications of the deployment of AI in healthcare settings.”

So write a team of researchers in Canada in a paper published this week in the Journal of Medical Internet Research. Their purpose, they write, is to find the most appropriate language for discussions of AI in healthcare, then focus on questions about the deployment of AI as a clinical decision-making tool across the healthcare sector.

In broad terms, James Shaw, PhD, of the University of Toronto and colleagues compare the market penetration of AI with that of other technologies using an existing framework dubbed NASSS, short for Nonadoption, Abandonment and Challenges to the Scale-up, Spread and Sustainability of health and care technologies.

Following the NASSS framework, they outline a number of issues they consider primarily responsible for impeding the implementation of AI, chiefly in its decision-support use cases, including:

Meaningful decision support: Clinical decision-making is “a complex process involving the integration of a variety of data sources, incorporating both tacit and explicit modes of intelligence,” the authors explain.

To help this process along, AI developers are adding communication tools such as data visualization. “The nature and value of these communication tools are central to the implementation process, helping to determine whether and how algorithmic outputs are incorporated in everyday routine practices.”
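
For illustration, a minimal sketch of the kind of communication layer the authors have in mind might pair a model's risk estimate with a simple chart of the factors driving it. Everything below is hypothetical, including the risk score, feature names, and contribution values; matplotlib is just one common charting choice, not anything prescribed in the paper.

# A hypothetical decision-support readout: show a patient's predicted risk
# alongside the features that contributed most to it, so clinicians can weigh
# the algorithmic output against their own judgment.
import matplotlib.pyplot as plt

patient_risk = 0.72  # hypothetical model output (e.g., readmission probability)
contributions = {    # hypothetical per-feature contributions to the score
    "Prior admissions (3)": 0.21,
    "HbA1c (9.1%)": 0.18,
    "Age (67)": 0.09,
    "Medication count (11)": 0.07,
}

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(list(contributions), list(contributions.values()))
ax.invert_yaxis()  # largest contributor on top
ax.set_xlabel("Contribution to predicted risk")
ax.set_title(f"Predicted 30-day readmission risk: {patient_risk:.0%}")
fig.tight_layout()
plt.show()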

Explainability: How do healthcare AI models achieve their results? According to Shaw and Co., even the computer scientists who create them often don’t know.

“The lack of understanding of those mechanisms and circumstances poses challenges to the acceptability of machine learning to healthcare stakeholders,” they write. “Although the issue of explainability relates clearly to decision support use cases of machine learning as explained here, the issue may apply even more profoundly to automation-focused use cases as they gain prominence in healthcare.”
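
One widely used post-hoc technique, offered here as a general illustration rather than anything from the paper, is to probe a trained model for which inputs most influence its predictions. The sketch below applies scikit-learn's permutation importance to a synthetic dataset; the data and model choice are assumptions made purely for the example.

# Permutation importance: shuffle each feature in turn and measure how much
# the model's test performance drops -- a rough, model-agnostic signal of
# which inputs the model leans on. It shows *which* features matter, not
# *why* the model combines them as it does, which is part of the
# explainability gap the authors describe.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")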

Privacy and consent: Legislation and guidance are lagging behind when it comes to the proper use of data from wearable devices, the authors note, adding that many health-related apps have unclear consent processes concerning the flow of data they generate.

Moreover, data that are de-identified may be re-identifiable when linked with other datasets. 

“These considerations create major risks for initiatives that seek to make health data available for use in the development of machine learning applications, potentially leading to substantial resistance from healthcare providers,” the authors write.
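
The linkage risk is straightforward to demonstrate: records stripped of names can often be re-identified by joining them to an outside dataset on shared quasi-identifiers such as ZIP code, birth date, and sex. The toy tables below are fabricated solely to show the mechanism.

# Linkage attack sketch: a "de-identified" health dataset joined against a
# public record (e.g., a voter roll) on shared quasi-identifiers. Any row
# whose quasi-identifier combination is unique in both tables is effectively
# re-identified.
import pandas as pd

health = pd.DataFrame({  # de-identified clinical data (toy values)
    "zip": ["02139", "02139", "60615"],
    "birth_date": ["1970-03-12", "1985-07-01", "1970-03-12"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})
public = pd.DataFrame({  # public record with names attached (toy values)
    "zip": ["02139", "60615"],
    "birth_date": ["1970-03-12", "1970-03-12"],
    "sex": ["F", "F"],
    "name": ["Jane Doe", "Ann Roe"],
})

linked = health.merge(public, on=["zip", "birth_date", "sex"])
print(linked)  # names are now attached to diagnoses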

Algorithmic bias: “In cases where training data are partial or incomplete or only reflect a subset of a given population, the resulting model will only be relevant to the population of people represented in the dataset,” the authors observe. “This raises the question about data provenance and represents a set of issues related to the biases that are built into algorithms used to inform decision making.”
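
A routine safeguard consistent with this concern, though not one the authors prescribe, is to evaluate a model separately on each subgroup rather than on the population as a whole. The synthetic data below deliberately underrepresents one group to show how a healthy-looking aggregate accuracy can hide a subgroup failure.

# Subgroup evaluation sketch: train on data where group B is scarce and its
# outcomes follow a different pattern, then compare aggregate accuracy with
# per-group accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's outcome depends on its features differently (via shift),
    # so a model fit mostly to group A generalizes poorly to group B.
    X = rng.normal(size=(n, 3))
    y = ((X[:, 0] + shift * X[:, 1]) > 0).astype(int)
    return X, y

Xa, ya = make_group(950, shift=1.0)   # well-represented group A
Xb, yb = make_group(50, shift=-1.0)   # underrepresented group B

X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
model = LogisticRegression().fit(X, y)

print("overall accuracy:", accuracy_score(y, model.predict(X)))
print("group A accuracy:", accuracy_score(ya, model.predict(Xa)))
print("group B accuracy:", accuracy_score(yb, model.predict(Xb)))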

Scalability and normal accidents: As AI applications spread across the healthcare landscape, it’s inevitable, the authors say, that some algorithmic outputs will confound, contradict or otherwise conflict with others.

“The effects of this interaction are impossible to predict in advance, in part because the particular technologies that will interact are unclear and likely not yet implemented in the course of usual care,” they write. “We suggest that implementation scientists will need to consider the unintended consequences of the implementation and scale of ML in health care, creating even more complexity and greater opportunity for risks to the safety of patients, health care providers, and the general public.”
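
As a small, entirely hypothetical illustration of that interaction problem, consider two independently deployed decision-support tools that can issue contradictory recommendations for the same patient. Even detecting the conflict requires infrastructure that neither tool was designed with; the names and the conflict rule below are invented for the sketch.

# Hypothetical guardrail: collect recommendations from independently built
# tools and flag contradictions for human review rather than letting one
# output silently override the other.
from dataclasses import dataclass

@dataclass
class Recommendation:
    source: str
    action: str  # e.g., "start_anticoagulant" or "hold_anticoagulant"

def find_conflicts(recs):
    # Naive rule: the same therapy both recommended and held.
    actions = {r.action for r in recs}
    return [a[len("start_"):] for a in actions
            if a.startswith("start_") and "hold_" + a[len("start_"):] in actions]

recs = [
    Recommendation("stroke_risk_model", "start_anticoagulant"),
    Recommendation("bleed_risk_model", "hold_anticoagulant"),
]
for therapy in find_conflicts(recs):
    print(f"Conflicting guidance on {therapy}: route to clinician review.")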

In concluding their observations and predictions, the authors call the future of machine learning in healthcare “positive but uncertain.” To a considerable extent, they suggest, acceptance and adoption of the technology rests in the collective hands of all healthcare stakeholders—patients, providers and AI developers alike.