Study: sound regulation needed for AI-assisted clinical diagnosis

In theory, AI systems can scan patient data and deliver faster, more accurate diagnoses than are currently achievable, but stakeholders say new policy guidelines are needed to ensure patient safety.
Jeff Rowe

AI is undoubtedly poised to disrupt healthcare, but a concurrent policy process is needed to ensure that its implementation is both safe for patients and effective.

That’s according to a new white paper from the Duke-Margolis Center for Health Policy that outlines the policy steps needed to incorporate artificial intelligence (AI) into diagnostic and other types of clinical decision support software while balancing effective innovation, regulation, and patient protections.

“Integrating AI into healthcare safely and effectively will need to be a careful process, requiring policymakers and stakeholders to strike a balance between the essential work of safeguarding patients while ensuring that innovators have access to the tools they need to succeed in making products that improve the public health,” Greg Daniel, PhD, MPH, Deputy Director for Policy at Duke-Margolis, said at the time of the report’s release.

The paper was developed with input from a multi-stakeholder working group and addresses the major challenges currently hindering safe, effective AI healthcare innovation:

Evidentiary needs for increased adoption of AI-enabled technologies. According to the study, the necessary evidence will include, among other things, “the effect of the software on patient outcomes, care quality, total costs of care, and workflow (and) the usability of the software and its effectiveness at delivering the right information in a way that clinicians find useful and trustworthy.”

Specifically, Duke-Margolis identifies a potential need for fresh thinking about product labeling. The current regulatory model for product labeling, as well as for related areas such as verification and validation, is tailored toward devices with fixed features. Because AI systems can continue to “learn” and improve after regulatory clearance, it is unclear how they fit into the current model.

"More clarity is needed to understand when modifications or updates to AI-enabled [software as a medical device] will require submission of a new 510(k) or a supplemental PMA to FDA, and when these quality systems will suffice," Duke-Margolis wrote in the report.

Ensuring AI systems are ethically trained and flexible. Similarly, the report noted that “best practices to mitigate bias that may be introduced by the training data used to develop software are critical to ensuring that software developed with data-driven AI methods do not perpetuate or exacerbate existing clinical biases.”

Demonstrating value. Public and private coverage and reimbursement to providers will drive adoption and increase the return on investment for these technologies, the report noted, but AI-enabled clinical decision support software must be able to demonstrate improvements in provider system efficiency and enable providers to meet key outcome and cost measures. A useful first step would be to establish which clinical decision support software features and performance outcomes payers will value most, as well as the types of evidence that will be required to prove performance gains.

“AI-enabled clinical decision support software has the potential to help clinicians arrive at a correct diagnosis faster, while enhancing public health and improving clinical outcomes,” said Christina Silcox, PhD, managing associate at Duke-Margolis and co-author of the white paper. “To realize AI’s potential in health care, the regulatory, legal, data, and adoption challenges that are slowing safe and effective innovation need to be addressed.”