Work in progress: how the FDA intends to regulate AI

The FDA has several initiatives in the pipeline to address an array of AI regulatory challenges, but officials say they still need private-sector help to ensure steady progress.
Jeff Rowe

While AI is still a relatively new development in healthcare, many providers would like to see it become a regular part of clinical operations, particularly in radiology. The problem, however, is that a number of regulatory logjams have yet to be resolved.

That’s according to Ravi Samala, PhD, a staff fellow in the U.S. Food and Drug Administration’s (FDA) Division of Imaging, Diagnostics, and Software Reliability (DIDSR), who spoke recently at the Conference on Machine Intelligence in Medical Imaging (C-MIMI) held by the Society for Imaging Informatics in Medicine (SIIM).

In his view, there are four “driving forces” in the implementation of AI across healthcare that the FDA is finding particularly challenging when it comes to determining the best regulatory approach: the array of emerging applications, novel specialty areas for AI, the unique and varied nature of medical data, and the rapid, ongoing advances in algorithms.

"Based on these observations, our team has come up with five AI/ML gaps in the regulatory science research that could help the agency's mission of giving patients in the U.S. first access to software as medical devices," he explained.

Specifically, Samala pointed to research gaps involving "data size, new and multimodality data, computer-aided triage applications, quantitative imaging (and) adaptive algorithms."

Concerning the issue of data size, Samala observed, "There's a strong need to fundamentally understand the limitations of smaller datasets, as well as to develop techniques to maximize information and enhance AI/ML training."

As for the development of new types of AI software that draw on multiple data sources, Samala noted that such algorithms often require novel assessment paradigms, as well as new ways to measure safety and effectiveness when they are used in new kinds of applications.

"Although most of the devices currently on the market are diagnostic in nature, we are seeing more and more devices that are prognostic -- trying to do treatment response prediction, risk assessment in therapy -- which requires different assessment metrics as well as different standards," he said.

The FDA is working to identify the evaluation approaches that can assess the performance of these types of devices, he said.

According to Samala, the third regulatory gap involves computer-aided triage. While it has been one of the fastest-growing areas of AI, with more than 30 AI software applications receiving clearance over the last two years, the time savings these algorithms may produce still need to be assessed more accurately.

The number of AI applications involving risk assessment and patient prognosis is also expected to increase, Samala said.

Photo by vmjpg/Getty Images