For regulators, AI standards still a work in progress

From fitness trackers to high-end imaging devices, consumers and clinicians are increasingly embracing devices using AI technologies, even as regulators struggle to determine the best way to test and approve them.
Jeff Rowe

“We are reaching a point where digital AI-driven devices are able to replicate the abilities of human health specialists, but whether FDA policies can keep pace with these innovations remains to be seen.”

So noted tech writer Sally Turner in a recent review of the challenges the US Food and Drug Administration (FDA) has been facing as policymakers work to update regulations even as AI devices quickly enter the healthcare marketplace.

She pointed to a statement from outgoing FDA Commissioner Scott Gottlieb, who acknowledged that the FDA’s traditional approach to overseeing certain healthcare products did not align with the types of innovations being developed.

“Our approach to regulating these novel, swiftly evolving products must foster, not inhibit, innovation,” wrote Gottlieb, as the FDA announced plans to evolve its policies to advance development and oversight of innovative digital health tools.

According to Turner, the FDA is keen to point out that it has also cleared a range of software with AI functionality, and the market is diversifying. In February 2018, for example, the agency permitted marketing of clinical decision support software designed to alert providers to a potential stroke in patients, and in April it approved the first medical device to use AI to detect “greater than a mild level of the eye disease diabetic retinopathy” in adults who have diabetes.

“The FDA recognises that artificial intelligence holds enormous promise for the future of medicine,” Stephanie Caccomo, spokesperson for the FDA, explained to Turner, “and we’re actively developing a new regulatory framework to promote innovation in this space, and support the use of AI-based technologies. As we explore and test our Pre-Cert pilot, where we focus on a firm’s underlying quality, we’ll account for one of the greatest benefits of machine learning: that it can continue to learn and improve as it is used.”

Still, Turner observes, the FDA faces some tough challenges in its management of digital tools. “It is likely that emerging developments in AI, and innovative algorithms, may in some cases fail to meet the FDA’s recent draft guidance,” she wrote. “Clinical tools powered by AI undoubtedly have many advantages, but the massive amounts of data they provide may make them less accessible to end-users who are more familiar with traditional analytics.”

Despite the challenges, Caccomo said the FDA will ensure that all aspects of its regulatory framework, such as new software validation tools, are sufficiently flexible to keep pace with the unique attributes of this rapidly advancing field.

“Employing the Pre-Cert approach to AI may allow a firm to make certain minor changes to its devices without having to make submissions each time,” she said. “We know that our approach to AI regulation must establish appropriate guardrails for patients. And even as we cross new frontiers in innovation, we must make sure that these novel technologies can deliver benefits to patients by meeting our standards for safety and effectiveness.”