How do you regulate a technology that is designed to keep changing even as it works?
That’s the fundamental question facing regulators as they work to figure out how to ensure the safety of new AI technology, and it’s the question considered in a recent commentary by Zuhal Reed, an attorney with Virginia-based Medmarc, a life sciences insurance firm.
It’s no secret that, as Reed notes, “the enormous advantage conferred by (AI’s) ability to learn from real-world use and improve itself while delivering healthcare is revolutionary in the field of healthcare, bioinformatics and medical technology.” But because of the rapid spread of AI, “the assessment and mitigation of risk, compliance, and quality assurance is becoming more important than ever.”
For starters, Reed offers a succinct explanation of the difference between software as a medical device (SaMD) and software in a medical device (SiMD): SiMD is software that helps a hardware medical device function – as in an insulin pump – while SaMD is software that actually is the medical device, as in a smartwatch app that can monitor blood pressure.
Regulating SiMD is relatively straightforward, Reed explains, because the algorithms in use are “locked,” or unchanging. It’s not so easy with SaMD, given that its machine learning algorithms are supposed to “learn” and change – ideally becoming more precise and accurate – as they gather data.
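The locked-versus-adaptive distinction Reed draws can be illustrated with a toy sketch (not from Reed's commentary; the classes, threshold logic, and numbers here are hypothetical): a locked algorithm's parameters are frozen at clearance time, while an adaptive one keeps re-estimating them from real-world data, so the same input can yield different answers over time.

```python
# Hypothetical illustration: a "locked" model is frozen after approval,
# while an "adaptive" model keeps updating itself from new readings.

class LockedModel:
    """Parameters fixed at approval; field behavior never changes."""
    def __init__(self, threshold):
        self.threshold = threshold  # frozen at clearance time

    def predict(self, value):
        return value > self.threshold


class AdaptiveModel:
    """Re-estimates its threshold from every observation it sees."""
    def __init__(self, threshold):
        self.threshold = threshold
        self._n = 0
        self._mean = 0.0

    def update(self, value):
        # The running mean of observed values becomes the new threshold,
        # so the device's behavior drifts as data accumulates.
        self._n += 1
        self._mean += (value - self._mean) / self._n
        self.threshold = self._mean

    def predict(self, value):
        return value > self.threshold


locked = LockedModel(threshold=10.0)
adaptive = AdaptiveModel(threshold=10.0)

# Both devices ship identical; the adaptive one then "learns" in the field.
for reading in [12.0, 14.0, 16.0]:
    adaptive.update(reading)

print(locked.predict(11.0))    # True: 11.0 > 10.0, same as at approval
print(adaptive.predict(11.0))  # False: threshold has drifted to 14.0
```

The regulatory difficulty the article describes falls out of this sketch directly: the locked device a regulator cleared is the device patients get, while the adaptive device may answer differently a month later than it did at review.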
So the question is: how do you regulate a device that, in a day, a month, or a year, won’t really be the same device, given the internal changes to how – and how effectively – it functions?
For the moment, explains Reed, the FDA is left with the “De Novo” process for approval of SaMD devices. “The De Novo process is a risk-based process of approval used by FDA for medical devices that do not fall into a previously classified category. Once a device is approved through the De Novo process, it may be used for the approval of future devices through the 510(k) pathway.”
Thus far, however, Reed says “FDA has not approved AI/ML-based SaMD that uses adaptive algorithms because the current regulatory pathways do not allow for approval of a device that constantly improves itself in real-time.”
What FDA has done, however, is propose a framework for regulating both locked and unlocked (adaptive) AI/ML-based SaMD. This framework takes a “total product life cycle approach” to regulation in order to effectively monitor new and improved algorithms.
Dubbed “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan,” the proposal will, among other things, allow manufacturers to “pre-specify modifications and describe how the algorithm will change over time along with any safety implications.”
It will also, Reed adds, “address monitoring of real-world performance and good machine learning practice (GMLP).”