International medicines regulators point to need for AI monitoring and regulation

As AI rapidly develops, the report noted, new uses threaten to outstrip current regulatory frameworks.
Jeff Rowe

The International Coalition of Medicines Regulatory Authorities (ICMRA), an international working group of medicines regulators whose members include the European Medicines Agency, has released a report recommending, among other things, the creation of a new permanent working group to monitor the use of AI and ensure that regulations can accommodate new developments.

“AI technologies are increasingly applied in medicines development,” the report’s writers observed at the outset. “Opportunities to apply AI occur across all stages of a medicine’s lifecycle: from target validation and identification of biomarkers, to annotation and analysis of clinical data in trials, pharmacovigilance and clinical use optimisation. This range of applications brings with it regulatory challenges, including the transparency of the algorithms themselves and their meaning, as well as the risks of AI failures and the wider impact these would have on its uptake in pharmaceutical development and ultimately on patients’ health.”

In addition to a group to monitor AI developments across the medicines development landscape – from preclinical and clinical development to pharmacovigilance and clinical use – the ICMRA also suggested that regulators may need to apply a risk-based approach to assessing and regulating AI.

“The scientific or clinical validation of AI use would require a sufficient level of understandability and regulatory access to the employed algorithms and underlying datasets,” the report explained. “Legal and regulatory frameworks may need to be adapted to ensure such access options. In addition, limits to validation and predictability may have to be identified and tolerated when, for example, the AI is to learn, adapt or evolve autonomously (on each user's device, as in the hypothetical case); such deployments would also be considered higher-risk AI uses.”
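The report stops short of saying how a risk-based approach would be operationalized. As a purely illustrative sketch, the hypothetical Python triage below assigns a risk tier to an AI use case based on the attributes the report highlights; every name, criterion and tier label here is an assumption for illustration, not anything drawn from ICMRA's recommendations:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Hypothetical description of an AI deployment in a medicine's lifecycle."""
    name: str
    learns_autonomously: bool          # adapts or evolves after deployment, e.g. on-device
    affects_clinical_decisions: bool
    algorithm_auditable: bool          # regulators can access the algorithm and datasets

def risk_tier(use_case: AIUseCase) -> str:
    """Assign an illustrative risk tier -- not ICMRA's actual criteria."""
    if use_case.learns_autonomously:
        # The report flags autonomously evolving AI as higher risk, since
        # validation and predictability are harder to establish for it.
        return "higher-risk"
    if use_case.affects_clinical_decisions and not use_case.algorithm_auditable:
        return "higher-risk"
    if use_case.affects_clinical_decisions:
        return "moderate-risk"
    return "lower-risk"

print(risk_tier(AIUseCase("on-device adaptive dosing app", True, True, False)))
# -> higher-risk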

In addition, sponsors and developers of AI systems – including pharmaceutical companies – should set up governance structures to oversee algorithms and AI deployments that are closely linked to the benefit-risk of a medicinal product. That should include a committee to understand and manage the implications of higher-risk AI, particularly when a new tool can “learn, adapt or evolve autonomously.”
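The report does not specify what such governance should look like in software. One hypothetical reading, sketched below, is a change-control gate in which higher-risk model updates cannot ship without committee sign-off; the class and field names are assumptions, not drawn from the report:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModelRelease:
    """Hypothetical change-control record for one update to a deployed model."""
    version: str
    change_summary: str
    risk_tier: str                     # e.g. "higher-risk" for autonomously learning AI
    committee_approved: bool = False
    approved_at: datetime | None = None

class GovernanceLog:
    """Illustrative gate: higher-risk releases need explicit committee sign-off."""

    def __init__(self) -> None:
        self.deployed: list[ModelRelease] = []

    def approve(self, release: ModelRelease) -> None:
        release.committee_approved = True
        release.approved_at = datetime.now(timezone.utc)

    def deploy(self, release: ModelRelease) -> None:
        if release.risk_tier == "higher-risk" and not release.committee_approved:
            raise PermissionError(f"{release.version}: committee sign-off required")
        self.deployed.append(release)

log = GovernanceLog()
update = ModelRelease("2.1.0", "retrained on newly collected trial data", "higher-risk")
log.approve(update)  # without this step, deploy() raises PermissionError
log.deploy(update)
```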

Other recommendations include the development of regulatory guidelines for AI in areas such as data provenance, reliability, transparency and understandability, pharmacovigilance, and real-world monitoring of patient functioning.
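A guideline on data provenance, for instance, could translate into recording where a training dataset came from along with a content hash that makes later tampering detectable. The minimal sketch below assumes hypothetical field names and a stand-in dataset:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(dataset_path: str, source: str, licence: str) -> dict:
    """Build an illustrative provenance entry; field names are assumptions."""
    with open(dataset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "source": source,          # e.g. the trial or registry the data came from
        "licence": licence,
        "sha256": digest,          # re-hash later to verify the data is unchanged
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Stand-in dataset so the example runs end to end.
with open("trial_data.csv", "w") as f:
    f.write("subject_id,outcome\n001,improved\n")

record = provenance_record("trial_data.csv", "hypothetical trial registry", "internal")
print(json.dumps(record, indent=2))
```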

To elucidate some of the challenges AI use poses for global medicines regulation, ICMRA members developed two hypothetical case studies for the report. These examples were then used to ‘stress test’ the regulatory systems of ICMRA members and pinpoint the areas where change may be needed.

In pharmacovigilance applications, for example, the report determined that a balance must be struck between relying on AI to identify safety signals and ensuring adequate human oversight of signal detection and management.
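One familiar pattern for striking that balance is human-in-the-loop triage: a model scores incoming adverse-event reports, but anything it flags is routed to a human assessor rather than actioned automatically. The sketch below uses an arbitrary threshold and assumed field names purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class AdverseEventReport:
    """Hypothetical adverse-event report with a model-assigned signal score."""
    report_id: str
    drug: str
    reaction: str
    signal_score: float  # model's estimate that this reflects a genuine safety signal

REVIEW_THRESHOLD = 0.5   # arbitrary for illustration; tuned in practice

def triage(reports: list[AdverseEventReport]) -> list[AdverseEventReport]:
    """AI narrows the queue; a human assessor keeps the final say on every signal."""
    flagged = [r for r in reports if r.signal_score >= REVIEW_THRESHOLD]
    # The model prioritizes reports for human review -- it does not confirm
    # or dismiss safety signals on its own.
    return sorted(flagged, key=lambda r: r.signal_score, reverse=True)

queue = triage([
    AdverseEventReport("R1", "drug-x", "hepatotoxicity", 0.92),
    AdverseEventReport("R2", "drug-x", "headache", 0.12),
])
print([r.report_id for r in queue])  # -> ['R1']
```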

The new document has been published as other authorities, too, examine the challenges of adapting to AI’s rapid adoption and evolution.

The report also follows on the heels of a similar action plan that the US FDA released in January for the regulation of AI and machine learning-based software.
