Health policy analysts call for more rigorous AI monitoring

Under a higher standard, the experts say, new predictive analytics must demonstrate measurable impact on health care delivery and patient outcomes.
Jeff Rowe

AI and increased computing power have long held the promise of improving prediction and prognostication in health care, but unlocking that potential requires new, advanced algorithms that improve analytics while protecting patient safety.

That’s one of the conclusions health policy analysts from the University of Pennsylvania and the University of California have reached in a commentary published in Science calling for a more rigorous means of monitoring and introducing AI medical applications. In their paper, Ravi Parikh, Ziad Obermeyer, and Amol Navathe propose five standards they believe should govern the approval of AI applications for use in medical care.

Using AI to diagnose medical conditions or to predict outcomes under different treatment options is still a new and rapidly evolving practice; the authors note that AI-based algorithms and tools have only recently been integrated into medical prediction. Their five standards are intended to protect patients whose treatment involves AI applications or devices.

For starters, regulators should expect more than mathematical elegance from AI: the first standard calls for benefits that are clearly identifiable and subject to validation by the FDA, just as drugs and devices are.

The second standard calls for establishing benchmarks against which an application’s usefulness and quality can be evaluated. “By measuring algorithms against existing standards of care, regulators and policymakers can identify the best combination of protocols, human practitioners, and artificial intelligence,” the authors write.

The third standard involves ensuring that variable input specifications are clearly documented, so other institutions can draw on that information when testing a new application or device. The fourth concerns the interventions prompted by AI findings, and whether those interventions prove successful and appropriate. The last calls for regular audits, which account for the fact that both the data an algorithm draws on and its performance can change over time.

The authors also caution that because AI applications are so new, current regulations may be poorly suited to them or inconsistently applied. Consequently, they recommend that medical companies adopt a “promise and protection” policy toward AI medical devices to ensure the technology stays focused on delivering better health care for patients.