Proceed with AI in healthcare, says national academy, but cautiously

The best advice for AI developers, say the report’s authors, is to start with real problems in healthcare and to develop solutions by engaging the relevant stakeholders.
Jeff Rowe

AI has the potential to, among other things, improve patient and clinical team outcomes, reduce costs and influence population health. But stakeholders across the healthcare sector must be wary of overpromising and underdelivering, both on anticipated ROI and on projected outcomes for patients.

That’s according to a new report from the National Academy of Medicine, titled Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril.

As the report’s authors see it, the potential benefits of AI are readily apparent on a number of fronts. From an efficiency angle, for example, “AI tools can be used to reduce cost and gain efficiencies through prioritizing human labor focus on more complex tasks; to identify workflow optimization strategies; to reduce medical waste . . . and to automate highly repetitive business and workflow processes by using reliably captured structured data. [But] when implementing these tools, it is critical to be thoughtful, equitable, and inclusive to avoid adverse events and unintended consequences.”

More broadly, the authors note that AI tools are only as good as the data used to develop and maintain them, and they observe that current data sources have many limitations that constrain the capacity both to deliver evidence-based healthcare and to develop AI algorithms.

“The implementation of electronic health records and other health information systems has provided scientists with rich longitudinal, multidimensional and detailed records about an individual’s health data,” the authors contend. “However, these data are noisy and biased because they are produced for different purposes in the process of documenting care.”

In short, say the authors, bad data will only result in bad models. “There is a tendency to hype AI as something magical that can learn no matter what the inputs are. In practice, the choice of data always trumps the choice of the specific mathematical formulation of the model.”
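The claim is easy to demonstrate. Below is a minimal sketch (not from the report; it uses synthetic data and scikit-learn as an assumed toolkit) in which flipping a fraction of training labels, a crude stand-in for noisy clinical documentation, degrades a simple and a more complex model alike, regardless of the mathematical formulation chosen.

```python
# Sketch: label noise hurts models of any complexity ("data trumps model").
# Synthetic data only; this is illustrative, not the report's methodology.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise in (0.0, 0.2, 0.4):  # fraction of training labels flipped
    rng = np.random.default_rng(0)
    flip = rng.random(len(y_train)) < noise
    y_noisy = np.where(flip, 1 - y_train, y_train)  # corrupt the "inputs"
    for model in (LogisticRegression(max_iter=1000),
                  GradientBoostingClassifier(random_state=0)):
        acc = model.fit(X_train, y_noisy).score(X_test, y_test)
        print(f"noise={noise:.1f}  {type(model).__name__}: {acc:.3f}")
```

As the noise fraction rises, test accuracy falls for both the linear model and the boosted ensemble: the more sophisticated formulation cannot rescue corrupted training data.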

In a companion piece published on JAMA Network, three of the report’s authors contend, “Realistically, the current opportunity is augmented intelligence, supporting data synthesis, interpretation, and decision-making for clinicians, allied health professionals, and patients. Focusing on this reality is essential for developing user trust because there is an understandable low tolerance for machine error, and these tools are being implemented in an environment of inadequate regulation and legislation.”

As AI continues to develop, the authors point to the need for regulation and legislation that balance innovation with safety and promote public trust.

“AI has the potential to improve patient outcomes but could also pose significant risks in terms of inappropriate or inaccurate patient risk assessment, treatment recommendations, diagnostic error, privacy breaches, and other factors,” they explain. “While regulators should remain flexible, the potential for lagging legal responses will remain a challenge for AI innovation. . . . Liability will continue to evolve as regulators, courts, and the risk-management industries weigh in, and a careful balance and understanding of this is critical for AI adoption.”