In one corner, we have the expanded boundaries of human performance, the democratization of medical knowledge and the automation of drudgery.
In the other, we have data challenges, privacy concerns and the ever-present risk of bias and inequality.
Those are just a few of the pros and cons of AI laid out in a new report from The Brookings Institution titled, “Risks and Remedies for Artificial Intelligence in Healthcare.”
Penned by W. Nicholson Price II, professor of law at the University of Michigan Law School, the report aims to provide a comprehensive look ahead for stakeholders as the still-young technology quickly spreads across the healthcare sector.
On the plus side, Price points to “at least four major roles” AI has the potential to play in healthcare in the years ahead.
Starting at the top, so to speak, he cites AI’s potential to push the “boundaries of human performance” when it comes to diagnosis, citing Google Health’s new AI-driven tool that “can predict the onset of acute kidney injury up to two days before the injury occurs; compare that to current medical practice, where the injury often isn’t noticed until after it happens.”
Excellence in another form, says Price, will come with the democratization of medical knowledge. “AI can also share the expertise and performance of specialists to supplement providers who might otherwise lack that expertise,” he explains. “Such democratization matters because specialists, especially highly skilled experts, are relatively rare compared to need in many areas.”
Next up is a favorite in any profession or job: the elimination, or in this case the automation, of drudgery in medical practice. “AI can automate some of the computer tasks that take up much of medical practice today,” says Price. “If AI systems can queue up the most relevant information in patient records and then distill recordings of appointments and conversations down into structured data, they could save substantial time for providers and might increase the amount of face-time between providers and patients and the quality of the medical encounter for both.”
As for the risks, Price begins with the very real possibility that, at times, even AI will make the wrong call, resulting in mistakes such as recommending the wrong drug for a patient or missing a tumor on a radiological scan.
Similar harms could stem from the vast amounts of data needed to train AI algorithms. On top of that is the fact that “patients typically see different providers and switch insurance companies, leading to data split in multiple systems and multiple formats. This fragmentation increases the risk of error, decreases the comprehensiveness of datasets, and increases the expense of gathering data—which also limits the types of entities that can develop effective healthcare AI.”
As for solutions to these and other potential issues, Price recommends a robust presence on the part of the FDA as AI spreads, along with an oversight role for professional organizations such as the American College of Radiology and the American Medical Association in those areas that fall beyond the FDA’s purview.
Finally, he says, there’s the critical role of providers in ensuring the success of AI in the long run. And for that, he suggests, “medical education will need to prepare providers to evaluate and interpret the AI systems they will encounter in the evolving health-care environment.”