Expert: AI’s future is big, but it can help providers in little ways right now

Expecting the rise of AI and actually experiencing it are two different realities, says one stakeholder, noting the trick is to know what AI can do now while preparing for what it will do in the future.
Jeff Rowe

What would you do with a million interns?

Chances are, most healthcare organizations haven’t considered quite so unlikely a scenario, but according to Thomas Barton, Senior Director, Business Process, Medical Information and Pharmacovigilance, at Eversana, a life sciences service provider, that’s the way healthcare stakeholders should be viewing the prospects for current uses of AI in the healthcare sector.

In a recent commentary, Barton initially observes that “The gold rush towards AI has been fueled by exaggerating its capabilities and overgeneralizing the ways the technology could be applied in real world settings. Well-meaning experts talked about the amazing potential of AI – from improving patient outcomes, providing better engagement between healthcare providers and patients, and increasing compliance and efficiency – without clarifying the current limitations.”

And those limitations are complex, Barton notes, as he takes an in-depth look at the challenges presented by introducing technology that can theoretically – meaning at some point in the future – take over significant swathes of basic patient care.

“One of the main challenges with AI powered technologies for FDA regulated companies is a process known as computer system validation (CSV),” he explains, “a combination of risk assessments and software testing documents designed to prove that the software poses no risk to patient safety or quality of care, is fit for use in a regulated setting, and produces information or data that meet a set of predefined requirements.”

Put simply, the problem centers on the difficulties involved in ensuring that a technology that has the capacity to change automatically as it “learns” will also remain safe for use with patients.

But as Barton sees things, the evolving landscape of AI development and regulation will become more manageable over time, and in the meantime healthcare organizations need to draw a line between futuristic aspirations and practical, if admittedly “ground level,” uses for AI right now. Hence the question about the potential of a million interns.

“This simple progression of AI, from a back-end processing tool to a front-end optimizer, can greatly reduce the repetitive and menial tasks that would otherwise need to be performed by a human agent,” Barton argues. “Many compliance risks would be reduced since the technology would presumably be more consistent in its output and each response would have a final review and approval step by a human agent prior to release of the information. At the same time, this type of back-end application of AI provides companies with invaluable experience working with the technology, and abundant data to justify the use of AI in increasingly more complex and autonomous ways.”

No, he says, AI will not replace human beings as it becomes more “skilled.” But one key advantage to using AI now is that a slow transition to more fully automated, and increasingly unsupervised, AI solutions “will allow the highly trained human agents to focus on the more challenging inquiries that AI won’t be able to handle in the near term: training and maintaining the automated systems, ‘rescuing’ the AI when needed, providing quality control and oversight, and addressing complex clinical questions.”

And in the long term, he says, as AI becomes “smarter,” the human responsibilities will “shift away from menial tasks towards management and content curation of the AI systems.”