Health systems tap AI to fight pandemic, but long-term testing needs remain

The use of AI to fight COVID-19 shows the technology’s potential to step up quickly in an emergency, but long-term use requires continued careful, patient testing.
Jeff Rowe

AI in healthcare can do an increasing number of things very well. But other things, perhaps not so much.

In a recent commentary, tech consultant Norman Lewis highlights that distinction, arguing that a growing number of healthcare stakeholders are getting a bit too enthusiastic about AI’s prospects as the emerging technologies are suddenly enlisted in the fight against the coronavirus.

“The Covid-19 pandemic has turned into a gateway for the adoption of AI in healthcare,” he notes. “Staff shortages and overwhelming patient loads have fast-tracked promising new technologies, particularly AI tools that can speed triage. But this accelerated process contains dangers: regulatory oversight, which has hampered innovation in healthcare over the years, nevertheless remains critical. We are not dealing with harmless standards – this is about life and death – oversight and rigorous testing is vital.”

He points to an example in the UK in which an AI-based chest x-ray system was due to be tested, but then the pandemic struck and the tool was put right into service. “Within weeks, the AI-based X-ray tool was retooled to detect Covid-19-induced pneumonia. Instead of a trial to double-check human diagnosis, the technology is now performing initial readings. If this speeds up diagnosis, that is to be welcomed, . . . (but) this has ignited an AI healthcare ‘arms race’ to develop new software or upgrade existing tools in the hope that the pandemic will fast-track deployment by side-stepping pre-Covid-19 regulatory barriers.”

In many ways, argues Lewis, the use of AI in the pandemic battle is a good news/bad news situation, or, perhaps more accurately, a simple news/not-so-simple news one.

On the one hand, “AI in healthcare excels in areas with well-defined tasks, with clearly defined inputs and binary outputs that can be easily validated. . . . AI systems are ideally suited to situations where human expertise is a scarce resource, like in many developing TB-prevalent countries where a lack of radiological expertise in remote areas is a real problem.”

On the other hand, “AI tools are not the same as human intelligence. They are algorithms designed by humans. An AI system, for example, could never replace a surgeon precisely because when the body is cut open, things might not meet pre-programmed expectations. Surgeons need to think on their feet. Algorithms rely on people sitting on their rear ends programming them.”

In short, says Lewis, it’s clear AI is a boon to the future of healthcare, and “its deployment in the Covid-19 crisis reveals some of this potential. But it needs careful consideration because it has implications that go way beyond healthcare.”

After all, he notes, “healthcare is not an exact science and much of it cannot be reduced to algorithmic certainty. Instincts and experience are more important.”

And that’s why real, live doctors are still indispensable.