Can AI be held liable for mistaken diagnoses?

The adoption of AI in radiology will certainly be influenced by science, says one expert, but it will also be shaped by the courts and the realities of defensive medicine.

Can algorithms be sued?

At first blush, the question seems a bit fanciful, but it doesn't take long to recognize its relevance as AI and its underlying algorithms are put to work across the healthcare sector.

The short answer, says Saurabh Jha, associate professor of radiology at the University of Pennsylvania, is, “That depends.”

In a recent commentary, Jha examines the liability questions increasingly swirling around AI by positing a hypothetical in which a patient dies after an AI-based x-ray diagnosis misses a case of pneumonia.

The difficulty in attributing liability begins with the question of who developed the algorithm. If the facility developed it in-house, Jha says, “it will be liable through what’s known as enterprise liability. Though the medical center isn’t legally obliged to have radiologists oversee AI’s interpretation of x-rays, by removing radiologists from the process it assumes the risk of letting AI fly solo.”

If, however, the facility bought the algorithm from an AI vendor, the answer to the question of liability becomes more complex.

As Jha points out, unlike medical items that can malfunction, such as catheters or pacemakers, “(a)lgorithms aren’t static products — they learn and evolve. An algorithm approved by the FDA is different than the one that has been reading x-rays and learning about them for a year. The FDA can’t foresee how the algorithm will perform in the future.”

Moreover, some algorithms are entering the market through the FDA’s standard, more rigorous approval process, while others are entering via its expedited pathway, a distinction that can affect how a court views liability in the event of a malfunction.

Finally, the reality of tort law, says Jha, is that the question of “(o)n whom the responsibility for continuous improvement of AI will fall — vendors or users — depends on the outcome of the first major litigation in this realm. The choice of who to sue, of course, may be affected by deep pockets. Plaintiffs may prefer suing a large hospital instead of a small, venture-capital-supported start-up.”

But amid all the murk, the good news for radiologists, at least as Jha views the evolving landscape, is that AI isn’t going to take any jobs from radiologists any time soon. Indeed, “the more likely scenario is that AI will be used to help radiologists by flagging abnormalities on images. Radiologists will still be responsible for the final interpretation. AI will be to radiologists what Dr. Watson was to Sherlock Holmes — a trusted, albeit overzealous, assistant.”