Researchers: juries not inclined to jump to AI malpractice conclusions

Despite the promise of AI to improve patient outcomes, legal scholars have previously cautioned that tort law may create a substantial legal barrier to physician acceptance of AI recommendations.
Jeff Rowe

Potential liability is always a tricky subject in healthcare, and those concerns are perhaps never more acute than when new technology such as AI is implemented in healthcare practices.

A new study of potential jurors, however, has found that physicians who follow AI recommendations may be judged less harshly for potential malpractice than was previously thought.

Published in the January issue of The Journal of Nuclear Medicine (JNM), the study surveyed a representative sample of 2,000 adults in the US, each of whom read one of four scenarios in which an AI algorithm provided a drug dosage recommendation to a physician. Participants then evaluated the physician’s decision by assessing whether it was one that could have been made by most physicians, and by a reasonable physician, in similar circumstances.

“New AI tools can assist physicians in treatment recommendations and diagnostics, including the interpretation of medical images,” explained Kevin Tobia, JD, PhD, assistant professor of law at the Georgetown University Law Center in Washington, D.C. “But if physicians rely on AI tools and things go wrong, how likely is a juror to find them legally liable? Many such cases would never reach a jury, but for one that did, the answer depends on the views and testimony of medical experts and the decision making of lay juries. Our study is the first to focus on that last aspect, studying potential jurors’ attitudes about physicians who use AI.”

The results showed that participants used two factors to evaluate physicians’ use of AI support systems: whether the treatment provided was standard and whether the physician followed the AI recommendation.

Participants judged physicians who accepted a standard AI recommendation more favorably than those who rejected it. However, a physician who received a nonstandard AI recommendation was not judged as any safer from liability for rejecting it.

In short, while prior literature has suggested that laypersons are strongly averse to AI, this study indicates they are not, in fact, strongly opposed to a physician’s acceptance of AI medical recommendations, and that the threat of legal liability for accepting such recommendations may be smaller than is commonly thought.

“Of course, only a fraction of medical malpractice lawsuits reach a jury—many more settle,” the researchers noted. “But even parties who ultimately settle their medical malpractice claims benefit from knowledge about the likely jury outcome if trial had ensued. For those, the results here provide evidence about the shadow of the law; the likely outcome of the court proceedings is an important input into settlement negotiations.”