What’s the best way to talk with patients about the potential impact of specific AI tools in their healthcare?
That’s one way of summing up a recent study and commentary published in the AMA Journal of Ethics.
The commentary described a hypothetical case involving an assistive AI surgical device. It focused on the potential harms emerging from interactions between humans and AI systems, with an eye toward determining how best to distribute responsibility for informing patients about potential impacts.
According to the researchers, “medical ethics has begun to highlight concerns about uses of AI and robotics in health care, including algorithmic bias, the opacity and lack of intelligibility of AI systems, patient-clinician relationships, potential dehumanization of health care, and erosion of physician skill. In response, members of the medical community and others have called for changes to ethical guidelines and policy and for additional training requirements for AI devices.”
Given the potential of AI to augment human medical care, they added, the proper role of healthcare professionals vis-à-vis their digital counterparts is particularly relevant.
The researchers also noted that “interconnected with lack of knowledge about AI systems—including how errors could occur—are varied perceptions patients and healthcare professionals have about AI technology. Computing experts offer wide-ranging visions of where AI is going, from utopian views in which humanity’s problems are largely solved to dystopian scenarios of human extinction. These visions can influence whether patients . . . and physicians embrace AI (perhaps too quickly) or fear it (even though it might improve health outcomes).”
They pointed to a 2016 survey of 12,000 people across 12 European, Middle Eastern, and African countries, which found that only 47 percent of respondents would be willing to have a “robot perform a minor, non-invasive surgery instead of a doctor,” with that number dropping to 37 percent for major, invasive surgeries.
After weighing a number of scenarios from both provider and patient perspectives, the researchers suggested that “companies provide detailed information about AI systems, which can help ensure that physicians—and subsequently their patients—are well informed. By explaining to patients the specific roles of healthcare professionals and of AI and robotic systems as well as the potential risks and benefits of these new systems, physicians can help improve the informed consent process and begin to address major sources of uncertainty about AI.”