While there’s no shortage of discussion about the potential impact of new AI on the delivery and effectiveness of healthcare, what’s still often lost is due consideration of the uncertain effects AI might have on patient-physician relationships.
In a recent piece in JAMA, Robert Wachter, MD, of UC San Francisco, author of the 2017 bestseller The Digital Doctor, and two colleagues tackle the question: “How can patient-physician trust be maintained or even improved with the introduction of AI?”
“When considering the implications of health care AI on trust, a broad range of health care AI applications need to be considered,” the three write, “including (1) use of health care AI by physicians and systems, such as for clinical decision support and system strengthening, physician assessment and training, quality improvement, clinical documentation, and nonclinical tasks such as scheduling and notifications; (2) use of health care AI by patients, including triage, diagnosis and self-management; and (3) data for health care AI, involving the routine use of patient data to develop, validate and fine-tune health care AI as well as to personalize the output of health care AI.”
Each of these applications, they suggest, has the potential to either enable or erode three key components of trust:
Competency: For physicians, “competency” means demonstrated and communicated levels of clinical mastery, the authors suggest. For patients, it’s reflected in how well they demonstrate an understanding of their own health status.
“Because much of AI is and will be used to augment the abilities of physicians, there is potential to increase physician competency and enable patient-physician trust,” they write. “On the other hand, trust will be compromised by AI that is inaccurate, biased or reflective of poor-quality practices as well as AI that lacks explainability and inappropriately conflicts with physician judgment and patient autonomy.”
Motive: This refers in part to a patient’s trust that the physician is acting solely in the patient’s interest. On the physician’s side, it concerns whether the physician believes the patient is self-informing in order to collaborate on care or in response to an emotional need to feel in control.
“Through greater automation of low-value tasks, such as clinical documentation, it is possible that AI will free up physicians to identify patients’ goals, barriers and beliefs, and counsel them about their decisions and choices, thereby increasing trust,” Wachter et al. write. “Conversely, AI could automate more of the physician’s workflow but then fill freed-up time with more patients with clinical issues that are more cognitively or emotionally complex.”
Transparency: Finally, patients are more likely to be reassured when AI tools help them see that clinical decisions are based on evidence and expert consensus, the three write.
“[I]f patient data are routinely shared with external entities for AI development, patients may become less transparent about divulging their information to physicians, and physicians may be more reluctant to acknowledge their own uncertainties,” they warn. “AI that does not explain the source or nature of its recommendations (‘black box’) may also erode trust.”
AI in healthcare is bound to reshape relationships between physicians and patients, the authors conclude, but “it need not automatically erode trust between them. By reaffirming the foundational importance of trust to health outcomes and engaging in deliberate system transformation, the benefits of AI could be realized while strengthening patient-physician relationships.”