Human doctors may call patients by their first names, but AI doctors need to earn their patients’ trust.
That’s one key takeaway from a recent study by researchers from Penn State and the University of California, Santa Barbara (UCSB), which found that patients are more likely to consider an AI health chatbot intrusive if it already knows their medical history and “chats” with them on familiar terms.
“Machines don’t have the ability to feel and experience, so when they ask patients how they are feeling, it’s really just data to them,” explained S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and an affiliate of Penn State’s Institute for Computational and Data Sciences (ICDS). “It’s possibly a reason why people in the past have been resistant to medical AI.”
On the other hand, said Joseph B. Walther, distinguished professor in communication and the Mark and Susan Bertelsen Presidential Chair in Technology and Society at UCSB, machines do have advantages as medical providers.
For example, similar to a family doctor who has treated a patient for a long time, computer systems could — hypothetically — know a patient’s complete medical history.
“This struck us with the question: ‘Who really knows us better: a machine that can store all this information, or a human who has never met us before or hasn’t developed a relationship with us, and what do we value in a relationship with a medical expert?’” said Walther. “So this research asks, who knows us better — and who do we like more?”
To find the answer to that question, the team designed five chatbots for the two-phase study, recruiting a total of 295 participants for the first phase, 223 of whom returned for the second phase. In the first part of the study, participants were randomly assigned to interact with a human doctor, an AI doctor, or an AI-assisted doctor.
In the second phase of the study, the participants were assigned to interact with the same doctor again. However, when the doctor initiated the conversation in this phase, it either identified the participant by their first name and recalled information from the last interaction, or it asked again how the patient preferred to be addressed and repeated questions about their medical history.
“One of the reasons we conducted this study was that we read in the literature a lot of accounts of how people are reluctant to accept AI as a doctor,” said Cheng Chen, doctoral student in mass communications at Penn State. “They just don’t feel comfortable with the technology and they don’t feel that the AI recognizes their uniqueness as a patient. So, we thought that because machines can retain so much information about a person, they can provide individuation, and solve this uniqueness problem.”
The study’s findings, however, suggest that this strategy can backfire.
“When an AI system recognizes a person’s uniqueness, it comes across as intrusive, echoing larger concerns with AI in society,” said Sundar.