Experts: chatbots need work to become better listeners

Chatbots may significantly lower health systems' operating costs, says a team of doctors, but evaluation and research are critical to keep the trust of both patients and healthcare workers.
Jeff Rowe

While healthcare organizations have been steadily incorporating AI-driven chatbots into their patient outreach efforts, particularly as the COVID-19 pandemic has given rise to so many questions and concerns, some stakeholders suggest more thought needs to be given to the myriad clinical, ethical and legal aspects of the technology before implementation proceeds much further.

In a recent commentary in JAMA, a trio of doctors from the University of Pennsylvania and SUNY Buffalo point to the range of complexities – medical, ethical and legal – chatbots are quickly being asked to manage at a time when the technology is still relatively new.

For example, the team writes, “Patient safety considerations reflect the difficulty of CAs (Conversational Agents) in interpreting patient meaning. It remains unclear how well CA systems will recognize nuanced statements that may signal potential harm or a benign action, such as a postpartum mother’s statement: ‘I’m having feelings I’ve never had before—I’m going onto the balcony.’”

In their viewpoint, the authors believe they have laid out key considerations that can inform a framework for decision-making about implementing chatbots in healthcare, one that should apply even when rapid implementation is required to respond to events like the spread of COVID-19.

"We need to recognize that this is relatively new technology and even for the older systems that were in place, the data are limited," said the viewpoint's lead author, John D. McGreevey III, MD, an associate professor of Medicine in the Perelman School of Medicine at the University of Pennsylvania, in a statement. "Any efforts also need to realize that much of the data we have comes from research, not widespread clinical implementation. Knowing that, evaluation of these systems must be robust when they enter the clinical space, and those operating them should be nimble enough to adapt quickly to feedback.”

In their article, the authors lay out 12 focus areas that should be considered when planning to implement a chatbot in clinical care.

For example, “to what extent should chatbots be extending the capabilities of clinicians, which we'd call augmented intelligence, or replacing them through totally artificial intelligence?" co-author Ross Koppel wondered. "Likewise, we need to determine the limits of chatbot authority to perform in different clinical scenarios, such as when a patient indicates that they have a cough, should the chatbot only respond by letting a nurse know or digging in further: 'Can you tell me more about your cough?’"

In short, the team notes, “the use of CAs may improve health outcomes and lower costs, [but] researchers and developers, in partnership with patients and clinicians, should rigorously evaluate these programs. Further consideration and investigation involving CAs and related technologies will be necessary, not only to determine their potential benefits but also to establish transparency, appropriate oversight, and safety.”