Duke U. puts practical implications first in AI development

The Duke research team focuses on tools that add clear value by asking frontline health workers what would be most helpful.
Jeff Rowe

While the technological side of AI in healthcare has been getting significant attention for quite some time, the impact its introduction has had on frontline healthcare workers has received considerably less notice from stakeholders and observers alike.

That’s according to a recent article at STAT that examines how Duke University is trying a different approach to AI development, one that begins by asking hospital staffers what specific AI tools they would find helpful, then developing them with those specific uses in mind.

“You want people thinking critically about the implications of technology on society,” said Mark Sendak, population health and data science lead at the Duke Institute for Health Innovation, summing up the overall approach.

As the article notes, “(g)etting practitioners to adopt AI systems that are either opaquely defined or poorly introduced is arduous work. Clinicians, nurses, and other providers may be hesitant to embrace new tools — especially those that threaten to interfere with their preferred routines — or they may have had a negative prior experience with an AI system that was too time-consuming or cumbersome.”

As Sendak puts it, “You don’t start by writing code. Eventually you get there, but that happens in parallel with clinicians around the workflow design.”

The article looks specifically at efforts to develop an algorithm that aims to determine the chances a hospital patient will develop sepsis. It notes that “previous projects have produced AI tools designed to save clinicians time and effort, such as an easy-to-use algorithm that spots urgent heart problems in patients, (while) others improve the patient experience, such as a deep learning tool that scans photographs of dermatology patients’ skin and lets clinicians more rapidly slot them into the appropriate treatment pathways for faster care.”
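
To give a sense of what a risk model of this kind involves, here is a minimal sketch of a sepsis-style risk score: a logistic regression trained on synthetic vital-sign data. Everything in it is illustrative; the feature set, the synthetic labels, and the scikit-learn setup are assumptions for demonstration only, not Duke’s actual model, which the article does not detail.

```python
# Hypothetical sketch of a sepsis risk score: logistic regression over a few
# routinely charted vitals and labs. All data here is synthetic; real systems
# draw on far richer electronic health record inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for patient observations:
# [heart_rate, resp_rate, temperature_c, wbc_count, lactate]
n = 5000
X = np.column_stack([
    rng.normal(85, 15, n),    # heart rate (bpm)
    rng.normal(18, 4, n),     # respiratory rate (breaths/min)
    rng.normal(37.0, 0.7, n), # temperature (deg C)
    rng.normal(9, 3, n),      # white blood cell count (10^9/L)
    rng.normal(1.5, 0.8, n),  # lactate (mmol/L)
])
# Synthetic labels: risk rises with tachycardia, fever, and elevated lactate.
logit = 0.04 * (X[:, 0] - 85) + 0.2 * (X[:, 2] - 37.0) + 0.9 * (X[:, 4] - 1.5) - 2.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Report discrimination on held-out data. As the article stresses, a score
# like this is only the starting point; fitting it into clinical workflow
# is the harder problem.
probs = model.predict_proba(X_test)[:, 1]
print(f"held-out AUROC: {roc_auc_score(y_test, probs):.3f}")
```

A model like this outputs a probability per patient; the workflow questions the Duke team focuses on, such as who sees the score, when, and what they are expected to do with it, sit entirely outside the code.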

What makes the Duke approach unique is that after potential AI tools are modeled by a group of in-house engineers and medical staff, reviewed by the innovation team and associate dean, and released, a team of anthropologists and sociologists studies their real-world impacts. 

The article continues: “Among their questions: How do you ensure that frontline clinicians actually know when — and when not — to use an AI system to help inform a decision about a patient’s treatment? Clinicians, engineers, and frontline staff have a constant feedback loop in weekly faculty meetings.”

As the article sums it up, “Duke’s approach is an effort at transparency at a time when the vast majority of AI tools remain understudied and often poorly understood among the broader public. Unlike drug candidates, which are required to pass through a series of rigorous steps as part of the clinical trial process, there’s no equivalent evaluation system for AI tools, which experts say poses a significant problem.”

At Duke, the article concludes, “the researchers aren’t just looking at how accurate their models are — but also how effective they are in the real world setting of a hectic hospital.”