“I’ve had to get comfortable with the fact that my ignorance is increasing every day.”
That bracing observation comes from C. Donald Combs, PhD, vice president of the School of Health Professions at Eastern Virginia Medical School (EVMS), in a recent article at HealthITAnalytics about how AI is changing not only how medicine is practiced, but also how it’s taught.
As the article’s writer sums it up, given the “broad potential impact” of current and emerging AI across the healthcare sector, “future and current providers will need training on how to better understand the tools and the ethical implications associated with adopting these technologies into clinical practice.”
To be sure, some physicians and med students argue that their focus should be on patients, but as Combs points out, “It’s no longer sufficient to read the New England Journal of Medicine, JAMA, Academic Medicine, Nature, and Science. There’s no way to keep up with all the current information. We need to be teaching a strategy for trying to keep up with the information. . . . There are huge amounts of data out there, and our students are going to have to learn not to keep up with all the data because an algorithm can do that. They’re going to have to learn what’s useful, and they’re going to have to interact with patients in a different way.”
To that end, he says, students need to learn probability and the methods used in AI, and as “AI becomes more integrated into clinical care, medical curricula will need to shift their focus from hunting and synthesizing data to interpreting results and emphasizing compassion.”
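To make that concrete, consider the most basic probabilistic skill in diagnostics: relating a test’s sensitivity and specificity to disease prevalence. Here is a minimal sketch in Python; the function name and every number in it are invented for illustration and do not come from the article.

```python
# Hedged illustration only: the numbers and function below are invented
# to show the kind of probabilistic reasoning Combs describes; nothing
# here comes from the article itself.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test), computed via Bayes' theorem."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# A test that is 95% sensitive and 95% specific, applied to a condition
# with 1% prevalence, yields a positive that is wrong most of the time:
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.95, specificity=0.95)
print(f"P(disease | positive test) = {ppv:.1%}")  # roughly 16%
```

In a low-prevalence population, most positives are false positives, precisely the sort of computation an algorithm handles instantly but a clinician must still know how to interpret.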
Interestingly, and perhaps ironically given the hype around AI, the article points out that a significant part of learning to use AI is learning how to steer AI queries in the right direction.
Says the writer, “training needs to highlight how to understand AI output and question AI methodologies to avoid unsafe exploration. Continuous learning in AI technologies is designed so that the software takes in new information and uses that information to generate more accurate results. . . . Without proper training on how to understand and question AI outputs, providers can become complacent in questioning results and misdiagnose or over-diagnose a patient.”
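The “continuous learning” the writer mentions is straightforward to sketch: a model that updates incrementally as new cases arrive, rather than being trained once and frozen. The example below is an assumption-laden illustration, with synthetic data and scikit-learn’s SGDClassifier standing in for whatever software a real clinical system would use.

```python
# Hedged sketch: the data are synthetic and the choice of scikit-learn's
# SGDClassifier is an assumption for illustration; the article does not
# name any particular algorithm or library.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(seed=0)
model = SGDClassifier(loss="log_loss")  # logistic regression trained by SGD
classes = np.array([0, 1])

# Simulate batches of new (synthetic) patient records arriving over time.
for batch in range(5):
    X = rng.normal(size=(100, 4))                   # four made-up features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # made-up labels
    model.partial_fit(X, y, classes=classes)        # incremental update

# Each partial_fit call nudges the model's parameters without retraining
# from scratch, so its behavior can drift as new data arrives.
```

That drift is exactly why the article insists providers keep questioning AI outputs: a model that updates itself today may not behave as it did when it was last validated.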
The ideal, says Combs, is that “We’re going to spend less time gathering information and trying to sort through that ourselves as we identify trustworthy AI.”
And that, he says, will allow providers to spend more time on the reason they went into medicine in the first place: caring for patients.