Expert: AI clearing regulatory hurdles faster than ever

According to one AI developer, two very human characteristics – empathy and the capacity to make decisions – will always be a critical part of the role of doctors.
Jeff Rowe

Clearing regulatory hurdles has been an ongoing challenge for AI innovators as new algorithms and tools have been developed for the healthcare sector, but according to one stakeholder progress on that front is steadily being made.

"It's fair to say to say that the regulatory barrier is ... no longer the current battle of AI in healthcare," Yann Fleureau, CEO and founder of diagnostic AI company Cardiologs, said during a HIMSS20 online seminar, "We have all these technologies that have been validated by the regulators because they have been considered to be safe enough to not create unnecessary patient harm, and the next step is adoption by the caregiver community.”

According to Fleureau, regulations are only part of the challenge for developers, as new technologies also need time to gather clinical validation through repeated use before providers will feel comfortable adopting them.

"Most of the validation that has been done so far has been performed by teams that are not necessarily device manufacturers; [in other words] not teams that are going to bring the product to market," he said. "Proper clinical validation in the real world requires more time, and is the next thing for the AI in healthcare industry to tackle to yield widespread adoption."

But even acceptance in clinical settings won't mean AI is universally applicable, Fleureau cautioned. In particular, he noted, AI will never be able to replace the human capacity for empathy.

"Even if an AI reaches what I'd say is perfect performance [and] has a capacity to find all the biomarkers we think about, at the end of the day medicine is not just about technique," Fleureau said. "There is something way more important in the relationship between a patient and a doctor. And there's a very simple reason to that: it's because there are many decisions and questions in medicine for which there is no right or wrong answer."

As Fleureau sees the future of AI, no matter how successful and widespread clinical-decision support tools become, doctors will still be needed to ensure that individual patients continue to receive genuinely human care.

"Doctors care about the individual, the person, and so it is fair to believe that in the future one of the key roles played by doctors ... will be to be the person in the healthcare organization to have transgression rights," he said. "[They will be] the only person in the room who will have the right to say, against all technology, against all omnipotent AI, 'I'm going against that decision, that score, that AI recommendation, because I am the person responsible for this patient.’"