AI developer: AI is impressive, but doctors should still check the machine’s work

When it comes to evaluating AI, says one expert, we should recognize that there are still many open questions about how AI can best be deployed so that it properly augments medical professionals without adversely impacting public health.
Jeff Rowe

In a number of recent tests, AI software has “outscored” human doctors at diagnosing specific diseases. But not always, and therein lies a danger.

That’s one way of summing up a recent commentary in Wired by Arijit Sengupta, founder and CEO of Aible, a startup that creates AI for businesses.

Needless to say, Sengupta is a fan of AI and what it can potentially do for healthcare and other sectors of the economy. But he’s also quick to point out the risk of over-hyping what AI can do. More importantly, he argues, glossing over AI’s lapses, such as diagnoses where humans still outperform their AI counterparts, can be downright dangerous.

As an example, he points to a recent New York Times review of a study published in Nature. In short, the Times piece didn’t cover all aspects of the study, notably a portion in which physician accuracy outpaced that of the AI software.

As he explains his concern, “Misdiagnosis can have far-reaching negative impacts. … No doctor obeying the Hippocratic oath would ever put a patient’s life at risk to increase their accuracy score, and we as a society should never impose such a metric on doctors, or turn life-or-death decisions over to a machine incapable of understanding the value of a human life.”

Interestingly, Sengupta says, “AI is famous for ‘cheating the test,’ like how it beats certain games by exploiting bugs. In the case of the research published in Nature, the AI and the human doctors were reviewing the same write-up of symptoms by a group of trained physicians. Now, humans have quirks. Physicians may write much shorter notes when they know a patient doesn't have a disease but write more extensively when they suspect a disease. They may use shorter phrases or make more typos if they are really paying attention to the patient. Sometimes they update their notes if the patient had a disease but not if the patient was healthy. The AI, especially a deep-learning AI, would be really good at picking up such clues from the diagnosis notes, and that would give it a competitive advantage.”
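Sengupta’s “cheating the test” point is what machine-learning researchers often call shortcut learning: a model latching onto incidental signals that happen to correlate with the label rather than onto the medicine itself. The sketch below is a hypothetical illustration of that failure mode, not a reconstruction of the Nature study; the synthetic features (note length and edit counts) and all the numbers are invented for the demo:

```python
# Hypothetical sketch of "shortcut learning": a classifier that never reads
# the medical content can still score well if note-taking quirks leak the label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic labels: 1 = disease suspected, 0 = healthy (invented for the demo).
y = rng.integers(0, 2, size=n)

# Simulate the quirks Sengupta describes: physicians write longer notes and
# revise them more often when they suspect a disease.
note_length = rng.normal(loc=np.where(y == 1, 350.0, 120.0), scale=60.0, size=n)
num_edits = rng.poisson(lam=np.where(y == 1, 3, 1), size=n)

# The "model" sees only these incidental features -- no symptoms at all.
X = np.column_stack([note_length, num_edits])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Accuracy from note metadata alone: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
# Typically well above chance, even though the classifier knows nothing about
# medicine -- it has simply exploited how the notes were written.
```

A model that scores this way looks impressive on the benchmark, but its advantage evaporates the moment the note-taking habits change, which is exactly why, as Sengupta suggests, a high accuracy score alone says little about clinical value.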

Naturally, Sengupta is completely committed to the belief that “AI will profoundly change the world.” But he also believes overhyping the technology without first considering the implications of its behavior increases the risk that AI will be deployed in harmful ways.

“The next time you’re in your doctor’s office,” he says, “think about the decisions he or she makes on a daily basis about the intricacies of providing the right care to maximize not just health but quality of life. When those decisions impact your life, or the lives of your loved ones, do you want your doctor focused on beating the accuracy of an AI model? Or carefully weighing the best options for each individual?”