Teamwork: how AI and people can help each other

Machines do some things better than humans and other things not nearly as well, experts say. Humans and machines together, however, can be a powerful combination.
Jeff Rowe

“I think that all our patients should actually want AI technologies to be brought to bear on weaknesses in the health care system, but we need to do it in a non-Silicon Valley hype way.” 

Those sage words come from Isaac Kohane, a biomedical informatics researcher at Harvard Medical School, in a recent review of the promise and challenge of AI in healthcare by tech writer Jeremy Hsu.

As Hsu notes, there’s an abundance of optimism about AI across the medical community, and while much of that optimism likely has merit, it’s important to remember that we still have a ways to go before AI’s promise becomes widespread reality.

“If AI works as promised,” says Hsu, “it could democratize health care by boosting access for underserved communities and lowering costs . . . AI systems could free overworked doctors and reduce the risk of medical errors that may kill tens of thousands, if not hundreds of thousands, of U.S. patients each year. And in many countries with national physician shortages, such as China where overcrowded urban hospitals’ outpatient departments may see up to 10,000 people per day, such technologies don’t need perfect accuracy to prove helpful.”

At the same time, he cites critics who are concerned that a hasty implementation of AI could negate its promise if it “tramples patient privacy rights, overlooks biases and limitations, or fails to deploy services in a way that improves health outcomes for most people.”

Summing up the irony of AI and other technological advances, Jayanth Komarneni, founder and chair of the Human Diagnosis Project (Human Dx), a public benefit corporation focused on crowdsourcing medical expertise, observes, “In the same way that technologies can close disparities, they can exacerbate disparities. And nothing has that ability to exacerbate disparities like AI.”

Hsu takes the time to itemize some of the major challenges of AI, including the need for new algorithms to “learn” on clean data, and the fact that “[b]ecause AI systems lack the general intelligence of humans, they can make baffling predictions that could prove harmful if physicians and hospitals unquestioningly follow them.”

On that note, he says, “the people and companies providing AI services will need to sort out legal liability in the case of inevitable mistakes. And unlike most medical devices, which usually need just one regulatory approval, AI services may require additional review whenever they learn from new data.”

To a significant extent, Hsu argues, the long-term success of AI depends on the different visions for implementing it. “Early development has focused on very narrow diagnostic applications, such as scrutinizing images for hints of skin cancer or nail fungus, or reading chest X-rays. But more recent efforts have tried to diagnose multiple health conditions at once.”

Still, he concludes, “if AI services pass all the tests and real-world checks, they could become significant partners for humans in reshaping modern health care.”