"It's time for a 'more nuanced and thoughtful conversation' about AI technologies."
So said Microsoft Corporate Vice President Peter Lee on a panel of healthcare and technology experts at HIMSS19 in Orlando last week.
Lee was quick to note that AI has already done wonders for healthcare, saying, "Even if we never move beyond the current state of the art, we have a decade of application and value to extract" from existing AI-derived datasets.
Still, he and the other panel members agreed that while AI has quickly found its way into every corner of healthcare, "from patient-facing chatbots to imaging interpretation to advanced analytics applications," there remains a host of ethical questions about how, where and to what extent AI and machine learning apps should be deployed.
As Microsoft Associate General Counsel Hemant Pathak put it, there are "bright lines that we don't want to cross."
According to Pathak, Microsoft has been working to identify those lines, establishing, for example, an internal institutional review board that weighs the company's approach to developing facial recognition technology. And this past December, Microsoft published a lengthy blog post in which it said it was "time for action" on that particular strain of AI and called for "governments in 2019 to start adopting laws to regulate this technology."
Echoing that call for caution, Susannah Rose, associate chief experience officer at Cleveland Clinic, said that for all the huge potential of AI applications, they still need to be closely monitored.
"It's not just how AI is diffused in the [healthcare] system; it's the structure of how we'll be testing it," she said. With machine learning applications, it's critical that "we not abandon the notions of rigorous testing that we have in healthcare today," she added. "I don't think AI can be any exception to that sort of rigorous involvement."
As the technology continues to evolve almost daily, it is already delivering immense benefits to consumers, but it also carries perils.
"Even small defects in the training samples can cause unpredictable failures," said Lee. "Understanding blind spots and bias in the models" is a must-have for safe integration of AI into clinical workflows.