AI isn’t a solution; it’s an ongoing development that has the potential to transform the healthcare sector, but it will take careful consideration and preparation every step of the way.
That’s one way to sum up a recent commentary by Christopher Maiona, MD, CMO at PatientKeeper, a health IT solutions provider.
Writing recently at MedCityNews, Maiona clearly recognizes the opportunities AI can offer to the healthcare sector, from faster diagnoses to more personalized health plans to an array of administrative efficiencies. But, he adds, “it’s important to consider the implementation process, the associated benefits and risks, and its impact on how we train the physicians of tomorrow.”
For starters, he notes that AI is still in its early stages, and there’s a long period of trial and error ahead. Consequently, “instead of jumping at headlines about the possibilities of AI in the hospitals of the future, healthcare leaders should look to their physicians and find out what they would like to see from an AI system.”
Moreover, from the perspective of patients, Maiona says AI should be viewed largely as an assistant. “As another tool in a physician’s arsenal, AI can assist with decision making and diagnoses as needed. Constantly ingesting and processing a wealth of information, AI will arm providers with a better understanding of their patients’ unique situations. This could range from algorithms that identify symptoms and diagnose diseases to creating patient-specific care plans that draw on personal data such as medical history, genetics and national databases of constantly updated medical research.”
As for their own interests, Maiona says providers can rightfully view AI as a source of welcome assistance and relief, particularly for administrative tasks, as the specter of physician burnout looms ever larger around the world.
But, he asks, “how will this dynamic affect the training of physicians? Will AI impact physicians’ clinical skills? What if these systems crash? How will physicians deliver care? Even with proper vigilance, a potentially unhealthy dependence on AI systems may develop.”
Finally, says Maiona, there’s the growing awareness of the ethical issues that must be considered. “Patient data, privacy and unintended biases should all be evaluated before wide-scale implementation is undertaken. AI depends on pooling large volumes of data (data lakes) which requires sharing of information. Patients will need to feel confident in sharing their unique de-identified data with national databases, which means trust (in the system and the technology) is a prerequisite.”
Trust, of course, is paramount with any new technology, and AI is certainly no different. As Maiona sees AI’s future in healthcare, he believes the potential benefits “outweigh the risks and will contribute significantly to the emerging reality of a ‘learning healthcare system.’ But success hinges on the employment of intelligent workflows and instinctive user interfaces that increase efficiency, allow providers to practice at the top of their license and optimize safety.”