There is little doubt that AI is already transforming healthcare in dramatic ways, but it is doing so without full transparency, which raises significant ethical concerns.
So writes Satish Gattadahalli, Director of Digital Health and Health Informatics, Grant Thornton Public Sector, in a recent commentary at HealthDataManagement.
In his view, the lack of transparency, or “black box” phenomenon, with AI makes it difficult to verify or trust the outputs.
For example, he says, “healthcare AI tools have been observed to replicate racial, socioeconomic and gender bias. Even when algorithms are free of structural bias, data interpreted by algorithms may contain bias later replicated in clinical recommendations.”
Given the risks involved in trusting AI diagnoses and, by extension, in ensuring safe care for patients, Gattadahalli offers a substantial list of steps healthcare systems should take to ensure that ethical principles are applied across all clinical, information technology, education, and research endeavors.
At the top of the list is the establishment of overarching ethics-based governing principles to, among other things, protect against patient harm, ensure that AI tools are designed and developed using transparent protocols, and ensure that patients are apprised of the known risks and benefits of AI technologies so they can make informed medical decisions.
Another step Gattadahalli recommends is subjecting algorithms to peer review. “Rigorous peer review processes are essential to exposing and addressing blind spots and weaknesses in AI models. In particular, AI tools that will be applied to data involving race or gender should be peer reviewed and validated to avoid compounding bias.”
When it comes to helping clinicians interpret AI results correctly, Gattadahalli notes that AI technologies “should be designed and implemented in a way that augments, rather than replaces, professional clinical judgement.”
Turning to the patient side of the healthcare equation, Gattadahalli says that “as applications of AI in healthcare evolve, a strategic messaging strategy is important to ensuring the key benefits and risks of healthcare AI will be understood by patients. A robust training plan must also explore ethical and clinical nuances that arise among patients, caregivers, researchers and AI systems.”
Finally, because AI will continue to evolve, change will be constant. Consequently, Gattadahalli says, “algorithmic decision processes must be monitored, assessed and refined continuously.”
In the end, notes Gattadahalli, the goal of these and other ethical guidelines is to reduce ethical risks to patients, providers, and payers, while also enhancing the public’s trust.