Needed: ethical governance for fair and effective AI

There’s little question about AI’s potential to transform healthcare, says one stakeholder, but an ethical governing framework is key to ensuring that the changes are definite improvements.
Jeff Rowe

For all the potential benefits of AI in healthcare, even the most enthusiastic advocates are aware of the myriad ethical hurdles that must be cleared before the use of AI can be considered an essentially unmitigated good.

Writing recently at STAT, for example, Satish Gattadahalli, director of digital health and health informatics at Grant Thornton Public Sector, notes that AI has, among other things, “been observed to replicate racial, socioeconomic, and gender bias, (and) even when algorithms are free of structural bias, data interpreted by algorithms may contain bias that is replicated in clinical recommendations. Although algorithmic bias is not unique to predictive artificial intelligence, AI tools are capable of amplifying these biases and compounding existing health care inequalities.”

Given these and other potential misuses, however inadvertent, Gattadahalli argues that, on one level, “the implications for patient safety, privacy, and provider and patient engagement are profound,” while on another level, anyone who might have access to a patient’s record in which AI data have been included “could discriminate on the basis of speculative forecasts about mental health, or the risks of cognitive decline, cancer risk, opioid abuse, and more.”

In short, the potential for inadvertent misuse or conscious abuse is significant.

With those pitfalls in mind, Gattadahalli argues, any healthcare organization investing in AI must take several steps “to ensure that ethical principles are applied to all clinical, information technology, education, and research endeavors.”

First, he says, “establish ethics-based governing principles” so that AI initiatives adhere to key overarching principles and are shaped and implemented in an ethical way.

To that end, organizations should establish a digital ethics steering committee that includes the chief data officer, chief privacy officer, chief information officer, chief health informatics officer, chief risk officer, and chief ethics officer.

Further, says Gattadahalli, diverse focus groups — including patients, patient advocates, providers, researchers, educators, and policymakers — should be convened to “contribute to requirements, human centered design, and design reviews; identify training data biases early and often; and participate in acceptance testing.”

In addition, new algorithms should be subject to peer review. Specifically, “AI tools that will be applied to data involving race or gender should be peer-reviewed and validated to avoid compounding bias. Peer reviewers may include internal and external care providers, researchers, educators, and diverse groups of data scientists other than AI algorithm developers.”

Beyond those and other developmental precautions, Gattadahalli recommends ongoing training for staff and the continued monitoring of the algorithmic decision process.

At a minimum, he says, transparency must be required of AI developers and vendors, while access to patient data should be granted only to those who need it as part of a specific job. And patients should “demand informed consent and transparency of data in algorithms, AI tools, and clinical decision support systems.”