Expert: Legal and ethical concerns sure to increase as AI spreads

The use of artificial intelligence in medicine is generating great excitement and hope for treatment advances, but policymakers should prepare now for an array of legal and ethical challenges.
Jeff Rowe

Artificial Intelligence (AI) “involves the analysis of very large amounts of data to discern patterns, which are then used to predict the likelihood of future occurrences.”

As a generic definition, that observation from Sharona Hoffman, Professor of Health Law and Bioethics at Case Western Reserve University, doesn’t sound so bad, and in a recent commentary Hoffman is quite optimistic about the benefits of AI in healthcare and other sectors.

At the same time, however, Hoffman points to the importance of understanding the risks and potential downsides of AI, noting that AI in medicine raises quite a few legal and ethical concerns. "Several of these are concerns about privacy, discrimination, psychological harm and the physician-patient relationship," she explains, and she urges policymakers to establish safeguards, just as has happened with the spread of genetic testing.

The problem in a nutshell, Hoffman argues, is that "[i]f AI generates predictions about your health, I believe that information could one day be included in your electronic health records.

"Anyone with access to your health records could then see predictions about cognitive decline or opioid abuse. Patients' medical records are seen by dozens or even hundreds of clinicians and administrators in the course of medical treatment. Additionally, patients themselves often authorize others to access their records: for example, when they apply for employment or life insurance."

The potential problems aren't simply on the individual level, either. Says Hoffman, "Data broker industry giants such as LexisNexis and Acxiom are also mining personal data and engaging in AI activities. They could then sell medical predictions to any interested third parties, including marketers, employers, lenders, life insurers and others."

That gets particularly tricky, she explains, because non-healthcare entities don’t fall under HIPAA and, therefore, don’t have to seek permission to disclose health information.

“Such disclosures can lead to discrimination. Employers, for instance, are interested in workers who will be healthy and productive, with few absences and low medical costs. If they believe certain applicants will develop diseases in the future, they will likely reject them. Lenders, landlords, life insurers and others might likewise make adverse decisions about individuals based on AI predictions.”

In her view, the core of a solution to these risks is to expand both HIPAA and the Americans with Disabilities Act: the first to cover any entity that handles health information for business purposes, the second "to prohibit discrimination based on forecasts of future diseases."

Indeed, Hoffman concludes, AI holds great promise, but “to ensure that AI truly promotes patient welfare, physicians, researchers and policymakers must recognize its risks and proceed with caution.”