Ethics of AI sparks debate among experts

The key is not to stifle AI development, say providers with ethical concerns, but to ensure that new technologies assist, not replace, the doctors on the front line of healthcare.
Jeff Rowe

As AI continues to be developed and implemented in healthcare, discussions of the ethical questions raised by its use are increasingly front and center.

For example, at the recent annual meeting of the Society for Imaging Informatics in Medicine (SIIM), a panel of experts took on the issue of ethics in AI development, debating specifically whether vendors have a right to buy patient data to create algorithms or whether patients should be the gatekeepers of their own imaging data.

According to reports of the discussion, Adam B. Prater, MD, MPH, assistant professor of radiology and imaging sciences at Emory University School of Medicine in Atlanta, considered the issue from the business side, arguing that healthcare needs vendors if radiology is to realize AI’s true potential.

“Every other industry has artificial intelligence; we’re super behind and we need vendors for progress. Healthcare can’t solve everything by itself,” he said. 

One key question, Prater conceded, is whether vendors can be trusted with the patient data needed to train algorithms, but he suggested the concern was exaggerated, noting that if vendors follow HIPAA, perhaps with stronger data use agreements for added protection, they should be able to purchase data on an open market.

“(Vendors) are not out to harm people, we’re actually trying to make AI good and save lives, so this is really about patients,” Prater told the audience.

On the other side of the issue was Patricia Balthazar, MD, a radiology resident at Emory, who argued that patients should come first when their data is used to create AI.

“We (clinicians) said that we should involve patients in the decision-making, and we have consents for every procedure,” Balthazar said. “If the patient is getting an imaging study, they should know that (hospitals are) selling the image to someone else.”

But patient impacts aren't the only concern accompanying the rise of AI in healthcare. There is also no small amount of hand-wringing over its impact on healthcare workers.

At a recent conference in Singapore, for example, a panel discussed the idea that technologies such as AI should not replace humans but should augment the work of clinicians and, with luck, even enhance the patients’ interactions with their doctors.

Dr Philip Wong, a practicing cardiologist and founder of WEB Biotechnology, for example, pointed out that doctors have a sense of compassion and empathy that drives them to help patients regardless of their condition, something AI cannot replicate at the current moment.

“Any thing (or tech) that is adopted by the hospitals, we always have qualified specialists or ‘men/women in the loop,’” he explained. “For instance in radiology, even though a lot of the analysis is done by AI, in the end it is the radiologist or specialist radiologists who have to sign off the report.”

His wish as a doctor, he said, was “for a ‘digital health persona’ for patients, to get a lot more health information from them and try to solve their problems. With the digital health persona, health information is collected continuously, even outside the hospital, and the information can even be front-loaded to the doctor before the patient sees him/her. This gives the doctor a broader picture of the patient’s health.”