Machine learning and AI have the potential to make numerous and significant contributions to healthcare, but users shouldn’t lose sight of the legal questions that will inevitably accompany the spread of these technologies.
That’s according to Matt Fisher, general counsel for the virtual care platform Carium, who will be moderating a panel entitled "Sharing Data and Ethical Challenges: AI and Legal Risks" at the HIMSS Machine Learning & AI for Healthcare event this December.
"There are a bunch of different questions about where the risks and liabilities might arise,” Fisher observed in a recent interview with Healthcare IT News.”
On the cybersecurity side, Fisher said, the potential issues arise in the process of training the models rather than during their actual use.
"If big companies are contracting with a healthcare system, we're going to be working to develop new systems to analyze data and produce new outcomes," he said. “If a health system is transferring protected health information over to a big tech company, not only do you have the privacy issue, there's also the security issue. They need to make sure their systems are designed to protect against attack.”
Furthermore, Fisher added, a cyber breach is a matter of when, not if, and "[a]nyone working with sensitive information needs to be aware of and thinking about that."
Another risk comes in the form of a biased algorithm, which could lead to claims against the manufacturer or a healthcare organization.
"You've started to see electronic health record-related claims come up in malpractice cases," Fisher said, adding that if a patient experiences a negative result from a device at home, they could bring the claim against a manufacturer.
Clinicians who rely on a device in a medical setting without accounting for varied outcomes across different groups of people are also potentially at legal risk. "When you have these types of issues widely reported and talked about, it presents more of a favorable landscape to try and find people who have been harmed," said Fisher.
How to address and prevent such legal risks depends on the situation, Fisher said. Before subscribing to or implementing a tool, an organization should screen the vendor, asking how the algorithm was developed and how the system was trained.
"If it's going to be directly interacting with patient care, consider building [the device's functionality] into informed consent if appropriate," he said.
In the end, however, while an organization can take steps to reduce liability, it cannot fully shield itself from the threat of legal action.
"You can never prevent a case from being brought," Fisher said, but "you can try to set yourself up for the best footing."