Regulatory landscape changing rapidly as AI use increases

AI tools increasingly permeate the healthcare landscape, and providers need to understand the legal and regulatory implications of that transformation sooner rather than later.
Jeff Rowe

There’s still a long way to go before AI tools are fully integrated into the healthcare system, but enough AI is already in use that providers need to be increasingly well-versed in both the regulatory trends and their legal consequences.

So argues a team of attorneys from Polsinelli PC in the first of a three-part series posted in recent months at Bloomberg Law.

The first thing for providers to understand, say Iliana Peters, Liz Harding and Lindsay Dailey, is that while they may already recognize the growing role of AI in pharma circles or surgical robots, they “may not realize that the clinical decision support, claims review, and voice-to-text transcription tools that they use also include AI. Healthcare system IT staff also rely heavily on AI tools to detect and combat cyber threats to the information that healthcare providers need to provide quality care.”

And what that means is that “important state, federal, and international legal requirements” may already be in play, designed to control and monitor how personal information is used in healthcare decisions.

In short, they say, “healthcare practitioners of all types should understand not only the privacy and security concerns, but also the ethical implications with using AI tools, particularly given how prevalent they are now, and will become, in the healthcare industry.”

To help providers understand the full implications of AI, the three attorneys lay out what they consider “best practices.”

First, they say, providers “must ensure that the developers of their AI tools build in the necessary security requirements from the beginning, and must understand how these tools use their data, particularly patient and employee information.”

Second, given that “AI tools are only as good as their programming and the data they use to achieve their goals,” providers need to ensure that the AI tools they use don’t access or collect data they don’t actually need. After all, they note, large stores of unused data sitting in repositories carry significant legal exposure given the privacy risks involved.

Finally, providers and healthcare organizations need to be on the watch for the possibility of any “unintended outcomes related to ethical questions about the use of AI, including considerations of how the AI tools are used in practice and with regard to individuals of certain ages, ethnicities, or genders.”

In other words, be sure to have your policies and protocols in place, or well on the way there, even if it may seem “early” in the move toward the pervasive use of AI.

“There are significant privacy requirements and security controls that are necessary to ensure not only that patient and employee information is used (or disclosed) only as permitted by state data privacy and security laws, HIPAA and other federal laws, and GDPR and other international laws, but also that patients and employees have rights to their information, as provided for by such laws,” the attorneys point out.

Knowing what those requirements and controls are is key to a smooth and effective path toward fully realizing the benefits of AI in healthcare.