How can health system executives ensure that their AI and machine learning endeavors protect patient data, comply with HIPAA, and minimize security threats?
That’s the question tech writer Mandy Roth recently posed to a pair of cybersecurity experts at HealthLeaders Media.
As Clearwater CEO Steve Cagle summed up the problem, data stockpiles are growing by nearly 50 percent per year, and the challenge for IT managers in healthcare and other sectors is keeping pace: there are “more points of access” to those stockpiles at the same time that “cyber criminals and cyber-attacks are becoming much more sophisticated in nature.”
An additional problem, said Cagle, is that “unfortunately healthcare has been catching up a bit when it comes to cyber security. The industry has made great strides over the last several years, but, compared to other industries, they're just not there yet.”
Citing limited resources and budgets as primary factors in this dynamic, he noted, “The technology has outpaced the security, and security, oftentimes, has not been designed into the solution.”
In addition, some organizations still struggle with HIPAA compliance.
Kenny Pyatt, senior director of engineering at Digital Reasoning, which recently announced a three-year cyber risk partnership with Clearwater, added that when partnering with AI technology companies, "healthcare executives must insist upon transparency, communication, and a willingness to open the 'black box.' Without an ethical, secure, and transparent partnership, healthcare executives won't even know the potential risks."
The two men also noted that health systems should ensure that outside parties are compliant with mandates and guidelines issued by multiple organizations, including HHS’ Office for Civil Rights and the National Institute of Standards and Technology.
In addition, Cagle pointed out, healthcare organizations should conduct enterprise-wide security risk assessments. "By doing that, you can identify where you have the most risk," he explained. Once you "understand where those exposures are, you [can] risk-rate them and … identify the best way to go about reducing risk to a level that's acceptable to [your] organization.”
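For readers curious what that risk-rating step might look like in practice, here is a minimal, hypothetical sketch in Python. The exposure names, the 1-to-5 likelihood and impact scales, and the acceptable-risk threshold are illustrative assumptions, not Clearwater's methodology.

```python
# Hypothetical illustration of risk-rating exposures found in an
# enterprise-wide security risk assessment. Scales, exposures, and the
# acceptable-risk threshold are illustrative assumptions only.

# Each exposure: (description, likelihood 1-5, impact 1-5)
exposures = [
    ("Unencrypted backups of patient records", 3, 5),
    ("Shared vendor credentials for an imaging archive", 4, 4),
    ("Out-of-date operating system on a lab workstation", 2, 3),
]

ACCEPTABLE_RISK = 9  # organization-defined tolerance on a 1-25 scale

def risk_score(likelihood: int, impact: int) -> int:
    """Simple likelihood-times-impact rating."""
    return likelihood * impact

# Rank exposures so remediation effort goes to the highest risks first.
ranked = sorted(exposures, key=lambda e: risk_score(e[1], e[2]), reverse=True)

for description, likelihood, impact in ranked:
    score = risk_score(likelihood, impact)
    status = "remediate" if score > ACCEPTABLE_RISK else "accept / monitor"
    print(f"{score:>2}  {status:<16} {description}")
```

The point of the exercise is not the arithmetic but the ranking: it gives the organization a defensible way to decide which exposures to remediate first and which it can accept or monitor.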
As for the questions healthcare organizations should ask when choosing AI partners, Cagle and Pyatt suggest several as a starting point for minimizing risk, strengthening cybersecurity, and validating AI solutions, including:
- How will you process and store our data?
- Describe the environment where data is stored; who will have access to it?
- Are you willing to share the results of all experiments conducted and to monitor model performance over time?
- Can you demonstrate results with validated data and methods supported by peer-reviewed research?
- What encryption will you use to protect the data?
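As a point of reference for that last question, the snippet below sketches one common kind of answer: authenticated symmetric encryption of records at rest, here using the open-source Python cryptography package's Fernet recipe. It is a generic illustration under assumed requirements, not a description of what any particular AI vendor does; a real deployment would add managed key storage, encryption in transit, and access controls.

```python
# Hypothetical sketch: encrypting a patient record at rest with the
# "cryptography" package's Fernet recipe (authenticated symmetric encryption).
# Key handling is simplified; production systems would fetch keys from a
# dedicated key management service rather than generate them in memory.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in practice, retrieved from a key vault
cipher = Fernet(key)

record = b'{"mrn": "000000", "note": "example clinical text"}'
token = cipher.encrypt(record)    # ciphertext safe to write to storage

# Later, an authorized service decrypts the stored token.
assert cipher.decrypt(token) == record
```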