Even as healthcare providers and researchers turn to AI for help identifying and treating diseases, it’s becoming increasingly clear that AI has a growing role to play in the critical area of cybersecurity, in healthcare and other sectors alike.
That’s according to a new report by The Economist Intelligence Unit, sponsored by law firm Pillsbury Winthrop Shaw Pittman LLP.
“The frequency and scale of cyber incidents and data breaches have grown exponentially, and in short order,” the report notes at the beginning. “But artificial intelligence (AI) tools can play an important role in alleviating this growing exposure. These emerging technologies are well-suited to address some of the largest gaps in existing cyber defenses, providing 24-7 system monitoring, streamlining threat detection efforts and independently improving efficacy over time. They can offer layers of organizational data protection not previously available, mitigating human error and ensuring compliance with established cybersecurity policies.”
According to the report, the market for AI in cybersecurity is projected to grow at a compound annual growth rate (CAGR) of 23.66 percent from 2020 to 2027. With that growth in mind, the report surveys the array of possibilities, and the significant challenges, that come with using AI to enhance cybersecurity efforts.
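To put that growth rate in perspective, a 23.66 percent CAGR compounds to roughly a 4.4x increase over the seven years from 2020 to 2027. The short Python sketch below simply works through that arithmetic; the base-year market size used is a hypothetical placeholder, not a figure from the report.

```python
# Illustrative only: what a 23.66% CAGR compounds to over 2020-2027.
# The base-year market size below is a placeholder, not a figure from the report.

CAGR = 0.2366
YEARS = 2027 - 2020  # seven compounding periods

growth_multiple = (1 + CAGR) ** YEARS
print(f"Growth multiple over {YEARS} years: {growth_multiple:.2f}x")  # ~4.42x

base_market_usd_bn = 10.0  # hypothetical starting market size, in $bn
projected = base_market_usd_bn * growth_multiple
print(f"A ${base_market_usd_bn:.0f}bn market would reach roughly ${projected:.0f}bn by 2027")
```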
The most obvious benefit AI brings is its capacity to automate security incident detection by leveraging existing and emerging threat intelligence.
“AI can basically process large numbers of data files all at once, which is obviously a lot faster than a human could and that really is important,” says AJ Abdallat, CEO of AI solutions company Beyond Limits, in the report.
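Automated detection of this kind often starts with continuously matching events against known indicators of compromise drawn from threat-intelligence feeds. The sketch below is a minimal, hypothetical illustration of that idea rather than an approach described in the report; the indicator list, log format and function names are all assumptions made for the example.

```python
# Minimal, hypothetical sketch of automated incident detection against
# a threat-intelligence feed. Indicators and log lines are made up.

KNOWN_BAD_INDICATORS = {
    "198.51.100.23",          # example IP indicator from a (hypothetical) intel feed
    "malicious-domain.test",  # example domain indicator
}

def scan_log_line(line: str) -> list[str]:
    """Return any known-bad indicators found in a single log line."""
    return [ioc for ioc in KNOWN_BAD_INDICATORS if ioc in line]

def monitor(log_lines):
    """Yield (line, matched_indicators) for lines that hit the intel feed."""
    for line in log_lines:
        hits = scan_log_line(line)
        if hits:
            yield line, hits

sample_logs = [
    "2024-05-01T12:00:01Z ACCEPT src=10.0.0.5 dst=203.0.113.10",
    "2024-05-01T12:00:02Z ACCEPT src=10.0.0.9 dst=198.51.100.23",
    "2024-05-01T12:00:03Z DNS query malicious-domain.test from 10.0.0.7",
]

for line, hits in monitor(sample_logs):
    print(f"ALERT: {hits} found in: {line}")
```

The AI layer the report describes goes further than static indicator matching, improving its own efficacy over time rather than relying solely on fixed lists, but the basic loop of ingesting events and flagging suspicious ones is the same.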
Indeed, the report notes, a recent survey of 4,500 senior business decision-makers found that data security was the main reason to implement AI within their organizations, ahead of process automation and business process optimization, among other areas.
“In integrating AI technologies with cybersecurity programs and systems, businesses across sectors have an invaluable opportunity to address one of the most complicated and potentially damaging risk factors organizations face today,” the report states.
For all its advantages, AI is the ultimate double-edged sword: hackers have access to the same technology, and it may be advancing too fast for regulators to develop the rules needed to control it.
“There’s a shocking lack of industry best practices or regulations to ensure that those AI systems are actually reliable, robust, transparent and free of bias,” Jessica Newman, research fellow at UC Berkeley’s Center for Long-Term Cybersecurity, explained in the report. “We are increasing the complexity of a good portion of the systems that we rely upon across industries, without adequate insight into how those AI systems are making decisions and whether they should be trusted.”
Nonetheless, says the report, “while AI by itself may not be a cure-all for cyber risk, it does have the potential to meaningfully enhance existing cybersecurity and data protection programs in important ways. Utilized concurrently with established human-led information security teams, the two can play off one another’s strengths, bringing levels of rigor, vigilance and responsiveness to cyber defense efforts that neither could achieve independently.”