Whilst the multi-faceted nature of generative AI (Gen AI) is creating transformative solutions for businesses, it is important to be mindful of the possible risks.
If not harnessed ethically or with consideration, Gen AI has the power to cause irreversible cybersecurity damage to businesses. Now more than ever, enterprises are having to make strategic decisions to adapt to succeed in the evolving AI/cyber landscape.
With this in mind, AI Magazine speaks with John Farley, Managing Director of Cyber at Gallagher, about how cyber risks have evolved in relation to AI development and how businesses can ensure that they are best protected.
How have you seen cyber risks evolve over time?
Cyber risks have evolved in significant ways over the past few decades. In the late 1990s, cyber risks were focused on viruses and system glitches that generally caused minimal business interruption. It was more or less a technology errors and omissions issue that did not have a large enough impact to get the attention of business leaders.
As time went on, hackers found that by penetrating networks they would be able to steal payment card data and monetise it for financial crimes. Threat actors evolved from there and began to launch social engineering campaigns to facilitate funds transfer schemes.
Ransomware attacks then became the favoured technique amongst criminal groups, with hackers encrypting the victim’s data with malware and demanding significant funds to return it. The extortion demands can involve several million dollars and often lead to significant business interruption losses.
Today’s cyber risks are also leading to heightened regulatory risk. Many organisations are now subject to multiple state, federal and international data collection compliance laws. Non-compliance can now lead to costly regulatory investigations, lawsuits, fines and reputational harm.
What AI cyber risks are you currently seeing at Gallagher?
On the surface, emerging AI technology has the potential to provide vast new efficiencies. However, several potential threats may also evolve, including:
Data bias: Outcomes are impacted when AI systems are trained with inaccurate or incomplete information, which ultimately can lead organisations to make unfair assumptions or even implement discriminatory practices.
Misinformation campaigns: Malicious actors will likely find generative AI an ideal launching pad for misinformation campaigns. The credibility of the information that this technology may blindly vacuum from public sources is an open question — one that needs careful consideration before an organisation relies on and acts upon AI-derived advice for key business decisions.
Regulatory risk: At this point, regulation of AI usage is in its infancy. However, we predict increased regulatory scrutiny of the use of this new technology in the near future from a variety of global regulator-driven privacy regimes. Compliance requirements may extend to those contributing to its development and to those using it to provide goods and services to their clients.
Privacy liability: Several privacy laws related to collecting, storing and sharing personally identifiable information (PII) will likely apply to AI usage. Careful consideration of legal compliance related to these issues should be a priority.
Liability related to intellectual property: Organisations need to be wary of liability risk when using intellectual property and AI technology. These risks can manifest if intellectual property becomes part of the learning models and ultimately AI generated outputs. Without proper permissions and credits, organisations may expose themselves to copyright, trademark and patent infringement litigation.
How do you expect the cybersecurity landscape to evolve in 2024, in line with AI?
The ransomware ecosystem will continue to evolve. We expect a continual introduction of new ransomware variants, increasing ransom demands and all industry sectors to be impacted at some level. We also expect ransomware attacks to follow the ongoing trend of double extortion, where threat actors both encrypt and exfiltrate their victim’s data, threatening to expose it if the extortion is not paid.
Another factor that could exacerbate cyber claims frequency and severity involves heightened regulatory risk. While our focus is on the regulators at the state and federal level in the United States, regulatory risk may extend to other territories and be influenced by other global privacy regimes in 2024.
Many states have enacted comprehensive privacy laws and we expect more to follow in 2024. Most focus on data collection compliance obligations and some allow for private rights of action in certain circumstances. We note the most significant claims activity is being driven via wrongful data collection allegations around both website tracking technologies as well as biometric data.
Emerging technology, most notably AI, may exacerbate an already formidable cyber threat environment. It will require efforts from government, regulators, technology providers and the insurance industry to fully understand the new risk before it can be managed. In the meantime, we remain focused on the various forms of AI usage and the risks that may emerge. Threat actors may use it to launch sophisticated phishing schemes, misinformation campaigns and other attacks.
AI adoption by business leaders could have unintended consequences, including but not limited to data bias, privacy liability, risks associated with intellectual property and professional liability.
Do you have any advice for businesses? What do they need to be mindful of?
While it is widely agreed that no organisation can prevent cyber attacks 100% of the time, there are several things they can do to manage cyber risk.
Several cybersecurity controls should be implemented. These include multi-factor authentication, endpoint detection and response (EDR) tools, patch management programs, data backup practices, virtual private networks (VPNs), privileged access management (PAM) programs, penetration testing and employee training.
It is also advisable to create a written incident response plan and to conduct tabletop exercises to test the plan. This can help mitigate the financial and reputational harm from an attack. Organisations should also be aware of the patchwork of privacy laws and regulatory requirements that may heighten risks around network security and privacy liability incidents. Seek the advice of legal experts to help in meeting compliance requirements.
In addition, consider purchasing cyber insurance to help cover costs that may arise in the event of a cyber incident. Many policies cover the costs for crisis management experts, cyber extortion payments, business interruption, media liability, data asset restoration and third-party lawsuits from individuals, business partners and regulators.
AI Magazine is a BizClik brand