Navigating the Impact of Gen AI on Cybersecurity

A recent report by the National Cyber Security Centre (NCSC) has highlighted concerns about the increasing impact of AI on cyberattacks. We take a look.

The NCSC's assessment warns that AI will increase the impact of cyberattacks over the next two years. It emphasises that AI is already being used for malicious cyber activity and is expected to amplify both the frequency and severity of cyber threats, particularly ransomware.

The NCSC says that the use of AI lowers the entry barrier for less skilled cyber criminals, including hackers-for-hire and hacktivists. This allows them to conduct more effective access and information-gathering operations. The enhanced access, combined with AI’s improved targeting capabilities, is anticipated to contribute to a heightened global ransomware threat in the near term.

“We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat,” said Lindy Cameron, the NCSC’s former CEO.

“The emergent use of AI in cyber attacks is evolutionary, not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.”

The emergence of criminal generative AI

The report highlights the emergence of criminal Generative AI (Gen AI) and Gen AI-as-a-service, enabling cyber criminals to access improved capabilities. However, the effectiveness of Gen AI models is constrained by the quality and quantity of the data on which they are trained.

The National Crime Agency (NCA) warns that advancements in AI are likely to increase the ransomware threat in the coming years. The NCA notes that AI services reduce entry barriers, attract more cyber criminals and enhance their capabilities by improving the scale, speed and effectiveness of existing attack methods.

Oz Alashe MBE, CEO of CybSafe, tells AI Magazine: “While organisations explore the opportunities generative AI provides, it’s essential to consider the other side of that coin. With business leaders highlighting the power this technology gives cybercriminals to create more convincing phishing campaigns, deepfakes and more, the onus is on organisations to equip their people with the tools to effectively identify and mitigate these growing threats.

“With only 21% of people believing they can discern an AI-generated piece of text from human-written text, cybersecurity professionals have their work cut out. As we use this technology more and more, the line between real and fake will continue to blur. As a result, organisations must engage their staff, moving beyond compliance to build and promote positive cybersecurity behaviours to combat the rising tide of cybercrime.”

2024 Global Cybersecurity Outlook report 

In the World Economic Forum’s 2022 Global Cybersecurity Outlook report, approximately half of leaders said that automation and machine learning would have the greatest influence on cybersecurity over the following two years. Nearly two years on, sentiment is largely unchanged: in this year’s report, approximately half of leaders identify Gen AI as the technology that will most significantly affect cybersecurity in the next two years. The cybersecurity (65%), agriculture (63%), banking (56%) and insurance (56%) industries had the largest shares of leaders choosing Gen AI as the biggest influence on cybersecurity.

Kris Burkhardt, Global Chief Information Security Officer at Accenture, says: “We must strengthen our defences across the board, and the same can be true for any emerging technology. A lot of the attack vectors seem to be the same, they just tend to be amplified.”

Leaders surveyed for the 2024 Global Cybersecurity Outlook report also expressed concern about Gen AI’s near-term impact: 56% said Gen AI will advantage cyber attackers over defenders in the next two years.

More specifically, their greatest concern is that Gen AI will advance adversaries’ ability to carry out the activities defenders are already fighting, such as phishing, developing custom malware and propagating misinformation.

The same attack vectors long employed by cybercriminals are still in use; the new technology simply makes nefarious activity easier. Gen AI chatbots make it far simpler for cybercriminals to craft believable phishing emails and write custom malware. Although popular commercial chatbots have built-in safeguards and proactive controls to prevent abuse, cybercriminals are adapting large language models to build malicious subscription-based services. Chatbots such as FraudGPT and WormGPT lower the skill required to run complex and convincing campaigns.
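On the defensive side, much of the advice above amounts to giving people and tooling simple, explainable signals to spot such campaigns. As a rough illustration only, the Python sketch below checks an email body for two classic phishing red flags: urgency language and links pointing at bare IP addresses. The phrase list, patterns and function names are illustrative assumptions, not a production detector; real tooling would layer heuristics like these with trained classifiers and sender-reputation checks.

```python
import re

# Illustrative only: crude heuristics, not a production phishing detector.
URGENCY_PHRASES = ["act now", "verify your account", "password expires",
                   "urgent", "immediately", "suspended"]
# Matches raw URLs so each one can be inspected individually.
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def phishing_indicators(email_text: str) -> list[str]:
    """Return a list of simple red flags found in an email body."""
    flags = []
    lowered = email_text.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            flags.append(f"urgency language: '{phrase}'")
    for url in URL_PATTERN.findall(email_text):
        # Links pointing at bare IP addresses are a classic warning sign.
        if re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url):
            flags.append(f"IP-based link: {url}")
    return flags

if __name__ == "__main__":
    sample = "Your password expires today. Act now: http://203.0.113.7/reset"
    for flag in phishing_indicators(sample):
        print(flag)
```

The value of this kind of check is less its accuracy than its explainability: it gives staff concrete, teachable reasons why a message looks suspicious, which supports the behaviour-building approach Alashe describes.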

How can leaders help ensure that AI is developed securely?

Guidelines for Secure AI System Development, published by the NCSC and developed with the US’s Cybersecurity and Infrastructure Security Agency (CISA) and agencies from 17 other countries, advise on the design, development, deployment and operation of AI systems. 

The NCSC says that, crucially, keeping AI systems secure is as much about organisational culture, process, and communication as it is about technical measures. “Security should be integrated into all AI projects and workflows in your organisation from inception. This is known as a ‘secure by design’ approach, and it requires strong leadership that ensures security is a business priority, and not just a technical consideration,” it says. 

“Leaders need to understand the consequences to the organisation if the integrity, availability or confidentiality of an AI system were to be compromised. There may be operational and reputational consequences, and your organisation should have an appropriate response plan in place. As a manager, you should also be particularly aware of AI-specific concerns around data security. You should understand whether your organisation is legally compliant and adhering to established best practices when handling data related to these systems.”
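On the data-security point, one concrete control is redacting obvious personal data before it reaches logs or training corpora. The following Python sketch is illustrative only: the regular expressions, placeholder tokens and function name are assumptions of this example, and organisations handling regulated data would typically rely on vetted PII-detection tooling rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only; vetted PII tooling should be used in practice.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact_pii(record: str) -> str:
    """Mask obvious email addresses and phone numbers in a text record."""
    record = EMAIL_RE.sub("[EMAIL]", record)
    record = PHONE_RE.sub("[PHONE]", record)
    return record

# Example: prints "Contact [EMAIL] or [PHONE]"
print(redact_pii("Contact jane.doe@example.com or +44 20 7946 0958"))
```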

The NCSC says the burden of using AI safely should not fall on individual users of AI products, as customers typically won’t have the expertise to fully understand or address AI-related risks. Instead, developers of AI models and systems should take responsibility for the security outcomes of their customers.
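One way developers can shoulder that responsibility is to ship guardrails as defaults rather than leaving validation to end users. The Python sketch below is a minimal illustration, assuming a hypothetical LLM-backed service: the character limit and denylist are placeholder assumptions, and real deployments would layer model-aware filtering, logging and rate limiting on top.

```python
# Illustrative guardrail sketch; limits and denylist are placeholder values.
MAX_PROMPT_CHARS = 4000
BLOCKED_PATTERNS = ["ignore previous instructions", "reveal your system prompt"]

def sanitise_prompt(user_input: str) -> str:
    """Apply basic checks before a prompt is forwarded to the model."""
    if len(user_input) > MAX_PROMPT_CHARS:
        # Fail closed: oversized input is rejected, not silently truncated.
        raise ValueError("prompt exceeds maximum allowed length")
    lowered = user_input.lower()
    for pattern in BLOCKED_PATTERNS:
        if pattern in lowered:
            raise ValueError(f"prompt rejected: matched '{pattern}'")
    return user_input
```

The design point is that input is checked at the system boundary and rejected by default, making security a property of the product rather than a task delegated to the customer, in line with the ‘secure by design’ approach described above.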

As AI systems become increasingly integrated into cybersecurity, ensuring their secure development and deployment is paramount. Leaders should adopt a ‘secure by design’ approach from the inception of AI projects, understand the potential consequences of AI system vulnerabilities and prepare robust response plans. Guidelines such as those published by the NCSC offer valuable frameworks, and by promoting cybersecurity awareness and vigilance, leaders can mitigate the evolving threats posed by AI and safeguard their organisations.
