Shadow AI set to drive new wave of insider threats: Imperva

Imperva warns that the twin factors of poor data controls and the advent of new generative AI tools will lead to a spike in data breaches.

As LLM-powered chatbots have become more powerful, many organisations have implemented outright bans or restrictions on what data can be shared with them. However, since an overwhelming majority (82%) of organisations have no insider risk management strategy in place, they remain blind to instances of employees using generative AI to help with tasks like writing code or filling out Requests for Proposals (RFPs), even though this often involves giving unauthorised applications access to sensitive data stores.

Terry Ray, SVP of Data Security GTM and Field CTO at cybersecurity company Imperva, says: “Forbidding employees from using generative AI is futile.

“We’ve seen this with so many other technologies - people are inevitably able to find their way around such restrictions and so prohibitions just create an endless game of whack-a-mole for security teams, without keeping the enterprise meaningfully safer.” 

Insider threats are responsible for more than half of all data breaches

Insider threats are responsible for more than half of all data breaches (58%) and are often among the most damaging. Previous research from Imperva into the biggest data breaches of the last five years found that a quarter (24%) were due to human error (defined as the accidental or malicious use of credentials for fraud, theft, ransom or data loss). However, insider threats are consistently deprioritised by businesses, with a third (33%) saying they do not perceive them as a significant threat.

“People don’t need to have malicious intent to cause a data breach,” continued Ray. “Most of the time, they are just trying to be more efficient in doing their jobs. But if companies are blind to LLMs accessing their backend code or sensitive data stores, it’s just a matter of time before it blows up in their faces.”

Imperva believes that rather than relying on employees not to use unauthorised tools, businesses need to focus on securing their data and ensuring they can answer the key questions: who is accessing it, what they are accessing, how, and from where. The company has put together a number of steps that it says every organisation, regardless of size, should be taking (a brief illustrative sketch follows the list):

  • Visibility: It’s crucial for organisations to discover and have visibility over every data repository in their environment so that important information stored in shadow databases isn’t being forgotten or abused.
  • Classification: Once organisations have created an inventory of every data store in their environment, the next step is to classify every data asset according to type, sensitivity, and value to the organisation. Effective data classification helps an organisation understand the value of its data, whether the data is at risk, and which controls should be implemented to mitigate risks.
  • Monitoring and analytics: Businesses also need to implement data monitoring and analytics capabilities that can detect threats such as anomalous behaviour, data exfiltration, privilege escalation, or suspicious account creation.
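
None of these steps depends on a particular product to get started. As a rough, hypothetical illustration of the classification and monitoring steps, the Python sketch below tags records that contain common sensitive-data patterns and flags users whose read volume exceeds a fixed baseline. The pattern set, the access-log format, the threshold, and all function names are invented for the example; this is not Imperva's implementation.

```python
import re
from collections import defaultdict

# Hypothetical regexes for common sensitive-data types (the classification step).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(record: str) -> set[str]:
    """Return the sensitive-data types detected in a record."""
    return {name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(record)}

def flag_anomalies(access_log, baseline=100):
    """Flag users whose total read volume exceeds a fixed per-user baseline.

    access_log: iterable of (user, rows_read) tuples -- an assumed log format.
    """
    totals = defaultdict(int)
    for user, rows_read in access_log:
        totals[user] += rows_read
    # A real system would learn baselines per user or role; a fixed
    # threshold keeps the sketch simple.
    return [user for user, total in totals.items() if total > baseline]

if __name__ == "__main__":
    print(classify("Contact jane@example.com, card 4111 1111 1111 1111"))
    # -> {'email', 'credit_card'}
    log = [("alice", 20), ("bob", 450), ("alice", 30)]
    print(flag_anomalies(log))  # -> ['bob']
```

In a real deployment this logic would run against the inventory of discovered data stores and their audit logs, with baselines learned over time rather than hard-coded, but the shape of the task is the same: know what is sensitive, then watch who touches it.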