Gen AI Risk: 55% of Company Data Loss Involves Personal Info

Businesses are increasingly focused on securing their infrastructure against data loss and data leakage resulting from ever-increasing Gen AI usage, according to Menlo Security, whose 2023 report found that more than half of generative AI (Gen AI) inputs contained sensitive and personally identifiable information

A report has found that personally identifiable information (PII) is the most frequent source of potential exposure and data loss.

Menlo Security’s report, ‘The Continued Impact of Generative AI on Security Posture’, found that 55% of Data Loss Prevention (DLP) events detected by its team involved attempts to input PII into Gen AI sites. This is the case even as enterprise security policies for Gen AI increased by 26%.

PII refers to information that can confirm the identity of the individual to whom it applies.
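To illustrate the kind of DLP check described above, the sketch below flags common PII patterns in a prompt before it would be sent to a Gen AI site. This is a hypothetical, simplified example, not Menlo Security's implementation; the pattern names and the `find_pii` helper are assumptions for illustration, and real DLP engines use far more sophisticated detection.

```python
import re

# Hypothetical, simplified PII patterns for illustration only.
# Production DLP tools add checksums, context analysis and ML classifiers.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def find_pii(text: str) -> list[tuple[str, str]]:
    """Return (pii_type, matched_text) pairs found in the given text."""
    hits = []
    for pii_type, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((pii_type, match))
    return hits

prompt = "Summarise this: contact jane.doe@example.com, SSN 123-45-6789"
for pii_type, value in find_pii(prompt):
    print(f"blocked: {pii_type} -> {value}")
```

A DLP gateway would run a check like this on every copy-and-paste or file-upload attempt and block or log the event when PII is detected.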

This comes as the development and deployment of Gen AI booms worldwide: while the technology offers opportunities, it also carries cybersecurity risks that must be considered.

Enterprise focus on securing data loss

Menlo Security’s report marks the second instalment in a series of Gen AI reports analysing the changing behaviour of employees using Gen AI and the resulting security risks for businesses.

It found a greater need for group-level security within a company, rather than domain-level security.

Among organisations that apply security policies on a per-application basis, 92% have security-focused policies in place around generative AI usage, while 8% allow unrestricted Gen AI usage. By contrast, among organisations that apply security policies to generative AI apps as a group, 79% have security-focused policies in place, while 21% allow unrestricted usage.

According to the report, there was also an 80% increase in attempted file uploads to Gen AI websites, which researchers attribute to the many AI platforms that have added file upload features within the past six months.

Once users were introduced to file uploads, they quickly took advantage. Meanwhile, copy-and-paste attempts to Gen AI sites decreased only minimally, highlighting the need to implement technology to control these actions.

With AI safely harnessed and monitored, it enables security teams to better conduct cybersecurity investigations within a business. AI systems are also able to intervene in cyberattacks and respond to incidents to support human workforces.

Implementing cybersecurity measures to keep workforces safe

Menlo Security found that businesses are increasingly focused on securing their infrastructure against data loss and data leakage from growing Gen AI usage: its research team discovered a 26% increase in organisational security policies for Gen AI sites.

However, most businesses that were surveyed are doing so on a case-by-case basis, instead of establishing blanket policies across Gen AI applications as a whole.

This demonstrates the need for a scalable, efficient way to monitor enterprise use of Gen AI, one that can adapt to evolving Gen AI functionality and address emerging cybersecurity risks.

In the cybersecurity landscape, executives report heightened anxiety about data security threats compared with the previous 12 months, with ransomware and data theft chief among their fears.

It speaks to the importance of organisations ensuring that they have the knowledge and skillsets to integrate AI systems successfully and safely into their security tools moving forward. Likewise, it is essential for businesses to become more cyber-aware in order to reap the full benefits of AI and utilise it to keep operations secure.

Speaking on these findings, Marcus Fowler, CEO of Darktrace Federal, says: “AI is going to continue to have a major impact on security teams and change the work that they do, but when applied responsibly and with the right programmatic approach, AI will upskill the cyber workforce.

“The tools used by attackers and defenders—and the digital environments that need to be defended—are constantly changing and increasingly complex. In order for defenders to keep up, we must continually strengthen and empower the cybersecurity workforce.

"AI represents the greatest advancement in truly augmenting the current cyber workforce, expanding situational awareness, and accelerating mean time to action to allow them to be more efficient, reduce fatigue and prioritise cyber investigation workloads.”

AI Magazine is a BizClik brand
