Cyber firm breaches Nvidia's NeMo framework with ease

Share
Nvidia LLM AI framework breached with ease by security firm
Nvidia's NeMo Framework faces scrutiny as researchers from Robust Intelligence uncover vulnerabilities that compromise safety and privacy

Global chipmaker, Nvidia, has developed the NeMo Framework, an AI software that facilitates the utilisation of large language models.

This technology empowers developers to create generative AI applications like chatbots.

However, recent research has discovered that certain features of Nvidia's software can be exploited, bypassing safety measures and potentially exposing private information.

These findings underscore the challenges faced by AI companies in commercialising this groundbreaking technology.

Researchers from Robust Intelligence, a San Francisco-based organisation, successfully breached the guardrails implemented in Nvidia's NeMo Framework.

Within hours, they were able to manipulate the language models, overriding intended restrictions.

For instance, by instructing the system to replace the letter 'I' with 'J,' the researchers were able to extract personally identifiable information (PII) from a database.

They also uncovered additional vulnerabilities, causing the AI to deviate from its intended purpose, delving into unrelated topics such as a Hollywood star's health or historical events like the Franco-Prussian war.

Implications for AI commercialisation: A warning

The ease with which the researchers bypassed the safeguards highlights the complexity of deploying reliable and secure AI systems.

Yaron Singer, CEO of Robust Intelligence and a professor of computer science at Harvard University, emphasises that this discovery serves as a cautionary tale for AI companies, revealing the inherent pitfalls they must navigate.

The research team has advised clients to avoid using Nvidia's software product based on their test results.

Nvidia, in response, has addressed one of the root causes behind the identified issues, but the incident underscores the challenges AI companies face in ensuring the safety and privacy of their technologies.

Building public trust in AI

As AI continues to advance and permeate various industries, it is crucial for companies like Nvidia to build public trust in this transformative technology.

In recent years, major AI companies, including Google and Microsoft-backed OpenAI, have released chatbots incorporating guardrails to prevent racist speech and oppressive behaviour.

However, even these leading companies have encountered safety hiccups.

Nvidia, along with others in the AI industry, must invest efforts in assuring the public that AI technology holds vast potential and is not solely a threat or a source of fear.

Bea Longworth, Nvidia's head of government affairs in Europe, the Middle East, and Africa, stressed the importance of establishing public confidence in AI at a conference organised by industry lobby group, TechUK.

The vulnerability discovered in Nvidia's NeMo Framework, which enables AI developers to work with large language models, highlights the challenges that AI companies face in ensuring the safety and privacy of their systems.

The ability of researchers to manipulate the software and extract sensitive information raises concerns regarding the deployment of AI technology.

To foster public trust, it is essential for AI companies to address these vulnerabilities, implement robust safeguards, and emphasise the enormous potential of AI as a transformative force rather than a cause for fear or uncertainty.

Share

Featured Articles

MoD Powers Up Data and AI Analytics with £50m Investment

Kainos has secured a £50m agreement to enhance the UK Ministry of Defence’s Defence Data Analytics Platform, supporting Royal Navy, Army and RAF operations

AI's Thirst for Water Raises Sustainability Concerns

In the wake of the UKs planned AI infrastructure boom, concerns are being raised over the potential impact more data centres could have on water supply

How Google’s AI Chip Upgrades Set Sustainability Standards

Google unveils new method for measuring environmental impact of AI hardware as processor efficiency gains deliver 70% decrease in carbon emissions

Google Makes Gemini 2.0 AI Model Available to Everyone

AI Applications

AI for Good: Why Microsoft is Using AI for Positive Change

AI Strategy

How Volvo's Autonomous Trucks are Using Gen AI

Technology