Cyber firm breaches Nvidia's NeMo framework with ease

Share
Nvidia LLM AI framework breached with ease by security firm
Nvidia's NeMo Framework faces scrutiny as researchers from Robust Intelligence uncover vulnerabilities that compromise safety and privacy

Global chipmaker, Nvidia, has developed the NeMo Framework, an AI software that facilitates the utilisation of large language models.

This technology empowers developers to create generative AI applications like chatbots.

However, recent research has discovered that certain features of Nvidia's software can be exploited, bypassing safety measures and potentially exposing private information.

These findings underscore the challenges faced by AI companies in commercialising this groundbreaking technology.

Researchers from Robust Intelligence, a San Francisco-based organisation, successfully breached the guardrails implemented in Nvidia's NeMo Framework.

Within hours, they were able to manipulate the language models, overriding intended restrictions.

For instance, by instructing the system to replace the letter 'I' with 'J,' the researchers were able to extract personally identifiable information (PII) from a database.

They also uncovered additional vulnerabilities, causing the AI to deviate from its intended purpose, delving into unrelated topics such as a Hollywood star's health or historical events like the Franco-Prussian war.

Implications for AI commercialisation: A warning

The ease with which the researchers bypassed the safeguards highlights the complexity of deploying reliable and secure AI systems.

Yaron Singer, CEO of Robust Intelligence and a professor of computer science at Harvard University, emphasises that this discovery serves as a cautionary tale for AI companies, revealing the inherent pitfalls they must navigate.

The research team has advised clients to avoid using Nvidia's software product based on their test results.

Nvidia, in response, has addressed one of the root causes behind the identified issues, but the incident underscores the challenges AI companies face in ensuring the safety and privacy of their technologies.

Building public trust in AI

As AI continues to advance and permeate various industries, it is crucial for companies like Nvidia to build public trust in this transformative technology.

In recent years, major AI companies, including Google and Microsoft-backed OpenAI, have released chatbots incorporating guardrails to prevent racist speech and oppressive behaviour.

However, even these leading companies have encountered safety hiccups.

Nvidia, along with others in the AI industry, must invest efforts in assuring the public that AI technology holds vast potential and is not solely a threat or a source of fear.

Bea Longworth, Nvidia's head of government affairs in Europe, the Middle East, and Africa, stressed the importance of establishing public confidence in AI at a conference organised by industry lobby group, TechUK.

The vulnerability discovered in Nvidia's NeMo Framework, which enables AI developers to work with large language models, highlights the challenges that AI companies face in ensuring the safety and privacy of their systems.

The ability of researchers to manipulate the software and extract sensitive information raises concerns regarding the deployment of AI technology.

To foster public trust, it is essential for AI companies to address these vulnerabilities, implement robust safeguards, and emphasise the enormous potential of AI as a transformative force rather than a cause for fear or uncertainty.

Share

Featured Articles

Harnessing AI to Propel 6G: Huawei's Connectivity Vision

Huawei Wireless CTO Dr. Wen Tong explained how in order to embrace 6G to its full capabilities, operators must implement AI

Pegasus Airlines Tech Push Yields In-Flight AI Announcements

Pegasus Airlines has developed its in-house capabilities via its Silicon Valley Innovation Lab to offer multilingual AI announcements to its passengers

Newsom Says No: California Governor Blocks Divisive AI Bill

California's Governor Gavin Newsom blocked the AI Bill that divided Silicon Valley due to lack of distinction between risks with model development

Automate and Innovate: Ayming Reveals Enterpise AI Use Areas

AI Strategy

STX Next AI Lead on Risk of Employing AI Without a Strategy

AI Strategy

Huawei Unveils Strategy To Lead Solutions for Enterprise AI

AI Strategy