OpenAI preparedness framework: Enhancing global AI safety

OpenAI has released details of its new preparedness framework that aims to mitigate AI risks and prioritise safe and responsible model development

OpenAI has this week (18th December 2023) released an initial version of its preparedness framework to support the safe and responsible development of its AI models.

As part of the company's expanded safety processes, a new safety advisory group has been put in place to make recommendations to leadership. Most notably, the board will retain veto power and can block the release of an AI model even if leadership declares it safe.

This news comes at the end of what has been an eventful year for OpenAI. In addition to fast-paced development, the company has seen turbulence in its executive board, with Sam Altman having been ousted and then reinstated as company CEO in the space of one week in November 2023.

Advancing the study of AI risk

The ChatGPT developer says in its framework: “The study of frontier AI risks has fallen far short of what is possible and where we need to be. To address this gap and systematise our safety thinking, we are adopting the initial version of our Preparedness Framework.”

The framework describes OpenAI’s processes to track, evaluate, forecast and protect against risks posed by increasingly powerful AI models.

“By catastrophic risk, we mean any risk which could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals - this includes, but is not limited to, existential risk,” the company says.

As reported by The Washington Post, Sam Altman says that regulation designed to prevent the harmful impacts of AI should not make it harder for smaller companies to compete. The report also noted that, at the same time, Altman has pushed the company to commercialise its technology to accelerate growth.

OpenAI’s decision to publicise its framework highlights how every company developing AI needs to hold itself to account - balancing business growth with responsibility. Given the immense popularity that ChatGPT has gained in just one year, the company clearly recognises the importance of keeping the risks of AI in check.

Eliminating bias and mitigating global concerns

The framework will focus on mitigating the misuse of current AI models and products such as ChatGPT. The Preparedness team, led by Professor Aleksander Madry, will hire AI researchers, computer scientists, national security experts and policy professionals to monitor the technology, continually test it and warn the company if it believes any of its AI capabilities are becoming dangerous.

The Preparedness team will also map out the emerging risks of frontier models, with the company investing in capability evaluations and forecasting to better detect them. In particular, the company wishes to move beyond hypothetical scenarios and work with data-driven predictions.

In addition, the company has said that it will run evaluations and continually update ‘scorecards’ for its models. All of its frontier models will be evaluated, helping the team assess risks and develop protocols for added safety and outside accountability. This includes preventing racial bias, for instance, to ensure AI systems do not develop to the point of causing harm.

Previously, the company helped form the Frontier Model Forum with Google, Anthropic and Microsoft, with the goal of ensuring AI is developed and harnessed responsibly.

The forum aims to help advance research into AI safety, identify safety best practices for frontier models, and share knowledge with policymakers and academics to advance responsible AI development and leverage AI to address social challenges.
