OpenAI preparedness framework: Enhancing global AI safety

OpenAI has released details of its new preparedness framework that aims to mitigate AI risks and prioritise safe and responsible model development

OpenAI has this week (18 December 2023) released an initial version of its Preparedness Framework, designed to promote the safe and responsible development of AI models.

As part of the AI company expanding its safety processes, a new safety advisory group has been put in place to make recommendations to leadership. Most notably, the board will maintain veto power and can choose to prevent the release of an AI model even if leadership declares the AI as safe.

This news comes at the end of what has been a very exciting year for OpenAI. In addition to experiencing fast-paced development, the company has also seen turbulence in its executive board, with Sam Altman having been ousted and then reinstated as the company CEO in a space of one week in November 2023.

Advancing the study into AI risk

The ChatGPT developer says in its framework: “The study of frontier AI risks has fallen far short of what is possible and where we need to be. To address this gap and systematise our safety thinking, we are adopting the initial version of our Preparedness Framework.”

The framework describes OpenAI’s processes to track, evaluate, forecast and protect against risks posed by increasingly powerful AI models.

“By catastrophic risk, we mean any risk that could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals - this includes, but is not limited to, existential risk,” the company says.

As reported by The Washington Post, Sam Altman says that regulation intended to prevent harmful impacts of AI shouldn’t make it harder for smaller companies to compete. The report also highlighted that, at the same time, Altman has pushed the company to commercialise its technology to drive faster growth.

OpenAI’s decision to publicise its framework highlights how every company developing AI needs to hold itself to account - balancing business growth with responsibility. Given the immense popularity that ChatGPT has seen in just one year, the company clearly recognises the importance of minimising the risks its AI poses.

Eliminating bias and mitigating global concerns

Its framework will focus on mitigating the misuse of current AI models and products like ChatGPT. The Preparedness team will be led by Professor Aleksander Madry and will hire AI researchers, computer scientists, national security experts and policy professionals to monitor the technology, continually test it and warn the company if it believes any of its AI capabilities are becoming dangerous.

The Preparedness team will also map out the emerging risks of frontier models, with the company investing in capability evaluations and forecasting to better detect emerging risks. In particular, the company wishes to go beyond the hypothetical and work with data-driven predictions. 

In addition, the company has said that it will run evaluations and continually update ‘scorecards’ for its models. It will evaluate all of its frontier models, helping the team assess risks and develop protocols for added safety and outside accountability. This includes preventing racial bias, for instance, to ensure that AI systems do not develop to the point of causing harm.

Previously, the company helped form the Frontier Model Forum with Google, Anthropic and Microsoft, with the goal of ensuring AI is developed and harnessed responsibly.

The forum aims to help advance research into AI safety, identify safety best practices for frontier models, and share knowledge with policymakers and academics to advance responsible AI development and leverage AI to address social challenges.


AI Magazine is a BizClik brand