AI Safety Summit Seoul: Did it Meet Industry Expectations?

Before the summit, there were high hopes for meaningful outcomes - we ask whether industry leaders like EY's Beatriz Sanz Saiz think they were met

The AI Safety Summit in Seoul, South Korea, held on 21-22 May 2024, has concluded with a renewed global effort to ensure the safe development and deployment of AI.

This summit saw major tech firms from around the world come together to make a new commitment to AI safety, known as the "Frontier AI Safety Commitments."

Key developments

At the summit, 16 leading tech companies, including Amazon, Anthropic, Cohere, Google/Google DeepMind, G42, IBM, Inflection AI, Meta, Microsoft, Mistral AI, Naver, OpenAI, Samsung Electronics, Technology Innovation Institute and xAI, agreed to a set of voluntary commitments.

These commitments include promises not to develop or deploy AI models if the associated risks cannot be mitigated. Additionally, the companies pledged to increase transparency by publishing safety frameworks that measure the risks of their frontier models.


This summit builds on the legacy of the first-ever AI Safety Summit hosted by the UK at Bletchley Park in November 2023. That summit saw 28 countries and the EU sign the 'Bletchley Declaration,' pledging to develop AI responsibly and collaboratively, and to advance AI safety and research measures. 

The Bletchley Declaration underscored the need for proactive measures to ensure AI is developed in a human-centric, trustworthy, and responsible manner, addressing risks related to transparency, fairness, accountability, safety, ethics, privacy, and data protection.

Business steps up 

While the Bletchley Summit focused on governmental commitments, the Seoul Summit marked a significant shift by placing the onus on businesses. 

Prior to this, businesses were often seen as lagging behind world leaders in taking AI safety seriously. A 2024 Bank of Ireland report highlighted that most businesses lacked AI governance policies, despite the widespread use of AI across various sectors. 

Microsoft allegedly ignored safety concerns raised by its engineers about its AI image generator, and OpenAI disbanded its team focused on long-term AI risks just a year after its formation.

Expectations and hopes

Before the summit, there were high hopes for meaningful outcomes.

Ayesha Iqbal, IEEE senior member and engineering trainer at the Manufacturing Technology Centre, UK, emphasised the importance of collaboration between government, tech leaders, and academia to establish standards for the safe and responsible development of AI. 

“AI has implications for almost every business sector. Every day, we are hearing some news about a new application or implementation of AI, to the extent that the term ‘AI’ was named the most notable word of 2023 by the dictionary publisher Collins,” said Iqbal. “AI is growing faster than ever and we are already reliant on a number of devices and systems, so government, tech leaders and academia should work together to establish standards for the safe, responsible development of AI-based systems.”

Ayesha Iqbal, IEEE senior member and engineering trainer at MTC

Eleanor Watson, IEEE senior member and AI ethics engineer, outlined the potential and risks of agentic AI systems, which can autonomously pursue open-ended objectives. 

“AI has made remarkable strides in recent months. These AI systems, powered by deep learning on vast datasets, have demonstrated increasingly general and flexible capabilities,” stated Watson. “We are now on the cusp of a new paradigm in AI: agentic AI systems that can autonomously pursue open-ended objectives by taking sequences of actions in complex environments, and which have the capacity for independent decision-making and long-horizon planning. While promising tremendous value, this raises serious concerns about AI optimising for objectives misaligned with human intent, as well as harms to knowledge and understanding.”

Eleanor Watson, IEEE senior member and AI ethics engineer and AI Faculty at Singularity University

Industry reactions

The commitments made at the summit were met with cautious optimism by industry leaders.

Beatriz Sanz Saiz, EY Global Consulting Data and AI Leader, welcomed the commitment from global tech companies to publish safety frameworks. 

“This commitment from global tech companies to publish safety frameworks on how they will measure the risks of frontier AI models is welcomed. Providing transparency and accountability is essential in the development and implementation of trustworthy AI,” said Sanz Saiz. “While AI has vast potential for businesses and individuals alike, this potential can only be harnessed through a conscientious and ethical approach to its development. It is imperative that we collectively commit to upholding ethical AI principles, and this initiative will encourage other companies and industries globally to approach frontier AI with high standards.”

Beatriz Sanz Saiz, EY Global Consulting Data and AI Leader

While the commitments made by tech companies are a positive development, the true test will be in how these commitments are implemented and whether they lead to tangible improvements in AI safety. 


AI Magazine is a BizClik brand
