Are Business and Government Diverging on AI Safety?

With all this talk of governments focusing on AI safety, is business also showing the same level of concern for safety as they push to implement AI into their operations?
As the UK government seeks to expand its AI Safety Institute just as OpenAI disbands its team on long-term AI safety, we look at the gap between government and business approaches to AI safety

In a major announcement, the UK Government has said it will open an office of the AI Safety Institute (AISI) in the US. The office, due to open in the summer of 2024, will be located in San Francisco and is the institute's first overseas outpost.  

Its stated aim is to tap into the wealth of tech talent available in the Bay Area, engage with the world’s largest AI labs headquartered between London and San Francisco, and cement relationships with the US to advance AI safety in the public interest.  

This announcement comes amid the UK’s push to position itself as a leading authority on responsible AI use. 

In 2023, then-Prime Minister Rishi Sunak established the AISI ahead of the first global summit on the technology, the AI Safety Summit, which saw 28 national governments sign a declaration committing to promote AI safety. 



Business v government position

As is patently obvious at this point, AI holds great potential for business. An Oliver Wyman Forum study estimates that Gen AI could add up to US$20 trillion to global GDP by 2030 and save 300 billion work hours a year. 

Such incentives have businesses across sectors racing to adopt AI, in one form or another, into their operations. A Tata study revealed that 86% of executives already deploy AI to enhance revenue. 

Yet a 2024 Bank of Ireland report highlighted that most businesses have no AI governance policies in place. 

Microsoft allegedly ignored safety problems one of its engineers raised about its AI image generator, and even OpenAI, a trailblazer and self-described practitioner of ‘responsible AI’, recently disbanded its team focused on the long-term risks of AI just a year after it was announced.

The AISI, meanwhile, has published a selection of recent results from its safety testing of five publicly available advanced AI models.

Speaking on the announcement, AI Safety Institute Chair, Ian Hogarth, said: “Our evaluations will help to contribute to an empirical assessment of model capabilities and the lack of robustness when it comes to existing safeguards.”

AI disagreement

Following the AI Safety Summit, businesses were hoping the UK would announce a regulatory framework in which to anchor their AI safety concerns, as was done in the EU. The UK’s recently released AI bill instead takes a much lighter approach.

The EU AI Act will be the world's first comprehensive law regulating AI. It takes a risk-based approach, classifying each application into one of three categories: unacceptable risk, high risk, and limited, minimal, or no risk. The regulations restricting an AI system vary depending on the risk level it is classified as having.

Yet executives from many of the EU’s largest companies - including Renault, Heineken, Siemens, and Airbus - signed a letter warning the European Commission that the drafted legislation “would jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.” 

The next chance for government and industry to meet at a broad scale to discuss AI safety will be the second AI Safety Summit. Whether a broader consensus can be reached remains to be seen.


Make sure you check out the latest edition of AI Magazine and also sign up to our global conference series - Tech & AI LIVE 2024


AI Magazine is a BizClik brand
