Are Business and Government Diverging on AI Safety?

With all this talk of governments focusing on AI safety, is business also showing the same level of concern for safety as they push to implement AI into their operations?
As the UK government seeks to expand its AI Safety Institute just as OpenAI disbands its team on long-term AI safety, we look at the gap in approach to AI

The UK Government has announced it will open an office of the AI Safety Institute (AISI) in the US. Due to open in the summer of 2024, the office will be located in San Francisco and will be the institute's first overseas outpost.  

Its stated aim will be to tap into the wealth of tech talent available in the Bay Area, engage with the world's largest AI labs headquartered in London and San Francisco, and cement relationships with the US to advance AI safety in the public interest.  

This announcement comes amid the UK's push to position itself as a global authority on responsible AI use. 

In 2023, then-Prime Minister Rishi Sunak established the AISI ahead of the first global summit on AI, the AI Safety Summit, which saw 28 national governments sign a declaration to promote AI safety. 



Business v government position

As is patently obvious at this point, AI holds great potential for business. An Oliver Wyman Forum study estimates that GenAI could add up to US$20 trillion to global GDP by 2030 and save 300 billion work hours a year. 

Such incentives have businesses across sectors racing to adopt AI, in one form or another, into their operations. A Tata study revealed that 86% of executives already deploy AI to enhance revenue. 

Yet a 2024 Bank of Ireland report highlighted that most businesses have no AI governance policies in place. 

Microsoft allegedly ignored safety problems an engineer raised about its AI image generator, and even trailblazer and self-described practitioner of 'responsible AI' OpenAI recently disbanded its team focused on the long-term risks of AI just a year after it was announced.

The AISI, meanwhile, has published a selection of recent results from its safety testing of five publicly available advanced AI models.

Speaking on the announcement, AI Safety Institute Chair, Ian Hogarth, said: “Our evaluations will help to contribute to an empirical assessment of model capabilities and the lack of robustness when it comes to existing safeguards.”

AI disagreement

Following the AI Safety Summit, businesses were hoping the UK would announce a regulatory framework in which to anchor their AI safety concerns, as was done in the EU. The UK's recently released AI bill takes a much lighter approach.

The EU AI Act will be the world's first comprehensive law regulating AI. It takes a risk-based approach, classifying each application into three categories: unacceptable risk, high risk, and limited, minimal, or no risk. The restrictions placed on an AI system vary depending on the risk level it is assigned.

Yet executives from many of the EU's largest companies - including Renault, Heineken, Siemens, and Airbus - signed a letter warning the European Commission that the drafted legislation "would jeopardise Europe's competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing." 

The next chance for government and industry to meet at broad scale to discuss AI safety will be the second AI Safety Summit. Whether a better consensus can be reached remains to be seen.

******

Make sure you check out the latest edition of AI Magazine and also sign up to our global conference series - Tech & AI LIVE 2024

******

AI Magazine is a BizClik brand
