UK & US Reach Landmark Agreement to Advance Responsible AI
The United Kingdom (UK) has signed a Memorandum of Understanding (MoU) with the United States (US) to partner on AI safety measures.
Under the agreement, signed on Monday 1st April 2024, the AI Safety Institutes of both countries aim to work seamlessly with each other, partnering on research, safety evaluations and guidance for AI safety. Both countries have agreed to develop more robust methods for evaluating the safety of AI tools and systems.
Both institutes will also seek to develop shared capabilities via information-sharing, close cooperation and expert personnel exchanges.
AI digital transformation: Prioritising safety
The UK and US AI Safety Institutes have laid out a framework for a common approach to AI, which includes safety testing and shared capabilities to ensure risks can be tackled effectively.
The two institutes also aim to complete at least one joint testing exercise on a publicly accessible model, drawing on a collective pool of expertise.
This is a world-first agreement between two countries and aims to build upon discussions held at the UK AI Safety Summit in November 2023.
The event was attended by AI industry leaders and government officials, including OpenAI's Sam Altman, Google DeepMind's Demis Hassabis and Elon Musk. Landmark talks, led by UK Prime Minister Rishi Sunak and US Vice President Kamala Harris, saw both the UK and US create AI Safety Institutes designed to evaluate open and closed-source AI systems.
According to a UK GOV.UK press release, the agreement is a mutual recognition by both governments that there is a need to “act now” to ensure a shared approach to AI safety. There is a need for governments and business leaders alike to keep pace with the continually emerging risks of AI.
As the countries strengthen their partnership on AI safety, they have also committed to similar partnerships around the world to promote global AI responsibility.
Keeping AI development safe
Those within the AI sector have been busy in recent months developing greater AI safety initiatives. Increased innovation from world-leading businesses comes in tandem with new AI regulations entering into law, such as the upcoming EU AI Act.
US Commerce Secretary Gina Raimondo said the agreement will give both governments a better understanding of AI systems, allowing them to offer better guidance as a result.
“It will accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society,” she said. “Our partnership makes clear that we aren't running away from these concerns - we're running at them.”
There has been growing concern about the safety of AI from a business perspective, as the technology is increasingly being used in malicious ways to digitally attack organisations.
The UK and the US are keen to highlight that the safe development of these systems is a global priority. Their work aims to underpin a common approach to AI safety testing, bringing governments and businesses into partnership for ongoing international collaboration.
“We do need to see global powers like the US and UK convert their soft rhetoric into hard regulation,” comments AI Scientist Peter van der Putten, Head of the AI Lab at Pegasystems. “As more and more countries begin to properly regulate the use of AI in the coming year, we’ll see businesses starting to realise that they must do more than just talk the talk when it comes to using this technology in a moral and ethical way – they may soon have to walk the walk too.”
AI Magazine is a BizClik brand