Powering Responsible AI With Government Regulation Strategy

The British Standards Institution (BSI) Has Launched a Pioneering AI Management System to Enable Safe and Responsible Use of Artificial Intelligence (AI)

There is an ongoing global conversation about government involvement in AI.

With the rapid development of AI and generative AI (Gen AI) systems in recent years, people around the world have been calling for greater guidance on and regulation of the technology. In fact, the US has already implemented AI guidelines for businesses, requiring technology companies to share their AI safety data.

In the wake of the EU reaching a provisional agreement on its AI Act, regulatory guidelines are being discussed more frequently. It is thought that regulating AI could lead to safer models, fewer biases and an overall improved AI ethics landscape.

Addressing calls for greater AI regulation

Several countries ironed out regulatory frameworks for AI in 2023 as part of strategic efforts to govern the technology and ensure the safety of new models.

Many of these efforts followed the UK AI Safety Summit at the end of 2023, which brought together global leaders and business experts to discuss how best to regulate AI moving forward. At the summit, 25 countries and the EU signed an international declaration acknowledging the need to address the risks of AI development.

This is also worth considering alongside countries’ ambitions to lead the ‘AI race’ by developing the best, most accurate AI models. However, safety needs the utmost consideration in this respect, hence the need for guidelines.

Building global AI trust

A recent notable example of a national standards body launching clear guidelines on AI is the BSI’s new AI management system. The BSI’s guidance aims to empower organisations to safely manage AI - a style of regulatory framework that could be extended worldwide.

The guidance, published in the UK by BSI as the UK’s National Standards Body, sets out how to establish, implement, maintain and continually improve an AI management system, with a focus on safeguards. It is referenced in the UK Government’s National AI Strategy as a step towards ensuring that AI is developed safely and ethically.

The safeguards put in place help to build trust so that businesses and society can fully benefit from AI opportunities. The guidance is also intended to help organisations develop and use AI responsibly, addressing concerns such as non-transparent automated decision-making and the growing utilisation of machine learning.

BSI recently conducted a study into AI confidence: its Trust in AI Poll of 10,000 adults worldwide found that 61% of those surveyed wanted international guidelines for the technology. Likewise, 38% already use AI every day at work, while almost two-thirds (62%) expect their industries to do so by 2030.

BSI suggests that closing the AI confidence gap and building AI trust is crucial to harnessing the benefits of the technology for humanity worldwide.

Scott Steedman, Director General of Standards at BSI, says: “AI technologies are being widely used by organisations in the UK despite the lack of an established regulatory framework. While the government considers how to regulate most effectively, people everywhere are calling for guidelines and guardrails to protect them. 

“The guidelines for business leaders in the new AI standard aim to balance innovation with best practice by focusing on the key risks, accountabilities and safeguards.”
