Powering Responsible AI With Government Regulation Strategy

BSI recently conducted a study into AI confidence, which found that 61% of those surveyed were keen to call for global guidelines for the technology
The British Standards Institution (BSI) has launched a pioneering AI management system to enable safe and responsible use of artificial intelligence (AI)

There is a continued global conversation about government involvement in AI.

With the rapid development of AI and generative AI (Gen AI) systems in recent years, people around the world have been calling for greater guidance and regulation of the technology. The US has already introduced AI guidelines for businesses, requiring technology companies to share their AI safety data.

In the wake of the EU reaching a provisional agreement on its AI Act, regulatory guidelines are being discussed more frequently. It is thought that regulating AI could lead to safer models, fewer biases and an overall improved AI ethics landscape.

Addressing calls for greater AI regulation

Several countries ironed out regulatory frameworks for AI in 2023 as part of strategic efforts to handle AI and ensure the safety of new models.

Much of this activity followed the UK AI Safety Summit at the end of 2023, which brought together global leaders and business experts to discuss how best to regulate AI moving forward. At the summit, 25 countries and the EU signed an international declaration acknowledging the need to address AI development risks.

This is also worth considering alongside countries' ambitions to lead in the 'AI race': ensuring they are doing everything possible to develop the best, most accurate AI models. However, safety also needs the utmost consideration in this respect, hence the need for guidelines.


Building global AI trust

A notable recent example of a national standards body launching clear guidelines on AI is BSI's new AI management system. BSI's guidance aims to empower organisations to manage AI safely: a style of regulatory framework that could be extended worldwide.

The guidance, published in the UK by BSI as the UK's National Standards Body, sets out how to establish, implement, maintain and continually improve an AI management system, with a focus on safeguards. It is referenced in the UK Government's National AI Strategy as a step towards ensuring that AI is developed safely and ethically.

The safeguards put in place help to build trust so that businesses and society can fully benefit from AI's opportunities. The guidance is also intended to help organisations develop AI responsibly, addressing concerns such as non-transparent automated decision-making and the utilisation of machine learning.

BSI recently conducted a study into AI confidence: its Trust in AI Poll of 10,000 adults found that 61% of those surveyed wanted international guidelines for the technology. Likewise, 38% already use AI every day at work, while 62% expect their industries to do so by 2030.

BSI suggests that closing the AI confidence gap and building AI trust is crucial to harnessing the benefits of the technology for humanity worldwide.

Scott Steedman, Director General of Standards at BSI, says: “AI technologies are being widely used by organisations in the UK despite the lack of an established regulatory framework. While the government considers how to regulate most effectively, people everywhere are calling for guidelines and guardrails to protect them. 

“The guidelines for business leaders in the new AI standard aim to balance innovation with best practice by focusing on the key risks, accountabilities and safeguards.”
