EU AI Act: Provisional agreement on AI safety reached

"Used wisely and widely, AI promises huge benefits to our economy and society"
The European Union (EU) has reached a provisional agreement on the EU AI Act, which aims to ensure that all types of AI are developed and deployed responsibly

The EU has reached a landmark provisional agreement on the EU AI Act.

The draft regulations aim to ensure that AI systems are safe and respect the rights of people, and to impose fines on those that break the new rules.

The agreement follows 36 hours of talks and negotiations over rules governing AI systems such as ChatGPT. The European Parliament will vote on the proposals next year, with the legislation not expected to take effect before 2025.

According to the BBC, the US, UK and China are also working to publish their own AI guidelines.

A starting point to help enterprises consider their use of AI

President of the European Commission, Ursula von der Leyen, said in a statement: “I very much welcome today's political agreement by the European Parliament and the Council on the Artificial Intelligence Act.

“AI is already changing our everyday lives. And this is just the beginning. Used wisely and widely, AI promises huge benefits to our economy and society. [The] agreement focuses regulation on identifiable risks, provides legal certainty and opens the way for innovation in trustworthy AI.”

She continues: “By guaranteeing the safety and fundamental rights of people and businesses, the Act will support the human-centric, transparent and responsible development, deployment and take-up of AI in the EU.”

The European Parliament defines AI as software that can “generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.” This includes AI systems like ChatGPT and DALL-E, for example.


Current proposals within the EU AI Act state that rules will be introduced for high-impact general-purpose AI models that could pose systemic risk in the future, as well as for high-risk AI systems.

The EU proposes a revised system of governance with some enforcement powers at EU level, alongside the possibility of law enforcement using “remote biometric identification” in public spaces, subject to safeguards.

Ultimately, the proposed regulations are designed to offer better protection of rights by obliging those who deploy high-risk AI systems to undergo fundamental rights impact assessments before putting those systems to use.

They would also take into account situations where AI systems can be used for many different purposes, or where general-purpose AI technology is integrated into another high-risk system. Specific rules have also been proposed for foundation models, suggesting that these must comply with “specific transparency obligations” before being placed on the market.

Proposed penalties for those who ‘break the rules’

The EU has proposed fines for violations of the AI Act, set as a percentage of the company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. For violations of the banned AI applications, for instance, this would mean €35m (US$37.6m) or 7% of turnover.

In addition, the proposed agreement states that a natural or legal person may make a complaint to the relevant market surveillance authority concerning any non-compliance with the AI Act.

It will certainly be interesting to see how these proposed regulations are received over the next year. By regulating AI systems that could cause bias, or that are unsafe, these proposals could significantly transform the AI ethics landscape.

More globally, tech giants have already been discussing how AI can be created in a more regulated way to promote safe use. IBM and Meta, for instance, announced an AI Alliance in December 2023, just before the EU AI Act agreement, to advocate for more open-source AI.

Bernd Greifeneder, Founder and CTO of Dynatrace, offers insight into what the EU may need to consider as it formalises the specifics of the regulations.

He says: “The EU’s provisional agreement is a promising first step on what is likely to be a long road ahead. There is no doubt that global cooperation between both governments and technologists will be a cornerstone for the future of AI-led innovation. Alongside the implications for the use of the technology in law enforcement, much of the focus is on the regulation of general purpose AI models, such as ChatGPT.”

He continues: “As the finer points of the regulations are hammered out over the coming weeks, the EU will need to acknowledge that not all AI is created equal. The regulatory framework will therefore need to establish internationally-defined trust, rules, and risk profiles for each class of AI to govern the ways they can be used.

“The EU AI Act will get off to a great start if it can provide clarity around these key differences between AI models.”

