EU AI Act Comes into Effect: What Enterprises Need to Know

First proposed by the European Commission in 2020, the law aims to address the negative impacts of AI
The EU AI Act is the world's most comprehensive and wide-reaching legislation aimed at mitigating negative consequences that come with using the technology

The EU's eagerly anticipated and ground-breaking AI legislation, the AI Act, officially comes into force today, putting the first major legal guardrails on a technology that has erupted over the past few years.

This law, four years in the making, aims to regulate the development, usage, and application of AI. 

With major companies like Microsoft and Google investing billions into AI development, the implications of the legislation are expected to be significant, and enterprises are likely to adjust their strategies to accommodate it.

“With nearly 80% of UK adults now believing AI needs to be heavily regulated, the introduction of the EU’s AI Act is something that businesses have long been waiting for,” explains Paul Cardno, Global Digital Automation & Innovation Senior Manager at 3M.

Contextualising the AI Act

The EU has ushered in a new era of AI regulation with the implementation of its AI Act, the world’s largest and most comprehensive AI law. 

This pioneering legislation seeks to address the potential negative consequences of AI technologies by regulating their use to mitigate them.

President of the European Commission Ursula von der Leyen hailed the law, saying: "The AI Act transposes European values to a new era."

“The speed of AI adoption has outpaced legislation, with the onus on companies to ensure that AI is used responsibly and safely,” says Chris Royles, Field CTO EMEA at Cloudera.

While the law's primary targets fall largely on US tech firms at the forefront of AI development, its reach extends far beyond, potentially affecting a wide range of businesses, including those outside the tech sector.

A risk-based approach

A cornerstone of the AI Act is its risk-based regulatory framework. This approach tailors the level of regulation to the perceived risk that different AI applications pose to society.

High-risk applications face stringent obligations, including thorough risk assessment and mitigation protocols, high-quality, bias-minimising training datasets, continuous activity logging, and mandatory sharing of detailed model documentation with authorities. 

Examples of high-risk AI systems include autonomous vehicles, medical devices, loan decisioning systems, and remote biometric identification technologies.

The Act outright bans AI uses deemed to pose unacceptable risks, such as social scoring systems that rank citizens based on data analysis, predictive policing tools, and emotion recognition technology in workplaces or educational settings.
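
To make that tiered structure concrete, here is a minimal, purely illustrative sketch that maps some of the use cases named above onto the Act's broad risk categories. The tier labels and the triage function are assumptions for illustration only, not the Act's actual legal classification test.

```python
# Illustrative sketch only: a rough mapping of example use cases onto the
# AI Act's broad risk tiers as described in this article. The tier labels
# and triage logic are assumptions, not the Act's legal classification test.

RISK_TIERS = {
    "unacceptable": {   # uses the Act bans outright
        "social scoring",
        "predictive policing",
        "emotion recognition in workplaces or schools",
    },
    "high": {           # stringent obligations: risk assessment, logging, documentation
        "autonomous vehicles",
        "medical devices",
        "loan decisioning",
        "remote biometric identification",
    },
}

def triage(use_case: str) -> str:
    """Return the illustrative risk tier for a named use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    # Everything else falls under lighter-touch duties, such as the
    # transparency obligations applied to general-purpose and Gen AI systems.
    return "limited or minimal"

print(triage("loan decisioning"))     # high
print(triage("social scoring"))       # unacceptable
print(triage("email spell-checker"))  # limited or minimal
```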

A field that received particular attention in the Act is Gen AI. The technology, ushered in by the likes of OpenAI, has seeped into society, with organisations and individuals alike using it for all manner of optimisation and creative functions.

75% of organisations have set aside budgets to invest in Gen AI in the next financial year, according to software company SAS

The AI Act categorises Gen AI as a form of "general-purpose" AI. This classification encompasses tools designed to perform a wide array of tasks at a level comparable to or surpassing human capabilities.

For these general-purpose AI systems, the Act imposes strict requirements. These include adherence to EU copyright law, issuance of transparency disclosures regarding model training methods, routine testing, and implementation of adequate cybersecurity protections.

“As Gen AI becomes increasingly sophisticated, so too must the tools designed to verify the origins of content seen online,” notes Stefanie Valdes-Scott, Director of Policy and Government Relations EMEA for Adobe.

However, the Act acknowledges that not all AI models are created equal. Developers of open-source models have voiced concerns about potential overregulation. In response, the EU has outlined certain exceptions for open-source Gen AI models.

To qualify for exemption, open-source providers must publicly disclose their parameters, including weights, model architecture, and model usage. They must also allow for "access, usage, modification and distribution of the model". Open-source models that are still deemed to pose "systemic" risks will not be eligible for these exemptions.

Companies found in breach of the regulations could face fines of up to €35m (US$37.7m) or 7% of their global annual revenue, whichever is higher. The severity of the penalty will depend on the nature of the infringement and the size of the company involved.
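
As a rough illustration of that "whichever is higher" rule, the short sketch below compares the flat cap with the revenue-based cap for a hypothetical company. The figures are the headline maximums cited above; real penalties are set case by case and vary by the type of infringement.

```python
# Rough illustration of the "whichever is higher" maximum penalty rule
# cited above (EUR 35m or 7% of global annual revenue). Actual fines are
# determined case by case and depend on the type of infringement.

def max_penalty_eur(global_annual_revenue_eur: float,
                    flat_cap_eur: float = 35_000_000,
                    revenue_share: float = 0.07) -> float:
    """Return the higher of the flat cap and the revenue-based cap."""
    return max(flat_cap_eur, revenue_share * global_annual_revenue_eur)

# Hypothetical company with EUR 2bn in global annual revenue:
print(f"{max_penalty_eur(2_000_000_000):,.0f}")  # 140,000,000 -> the 7% cap applies
# Smaller firm with EUR 100m in revenue:
print(f"{max_penalty_eur(100_000_000):,.0f}")    # 35,000,000 -> the flat cap applies
```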

Industry sentiments 

Although the law is EU-based, the EU is one of the world's largest economies, and companies wishing to operate within the bloc of 27 countries will need to abide by the legislation.

“This is the world’s first comprehensive regulatory framework for AI and will have a global reach, applying to all businesses that sell AI-powered products and services to customers in EU member states,” explains Tommy Ross, Head of Global Public Policy, Alteryx.

Amazon and Meta executives had previously expressed that fears about artificial intelligence are overblown and that the EU’s sweeping new AI rules risk holding back innovation. As a result, Meta and Apple began restricting AI releases in EU countries ahead of the Act's introduction.


Enterprises like Unilever, however, have been preparing for more responsible use of AI in anticipation, implementing safeguards such as having a cross-functional team of subject matter experts test potential use cases for new AI systems prior to deployment.

Such regulation has also been met with positivity for giving enterprises an understanding of the parameters they have to work within.

“While the EU Act isn't perfect, and needs to be assessed in relation to other global regulation, having a clear framework and guidance on AI from one of the world's major economies will help encourage those who remain on the fence to tap into the AI revolution,” Paul notes. 

Moving forward with compliance

Although the AI Act is now in force, most of its provisions won't take effect until at least 2026. 

Restrictions on general-purpose systems won't begin until 12 months after the Act's entry into force, and currently available Gen AI systems like ChatGPT and Gemini have been granted a 36-month "transition period" to achieve compliance.
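
As a back-of-the-envelope illustration of that phased schedule, the sketch below adds 12 and 36 months to an assumed entry-into-force date of 1 August 2024; both the date and the milestone labels are assumptions drawn from the timeline described here, not an official compliance calendar.

```python
# Back-of-the-envelope timeline sketch based on the phases described above.
# The entry-into-force date and the milestone labels are assumptions for
# illustration, not an official compliance calendar.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumed entry-into-force date

def add_months(start: date, months: int) -> date:
    """Add whole calendar months to a date (start.day must exist in the target month)."""
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, start.day)

milestones = {
    "General-purpose AI rules begin (+12 months)": add_months(ENTRY_INTO_FORCE, 12),
    "Transition period ends for existing Gen AI systems (+36 months)": add_months(ENTRY_INTO_FORCE, 36),
}

for label, deadline in milestones.items():
    print(f"{label}: {deadline.isoformat()}")
```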

The phased rollout is intended to help companies implement policies gradually, yet it still poses problems for a number of them.

“Many are still encountering roadblocks with the adoption of AI - 43% of UK businesses that have adopted AI say AI governance is the main obstacle, closely followed by AI ethics (42%),” comments Greg Hanson, GVP of EMEA North at Informatica.

With the new Act coming into force, the world exits the gold-rush era of AI and enters a legislative one. Its impact on the AI landscape and the tech industry at large remains to be seen. What is clear, however, is that companies and the wider industry can expect changes in how the technology is developed.
