EU AI Act: Balancing Enterprise Compliance & Business Risk

With a provisional agreement reached on the new EU AI Act, we consider how businesses can best prepare during the regulation's transition period

Two committees in the European Parliament have endorsed the provisional agreement on the EU AI Act ahead of the full legislative assembly's vote in April 2024.

Setting a precedent as the world's first comprehensive legislation on AI technology, the act aims to ensure that AI systems comply with the protection of fundamental human rights.

With the provisional agreement reached in December 2023, the new act aims to set clear guidance for AI technology used across a wide range of industries. In its current form, the act will require foundation models and general-purpose AI systems to comply with transparency obligations before they are placed on the market.

A need for AI transparency

The proposed legislation is designed to regulate foundation models, generative AI (Gen AI) and tools such as chatbots and deepfakes. First proposed by the European Commission in 2021, it had initially been delayed by divisions over how to regulate large language models (LLMs).

According to the European Parliament, AI systems categorised as posing an unacceptable risk will be banned. Models deemed high risk, meanwhile, will undergo a compulsory fundamental rights impact assessment before being released, and will also be labelled with a CE mark.

The agreement will impose rules not only on smaller European AI companies, but also on the US tech giants participating in AI development.

EU countries began backing the agreement earlier in the year, with France in particular securing concessions to lighten the administrative burden on high-risk AI systems, as well as better protection for business secrets.

Given that AI development and deployment have occurred at such a rapid pace, nations around the world are looking to collaborate with technology companies to ensure better safety. Notably, the US announced in October 2023 that it now requires technology companies to share data on AI safety, in the hope of setting a global precedent for responsible systems.

Weighing up regulation against business risk

When it comes to business development, the European Parliament states that it will offer “regulatory sandboxes” and “real-world testing” to help small and medium-sized enterprises (SMEs) grow. MEPs wanted to ensure that businesses, especially smaller ones, can develop AI solutions for their operations without industry giants holding a monopoly on safe AI development.

However, whilst these organisations could find themselves compliant under the new regulations, they may still be exposed to AI risks. Bernd Greifeneder, CTO at Dynatrace, believes that the Act does not focus enough on the potential business risks of AI.

He tells AI Magazine: “Many organisations will be asking what they should do to prepare during the two-year transition period that will likely begin in the coming months. Based on the guidelines that have been established, it seems that the EU has, understandably, focused its regulation on reducing the geopolitical risks of AI, but not the business risks.

“It is vital that [businesses] take this advice seriously, or organisations may find that despite being ‘compliant’, they are exposing themselves to risk. As they develop their own codes of conduct, it’s first important that organisations recognise that not all AI is created equal.

“Business leaders need to categorise their AI based on their own risk parameters, considering potential impact to revenue, reputation, and stakeholder relationships. As part of this, they need to consider how AI is making decisions, whether it is transparent, and which processes it has access to and control over. They should identify whether its outputs are deterministic – derived from relevant, contextual data that is updated in real time, and therefore highly accurate – or whether the AI is drawing conclusions from random and closed data, making it prone to error and hallucination.

“Without a classification framework that clearly maps out these characteristics, organisations will struggle to use AI safely – no matter if they’re compliant or not.”
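
Greifeneder’s classification framework can start life as a simple internal inventory. The Python sketch below is purely illustrative, not anything prescribed by Dynatrace or the Act itself: the AIUseCase fields, scoring weights and tier thresholds are hypothetical assumptions, standing in for whatever risk parameters an organisation defines for itself.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseCase:
    """One AI system in the estate, scored against internal risk parameters.

    All fields and weightings here are hypothetical examples, not anything
    mandated by the EU AI Act.
    """
    name: str
    revenue_impact: int               # 1 (negligible) to 5 (critical)
    reputation_impact: int            # 1 (negligible) to 5 (critical)
    is_transparent: bool              # can its decisions be explained and audited?
    is_deterministic: bool            # same input, same output?
    uses_live_contextual_data: bool   # real-time, relevant data vs closed/stale data


def classify(use_case: AIUseCase) -> RiskTier:
    """Map a use case to an internal risk tier using simple, illustrative rules."""
    score = use_case.revenue_impact + use_case.reputation_impact
    # Opaque systems are harder to audit, so weight them up.
    if not use_case.is_transparent:
        score += 2
    # Non-deterministic outputs drawn from closed or stale data are more
    # prone to error and hallucination, so weight those up too.
    if not (use_case.is_deterministic and use_case.uses_live_contextual_data):
        score += 2
    if score >= 10:
        return RiskTier.HIGH
    if score >= 6:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Example: a customer-facing Gen AI chatbot with no explainability layer.
chatbot = AIUseCase(
    name="customer-facing chatbot",
    revenue_impact=3,
    reputation_impact=5,
    is_transparent=False,
    is_deterministic=False,
    uses_live_contextual_data=False,
)
print(classify(chatbot))  # RiskTier.HIGH
```

Applied across every AI system an organisation runs, even a rough scoring exercise like this makes visible where compliance alone would leave business risk unaddressed.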
