EU AI Act: Regulating Tech's Future as World-First Laws Pass

The EU AI Act will divide AI technology into categories of risk, ranging from “unacceptable” - which would lead to a ban - down through high, limited and minimal risk
The world-first EU AI Act legislation has officially passed in the European Parliament, as leaders work towards an AI-regulated future for businesses

Today (13th March 2024) saw the European Parliament approve the world’s first comprehensive framework to regulate artificial intelligence (AI).

The EU AI Act is the world’s first comprehensive legislation on AI technology, designed to ensure the technology respects fundamental human rights. It places the EU at the forefront of global attempts to address AI-associated risks in a rapidly changing digital landscape.

“The AI act is not the end of the journey but the starting point for new governance built around technology,” MEP Dragos Tudorache highlights, as reported by the BBC.

What are some of the mandates?

As large tech companies like Google, Microsoft and OpenAI continue to invest in AI, the technology’s popularity will only surge - highlighting the need for a clear regulatory framework.

The EU AI Act’s regulations are expected to come into force from May 2024, once the legislation receives endorsement from the European Council. Businesses will now have time to consider how best to comply with the new rules.

Amid rapid AI adoption across the global business landscape, the AI Act is designed to give developers and deployers clear requirements and obligations for specific AI use cases.

Significantly, the AI Act imposes severe penalties for non-compliance: fines could reach up to €35m (US$38.2m).

Barry Scannell, a member of the AI Advisory Council for the Government of Ireland, offers some guidance on the obligations the Act will introduce.

Some of the systems that will be prohibited by the end of 2024 include:
  • Manipulative and deceptive practices: AI designed to distort a person’s decision-making capacity
  • Exploitation of vulnerabilities: AI that targets individuals based on characteristics such as age, disability or socio-economic status
  • Biometric categorisation: AI that categorises individuals based on biometric data to infer sensitive information such as race, political opinions or sexual orientation
  • Social scoring: AI that evaluates people or groups based on social behaviour or predicted personal characteristics
  • Real-time biometric identification: use in publicly accessible spaces will be heavily restricted unless specifically authorised
  • Risk assessment in criminal offences: AI that assesses the risk of individuals committing criminal offences based solely on profiling
  • Facial recognition databases: AI systems that create or expand facial recognition databases through untargeted scraping of images
  • Emotion inference in workplaces and educational institutions: AI that infers emotions in sensitive environments such as the workplace or school settings

These prohibitions sit within the Act’s broader risk-based framework, which divides AI systems into categories ranging from “unacceptable” - which would lead to a ban - down through high, limited and minimal risk.

The Act also aims to reassure wider society, after concerns grew in the lead-up to several key elections taking place this year. With deepfakes spreading rapidly and malicious AI-driven cyberattacks leading to data breaches, clear regulation should help to dispel public anxieties.
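To make the risk-based framework concrete, here is a minimal sketch of how a business might triage an internal AI-system inventory against the Act’s four tiers. The system names and tier assignments are hypothetical illustrations, not legal classifications.

```python
# Hypothetical illustration of risk-tier triage under the EU AI Act.
# Tier assignments below are simplified examples, not legal advice.

RISK_TIERS = ["unacceptable", "high", "limited", "minimal"]

# Example inventory: system name -> assumed risk tier
inventory = {
    "social-scoring-engine": "unacceptable",  # banned outright
    "cv-screening-model": "high",             # employment use case
    "customer-chatbot": "limited",            # transparency duties
    "spam-filter": "minimal",                 # largely unregulated
}

def systems_in_tier(inventory, tier):
    """Return all systems assigned to a given risk tier, sorted by name."""
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown tier: {tier}")
    return sorted(name for name, t in inventory.items() if t == tier)

print(systems_in_tier(inventory, "high"))  # ['cv-screening-model']
```

Systems in the “unacceptable” tier would need to be phased out entirely, while the “high” tier carries the heaviest compliance burden.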

Making AI technology more “human-centric”

Other nations around the world have recently introduced laws targeting AI use and development. US President Biden signed an executive order in October 2023 requiring developers of powerful AI systems to share safety test results with the US government.

The country is also set to form an AI Task Force to better confront digital threats to safety.

The People’s Republic of China likewise announced regulations in 2023 as it seeks to manage the rapid growth of AI technology, with the aim of becoming a global leader in AI by 2030.

Europe has arguably gone one step further with the new AI Act regulations, hoping to inspire trust and foster a culture of AI ethics moving forward.

Several key executives within the technology sector have commented on the announcement.

“I commend the EU for its leadership in passing comprehensive, smart AI legislation. The risk-based approach aligns with IBM's commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems,” comments Christina Montgomery, Vice President and Chief Privacy & Trust Officer at IBM.

“IBM stands ready to lend our technology and expertise – including our watsonx.governance product – to help our clients and other stakeholders comply with the EU AI Act and upcoming legislation worldwide so we can all unlock the incredible potential of responsible AI.”

Also speaking on the news, Keith Fenner, SVP and GM EMEA at Diligent, says: “The onus is now on British and Irish businesses to prepare for compliance. To best prepare, GRC professionals should build and implement an AI governance strategy. This will involve mapping, classifying and categorising the AI systems that they use or have in development based on the risk levels in the framework.

“Compliance is just the tip of the iceberg. To truly thrive in this new era, UK/Irish business leaders need to reimagine their approach to AI. 

“This means finding the right balance between innovation and regulation.”
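The mapping-and-classifying step Fenner describes could be sketched as a simple governance register that records each system alongside the obligations its assumed tier attracts. The obligation lists below are illustrative simplifications of the Act’s requirements, not legal advice, and all system names are hypothetical.

```python
# Hypothetical sketch of an AI governance register: each system is
# recorded with an assumed risk tier, and simplified obligations are
# derived from that tier. Illustrative only, not legal advice.
from dataclasses import dataclass, field

OBLIGATIONS = {
    "unacceptable": ["prohibited - phase out"],
    "high": ["risk management", "data governance",
             "human oversight", "conformity assessment"],
    "limited": ["transparency disclosure"],
    "minimal": [],
}

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: str
    obligations: list = field(default_factory=list)

    def __post_init__(self):
        # Derive the compliance obligations from the risk tier
        self.obligations = OBLIGATIONS.get(self.tier, [])

register = [
    AISystem("cv-screener", "shortlisting job applicants", "high"),
    AISystem("support-bot", "customer FAQ chat", "limited"),
]

for system in register:
    print(f"{system.name}: {', '.join(system.obligations) or 'none'}")
```

A register like this gives compliance teams a single view of which systems carry which duties as enforcement deadlines approach.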


AI Magazine is a BizClik brand
