Sumsub Compliance Lead Talks Effects of EU's AI Act
The world of AI is undergoing a seismic shift as governments across the globe introduce legislation to regulate its use.
In the EU, the AI Act represents the first comprehensive attempt to set standards for the development and deployment of AI systems.
The UK is not far behind, with similar regulatory frameworks on the horizon. For businesses, this raises pressing questions about compliance, liability, and the steps necessary to adapt.
To explore these challenges and opportunities, we spoke with Natália Fritzen, AI and Compliance Specialist at Sumsub, about how businesses can navigate the evolving landscape of AI legislation.
Adapting to AI legislation: a proactive approach
Businesses affected by the EU’s AI Act need to start by assessing their exposure to the legislation. Natália advises, “Start your assessments. Try to identify whether the Act ‘hooks’ your company in any way.”
This initial step is crucial, as the Act’s regulatory requirements depend on factors such as the risk level of an AI system and the company’s role—whether it is a provider or deployer. High-risk AI systems, for example, come under stricter scrutiny. Businesses need to categorise their AI systems carefully and implement corresponding compliance measures.
Natália further explains that speed is of the essence. “It’s key for corporations to move fast when reacting to AI legislation, assessing their business for any potential exposure and liability.” The faster a company adapts, the better it can mitigate risks and align with new legal requirements.
The global reach of EU AI laws
One of the standout aspects of the EU AI Act is its extraterritorial application. According to Natália, “The AI Act applies not only to businesses based in the EU, but it has extraterritorial application as well. For example, if your business provides AI systems whose output can generate effects in the EU, that system must comply with the Act.”
This means that even foreign companies selling AI-based products in the EU must adhere to the Act. As Natália points out, this could make it difficult for global companies to escape its regulatory reach.
While the Act’s potential to influence global AI governance is clear, its “Brussels effect”—the export of EU regulatory models—remains to be fully realised.
Timelines and challenges in enforcement
The EU AI Act’s implementation varies based on the risk category of AI systems.
“Provisions regarding prohibited AI systems will come into force quickly, becoming enforceable six months after the Act enters into force. Provisions regarding high-risk systems, on the other hand, have a grace period of 24 to 36 months,” Natália explains.
While these timelines allow businesses some flexibility, Natália emphasises the importance of enforcement.
She notes, “The effectiveness of these timelines in creating real sea-change in the industry will depend on the willingness of EU institutions to enforce them.” The newly created AI Office will play a pivotal role in this, publishing guidelines and codes of conduct to ensure compliance.
However, certain provisions, such as watermarking AI-generated content, raise questions about practical implementation.
Natália highlights this challenge: “Many experts have concerns over the technology’s technical implementation, accuracy, and robustness. At present, regulations in this area lack important technical details and sufficiently strong enforcement mechanisms.”
Without robust standards, the rise of harmful AI uses, such as deepfake fraud, remains a pressing concern.
For instance, Germany saw a 142% increase in deepfake cases between 2023 and 2024. Natália believes standardisation requirements for watermarks are essential to curbing such risks.
Mitigating AI-related risks
While legislation aims to safeguard society, companies must also take proactive steps to protect themselves.
“Beyond their own responsibilities to use AI safely, companies should look into protecting themselves from AI-related fraud,” Natália stresses.
Fraud networks powered by AI are growing, targeting sectors such as banking with increasing sophistication.
Natália notes that one in every 100 users of online services is linked to fraud networks, creating significant risks for financial institutions.
One solution is to fight AI with AI. “Mastercard, for instance, has doubled their fraud detection rate using generative AI,” she says.
This demonstrates the potential of advanced AI systems to detect and prevent fraud, safeguarding businesses and their customers alike.
The EU AI Act and similar regulations signal a new era for businesses leveraging AI.
By acting swiftly, understanding their obligations, and harnessing the power of AI responsibly, organisations can turn compliance challenges into opportunities for innovation and growth.
AI Magazine is a BizClik brand