Synechron’s Prag Jaodekar on the UK's AI Regulation Journey

We hear from Prag Jaodekar, Technology Director at Synechron, about how the EU AI Act will impact business innovation and how businesses can best use AI safely

With plenty of regulatory changes set to impact countries across Europe in the coming months, the United Kingdom (UK) is having to establish its relationship with AI.

The country played a significant role in seeking to build international collaboration around emerging AI technology, particularly with last year’s AI Safety Summit at Bletchley Park. Likewise, the UK government released its first piece of guidance for AI regulations in February 2024, before announcing that it will be launching a regulatory framework in the near future.

With leading AI companies around the world agreeing to publish safety frameworks and countries such as the United States (US) pushing for enforced reporting, the AI landscape could get more regulated as time goes on.

We hear from Prag Jaodekar, Technology Director at Synechron, about the potential impact of AI regulations on global companies and how governments can best partner with them to create a safer technology landscape.

Adopting responsible AI: The role of private companies

When considering how the world is adopting frameworks for AI, Prag suggests that Europe has taken a more comprehensive route.

“The EU AI Act is a series of rules with real, tangible implications if those rules are breached. It prioritises minimising social harms, emphasising trust, transparency and accountability, whilst trying to allow space for further development and innovation,” he says.

“Meanwhile, the US has concentrated on overseeing areas seen as ‘high risk’, including healthcare and financial services. The White House has issued an Executive Order proposing more overarching federal legislation, but this is at an early stage.”

Indeed, the EU AI Act compartmentalises AI systems into categories of risk, ranging from unacceptable use cases down to medium- and lower-risk applications. This aims to reassure the public, which will hold particular importance in the lead-up to the crucial general elections taking place in both the UK and US this year.

The businesses that develop large language models (LLMs) and generative AI (Gen AI) will be able to help others understand proposed legislation moving forward, suggests Prag, whilst also being mindful of the importance of innovation.

“Companies will want to protect their business model,” he highlights. “They need to ensure they won’t be regulated out of existence. While voluntary measures may help us make AI safer now, the intense competition between companies to release ever-more-capable systems means we will need to remain highly vigilant about meaningful compliance, accountability and effective risk mitigation.”

Facilitating collaboration between governments and technology companies

As a result, governments and businesses will benefit from working in tandem, particularly if they are to achieve responsible AI innovation. Prag explains that there are multiple ways they could achieve this.

“They could look to offer accreditation schemes that would show alignment with regulatory requirements and help gain market access, such as product certification, to maximise the benefits to AI innovators,” he explains.

“Other financial incentives include innovation grants, tax credits, and free or funded participation in supervised test-environment sandboxes. Funding would help start-ups and smaller businesses with fewer organisational resources to participate in research-and-development-focused sandboxes.”

He adds: “These sandboxes could act as a vehicle between UK and international investment companies to build opportunities for participating entities and the wider ecosystem.”

Within the enterprise landscape, particularly in key industries such as financial services, some of the primary interests in AI currently centre on tools designed to improve the customer experience. Likewise, businesses value research tools for their help with idea generation, automation and improving workplace efficiency.

With this in mind, Prag suggests that companies are looking for an AI model that will give them a competitive advantage.

“They will look for clear guidelines about what is and is not allowed in legislation and will want to see how reporting requirements align with existing structures,” he explains. “Broadly, they will want as light a compliance burden as possible to prevent added costs.

“By staying vigilant on all aspects of currently evolving AI legislation across their current jurisdiction, financial firms can proactively adapt their practices and ensure responsible and compliant use of AI technology.”


AI Magazine is a BizClik brand
