UK’s new roadmap to a ‘world-leading’ AI assurance ecosystem

The Centre for Data Ethics and Innovation (CDEI) has published the world's first roadmap to catalyse the development of an AI assurance ecosystem

Data-driven technologies, such as artificial intelligence (AI), have the potential to bring significant benefits to the economy and society. They offer the opportunity to make existing processes faster and more effective; however, they also introduce risks that need to be managed.

The Centre for Data Ethics and Innovation (CDEI), the government expert body enabling trustworthy innovation in data and AI, has set out the steps required to build a ‘world-leading’ AI assurance ecosystem in the UK.

The new roadmap provides a vision of what a mature ecosystem for AI assurance might look like in the UK and how the UK can achieve this vision. 

Antony Walker, Deputy CEO of techUK, said: “Today’s publication marks a key first step in operationalising the UK’s National AI Strategy and the UK leading the way in how a world-leading AI assurance ecosystem and market can become a reality.”


Addressing issues with AI governance

The roadmap, which was a commitment in the UK’s National AI Strategy, follows calls from public bodies, such as the Committee on Standards in Public Life, to build an ecosystem of tools and services that can identify and mitigate the range of risks posed by AI and drive trustworthy adoption.

It addresses one of the biggest issues in AI governance identified by international organisations, including the Global Partnership on AI, the OECD and the World Economic Forum.

The roadmap sets out the roles and responsibilities of different stakeholders, and identifies six priority areas for action:

  1. Generate demand for reliable and effective assurance across the AI supply chain, improving understanding of risks, as well as accountabilities for mitigating them
  2. Build a dynamic, competitive AI assurance market that provides a range of effective services and tools
  3. Develop standards that provide a common language for AI assurance
  4. Build an accountable AI assurance profession to ensure that AI assurance services are also trustworthy and high quality
  5. Support organisations to meet regulatory obligations by setting requirements that can be assured against
  6. Improve links between industry and independent researchers, so that researchers can help develop assurance techniques and identify AI risks


Expanding the UK’s contribution to global AI standards

The CDEI will take a number of steps over the next year to deliver on the roadmap, along with partners across industry, regulators and government. It will support DCMS and the Office for Artificial Intelligence as they work with stakeholders to pilot an AI Standards Hub, which will expand the UK’s contribution to global AI standards.

Chris Philp MP, Minister for Technology and the Digital Economy at the Department for Digital, Culture, Media and Sport, said: “AI has the potential to transform our society and economy; and help us tackle some of the greatest challenges of our time. However, this will only be possible if we are able to manage the risks posed by AI and build public trust in its use.”
