UK’s new roadmap to a ‘world-leading’ AI assurance ecosystem

The Centre for Data Ethics and Innovation (CDEI) has published the world's first roadmap to catalyse the development of an AI assurance ecosystem

Data-driven technologies, such as artificial intelligence (AI), have the potential to bring about significant benefits for the economy and society. They offer the opportunity to make existing processes faster and more effective; however, they also introduce risks that need to be managed.

The Centre for Data Ethics and Innovation (CDEI), the government expert body enabling trustworthy innovation in data and AI, has set out the steps required to build a ‘world-leading’ AI assurance ecosystem in the UK.

The new roadmap provides a vision of what a mature AI assurance ecosystem might look like in the UK, and how that vision can be achieved.

Antony Walker, Deputy CEO of techUK, said: “Today’s publication marks a key first step in operationalising the UK’s National AI Strategy and the UK leading the way in how a world-leading AI assurance ecosystem and market can become a reality.”

Addressing issues with AI governance

The roadmap, which was a commitment in the UK’s National AI Strategy, follows calls from public bodies such as the Committee on Standards in Public Life to build an ecosystem of tools and services that can identify and mitigate the range of risks posed by AI and drive trustworthy adoption.

It addresses one of the biggest issues in AI governance identified by international organisations including the Global Partnership on AI, the OECD and the World Economic Forum.

The roadmap sets out the roles and responsibilities of different stakeholders, and identifies six priority areas for action:

  1. Generate demand for reliable and effective assurance across the AI supply chain, improving understanding of risks, as well as accountabilities for mitigating them
  2. Build a dynamic, competitive AI assurance market that provides a range of effective services and tools
  3. Develop standards that provide a common language for AI assurance
  4. Build an accountable AI assurance profession to ensure that AI assurance services are also trustworthy and high quality
  5. Support organisations to meet regulatory obligations by setting requirements that can be assured against
  6. Improve links between industry and independent researchers, so that researchers can help develop assurance techniques and identify AI risks

Expanding the UK’s contribution to global AI standards

The CDEI will take a number of steps over the next year to deliver on the roadmap, along with partners across industry, regulators and government. It will support DCMS and the Office for Artificial Intelligence as they work with stakeholders to pilot an AI Standards Hub, which will expand the UK’s contribution to global AI standards.

Chris Philp MP, Minister for Technology and the Digital Economy at the Department for Digital, Culture, Media and Sport, said: “AI has the potential to transform our society and economy; and help us tackle some of the greatest challenges of our time. However, this will only be possible if we are able to manage the risks posed by AI and build public trust in its use.”
