UK can set the standard in ethical Artificial Intelligence

Going for gold: Study by The Chartered Institute for IT (BCS) finds the UK can set the “gold standard” in ethical AI

The UK is Europe’s leader in AI, home to companies including DeepMind, Graphcore, Oxbotica, Darktrace and BenevolentAI. However, the country cannot match the funding and support available to similar companies in the US and China.

Because of this, many experts have suggested that the UK should draw on its strengths in leading universities and institutions, diplomacy, and democratic values to become a world leader in creating AI that ‘cares about humanity’.

The importance of Artificial Intelligence in our lives going forward

Dr Bill Mitchell OBE, Director of Policy at BCS, The Chartered Institute for IT and a lead author of the report, said creating the gold standard would be a critical part of the UK’s economic recovery. He added that everyone deserves to have confidence in AI, as it will affect our lives in the coming years, and that the technology should reflect the needs of everyone it is engineered for. He gave the examples of credit scoring apps, cancer diagnosis, and software that could decide whether you get a job.

It is commonly feared that biases in many current AI systems could exacerbate existing societal problems, including the wealth gap and discrimination based on race, sex, age, and more. It is thought that broad access to digital skills and training could combat these tendencies.

Some high-profile AI missteps

Public trust in AI has been damaged by high-profile missteps, including the crisis last summer when an algorithm was used to estimate students’ grades. A follow-up survey from YouGov – commissioned by BCS – found that 53 percent of UK adults had no faith in any organisation to use algorithms to make judgements about them.

In May last year, the national press reported that code written by Professor Neil Ferguson and his team at Imperial College London, which informed the decision to enter lockdown, was “totally unreliable” – a claim that also damaged public trust in software.

The report also found a large disparity in the competence and ethical practices of organisations using AI. The UK government’s National Data Strategy itself states: “Used badly, data could harm people or communities, or have its overwhelming benefits overshadowed by public mistrust.”
