UK can set the standard in ethical Artificial Intelligence

Going for gold: Study by The Chartered Institute for IT (BCS) finds the UK can set the “gold standard” in ethical AI

The UK is home to companies such as DeepMind, Graphcore, Oxbotica, Darktrace and BenevolentAI, and is Europe's leader in AI. However, the country cannot match the funding and support available to similar companies based in countries like the US and China.

Because of this, many experts have suggested that the UK should draw on its strengths in leading universities and institutions, diplomacy, and democratic values to become a world leader in creating AI that 'cares about humanity'.

The growing importance of Artificial Intelligence in our lives

Dr Bill Mitchell OBE, Director of Policy at BCS, The Chartered Institute for IT and a lead author of the report, said creating the gold standard would be a critical part of the UK's economic recovery. He added that everyone deserved to have confidence in AI, as it will affect our lives in the coming years, and that the technology should reflect the needs of everyone it is engineered for. He gave the examples of credit scoring apps, cancer diagnosis, and software that could decide whether you get a job.

It is commonly feared that biases in many current AI systems could worsen existing societal problems, including the wealth gap and discrimination based on race, sex, age, and more. It is thought that broad access to digital skills and training could counter these tendencies.

Some high-profile AI failures

Public trust in AI has been damaged by high-profile missteps, including the crisis last summer when an algorithm was used to estimate students' exam grades. A follow-up survey by YouGov, commissioned by BCS, found that 53 per cent of UK adults had no faith in any organisation to make judgements about them.

In May last year, the national press reported that code written by Professor Neil Ferguson and his team at Imperial College London, which informed the decision to enter lockdown, was "totally unreliable" — reporting that further damaged public trust in software.

The report also found a large disparity in the competence and ethical practices of organisations using AI. The UK government's National Data Strategy echoes this concern, stating: "Used badly, data could harm people or communities, or have its overwhelming benefits overshadowed by public mistrust."
