UK can set the standard in ethical Artificial Intelligence

Going for gold: Study by The Chartered Institute for IT (BCS) finds the UK can set the “gold standard” in ethical AI

The UK is home to companies including DeepMind, Graphcore, Oxbotica, Darktrace and BenevolentAI, and is Europe's leader in AI. However, the country is unable to match the funding and support available to similar companies based in countries like the US and China.

Because of this, many experts have suggested that the UK should tap its strengths in leading universities and institutions, diplomacy, and democratic values to become a world leader in creating AI that 'cares about humanity'.

The growing impact of Artificial Intelligence on our lives

Dr Bill Mitchell OBE, Director of Policy at BCS, The Chartered Institute for IT and a lead author of the report, said creating the gold standard would be a critical part of the UK's economic recovery. He added that everyone deserved to have confidence in AI, as it will affect our lives in the coming years, and that the technology should reflect the needs of everyone it is engineered to serve. He gave the examples of credit scoring apps, cancer diagnosis, and software that could decide whether or not you get a job.

It is commonly feared that biases in current AI systems could exacerbate existing societal problems, including the wealth gap and discrimination based on race, sex, age, and more. It is thought that broader access to digital skills and training could combat these tendencies.

Some high-profile AI failures

Public trust in AI has been damaged by high-profile missteps, including last summer's crisis when an algorithm was used to estimate students' exam grades. A follow-up survey from YouGov, commissioned by BCS, found that 53 percent of UK adults had no faith in any organisation to make judgements about them.

In May last year, the national press reported that code written by Professor Neil Ferguson and his team at Imperial College London, which informed the decision to enter lockdown, was "totally unreliable". This episode also damaged public trust in software.

The report also found a large disparity in the competence and ethical practices of organisations using AI. The UK government's National Data Strategy states: "Used badly, data could harm people or communities, or have its overwhelming benefits overshadowed by public mistrust."
