AI decisions require legal protection, warns BCS

BCS, The Chartered Institute for IT, warns that human reviews of AI decisions need legal protection

BCS, The Chartered Institute for IT, issued the warning following the launch of the ‘Data: A New Direction’ consultation from the Department for Digital, Culture, Media and Sport (DCMS).

Post-Brexit data regulations

The consultation aims to re-examine the UK’s data regulations post-Brexit. EU laws that applied while the UK was part of the bloc, such as GDPR, will be reviewed to determine whether a better balance can be struck between protecting data privacy and ensuring that innovation is not stifled.

Earlier this year the Culture Secretary, Oliver Dowden, said: “There’s an opportunity for us to set world-leading, gold standard data regulation which protects privacy, but does so in as light-touch a way as possible.”

The DCMS is now considering the removal of Article 22 of GDPR, which gives individuals the right to a human review of fully automated decisions.

Examination of automated decisions

Dr Sam De Silva, Chair of BCS’ Law Specialist Group and a partner at law firm CMS, explained: 

“Article 22 is not an easy provision to interpret and there is danger in interpreting it in isolation like many have done.

“We still do need clarity on the rights someone has in the scenario where there is fully automated decision making which could have a significant impact on that individual.”

Artificial intelligence systems are being used for increasingly critical decisions, including whether to offer loans or grant insurance claims. Given the unresolved issues with bias in such systems, there is a real risk that discrimination ends up being automated.

One school of thought is that humans should always make the final decision, especially where it impacts people’s lives. BCS believes that, at a minimum, human reviews of AI decisions should have legal protection.

“Protection of human review of fully automated decisions is currently in a piece of legislation dealing with personal data. If no personal data is involved the protection does not apply, but the decision could still have a life-changing impact on us,” added De Silva.

He went on: “For example, say an algorithm is created deciding whether you should get a vaccine. The data you need to enter into the system is likely to be DOB, ethnicity, and other things, but not name or anything which could identify you as the person.

“Based on the input, the decision could be that you’re not eligible for a vaccine. But any protections in the GDPR would not apply as there is no personal data.”
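De Silva’s scenario can be illustrated with a minimal sketch: a fully automated eligibility check that processes only non-identifying inputs. The field names, age threshold, and decision rule below are purely illustrative assumptions, not a description of any real system.

```python
from datetime import date

def vaccine_eligible(date_of_birth: date, ethnicity: str,
                     underlying_condition: bool) -> bool:
    """Hypothetical automated decision from non-identifying inputs.

    No name or unique identifier is processed, so (per De Silva's
    argument) GDPR's personal-data protections may not apply, even
    though the outcome can significantly affect the individual.
    """
    # Approximate age from date of birth (illustrative only).
    age = (date.today() - date_of_birth).days // 365
    # Example rule: prioritise older people and those with an
    # underlying health condition. Thresholds are assumptions.
    return age >= 50 or underlying_condition

# A fully automated yes/no decision, with no human review and no
# personally identifying data in the inputs.
decision = vaccine_eligible(date(1960, 5, 1), "any", False)
print(decision)
```

The point of the sketch is that nothing in the inputs identifies the person, yet the output is exactly the kind of life-affecting automated decision Article 22 was written to cover.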

BCS says it welcomes the government consulting carefully before making any decision. The body supports the consultation and will be gathering views from across its membership.


