Can AI be trusted?

By Paddy Smith
Ethics in AI is hotly debated. How much control should we give machines? And whose fault will it be when they overstep...

Science fiction is filled with dystopian futures where robots run berserk, maiming and killing humans as they bypass their command lines in a bid to break free from the shackles of human enslavement.

It’s a compelling narrative, and one that increasingly grazes the real world as machine learning takes hold in the mainstream. Of course, Amazon recommending further reading of Peppa Pig books when you’ve just bought your niece’s birthday present is a far cry from “I can’t do that, Dave” HAL, but as the potential for AI becomes clearer, questions over the ethics of its ‘behaviour’ have established themselves with greater urgency.

The European Commission recently published its Assessment List for Trustworthy Artificial Intelligence (ALTAI) precisely to address the potential pitfalls of poorly implemented AI. Already there have been headlines about machine-made decisions in medicine, public health, business and government. Will we ever be able to trust AI with ethical decision making?

For now, the answer is ‘no’. As Prof Paul Clough, head of data science at Peak Indicators, points out, “most AI is relatively shallow in its ability to capture and exhibit human intelligence”.

John Yardley, CEO of Threads Software, takes a similar line. He says, “There is nothing any more magical or morally questionable about AI than there was about the introduction of the Jacquard loom in 1804. Both are/were intended to replace humans and save money. The only difference is that, according to Alan Turing’s original definition, for a machine’s behaviour to be classed as intelligent, it has to be able to fool another human.

“Other than that, it is just another bit of software. If the humans that write that software want it to act like a human, they have the choice of emulating the human brain (eg neural networks) or emulating human behaviour. Programmers generally do not make moral judgements. They simply enjoy creating algorithms and let someone else decide how to commercialise them.”

Yet it is the method of employment that has stirred most controversy. Human input into machine learning algorithms carries with it human bias, leading to AI which entrenches often unseen or unconscious cultural leanings.

Prof Andy Pardoe, founder and managing director of Pardoe Ventures, explains, “Most ethical decision-making issues are not driven by poor AI algorithms but inherent data bias and a lack of approaches to identify and manage these data challenges. We have to acknowledge that what we are seeing with AI now is simply a reflection of reality that is being exposed by the underlying data we are capturing. Training a machine learning algorithm with datasets that are not balanced for each group represented will cause potential predictive biases that reflect the underlying bias within the data.”

His solution is to train AI to spot bias. “Researchers are now working on methods to both identify and resolve such biases, allowing predictive models to be more equalised for each group represented. Even now, experienced data scientists are able to reduce the impact of data bias using various techniques to select a more balanced training dataset.

“As the tools to support the process of training AI algorithms mature, the problems of data bias and issues of ethical decision making will be a thing of the past, and we will be able to better trust the automated decisions from AI systems.”
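To make the kind of dataset rebalancing Pardoe describes concrete, here is a minimal sketch of one simple approach: naively oversampling under-represented groups so that each appears equally often in the training data. The function name, the toy ‘group’/‘hired’ dataset and the use of pandas are illustrative assumptions rather than a method attributed to any of the experts quoted here; real bias-mitigation work usually involves more sophisticated techniques such as reweighting or synthetic data generation.

```python
# A minimal sketch (assumed setup, not from the article): naive oversampling so
# that every group is equally represented in a training set.
import pandas as pd


def rebalance_by_group(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Oversample smaller groups (with replacement) up to the size of the largest group."""
    target = df[group_col].value_counts().max()   # rows in the best-represented group
    resampled = [
        grp.sample(n=target, replace=True, random_state=seed)
        for _, grp in df.groupby(group_col)
    ]
    # Concatenate and shuffle so rows are not ordered by group in the result.
    return pd.concat(resampled).sample(frac=1, random_state=seed).reset_index(drop=True)


# Hypothetical, deliberately skewed dataset: group "B" is under-represented.
data = pd.DataFrame({
    "group": ["A"] * 90 + ["B"] * 10,
    "hired": [1, 0] * 45 + [1, 0] * 5,
})
print(data["group"].value_counts())                               # A: 90, B: 10
print(rebalance_by_group(data, "group")["group"].value_counts())  # A: 90, B: 90
```

Equalising group counts in this way does not, of course, settle whether the underlying data should be driving the decision in the first place, which is exactly the point the experts below go on to make.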

Others feel that control should sit at a human level, with people reviewing algorithmic decision-making to guard against bias.

Dr Nick Lynch, consultant and investment lead at the Pistoia Alliance, says, “We’ve seen AI systems go ‘wrong’ in many industries due to limited diversity in datasets, leading to unethical decision-making. Algorithms used in recruitment, for instance, are known to favour white men. In medicine, adult males dominate the clinical trial population and around 86 percent of participants are white.

“When such data are used to ‘teach’ an algorithm that informs healthcare or recruitment decisions, there is a risk of inaccurate and even harmful outcomes. If organisations and ecosystems work together, they can ensure greater awareness of the ethical issues. Through shared risk analyses and data, we can reduce the pitfalls of AI and ensure more trustworthy outcomes.”

Jabe Wilson, consulting director for text and data analytics at Elsevier R&D Solutions, cautions that while “there’s no one-size-fits-all approach to guarantee ethical decision making when using AI systems”, human intervention is crucial, for now.

“One step that should be taken by everyone to improve trust is ensuring full transparency and accountability. This means allowing researchers to go back and review the algorithms an AI has used, the data those calculations are based on, and the workings of the scientists who interpreted the results so that there can be accountability at every step.

“We need greater transparency around how AI tools operate and how they have reached the conclusions that they have. Not every firm can easily challenge the algorithms these systems are based on, especially researchers without a background in data science. However, every firm can, and should, do more to improve the quality and cleanliness of their data to make sure undetected biases are removed.”

Accountability is central to the role of ethics in AI. Should injury, damage or death occur, who should take the rap: owner or manufacturer?

Chris Holder and Ralph Giles are lawyers specialising in robotics and AI at technology law firm Bristows. Their view: “When there is a dispute, there are usually multiple factors in play and each case will turn on its facts. To avoid the ethics of the decisions made by AI being another issue to consider, ideally, responsibilities should be clearly attributed and shared at the outset.

“We can perhaps draw from data protection and cyber security best practice principles of ‘Privacy by Design’ and ‘Security by Design’ respectively, whereby companies would be responsible for identifying the impact and potential issues that may arise when AI is incorporated in a decision making process, and what measures they should put in place from the start to avert or manage adverse events.

“Where this is not the case, given the above, it is unlikely that apportioning all the blame to a single person or system would be appropriate. As recently stated in a report from the European Commission, ‘from inception to use, best practices promoting ethical responsibility must be fostered and shared. This way, humans can remain accountable to users, instead of complex systems’.”

Akash Sachdeva, partner in litigation at London law firm Joelson, takes a more pragmatic view. He says, “In reality, both manufacturer and owner are at risk in any claim involving AI gone wrong. And anyone wanting to sue will go after those with the deepest pockets – manufacturer or owner.

“For me, the real issue that is going to arise is around ethical decision making. In terms of litigation, the fundamental question is whether one can still be found liable for an offence – even if the ‘correct’ ethical decision was made. For example, if someone purposely determines that one person dies to save five people, which would be considered the ‘correct’ ethical decision, does that mean they are not liable for the death of that one person? Until we get AI determining legal cases in their entirety, every decision that is made by AI will ultimately be determined by human beings: judges, lawyers and juries.”

For Pardoe, the blame rests squarely with the user. “In the same way that it is the person who pulls the trigger, and not the gun manufacturer, who is ultimately responsible for shooting someone, there has to be significant responsibility with the entity who is the user of the AI application.”

But he makes an important distinction, that “as an industry we have an ethical responsibility to ensure we can control and limit the use of such technologies, for specific applications, in a way that is acceptable to the general public and limits the risk to the reputation of the technologies themselves.”

It is a dilemma of the AI ethics debate that machines, free of the poor judgement that clouds humans, could make better ethical decisions, yet cannot unbind themselves from a legal framework built around human fallibility.

Holder and Giles say, “The issue is with us rather than the AI – we would prefer to ask a person to do what they feel is right, which is something that a machine using AI simply cannot do. It remains a machine, not a human being and so it is incapable of ‘emotion’ in the true human sense and is incapable of having an ‘ethical view’.

“A machine using AI cannot, therefore, reach a ‘correct’ ethical answer to [Judith Jarvis Thomson’s famous thought experiment] the trolley problem and it remains a human imperative to construct autonomous machines that react to the data around them, in accordance with current ethical and regulatory standards.”

Yardley agrees that AI should only be held accountable in a situation where “there is negligence involved”. He says, “The customer must ultimately take a view on whether the advantages outweigh the risks and needs to do this in the light of the overall performance of the aid with respect to human options. So if, for instance, the number of accidents caused by driverless cars is a lower proportion than those caused by human drivers, it is not reasonable to seek compensation for a contingency that could not be designed for.”

At the root of the AI ethics debate is the question of scale. As Peter van der Putten, assistant professor at Leiden University and global director for decisioning solutions at Pegasystems, points out, “One maverick human making decisions can have a bad impact on tens to hundreds of decisions, but an issue in models and logic driving automated decisions can affect millions. In that situation, it is justified that more scrutiny should be applied to AI.”

So where next for the AI ethics debate? Lynch argues that organisations must “remain realistic about what AI is capable of, and don’t stretch a tool beyond what it was made to do”. He adds, “It’s crucial that leaders educate employees about AI and set the right expectations. The best applications of AI are those which combine AI models with human decision-making. Organisations must remember that AI can augment humans in drawing conclusions from data, but should never replace them.”

For Holder and Giles, the path ahead is lined with the lessons of the past. “This debate inevitably leads to the production of rules, regulations and standards that machines using AI must adhere to. Machines have been a part of human life for centuries and during each stage of technological advancement, their creation and use have had to fit in with the ethical, legal and societal zeitgeist – and AI-powered machines will be no different.

“Taking the car as an example: when cars were first used on the roads, it was a requirement that a person walk in front of one with a red flag to warn passers-by of what was about to come down the road. That was the norm then. This requirement quickly faded into history as the general public got used to automobiles sharing the roads with horses, and their speed meant that flag waving became more of a hindrance than a help. The rules changed. Society adapted.”
