What is explainable AI?

By Paddy Smith
As algorithms become more complicated and machine-to-machine learning makes judgements harder to trace, explainable AI is becoming essential...

How does AI make decisions? We ought to know, because we built it. But high-profile cases of machine learning bias have taught us that, although we know what goes into the training data and how we have programmed the computer to learn from it, we cannot always predict how the system will behave.

Why is explainable AI important?

As AI is used increasingly in fields where lives are at stake, such as medicine and safety controls, and we begin to consider scenarios where humans are removed from supervisory positions, knowing that your AI is making the right decisions might not be enough. It will be important, not least from a legal perspective, to be able to show how the AI reached its judgements.

Explainable AI and the ‘black box’ phenomenon

The ‘black box’ phenomenon occurs when AI reaches a decision via processes that are not easy to explain. As cybersecurity experts grapple with data poisoning, it is clear that machines can be misled by tampered training data. Equally, without any foul play, engineers may not be able to foresee how data will be processed, leading to unexpected outcomes. Explainable AI seeks to address this.

How does explainable AI help?

With explainable AI, the deep learning algorithm not only produces a result, it also shows its workings. That means that where a decision has been reached using a standard algorithm, but where other factors may have had an influence, data scientists can judge whether outlying parameters should have been taken into account. When an autonomous vehicle causes damage, injury or death, an enquiry can use the explainable AI to assess the soundness of the machine’s ‘decision’.
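To make “showing its workings” concrete, here is a minimal sketch using an inherently interpretable model: a small decision tree whose rules can be printed and audited. The vehicle-style feature names and synthetic data are illustrative assumptions, not taken from any real system.

```python
# A minimal sketch of a model "showing its workings": a shallow decision tree
# whose decision rules can be printed and audited after the fact.
# The feature names and synthetic data are purely illustrative (hypothetical).
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["speed", "obstacle_distance", "road_grip", "visibility"]  # hypothetical
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every decision the tree makes can be read back as a chain of explicit rules.
print(export_text(model, feature_names=feature_names))
print("Prediction for first sample:", model.predict(X[:1])[0])
```

Deep learning models are far less transparent than a small tree, which is exactly why dedicated explainability techniques such as LIME and SHAP (below) are needed.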

Who is developing explainable AI?

Fujitsu Laboratories is working with Hokkaido University to develop a way to generate ‘counterfactual’ explanations (what might have happened in a different scenario). The work draws on LIME, an explainable AI technology that gives simple, interpretable explanations for individual decisions, and SHAP, which weighs the contribution of each explanatory variable. At the moment researchers are applying the approach in three fields: diabetes, loan credit screening and wine evaluation.
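As a rough illustration of what SHAP-style attribution looks like in practice, the sketch below uses the open-source shap library to show which features drove a single prediction. It is not the Fujitsu/Hokkaido system; the credit-screening feature names and synthetic data are assumptions for illustration only.

```python
# A minimal SHAP sketch (illustrative only, not the Fujitsu/Hokkaido system):
# attributing one prediction from a credit-screening style classifier
# to the contribution of each input feature.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["income", "debt_ratio", "age", "credit_history_len"]  # hypothetical
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer assigns each feature a contribution (in log-odds) explaining
# how this one prediction differs from the model's average output.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]

for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")
```

Output of this kind is what lets a data scientist, or a regulator, see which explanatory variables pushed a decision one way or the other.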

When can we expect to see explainable AI in the real world?

Fujitsu AI Technology Wide Learning is planned for commercial use this year, but expect the wider AI community to seize on explainability as a way to fast-track AI’s broader adoption, and acceptance, by society.
