What is explainable AI?

By Paddy Smith
As algorithms become more complex and machine learning makes judgements harder to trace, explainable AI is becoming essential...

How does AI make decisions? We ought to know, because we built it. But high-profile cases of machine learning bias have taught us that while we know what we’re putting into the training data, and how we have programmed the computer to learn from it, we cannot always foresee the outcomes.

Why is explainable AI important?

As AI is increasingly used in fields where lives are at stake – medicine, safety controls – and we start to look at scenarios where humans are removed from supervisory positions, simply trusting that an AI is making the right decisions may not be enough. It will be important, not least from a legal perspective, to be able to show how the AI reached its judgements.

Explainable AI and the ‘black box’ phenomenon

The ‘black box’ phenomenon occurs when AI reaches a decision via processes that are not easy to explain. As cybersecurity experts grappling with data poisoning can attest, machines can be trained on corrupted data and misled. Equally, without any foul play, engineers may not be able to foresee how data will be processed, leading to unexpected outcomes. Explainable AI seeks to address this.
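
To make the data poisoning point concrete, here is a toy sketch (an assumed label-flipping attack on an illustrative scikit-learn dataset and model, not any real incident) comparing a classifier trained on clean labels with the same classifier trained on tampered labels.

```python
# Toy illustration of data poisoning: flip a fraction of the training labels
# and compare the resulting model with one trained on clean data.
# Dataset, model and poisoning rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Poison 30% of the training labels by flipping them.
rng = np.random.default_rng(0)
poisoned_labels = y_train.copy()
idx = rng.choice(len(poisoned_labels), size=int(0.3 * len(poisoned_labels)), replace=False)
poisoned_labels[idx] = 1 - poisoned_labels[idx]
poisoned = RandomForestClassifier(random_state=0).fit(X_train, poisoned_labels)

# Training on tampered labels typically degrades the model's decisions.
print("Accuracy trained on clean labels:   ", clean.score(X_test, y_test))
print("Accuracy trained on poisoned labels:", poisoned.score(X_test, y_test))
```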

How does explainable AI help?

With explainable AI, the deep learning algorithm not only produces a result but also shows its workings. Where a decision has been reached using a boilerplate algorithm, but other factors may have had an influence, data scientists can determine whether outlying parameters should have been taken into account. And when an autonomous vehicle causes damage, injury or death, an enquiry can use the explainable AI to assess the soundness of the machine’s ‘decision’.
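
As a rough illustration of a model ‘showing its workings’, here is a minimal sketch using the open-source LIME library (one of the techniques mentioned below) on an illustrative scikit-learn dataset and model; the dataset, model and number of features shown are assumptions for demonstration, not any production system.

```python
# Minimal "result plus workings" sketch: a classifier predicts, and LIME
# reports which input features drove that single prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one individual decision: which features pushed it, and how far.
instance = data.data[0]
print("Prediction:", data.target_names[model.predict([instance])[0]])
explanation = explainer.explain_instance(instance, model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"  {feature}: {weight:+.3f}")
```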

Who is developing explainable AI?

Fujitsu Laboratories is working with Hokkaido University to develop a way to generate ‘counterfactual’ explanations – what would have needed to change for a different outcome. The approach combines LIME, an explainable AI technique that gives simple, interpretable explanations for individual decisions, and SHAP, which quantifies how much each explanatory variable contributed to a result. So far the researchers are applying it in three areas: diabetes, loan credit screening and wine evaluation.
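
To make the idea of a counterfactual concrete, here is a hand-rolled toy sketch on scikit-learn’s wine dataset: it searches for the smallest single-feature change that flips a classifier’s decision. This is a simplified illustration under assumed choices of dataset, model and search granularity, not the Fujitsu and Hokkaido University method itself.

```python
# Toy counterfactual search: nudge each feature in turn and report the
# smallest single-feature change that flips the classifier's decision,
# i.e. "what would have had to differ for a different outcome?".
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

data = load_wine()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

sample = data.data[0].copy()
original = model.predict([sample])[0]

best = None  # (relative step, feature name, new value, new class)
for i, name in enumerate(data.feature_names):
    span = data.data[:, i].max() - data.data[:, i].min()
    for step in np.linspace(0.05, 1.0, 20):  # nudge size as a fraction of the feature's range
        flipped_here = False
        for direction in (+1.0, -1.0):
            candidate = sample.copy()
            candidate[i] += direction * step * span
            new_class = model.predict([candidate])[0]
            if new_class != original:
                flipped_here = True
                if best is None or step < best[0]:
                    best = (step, name, candidate[i], new_class)
        if flipped_here:
            break  # smallest nudge found for this feature; move to the next one

print(f"Model's decision for this wine sample: class {original}")
if best is not None:
    step, name, value, new_class = best
    print(f"Counterfactual: had '{name}' been {value:.2f}, "
          f"the model would instead have predicted class {new_class}")
else:
    print("No single-feature change within the search range flips the decision.")
```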

When can we expect to see explainable AI in the real world?

Fujitsu AI Technology Wide Learning is planned for commercial use this year, and expect the wider AI community to jump on explainability as a way to fast-track AI’s broader adoption, and acceptance, by society.
