What is explainable AI?

By Paddy Smith
As algorithms become more complicated and machine-to-machine learning makes judgements harder to interpret, explainable AI is becoming essential...

How does AI make decisions? We ought to know because we built it. But high-profile cases of machine learning bias have taught us that while we know what we’re putting into the training data, and how we have programmed the computer to learn from it, we cannot always predict the outcomes.

Why is explainable AI important?

As AI is used increasingly in fields where lives are at stake – medicine, safety controls – and we start to look at scenarios where humans are removed from supervisory roles, knowing that an AI is making the right decisions might not be enough. It will be important, not least from a legal perspective, to be able to show how the AI reached its judgements.

Explainable AI and the ‘black box’ phenomenon

The ‘black box’ phenomenon occurs when AI reaches a decision via processes that are not easy to explain. As cybersecurity experts grapple with data poisoning, we know that machines can be deliberately misled through their training data. Equally, without any foul play, engineers may not be able to foresee how data will be processed, leading to unexpected outcomes. Explainable AI seeks to address this.

How does explainable AI help?

With explainable AI, the deep learning algorithm not only produces a result, but also shows its workings. That means that where a decision has been reached using a boilerplate algorithm, but where other factors may have had an influence, data scientists will be able to judge whether outlying parameters should have been taken into account. When an autonomous vehicle causes damage, injury or death, an inquiry can use the explainable AI to assess the soundness of the machine’s ‘decision’.
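To make that concrete, here is a minimal sketch of what a ‘show your workings’ readout can look like. It is illustrative only – not drawn from any vendor’s product – and uses the open-source shap library (one widely used explainability tool, mentioned below) with a scikit-learn model trained on synthetic data; every name and number is a placeholder.

```python
# Illustrative sketch (assumed setup, synthetic data): per-feature
# contributions that explain a single model prediction.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic tabular data standing in for, say, sensor readings.
X, y = make_regression(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestRegressor(random_state=0).fit(X, y)

# Shapley values: how much each feature pushed this one prediction
# above or below the model's average output.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]

print("baseline (average prediction):", explainer.expected_value)
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
```

Instead of a bare prediction, the output attaches a signed contribution to each input, which is the kind of evidence an inquiry or audit could examine.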

Who is developing explainable AI?

Fujitsu Laboratories is working with Hokkaido University to develop a way to generate ‘counterfactual’ explanations (what would have needed to be different for the AI to reach another outcome). This uses LIME, an explainable AI technology that gives simple, interpretable explanations for individual decisions, and SHAP, which quantifies how much each explanatory variable contributed to the outcome. At the moment the researchers are working in three fields: diabetes, loan credit screening and wine evaluation.
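As a rough, hypothetical illustration of the loan credit screening case – an assumed sketch using the open-source lime package and synthetic applicant data, not Fujitsu’s or Hokkaido University’s implementation – a LIME explanation lists which features pushed a single application towards approval or rejection:

```python
# Hypothetical loan-screening example (synthetic data, placeholder features).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed", "missed_payments"]
X = rng.normal(size=(1000, 4))
# Hypothetical rule: high debt ratio and missed payments drive rejections.
y = (X[:, 1] + X[:, 3] > 0.5).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["approved", "rejected"],
    mode="classification",
)

# Explain one applicant's decision: which features pushed it towards rejection?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature_rule, weight in explanation.as_list():
    print(f"{feature_rule}: {weight:+.3f}")
```

The weighted rules (for example, a positive weight on a high debt ratio) give a loan officer or regulator a plain-language account of an individual decision.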

When can we expect to see explainable AI in the real world?

Fujitsu AI Technology Wide Learning is planned for commercial use this year, but expect the broader AI community to seize on explainability as a way to fast-track AI’s adoption, and acceptance, by society.

