Feb 4, 2021

What is explainable AI?

By Paddy Smith
As algorithms become more complex and machine-to-machine learning makes judgements harder to interpret, explainable AI is becoming essential.

How does AI make decisions? We ought to know, because we built it. But high-profile cases of machine learning bias have taught us that while we know what we’re putting into the training data, and how we have programmed the computer to learn from it, we cannot always predict the outcomes.

Why is explainable AI important?

As AI is used increasingly in fields where lives are at stake – medicine, safety controls – and we start to look at scenarios where humans are removed from supervisory roles, knowing that your AI is making the right decisions might not be enough. It will be important, not least from a legal perspective, to be able to show how the AI reached its judgements.

Explainable AI and the ‘black box’ phenomenon

The ‘black box’ phenomenon occurs when AI reaches a decision via processes that are not easy to explain. As cybersecurity experts grappling with data poisoning have learned, machines can be deliberately misled through their training data. Equally, without any foul play, engineers may not be able to foresee how data will be processed, leading to unexpected outcomes. Explainable AI seeks to address this.

How does explainable AI help?

With explainable AI, the deep learning algorithm not only produces a result but also shows its workings. Where a decision has been reached using a standard algorithm but other factors may have had an influence, data scientists can determine whether outlying parameters should have been taken into account. And when an autonomous vehicle causes damage, injury or death, an enquiry can use the explainable AI to assess the soundness of the machine’s ‘decision’.

Who is developing explainable AI?

Fujitsu Laboratories is working with Hokkaido University on a way to generate ‘counterfactual’ explanations (what might have happened in a different scenario). The work draws on LIME, an explainable AI technology that gives simple, interpretable explanations for individual decisions, and SHAP, which attributes a decision to its explanatory variables. At the moment the scientists are working in three fields: diabetes, loan credit screening and wine evaluation.
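Both LIME and SHAP are open-source Python libraries, so the underlying technique is easy to try outside Fujitsu’s research. Below is a minimal sketch of producing SHAP attributions for individual predictions, using scikit-learn’s bundled diabetes dataset and a stand-in model rather than Fujitsu’s actual screening system:

```python
# Minimal sketch: SHAP feature attributions for a toy diabetes model.
# The model and data are illustrative stand-ins, not Fujitsu's system.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# Each row shows how much every feature (bmi, blood pressure, ...) pushed
# that patient's prediction away from the model's average output.
for feature, value in zip(X.columns, shap_values[0]):
    print(f"{feature:>6}: {value:+.2f}")
```

A LIME explanation is built along similar lines: it fits a small interpretable model around one specific prediction and reports which features drove the result.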

When can we expect to see explainable AI in the real world?

Fujitsu AI Technology Wide Learning is planned for commercial use this year, and expect the wider AI community to jump on explainability as an opportunity to fast-track AI’s wider adoption, and acceptance, by society.


Jun 10, 2021

Google is using AI to design faster and improved processors

Google scientists claim their new method of designing the company’s AI accelerators has the potential to save thousands of hours of human effort

Engineers at Google are now using artificial intelligence (AI) to design faster and more efficient processors, and then using those chip designs to develop the next generation of specialised computers that run the same types of AI algorithms.

Google designs its own computer chips rather than buying commercial products. This allows the company to optimise the chips to run its own software, but the process is time-consuming and expensive, usually taking two to three years.

Floorplanning, a stage of chip design, involves taking the finalised circuit diagram of a new chip and arranging the components into an efficient layout for manufacturing. Although the functional design of the chip is complete at this point, the layout can have a huge impact on speed and power consumption. 

Previously, floorplanning has been a highly manual and time-consuming task, says Anna Goldie at Google. Teams would split larger chips into blocks and work on parts in parallel, fiddling around to find small refinements, she says.
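To see why layout matters, it helps to know that designers score candidate floorplans with proxy metrics such as half-perimeter wirelength (HPWL), where shorter estimated wiring generally means a faster, lower-power chip. Here is a toy illustration, with block names, coordinates and nets invented for the example:

```python
# Toy illustration: half-perimeter wirelength (HPWL), a standard proxy
# for placement quality. All block names and coordinates are made up.
def hpwl(placement, nets):
    """Sum over nets of the half-perimeter of each net's bounding box."""
    total = 0.0
    for net in nets:
        xs = [placement[block][0] for block in net]
        ys = [placement[block][1] for block in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Two candidate placements of the same four blocks (x, y positions).
compact = {"cpu": (0, 0), "cache": (1, 0), "dma": (0, 1), "io": (1, 1)}
sprawled = {"cpu": (0, 0), "cache": (5, 0), "dma": (0, 4), "io": (6, 5)}
nets = [("cpu", "cache"), ("cpu", "dma"), ("cache", "io")]

print(hpwl(compact, nets))   # 3.0  -> less wiring, lower power
print(hpwl(sprawled, nets))  # 15.0 -> same circuit, worse layout
```

Real floorplanners weigh wirelength against congestion, density and timing constraints, but the principle is the same: the circuit is fixed, and only the arrangement is being scored.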

Fast chip design

In a new paper, Googlers Azalia Mirhoseini and Anna Goldie, and their colleagues, describe a deep reinforcement-learning system that can create floorplans in under six hours. 

They have created a convolutional neural network system that performs the macro block placement by itself within hours, aiming for a high-quality layout; the standard cells are then automatically placed in the gaps by other software. The system can produce a workable floorplan far faster than human engineers, and, according to the AI scientists, the neural network gradually improves its placement skills as it gains experience.
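In the paper, placements are chosen by a policy network trained with reinforcement learning. Purely as a sketch of the sequential loop that network automates, the toy below places invented macros one at a time, with a greedy wirelength-minimising choice standing in for the learned policy:

```python
# Toy sketch of sequential macro placement. Google's system replaces the
# greedy choice below with a reinforcement-trained policy network; this
# only shows the shape of the loop. Macros and nets are invented.
import itertools

GRID = 8  # an 8x8 grid of candidate macro locations
macros = ["sram0", "sram1", "mac_array", "ctrl"]  # hypothetical blocks
nets = [("sram0", "mac_array"), ("sram1", "mac_array"),
        ("ctrl", "mac_array"), ("ctrl", "sram0")]

def cost(placement):
    """Half-perimeter wirelength over nets whose macros are all placed."""
    total = 0
    for net in nets:
        if all(m in placement for m in net):
            xs = [placement[m][0] for m in net]
            ys = [placement[m][1] for m in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

placement = {}
for macro in macros:  # one macro placed per decision step
    free = [c for c in itertools.product(range(GRID), repeat=2)
            if c not in placement.values()]
    # Greedy stand-in for the policy network: try every free cell and
    # keep the one that minimises the running wirelength cost.
    placement[macro] = min(free, key=lambda c: cost({**placement, macro: c}))

print(placement, "wirelength:", cost(placement))
```

The real system optimises a richer objective and, crucially, generalises across chips rather than re-solving each one greedily.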

In their paper, the Googlers said their neural network is "capable of generalising across chips — meaning that it can learn from experience to become both better and faster at placing new chips — allowing chip designers to be assisted by artificial agents with more experience than any human could ever gain."

Using a pre-trained neural net, generating a floorplan can take less than a second, and with up to a few hours of fine-tuning the network, the software can match or beat a human at floorplan design, according to the paper, depending on which metric you use.

"Our method was used to design the next generation of Google’s artificial-intelligence accelerators, and has the potential to save thousands of hours of human effort for each new generation," the Googlers wrote. "Finally, we believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields.
