Feb 17, 2021

What is graph AI?

Paddy Smith
3 min
Is the next chapter in the AI story graph AI? What’s the difference between machine learning and graph AI? Read on to find out...

AI isn’t a new phenomenon. It has simply become more useful with the advent of machine learning – particularly its deep learning branch – and with businesses’ increased focus on collecting and mining data. But ML has limitations too, and those are what graph-based AI – or graph AI – promises to help with.

Confused about AI, ML, graph?

AI is about knowledge. Early AI had strict parameters and was used in highly specific ways. As you can imagine, that was costly to implement and relatively narrow in scope. Then came machine learning – and, within it, deep learning – which uses statistical modelling to recognise similarities and anomalies in data, the latter being particularly useful for applications such as predictive maintenance. ML is great at pattern spotting. It can ‘learn’ what things should ‘look’ like and report when they do, or don’t. That means ML can be used to identify images, speech patterns and so on. Graph AI goes beyond that.
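To make the anomaly-spotting idea concrete, here is a minimal sketch – not any vendor’s actual system – that flags readings falling far from the statistical norm, the kind of check a predictive-maintenance pipeline might run. The sensor trace and threshold are invented for illustration.

```python
# Minimal sketch of statistical anomaly detection: flag readings more
# than `threshold` standard deviations from the mean of the series.
from statistics import mean, stdev

def find_anomalies(readings, threshold=2.0):
    """Return the readings that deviate unusually far from the mean."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [x for x in readings if abs(x - mu) > threshold * sigma]

# A hypothetical vibration-sensor trace: mostly steady, with one spike
# that could indicate a part needing maintenance.
trace = [10.1, 10.0, 9.9, 10.2, 10.0, 10.1, 9.8, 10.0, 25.0, 10.1]
print(find_anomalies(trace))  # -> [25.0]
```

Real ML systems learn far richer notions of “normal” than a mean and standard deviation, but the shape of the task – learn what things should look like, report when they don’t – is the same.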

What are the limitations of machine learning vs graph AI?

Machine learning has taken quite a bit of flak for being biased, a problem caused by imbalanced training datasets, or simply poor-quality training data. That is relatively easily fixed by applying more rigorous standards to the data fed in at the training stage. A larger problem is that ML has a tendency to ignore contextual information, because it works best when its data are regarded in isolation. That means ML can miss things that would be staggeringly obvious to a human. More worryingly, it means the algorithms can be gamed by bad actors engaging in corporate sabotage or cybercrime.

What is graph AI?

Graph AI isn’t reinventing the AI wheel, but it is tuning the engine and fitting alloys. By pivoting to graph modelling, the AI can look across a number of different datasets to infer context and probe relational correspondence. That means when things change in one dataset, the AI doesn’t ring the alarm bell until it has looked for corresponding anomalies in other relevant datasets to establish whether the problem really is isolated.

Where does graph AI get its data?

Anywhere, depending on the application. It could be meteorological data, or transactional data relating to specific customers. It could be social networks which can be mined for social insight. It could be traffic movement, content, scientific data or IoT. It could be all of these things combined and correlated to produce contextual graph-based AI that is aware of more than life inside its own training data ‘bubble’.

What can graph AI be used for?

In a word, everything. Graph neural networks are set to become a big trend in enterprise IT. The computational power required is enormous, but the timely advent of edge and cloud computing means it is no longer a case of upgrading expensive on-premises hardware. And products capable of handling graph-based data analytics, such as Microsoft Azure’s Cosmos DB and AWS’s Neptune, are already on the market. Advances in network architecture, twinned with a surge in the move to cloud, are sure to accelerate adoption of this powerful new breed of AI.
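The core idea behind the graph neural networks mentioned above can be sketched in a few lines: each node updates its feature by aggregating those of its neighbours, so information flows along the graph’s relationships. This is a bare-bones, framework-free illustration (one round of mean aggregation on scalar features), not a production GNN.

```python
# One round of message passing: each node's new feature is the average
# of its own feature and its neighbours' features.

def message_pass(features, edges):
    """features: {node: value}; edges: list of (node_a, node_b) pairs."""
    neighbours = {n: [n] for n in features}  # include self
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    return {
        n: sum(features[m] for m in ns) / len(ns)
        for n, ns in neighbours.items()
    }

feats = {"a": 1.0, "b": 2.0, "c": 6.0}
edges = [("a", "b"), ("b", "c")]
print(message_pass(feats, edges))  # -> {'a': 1.5, 'b': 3.0, 'c': 4.0}
```

Real GNNs use learned weight matrices and many such rounds, but the principle is the same: a node’s representation comes to reflect its context in the graph, not just its own data.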


Jun 10, 2021

Google is using AI to design faster and improved processors

2 min
Google scientists claim their new method of designing Google’s AI accelerators has the potential to save thousands of hours of human effort

Engineers at Google are now using artificial intelligence (AI) to design faster and more efficient processors, then using those chip designs to develop the next generation of specialised computers that run the same type of AI algorithms.

Google designs its own computer chips rather than buying commercial products. This allows the company to optimise the chips to run its own software, but the process is time-consuming and expensive, usually taking two to three years.

Floorplanning, a stage of chip design, involves taking the finalised circuit diagram of a new chip and arranging the components into an efficient layout for manufacturing. Although the functional design of the chip is complete at this point, the layout can have a huge impact on speed and power consumption. 

Previously floorplanning has been a highly manual and time-consuming task, says Anna Goldie at Google. Teams would split larger chips into blocks and work on parts in parallel, fiddling around to find small refinements, she says.

Fast chip design

In a new paper, Googlers Azalia Mirhoseini and Anna Goldie, and their colleagues, describe a deep reinforcement-learning system that can create floorplans in under six hours. 

They have created a convolutional neural network system that performs the macro block placement by itself within hours to achieve an optimal layout; the standard cells are automatically placed in the gaps by other software. This ML system should be able to produce an ideal floorplan far faster than humans at the controls. The neural network gradually improves its placement skills as it gains experience, according to the AI scientists. 
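As a rough illustration of what such a placer optimises, here is a toy stand-in for the placement objective. The paper’s actual reward combines wirelength with congestion and density; this sketch scores a placement by half-perimeter wirelength (HPWL) alone, a standard proxy in chip placement, using invented block names and coordinates.

```python
# Half-perimeter wirelength (HPWL): for each net, the half-perimeter of
# the bounding box around the blocks it connects. Smaller is better.

def hpwl(placement, nets):
    """placement: {block: (x, y)}; nets: tuples of connected blocks."""
    total = 0
    for net in nets:
        xs = [placement[block][0] for block in net]
        ys = [placement[block][1] for block in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Hypothetical macro blocks on a grid, and the nets wiring them together.
placement = {"cpu": (0, 0), "cache": (1, 0), "io": (4, 3)}
nets = [("cpu", "cache"), ("cache", "io")]
print(hpwl(placement, nets))  # -> 7
```

A reinforcement-learning placer like the one described would place blocks one at a time and receive a score of this kind as feedback, gradually learning placements that keep heavily connected blocks close together.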

In their paper, the Googlers said their neural network is "capable of generalising across chips — meaning that it can learn from experience to become both better and faster at placing new chips — allowing chip designers to be assisted by artificial agents with more experience than any human could ever gain."

Generating a floorplan can take less than a second using a pre-trained neural net, and with up to a few hours of fine-tuning the network, the software can match or beat a human at floorplan design, according to the paper, depending on which metric you use.

"Our method was used to design the next generation of Google’s artificial-intelligence accelerators, and has the potential to save thousands of hours of human effort for each new generation," the Googlers wrote. "Finally, we believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields."
