Graphcore’s Intelligent Processing Unit for AI and ML
Bristol, United Kingdom-based startup Graphcore offers a microprocessor specifically designed for AI and machine learning tasks.
Known as the Intelligence Processing Unit (IPU), the microprocessor has applications in areas such as AI computing in data centres. Traditionally, graphics processing unit (GPU)-based systems take on the workload of AI training and other tasks thanks to their highly parallel nature, but Graphcore says its offering delivers improved performance.
Since its foundation in 2016, the company has raised funding across six rounds. Its latest was its largest to date, raising $222mn alone and lifting its valuation to around $2.8bn. The round was led by Ontario Teachers' Pension Plan alongside a raft of existing investors such as Robert Bosch Venture Capital and Dell Technologies Capital, and new investors Fidelity International and Schroders.
In a statement, Graphcore CEO and co-founder Nigel Toon said: "Having the backing of such respected institutional investors says something very powerful about how the markets now view Graphcore. The confidence that they have in us comes from the competence we have demonstrated building our products and our business. We have created a technology that dramatically outperforms legacy processors such as GPUs, a powerful set of software tools that are tailored to the needs of AI developers, and a global sales operation that is bringing our products to market."
The company said it would use the funds to drive its expansion globally and further develop its IPU product.
Olivia Steedman, Senior Managing Director, Teachers' Innovation Platform (TIP) at Ontario Teachers' Pension Plan, said: "The market for purpose-built AI processors is expected to be significant in the coming years because of computing megatrends like cloud technology and 5G and increased AI adoption, and we believe Graphcore is poised to be a leader in this space."
What is neuromorphic AI?
AI is dead. Long live AI?
AI is evolving. The first generation of AI used explicit logic and rules to draw conclusions in a very specific manner. A good example would be IBM's Deep Blue computer, which was programmed to play chess to championship standard. That approach hasn't disappeared, but it has been augmented by more perceptive deep learning networks that can analyse a broader set of parameters and provide intelligent insights.
And neuromorphic AI is next?
Correct. Neuromorphic computing is a way of designing hardware – microprocessors, really – to work more like human brains. The idea is that this new iteration of AI hardware will allow machine learning of the future to deal better with ambiguity and contradiction, things that are currently difficult to process for computers.
How does neuromorphic AI work?
The problem with current chip architecture is that it is not very efficient. Because of the linearity of the process, the chips have to be built with a massive amount of horsepower just in case it's needed. Building a human brain that way would be unfeasible, so engineers have had to rethink the nature of chip design in their quest to get computers to perform more of the tasks human brains are good at. Enter SNNs.
What’s an SNN?
A spiking neural network (SNN) is, in the words of chipmaker Intel, “a novel model for arranging those elements to emulate natural neural networks that exist in biological brains.” Each ‘neuron’ fires independently, triggering other neurons only when they are required. Intel again: “By encoding information within the signals themselves and their timing, SNNs simulate natural learning processes by dynamically remapping the synapses between artificial neurons in response to stimuli.”
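The firing behaviour described above can be sketched in a few lines of code. Below is a minimal leaky integrate-and-fire (LIF) neuron, a common building block of spiking neural networks; the function name, parameters, and input values are hypothetical choices for illustration, not Intel's implementation.

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Return the time steps at which a single LIF neuron spikes.

    The neuron accumulates ('integrates') incoming current, slowly
    loses charge ('leaks'), and fires only when its membrane
    potential crosses a threshold -- staying silent otherwise.
    """
    v = 0.0          # membrane potential
    spikes = []      # time steps at which the neuron fired
    for t, current in enumerate(input_current):
        v = leak * v + current    # leak a fraction, then integrate input
        if v >= threshold:        # fire only when the threshold is crossed
            spikes.append(t)
            v = reset             # potential resets after a spike
    return spikes

# A steady weak input accumulates charge until the neuron spikes,
# resets, and begins charging again.
print(simulate_lif([0.3] * 10))
```

The key contrast with a conventional artificial neuron is that output here is sparse and event-driven: between spikes the neuron does nothing, which is what lets neuromorphic hardware avoid the always-on horsepower problem described earlier.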