A team of researchers from the University of Pennsylvania’s School of Engineering and Applied Science, in partnership with scientists from Sandia National Laboratories and Brookhaven National Laboratory, has introduced a computing architecture specifically designed for use in artificial intelligence (AI).
It is hoped the new chip will help usher in a new wave of hardware and software co-design. Until now, the AI industry has been dominated by software companies, due to the unique challenges presented by Big Data, artificial intelligence and machine learning.
Co-led by Deep Jariwala, Assistant Professor in the Department of Electrical and Systems Engineering (ESE), Troy Olsson, Associate Professor in ESE, and Xiwen Liu, a PhD candidate in Jariwala’s Device Research and Engineering Laboratory, the research group has adapted an approach known as compute-in-memory (CIM) for the new chip architecture.
AI presents a major challenge to conventional computing architecture, say the researchers. In standard models, memory storage and computing take place in different parts of the machine, and data must move from an area of storage to a CPU or GPU for processing.
CIM architectures reduce transfer time and minimise energy consumption by processing and storing data in the same place. The team’s new CIM design, the subject of a recent study published in Nano Letters, is transistor-free and optimised for Big Data applications.
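The core idea can be made concrete with a toy sketch. This is not the team’s ferrodiode design; it is a simplified, hypothetical model of a resistive crossbar, a common way to picture compute-in-memory, in which each stored value acts like a conductance so a multiply-accumulate happens where the data lives rather than after a fetch to a CPU or GPU.

```python
def crossbar_mac(weights, inputs):
    """Multiply-accumulate performed across a stored weight array.

    weights: list of rows (the values held in the memory cells)
    inputs:  signals applied to the crossbar's input lines
    Each output is the summed contribution of one row -- storage and
    computation occur in the same (modelled) physical array, so no
    separate transfer step from memory to a processor is needed.
    """
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

# Values "stored" in the array; no memory-to-processor transfer is modelled.
stored = [
    [2, 5, 1],
    [7, 0, 3],
]
print(crossbar_mac(stored, [1, 2, 3]))  # -> [15, 16]
```

In a conventional architecture, each element of `stored` would first have to be moved across a bus to the processor; the point of CIM is that the read and the multiply happen in one step inside the array itself.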
As AI software continues to develop and the rise of the Internet of Things produces larger data sets, researchers have focused on hardware redesign to deliver improvements in speed and energy usage.
“Even when used in a compute-in-memory architecture, transistors compromise the access time of data,” says Jariwala. “They require a lot of wiring in the overall circuitry of a chip and thus use time, space and energy in excess of what we would want for AI applications. The beauty of our transistor-free design is that it is simple, small and quick and it requires very little energy.”
Mobile tech and wearable devices can benefit from new chip
The advance is not limited to circuit-level design, say the researchers; the new computing architecture builds on the team’s earlier work in materials science focused on a semiconductor known as scandium-alloyed aluminium nitride (AlScN).
“One of this material’s key attributes is that it can be deposited at temperatures low enough to be compatible with silicon foundries,” says Olsson. “Most ferroelectric materials require much higher temperatures. AlScN’s special properties mean our demonstrated memory devices can go on top of the silicon layer in a vertical hetero-integrated stack.”
Olsson compares this to the difference between a multistory parking garage with a hundred-car capacity and a hundred individual parking spaces spread out over a wider area. “The same is the case for information and devices in a highly miniaturised chip like ours,” he explains. “This efficiency is as important for resource-constrained applications, such as mobile or wearable devices, as it is for applications that are extremely energy intensive, such as data centres.”
In 2021, the team established the viability of AlScN as a compute-in-memory powerhouse. In the most recent study debuting the transistor-free design, the team observed that their CIM ferrodiode may be able to perform up to 100 times faster than a conventional computing architecture.
“It is important to realise that all of the AI computing that is currently done is software-enabled on a silicon hardware architecture designed decades ago,” says Jariwala. “This is why artificial intelligence as a field has been dominated by computer and software engineers. Fundamentally redesigning hardware for AI is going to be the next big game changer in semiconductors and microelectronics. The direction we are going in now is that of hardware and software co-design.”