Human intelligence: The future of cloud & edge intelligence

By Dr. Biswa Sengupta, Technical Fellow and Director of Machine Learning Research at Zebra Technologies
Dr. Biswa Sengupta, Technical Fellow and Director of Machine Learning Research at Zebra Technologies, discusses AI and the path to edge intelligence

The world of artificial intelligence (AI) loves borrowing ideas and analogies from the world of human cognition. For example, ‘intelligence’, ‘neural networks’, ‘deep learning’, ‘reinforcement learning’ and ‘computer vision’ are all terms associated with the mind’s powers to sense, analyse and recommend actions. We could say that the central nervous system (CNS) – the data centre – is the command centre of the body, made up of the brain and spinal cord. It connects to the peripheral nervous system (PNS) – the edge – which represents the front line, looking after the body’s internal and external functions and reactions. In the same way, an autonomous mobile robot (AMR) or a front-line worker with a mobile computer or tablet can be connected to a cloud-based software platform and apps, with data being created and flowing between the two. 

Within the human nervous system, there’s also something called a reflex arc – a rapid reaction that involves the nervous system yet bypasses the brain, meaning our body can react fast to something like putting our hand too near a flame. It’s a bit like a handheld computer that has enough compute power to enable a front-line worker to do their best work, without having to log into a cluster or a workstation computer. 

Examples from elsewhere in the animal world include the octopus, which has a central brain even though two-thirds of its neurons are found in its tentacles, forming what’s known as ganglia – smaller, decentralised brains that allow the tentacles to sense the world around them, make decisions and carry out actions. Another example is the leech, which has a brain in its head, 21 individual body ganglia and seven tail ganglia, forming a sort of ‘mesh’ of brains that lets it function. In the octopus and the leech, we find inspiration for the future of edge intelligence. Let me explain.

Cloud services and various AI application programming interfaces (APIs) have catapulted digital transformation across multiple industries, including warehousing and retail. In the last several years, the Internet of Things (IoT) concept has seen data from millions of sensors being sent to, and analysed using, infrastructure on both public and private clouds. 

Use cases range from handheld devices recording the current inventory balance and dynamically ordering the requisite items, to collating information from vision feeds that tell us whether a specific person is wearing the necessary compliance and public-safety gear during their shift in a warehouse. In the first case, a time series is continually sent from the edge-based handheld device or a fixed camera to cloud data-analytics services. In the second, image grabs are processed with deep-learning models in the cloud using person detection, tracking and re-identification modules. In both cases, the edge device is simply a conduit for transferring information to the cloud. This is very similar to the nervous system, where information is triaged by the PNS (edge) before being passed on to the CNS (cloud).
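To make the “edge as conduit” pattern concrete, here is a minimal, illustrative sketch of an edge handheld pushing one inventory time-series sample to a cloud analytics service. The endpoint URL, payload fields and the stubbed read_inventory_level() helper are hypothetical placeholders for the example, not a real Zebra or cloud-provider API.

```python
import json
import time
import urllib.request

# Hypothetical cloud analytics endpoint; replace with your own service URL.
CLOUD_ENDPOINT = "https://example.com/api/inventory-timeseries"


def read_inventory_level() -> int:
    """Placeholder for an on-device scan; returns the current item count."""
    return 42  # stubbed reading for illustration


def push_reading(device_id: str) -> None:
    """Send one time-series sample from the edge device to the cloud."""
    payload = {
        "device_id": device_id,
        "timestamp": time.time(),
        "inventory_level": read_inventory_level(),
    }
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("cloud responded with status", resp.status)


if __name__ == "__main__":
    push_reading("handheld-001")
```

In this pattern all of the analytics happens in the cloud; the device only senses and forwards, which is exactly the limitation edge intelligence sets out to relax.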

Why do we need edge intelligence?

A network of sensors running closer to end-users reduces latency, saves bandwidth, pre-processes data on the device before passing it to a more extensive cloud computing infrastructure, and helps guarantee the privacy of the data being collected and inferred from those devices. These form some of the core requirements for edge intelligence. Edge devices and sensors with modest onboard computing elastically form a network (a mesh), with devices such as chatbots, dashcams, smartphones and temperature sensors intermittently joining it to collect, compute and share information. In the IoT world of yesteryear, these sensors simply sensed the world and faithfully transmitted signals to a mothership (a public or private cloud). 

But consider this: what if two or more edge devices could share their inputs and their limited onboard compute to attain a goal? Many such devices could then be brought together in a mesh network to solve, for example, an asset-tracking problem in a warehouse or retail store. Loosely, use cases range from autonomous devices (drones, AMRs, autonomous vehicles) and immersive experiences (augmented reality/virtual reality wearables) to IoT analytics (industrial and home sensors), among others. 

Artificial intelligence on the edge

Here’s where we can inject some powerful ‘intelligence’ into edge devices. There are essentially two ways to operationalise machine learning at the edge: a centralised topology using tools from centralised federated learning, and the mathematically more involved field of decentralised and distributed (no cloud data centre) federated learning algorithms. The latter is akin to the decentralised brains of the octopus and the leech. 

Federated learning relies on training machine learning models on devices with lower computational power and transferring the locally learned models to a central aggregator (the oracle) for further processing. As a first step, a cloud-trained model (from the cloud data centre) is sent to individual edge devices. This model is then fine-tuned with local data, and the updated models are sent back to the cloud platform so that the global model can be updated. The communication pattern here can be either synchronous or asynchronous. There are also statistical inference algorithms that enable edge devices to send messages to one another, reducing each local model’s communication load. Depending on the use case, metrics such as proximity, latency and mobility also need to be considered.
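A minimal sketch of one synchronous federated-averaging round is below, assuming a toy linear model trained with NumPy. The devices, data shards, learning rate and helper names are invented for the illustration; a production deployment would use a federated-learning framework rather than this loop.

```python
import numpy as np

rng = np.random.default_rng(0)


def local_finetune(global_weights, local_data, lr=0.1, epochs=5):
    """Fine-tune the cloud model on one device's private data (linear model, squared loss)."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w


def federated_round(global_weights, devices):
    """One round: broadcast the global model, train locally, average the results in the cloud."""
    updates, sizes = [], []
    for X, y in devices:
        updates.append(local_finetune(global_weights, (X, y)))
        sizes.append(len(y))
    # Weight each device's update by how much data it holds (FedAvg-style aggregation).
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))


# Three edge devices, each holding a private shard of data.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    devices.append((X, y))

global_w = np.zeros(2)  # the "cloud-trained" starting point
for round_id in range(10):
    global_w = federated_round(global_w, devices)
print("aggregated weights:", global_w)
```

The raw data never leaves the devices; only the learned weights travel to the aggregator, which is what gives federated learning its privacy and bandwidth advantages.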

The second way, decentralised federated learning, is often the preferable way of deploying machine learning models. Information is spread across devices rather than held at a single point, shrinking the attack surface for any cybersecurity breach. To give it a technical name, communication patterns in the decentralised formalism can run over a graph, a distributed ledger, or simply peer-to-peer. 
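As an illustration of the decentralised pattern, here is a toy gossip-averaging sketch over a peer-to-peer ring of devices; the ring graph, mixing weight and starting weight vectors are assumptions made for the example, not a prescribed protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ring graph over five edge devices: device i only talks to i-1 and i+1.
n_devices = 5
neighbours = {i: [(i - 1) % n_devices, (i + 1) % n_devices] for i in range(n_devices)}

# Each device starts with its own locally trained weight vector.
weights = [rng.normal(size=3) for _ in range(n_devices)]


def gossip_step(weights, neighbours, mix=0.5):
    """One gossip round: every device averages itself with the mean of its neighbours."""
    new_weights = []
    for i, w in enumerate(weights):
        neighbour_mean = np.mean([weights[j] for j in neighbours[i]], axis=0)
        new_weights.append((1 - mix) * w + mix * neighbour_mean)
    return new_weights


for _ in range(20):
    weights = gossip_step(weights, neighbours)

# After enough rounds all devices converge towards the fleet-wide average.
print(np.round(np.stack(weights), 3))
```

After enough rounds every device ends up with the same averaged model the centralised oracle would have computed, but there is no single aggregation point for an attacker to target or for traffic to bottleneck on.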

It would mean companies could put sophisticated machine learning-powered devices into the hands of front-line workers in a warehouse, manufacturing plant, or across a retail supply chain or store, augmenting and accelerating communication and decision-making with machine learning models that live on devices and are shared across a mesh. As EU Commissioner Thierry Breton has written, by 2025, 80% of data will be generated at the edge and 20% centrally in the cloud. Where data goes, machine learning follows. 

The real prospect of edge intelligence lies in this decentralised topology. It will give rise to a new generation of chip companies focusing not only on computation (making operations per watt more efficient) but also on co-designing computation to go hand in glove with communication, i.e., the mesh topology (peer-to-peer, distributed ledger, or a graph). 

Toward edge intelligence

At the network edge, the topology and the set of devices are constantly changing. Orchestration algorithms for dynamically scheduling payloads in the cloud already exist, but for edge devices this comes with problems of scale (millions of devices), more heterogeneous power envelopes, and widely varying computational capacity. Data collection and models are increasingly fragmented, with each device holding only a fraction of the entire dataset rather than having access to the whole. And edge devices are becoming ever more miniaturised and low-powered, with limited computing. 
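To illustrate the scheduling problem, the sketch below (not a real orchestrator) picks which edge devices can take part in a round given hypothetical battery and compute budgets; the EdgeDevice fields, thresholds and fleet are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class EdgeDevice:
    device_id: str
    battery_pct: float       # remaining battery, 0-100
    flops_available: float   # rough compute capacity offered for this round
    online: bool


def select_devices(fleet, task_flops, min_battery=30.0, max_participants=3):
    """Choose online devices that can afford the payload, preferring fuller batteries."""
    eligible = [
        d for d in fleet
        if d.online and d.battery_pct >= min_battery and d.flops_available >= task_flops
    ]
    eligible.sort(key=lambda d: d.battery_pct, reverse=True)
    return eligible[:max_participants]


fleet = [
    EdgeDevice("scanner-01", 80, 2e9, True),
    EdgeDevice("dashcam-02", 25, 5e9, True),
    EdgeDevice("tablet-03", 60, 1e9, False),
    EdgeDevice("amr-04", 95, 8e9, True),
]
print([d.device_id for d in select_devices(fleet, task_flops=1.5e9)])
```

Even this toy selection shows why edge orchestration is harder than its cloud counterpart: eligibility changes every time a device drops offline, drains its battery or takes on other work.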

However, the benefits to business and front-line workers are compelling: greater privacy and protection from attacks, greater automation of decision-making in real time, reduction in internet bandwidth and cloud costs, energy efficiency, more powerful computing, and a clear investment in giving front-line workers the best devices for their roles. 

Researchers, business leaders and front-line workers need to collaborate. For researchers, it means supporting technology maturity by concentrating future work on decentralised algorithms, federating dynamically changing communication patterns, optimising chips, and making computing interwoven with communication.

For business leaders in warehousing, logistics and retail, it’s about having a roadmap to develop your technology maturity level, keep pace with the market, and ensure your front-line workers have the best available devices and software tools to get the job done. 

To learn more about where warehousing operators and other supply chain entities are on their technology journey today, click here.
