HPE to take AI to space with ISS
Hewlett Packard Enterprise (HPE) is to take AI to space, delivering edge computing to the International Space Station (ISS).
The Spaceborne Computer-2 system promises to cut time to insight for space data processing from “months to minutes” in experiments ranging from medical imaging and DNA sequencing to reading data from remote sensors and satellites.
The computer is scheduled to launch on February 20 and is intended to be available for use on the ISS for the next two to three years.
HPE previously sent the original Spaceborne Computer to the ISS in 2017 for a one-year mission in partnership with NASA.
High levels of radiation
It’s hoped that if the new concept can deliver as expected in low-Earth orbit, with virtually zero gravity and high levels of radiation, it may become strategically important in the race to take humans to Mars and beyond, where missions will rely on reliable communications.
As well as offering double the speed of its predecessor, Spaceborne Computer-2 is equipped with GPUs with which to process image-intensive data from satellites and cameras. Currently, much of the huge volume of data captured by remote sensors in space has to be relayed to Earth for processing.
It’s thought the computer may help to detect environmental changes, monitor traffic and air quality and track moving objects in space and in the atmosphere.
Dr Mark Fernandez, solution architect, Converged Edge Systems at HPE, and principal investigator for Spaceborne Computer-2, said, “The most important benefit to delivering reliable in-space computing with Spaceborne Computer-2 is making real-time insights a reality. Space explorers can now transform how they conduct research based on readily available data and improve decision-making. We are honored to make edge computing in space possible and through our longstanding partnerships with NASA and the International Space Station US National Laboratory, we are looking forward to powering new, exciting research opportunities to make breakthrough discoveries for humanity.”
‘Our next mission in edge computing’
Shelly Anello, general manager, Converged Edge Systems at HPE, said, “Edge computing provides core capabilities for unique sites that have limited or no connectivity, giving them the power to process and analyse data locally and make critical decisions quickly. With HPE Edgeline, we deliver solutions that are purposely engineered for harsh environments. Here on Earth, that means efficiently processing data insights from a range of devices – from security surveillance cameras in airports and stadiums, to robotics and automation features in manufacturing plants. As we embark on our next mission in edge computing, we stand ready to power the harshest, most unique edge experience of them all: outer space. We are thrilled to be invited by NASA and the International Space Station to support this ongoing mission, pushing our boundaries in space and unlocking a new era of insight.”
What is neuromorphic AI?
AI is dead. Long live AI?
AI is evolving. The first generation of machine learning used ordinary logic and rules to draw conclusions in a very specific manner. A good example would be IBM’s Deep Blue computer, which was trained to play chess to championship standard. That hasn’t disappeared, but it has been augmented by more perceptive deep learning networks that can analyze a broader set of parameters and provide intelligent insights.
And neuromorphic AI is next?
Correct. Neuromorphic computing is a way of designing hardware – microprocessors, really – to work more like human brains. The idea is that this new iteration of AI hardware will allow machine learning of the future to deal better with ambiguity and contradiction, things that are currently difficult to process for computers.
How does neuromorphic AI work?
The problem with current chip architecture is that it is not very efficient. Because of the linearity of the process, the chips have to be built with a massive amount of horsepower just in case it’s needed. Building a human brain that way would be unfeasible, so engineers have had to rethink the nature of chip design in their quest to get computers to perform more of the tasks human brains are good at. Enter SNNs.
What’s an SNN?
A spiking neural network (SNN) is, in the words of chipmaker Intel, “a novel model for arranging those elements to emulate natural neural networks that exist in biological brains.” Each ‘neuron’ fires independently, triggering other neurons only when they are required. Intel again: “By encoding information within the signals themselves and their timing, SNNs simulate natural learning processes by dynamically remapping the synapses between artificial neurons in response to stimuli.”
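That firing behaviour can be sketched in a few lines of code. Below is a minimal, illustrative leaky integrate-and-fire neuron, the basic building block of an SNN; the parameter values and function name are hypothetical, chosen for readability, and are not drawn from Intel’s actual hardware design.

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron sketch.

    The neuron accumulates input over time while slowly 'leaking'
    potential. It emits a spike (1) only when the membrane potential
    crosses the threshold, then resets - so it stays silent, and costs
    nothing downstream, until it actually has something to signal.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # neuron fires...
            potential = 0.0    # ...and resets
        else:
            spikes.append(0)   # silent this timestep
    return spikes

# A steady weak input only fires occasionally - information is carried
# by the timing of the spikes, not by a continuous activation value.
print(simulate_lif([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The key contrast with a conventional artificial neuron is that nothing is computed between spikes, which is the source of the efficiency claim made above.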