$3.4bn deal to take autonomous driving firm Luminar public
Luminar is a Palo Alto, California-based developer of lidar sensors and software for use with autonomous vehicles.
Lidar is used by the majority of companies pursuing driverless cars. The optical equivalent of radar, lidar measures distances by shining a laser on an object and timing its reflection. Extant since the 1960s, the technology found uses in geospatial applications such as surveying before being harnessed for autonomous vehicles.
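The time-of-flight principle behind lidar can be sketched in a few lines: a laser pulse travels to the target and back, so the distance is half the round-trip time multiplied by the speed of light. The function and values below are illustrative only, not taken from any particular sensor.

```python
C = 299_792_458  # speed of light in a vacuum, metres per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to a target given the round-trip time of a laser pulse."""
    return C * round_trip_seconds / 2

# A pulse returning after roughly 667 nanoseconds indicates a target
# about 100 metres away.
print(round(distance_from_round_trip(667e-9)))  # 100
```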
Luminar says it partners with 7 of the 10 largest automotive manufacturers, helping them introduce self-driving features in line with the Society of Automotive Engineers (SAE) levels of driving automation. To date, commercial offerings typically fall into Level 2, which denotes a vehicle with automated steering and acceleration features, such as lane-keeping and self-parking.
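The SAE scale runs from Level 0 (no automation) to Level 5 (full automation). A simple lookup captures the idea; the descriptions below are paraphrased summaries of the SAE J3016 levels, not the official wording.

```python
# Paraphrased summary of the SAE J3016 levels of driving automation.
SAE_LEVELS = {
    0: "No automation: the human driver performs all driving tasks",
    1: "Driver assistance: steering or acceleration support, one at a time",
    2: "Partial automation: combined steering and acceleration, driver supervises",
    3: "Conditional automation: the system drives, but the driver must take over on request",
    4: "High automation: no driver needed within a defined operational domain",
    5: "Full automation: the system drives everywhere, under all conditions",
}

def requires_human_supervision(level: int) -> bool:
    """Levels 0-2 keep the human responsible for monitoring the road."""
    return level <= 2

print(requires_human_supervision(2))  # True
print(requires_human_supervision(3))  # False
```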
Luminar says its lidar sensor meets the requirements to enable vehicles from Level 3 all the way up to Level 5, which represents full autonomy at all times.
The company has announced it is to go public through a merger with Gores Metropoulos Inc, with the deal also including $400mn in cash from Gores Metropoulos and $170mn of financing from investors including Alec Gores, Van Tuyl Companies, Peter Thiel, Volvo Cars Tech Fund, Crescent Cove, Moore Strategic Ventures, Nick & Jill Woodman and VectoIQ.
Austin Russell, Founder and CEO of Luminar, said: “This milestone is pivotal not just for us, but also for the larger automotive industry. Eight years ago, we took on a problem to which most thought there would be no technically or commercially viable solution. We worked relentlessly to build the tech from the ground up to solve it and partnered directly with the leading global automakers to show the world what’s possible. Today, we are making our next industry leap through our new long-term partnership with Gores Metropoulos, a team that has deep experience in technology and automotive and shares our vision of a safe autonomous future powered by Luminar.”
What is neuromorphic AI?
AI is dead. Long live AI?
AI is evolving. The first generation of machine learning used ordinary logic and rules to draw conclusions in a very specific manner. A good example would be IBM’s Deep Blue computer, which was trained to play chess to championship standard. That hasn’t disappeared, but it has been augmented by more perceptive deep learning networks that can analyze a broader set of parameters and provide intelligent insights.
And neuromorphic AI is next?
Correct. Neuromorphic computing is a way of designing hardware – microprocessors, really – to work more like human brains. The idea is that this new iteration of AI hardware will allow machine learning of the future to deal better with ambiguity and contradiction, things that are currently difficult to process for computers.
How does neuromorphic AI work?
The problem with current chip architecture is that it is not very efficient. Because of the linearity of the process, the chips have to be built with a massive amount of horsepower just in case it’s needed. Building a human brain that way would be unfeasible, so engineers have had to rethink the nature of chip design in their quest to get computers to perform more of the tasks human brains are good at. Enter SNNs.
What’s an SNN?
A spiking neural network (SNN) is, in the words of chipmaker Intel, “a novel model for arranging those elements to emulate natural neural networks that exist in biological brains.” Each ‘neuron’ fires independently, triggering other neurons only when they are required. Intel again: “By encoding information within the signals themselves and their timing, SNNs simulate natural learning processes by dynamically remapping the synapses between artificial neurons in response to stimuli.”
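The firing behaviour Intel describes can be illustrated with a toy leaky integrate-and-fire neuron, the basic building block of a spiking neural network: the neuron accumulates input, leaks charge over time, and emits a spike only when its potential crosses a threshold. The parameters here are illustrative, not drawn from any real neuromorphic chip.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron over a sequence of
    input currents. Returns 1 (spike) or 0 (silent) per timestep."""
    potential = 0.0
    spikes = []
    for current in inputs:
        # Leak a fraction of the stored potential, then add new input.
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak steady input: the neuron integrates for a few steps, then fires.
print(simulate_lif([0.4] * 6))  # [0, 0, 1, 0, 0, 1]
```

Note how the neuron stays silent until enough input has accumulated, which is the event-driven efficiency the paragraph above alludes to: no work happens until a spike is actually warranted.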