GrAI Matter Labs (GML) is one of the most exciting new startups at High Tech Campus Eindhoven. Working on the next generation of ultra low-power chips to power IoT devices, GML is paving the way to true artificial intelligence.
Once upon a time self-driving cars, robots and talking refrigerators were pure science fiction. Now it is only a matter of time before they become universally adopted and fully functional. GrAI Matter Labs has set itself the daunting task of helping to speed up that technological revolution by developing the chips that will eventually power these devices.
Also based in Paris and Silicon Valley, GML opened its latest office in the Beta building at High Tech Campus this year. As their name implies, GrAI Matter Labs is working on technology inspired by the human brain with a link to artificial intelligence (AI). We asked Ingolf Held (CEO) and Menno Lindwer (VP Engineering) to tell us more about the thrilling technology of neuromorphic computing.
First of all, what made you decide to move to High Tech Campus Eindhoven? Menno: “All the know-how for the combination of complex chip design and complex software development is available here at the High Tech Campus. It’s one of the few places in the world where all of this knowledge is condensed on one site. In this office we design our hardware and run part of the architecture exploration. That also means developing the tools to design and program the chips.”
So what is ‘neuromorphic computing’ all about? Ingolf: “Neuromorphic technology is inspired by our current understanding of how the brain works. Although that understanding is not complete, we do know that the compute elements in the brain are neurons and that data communication happens via spikes. That’s very different from standard CPUs, where there is a strict separation between memory and processing. Moving data around is one of the biggest power consumption problems in embedded systems today. Yet in the brain there is no separate memory: storage and computation happen within the same mesh. This is also how we architect our chips.”
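To make the idea of spike-based, memory-in-compute processing concrete, here is a minimal textbook sketch of a leaky integrate-and-fire neuron. This is a standard illustrative model, not GML's actual chip architecture; all parameter values are made up for the example.

```python
# Illustrative leaky integrate-and-fire (LIF) neuron: the state (membrane
# potential) lives inside the compute element itself, and communication
# happens via all-or-nothing spikes rather than data words.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # potential at which the neuron fires
        self.leak = leak            # per-step decay of the membrane potential
        self.potential = 0.0        # local state: no separate memory bank

    def step(self, input_current):
        """Integrate one input; return True if the neuron emits a spike."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True             # a spike: a single event, not a value
        return False

neuron = LIFNeuron()
spikes = [neuron.step(0.4) for _ in range(5)]
# The neuron stays silent until enough input has accumulated, then fires once.
```

Because a silent neuron transmits nothing at all, activity (and thus power) scales with how much actually happens in the input, which is the property the interview highlights.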
And what is so special about your chips? Ingolf: “The purpose of our chips is to support AI use cases and functions, particularly at the edge of the Internet-of-Things (IoT). One of these use cases is autonomous navigation or everything that moves by itself. Think of autonomous cars, robots and drones. Another use case is the smart home, from security cameras to everything else that smartly responds to your presence. A third one is healthcare, particularly smart health monitoring. This includes wearables, but it could also be a wall mounted device in a hospital that observes and monitors patients. Our key focus is to build specialized chips for real-time sensor data, particularly from sensors that provide the data in a sparse, irregular way, like the human sensory system does. There are sensors emerging that are faster and see more because they function more like the human eye.”
Menno Lindwer (VP Engineering) & Ingolf Held (CEO)
How do those new sensors work? Menno: “A regular camera in your phone takes one picture at a time and you don’t see what happened before or after. It’s a moment frozen in time. Videos are done the same way, just taking a series of these static pictures in a short time frame. These new sensors instead register all the changes happening, continuously. When something the camera sees changes, it registers that change, and only that particular change. This is also how your eye works. The result is far finer timing: you know exactly when and where a change occurs. These sensors respond to minute changes the moment they happen, just like our brain does. So you respond quicker and you process less.”
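The difference between the two readout styles can be sketched in a few lines. This is a deliberately simplified, hypothetical 1-D "camera", not any real sensor's interface: a frame sensor re-reads every pixel at every interval, while an event sensor emits only the pixels whose values actually changed.

```python
# Sketch of frame-based vs event-based sensing on a hypothetical 1-D scene.

def frame_readout(frames):
    """Frame camera: reads out every pixel of every frame, changed or not."""
    return [pixel for frame in frames for pixel in frame]

def event_readout(frames, threshold=0):
    """Event camera: emits one (time, pixel, polarity) event per change."""
    events = []
    for t in range(1, len(frames)):
        for x, (prev, cur) in enumerate(zip(frames[t - 1], frames[t])):
            diff = cur - prev
            if abs(diff) > threshold:
                events.append((t, x, +1 if diff > 0 else -1))
    return events

# A mostly static scene: only pixel 2 brightens, once.
frames = [[0, 0, 0, 0], [0, 0, 5, 0], [0, 0, 5, 0]]
dense = frame_readout(frames)    # 12 pixel reads for 3 frames
sparse = event_readout(frames)   # a single event where the change happened
```

The frame readout grows with resolution times frame rate regardless of content; the event readout grows only with how much the scene changes, which is why such sensors pair naturally with sparse, event-driven processing.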
Could you give an example? Ingolf: “Think about pedestrian detection in autonomous cars. You drive in a dense urban environment and a child jumps in front of your car. You only have a few milliseconds to respond. A smartphone takes up to 30 milliseconds to register the next frame. That’s too slow. You need to be able to register it a hundred times faster and also be immune to blinding sunlight. A smartphone blanks out in sunlight, but you can’t have that in the car. Humans adjust to that. But machines should be even better.”
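The numbers in this example can be turned into back-of-the-envelope arithmetic. The 30 ms frame latency and the "hundred times faster" factor come from the interview; the vehicle speed is an assumed example value added purely for illustration.

```python
# Latency figures from the interview: up to 30 ms per smartphone frame,
# and a required response roughly a hundred times faster.
frame_latency_ms = 30.0
required_latency_ms = frame_latency_ms / 100   # sub-millisecond territory

# Assumed example speed (not from the interview): 50 km/h urban driving.
speed_m_per_s = 50 / 3.6

# Distance the car covers while waiting for one frame vs one event.
distance_frame_m = speed_m_per_s * frame_latency_ms / 1000
distance_event_m = speed_m_per_s * required_latency_ms / 1000
```

Under these assumptions the car travels roughly 40 cm blind between frames, versus a few millimetres at event-level latency, which illustrates why milliseconds matter for pedestrian detection.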
How does that relate to what you are doing at GML? Ingolf: “We build the chips to which these sensors connect. One of our key propositions is making chips more power efficient. Many traditional devices like laptops still have fans for cooling. You can’t have a cooling fan when you wear a sensor in your ear, for example. You don’t even want fans in your car, because they are fragile and easily break. That’s why everything needs to be low power. And the best low power computing device we know is the brain. It does a tremendous task with only 20 Watts of power consumption. That’s an inspiration to us.”
You talk a lot about edge computing. What do you mean by ‘edge’? Menno: “An edge device is a device that has sensors and actuators to interface with the world around it. But today’s edge devices do not process all their data locally and still largely depend on the cloud. We talk about smartphones, but your phone is actually not that smart. It’s mostly a medium between you and some software in the cloud. Sending data from your phone to the cloud and back requires a lot of battery power. Smart devices in the IoT sphere don’t have that kind of battery and computing power. We want to liberate these devices from depending on the cloud by giving them intelligent chips that can process data locally. That’s how we can realize these extremely low power levels. It’s the only way.”