How to make robots and self-driving cars think faster

May 30, 2014

A visual odometry algorithm uses low-latency brightness-change events from a Dynamic Vision Sensor (DVS) together with data from a normal camera, which provides absolute brightness values. The left photograph shows the camera frame, and the right photograph shows the DVS events (displayed in red and blue) plus grayscale from the camera. (Credit: Andrea Censi and Davide Scaramuzza)

Andrea Censi, a research scientist in MIT’s Laboratory for Information and Decision Systems, has developed a new type of camera sensor system that can take measurements a million times a second.

The new system combines a Dynamic Vision Sensor (DVS), which rapidly detects changes in luminance, with a conventional CMOS camera sensor, which provides the absolute brightness, or grayscale, values.
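The two sensors produce very different kinds of data. As a rough sketch (the field names below are illustrative, not the actual interface used in the work), a DVS event carries a pixel location, a timestamp, and a polarity, while a CMOS frame carries absolute grayscale values for every pixel captured at one instant:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class DvsEvent:
    """A single asynchronous DVS event: one pixel reporting a luminance change."""
    t: float        # timestamp in seconds (microsecond-level resolution on real hardware)
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # +1 for a brightness increase, -1 for a decrease


@dataclass
class CameraFrame:
    """A conventional CMOS frame: absolute grayscale values for every pixel at once."""
    t: float
    image: np.ndarray  # 2-D array of grayscale intensities
```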

An autonomous vehicle using a standard camera to monitor its surroundings might take about a fifth of a second to update its location — not fast enough to handle the unexpected. With an event-based sensor, the vehicle could update its location every thousandth of a second or so, allowing it to perform much more nimble maneuvers.

“In a regular camera, you have an array of sensors, and then there is a clock,” Censi explains. “If you have a 30-frames-per-second camera, every 33 milliseconds the clock freezes all the values, and then the values are read in order.”

With an event-based sensor, by contrast, “each pixel acts as an independent sensor,” Censi says. “When a change in luminance — in either the plus or minus direction — is larger than a threshold, the pixel says, ‘I see something interesting’ and communicates this information as an event. And then it waits until it sees another change.”
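The contrast between the two readout models can be sketched in a few lines. Assuming an illustrative threshold and treating scene luminance as a function of time (neither is the actual sensor interface), a frame camera samples every pixel on a global clock, while an event pixel fires only when its own luminance has changed enough since its last event:

```python
import numpy as np

EVENT_THRESHOLD = 0.15  # illustrative log-luminance change threshold


def frame_based_readout(scene_luminance, fps=30):
    """Frame camera: a global clock freezes and reads every pixel at fixed intervals."""
    period = 1.0 / fps  # 33 ms at 30 frames per second
    t = 0.0
    while True:
        yield t, scene_luminance(t).copy()  # all pixel values sampled at the same instant
        t += period


def event_based_pixel(luminance_samples, threshold=EVENT_THRESHOLD):
    """Event pixel: emits an event only when the change since its last event exceeds a threshold."""
    it = iter(luminance_samples)
    _, last = next(it)
    for t, value in it:
        change = np.log(value) - np.log(last)  # DVS pixels respond to relative (log) changes
        if abs(change) > threshold:
            yield t, +1 if change > 0 else -1  # event: just a timestamp and a polarity
            last = value  # reset the reference level and wait for the next change
```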

UPDATE 5/30/2014: Added description of the two different sensors used.


Abstract of Technical Report

The agility of a robotic system is ultimately limited by the speed of its processing pipeline. The use of a Dynamic Vision Sensor (DVS), a sensor producing asynchronous events as luminance changes are perceived by its pixels, makes it possible to have a sensing pipeline with a theoretical latency of a few microseconds. However, several challenges must be overcome: a DVS does not provide grayscale values but only changes in luminance; and because the output is a sequence of events, traditional frame-based visual odometry methods are not applicable. This paper presents the first visual odometry system based on a DVS plus a normal CMOS camera to provide the absolute brightness values. The two sources of data are automatically spatiotemporally calibrated from logs taken during normal operation. We design a visual odometry method that uses the DVS events to estimate the relative displacement since the previous CMOS frame by processing each event individually. Experiments show that the rotation can be estimated with surprising accuracy, while the translation can be estimated only very noisily, because it produces few events due to very small apparent motion.
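The abstract's key idea is that each event, on its own, carries a small amount of information about how the camera has moved since the last grayscale frame. The sketch below is a hypothetical, simplified voting scheme along those lines, not the paper's actual estimator: the gradient of the last CMOS frame predicts the sign of brightness change each small candidate displacement would cause at a pixel, and every incoming DVS event votes for the displacements consistent with its polarity (the helper names `make_event_voter`, `candidate_shifts`, and `process_event` are illustrative):

```python
import numpy as np


def make_event_voter(frame, candidate_shifts):
    """Hypothetical per-event voting scheme (a sketch, not the paper's estimator)."""
    gy, gx = np.gradient(frame.astype(float))  # gradients of the absolute-brightness frame
    scores = np.zeros(len(candidate_shifts))

    def process_event(event):
        for i, (dx, dy) in enumerate(candidate_shifts):
            # Brightness change predicted at this pixel for candidate motion (dx, dy).
            predicted = gx[event.y, event.x] * dx + gy[event.y, event.x] * dy
            if np.sign(predicted) == event.polarity:
                scores[i] += 1  # this event is consistent with that candidate motion
        return candidate_shifts[int(np.argmax(scores))]  # current best-supported candidate

    return process_event
```

Because small rotations produce large, gradient-aligned apparent motion across the whole image while small translations produce very little, a scheme like this would accumulate far more evidence for rotation than for translation, consistent with the accuracy difference the abstract reports.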