A team of researchers at the Massachusetts Institute of Technology (MIT) is developing new depth sensors that could be sensitive enough to make self-driving vehicles practical.
The Camera Culture group at MIT’s Media Lab – which includes two Indian-origin researchers, Achuta Kadambi and Ramesh Raskar – has been developing innovative imaging systems using “time of flight”, an approach that gauges distance by measuring the time it takes light projected into a scene to bounce back to a sensor.
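The basic time-of-flight principle can be sketched in a few lines: the sensor measures the round-trip travel time of light, and distance follows from the speed of light. This is an illustrative sketch of the general technique, not the MIT group’s actual system; the function name and example numbers are assumptions for illustration.

```python
# Speed of light in metres per second.
C = 299_792_458.0

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a target from the round-trip time of a light pulse.

    The light covers the sensor-to-target path twice, so we halve
    the total travel distance.
    """
    return C * round_trip_seconds / 2.0

# A round trip of about 13.34 nanoseconds corresponds to roughly 2 metres,
# the range discussed for today's assisted-parking systems.
print(tof_distance(13.34e-9))  # ~2.0 m
```

The tiny timescales involved are why depth resolution is so hard: a one-centimetre error at the target is only about 67 picoseconds of round-trip time.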
In a new paper in the journal IEEE Access, the team presents a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold – the kind of resolution that could make self-driving cars practical. The new approach could also enable accurate distance measurements through fog, which has proven to be a major obstacle to the development of self-driving cars.
“As you increase the range, your resolution goes down exponentially,” said Kadambi, a joint PhD student in electrical engineering and computer science and media arts and sciences. At a range of two metres, existing “time-of-flight” systems have a depth resolution of about a centimetre. That’s good enough for the assisted-parking and collision-detection systems on today’s cars.
Kadambi conducted tests in which he sent a light signal through 500 metres of optical fibre with regularly spaced filters along its length. The tests suggest that at a range of 500 metres, the MIT system should still achieve a depth resolution of only a centimetre.
“We’re modulating the light at a few gigahertz, so it’s like turning a flashlight on and off millions of times per second. But we’re changing that electronically, not optically. The combination of the two is really where you get the power for this system,” explained Raskar, head of the Camera Culture group.
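Raskar’s remark about gigahertz modulation refers to a standard idea in continuous-wave time-of-flight sensing: depth is recovered from the phase shift between the emitted and received modulation, and higher modulation frequencies translate into finer depth resolution. The sketch below shows that textbook relationship only, under assumed example numbers, and is not a description of the MIT system itself.

```python
import math

# Speed of light in metres per second.
C = 299_792_458.0

def cw_tof_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Depth from the phase shift of amplitude-modulated light.

    Continuous-wave time-of-flight recovers the round-trip delay from
    the phase offset between emitted and received signals:

        depth = c * phi / (4 * pi * f)

    The measurement is unambiguous only for depths below c / (2 * f),
    so high modulation frequencies trade range for resolution.
    """
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# At 1 GHz modulation, a quarter-cycle (pi/2) phase shift corresponds
# to roughly 3.7 centimetres of depth.
depth = cw_tof_depth(math.pi / 2, 1e9)
print(depth)
```

Because a full phase cycle at 1 GHz spans only about 15 cm of depth, even a small, measurable phase change corresponds to millimetre-scale distance differences, which is the intuition behind why gigahertz-rate modulation yields such fine resolution.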