Self-driving cars ‘learn’ to predict pedestrian movement

The results have shown that this new system improves upon a driverless vehicle's capacity to recognise what is most likely to happen next.

Equipping vehicles with the necessary predictive power requires the network to dive into the minutiae of human movement. (Image: University of Michigan)

Scientists are using humans’ gait, body symmetry and foot placement to teach self-driving cars to recognise and predict pedestrian movements with greater precision than current technologies.

Data collected by vehicles through cameras, LiDAR and global positioning system (GPS) allowed the researchers at the University of Michigan in the US to capture video snippets of humans in motion and then recreate them in a three-dimensional (3D) computer simulation.

With that, they have created a “biomechanically inspired recurrent neural network” that catalogues human movements. The network can help predict poses and future locations for one or several pedestrians up to about 50 yards from the vehicle, at about the scale of a city intersection.
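To make the idea concrete, here is a minimal sketch, assuming a PyTorch-style recurrent model, of how a network might take a short history of observed 3D poses and roll out a few predicted future poses. The class name, the 17-joint skeleton, the layer sizes and the prediction horizon are illustrative assumptions, not details of the Michigan system.

```python
# Illustrative sketch only: a tiny recurrent pose predictor.
# It is NOT the University of Michigan model; the 17-joint skeleton,
# layer sizes and prediction horizon are assumptions.
import torch
import torch.nn as nn

NUM_JOINTS = 17            # assumed skeleton size (e.g. COCO-style keypoints)
POSE_DIM = NUM_JOINTS * 3  # x, y, z per joint

class PosePredictor(nn.Module):
    def __init__(self, hidden_size=128, horizon=15):
        super().__init__()
        self.horizon = horizon                       # number of future frames to predict
        self.encoder = nn.GRU(POSE_DIM, hidden_size, batch_first=True)
        self.decoder = nn.GRUCell(POSE_DIM, hidden_size)
        self.to_pose = nn.Linear(hidden_size, POSE_DIM)

    def forward(self, observed):                     # observed: (batch, T_obs, POSE_DIM)
        _, h = self.encoder(observed)                # summarise the observed motion
        h = h.squeeze(0)
        last_pose = observed[:, -1, :]
        future = []
        for _ in range(self.horizon):                # roll the prediction forward frame by frame
            h = self.decoder(last_pose, h)
            last_pose = last_pose + self.to_pose(h)  # predict a pose increment
            future.append(last_pose)
        return torch.stack(future, dim=1)            # (batch, horizon, POSE_DIM)

# Example: predict 15 future frames from 30 observed frames for one pedestrian.
model = PosePredictor()
observed = torch.randn(1, 30, POSE_DIM)
predicted = model(observed)
print(predicted.shape)                               # torch.Size([1, 15, 51])
```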

LiDAR is a surveying method that measures distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor.
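As a back-of-the-envelope illustration of that time-of-flight principle, the distance to a target is the pulse’s round-trip travel time multiplied by the speed of light and halved; the pulse time below is an invented example value, not sensor data.

```python
# Toy time-of-flight calculation: distance = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0   # metres per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting target in metres."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 200 nanoseconds corresponds to roughly 30 m.
print(f"{lidar_distance(200e-9):.1f} m")
```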

“Prior work in this area has typically only looked at still images. It wasn’t really concerned with how people move in three dimensions,” said Ram Vasudevan, an assistant professor at the University of Michigan.

“But if these vehicles are going to operate and interact in the real world, we need to make sure our predictions of where a pedestrian is going do not coincide with where the vehicle is going next,” said Vasudevan.

Equipping vehicles with the necessary predictive power requires the network to dive into the minutiae of human movement: the pace of a human’s gait (periodicity), the mirror symmetry of limbs, and the way in which foot placement affects stability during walking.
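For illustration only, the sketch below turns those three cues into crude numerical features computed from a pair of 3D ankle trajectories. The specific definitions (a dominant step frequency, a shifted left-right correlation, and an average stance width) are assumptions made for explanation, not the features extracted by the actual network.

```python
# Illustrative gait cues from 3D ankle trajectories; these crude features are
# assumptions for explanation, not the features used by the Michigan network.
import numpy as np

def gait_features(left_ankle: np.ndarray, right_ankle: np.ndarray, fps: float = 30.0):
    """left_ankle, right_ankle: arrays of shape (T, 3) holding x, y, z positions."""
    # Periodicity: dominant frequency of the vertical ankle motion (steps per second).
    height = left_ankle[:, 2] - left_ankle[:, 2].mean()
    spectrum = np.abs(np.fft.rfft(height))
    freqs = np.fft.rfftfreq(len(height), d=1.0 / fps)
    step_rate = freqs[spectrum[1:].argmax() + 1]      # skip the DC component

    # Mirror symmetry: correlation between left and right vertical motion,
    # with the right side shifted by half a gait cycle.
    half_cycle = max(1, int(round(fps / (2.0 * step_rate)))) if step_rate > 0 else 1
    right_height = right_ankle[:, 2] - right_ankle[:, 2].mean()
    symmetry = float(np.corrcoef(height, np.roll(right_height, half_cycle))[0, 1])

    # Foot placement: average lateral spread between the feet, a rough stability proxy.
    stance_width = float(np.mean(np.abs(left_ankle[:, 0] - right_ankle[:, 0])))
    return {"step_rate_hz": step_rate, "symmetry": symmetry, "stance_width_m": stance_width}

# Example with synthetic 3-second trajectories sampled at 30 frames per second.
t = np.arange(90) / 30.0
left = np.stack([np.full(90, -0.1), t, 0.05 * (1 + np.sin(2 * np.pi * 2.0 * t))], axis=1)
right = np.stack([np.full(90, 0.1), t, 0.05 * (1 + np.sin(2 * np.pi * 2.0 * t + np.pi))], axis=1)
print(gait_features(left, right))
```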

Much of the machine learning used to bring autonomous technology to its current level has dealt with two-dimensional images: still photos.

A computer shown several million photos of a stop sign will eventually come to recognise stop signs in the real world and in real time.

However, by utilising video clips that run for several seconds, the system can study the first half of the snippet to make its predictions and then verify their accuracy against the second half.
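In training terms, that amounts to splitting each clip into an observed half and a held-out future half, predicting from the former and scoring against the latter. The sketch below illustrates the split using a simple constant-velocity baseline and a mean-squared-error score; both are stand-ins chosen to keep the example self-contained, not the method described in the article.

```python
# Illustrative check of "predict from the first half, verify with the second half".
# The constant-velocity baseline and mean-squared-error score are assumptions.
import numpy as np

def constant_velocity_predict(observed: np.ndarray, horizon: int) -> np.ndarray:
    """Extrapolate poses (T_obs, D) by repeating the last frame-to-frame change."""
    velocity = observed[-1] - observed[-2]
    return np.stack([observed[-1] + (k + 1) * velocity for k in range(horizon)])

def verify_on_clip(clip: np.ndarray) -> float:
    """Split a clip (T, D) in half, predict the second half, return mean squared error."""
    half = len(clip) // 2
    observed, future = clip[:half], clip[half:]
    predicted = constant_velocity_predict(observed, horizon=len(future))
    return float(np.mean((predicted - future) ** 2))

# Example: a 60-frame clip of 51-dimensional poses (17 joints x 3 coordinates).
clip = np.cumsum(np.random.randn(60, 51) * 0.01, axis=0)   # synthetic smooth motion
print(verify_on_clip(clip))
```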

“Now, we are training the system to recognise motion and making predictions of not just one single thing — whether it is a stop sign or not — but where that pedestrian’s body will be at the next step and the next and the next,” said Matthew Johnson-Roberson, an associate professor at the University of Michigan.

“If a pedestrian is playing with their phone, you know they are distracted,” Vasudevan said. “Their pose and where they are looking is telling you a lot about their level of attentiveness. It is also telling you a lot about what they are capable of doing next,” he said.

The results have shown that this new system improves upon a driverless vehicle’s capacity to recognise what is most likely to happen next.

First published on: 13-02-2019 at 07:14:52 pm