Hand-Tracking Tech Watches Riders in Self-Driving Cars to See If They’re Ready to Take the Wheel

A new system tracks a rider’s elbows and wrists to see how quickly they could take control of a self-driving car in an emergency


A man sits in the driver's seat of a Yandex driverless car equipped with manual controls and sensors such as lidar, cameras, and radar, as it undergoes testing on the roads of Moscow in November 2018.
Photo: Mikhail Pochuyev/Getty Images

Researchers have developed a new technique for tracking the hand movements of an inattentive driver in order to calculate how long it would take that driver to assume control of a self-driving car in an emergency.

If manufacturers can overcome the final legal hurdles, cars with Level 3 autonomous technology will one day chauffeur people from A to B. These cars allow drivers to take their eyes off the road and do minor tasks, such as texting or watching a movie. However, the cars need a way of knowing how quickly, or slowly, a driver can respond when taking control during an emergency.

To address this need, Kevan Yuen and Mohan Trivedi at the University of California, San Diego developed their new hand-tracking system, which is described in a study published 22 November in IEEE Transactions on Intelligent Vehicles.

While tracking someone’s hands may sound simple, it can be hard to do in the cramped confines of a car, where there are only a few good spots to place a camera. A driver’s hands can be occluded by one another or by objects, and cameras can be hindered by, for example, harsh sunlight falling on the driver’s arm.

In their new approach, Yuen and Trivedi took an existing program for tracking the full-body movements of people and adapted it to track the wrists and elbows of a driver and, if present, a passenger. The system distinguishes between the right and left joints of both front-seat riders. The researchers then developed and applied machine-learning algorithms, training the system on 8,500 annotated images so that it could support Level 3 autonomous technology.
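The paper itself does not publish code, but the core idea of reducing a full-body pose estimate to the eight front-seat joints can be sketched roughly as follows. Everything here is an illustrative assumption, not the authors' actual interface: the keypoint format, the joint names, and the rule of assigning driver versus passenger by which half of the cabin image a person occupies.

```python
# Illustrative sketch only. The input format stands in for the output of
# any full-body pose-estimation model (the paper adapts an existing one);
# none of these names come from the authors' code.
from typing import Dict, List, Tuple

Keypoint = Tuple[float, float, float]  # (x, y, confidence)

JOINTS_OF_INTEREST = ["left_elbow", "right_elbow", "left_wrist", "right_wrist"]

def front_seat_joints(
    people: List[Dict[str, Keypoint]], image_width: float
) -> Dict[str, Dict[str, Keypoint]]:
    """Keep only the wrist and elbow keypoints of each detected person,
    and label each person as driver or passenger by which half of the
    cabin image they occupy (a left-hand-drive camera view is assumed)."""
    result: Dict[str, Dict[str, Keypoint]] = {}
    for person in people:
        joints = {j: person[j] for j in JOINTS_OF_INTEREST if j in person}
        if not joints:
            continue
        # The mean x-position of the detected joints decides the seat.
        mean_x = sum(kp[0] for kp in joints.values()) / len(joints)
        seat = "driver" if mean_x < image_width / 2 else "passenger"
        result[seat] = joints
    return result

# Toy usage with hand-written keypoints for a single detected person:
detections = [{
    "left_elbow": (180.0, 300.0, 0.9),
    "left_wrist": (150.0, 380.0, 0.8),
    "right_elbow": (260.0, 310.0, 0.9),
    "right_wrist": (290.0, 390.0, 0.7),
}]
print(front_seat_joints(detections, image_width=640))
```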

“The approach is capable of highly accurate, and very efficient hand detection, localization, and activity analysis in a very wide range of real-world driving situations, involving multiple humans and multiple vehicles,” says Trivedi.

Their analysis shows that the system identified the location of each of the eight tracked joints (the left and right elbows and wrists of both driver and passenger) with 95 percent accuracy, allowing for a localization error of up to 10 percent of the average arm length.
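This accuracy criterion resembles the PCK (percentage of correct keypoints) measure common in pose-estimation work, where a predicted joint counts as correct if it lands within a fraction of a reference limb length of the annotated position. A minimal sketch of such a scoring function, with all names, array shapes, and the 10 percent threshold treated as assumptions rather than the authors' published evaluation code, might look like this:

```python
import numpy as np

def pck_score(predicted, ground_truth, arm_lengths, threshold=0.10):
    """PCK-style accuracy: a predicted joint is correct when it lies
    within `threshold` * arm_length pixels of the annotated joint.

    predicted, ground_truth: (N, 8, 2) arrays of (x, y) pixel
        coordinates for the eight tracked joints (left/right elbows
        and wrists of driver and passenger) across N frames.
    arm_lengths: (N,) array of per-frame reference arm lengths, in pixels.
    """
    errors = np.linalg.norm(predicted - ground_truth, axis=-1)  # (N, 8)
    tolerance = threshold * arm_lengths[:, None]                # (N, 1)
    return float(np.mean(errors <= tolerance))

# Toy usage: two frames, eight joints each, with small localization noise.
rng = np.random.default_rng(0)
gt = rng.uniform(0, 640, size=(2, 8, 2))
pred = gt + rng.normal(0, 5, size=(2, 8, 2))
arms = np.full(2, 120.0)  # assume a 120-pixel reference arm length
print(f"PCK@0.10: {pck_score(pred, gt, arms):.2f}")
```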

The tracking system failed in some instances, such as when the driver wore unusual clothing with heavy artistic texturing that was not represented in the training set, or when one of the driver’s arms blocked the camera’s view of the other.

The researchers say some of the problems encountered during their tests can be addressed by placing the camera in a better location to avoid occlusions, using multiple camera views, and expanding the training dataset to include more variety in clothing.

“This project is part of our larger research effort on the development of safe autonomous vehicles,” says Trivedi. He adds that the team is talking with at least one potential client about using this technology in a commercial setting, but says he can’t divulge which company has expressed interest.
