
QUT, Ford researchers find way to tell autonomous vehicle which cameras to use when navigating

Queensland University of Technology (QUT) robotics researchers working with Ford Motor Company have found a way to tell an autonomous vehicle which cameras to use when navigating. Professor Michael Milford, Joint Director of the QUT Centre for Robotics, Australian Research Council Laureate Fellow, and senior author of the study, described the key idea:


The key idea here is to learn which cameras to use at different locations in the world, based on previous experience at that location. For example, the system might learn that a particular camera is very useful for tracking the position of the vehicle on a particular stretch of road, and choose to use that camera on subsequent visits to that section of road.

—Professor Milford
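
The paper calls this place-specific sub-selection in multi-camera systems. A minimal sketch of the idea, with entirely hypothetical names and data structures (the paper's actual selection criteria and representation are not described in this article): log each camera's localization error at each place during a prior traversal, then pick the historically best camera on return visits.

```python
from collections import defaultdict

# Hypothetical sketch: choose the camera that historically localized best
# at each place, based on errors logged during an earlier traversal.

class PlaceSpecificCameraSelector:
    def __init__(self):
        # place_id -> camera_id -> list of localization errors (meters)
        self.errors = defaultdict(lambda: defaultdict(list))

    def record(self, place_id, camera_id, error_m):
        """Log the localization error a camera achieved at a place."""
        self.errors[place_id][camera_id].append(error_m)

    def best_camera(self, place_id, default_camera="front"):
        """Pick the camera with the lowest mean past error at this place."""
        history = self.errors.get(place_id)
        if not history:
            return default_camera  # no prior experience at this place
        return min(history, key=lambda cam: sum(history[cam]) / len(history[cam]))

# Usage: after a mapping run over a stretch of road...
selector = PlaceSpecificCameraSelector()
selector.record(place_id=17, camera_id="front", error_m=0.9)
selector.record(place_id=17, camera_id="left", error_m=0.3)
print(selector.best_camera(17))  # -> "left" on subsequent visits
```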

This research took place as part of a larger fundamental research project with Ford looking at how cameras and lidar sensors, commonly used in autonomous vehicles, can better understand the world around them.

Dr. Punarjay (Jay) Chakravarty is leading the project on behalf of the Ford Autonomous Vehicle Future Tech group.

Autonomous vehicles depend heavily on knowing where they are in the world, using a range of sensors including cameras. Knowing where you are helps you leverage map information that is also useful for detecting other dynamic objects in the scene. A particular intersection might have people crossing in a certain way.

This can be used as prior information for the neural nets doing object detection, so accurate localization is critical, and this research allows us to focus on the best camera at any given time.

—Dr. Chakravarty
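
Dr. Chakravarty's point about map priors can be made concrete with a hedged sketch. Everything below is invented for illustration and is not Ford's or QUT's pipeline: once the vehicle is accurately localized, a lookup keyed on place can supply location-specific priors, such as regions where pedestrians typically cross at a given intersection, which a detector can use to weight its predictions.

```python
# Hypothetical illustration only: using an accurate position estimate to
# fetch location-specific priors for object detection.

PEDESTRIAN_PRIORS = {
    # place_id -> image regions (x, y, w, h) where pedestrians have
    # typically appeared at this intersection in past data
    42: [(100, 300, 200, 150), (500, 280, 180, 160)],
}

def boost_detections(place_id, detections, boost=1.2):
    """Upweight detection scores that fall inside known prior regions."""
    priors = PEDESTRIAN_PRIORS.get(place_id, [])
    boosted = []
    for x, y, score in detections:
        in_prior = any(px <= x <= px + pw and py <= y <= py + ph
                       for px, py, pw, ph in priors)
        boosted.append((x, y, min(1.0, score * boost) if in_prior else score))
    return boosted

# A tentative detection inside a known crossing region gets upweighted;
# one elsewhere is left alone.
print(boost_detections(42, [(150, 350, 0.55), (800, 100, 0.55)]))
```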

To make progress on the problem, the team has also had to devise new ways of evaluating the performance of an autonomous vehicle positioning system.
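
The paper's title points to one such evaluation idea: worst-case visual localization coverage. A minimal sketch of how a coverage-style metric could be computed (an assumed formulation for illustration; the paper's exact definition may differ): measure the fraction of frames whose position error stays within a tolerance, then take the worst case over route sections, since a system can look good on average while failing badly on one stretch of road.

```python
def coverage(errors_m, tolerance_m=0.5):
    """Fraction of frames localized within the error tolerance."""
    return sum(e <= tolerance_m for e in errors_m) / len(errors_m)

def worst_case_coverage(sections, tolerance_m=0.5):
    """Worst per-section coverage over a route (sections = lists of errors)."""
    return min(coverage(errs, tolerance_m) for errs in sections)

# Example: strong average performance, but a weak worst-case section
sections = [[0.1, 0.2, 0.3], [0.1, 0.9, 1.4]]
print(worst_case_coverage(sections))  # -> 0.333...
```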

This work has just been published in the IEEE Robotics and Automation Letters journal, and will also be presented at the upcoming IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Kyoto, Japan, in October.

Resources

  • S. Hausler et al., “Improving Worst Case Visual Localization Coverage via Place-Specific Sub-Selection in Multi-Camera Systems,” in IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 10112-10119, Oct. 2022, doi: 10.1109/LRA.2022.3191174.
