
Tesla leans on radar for Autopilot in Version 8 software

With the release of Version 8 of its software, Tesla has made many updates to its Autopilot function. The most significant change, however, is its new heavy reliance on the onboard radar. The radar was added to all Tesla vehicles in October 2014 as part of the Autopilot hardware suite, but was originally meant only to be a supplementary sensor to the primary camera and image processing system.

Now, however, Tesla is using more advanced signal processing to use the radar as a primary control sensor without requiring the camera to confirm visual image recognition. The changes come in the wake of the fatal crash (earlier post) in which Autopilot apparently did not detect the white side of the trailer against a brightly lit sky; nor did the driver. As a result, no brake was applied. In the aftermath of that incident, Tesla CEO Elon Musk tweeted that the company was “Working on using existing Tesla radar by itself (decoupled from camera) w temporal smoothing to create a coarse point cloud, like lidar”. (Earlier post.)

Radar is typically unaffected by contrast issues (light and dark), as it uses reflected radio waves at about 76 to 77 GHz to identify and classify objects.

However, using radar as the primary control sensor is “a non-trivial and counter-intuitive problem,” Tesla explains. Anything metallic looks like a mirror. The radar can see people, but they appear partially translucent. Something made of wood or painted plastic, though opaque to a person, is almost as transparent as glass to radar. However, any metal surface with a dish shape is not only reflective, but also amplifies the reflected signal to many times its actual size. Thus, a big problem in using radar to stop the car is avoiding false alarms, Tesla said.

Slamming on the brakes is critical if you are about to hit something large and solid, but not if you are merely about to run over a soda can. Having lots of unnecessary braking events would at best be very annoying and at worst cause injury.

—Tesla Motors
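
To make the false-alarm problem concrete, here is a minimal sketch, with invented radar cross-section (RCS) numbers, of why raw return strength alone cannot decide when to brake:

```python
# Illustrative only: why raw radar return strength cannot drive braking.
# All RCS values below are invented for demonstration; real values vary widely.

RADAR_RETURNS = [
    {"object": "soda can (dish-shaped metal)", "rcs_dbsm": 5.0},   # small but highly reflective
    {"object": "pedestrian",                   "rcs_dbsm": -5.0},  # partially translucent to radar
    {"object": "wooden pallet",                "rcs_dbsm": -15.0}, # nearly transparent to radar
    {"object": "car",                          "rcs_dbsm": 10.0},
]

THRESHOLD_DBSM = 0.0  # naive "big and solid" cutoff

for ret in RADAR_RETURNS:
    brake = ret["rcs_dbsm"] > THRESHOLD_DBSM
    print(f"{ret['object']:35s} -> brake: {brake}")

# The output shows the failure mode: the soda can triggers braking while the
# wooden pallet and the pedestrian do not, because return strength tracks
# reflectivity and shape, not physical size or actual hazard.
```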

The first part of solving that problem is having a more detailed point cloud.

Software 8.0 delivers a more detailed point cloud, unlocking access to six times as many radar objects with the same hardware, with much more information per object.

Tesla assembles these into a 3D “picture” of the world. It is hard to tell from a single frame whether an object is moving or stationary or to distinguish spurious reflections. By comparing several contiguous frames against vehicle velocity and expected path, the car can tell if something is real and assess the probability of collision.
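
The frame-to-frame consistency check can be sketched roughly as follows. The function name, thresholds, and detection format are illustrative assumptions, not Tesla's implementation; the sketch also simplifies to a range-only check for an object straight ahead:

```python
# A minimal sketch of the multi-frame consistency idea described above.

def is_persistent_object(detections, ego_speed_mps, frame_dt_s,
                         tolerance_m=0.5, min_frames=3):
    """Check whether a radar detection reappears across contiguous frames
    where physics says it should, given the car's own motion.

    detections: list of (range_m, azimuth_rad) tuples, one per frame,
                for a candidate object (None where nothing was seen).
    (Azimuth is kept in the tuples for realism but unused in this
    simplified range-only check.)
    """
    confirmed = 0
    for prev, curr in zip(detections, detections[1:]):
        if prev is None or curr is None:
            confirmed = 0
            continue
        # For a stationary object dead ahead, range should shrink by
        # roughly ego_speed * dt between frames.
        expected_range = prev[0] - ego_speed_mps * frame_dt_s
        if abs(curr[0] - expected_range) <= tolerance_m:
            confirmed += 1
        else:
            confirmed = 0  # spurious reflections don't track coherently
        if confirmed >= min_frames:
            return True
    return False

# A real object closes at ego speed; a multipath ghost jumps around.
real = [(30.0, 0.0), (28.6, 0.0), (27.2, 0.0), (25.8, 0.0), (24.4, 0.0)]
ghost = [(30.0, 0.0), (41.0, 0.1), (18.0, -0.2), (35.0, 0.0), (22.0, 0.1)]
print(is_persistent_object(real, ego_speed_mps=14.0, frame_dt_s=0.1))   # True
print(is_persistent_object(ghost, ego_speed_mps=14.0, frame_dt_s=0.1))  # False
```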

An overhead highway road sign positioned on a rise in the road, or a bridge where the road dips underneath, often looks like an object on a collision course. The navigation data and height accuracy of the GPS are not enough to know whether the car will pass under the object or not. By the time the car is close and the road pitch changes, it is too late to brake.

Tesla says it will address this problem through fleet learning. Initially, the vehicle fleet will take no action except to note the position of road signs, bridges and other stationary objects, mapping the world according to radar. The car computer will then compare when it would have braked to the driver action and upload that to the Tesla database. If several cars drive safely past a given radar object, whether Autopilot is turned on or off, then that object is added to a geo-coded list.
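
A toy sketch of how such a geo-coded whitelist could work is below. The data structure, coordinate bucketing, and pass-count threshold are all assumptions for illustration; Tesla has not published its format:

```python
# Geo-coded list of stationary radar objects the fleet has driven past
# safely (e.g. overhead signs, bridges), keyed by a coarse location bucket.
SAFE_RADAR_OBJECTS = {
    (37.4275, -122.0697): {"safe_passes": 412, "type": "overhead sign"},
    (37.7749, -122.4194): {"safe_passes": 87,  "type": "bridge"},
}

def should_suppress_braking(obj_lat, obj_lon, min_safe_passes=50):
    """Suppress radar-triggered braking for a stationary object only if
    many cars (Autopilot on or off) have already passed it safely."""
    key = (round(obj_lat, 4), round(obj_lon, 4))
    entry = SAFE_RADAR_OBJECTS.get(key)
    return entry is not None and entry["safe_passes"] >= min_safe_passes

print(should_suppress_braking(37.4275, -122.0697))  # True: known overhead sign
print(should_suppress_baking := should_suppress_braking(40.0, -100.0))  # False: unknown object
```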

When the data shows that false braking events would be rare, the car will begin mild braking using radar, even if the camera doesn’t notice the object ahead. As the system confidence level rises, the braking force will gradually increase to full strength when it is approximately 99.99% certain of a collision.
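
The confidence-gated ramp-up might be sketched like this. The thresholds and the linear mapping are illustrative assumptions; only the roughly 99.99% full-braking figure comes from Tesla's description:

```python
def braking_command(collision_confidence, mild_threshold=0.90,
                    full_threshold=0.9999):
    """Map collision confidence to a braking fraction in [0, 1].

    Below mild_threshold: no radar-initiated braking.
    Between thresholds: braking force ramps up with confidence.
    At ~99.99% certainty: full-strength braking.
    """
    if collision_confidence < mild_threshold:
        return 0.0
    if collision_confidence >= full_threshold:
        return 1.0
    # Linear ramp between mild and full braking.
    span = full_threshold - mild_threshold
    return (collision_confidence - mild_threshold) / span

print(braking_command(0.80))    # 0.0  -> no action
print(braking_command(0.95))    # ~0.5 -> mild braking
print(braking_command(0.9999))  # 1.0  -> full braking
```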

This may not always prevent a collision entirely, but the impact speed will be dramatically reduced to the point where there are unlikely to be serious injuries to the vehicle occupants.

—Tesla Motors

A Tesla will also be able to bounce the radar signal under a vehicle in front to detect an oncoming obstacle (Tesla used a UFO landing on the road in dense fog as an example)—using the radar pulse signature and photon time of flight to distinguish the signal—and still brake even when trailing a car that is opaque to both vision and radar.

Other notable enhancements to Autopilot include:

  • TACC (Traffic Aware Cruise Control) braking max ramp rate increased and latency reduced by a factor of five
  • Now controls for two cars ahead using radar echo, improving cut-out response and reaction time to otherwise-invisible heavy braking events
  • Will take highway exit if indicator on (8.0) or if nav system active (8.1). Available in the United States initially
  • Car offsets in lane when overtaking a slower vehicle driving close to its lane edge
  • Interface alerts are much more prominent, including flashing white border on instrument panel
  • Improved cut-in detection using blinker on vehicle ahead
  • Reduced likelihood of overtaking in right lane in Europe
  • Improved auto lane change availability
  • Car will not allow reengagement of Autosteer until parked if user ignores repeated warnings
  • Automatic braking will now amplify user braking in emergencies
  • In manual mode, alerts driver if about to leave the road and no torque on steering wheel has been detected since Autosteer was deactivated
  • With further data gathering, car will activate Autosteer to avoid collision when probability ~100%
  • Curve speed adaptation now uses fleet-learned roadway curvature (see the physics sketch after this list)
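
For the curve speed item above, the underlying relationship is standard steady-state cornering physics (lateral acceleration = v² × curvature). The lateral-acceleration comfort limit and the curvature value in this sketch are illustrative assumptions:

```python
import math

def curve_speed_limit(curvature_per_m, max_lateral_accel_mps2=2.0):
    """Maximum comfortable speed (m/s) through a curve of given curvature
    (1/radius, in 1/m), from a_lat = v^2 * kappa."""
    if curvature_per_m <= 0.0:
        return float("inf")  # straight road: no curvature-based limit
    return math.sqrt(max_lateral_accel_mps2 / curvature_per_m)

# Fleet-learned curvature for a hypothetical highway ramp of ~150 m radius:
kappa = 1.0 / 150.0
v = curve_speed_limit(kappa)
print(f"{v:.1f} m/s (~{v * 3.6:.0f} km/h)")  # ~17.3 m/s (~62 km/h)
```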

Comments

HarveyD

Multiple sensors (radar, lidar, camera, GPS) may be required for all-weather, all-terrain, all-condition ADVs?

It could be done by 2020/2022?

mahonj

@harvey, absolutely (+ ultrasonic for parking sensors as we have now)
Tesla tried a fast one with the single-camera Mobileye sensor and radar and got burned for it (well, the guy who got his head chopped off got burned).
Autonomous driving is a hard problem that can't be solved by "terms of use"; people will always do stupid things. Even if Tesla drivers are less dumb than the general public, they will still do dumb things (or fall asleep, for instance) now and then.
So you can't take shortcuts with autonomous driving (not when you have tens / hundreds of thousands of drivers out there).

On the other hand, I would expect a fair few fatalities as more and more AVs are deployed (by whomever), but "the industry" should persevere as the eventual upside in lives saved will justify it. If you try to develop AVs without ANY fatalities, it will take so long that tens of thousands of people will die needlessly.
(Remember, 1.3M people are killed on the world's roads every year, mainly in the developing world.)

CheeseEater88

I am all for it, 100%; some machine intervention at some level will no doubt save lives.


Biggest life saver will be auto braking. (Anything to slow the vehicle before an impact will greatly reduce the potential crash energy; even if it is only slowing the car from 60 to 40, that little bit of difference could save a life.)

Second will be lane keeping. (Not ending up in a ditch / tree / telephone pole is a good way to stay alive.)

Third, the improbable step of taking human error out of the equation (well, most of it): taking ill-equipped drivers out from behind the wheel. People say they'll go to their graves before giving up the wheel, but in all likelihood most of the population would love a solution such as a self-driving car. (People make accidents happen.)


Cars are used 5% of the time for most households... If laws change so that sharing is welcomed/allowed... we could have a much better society. Sure, miles traveled might go up slightly, but another likely outcome would be more premium technologies to reduce pollution (electric drivetrains), along with very advanced carpooling logistics.

Account Deleted

Musk said the new Autopilot could potentially reduce the probability of accidents by 66% when Autopilot is active, so this is a very big deal. The subsequent data will show. Musk also credited Bosch for making new and better drivers for their radar, which enabled Tesla to make it Autopilot's primary driving sensor.
Radar is the future primary visual sensor for driverless cars because it can create better 3D mappings of the car's environment by seeing through stuff (like heavy snow and rain, and even other cars and buildings) that lidar and cameras cannot. Cameras may still function as the primary visual sensor at low speed for identifying humans, animals, traffic signals, etc. when driving through intersections in cities. I have noticed that I and others use a lot of body language to communicate intention and awareness in low-speed city traffic. We need cameras to identify such language, and possibly some new sort of signal system on driverless cars that makes up for the missing body language and eye contact between driverless cars and cars with human drivers.

Juan Valdez

As a computer guy, I find their use of machine learning fantastic. The fleet learning capability means that every time one of their cars passes a bridge, overpass, whatever, it learns it and shares it with the Tesla virtual world, so immediately all their cars become smarter. This is the essence of AI, or artificial intelligence: learning as they go.

With by far the largest fleet on the road, it means the Tesla map of the world will become increasingly accurate very quickly. This is hundreds of times faster, and more accurate, than manually mapping the roads as we hear some car companies are doing.

This is so cool.

Arnold

Henrik's comment on body language and eye contact is spot on.

Motorcyclists don't survive by relying on road rules (or on others obeying them).

mahonj

My take on the subject: any suggestion that following 'terms of use' may help alleviate stupid mistakes misses the point.

In my opinion and experience, there is a continuous stream of feedback to the driver from the moment the vehicle starts a journey that keeps the operator informed of road conditions and how the vehicle responds under those conditions.
The driver's inputs and the vehicle's responses are noted by the driver.
I believe this is constantly processed and updated second by second.

Inattentiveness will inevitably be encouraged by autonomous vehicle driver assistance that allows disengagement (hence distraction opportunity) by the driver; even a few seconds of it will leave the driver many seconds behind the game.
It is going to end badly, the same as with other distractions like mobile phones.

As mechanics will know, the test drive is the best indication of how the vehicle drives, brakes, handles, stops, accelerates, etc. It is the way to validate the effectiveness of the service or repair, and it gives the tech an overall impression of the roadworthiness or driveability of the vehicle.

Ordinary consumers become accustomed to deteriorating characteristics over time and often fail to recognise a problem.
The mechanic in full test mode will take the vehicle to the safe limit and test all systems.
This is to be sure that it performs satisfactorily in emergency situations.
The customer can then expect the vehicle to have a high safety margin when called on.

Driving 101:

If a driver has to go from inattention, or even complacency, to an emergency situation, it WILL end badly.


HarveyD

Another advantage of multiple sensors would be increased performance and SECURITY, due to the possibility of a 'fail soft' total system.

One (1) of the 4 to 5 on-board systems could fail without catastrophic effects, avoiding accidents and fatalities.
