Toyota unveils Driver Awareness Research Vehicle

DAR-V’s personalized information display on the side window.

Toyota used the Los Angeles Auto Show to unveil its Driver Awareness Research Vehicle (DAR-V). Developed in partnership with Microsoft Research, the DAR-V is intended to help reduce driver distractions before the key is even in the ignition. Using Microsoft technologies such as Kinect, interactive systems integrated into the vehicle’s design display important, highly personalized information on the side window as the driver approaches the car.

Using a combination of gesture control, voice and the key fob, drivers can navigate information such as updates on traffic and the weather, appointments and schedules for the day ahead, and even route details that might include a gas station if the vehicle is low on fuel. By addressing these critical daily priorities before even setting foot in the vehicle, a driver potentially has more mental bandwidth to focus on driving, Toyota suggests.
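Toyota has not published implementation details for DAR-V, but the approach-identify-brief sequence described above can be pictured as a simple event-driven pipeline. The following is a minimal sketch only; every name in it (DriverProfile, identify_driver, build_briefing, the skeleton-ID lookup) is a hypothetical stand-in, not part of any Toyota or Microsoft API.

```python
# Hypothetical sketch of the DAR-V interaction flow described above.
# None of these names correspond to a real Toyota/Microsoft API; they
# illustrate one way an "approach -> identify -> brief" sequence could work.

from dataclasses import dataclass, field

@dataclass
class DriverProfile:
    name: str
    calendar: list[str] = field(default_factory=list)

def identify_driver(skeleton_id, profiles):
    """Map a person detected by a depth sensor (e.g. Kinect) to a known driver."""
    return profiles.get(skeleton_id)

def build_briefing(driver, fuel_level):
    """Assemble the personalized items shown on the side window."""
    items = [f"Good morning, {driver.name}"]
    items += driver.calendar                       # today's appointments
    items.append("Traffic: 24 min to first stop")  # placeholder traffic/weather feed
    if fuel_level < 0.15:                          # low fuel: suggest a stop en route
        items.append("Route includes a gas station on Main St")
    return items

# Usage: a detected approach event triggers the window display.
profiles = {7: DriverProfile("Alex", calendar=["9:00 stand-up", "11:30 dentist"])}
driver = identify_driver(7, profiles)
if driver:
    for line in build_briefing(driver, fuel_level=0.12):
        print(line)  # in the concept car, rendered on the side window
```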

For the last 100 years the car has played the role of a functional tool, dutifully responding to our human needs and input. That relationship has forever changed. Not only can cars see things and react quicker than humans—they are becoming intelligent.

In fact, we now find ourselves at a point where perhaps the most important focus of all … may be on what is often called the driver-vehicle interface. In truth, it should more aptly be called the driver-vehicle relationship. People relate to electronic devices socially. They build bonds with them. And like any human-to-human connection, they have an emotional effect on people.

We are now capable of creating a true inter-relationship between the driver and an intelligent vehicle. And it will have a profound effect on saving lives on the highway. Today, I want you to start thinking of the car and the driver as teammates sharing the common goal of saving lives.

—Chuck Gulash, Director of Toyota’s Collaborative Safety Research Center (CSRC)

Because the DAR-V system can recognize and differentiate between individuals, the system might also be used to reduce driver distractions in other ways. For example, children might play “games” designed to help them buckle their seatbelts quickly, easing the stress on parents and helping them focus more of their attention on the road.

At Toyota, our focus is not only on protecting people in case of an accident, but also on preventing that accident from happening in the first place. While the auto industry will never eliminate every potential driver distraction, we can develop new ways to keep driver attention and awareness where it needs to be—on the road ahead.

—Chuck Gulash

Gulash discussed three specific safety research initiatives aimed at better leveraging vehicle design and interaction to help drivers keep their eyes on the road, hands on the wheel and brain engaged and aware: two Toyota-funded university research programs in addition to the DAR-V concept vehicle.

MIT AgeLab observes the human factors of voice command. Toyota helped fund a study at MIT’s AgeLab whose purpose was to expand understanding of the human factors of voice command; the results were published in a white paper authored by Dr. Bryan Reimer and Bruce Mehler of MIT.

Researchers found that the mental demands on drivers while using voice command were actually lower than expected, potentially because drivers compensate by slowing down, changing lanes less frequently or increasing the distance to other vehicles.

However, in a number of the voice interactions studied, drivers took their eyes off the road for longer than expected. This effect was more pronounced among older drivers, some of whom were found to physically orient their bodies toward the voice command system’s graphical interface when engaging with it.
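Eyes-off-road time of this kind is typically computed from gaze samples coded as on-road or off-road during a task window. The sketch below illustrates that metric on an invented trace; it is not the AgeLab’s actual analysis code, and the sample format is assumed for illustration.

```python
# Minimal sketch of one common glance metric: total eyes-off-road time
# during a task window. The sample format is invented; real studies use
# frame-by-frame coded video or eye-tracker output.

samples = [  # (timestamp_s, gaze_on_road)
    (0.0, True), (0.5, True), (1.0, False), (1.5, False),
    (2.0, True), (2.5, False), (3.0, True),
]

def eyes_off_road_time(samples):
    """Sum the durations of intervals whose starting sample is off-road.

    The final sample only closes the last interval, so its own label
    contributes no duration.
    """
    total = 0.0
    for (t0, on_road), (t1, _) in zip(samples, samples[1:]):
        if not on_road:
            total += t1 - t0
    return total

print(eyes_off_road_time(samples))  # 1.5 seconds off-road in this toy trace
```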

Stanford autonomous driving human factors. Using one of the most advanced driving simulators in the country, a collaborative project between the CSRC and Stanford University is studying how drivers interact with new automated safety technologies that are increasingly capable of taking over responsibility for driving the car.

The system combines EEG sensors to track brain activity, skin sensors to measure emotional arousal and eye-tracking headgear to follow directional glances. It can then align, on a single timeline, what’s happening inside the car, what’s happening outside the car and what’s happening inside the driver’s brain.
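The article doesn’t say how the Stanford setup synchronizes these streams; one standard approach is a nearest-timestamp join across the recorded channels. A minimal sketch, assuming invented column names and using pandas.merge_asof:

```python
# Illustrative time-alignment of three recorded streams (vehicle state,
# EEG, gaze). Column names are invented; pandas.merge_asof is just one
# standard way to do a nearest-timestamp join.

import pandas as pd

vehicle = pd.DataFrame({"t": [0.00, 0.10, 0.20], "steer_deg": [0.0, 2.5, 5.0]})
eeg     = pd.DataFrame({"t": [0.01, 0.09, 0.21], "alpha_power": [1.2, 1.1, 0.9]})
gaze    = pd.DataFrame({"t": [0.02, 0.12, 0.19], "on_road": [True, True, False]})

aligned = vehicle
for stream in (eeg, gaze):
    # For each vehicle sample, attach the nearest sample of the other
    # stream within a 50 ms tolerance.
    aligned = pd.merge_asof(aligned, stream, on="t",
                            direction="nearest", tolerance=0.05)

print(aligned)  # one row per vehicle timestamp, with EEG and gaze attached
```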

The simulator is unique in its ability to shift instantly from fully automated control to full driver control to mixed control. The research will help inform design improvements to automated systems, shaping how they work in partnership with the driver to improve safety for everyone.

For example, the project will help researchers understand how a driver responds to a sudden “takeover now!” alert compared with less aggressive commands or explanations. Other questions include how driver abilities are affected by prolonged periods in fully automated mode, including potentially slower reaction times and reduced situational awareness.
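To make the handoff scenarios concrete, here is a toy sketch of automated / mixed / manual mode transitions with an escalating takeover alert. The states, thresholds and messages are invented for illustration and are not drawn from the Stanford simulator.

```python
# Toy state machine for the control handoff the simulator can exercise:
# full automation, mixed control and full driver control, with a takeover
# request that escalates if the driver does not respond.
# All states, thresholds and messages are invented for illustration.

from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()
    MIXED = auto()
    MANUAL = auto()

def takeover_alert(seconds_since_request):
    """Escalate from a gentle prompt to an urgent command."""
    if seconds_since_request < 2.0:
        return "Please take the wheel when ready."
    if seconds_since_request < 4.0:
        return "Take over now!"
    return "Emergency slowdown engaged."  # fallback if the driver never responds

def next_mode(mode, driver_hands_on):
    """Advance one handoff step each time the driver's hands are detected."""
    if mode is Mode.AUTOMATED and driver_hands_on:
        return Mode.MIXED    # blend control during the handoff
    if mode is Mode.MIXED and driver_hands_on:
        return Mode.MANUAL   # driver confirmed: hand over fully
    return mode

# Usage: simulate a driver responding three seconds after the request.
mode = Mode.AUTOMATED
print(takeover_alert(3.0))                  # "Take over now!"
mode = next_mode(mode, driver_hands_on=True)
mode = next_mode(mode, driver_hands_on=True)
print(mode)                                 # Mode.MANUAL
```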

These are questions that need to be answered, not only to help build a product, but also to build a foundation of understanding and guidelines for how we proceed with further research into the human factors of automated vehicles.

—Chuck Gulash

Comments

HarveyD

Isn't all that information part of the first steps towards autonomous driving?

Once the car computers have collected sufficient information, driving the car will be the next step.

Good going Toyota and Microsoft and others.
