Beyond the algorithms: Shaping the future of the Automotive Assistant for autonomous cars

Automotive assistants are changing the way people experience the connected car – but when connected cars become autonomous, those assistants must leverage artificial intelligence to not only keep their passengers connected, but also informed and engaged in the event they need to take the wheel. A recent study from Nuance and DFKI takes a look at the most effective ways of engaging drivers as passengers when even just seconds matter.
By Nils Lenke
[Image: automobile cockpit with various information monitors and head-up displays]

Research on artificial intelligence (AI) and intelligent assistants is often discussed in terms of algorithms and deep learning – and, of course, data. However, we as researchers must also understand the environment in which this intelligence will operate: only if you adapt the functionality to the context of use can your system be successful. This is especially true for the automotive assistant, as the car industry is experiencing rapid change across everything from fuel sources to software. As cars become connected to the Internet, drivers expect the same experience they have on their smartphones to be available in their dashboards.

This is paving the way for the rapid acceleration of autonomous automotive innovation – the self-driving car – which is changing our role in the car from driver to passenger. For the foreseeable future, however, this will largely mean transitioning back and forth between the two roles as needed. Autonomous cars are mastering many traffic situations, but not all of them; if a hazard appears, drivers need to be re-engaged, even when their attention lies elsewhere.

Designing the Automotive Assistant for this autonomous future is one of the most pressing challenges in the industry. While none of us know exactly what it will look like, the vision is getting clearer with each question answered. We first need to understand what people will do with their time after the switch from driver to passenger, how they will react to requests to take over control, and what the best way of handling those situations is. We rely on surveys, simulations, and usability studies as our makeshift crystal ball.

 

The study

In a user study we just completed in cooperation with DFKI, the world’s largest AI research institute (in which Nuance is a proud shareholder), we investigated the best way to hand over control from the car to the driver, what information the car should provide, and what the impact on the user’s trust is. To do this, we put users in a driving simulator (based on the popular OpenDS software, originally developed with Nuance’s participation and now extended to also cover self-driving tasks) and tried out different ways of notifying the driver: through auditory, visual, and haptic (vibration) channels, and combinations thereof. During their non-driving time we let them engage in activities that occupy their visual, auditory, or haptic capabilities – reading, watching videos, playing games, listening to music, and so on.

We also varied the amount of information shown to them: the rationale for why the car was requesting the transfer of control, the current traffic situation, and the plan the car had been following before the transfer. Afterwards, we asked users about usefulness, convenience, trust, and so on, and measured the actual time it took to complete the transfer of control.
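As a rough illustration of the experimental design, here is a minimal sketch in Python of a factorial sweep over notification modality, non-driving activity, and information level, with the takeover time logged per trial. The factor names, levels, and the wait_for_takeover callback are illustrative assumptions for the example, not the actual study protocol or the OpenDS API.

```python
from dataclasses import dataclass
from itertools import product
import time

# Illustrative factor levels (assumptions, not the exact study conditions).
NOTIFICATIONS = [("auditory",), ("visual",), ("haptic",),
                 ("auditory", "haptic"), ("auditory", "visual")]
ACTIVITIES = ["reading", "watching_video", "gaming", "listening_to_music"]
INFO_LEVELS = ["reason_only", "reason_and_traffic", "reason_traffic_and_plan"]


@dataclass
class Trial:
    notification: tuple      # channels used for the take-over request
    activity: str            # what the participant was doing beforehand
    info_level: str          # how much context the car explained
    takeover_time_s: float   # measured time until manual control resumes


def run_trial(notification, activity, info_level, wait_for_takeover):
    """Issue a take-over request and measure how long the driver needs."""
    start = time.monotonic()
    wait_for_takeover()      # blocks until the simulator reports manual control
    return Trial(notification, activity, info_level,
                 takeover_time_s=time.monotonic() - start)


# Full factorial sweep over the condition space; in a real experiment each
# participant would see a counterbalanced subset of these conditions.
conditions = list(product(NOTIFICATIONS, ACTIVITIES, INFO_LEVELS))
```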

 

The results

Drivers don’t like notifications that use the same modality as their current activity. Instead, they prefer a combination of complementary modalities adapted to what they are doing. For example, if someone is reading a book, transferring control back to the driver is best done by combining audio with touch, or “haptic” vibration signals. And if someone is reading and answering e-mails, audible alerts or cues are the best way to get the passenger’s attention. This highlights the importance of integrated, multimodal user interfaces that leverage voice, touch, and displays in an intelligent way (a simple sketch of this selection logic follows the list of results below).

  • To achieve the best results, the system needs context information from the car and its sensors, including the driver’s current activity and the sensory modality it occupies, so it can announce the transfer of control through the optimal modalities. This results in faster reactions and a better user experience.
  • Independent of the current driver activity, the auditory channel is considered more pleasant and usable than the visual channel and leads to faster reactions than the haptic channel.
  • Drivers trust auditory and haptic information from the autopilot more than purely visual information.
  • Data indicates that the reaction time is lowest when the driver is engaged in an auditory activity, such as listening to an audiobook or music.
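For illustration, here is a minimal sketch of the complementary-modality idea from the results above: given the sensory channel occupied by the driver’s current activity, the take-over request uses the remaining channels. The activity-to-modality mapping and channel names are assumptions made for the example, not Nuance’s production logic.

```python
# Pick notification channels that do not collide with the channel the
# driver's current activity already occupies (illustrative sketch only).
ACTIVITY_MODALITY = {
    "reading": "visual",
    "answering_email": "visual",
    "watching_video": "visual",
    "listening_to_music": "auditory",
    "audiobook": "auditory",
}

ALL_CHANNELS = {"auditory", "visual", "haptic"}


def takeover_channels(current_activity: str) -> set:
    """Return the notification channels that complement the driver's activity."""
    occupied = ACTIVITY_MODALITY.get(current_activity)
    return ALL_CHANNELS - {occupied} if occupied else set(ALL_CHANNELS)


print(sorted(takeover_channels("reading")))             # ['auditory', 'haptic']
print(sorted(takeover_channels("listening_to_music")))  # ['haptic', 'visual']
```

For someone reading a book, this yields audio plus haptic cues, matching the finding described above.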


 

The last point fits in nicely with the results of a recent complementary survey conducted by Nuance among 400 drivers in the US and the UK, which looked at the types of activities drivers plan to do as passengers in an autonomous car. If alone, respondents said their top five activities in the car would be listening to the radio (64%), relaxing (63%), talking on the phone (42%), browsing the Internet (42%), and messaging (36%). The first three are already “auditory” or at least hands-free in nature; the other two can be made so via an Automotive Assistant.

 

Continued research

With many other facets still to research in this emerging space, studies and simulations enable us to develop an Automotive Assistant today that in the future will assist drivers in various intelligent ways: building a multimodal companion relationship with the driver as roles change from driver to passenger, acting as a spokesperson for the car, and enhancing the experience for other passengers.

 

Find out more about Dragon Drive on our website.


Nils Lenke

About Nils Lenke

Nils joined Nuance in 2003, after holding various roles for Philips Speech Processing for nearly a decade. Nils oversees the coordination of various research initiatives and activities across many of Nuance’s business units. He also organizes Nuance’s internal research conferences and coordinates Nuance’s ties to Academia and other research partners, most notably IBM. Nils attended the Universities of Bonn, Koblenz, Duisburg and Hagen, where he earned an M.A. in Communication Research, a Diploma in Computer Science, a Ph.D. in Computational Linguistics, and an M.Sc. in Environmental Sciences. Nils can speak six languages, including his mother tongue German, and a little Russian and Mandarin. In his spare time, Nils enjoys hiking and hunting in archives for documents that shed some light on the history of science in the early modern period.