What’s next:
In the Labs

Beyond the algorithms: Shaping the future of the Automotive Assistant for autonomous cars

Automotive assistants are changing the way people experience the connected car – but when connected cars become autonomous, those assistants must leverage artificial intelligence not only to keep their passengers connected, but also to keep them informed and engaged in case they need to take the wheel. A recent study from Nuance and DFKI takes a look at the most effective ways of engaging drivers-turned-passengers when even just seconds matter.

Research on artificial intelligence (AI) and intelligent assistants is often discussed in the context of algorithms and deep learning – and, of course, data. However, we as researchers must also understand the environment in which this intelligence will operate. Only if you adapt the functionality to the context of use can your system be successful. This is especially true for the automotive assistant, as the car industry is experiencing rapid change across everything from fuel sources to software. As cars become connected to the Internet, drivers expect the same experience they have on their smartphones to be available in their dashboards.

This is paving the way for the rapid acceleration of autonomous automotive innovation – the self-driving car, which is changing our role in the car from driver to passenger. However, this will largely be a transition back and forth as needed. Autonomous cars are mastering many traffic situations, but not all of them, and if a hazard appears, drivers need to be re-engaged even when their attention lies elsewhere.

Designing the Automotive Assistant for this autonomous future is among the most pressing challenges in the industry. While none of us knows exactly what it will look like, the vision is getting clearer with each question answered. We first need to understand what people will do with their time after the switch from driver to passenger, how they will react to requests to take over control, and what the best way of handling those situations is. We rely on surveys, simulations, and usability studies as our makeshift crystal ball.

 

The study

In a user study we just completed in cooperation with DFKI, the world's largest AI research institute (where Nuance is a proud shareholder), we specifically investigated the best way to hand over control from the car to the driver, what information the car should provide, and what the impact on the user's trust is. To do this, we put users in a driving simulator (based on the popular OpenDS software, originally developed with Nuance's participation and now extended to also cover self-driving tasks) and tried out different ways of notifying the driver: through auditory, visual, and haptic (vibrational) channels, and combinations thereof. We let them pursue different activities during their non-driving time, again engaging their visual, auditory, or haptic capabilities (reading, watching videos, playing games, listening to music, etc.).

We varied the amount of information shown to them on the rationale for the car's request to transfer control, the current traffic situation, and the plan the car had been following before the transfer. Afterwards, we asked users about usefulness, convenience, trust, etc., and measured the actual time it took to complete the transfer of control.

 

The results

Drivers don’t like notifications using the same modality as their current activity. Instead they prefer a combination of complementary modalities adapted to that activity. For example, if someone is reading a book, transferring control back to the user is best done by combining audio and touch, or “haptic,” vibration signals. And if someone is reading and answering e-mails, audible alerts or cues are the best way to get a passenger’s attention. This highlights the importance of integrated, multimodal user interfaces that leverage voice, touch, and displays in an intelligent way.

  • To achieve the best results, it is important that the system has context information from the car and the car's sensors, including information about the driver's current activity and the sensory modality it occupies, so it can announce the transfer of control using the optimal modalities. This results in faster reactions and a better user experience.
  • Independent of the current driver activity, the auditory channel is considered more pleasant and usable than the visual channel and leads to faster reactions than the haptic channel.
  • Drivers trust auditory and haptic information from the autopilot more than purely visual information.
  • Data indicates that the reaction time is lowest when the driver is engaged in an auditory activity, such as listening to an audiobook or music.
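To make the first finding concrete, here is a minimal sketch of how a system might pick complementary notification channels from context information. All names and the activity-to-channel mapping are hypothetical illustrations, not the actual logic used in the study:

```python
# Illustrative sketch: choose takeover-notification channels that complement
# the driver's current activity, reflecting the study's finding that drivers
# prefer notifications on modalities different from the one already in use.

CHANNELS = {"auditory", "visual", "haptic"}

# Hypothetical mapping from a non-driving activity to the sensory
# channels it occupies.
ACTIVITY_CHANNELS = {
    "reading": {"visual"},
    "watching_video": {"visual", "auditory"},
    "listening_to_music": {"auditory"},
    "playing_game": {"visual", "haptic"},
}

def takeover_modalities(activity: str) -> set:
    """Return the channels not occupied by the current activity.

    If every channel is busy (or the activity is unknown), fall back to
    audio, which the study found pleasant, trusted, and fast overall.
    """
    free = CHANNELS - ACTIVITY_CHANNELS.get(activity, CHANNELS)
    return free or {"auditory"}
```

For instance, `takeover_modalities("reading")` yields the auditory and haptic channels, matching the book-reading example above.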


 

The last point fits in nicely with results from a recent complementary survey conducted by Nuance among 400 drivers in the US and the UK, looking at the types of activities drivers plan to do as passengers in an autonomous car. If alone, respondents cited their top five in-car activities as listening to the radio (64%), relaxing (63%), talking on the phone (42%), browsing the Internet (42%), and messaging (36%). The first three are already "auditory" or at least hands-free in nature; the other two can be made so via an Automotive Assistant.

 

Continued research

With many other facets still to research in this emerging space, studies and simulations enable us to develop an Automotive Assistant today that will assist drivers in various intelligent ways in the future: building a multimodal companion relationship with the driver, supporting the changing roles from driver to passenger, acting as a spokesperson for the car, and enhancing the experience for other passengers.

 
