Part 1: How to avoid 5 common automotive HMI usability pitfalls

When designing an automotive HMI, there are five usability pitfalls that are common across the industry. Part 1 of this five-part blog series explores audio and touch input, the foundation of a strong, reliable HMI.
How to get the key foundation right for automotive HMI with audio and touch input

If you work in the automotive industry, you’re probably familiar with the J.D. Power Initial Quality Study (IQS) results released each year. The survey gives drivers the opportunity to talk about their experiences with their cars. In recent years, one recurring theme has been that human-machine interfaces (HMIs) have usability issues and that, as an industry, we have an opportunity to improve the in-car user experience. Here at Nuance, we have identified similar issues in our own evaluations of many in-car infotainment systems. We also engage consumers directly to learn what they would like to see improved.

By reviewing these various data points, we’ve identified a few usability challenges that are common across the industry. This is the first post in a series where we will identify and discuss five of the biggest pitfalls that we’re hoping future systems will avoid.


Pitfall 1: Poor input (audio and touch)

Garbage in, garbage out. Even a solid user experience can only compensate so much when the quality of the input is poor. Let’s look at touch first. Think back to the earliest touch screens, where the resolution was low and responses lagged because of slow processing. Now imagine the same scenario in a car, where the driver needs to keep their eyes on the road. Laggy or imprecise touch in a multi-modal experience creates a disconnect between modalities, which is both distracting and unsatisfying.

Audio quality is even more important for a system with voice recognition. Everything we do begins with the automatic speech recognition (ASR) string. If we capture what the user says incorrectly, we can’t help the driver reach the right outcome. If we go back to the J.D. Power results, we know that drivers report systems often don’t understand what they say, sending them into endless error loops… and possibly driving them away from using these systems altogether. It isn’t only the quality of the entire string we care about, though. We know that drivers often speak too early, pause too long, or are interrupted by other passengers while speaking to the system. All of these factors contribute to a challenging audio signal.

Some of the solutions are straightforward. For touch, a high-quality screen with fine-grained touch resolution and a powerful CPU reduces latency. For audio, a high-quality microphone, placed appropriately, is essential. Each vehicle model has unique acoustics to account for. For example, a properly tuned microphone array can compensate for the driver’s head moving while driving. Fortunately, we can do more than spend on better hardware to fix these problems. With Nuance VoCon Speech Signal Enhancement (SSE), for example, we can filter out speech from other passengers, suppress road noise, and deliver a cleaner signal.
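To make the road-noise problem concrete, here is a minimal sketch of one of the simplest signal-cleanup techniques: a first-order high-pass filter that attenuates low-frequency rumble while leaving typical speech frequencies largely intact. (This is only an illustration of the principle, not how SSE works; the sample rate, cutoff, and test tones below are assumptions chosen for the example.)

```python
import numpy as np

def high_pass_filter(signal, sample_rate, cutoff_hz=200.0):
    """First-order high-pass filter: attenuates low-frequency energy
    (e.g., road rumble) while passing typical speech frequencies."""
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = np.zeros_like(signal)
    out[0] = signal[0]
    for i in range(1, len(signal)):
        out[i] = alpha * (out[i - 1] + signal[i] - signal[i - 1])
    return out

# One second of synthetic audio: a 50 Hz "road rumble" tone plus a
# 1 kHz tone standing in for speech energy.
sr = 16000
t = np.arange(sr) / sr
mixed = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
filtered = high_pass_filter(mixed, sr)
```

Checking the spectrum before and after shows the 50 Hz component is heavily attenuated while the 1 kHz component passes nearly unchanged; real in-car enhancement combines far more sophisticated techniques (beamforming, echo cancellation, speaker separation) on top of this basic idea.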

Finally, natural language understanding (NLU) can help overcome many of these challenges. While NLU improves the user experience in many ways, it also adds robustness to the speech input itself. NLU uses a statistical classifier that weighs sentence structure, word order, and other factors to make accurate guesses at the driver’s intent, making it easier to reach the desired outcome even when the ASR string has missing or incorrect words.
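As a toy illustration of why intent classification tolerates recognition errors, consider a keyword-overlap classifier (a deliberately simplified stand-in for a real statistical model; the intents and keyword sets below are invented for this example). Because the intent is scored across several words, a single dropped or misrecognized word usually still leaves enough evidence:

```python
# Toy intent classifier: scores each intent by keyword overlap with
# the (possibly imperfect) ASR string. Intents and keywords here are
# invented for illustration only.
INTENT_KEYWORDS = {
    "navigation": {"navigate", "directions", "route", "drive"},
    "media":      {"play", "song", "music", "album", "artist"},
    "climate":    {"temperature", "warmer", "cooler", "degrees", "fan"},
}

def classify_intent(asr_string):
    words = set(asr_string.lower().split())
    scores = {intent: len(words & keywords)
              for intent, keywords in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

# Even if ASR drops a word ("play [the] song ..."), enough evidence
# survives to recover the intent.
print(classify_intent("play song bohemian rhapsody"))  # media
print(classify_intent("navigate route home"))          # navigation
```

A production NLU system replaces this keyword matching with trained statistical models, but the underlying benefit is the same: the decision rests on the whole utterance rather than any single word.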


The foundation of a good user experience

Although this first pitfall may seem obvious, we discuss it first for a reason: it is the foundation of a solid user experience. Even if we avoid all four of the other pitfalls, the user experience will suffer if the quality of the input isn’t addressed first. This is especially true as users’ expectations are being shaped by voice interactions with other tools and services we have no control over. The quality of speech recognition is less of a problem nowadays, but implementing the voice user interface well still has its challenges. My hope is that we stay ahead of the curve in the automotive domain and ensure that users can come to expect their in-car voice recognition systems to understand them most (or all) of the time.

In the next post, I’ll talk about the second pitfall: giving the user too much.

Learn more in our webinar

Watch the replay of the “Top 5 automotive usability pitfalls (and how to avoid them)” webinar

Learn more


About Adam Emfield

Adam Emfield is the principal user experience manager at Nuance Automotive. He leads the Design & Research, Innovation, and In-Vehicle Experience (DRIVE) Lab, and is responsible for the usability program for Nuance’s Automotive division. Coming from a background in both cognitive psychology and industrial engineering, he and his cross-functional team work across the division to develop new ideas for in-vehicle experiences, as well as to validate existing concepts.