In the third post in this series, I discussed what happens when the prompts and the dialog are not complementary. This introduces some cognitive dissonance in the driver, much like conflicting body language and words. While this is often due to overlooking small (but frustrating) things, sometimes it is less self-evident. The example I discussed was how the preferred and friendly “How can I help you?” prompt can introduce usability issues if the system doesn’t have the proper NLU (Natural Language Understanding) to handle the broader range of responses this prompt invites.
You’ll notice that I referenced the first three pitfalls as obvious on the surface, with subtle – but important – nuances. However, the fourth pitfall is less obvious than the others. Let’s talk about what happens when a system tries too hard to avoid errors.
Pitfall 4: Trying too hard to avoid errors
The JD Power IQS responses make it apparent that drivers are concerned about, and unsatisfied by, infotainment system errors. Errors can make a system feel difficult to use, especially when the error prompts don't lead to higher task completion. One common complaint is that these systems throw too many errors. Reducing the number of errors users receive is a noble goal, but only if it's done properly. Some systems on the road today have taken error avoidance a step too far by trying to do something with every single voice command. The driver doesn't get an error, but is that a good thing? Good usability requires a system designed to prevent errors whenever possible, but one that is also helpful and handles them gracefully when they must occur. That last part is very often overlooked.
Nobody’s perfect, and in fact people learn through making mistakes, provided they understand what they did wrong, what the implications are, and how to recover. Reading through the IQS responses, drivers report that today’s systems don’t meet those criteria. Many users report that their commands were not recognized and that repeated attempts did not resolve the issue. Others report that the error messages were long but communicated nothing helpful. These systems are not failing gracefully. A simple example of a graceful error can be found on nearly any website’s user registration page: when you miss a field or enter a value incorrectly, the page highlights that field in red and provides a hint showing the proper format.
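That registration-page pattern can be captured in a few lines. This is a minimal, hypothetical sketch – the field names, patterns, and hint wording are illustrative, not from any real system – showing the key idea: report *which* field failed and *how* to fix it, rather than a generic error.

```python
import re

# Hypothetical per-field rules: a pattern to check, and a hint to show on failure.
RULES = {
    "email": (r"^[^@\s]+@[^@\s]+\.[^@\s]+$", "Use the form name@example.com"),
    "zip": (r"^\d{5}$", "Enter a 5-digit ZIP code, e.g. 02139"),
}

def validate(form: dict) -> dict:
    """Return a field-specific hint for each failing field (empty dict = success)."""
    hints = {}
    for field, (pattern, hint) in RULES.items():
        if not re.match(pattern, form.get(field, "")):
            hints[field] = hint  # tell the user how to fix it, not just that it failed
    return hints
```

The same principle applies to voice interfaces: a good error prompt names what went wrong and points the driver toward a phrasing that will work.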
Let’s go back to our examples from the last pitfall and look at them from a different angle. Imagine speaking to a toddler who agrees to do what you’ve asked, but then goes off and does something completely different. It’s likely not defiance (ok, in some instances, maybe) but a lack of understanding. Some cars on the road today act the same way, executing something in response to absolutely anything a driver says just to deliver on the command, whether right or wrong. It’s frustrating when it happens with another person, so we shouldn’t be surprised that it’s equally frustrating to our drivers. It makes the system seem incompetent. If we’re driving toward a true, powerful automotive assistant – which we are – the driver must trust the system as capable and competent. Systems like these erode trust by appearing unable to handle even basic commands.
The first step is to try to prevent errors whenever possible, but don’t be afraid to throw an error when you need to. If you do, make the error messages concise and helpful. Think about the words you are using – do they convey the right information and help the user complete the interaction with as little thought as possible? Will the error teach the driver how to complete the task next time, and will it reinforce what they did right? Next, escalate error prompts. Our research shows that the best error-handling strategies use three-step escalation prompts. First, begin with a short error that simply asks the driver to try again (e.g. “Pardon?”). Next, provide a longer prompt with more information that may help them change their phrasing or behavior and move toward task completion. Finally, if another error prompt is needed, throw the terminal error, giving the user enough information to start over and get through the task successfully. With proper wording, proper escalation, and proper dialog design, you’ll find that drivers are far more accepting of errors than they currently seem to be.
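The three-step escalation above can be sketched as a simple lookup on the number of failed attempts. The prompt wording here is a hypothetical example, assuming a navigation task; only the escalation structure reflects the strategy described.

```python
# Three-step escalating error prompts: short re-prompt, guided re-prompt,
# then a terminal error that tells the driver how to start over.
ESCALATION_PROMPTS = [
    "Pardon?",                                              # 1: brief, low-friction
    "Sorry, I didn't catch that. Try saying the destination "
    "name, for example 'Navigate to Main Street'.",         # 2: add guidance
    "I still couldn't understand. Say 'start over' to begin "
    "again, or 'help' to hear what I can do.",              # 3: terminal error
]

def error_prompt(attempt: int) -> str:
    """Return the prompt for the given failed attempt (1-based).

    Attempts beyond the third keep returning the terminal prompt,
    so the driver always gets a way out.
    """
    index = min(attempt, len(ESCALATION_PROMPTS)) - 1
    return ESCALATION_PROMPTS[index]
```

Keeping the prompts in data rather than scattered through dialog logic also makes it easy for writers to tune the wording of each escalation step independently.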
For me, the litmus test of a natural language system often comes back to considering how I’d want another person to respond in the same interaction. That holds true even when the other person is having a difficult time understanding me, or can’t do what I’d like. Build in-car systems with this in mind. As we design the systems of the future, let’s collectively make sure we’re building robust, reasonable error handling that helps build trust.
In the next – and final – post, we’ll talk about one last pitfall: Assuming a shorter interaction is a safer interaction.
Read more in this blog series:
Part 1: How to avoid 5 common automotive HMI usability pitfalls
Part 2: How to avoid 5 common automotive HMI usability pitfalls
Part 3: How to avoid 5 common automotive HMI usability pitfalls