Last week, my wife and I drove to Berlin for the weekend, and when we tried to find ‘Hotel Bellevue’, all the high-end navigation system in my premium-brand car could tell us was ‘nothing found.’ For a moment I panicked, afraid that I had made a reservation at a non-existent hotel (this was my wife’s immediate assumption), but I realized that was not the case, and eventually we found the hotel and spent a lovely weekend in Berlin. Still, it made me wonder: why did that happen?
Today’s connected car navigation systems provide access to a variety of location-based services (LBS) content. Some of that content resides within the car, and some sits in the cloud. Location-based cloud content (‘off-board’) typically includes hotels, restaurants, fuel stations, parking, traffic, and the like, whereas the on-board content includes the navigation map, augmented with a complete destination address list and a limited set of POIs (Points of Interest). What’s happening here is important: the cloud extends the on-board capability with a long list of POI categories (such as restaurants, train stations, hospitals, medical doctors, schools, tourist attractions, and landmarks) and with individual listings searchable by name, such as hotels, landmarks, and amusement parks.
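To make the on-board/off-board split concrete, here is a minimal sketch of a hybrid lookup: the limited on-board database is tried first, and the cloud LBS extends coverage when the on-board map comes up empty. All names here (`OnboardDb`, `CloudLbs`, `find_poi`) and the sample listings are illustrative, not any vendor’s actual API or data.

```python
class OnboardDb:
    """Hypothetical stand-in for the limited on-board POI set."""
    def __init__(self, pois):
        self.pois = pois

    def search(self, query):
        # Simple substring match over the on-board listings.
        return [p for p in self.pois if query.lower() in p.lower()]


class CloudLbs:
    """Hypothetical off-board listings reachable only via the cloud."""
    def __init__(self, listings):
        self.listings = listings

    def search(self, query):
        return [p for p in self.listings if query.lower() in p.lower()]


def find_poi(query, onboard, cloud):
    """One entry point: try on-board content first, extend via the cloud."""
    results = onboard.search(query)
    # Individual listings such as a specific hotel often exist only off-board.
    return results if results else cloud.search(query)


onboard = OnboardDb(["Brandenburg Gate", "Berlin Hauptbahnhof"])
cloud = CloudLbs(["Hotel Bellevue, Berlin", "Hotel Adlon, Berlin"])
print(find_poi("Hotel Bellevue", onboard, cloud))  # found via the cloud fallback
```

With a single entry point like this, the user never has to know which source answered the query.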
The problem is that cloud and on-board content are accessible via different menu items in my car’s navigation menu. Before performing a POI query, I have to specify which type of search I want the system to perform. For the user, this means a break in the user experience: it’s inconsistent and leads to confusion. As a consumer, I don’t know whether the information I’m looking for resides in the vehicle or in the cloud, and I don’t want to have to care. Now, back to my weekend trip: it turns out my wife had chosen the on-board menu path when searching for the hotel in Berlin. Can you blame her? How was she supposed to know that this information is only available via the cloud?
Voice is the most natural interface, so I use the voice recognition feature in my car a lot. But current voice recognition applications still require that drivers first say, “POI Search,” followed by the query, such as “Hotel Bellevue” or “Fuel station nearby.” We need to fix this: remove the gate command (“POI Search”) and let drivers simply say what they want, with no extra steps. For example, searching for “Hotel Bellevue in Berlin” with voice technology that is partly embedded and partly in the cloud will return the desired result. The voice system needs to have the hotel name in its vocabulary and be able to classify the query as a POI/hotel search. Solving this technological challenge requires access to, and deep knowledge of, big LBS databases. That is why Nuance recently signed partnerships with major providers of POI and address data, including industry giants like HERE, Infogroup, Factual, and AutoNavi.
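The idea of classifying a free-form utterance without a gate command can be sketched as follows. Production systems use statistical natural-language understanding trained over large LBS databases; the keyword rules and the tiny vocabularies below are purely illustrative assumptions, not how any shipping recognizer actually works.

```python
# Hypothetical vocabularies, as would be built from LBS partner data.
HOTEL_NAMES = {"hotel bellevue", "hotel adlon"}
POI_KEYWORDS = {"hotel", "fuel station", "parking", "restaurant"}


def classify(utterance):
    """Map a free-form utterance to a domain, with no gate command required."""
    text = utterance.lower()
    # A known listing name is the strongest signal: classify as a hotel search.
    if any(name in text for name in HOTEL_NAMES):
        return "poi/hotel"
    # Otherwise fall back to generic POI category keywords.
    if any(kw in text for kw in POI_KEYWORDS):
        return "poi"
    return "unknown"


print(classify("Hotel Bellevue in Berlin"))  # -> poi/hotel
print(classify("Fuel station nearby"))       # -> poi
```

Note that the classifier only works if the listing name is in its vocabulary, which is exactly why access to large POI and address databases matters.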
But let’s not stop there: future in-car voice user experiences will need to go even beyond what LBS can provide. Music and radio, as well as news, other entertainment, and even CRM services should be incorporated into the voice experience. The All-In-One menu is the goal here: the user can enter a query haptically, by gesture, or by voice, without following hierarchical voice menus or using gate commands like POI Search, Radio, or Address Entry. The technology will classify the query into the right context.
Voice technology and content need to grow together to create smart, contextual, and ultimately proactive experiences for the driver. Technology that finally adapts to the personal needs of the user – isn’t that what we all want?