When was the last time that you spoke to one of your devices? Most likely, it was your phone, but it easily could have been your car, your computer, or even your thermostat. At any point did you find yourself wondering, ‘How does all of that actually work?’
Voice interfaces are increasingly becoming a design standard as people expect to interact with their connected devices in the most convenient format possible. Often, this means via voice. The Nuance Mobile Cloud, for example, handles over 1.2 billion voice interactions a month across smartphones, tablets, cars, watches, and other smart devices. But the uptick in adoption by no means lessens the technical expertise needed to ensure that these interfaces deliver positive experiences for the people using them. Voice interfaces are not memory chips being stamped out on an assembly line – each requires thoughtful design specific to the device and the experience.
My colleagues Tim Lynch and Tanya Kraljic know a thing or two about that. Two of the leading experts in designing voice interfaces for consumer devices, Tim and Tanya recently held a webcast in conjunction with O’Reilly Media that explored the fundamental considerations designers and developers should keep in mind as they bring speech to an ever-expanding lineup of devices and systems. The topics Tim and Tanya covered include:
- The foundational elements of speech
- Design considerations for building a great speech experience for the user
Click here to access the full webcast replay – featuring Tim and Tanya’s commentary, as well as a thoughtful dialogue with the webcast audience. Have questions? Share them in the comments section below and we’ll get back to you.