Lessons from SXSW: How to design the UI for the Internet of Things

Mobile UX designer Tim Lynch shares his presentation from SXSW 2015 on how to design user interfaces for the Internet of Things.

When you consider IDC’s projection that there will be 15 billion connected devices by 2015 – a number ballooning to a massive 200 billion by 2020 – it is apparent that the Internet of Things and its newly defined ecosystem is the next phase of connectivity that will shape our lives.

This raises the question of how we will best engage with so many different devices within a new connected ecosystem. With so many options available to us, there is bound to be inconsistency. Each new device and form factor brings some novel way to interact with it – different inputs and outputs. My phone, for example, might still boast a touch screen and keyboard, but what about my refrigerator or my home lighting system? Will people need to learn and adapt to a different user interface for each device?

Speech holds the potential to alleviate these inconsistencies by serving as the common interface across the Internet of Things. Why speech? It’s the simplest and most human communication method. We should be able to communicate with all of these devices as we would with each other. When speech is thoughtfully executed, the user interface becomes almost invisible and blends into the day-to-day, complementing the devices that are becoming ubiquitous in every aspect of our lives.

People are, of course, already doing this. Phones, computers, cars, watches – they are all (to varying degrees) capable of speech interaction. But as the number of connected devices increases, what needs to remain at the forefront is how we design these speech experiences. A speech system on its own, without a holistic and thoughtful design, won’t make for a successful user experience. As designers, we have an exciting opportunity in front of us to inform how speech manifests itself on different devices, and what that ultimately means for the people using them. To do so, we look at questions like:

  • How do we set the right expectations for people looking to interact through speech?
  • What are the strengths of speech, and how do those map to what a person is trying to accomplish?
  • How does speech complement other modalities, like touch and gesture?

I recently explored these and other concepts during a presentation I gave at SXSW 2015, which I have shared with you here. As designers, what is our ultimate goal? To create thoughtfully designed voice interactions that will allow us to meaningfully engage with our connected devices.

Add speech to your app

Ready to bring speech to your app? You can with Nuance’s NDEV Mobile developer program.

Learn more


About Tim Lynch

Tim Lynch leads all design activities for Nuance's Mobile-Consumer division, encompassing a range of devices, including smartphones, televisions, the connected car, wearables, and many others. His experience ranges from leading design efforts for several consumer-facing applications to overseeing the design elements for large-scale OEM deployments. With a background in user experience design, Tim has been at the forefront of implementing intelligent voice experiences on devices for simple and intuitive consumer use.