NVIDIA has announced exciting news from ISC 2013 – NVIDIA collaborated with a research team at Stanford University to create the world's largest artificial neural network built to model how the human brain learns. The network is 6.5 times bigger than the previous record-setting network developed by Google in 2012.
At the heart of this research are NVIDIA's GPU accelerators, which are driving a new era in machine learning through neural networks – a rapidly growing segment of artificial intelligence (AI). Machine learning is the science of getting computers to act without being explicitly programmed, and it is behind some impressive recent advances – including voice technology.
Nuance is among those actively leveraging the power of NVIDIA's GPUs to advance voice technologies, and ultimately to create intelligent and intuitive spoken interactions with our devices, apps, systems, personal assistants, and much more. Nuance trains its neural network models to understand users' speech using terabytes of audio data. Once trained, the models can recognize spoken words by relating them to the patterns they learned during training.
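To make the train-then-recognize idea concrete, here is a minimal, purely illustrative sketch: a tiny one-hidden-layer neural network trained on synthetic "acoustic feature" vectors for two hypothetical word classes. The data, network size, and training loop are toy assumptions for illustration – Nuance's production models and datasets are vastly larger and more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for acoustic features: two well-separated clusters
# of 4-dimensional vectors, one per hypothetical word class.
X = np.vstack([rng.normal(-1.0, 0.3, size=(50, 4)),
               rng.normal(+1.0, 0.3, size=(50, 4))])
y = np.array([0] * 50 + [1] * 50)

# One hidden layer with sigmoid activations, trained by gradient descent.
W1 = rng.normal(0, 0.5, size=(4, 8))
W2 = rng.normal(0, 0.5, size=(8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(500):
    h = sigmoid(X @ W1)              # hidden-layer activations
    p = sigmoid(h @ W2).ravel()      # predicted probability of class 1
    grad_out = (p - y).reshape(-1, 1) / len(y)
    grad_W2 = h.T @ grad_out         # backpropagate the error
    grad_h = grad_out @ W2.T
    W2 -= lr * grad_W2
    W1 -= lr * X.T @ (grad_h * h * (1 - h))

# After training, the model "recognizes" inputs by matching them against
# the patterns it learned.
preds = sigmoid(sigmoid(X @ W1) @ W2).ravel() > 0.5
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

In practice, training models like this on terabytes of audio is exactly the kind of workload GPUs accelerate: the matrix products above map directly onto highly parallel GPU hardware.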
What does this all mean for the average consumer? The end result is an interaction between man and machine that is effective, efficient and, quite frankly, exciting. It's the point where technology adapts to us as humans, rather than humans having to adapt to the limitations of our devices.
Nuance's CTO Vlad Sejnoha recently spoke about the acceleration of voice technologies at the MIT Mobile Summit in San Francisco, framing it as part of an evolution in the way we'll engage devices in the era of mobile computing. Nuance's advancements in voice and language cut through the clutter of menus and buttons, providing direct access to people, information and content through simple voice commands or conversational interactions.
To learn more about NVIDIA’s announcement, view the press release at http://nvidianews.nvidia.com/.