Affectiva Emotion AI: Detecting driver’s emotions and cognitive states

How Nuance and Affectiva are bringing emotional intelligence to the mobility assistant

We’re seeing a significant shift in the way that people today want to interact with technology and devices. As virtual assistants become more popular – in smart speakers, online, and even in our cars – these assistants need to interact with people in the same way that we interact with one another. The transactional, question-and-response dialogue between people and technology will no longer cut it. A key element of human communication is the ability to convey complex emotions and cognitive states through non-verbal expressions from face and voice. So how do we emulate this in technology?

Recently, Affectiva and Nuance announced a collaboration to bring emotional intelligence to AI-powered mobility assistants by augmenting them with industry-first understanding of cognitive and emotional states. The result of this cooperation can be seen at CES 2019, where we are excited to share a number of new demonstrations integrating Affectiva’s technologies for the next generation of emotion-enabled vehicles.

But first, let’s have a closer look at Affectiva and their emotion AI solution:


What does Affectiva’s software do and how does it work?

Using in-cabin cameras and microphones, Affectiva’s Emotion AI analyzes facial and vocal expressions to identify the emotions, cognitive states, and reactions of the people in a vehicle. Our algorithms are built using deep learning, computer vision, speech science, and massive amounts of real-world data collected from people driving or riding in cars.


What metrics are taken into account and where is the data processed?

Affectiva Automotive AI includes a subset of facial metrics that are relevant for automotive use cases. These metrics are developed to work in in-cabin environments, supporting different camera positions and head angles. It also includes vocal metrics.

Deep neural networks analyze the face at a pixel level to classify facial expressions and emotions. They also analyze acoustic-prosodic features (tone, tempo, loudness, pause patterns) to identify speech actions.
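To make the vocal side of this more concrete, here is a rough, hypothetical sketch of the kind of acoustic-prosodic descriptors the answer mentions (loudness, speaking rate, pauses), computed with the open-source librosa library. This is not Affectiva’s actual pipeline; the file name, sample rate, and silence threshold are assumptions for illustration only.

```python
# Illustrative only: generic acoustic-prosodic features (loudness, speaking
# rate, pauses) with librosa. NOT Affectiva's pipeline; values are assumptions.
import numpy as np
import librosa

def prosodic_features(wav_path="utterance.wav"):
    y, sr = librosa.load(wav_path, sr=16000, mono=True)

    # Loudness proxy: per-frame root-mean-square energy.
    rms = librosa.feature.rms(y=y)[0]

    # Tempo proxy: onset events per second (a rough speaking-rate estimate).
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    onset_rate = len(onsets) / (len(y) / sr)

    # Pause pattern: fraction of the clip outside detected non-silent intervals.
    voiced = librosa.effects.split(y, top_db=30)
    voiced_samples = int(sum(end - start for start, end in voiced))
    pause_ratio = 1.0 - voiced_samples / len(y)

    return {"loudness_mean": float(np.mean(rms)),
            "loudness_var": float(np.var(rms)),
            "onset_rate_hz": float(onset_rate),
            "pause_ratio": float(pause_ratio)}
```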

All of this is processed locally – we do not send any facial or vocal data to the cloud. As a result, this technology fits perfectly with Nuance’s hybrid approach for mobility assistants.


What kind of database is powering this technology?

To develop metrics that provide a deep understanding of the state of occupants in a car, we need large amounts of diverse, real-world data to fuel our deep learning-based algorithms. To date, Affectiva has collected over 7.5 million face videos in 87 different countries. This opt-in dataset of crowdsourced, spontaneous emotion, gathered from people in their homes, on their phones, and in their cars, represents a broad cross-section of age groups, ethnicities, and genders. Using this foundational dataset and the latest advances in transfer learning, Affectiva Automotive AI learned how to detect facial and vocal expressions of emotion in the wild. (Read more about our emotion database.)
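For readers unfamiliar with the term, the transfer-learning pattern mentioned here generally means fine-tuning a model pretrained on large, generic data for a narrower task. The sketch below shows that pattern in its simplest form; the backbone, class count, and hyperparameters are assumptions for illustration and do not describe Affectiva’s actual models or data.

```python
# Minimal transfer-learning sketch (generic pattern, not Affectiva's models).
# Backbone choice and class count are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

def build_expression_classifier(num_classes: int = 8) -> nn.Module:
    # Start from a backbone pretrained on large-scale generic image data.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained feature extractor...
    for param in backbone.parameters():
        param.requires_grad = False

    # ...and replace the final layer so it predicts the target classes instead.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

model = build_expression_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # train on labeled in-cabin face crops
```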


Which emotional and cognitive states are detected and why?

Affectiva Automotive AI takes driver state monitoring to the next level, analyzing both face and voice for levels of driver impairment caused by physical distraction, mental distraction from cognitive load or anger, drowsiness, and more. In addition, we can measure the mood and reactions of a vehicle’s occupants so that OEMs and Tier 1 suppliers can use this data to personalize the in-cabin environment and the overall ride. Here are the metrics our technology currently provides, with more to come (a small illustrative sketch of one of them, head pose estimation, follows the list):

  • Tracking of all in-cabin occupants
  • Three facial emotions: Joy, Anger, and Surprise
  • Face-based valence: overall positivity or negativity
  • Four facial markers for drowsiness: Eye Closure, Yawning, Blink, and Blink Rate
  • Eight facial expressions: Smile, Eye Widen, Brow Raise, Brow Furrow, Cheek Raise, Mouth Open, Upper Lip Raise, and Nose Wrinkle
  • Two vocal emotions: Anger and Laughter
  • Vocal expression of arousal: the degree of alertness, excitement, or engagement
  • Head pose estimation: Head Pitch, Head Yaw, Head Roll
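
As referenced above, here is a hypothetical sketch of head pose estimation (pitch, yaw, roll) using OpenCV’s solvePnP with six detected 2D facial landmarks and a generic 3D face model. The landmark set, reference model, and camera approximation are assumptions for illustration; this is not how Affectiva computes its metric.

```python
# Hypothetical head-pose sketch: solvePnP with six 2D landmarks and a generic
# 3D face model (not Affectiva's method; camera model is a rough approximation).
import numpy as np
import cv2

# Approximate 3D reference points: nose tip, chin, eye corners, mouth corners (mm).
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),
    (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0),
    (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0),
    (150.0, -150.0, -125.0),
], dtype=np.float64)

def head_pose(landmarks_2d, frame_w, frame_h):
    """Return (pitch, yaw, roll) in degrees from six detected 2D landmarks."""
    focal = frame_w  # crude pinhole-camera approximation
    camera_matrix = np.array([[focal, 0, frame_w / 2],
                              [0, focal, frame_h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    _, rvec, _ = cv2.solvePnP(MODEL_POINTS,
                              np.asarray(landmarks_2d, dtype=np.float64),
                              camera_matrix, dist_coeffs)
    rot, _ = cv2.Rodrigues(rvec)
    pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
    yaw = np.degrees(np.arctan2(-rot[2, 0], np.hypot(rot[2, 1], rot[2, 2])))
    roll = np.degrees(np.arctan2(rot[1, 0], rot[0, 0]))
    return pitch, yaw, roll
```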


Tell us more about driver monitoring: what does it mean for the car and driver safety?

Every day, over 1,000 injuries and nine fatalities are caused by distracted driving in the US alone, and up to 6,000 fatal crashes each year may be caused by drowsy drivers. This points to a major need for driver monitoring to help improve road safety. To do this, there must be a deep understanding of driver emotions, cognitive states, and reactions to the driving experience. Additionally, the European New Car Assessment Programme (Euro NCAP) will require the automotive industry to develop next-generation safety features, such as driver monitoring. These systems are quickly becoming a standard safety requirement for regulatory bodies such as Euro NCAP, and will scale with evolving safety standards and future mobility needs.

Affectiva Automotive AI unobtrusively measures, in real time, complex and nuanced emotional and cognitive states from face and voice. These insights can then be used to trigger actions ranging from showing a coffee-cup icon to more sophisticated interventions, such as suggesting safe places to rest or lowering the in-cabin temperature, as realized in Nuance’s mobility assistant.
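
As a loose illustration of how such insights might be turned into actions, the sketch below maps driver-state metrics to simple in-cabin responses. The field names and thresholds are invented for the example and do not reflect Nuance’s or Affectiva’s actual decision logic.

```python
# Hypothetical mapping from driver-state metrics to in-cabin actions.
# Field names and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class DriverState:
    eye_closure: float   # 0-100, sustained eye closure
    yawning: float       # 0-100
    blink_rate: float    # blinks per minute
    anger: float         # 0-100

def suggest_action(state: DriverState) -> str:
    if state.eye_closure > 60 or state.yawning > 50:
        return "show coffee-cup icon and suggest nearby places to rest"
    if state.blink_rate > 30:
        return "lower the in-cabin temperature"
    if state.anger > 70:
        return "switch the assistant to a calmer, shorter dialogue style"
    return "no intervention"
```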


About Affectiva:

Affectiva envisions a world where technology can understand all things human. An MIT Media Lab spin-off, Affectiva is the pioneer of Human Perception AI – software that can detect nuanced human emotions, complex cognitive states, behaviors, activities, interactions and objects people use. Built on deep learning, computer vision, speech science and massive amounts of real-world data, Affectiva’s patented Human Perception AI is enabling leading automotive OEMs, Tier 1 suppliers, ridesharing providers and fleet management companies to build intelligent vehicles that can perceive all things happening with the people inside of them. Affectiva’s technology is also used by 25 percent of the Fortune Global 500 companies to test consumer engagement with ads, videos and TV programming.

Affectiva created and defined the category of Emotion AI – technology that can detect expressions of emotion from face and voice – which is projected to become a multi-billion dollar industry. With its evolution beyond emotion detection, Affectiva’s Human Perception AI enables more personalized and meaningful interactions between people and their devices, consumers and brands, and ultimately, between humans and the world around us.

Learn more about the Nuance and Affectiva Collaboration

In a recent webinar hosted by Nuance, Affectiva CEO Dr. Rana el Kaliouby outlined why we need human perception technology such as Emotion AI, how the deep learning-based models are built, and where they are being used in automotive applications. Here's the webinar recording.

Watch now



About Patrick Gaelweiler

Patrick Gälweiler is senior digital content and social media manager for Nuance’s automotive business. Prior to joining the Nuance team, Patrick spent years in public relations, corporate and marketing communications with a strong focus on B2B automotive communications. Most recently, Patrick worked as Corporate Communications Officer for a global automotive engineering service provider, where he was responsible for developing and implementing an internal and external communications strategy with a strong focus on digital communication channels. Patrick spends his free time on DIY projects and restoring vintage Vespas.