The University of Alabama (UA) recently released research claiming that voice biometrics systems can be easily compromised. The attack described used off-the-shelf software to mimic an individual's voice. Shortly thereafter, Opus Research published a sobering response, highlighting existing mitigation techniques that address this type of attack. As organizations increasingly use voice biometrics to secure our bank accounts, health records and government data (and as data breaches multiply), public discourse around voice biometrics is focusing more and more on uncovering possible vulnerabilities and how they are addressed. I'm thrilled by this maturing public view of voice biometrics, and with this post I hope to contribute to the discussion by sharing my insights into the top three attack vectors on voice biometrics and how to address them.
Before I dive into attack vectors, I'd like to highlight why voice biometrics is becoming an increasingly prevalent replacement for PINs, passwords and security questions, especially in contact center and mobile app use-cases. Not only is voice biometrics a simple way for customers to authenticate to their mobile app or validate their identity when calling into a contact center, it's also more secure. In a report published in 2013, Opus Research examined the vulnerability of various authentication methods to a variety of attacks, and found that voice biometrics compared favorably.
Although voice biometrics is inherently more secure than a password or a token, no security technology is invulnerable. Understanding possible attack vectors and how to mitigate them is key to minimizing risk. As I covered in a previous post, voice biometrics is much less vulnerable to massive breaches than any knowledge based credential (passwords, PINs and security questions). However, attacks on individual accounts are possible. Here are the top three attack vectors and how to mitigate the risk of a successful attack:
1. Brute force attack
What is it? A brute force attack is when the attacker uses their own voice in an attempt to access another individual's account. The attacker keeps trying different accounts until they find one whose legitimate owner's voice is similar enough to their own to trigger a false accept.
How do you fight this? To address brute force attacks, voice biometric systems can detect that the same voice is being used across several accounts. That suspicious voice can then be added to a fraudster blacklist, so that any subsequent attempt is detected immediately.
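To make the idea concrete, here is a minimal Python sketch of cross-account voice matching with a blacklist. It assumes voiceprints are plain similarity-comparable vectors and uses made-up thresholds (`SIMILARITY_THRESHOLD`, `MAX_DISTINCT_ACCOUNTS`); a real system would use a proper speaker-recognition model and tuned operating points.

```python
import math

SIMILARITY_THRESHOLD = 0.95  # assumption: minimum score to treat two samples as the same voice
MAX_DISTINCT_ACCOUNTS = 3    # assumption: accounts one voice may touch before blacklisting

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class BruteForceDetector:
    """Tracks which accounts each (approximately) distinct voice has tried to access."""

    def __init__(self):
        self.observed = []   # list of (voiceprint, set of account ids it attempted)
        self.blacklist = []  # voiceprints flagged as likely fraudsters

    def record_attempt(self, voiceprint, account_id):
        # Match this attempt against voices seen before.
        for known_print, accounts in self.observed:
            if cosine_similarity(voiceprint, known_print) >= SIMILARITY_THRESHOLD:
                accounts.add(account_id)
                if len(accounts) > MAX_DISTINCT_ACCOUNTS:
                    self.blacklist.append(known_print)
                    return "blacklisted"
                return "suspicious" if len(accounts) > 1 else "ok"
        # First time we hear this voice.
        self.observed.append((voiceprint, {account_id}))
        return "ok"
```

The key design point is that the signal is behavioral, not per-account: no single login attempt looks wrong, but one voice fanning out across many accounts does.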
2. Recording attack
What is it? The attacker records the voice of a legitimate account holder, and plays back the recording during the authentication process.
How do you fight this? Recording attacks can be addressed through algorithms that are on the lookout for audio anomalies created by the recording and playback process, as well as with liveness tests. One way of performing a liveness test is to prompt for a random phrase, such as "the sky is green." A fraudster who has a recording of the authentication phrase will be unprepared for the random phrase and will be caught red-handed.
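The random-phrase liveness test can be sketched in a few lines of Python. The phrase pool and the comparison logic here are illustrative only, and the sketch assumes an upstream speech recognizer has already produced a transcript of the caller's response:

```python
import secrets

# Hypothetical pool of challenge phrases; a real system would draw from a
# much larger pool or generate phrases on the fly so recordings can't be reused.
PHRASE_POOL = [
    "the sky is green",
    "seven purple elephants",
    "my boat sails at noon",
]

def issue_challenge(rng=None):
    """Pick an unpredictable phrase so a pre-made recording won't match."""
    rng = rng or secrets.SystemRandom()
    return rng.choice(PHRASE_POOL)

def _normalize(text):
    return " ".join(text.lower().split())

def liveness_check(challenge, spoken_transcript):
    """Compare the transcript of the caller's response to the challenge phrase.

    `spoken_transcript` is assumed to come from a speech recognizer; the
    speaker's identity is verified separately against the enrolled voiceprint.
    """
    return _normalize(spoken_transcript) == _normalize(challenge)
```

Note that the liveness check only proves the speech was produced on demand; the voiceprint match still decides *who* is speaking.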
3. Synthetic speech attack
What is it? The attacker creates an artificial voice with software, or alters their own voice, to mimic the voice of the legitimate account holder.
How do you fight this? Synthetic speech attacks, such as the one described in the research by UA, can be addressed with synthetic speech detection algorithms that are on the lookout for voices created or modified by software.
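Conceptually, this means the authentication decision gates on two independent scores: how well the sample matches the enrolled voiceprint, and how likely the audio is to be machine-generated. The following Python sketch shows that gating with hypothetical score names and thresholds; the actual detection models are, of course, far more involved:

```python
# Assumed thresholds; a real deployment would tune these on labeled data.
MATCH_THRESHOLD = 0.8   # minimum similarity to the enrolled voiceprint
SPOOF_THRESHOLD = 0.5   # spoof score above this means "likely synthetic"

def authentication_decision(speaker_score, spoof_score):
    """Combine two hypothetical model outputs into one decision.

    speaker_score: similarity of the sample to the enrolled voiceprint
        (0.0-1.0, higher = closer match).
    spoof_score: output of a synthetic-speech classifier
        (0.0-1.0, higher = more likely created or modified by software).
    """
    if spoof_score >= SPOOF_THRESHOLD:
        # Even a strong voice match is rejected if the audio looks synthetic,
        # since a good voice clone will match the voiceprint by design.
        return "reject: suspected synthetic speech"
    if speaker_score >= MATCH_THRESHOLD:
        return "accept"
    return "reject: voice mismatch"
```

The ordering matters: the spoof check runs first, because a successful synthetic attack is precisely one that would otherwise pass the voiceprint match.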
There can be a number of variations on each one of the attacks listed above. For example, a recording can be collected through vishing, or by splicing together speech from a recording of the legitimate account holder on a social media channel. However, virtually all possible attacks on a voice biometric system fall under one of these three attack vectors.
Any organization that wishes to use voice biometrics for high-security applications should ensure that mitigating strategies are in place for the key attack vectors. Organizations that have followed deployment best practices have reported perfect track records after several years of using voice biometrics, including within high-risk use-cases such as corporate and high-net-worth banking.
As you may expect, Nuance voice biometric solutions incorporate all of the mitigation strategies listed above. In fact, to my knowledge Nuance is the only vendor that has productized algorithms to detect synthetic speech. Our philosophy has always been to stay at the forefront of voice biometric attack vectors, even in scenarios that have never played out in the real world, such as synthetic voice attacks. We're fully aware that attacks that are theoretical today will be leveraged by fraudsters in the future.
For a more comprehensive view of how to approach security and risk assessments for voice biometrics, I recommend reviewing the Opus Research report "The Security Officer's Guide to Voice Biometrics," and of course working in partnership with an experienced professional services team that has a proven record of delivering voice biometrics solutions for high-security applications.