Today’s digital-savvy consumers move faster than ever. With little time and patience, they expect the brands they choose to deliver seamless experiences, regardless of where they are or the type of business they’re interacting with.
Consumers are also more aware of digital security. They know that a simple email and password login isn’t enough for financial services anymore.
As a result, banks and credit unions are shifting to biometric authentication to maintain critical security measures and deliver a better experience for their on-the-go customers. We’ve seen it firsthand: at Veridas, real production data from our clients shows a 325% increase in the use of voice biometrics over the past two years.
While biometrics are easier and more secure than managing passwords, they are not infallible. The reality is that cybercrime doesn’t sleep, and deepfake threats have become a new enemy for organizations that use biometrics as part of their authentication strategy.
For financial institutions already relying on — or considering — biometrics as part of their security measures, it’s important to understand deepfake threats and what can be done to prepare for the risks that come with them.
What is a Deepfake Threat?
The term “deepfake” is derived from artificial intelligence (AI) technology known as deep learning algorithms that teach themselves to solve complex problems when given access to large sets of data. In the case of banking, these data sets might include voice and video recordings of their customers.
When a voice or face is used to access an account or conduct a transaction, organizations are relying on sophisticated algorithms to evaluate authenticity and “liveness.” Unfortunately, the effectiveness of those algorithms varies greatly.
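In rough terms, the evaluation just described can be sketched as a two-stage gate: a liveness (anti-spoofing) check, followed by a similarity comparison against the enrolled template. The function names, embedding representation, and thresholds below are illustrative assumptions, not Veridas’s actual implementation or any vendor’s API.

```python
import math

# Assumed operating points -- real systems tune these per channel and risk level.
MATCH_THRESHOLD = 0.80     # minimum similarity to the enrolled template
LIVENESS_THRESHOLD = 0.90  # minimum anti-spoofing confidence

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def authenticate(enrolled, presented, liveness_score):
    """Accept a sample only if it passes the liveness check AND
    matches the enrolled biometric template."""
    if liveness_score < LIVENESS_THRESHOLD:
        return False  # possible replay, injection, or deepfake presentation
    return cosine_similarity(enrolled, presented) >= MATCH_THRESHOLD
```

The order matters: a convincing deepfake can score high on similarity to the genuine user, which is why the liveness check is evaluated first and independently of the match score.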
As a result, well-funded hackers are preying on these algorithm vulnerabilities by using AI and machine learning (ML) technologies to create increasingly sophisticated deep voice and facial fakes. These deepfakes are fooling both people and systems, including breaching many modern authentication measures.
Understanding Threats to You and Your Customers
In the banking world, deepfakes have already facilitated scams as large as $35M (USD). Deepfakes are becoming more commonplace and being used to perpetrate a broader range of attack types, including:
New Account Fraud
Deepfakes create identities used to open new accounts for accruing debt, laundering money and other purposes.
Stolen and fake credentials are used to construct an artificial identity, which is then used to secure loans and acquire credit or debit cards.
Ghost Fraud Deepfakes
Stolen identities of recently deceased people are used to breach their accounts to drain funds, apply for loans and other fraudulent purposes.
Family members use deepfakes to convince financial institutions that a deceased person is still alive and continue collecting any existing benefits.
Balancing Security and User Experience
In the age of instant gratification, the convenience of biometric authentication is resonating with consumers. Compared to traditional authentication methods, facial and voice biometrics require no physical contact and are highly accurate, providing a more secure, reliable and convenient way to verify identities.
As consumers acquire more digital connections almost daily, they are already transitioning away from the burden of remembering and managing passwords, especially as they jump between multiple devices, apps and networks. Yet the impact of a security breach is a looming reality for both consumers and the financial institutions they trust. Finding the right balance requires security and customer-experience leaders within banks and credit unions to work together and make the right strategy and technology choices.
Finding a Best-In-Class Algorithm
Some good news: less than 1.5% of all successful digital attacks in 2022 were the result of compromised biometrics. To help sort out discrepancies in biometric algorithm effectiveness, institutions like the National Institute of Standards and Technology (NIST) conduct regular tests that involve millions of deepfake attack attempts against the top algorithms in the world. NIST testing is one benchmark financial institution security leaders can use to identify a best-in-class algorithm and build a modern tech stack that is reliable, secure, and up to date with the latest fraud-prevention measures.
Veridas is among the world elite in this sector. According to NIST, our facial biometrics engine ranks second among the nearly 150 algorithms submitted.