
Two keys to stop deepfakes with biometrics


More and more news stories, articles and studies are being published about the ability of deepfakes to prevent us from distinguishing between an image of a real human being and one that is not. This coverage generates uncertainty in society about whether new methods of digital identity verification, especially biometric ones, will be able to detect these impersonation attempts and thus effectively prevent fraud.

One of the most recent, a study published by Sophie Nightingale, of the Department of Psychology at Lancaster University in the UK, and her colleague Hany Farid, of the Department of Electrical Engineering and Computer Science at the University of California, Berkeley, showed that humans perform worse than chance when asked to distinguish between fictitious human faces generated by artificial intelligence programs and photographs of real people. This study, among others, warns of the harmful uses of these technologies, which “open the door to future scams and blackmail” such as “revenge pornography or fraud”.

Figure: the faces rated by participants in Nightingale and Farid’s study as most and least trustworthy, with their scores on a scale from 1 (very untrustworthy) to 7 (very trustworthy). Notably, the four faces rated most trustworthy are all of synthetic origin (S), while the four rated least trustworthy are all real (R), which illustrates the risk involved.

At a time when biometric methods for remote identity verification are booming, what role can these new solutions play in preventing deepfake fraud? Can they be effective technologies for combating deepfakes? Here are some important keys.

The key lies in distinguishing between capturing and analyzing

Every digital identity verification process has two stages: capturing the evidence to be analyzed (an identity document, a selfie, etc.) and analyzing that evidence to determine its authenticity or to compare one piece of evidence with another. Both stages are equally important, although news coverage usually focuses on the second.

A more precise definition of these stages would be as follows:

  1. Capturing: the act of recording material reality by taking a photograph or video. In short, materializing what is happening in the real world through the sensors of a camera and fixing it in a physical or digital visual representation.
  2. Analyzing: any kind of human or machine interpretation of previously captured content. An analysis can indicate whether a photograph or recording is authentic, whether the image has been manipulated, whether it contains certain features (a face, a dog, etc.), and so on.
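
To make the separation concrete, here is a minimal sketch of a two-stage verification pipeline in Python. The names (CaptureResult, capture_selfie, analyze_evidence) are illustrative, not part of any real SDK; the point is that the capture stage produces both the image and provenance metadata, and the analysis stage consumes both.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

TRUSTED_SDK = "capture-sdk/1.0"  # illustrative identifier, not a real product

@dataclass
class CaptureResult:
    """Stage 1 output: the image plus provenance metadata."""
    image_bytes: bytes
    captured_at: datetime
    device_model: str
    sdk_version: str

def capture_selfie(frame: bytes, device_model: str) -> CaptureResult:
    """Stage 1 (capturing): materialize a camera frame together with
    a record of how and when it was produced."""
    return CaptureResult(
        image_bytes=frame,
        captured_at=datetime.now(timezone.utc),
        device_model=device_model,
        sdk_version=TRUSTED_SDK,
    )

def analyze_evidence(evidence: CaptureResult) -> bool:
    """Stage 2 (analyzing): interpret previously captured content.
    This sketch refuses evidence whose capture path is unknown before
    any biometric comparison would even run."""
    if evidence.sdk_version != TRUSTED_SDK:
        return False  # provenance unknown: do not trust the image
    # ... face detection / biometric matching would run here ...
    return True

selfie = capture_selfie(b"<jpeg bytes>", device_model="Pixel 8")
print(analyze_evidence(selfie))  # True: the capture path is known
```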

In this sense, Nightingale and Farid’s study describes situations in which this analysis layer produces the wrong result: a human interprets something as authentic even though it is fake. But we have also learned that on other occasions it is not (only) the human that is misled; the very artificial intelligence algorithms trained to distinguish genuine faces from artificial ones are fooled by other algorithms.

The latter relies on Generative Adversarial Networks (GANs), in which a generator algorithm is trained in a zero-sum game against a discriminator until the images it produces can fool detection algorithms, biometric engines included. All in all, we can conclude that the analysis layer on its own is vulnerable to these new types of attack.
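
As a rough illustration of that zero-sum game, here is a minimal GAN training loop in Python using PyTorch (a framework choice of ours; the article does not prescribe one). The generator learns to produce samples the discriminator scores as real, while the discriminator learns to tell them apart; each gradient step for one network makes the other’s task harder.

```python
import torch
import torch.nn as nn

# Toy setup: "real" data are 2-D points from a shifted Gaussian;
# the generator maps random noise to fake 2-D points.
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) + 3.0   # samples of "material reality"
    fake = G(torch.randn(64, 4))      # generated samples

    # Discriminator step: learn to score real as 1 and fake as 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to make the discriminator score fakes as 1.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The same dynamic is what makes a purely analytical defense fragile: if an attacker can query or approximate a detector, a generator can be trained against it directly.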

It is therefore key to understand the process as consisting of two stages, capturing and analyzing, and it is of vital importance to control and know the method of capture.


Don’t let “anyone” do the capturing

When we see a photo of a person and begin to analyze its veracity, we should first ask ourselves a few questions: Do I know how it was captured? Do I know when it was captured? Under what conditions? With what device? By whom? Where? And so on.

None of this is known when we look at and analyze databases of photographs like the ones in the study above. After all, a deepfake is an artificial image that was either generated outright (never captured by a camera) or captured by a camera and subsequently manipulated.

Now consider a digital identity verification process such as Veridas’, whether for onboarding a new customer or for authenticating an existing one. In this case, Veridas controls both axes: capture and analysis. The capture SDKs are Veridas’ own, so we can know whether a photo was captured with them, whether it has been altered, when it was captured, how it was captured, and so on.

We also control the analysis: the algorithms are 100% owned by Veridas. We are therefore in a much better position to verify whether incoming information is authentic. Introducing a deepfake or an alteration of this kind into a Veridas process is practically impossible, because it would require very advanced hacking of both the front end (the capture) and the back end (the biometric engines). In addition, Veridas is working on detection algorithms specific to deepfakes.
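
To illustrate why tampering would require compromising both ends, here is a sketch of one common pattern for binding capture to analysis: the capture component signs the image together with its metadata, and the backend refuses anything whose signature or timestamp does not check out. This is a generic illustration built on Python’s standard hmac library, not a description of Veridas’ actual SDK or protocol.

```python
import hmac, hashlib, json, time

SHARED_KEY = b"provisioned-device-key"  # illustrative; real systems use per-device or per-session keys

def sign_capture(image: bytes, metadata: dict) -> dict:
    """Capture side: sign the image and its provenance metadata so
    neither can be swapped out after capture without detection."""
    payload = image + json.dumps(metadata, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"image": image, "metadata": metadata, "tag": tag}

def verify_capture(capture: dict, max_age_s: int = 60) -> bool:
    """Analysis side: check signature and freshness before running
    any biometric engine on the image."""
    payload = capture["image"] + json.dumps(capture["metadata"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, capture["tag"]):
        return False  # image or metadata altered after capture
    if time.time() - capture["metadata"]["captured_at"] > max_age_s:
        return False  # stale capture: possible replay
    return True

capture = sign_capture(b"<jpeg bytes>", {"captured_at": time.time(), "device": "sdk-camera"})
assert verify_capture(capture)  # passes; flipping any byte would fail
```

Under a pattern like this, injecting a deepfake requires both forging the signature on the capture side and defeating the detector on the analysis side, which is precisely the two-front attack described above.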

Ultimately, it is vitally important to realize that when capture and analysis do not go hand in hand, deepfake attacks are easy to mount. But when both points are controlled, as in an onboarding or authentication process with Veridas, and the technology is 100% automated, with no human in the loop, such attacks become practically impossible.
