Facial biometric systems are being adopted at massive scale across many use cases: digital customer registration, authentication for access to web services, unlocking cell phones, physical access to offices or sporting events, and more.
This spread of the technology is inevitably accompanied by new forms of deception aimed at gaining fraudulent access in the use cases mentioned. In the United States alone, the Federal Trade Commission (FTC) estimated losses of $2,331.2 million due to identity fraud in 2021, double the figure estimated for 2019. According to the FTC, identity fraud accounted for more than 50% of all reported fraud cases.
Fraud is nothing new: any process tied to a person’s identity, biometric or not, is targeted by a malicious few seeking access to rights that do not belong to them.
Biometrics is no different; what changes is how the fraud is carried out. This post aims to answer some questions about fraud against facial biometric systems and to explain the technology we build to protect against it.
Presentation attacks: a reality to prevent
In a biometric system, different weaknesses can be exploited in an attack depending on the component of the system that is being targeted. In this post, we will focus on what are known as Presentation Attacks (PA).
In presentation attacks, the attacker either constructs a replica artifact of a human biometric feature (e.g. a latex mask reproducing the physical features of a certain person) or modifies a biometric feature of their own (e.g. their face, to hide their identity or make it resemble someone else’s).
In these fraud attempts, the attacker presents either the artifact or the human characteristic to the biometric capture subsystem (camera and/or software in charge of taking a picture of a face) in a way that interferes with the logic that allows the system to decide whether or not a face belongs to the person being authenticated or identified.
In other words, the system is attacked without modifying any component; its response is altered by “teaching” the sensor a fraudulent biometric characteristic. In facial biometrics the sensor is the camera, and the “teaching” consists of recording with the camera an artifact that represents the fraudulent characteristic: a photo on a screen, a photo printed on paper, a realistic mask, etc.
When we say that the biometric characteristic is fraudulent, we are referring to one of these scenarios:
- An artifact fabricated either to look like the person it is intended to impersonate, or to look like no one in particular while allowing the attacker to conceal their identity.
- The attacker’s real face, made up, painted, or tattooed in a way that allows them to hide their identity or to resemble the person they want to impersonate.
A regulatory framework to protect against this type of attack
The framework for evaluating this type of attack is defined in ISO/IEC 30107-3:2017. The standard grew out of the interest in classifying attacks by parameters such as the effort and motivation required of the attacker, in order to establish a methodological framework that objectively measures how robust an attack detection system is. Some evaluators, such as iBeta Quality Assurance, order the attack types into two levels of difficulty:
- Level 1: The attacker does not know the algorithms used by the biometric solution under attack. It is estimated that the attacker can spend up to 8 hours on a given attack species, with an approximate maximum budget of $30 to build each artifact used. These attacks can be carried out with everyday office material, such as mobile devices and paper, and therefore require little skill or effort.
- Level 2: The attacker knows details of the algorithms used by the biometric solution. It is estimated that the attacker can spend up to 24 hours on a given attack species, with an approximate budget of $300 to build each artifact used.
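The two levels above can be summarized in a small data model. This is a minimal sketch for illustration only; the class and constant names are ours, not part of ISO/IEC 30107-3 or of iBeta’s tooling:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AttackLevel:
    """Parameters an evaluator uses to bound an attack level (illustrative)."""
    name: str
    attacker_knows_algorithms: bool
    max_hours_per_species: int
    max_budget_usd_per_artifact: int

# Figures as described in iBeta's two-level ordering above.
LEVEL_1 = AttackLevel("Level 1", False, 8, 30)
LEVEL_2 = AttackLevel("Level 2", True, 24, 300)
```

Note how each step up multiplies both the time and the budget available to the attacker, which is what makes Level 2 evaluations the more demanding benchmark.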
At Veridas, in line with our strict and firm commitment not to use customer data, we have developed dedicated databases to measure the performance of our algorithms against different species (types) of presentation attacks. These databases are essential to fine-tune the algorithms and to choose those that generalize best across all cases. Below we describe each species, along with example recordings so you can see them in action. Welcome to the Veridas facial anti-fraud laboratory.
Level 1: five different types of attacks
1. Photo Replay Attack: the attacker shows the camera a screen displaying the face they want to use.
2. Video Replay Attack: like the previous one, it reproduces the face on a screen but uses a video instead of using a photo as the source. This allows the person to blink their eyes, move their head or make other gestures that can be reproduced on the video.
3. Print Attack: the camera records a sheet of paper on which the target face of the attack has been printed.
4. Print 3D Layered Mask: like the previous one, this attack consists of printing a photograph of a face on paper. But in this case, instead of printing it once, it is done several times, and the features are cut out and glued to give a sense of 3D volume.
5. Print Mask Attack: this is a printed sheet, like the “Print Attack,” but then the eyes, the outline of the face, the mouth… are cut out, constructing a “paper mask” and allowing the target face to be used.
Level 2: nine different types of attacks
6. Avatar Attack 2D-to-3D: consists of automatically generating a 3D avatar of the target’s face from a photograph, then playing this avatar back on a screen shown to the camera.
7. Photo Replay 3D Render Attack: consists of rendering a static image from a 3D face model and presenting the image on a screen. The 3D face is handcrafted using computer graphics editing tools; the resulting very realistic render is then shown to the camera.
8. Video Replay 3D Render Attack: consists of building a 3D model of a face, as above, but instead of rendering a single photograph, a video is rendered in real-time. This animation is used to impersonate the person in the process.
9. Curved Paper 3D Mask: in this case, the person’s face is printed on a sheet of high-quality photographic paper and curved to give a sense of volume.
10. Layered 2D Transparent Photo: a photograph of the face is printed on two different paper substrates. One is photographic and very high quality. The other is a transparent vegetable substrate. The two prints are superimposed to give a certain sense of depth.
11. Latex Mask Attack: a handmade latex mask is manufactured from organic materials, which can degrade quickly. The manufacturing process requires the presence of the person to be impersonated, or a 3D reproduction of their face, because the person’s face is needed to build the mask mold. Once made, the mask is artistically painted to reproduce the features of the person to be impersonated.
12. Plastic Mask Attack: this is similar to the above but uses a thermoplastic or similar material. This type of mask is manufactured similarly to the latex ones but withstands the passage of time (the plastic material does not degrade in the same way).
13. Silicone Mask Attack: like the two previous ones, but using a silicone substrate for the mask. Being made of silicone, these masks are more durable than latex ones while preserving the same degree of flexibility.
14. Mannequin Head Attack: uses a mannequin like those found in clothing stores. It is dressed with accessories and hair to make it look more realistic.
Veridas, at the forefront of precision and security
Veridas has developed proprietary Presentation Attack Detection (PAD) technology to successfully identify all of the types of attacks mentioned above and thus determine whether the evidence collected belongs to a genuine customer or an attacker.
These technologies materialize in a proof-of-life score, which allows Digital Onboarding processes that do not meet the minimum standards established by the client to be discarded. Many companies have already implemented Veridas systems; you can read these success stories in detail on our website.
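As an illustration of how such a score is typically consumed on the client side, a minimal gate might look like the following. The threshold value and function name here are hypothetical, not part of the Veridas API:

```python
# Hypothetical minimum standard set by the client, on a 0.0-1.0 score scale.
LIVENESS_THRESHOLD = 0.90

def accept_onboarding(liveness_score: float,
                      threshold: float = LIVENESS_THRESHOLD) -> bool:
    """Accept the Digital Onboarding step only when the proof-of-life
    score meets or exceeds the client-defined minimum."""
    return liveness_score >= threshold
```

A client with stricter regulatory or risk requirements would simply raise the threshold, trading some false rejections of genuine users for stronger rejection of presentation attacks.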
Veridas is one of the few manufacturers in the world to have passed iBeta’s ISO/IEC 30107-3 PAD Level 2 evaluation of its liveness detection technology, and its facial biometrics engine has been evaluated by NIST in both 1:1 and 1:N tracks. This reinforces our commitment to offering the best technology in both accuracy and security, constantly improving both our facial recognition and our proof-of-life algorithms.
Focusing on proof-of-life detection, several types of attacks can be combined to make detection harder. To identify and prevent them effectively, two proof-of-life detection methods are available: active proof of life and passive proof of life.
- An active proof-of-life test asks the user to carry out a series of actions, such as random head movements. This type of liveness testing is mandatory under some regulations for specific use cases, but it usually entails a higher abandonment rate.
- In contrast, the user is not prompted to take any action in passive liveness testing. This makes the interaction with the user extremely simple, as there is no friction in the process, and therefore the process is faster and more efficient.
Veridas offers both technologies for customer enrollment and for biometric authentication processes. Depending on the use case and the specific needs of our customers, our teams recommend the configuration that best fits, based on the suitability and the level of risk each company can assume case by case.
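The trade-off between the two modes described above can be sketched as a simple selection rule. The function and parameter names are illustrative only, not a real Veridas interface:

```python
def choose_liveness_mode(regulation_requires_active: bool,
                         minimize_abandonment: bool = True) -> str:
    """Pick a liveness mode for a given onboarding or authentication flow.

    Active checks (random head movements, etc.) are mandatory under some
    regulations; when they are not, passive checks remove friction from
    the process and tend to lower abandonment rates.
    """
    if regulation_requires_active:
        return "active"
    return "passive" if minimize_abandonment else "active"
```

In practice the decision is made per use case during integration, which is why the choice is reviewed with the customer rather than hard-coded.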