Not all biometric use cases are the same, nor do they carry the same legal implications. This article explains the different regulations applicable to biometric identification, which aims to recognize one individual among many (1:N), and identity verification, which confirms that someone is who they say they are (1:1). The AEPD authorizes both biometric identification and biometric verification, provided that the measures required by the GDPR are met.
We also explain that the sanction recently imposed on Mercadona by the AEPD responds to a factual situation completely different from the use cases proposed by Veridas and dasGate, so the legal reasoning of that sanction does not apply to them.
We provide information and references to understand the European regulatory framework for the use of biometric technologies and how the use cases proposed by dasGate and Veridas are secure by design and scrupulously comply with all current regulations.
The Mercadona case
The Spanish Data Protection Agency (AEPD) has recently sanctioned Mercadona for the use of facial identification systems in open, publicly accessible spaces without the users’ consent, and we believe it is appropriate to clarify concepts and offer our opinion on the different uses and implications of biometric technology.
As is publicly known, in June 2020 Mercadona installed, as part of a pilot project, a facial recognition identification system in forty of its stores. According to what has been published, Mercadona’s system was a biometric identification system (1:N), since it tried to recognize whether the person being identified (1) was one of the persons with a criminal conviction or a restraining order in force in relation to Mercadona or its employees (N).
These people were recognized, without their consent, in a space open to the public: the entrance to the stores of that supermarket chain. The system operated remotely, identifying everyone who passed through that space, not only those who approached a particular point for pre-established access.
In July 2020, two complaints were registered with the AEPD, which, together with the facts and documents to which this organization had access, led the AEPD to open an investigation. That investigation concluded, one year later, with the published resolution imposing a fine of 2.5 million euros on the supermarket chain.
The resolution of the AEPD
In its resolution, the AEPD analyzes the allegations presented by Mercadona concerning the legitimacy of the processing, the judgment of proportionality (suitability, necessity, and proportionality in the strict sense), the impact assessment, the transparency of the information, the risks derived from the processing, etc.
Firstly, the AEPD indicates that, since it is a 1:N identification system that uses biometric data to uniquely identify a specific person from among several (one-to-many, or 1:N), we are dealing with a special category of data processing subject to the guarantees outlined in Article 9 of the GDPR.
Likewise, the AEPD argues in its resolution that the processing of personal data affected not only persons with final judgments and restraining orders but also every Mercadona employee and customer: by deploying a system to detect convicted persons, the same recognition method was automatically applied to everyone else.
In addition, the AEPD considers the facial recognition system used by Mercadona to be a massive and remote facial recognition system («remote biometric identification») as defined in the European Commission’s White Paper on Artificial Intelligence. This is because biometric data qualified as sensitive data are used, the processing takes place remotely in a space accessible to the general public, and it is automatic and continuous.
Based on this, the AEPD carries out an exhaustive study, concluding that «the processing of data based on facial recognition for identification purposes implemented by Mercadona is prohibited by the provisions of Article 9.1, as there is no cause to lift the prohibition among those set out in Article 9.2 of the GDPR», and that its use is not proportionate to the purpose pursued.
Differences between biometric identification and verification
“As far as facial recognition is concerned, «identification» means that a person’s facial image template is compared with many other templates stored in a database to find out whether their image is stored in it. «Authentication» (or «verification»), on the other hand, usually refers to the search for correspondences between two specific templates. It allows the comparison of two biometric templates that, in principle, are assumed to belong to the same person; thus, the two templates are compared to determine whether the person in the two images is the same.”
Based on this distinction, the AEPD had already concluded in its Report 36/2020 that:
“Attending to the distinction above, it can be interpreted that, under Article 4 of the GDPR, the concept of biometric data would include both cases, both identification and verification/authentication. However, and in general, biometric data will only be considered as a special category of data in cases where they are subject to technical processing aimed at biometric identification (one-to-many) and not in the case of biometric verification/authentication (one-to-one)”.
More recently, in its guide “Data protection in labor relations,” the AEPD reiterated this criterion.
This distinction, which was previously considered only at the technical level, now has significant implications at the legal level. It should be noted, in any case, that this differentiation does not automatically imply a total legitimization or prohibition of the processing of such personal data but rather means that more or fewer issues must be taken into account (e.g., the exceptions in Article 9.2 of the GDPR, the need to carry out an impact assessment, etc.). In other words, the AEPD authorizes both biometric identification and biometric verification, provided that the measures required by the GDPR are met.
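The practical difference between the two operations can be sketched in a few lines of code. This is a minimal illustration, not Veridas’s actual engine: it assumes templates are plain numeric vectors compared by cosine similarity, and the 0.8 threshold is an arbitrary placeholder.

```python
import math

def cosine_similarity(a, b):
    """Similarity score between two biometric templates (vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def verify(probe, claimed_template, threshold=0.8):
    """1:1 verification: compare the probe against a single claimed identity."""
    return cosine_similarity(probe, claimed_template) >= threshold

def identify(probe, database, threshold=0.8):
    """1:N identification: search the probe against every enrolled template."""
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None if no enrolled template is similar enough

# Toy example with 3-dimensional "templates"
db = {"alice": [1.0, 0.1, 0.0], "bob": [0.0, 1.0, 0.2]}
probe = [0.9, 0.2, 0.0]
print(verify(probe, db["alice"]))  # True: one comparison, against one identity
print(identify(probe, db))         # "alice": a search over the whole database
```

Note that verification touches exactly one stored template, while identification necessarily scans every person in the database — which is precisely why the latter attracts stricter legal scrutiny.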
Best practices in the use of biometric technology
Clear examples of this approach are the European Commission’s White Paper on Artificial Intelligence, already mentioned, and its recently published proposal for an Artificial Intelligence Regulation. In both, the data subjects’ knowledge of the processing performed with their data (what data, for what purpose, for how long, by whom, etc.) and the control they have over that processing are, in most cases, the starting point for determining the level of risk.
Thus, scenarios of remote biometric identification in open spaces, where individuals lack the power to decide on the processing of their biometric data, are the type of biometric use governed by the most stringent regulations due to their exceptional characteristics. In cases of controlled or limited biometric identification, however, and in cases of biometric verification, the privacy of individuals is respected and the risk is lower.
Differences between dasGate and the Mercadona case
In the design of their biometric technology, dasGate and Veridas treat privacy by default and by design as a fundamental principle. From the early stages of product development and the construction of specific solutions for each use case, the protection of all users’ rights and freedoms is at the heart of our business.
The biometric technology developed by Veridas, and used by dasGate, is based on Artificial Intelligence, which has significant consequences for the characteristics of the mathematical vector (a template created from the facial image) used to make the comparisons:
- The mathematical vector is only interpretable by the biometric engine (and engine version) that created it. This makes the vector non-interoperable.
- The mathematical vector is irreversible, i.e., the original facial image cannot be reconstructed from it.
- The mathematical vector serves comparison purposes only: it is compared with another vector through a biometric engine (remember, only one in particular) to obtain a percentage of similarity between the two, which tells us whether or not they correspond to the same person.
- The mathematical vector offers no information beyond that, i.e., no characteristic of the subject (emotions, gender, ethnicity, behavior) can be inferred from it.
In short, its use is always controlled and limited. Even setting aside the many other security measures applied to the system, if someone gains unwanted access to a mathematical vector, nothing can be done with it.
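A toy sketch can illustrate why such a vector is tied to one engine and cannot be inverted. Here a “biometric engine version” is stood in for by a fixed, seeded random projection — real engines are neural networks, and nothing below reflects Veridas’s implementation. The same face yields the same template under one engine version, a different and non-interoperable template under another, and the projection discards dimensions, so the input cannot be recovered.

```python
import math
import random

def make_engine(version_seed):
    """Toy stand-in for one biometric engine version: a fixed random projection.
    The seed plays the role of the engine version; changing it changes every
    template the engine produces."""
    rng = random.Random(version_seed)
    # Project a 4-dim "facial feature" vector down to a 3-dim template;
    # the reduction in dimensions loses information, so it cannot be inverted
    matrix = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
    def embed(features):
        return [sum(w * f for w, f in zip(row, features)) for row in matrix]
    return embed

def similarity(a, b):
    """Cosine similarity between two templates from the SAME engine."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

face = [0.4, 0.9, 0.1, 0.7]            # hypothetical features from one image
engine_v1 = make_engine(version_seed=1)
engine_v2 = make_engine(version_seed=2)

t1a = engine_v1(face)
t1b = engine_v1(face)
t2 = engine_v2(face)

print(similarity(t1a, t1b))  # ~1.0: same engine, same face, same template
print(t1a == t2)             # False: another engine version produces a
                             # different, non-interoperable template
```

Comparing a v1 template against a v2 template is meaningless: the two vectors live in unrelated spaces, which is the toy analogue of the non-interoperability property above.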
Thus, Veridas biometric solutions and dasGate biometric access control solutions can be used in verification mode (1:1) or identification mode (1:N). Each sector and use case will recommend one system or the other, especially considering the intended purpose, the level of security required, and the desired user experience.
As a general rule, the legitimate basis for carrying out this type of biometric processing must be, in our opinion, the informed consent of the users. This applies to verification (Article 6.1.a of the GDPR) and identification (Article 9.2.a of the GDPR).
However, when it comes to identification, we are dealing with special category data, so, as we have seen, it is imperative to ensure that all persons whose data could be subject to processing are aware of it and are free to give their explicit consent or refuse it.
At dasGate, this is guaranteed by ensuring that the facial recognition identification system is not indiscriminate and open but is confined to a small capture area (less than one meter from the reading terminal) that is adequately marked, which means that only and exclusively the data of persons who voluntarily place themselves in the capture area can be processed.
The data of persons who move freely around the site will not be processed, and such persons will not be identified; only those standing in front of the terminal will be. This means that the dasGate biometric identification access system is not considered a remote biometric identification system.
In dasGate’s access control systems, regardless of whether they use biometric verification or identification, we always start from a prior, informed registration carried out with the consent of the users who want to be verified or identified. The system uses only the data of registered persons to carry out facial comparisons, and those data are only compared against the data of those who voluntarily provide them at any given time (by presenting a credential and/or standing in front of the capture terminal).
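The consent-gated flow just described — enrollment only with explicit consent, then comparison only against the claimed identity’s stored template — might be sketched as follows. Class and method names are illustrative, not dasGate’s actual API, and the 0.8 threshold is a placeholder.

```python
import math

def similarity(a, b):
    """Cosine similarity between two template vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class BiometricAccessControl:
    """Consent-gated enrollment plus 1:1 verification at a terminal."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.enrolled = {}  # person_id -> template, stored only after consent

    def enroll(self, person_id, template, consent_given):
        # No template is ever stored without prior, explicit consent
        if not consent_given:
            raise PermissionError("explicit consent required before enrollment")
        self.enrolled[person_id] = template

    def verify_at_terminal(self, claimed_id, probe):
        # Only the single template of the claimed, previously enrolled
        # identity is compared; non-enrolled persons are never processed
        template = self.enrolled.get(claimed_id)
        if template is None:
            return False
        return similarity(probe, template) >= self.threshold

acs = BiometricAccessControl()
acs.enroll("alice", [1.0, 0.1, 0.0], consent_given=True)
print(acs.verify_at_terminal("alice", [0.9, 0.2, 0.0]))  # True
```

The key design choice mirrored here is that verification never scans the whole database: a probe is matched only against the one template the user themselves claims, so passers-by who never enrolled have no data in the system to compare against.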
In addition, both Veridas and dasGate have conducted Data Protection Impact Assessments (DPIAs) of their solutions, as required by the AEPD. This is done with the firm conviction that, although we act as data processors in the services we provide, the design and use of our systems must be analyzed before offering them to the market.
We also have a complete information security management system, backed by certifications such as ISO 27001 and the National Security Scheme (ENS).
At dasGate and Veridas, we believe that protecting the personal data of the users of our solutions must be at the core of our business. Thus, the systems we develop always seek control by the users of when, how, and by whom their personal data can be processed.
The solutions we offer, and the use cases in which dasGate and Veridas customers are currently using them, are based on user consent and user control over the capture of their data. We strongly believe that these should be the main guidelines for the correct use of biometric identification technology, especially when no other legitimate basis and sufficient proportionality have been duly justified.