We believe in the right of people to use their biometrics to be identified voluntarily and securely.
Do you want to learn how we do it?
To ensure that we conduct our business in a safe manner for our users and customers, we comply with various national and international standards. These include, among others:
California Consumer Privacy Act
General Data Protection Regulation
Spanish Data Protection Agency
National Institute of Transparency, Access to Information and Protection of Personal Data
Superintendency of Industry and Commerce
In addition, our solutions must guarantee the highest level of trust, which is why we certify both the accuracy of our technologies and the security of our systems, in accordance with various standards. These certificates involve analysis by accredited third parties, and therefore provide added assurance that your data is in safe hands.
National Institute of Standards and Technology
ISO 30107-3 – iBeta Level 2
ISO 27001 – Information Security
Qualified product catalog – Video identification tools
National Security Scheme
Veridas has developed solutions that can be adapted to a wide range of use cases, depending on the needs of our customers. In this section we will explain what happens at the data protection level in each of them, although some details may vary depending on the specific implementation built for each customer's use case.
This solution allows the entire identity verification procedure to be carried out remotely, without the need to travel and with great savings in the personal and material resources required to carry it out.
During the registration or onboarding process, either through an app or a website, a photograph or video of the user will be taken, together with an image of a document that allows them to prove their identity.
All the data contained both in the document and in the image of the face will be sent to the validation systems developed by Veridas.
In this phase, the veracity of the document is analysed (processing the data it contains) and the identity of the person is verified. To do so, two biometric vectors are created: one from the photo contained in the document, and another from the selfie photo taken by the user at that moment. By comparing them with each other (a process known as 1:1, or one-to-one), it is possible to verify that a person is who they say they are.
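The 1:1 comparison described above can be sketched in a few lines. This is a minimal illustrative example, not Veridas' actual engine: the `embed` stub stands in for a trained neural network, and the 0.6 similarity threshold is an assumed value.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a biometric engine: maps a face image to a
    fixed-length unit vector. A real engine is a trained neural
    network; this deterministic stub is illustrative only."""
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    v = rng.standard_normal(512)
    return v / np.linalg.norm(v)

def verify_1_to_1(doc_photo: np.ndarray, selfie: np.ndarray,
                  threshold: float = 0.6) -> tuple[bool, float]:
    """Compare the vector from the document photo with the vector
    from the live selfie; cosine similarity above the (assumed)
    threshold counts as a match."""
    v_doc, v_selfie = embed(doc_photo), embed(selfie)
    similarity = float(np.dot(v_doc, v_selfie))
    return similarity >= threshold, similarity
```

After the comparison, a real deployment would discard both images and both vectors, as the text describes.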
After verification, this data is sent to Veridas' client company, which will generally act as the Data Controller (although in some cases our client acts as a Processor and passes the information on to the Controller), and is instantly deleted from Veridas' systems.
Our facial biometrics technology allows you to operate securely in the digital world by simply being you. No need to remember usernames and passwords, with this solution you can authenticate yourself from anywhere.
Authentication also involves verifying that a person is who he or she claims to be, but in this case the person must already be registered in the systems. Therefore, either they are in our customer’s database, or they have received a biometric QR code that will allow them to log in.
To make the comparison we will use, on the one hand, the biometric vector obtained from the photograph taken at that moment and, on the other, depending on the use case: a) the vector that our client company sends us and that is already in its possession (for example, derived from a previous onboarding process), or b) the data extracted from the biometric QR code in the user's possession (which may be on a card, on their smartphone…).
And that’s it! In just a few seconds you will have shown who you are, with the guarantee that no data will remain in our systems, as they will be used only for comparison purposes, and will then be deleted.
The voice recognition solution, with one of the leading technologies on the market, allows you to authenticate yourself from anywhere, in any language and with any phrase, and with high accuracy.
It works in much the same way as facial authentication: initially, a small voice recording will be made, whose biometric vector will be stored in the databases of Veridas’ client company, with whom the user contracts.
Once this is done, every time the user wants to authenticate to access the service, a short audio recording (three seconds is enough) will be processed in Veridas’ systems. After generating the biometric vector of this recording, it will be compared with the one previously stored in the client company’s servers, thus determining whether the person is who he/she claims to be.
Finally, in an operation that takes less than a second, the data is removed from all Veridas systems.
Our Biometric Anti-fraud Engine helps to prevent fraud. This service makes it possible to discern whether a person trying to sign up for the client company’s services is already in the database through our facial recognition technology, thus avoiding duplicate identities.
Das-FaceBond is a solution that works by matching the image of the person or document, taken (for example) in the onboarding process, with those that the client has stored in its databases. Therefore, it will always be an identification (1:N).
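A 1:N identification of this kind can be sketched as follows. The function name, the 0.6 threshold and the dictionary-based gallery are illustrative assumptions, not Das-FaceBond's actual interface; a production system would use indexed vector search at scale.

```python
import numpy as np

def identify_1_to_n(probe: np.ndarray,
                    gallery: dict[str, np.ndarray],
                    threshold: float = 0.6):
    """Compare one probe vector against every enrolled vector (1:N).
    Returns the best-matching identity and its score, or (None, score)
    if no score clears the threshold, i.e. the person is not yet
    present in the database (no duplicate identity)."""
    if not gallery:
        return None, 0.0
    # Cosine similarity, assuming unit-length vectors.
    scores = {identity: float(np.dot(probe, vector))
              for identity, vector in gallery.items()}
    best_id = max(scores, key=scores.get)
    if scores[best_id] < threshold:
        return None, scores[best_id]
    return best_id, scores[best_id]
```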
As it is a service provided at our client’s premises, in this case Veridas will not be in possession of the data at any time.
Veridas solutions are a guarantee of trust and security in the processing of personal data of our customers’ users. These are some of the measures we take to achieve this:
1. We always act as Data Processors: We do not process user data for our own purposes, but only to provide services to our client companies (the Data Controllers), and only in accordance with their guidelines.
2. Minimization of personal data: The personal data to which Veridas may have access during the provision of services is minimized as much as possible. The data processed will be different depending on the solution being used, but in any case will be only those essential to provide the service.
3. Biometric technology based on Artificial Intelligence: This technology works, broadly speaking (it may change depending on the solution), as follows:
Therefore, in case of theft or unauthorized access, this data would be totally useless: you would only have a string of numbers that you could not make sense of.
4. We do not retain personal data: Veridas retains neither the user’s personal data nor the biometric vectors that have been created: once the process for which we have been hired has been completed and the relevant information has been sent to the Client, we delete all the information that has been on our servers automatically.
In Spain, the protection of personal data is a fundamental right that has gained real weight since 2018, following the entry into force of the General Data Protection Regulation (GDPR) and the approval in Spain of Organic Law 3/2018, on Data Protection and Guarantee of Digital Rights (LOPDGDD, in Spanish).
In essence, it is a right that involves us all, as it seeks to prevent the indiscriminate and obscure use of our data, while at the same time giving us tools to ensure that it is respected. This, in a context where data is both the most valuable business asset and a risk to us as individuals if misused, gives us certain added guarantees when it comes to protecting our presence not only on the internet, but also offline.
You don’t have to do anything: it simply depends on where you live. If you live in the European Union, the rules in force there generally apply to you, and the GDPR is one of them; likewise, Member State laws that improve on its protection will also apply to you, although only within their territorial scope. Since Brexit, the UK has its own version of the GDPR.
On the other hand, Veridas, as a European and Spanish company, even if it operates in other countries, will always be subject to the provisions of the GDPR and the LOPDGDD, so that all our customers and users will see their rights covered at the same level, whether they are EU citizens or not. In addition, many other countries (in Latin America, US States, etc.) have their own regulations on the subject that also protect their citizens; at Veridas, as Data Processors, we strive to comply with the regulations where we operate.
Veridas only processes the data found in identification documents, as well as the image and voice that the person provides to verify their identity (by means of a photograph, video or recording), depending on the solution being used.
Finally, of course, Veridas also processes certain data as Data Controller, such as website data, which is limited to the use of certain cookies and data that may be provided to us for the sending of newsletters; more information can be accessed via this link.
In production environments, Veridas only processes user data for the time necessary to generate the biometric vectors used to verify that users are who they claim to be, and to perform the matching. This process usually takes just seconds.
Once this process is finished, and at the client's request, all the data collected is sent to the client (the Controller) and deleted from Veridas' cloud. In fact, it is the client company itself that triggers the deletion, and Veridas has established automatic periodic deletion processes, running every few seconds, in case that deletion is not carried out correctly.
Therefore, Veridas does not store data from our clients’ users when they use our solutions in real environments, as it only processes them for the minimum time required to provide the service.
In the case of a service that does not make use of our cloud servers, Veridas does not have access to the data at any time, as it is only processed locally. Therefore, in these cases Veridas does not even act as Data Processor.
Veridas, within the scope of the provision of its SaaS services, always acts as a Data Processor. This has several implications:
It never processes end-user data for its own purposes, as it is obliged not to deviate from the guidelines set by its clients (Data Controllers).
The direct relationship is between the end-user and the Controller, who must ensure that it can process personal data for this purpose and, consequently, that Veridas can also do so.
Often this will involve express consent to do so, a contract between the two parties justifying it, or a legal requirement to do so.
In the case of services that do not make use of our cloud servers, Veridas does not act as a Processor, as it does not have access to users’ personal data at any time.
In addition, Veridas’ obligations include the maintenance of appropriate security levels depending on the data being processed. In this sense, Veridas has, among others, the ISO 27001 and National Security Scheme certifications, in addition to other measures that it sometimes agrees with its Clients.
The main difference lies in determining who has the capacity to direct the processing to be carried out; thus, it is the Controller who determines what is to be done with the personal data, and the Processor must abide by its guidelines and not deviate from them.
In fact, if a Processor were to do so, it would risk being considered a Controller itself, with all the added obligations that this entails.
Although the previous regulation already introduced some specific rights in the field of Data Protection, with the GDPR others have been added and regulated in a uniform manner throughout the territory of the European Union. Thus, these are:
For more information about the GDPR rights, we recommend visiting the website of the Spanish Data Protection Agency, or that of any other EU authority, where you will find much more information.
And this is about the rights recognised by the GDPR, but other countries’ data protection legislation also recognises very similar or even identical rights for users.
First of all, remember that you cannot be charged for the exercise of these rights. The appropriate entity before which to exercise your rights of Access, Rectification, Deletion, Limitation, Opposition or Portability (or GDPR Rights) will always be the company with whom you have contracted or wish to contract; that is, the Data Controller.
However, in case you are not sure how to address the Controller and only know that you have contacted us (as Processors), we will do our best to provide you with contact information or even channel requests to the Controller, if necessary. To do so, you can contact us at firstname.lastname@example.org.
Finally, if you have not received a response from the Data Controller, another option is to contact the local Data Protection Agency.
As is well known, the General Data Protection Regulation (GDPR) restricts by default international data transfers (i.e. transfers outside the European Economic Area), unless certain special safeguards are met, such as specific consent, an adequacy decision by the Commission, etc.
With this in mind, Veridas uses by default the servers that Amazon Web Services has in Germany and Ireland to host its cloud services; therefore, it does not under any circumstances carry out international data transfers within the meaning of the GDPR. The data is only processed for a few seconds on those servers, and AWS furthermore guarantees that there is no international transfer of such data.
Of course, its participation as a Sub-processor is always agreed in advance with the Controller through the Data Processing Agreement (or DPA) that must be signed by the Parties, and any change in such provider must be authorised by the Controller.
Veridas also has AWS servers in the United States (Virginia and Oregon) to provide service to its customers located in America, provided that this is agreed with those customers in a DPA.
As defined in Article 4(14) of the GDPR, these are “personal data resulting from specific technical processing, relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data”.
According to this definition, a simple photograph of the face is not in itself biometric data; it is the specific technical processing to which the photograph is subjected that determines whether we are dealing with this type of data.
For example, in our case, the biometric vector that is created is biometric data, but not the image of the face that is used to obtain it.
This is a fairly common question, as the wording of Article 9 of the GDPR mentions that sensitive data are, among others, “biometric data intended to uniquely identify a natural person”.
However, the answer is to be found in the same sentence, as it specifies that only data intended to “uniquely identify a natural person” are sensitive data.
The Spanish Data Protection Agency (AEPD), the Irish Data Protection Commission, the “Article 29” Working Party, the European Commission, the European Data Protection Board (EDPB), the European Data Protection Supervisor (EDPS) and even the British ICO (Information Commissioner’s Office), which, although it refers to the scope of its own GDPR, is deeply inspired by its European precursor, have already expressed their views on this matter. All have come to the following conclusion: when the GDPR mentions biometric data among sensitive data (Article 9), it refers exclusively to those that are to be used to identify individuals (1:N), but not those that are processed for the purpose of verification or authentication (1:1). In other words, although all are biometric data, only the former require special protection.
Verification or authentication is the process of checking that a person is who they say they are: their face or voice (usually captured by taking a photograph, video or audio recording in situ) is compared with that of the person they claim to be (for example, through the image extracted from the ID card, passport or other document they show, such as a biometric QR code). The data of the person claiming an identity is not compared with other data stored in a database; authentication consists of verifying that one person (1) is the true holder of one document that they present (1:1). This alternative allows the person to retain full control over their personal data.
On the other hand, identification (1:N), which also aims at verifying the identity of the person, is carried out by matching the data obtained at the time (in our case, biometric vector of the photograph, voice record, etc.) (the “1”) against other data that are already registered in a database (the “N”). In these cases, the legislator has considered this type of data to be ‘sensitive data’, so that Article 9 of the GDPR applies.
When we talk about identification, another question must also be analysed: how is the database against which the comparison is made (the ‘N’) built and used? This is relevant because it is not the same whether the set of identities against which comparisons are made belongs to persons who have been previously informed of this fact (or have even given their consent, depending on the applicable legitimate basis), or to persons who are unaware that their data are being processed for this purpose.
Thus, when assessing the associated risk, it is essential to ensure that identification systems do not lead, for example, to massive and indiscriminate surveillance. To this end, compliance must be guaranteed by implementing specific security measures and ensuring compliance with obligations such as adequately informing users or providing a valid legitimising basis for the processing.
This is, in fact, the line taken by the European Commission in its proposal for the regulation of Artificial Intelligence, which, for example, considers high-risk only those remote biometric identification systems in which there is no prior registration, since in those cases the user is not adequately informed and other basic requirements of the GDPR are not met.
The classification of data as sensitive does not mean that it ceases to be personal data, nor that it cannot be processed: it is personal data considered particularly important for its owner, whether because of its ethical, moral or economic value, or because of the harm that its loss, leakage or misuse may entail; for that reason, its processing requires a higher level of assurance.
In principle, the processing of such data is prohibited by Article 9(1) of the GDPR unless one of the circumstances set out in Article 9(2) is met, but, in practice, the main consequence of classifying data as sensitive is that it must be collected and processed with special care. This does not necessarily imply a greater risk for data controllers, but it does imply greater diligence: they must carry out their processing operations under appropriate security measures, on the basis of the appropriate legitimisation (under both Article 6 and Article 9(2) of the GDPR), duly informing data subjects, appointing a Data Protection Officer and having carried out a prior Impact Assessment.
Short answer: because we believe it is the most convenient and secure way to prove our identity. As advances in security make it possible, we should be able to participate in the digital world with our real identity, without the need to resort to artificial credentials or access, just as we do in the physical world.
Moreover, even taking into account the youth of today’s AI-based biometric technology, the high levels of security and very low error rates already being achieved indicate that these are technologies that are here to stay, as they can outperform even the assessment that humans can carry out.
While we understand that the youth of this technology may raise suspicions (as may the bad press that past mistakes and certain government policies have brought to the biometrics industry), the fact is that progress is steady. For example, compared with the old landmark-based facial recognition systems, which could reach an accuracy of around 96% and which revealed an approximate map of a person’s face if the data was accessed, new systems such as those of Veridas achieve a hit rate of over 99%, and the biometric data generated is completely unusable outside the specific engine in which it was created (remember that irreversibility and non-interoperability are intrinsic characteristics of today’s biometric vectors).
Veridas offers its solutions mainly in SaaS (Software as a Service) mode, providing our customers with credentials so that they can make use of our solutions hosted in the cloud. In these cases, the data flow would be as mentioned above:
In these services we always act as Data Processors, after signing a Data Processing Contract and always complying with the rest of the obligations and precautions provided for in the regulations.
Because of the way the Internet works, when there is a process taking place behind a loading screen, it may seem to us that this process is “floating like a cloud” somewhere in cyberspace. However, saying that something is on the cloud really means that it is on very real servers, located in a very real place.
In the case of Veridas, we typically make use of the Amazon Web Services cloud, where we deploy our systems to serve all our customers. By default, this cloud is located on servers in the European Economic Area (Germany and Ireland). In addition, where Veridas’ customer operates in the Americas and it is mutually agreed, we may also make use of servers located in the United States (Virginia and Oregon).
A biometric engine is the name given to the set of algorithms that transform face or voice data (obtained, in our case, through a photograph, video or voice recording) into what is known as a “biometric vector”: a piece of biometric data (because it has been subjected to “specific technical processing”) made up of an irreversible string of numbers, which does not allow the original data from which it was created to be reconstructed.
So to speak, if we were to assimilate a photograph to coffee beans, a biometric engine would be a coffee grinder, and the biometric vector would be the coffee powder, once it has been treated and separated from those components that are not useful for the purpose for which it is to be used.
As in the case of the grinder, each version of each biometric engine has a special and unique way of working and “grinding” the data, which converts the images of the faces or voices of people into a special and unique code that is impossible to read even by different versions of the same engine. To continue with the simile, it would be as if, in addition, this coffee could only be used in a specific coffee machine; not because of the packaging, but because of the features of the coffee powder itself.
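The “grinder” analogy can be illustrated with a toy model. Here a fixed random projection stands in for the trained network of each engine version, which is purely an assumption for demonstration: it shows how the same sample yields consistent vectors within one engine version, while vectors from different versions are incompatible.

```python
import numpy as np

class BiometricEngine:
    """Toy stand-in for a biometric engine: a fixed random projection
    plays the role of the trained network. Each 'version' has its own
    projection, so vectors produced by one version are meaningless to
    another, mirroring the non-interoperability of real engines."""
    def __init__(self, version_seed: int, dim: int = 128):
        rng = np.random.default_rng(version_seed)
        self.projection = rng.standard_normal((dim, 64))

    def grind(self, sample: np.ndarray) -> np.ndarray:
        """One-way 'grinding': project and normalise. The original
        sample cannot be recovered from the resulting vector."""
        v = self.projection @ sample
        return v / np.linalg.norm(v)
```

Comparing two vectors produced by the same engine version from the same sample gives a perfect match, while the cross-version comparison score collapses towards zero.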
In the training of a biometric engine we can distinguish two phases, and, for that reason, two distinct and separate databases must be created:
The reason why they are different databases is that, if we test the engines with the same data they have been trained with, they would already “know” them, so they could not fail! In other words, it would be like cheating.
It goes without saying that special care is taken when selecting the databases to be used for these purposes, as they must comply with the information and consent requirements of personal data protection regulations. For example, we never train our systems using the data that “passes” through our biometric engine when our customers use our systems in production, i.e. we do not use the data of the citizen who is proving his or her identity through our solution. The systems are trained only in development phases and not once the solution is in production, so we never use final user data.
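The separation of training and testing data described above is commonly enforced by splitting at the level of identities rather than individual samples, so that no person seen during training appears in evaluation. A minimal sketch (the function name and split ratio are our own assumptions):

```python
import random

def split_identities(identity_ids, test_fraction=0.2, seed=42):
    """Split a collection of subject IDs into disjoint training and
    testing sets. Splitting by identity (not by sample) guarantees
    the engine is never tested on people it has 'seen' before."""
    ids = sorted(identity_ids)
    random.Random(seed).shuffle(ids)  # deterministic, reproducible split
    n_test = int(len(ids) * test_fraction)
    test_ids = set(ids[:n_test])
    train_ids = set(ids[n_test:])
    assert train_ids.isdisjoint(test_ids)
    return train_ids, test_ids
```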
This is one of the points where advances in biometrics have been most relevant. First, because much higher levels of reliability have been achieved (from 96% with the landmark technology to over 99% with Artificial Intelligence).
Second, because the old method was designed to “map” a person’s face and then encrypt that data, so, in the event of theft and decryption, it could be misused. Several points on the face were selected (eyebrows, eyes, lips, etc.) and the distances between them were measured (hence the term “bio-metrics”); the vector was then constructed as a summary of these measurements and stored for future comparison. This technology therefore had the problem that, if the data was stolen and decrypted, what was obtained was a “map of coordinates” of the person’s face that could be used to recognise the person or even to fraudulently access certain physical or virtual places.
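The old landmark approach can be illustrated with a toy template: pairwise distances between a handful of facial points. This is a simplified sketch, not any specific historical system; it shows why such a template, once decrypted, directly reveals the geometry of the face.

```python
import numpy as np
from itertools import combinations

def landmark_vector(landmarks: dict[str, tuple[float, float]]) -> np.ndarray:
    """Old-style 'landmark' template: pairwise distances between a few
    facial points. Unlike a modern embedding, these numbers directly
    describe face geometry, so a stolen, decrypted template acts as a
    coordinate map of the person."""
    pts = [np.array(p) for _, p in sorted(landmarks.items())]
    return np.array([float(np.linalg.norm(a - b))
                     for a, b in combinations(pts, 2)])
```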
With AI technology, on the other hand, what is obtained is a “biometric vector”, a string of numbers created by a specific biometric engine; it can be said that this data is created in a “language” that can only be spoken by the biometric engine itself. This has two basic implications:
Last but not least, these vectors are in turn protected by many other security measures. All in all, a very high level of security is achieved, because even if someone were to gain improper access to these vectors, they would not be able to do anything with them.
A “bias”, in biometrics, can be defined as the existence of significant drops in performance when the technology is applied to certain groups of data or people, even though it is designed in principle to be used with them. For example, the system may not recognise a black person as well as a Caucasian person.
That said, the starting point is to bear in mind that biases do not arise because the biometric technology itself generates them: they arise when there have been errors in the designing phase. The main (though not the only) source of such errors is training a biometric engine by AI with already biased databases. Going back to the previous example, if we only give it photographs of Caucasian men, it will most likely have trouble recognising women or people of other ethnicities.
Therefore, the “secret” (and our obligation) is to properly design and programme the entire work plan, one of the essential parts of which is to carefully choose the information (photographs, videos, voice recordings…) to be used to “teach” the Artificial Intelligence.
Finally, there is a post-training phase, the purpose of which is to ensure that the system is free of biases and errors: testing. During this phase, the biometric engine is subjected to various tests using specific databases, which bring to light almost any problems that might exist.
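Bias testing of this kind typically involves comparing error rates across demographic groups on a held-out database. A minimal sketch of such a check (the function name and data layout are illustrative assumptions):

```python
def per_group_error_rates(results):
    """results: iterable of (group, matched) pairs from genuine
    comparison trials on a held-out test database. Returns the false
    non-match rate per demographic group; a large gap between groups
    signals a bias to be fixed before release."""
    totals, misses = {}, {}
    for group, matched in results:
        totals[group] = totals.get(group, 0) + 1
        if not matched:
            misses[group] = misses.get(group, 0) + 1
    return {g: misses.get(g, 0) / totals[g] for g in totals}
```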