The European Union's recent agreement on the Artificial Intelligence (AI) Act brings it a step closer to final approval and marks a milestone in global technology regulation. This legislation, expected to be finalized in the first quarter of 2024, establishes a comprehensive regulatory framework for the use of AI systems in EU member states, prioritizing safety, transparency, and accountability.
The most significant aspect of the regulation is its focus on classifying AI systems according to their risk level. This strategy allows for more precise and effective regulation with robust safety standards and fundamental rights protection.
These standards will be crucial in preventing the use of AI in applications that could be repressive, manipulative, intrusive, or discriminatory.
The EU’s Artificial Intelligence Act sets a precedent for how emerging technologies can be effectively and ethically regulated without sacrificing innovation. This law is expected to have a significant impact not only in Europe but also as a global regulatory model, promoting the development and use of AI that is safe, ethical, and respectful of fundamental rights and freedoms.
Categorization of Risk Levels
- Unacceptable Risk: This category includes AI systems that directly threaten public safety, fundamental rights, or privacy. Their use is strictly prohibited except in very exceptional situations.
- High Risk: This category includes AI systems that could have a considerable impact on individuals’ fundamental rights due to their relation to services and processes affecting health, safety, employment, etc. Their use is allowed, provided that certain additional safeguards are met, and their operation is monitored.
- Low or Minimal Risk: Finally, systems that do not fit the above categories are considered low risk. It is important to note that this classification is implicit rather than explicit. In this category, citizens' ability to decide on the use of these technologies in a free, informed, voluntary, and unequivocal manner is paramount. Some systems with specific transparency obligations fall into this low-risk category (for example, generative AI systems and chatbots).
Biometric Recognition Systems: A Focus on Consent and Risk
This AI Regulation introduces a significant distinction in the use of biometric technologies, making user awareness and a voluntary decision about their use a key factor in classification.
Applications involving active and conscious user participation, demonstrating their knowledge and approval, are considered low risk. This approach underscores the importance of consent in the digital age.
On the other hand, Remote Biometric Identification (RBI), which implies passive user involvement and is often associated with mass surveillance, is classified as high risk. Moreover, the real-time use of RBI systems in public spaces for law enforcement purposes is prohibited except in very exceptional cases, in which additional security measures must be applied.
Among the low-risk applications are biometric verification for account opening and voluntary physical access control to buildings and facilities.
Non-Remote Biometric Identification: Classified as Low Risk
A clear example of this low-risk classification is non-remote Facial Authentication, where the active and conscious participation of the citizen is essential. This classification reflects a commitment to protecting privacy and security in the use of biometric technologies. The defining features of these biometric technologies that justify their low-risk classification include:
- Guaranteed Privacy: The biometric vectors used do not directly reveal the individual’s identity.
- Irreversibility: Recovering the original facial image from the biometric vector is impossible, ensuring personal data protection.
- Exclusivity and Non-Interoperability: Each system uses unique biometric vectors, preventing cross-use between different manufacturers and applications.
- Advanced Encryption: Biometric data is protected through robust encryption, ensuring its security.
- Revocability: Biometric vectors can be updated or canceled, providing flexibility and security in case of data compromise.
- Equity and Non-Bias: Commitment to equitable accuracy for all groups, promoting fair and ethical treatment.
- No Inference of Personal Traits: These biometric systems are not designed to infer personal characteristics, such as health status, mood, gender, or other user information.
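Two of the properties above, irreversibility and revocability, can be illustrated with a toy sketch of a protected biometric template. The idea (a simplified form of so-called cancelable biometrics) is to derive the stored template from the raw feature vector through a keyed one-way function: the original face data cannot be recovered from the template, and issuing a new key revokes the old template. The function names, the feature values, and the key scheme below are illustrative assumptions, not the mechanism of any specific vendor or of the AI Act itself:

```python
import hmac
import hashlib

def quantize(features, step=0.25):
    # Coarse quantization: small capture-to-capture noise in the
    # embedding maps to the same byte code. (Toy scheme, illustrative only.)
    return bytes(int(f / step) & 0xFF for f in features)

def protected_template(features, revocable_key):
    # HMAC-SHA256 is one-way: the original feature vector cannot be
    # recovered from the stored template (irreversibility), and the
    # template is bound to a user-specific key (non-interoperability).
    return hmac.new(revocable_key, quantize(features), hashlib.sha256).hexdigest()

# Hypothetical enrollment: a toy embedding, not a real model output
face_features = [0.91, -0.33, 0.57, 0.12]
key_v1 = b"user-123-key-v1"

t1 = protected_template(face_features, key_v1)
t2 = protected_template(face_features, key_v1)
assert t1 == t2  # deterministic match for the same user and key

# Revocability: issuing a new key yields a new template,
# so a compromised template can simply be canceled.
key_v2 = b"user-123-key-v2"
assert protected_template(face_features, key_v2) != t1
```

Production biometric systems use far more sophisticated template-protection schemes with error-tolerant matching, but the sketch captures why such templates can be both unrecoverable and cancelable at the same time.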
Benefits of Artificial Intelligence in Businesses
The proper use of Artificial Intelligence (AI) in businesses provides not only security but also efficiency and precision in responding to customer demands. Companies can ensure safer and more efficient operations with wholly owned solutions that are continuously updated and trained.
This constant evolution in AI is reflected in technologies backed by high-quality certifications, such as those from NIST or iBeta, ensuring the reliability and efficiency of their applications.
Another crucial aspect is AI’s ability to quickly and securely verify identities, saving management costs and enhancing identity protection, a key factor in today’s business environment.
Despite implementation challenges, such as the need for high-quality data and for ensuring transparency and accountability, AI's benefits are significant. A KPMG study indicates that over 50% of businesses consider AI essential for growth in the coming years.
Enforcement of the Artificial Intelligence Act and Next Steps
Following the conclusion of the political trilogues, technical trilogues will soon finalize the AI Regulation's text, which is expected to be approved by the European Parliament and the Council in early 2024. Application will be phased in, ranging from 6 months (end of 2024) for prohibited systems to 24 months (2026) for the regulation as a whole.
The EU plans to create an AI office to oversee the implementation of the Regulation.
The Future of Artificial Intelligence Laws: A Global Perspective
The European Union’s recent AI legislation sets a precedent within the EU and as a model globally, guiding other nations in forming their AI policies with a balance between innovation and ethical regulation.
Spain’s ‘Regulatory Sandbox’ exemplifies how EU member states are implementing the AI Regulation framework at the national level, creating a secure environment for AI development.
Concurrently, the UK’s ‘Online Safety Bill’ aims to enhance digital user protection. These regulatory efforts, including the EU’s legislation, indicate a global trend toward regulating emerging technologies, emphasizing the protection of individual rights and safety in the digital age.
The EU’s AI Act marks a significant international regulatory advancement, setting a standard for ethical and safe technology adoption and influencing future legislation worldwide.
This approach underscores the EU’s commitment to protecting its citizens while fostering responsible innovation, leading the way towards a more ethical and secure future in AI.