Paul Forrest of Ecaveo presents a primer on the new EU AI Act

EU AI Act Primer

AI Privacy Compliance

A few days ago, I discussed the importance of data privacy in the context of the new EU AI Act. Today, I have pulled together a primer on why the Act matters and what you need to do if you are likely to be affected by it.

Introduction

The EU AI Act (AIA) is a landmark regulation shaping the future of artificial intelligence in the EU and beyond. It imposes rigorous requirements on organisations developing, distributing, or using AI systems, with hefty fines for non-compliance. This article breaks down the key aspects of the Act, focusing on the obligations placed on AI providers and operators, the timeline for enforcement, and the risk classifications applied to AI systems.

Understanding the AIA

The AIA was published in the Official Journal of the EU on 12 July 2024 and took effect on 1 August 2024. As a critical regulatory framework, it has profound implications for organisations both within and outside the EU that engage with AI technology. The primary focus is on safeguarding fundamental rights and ensuring the responsible use of AI by regulating its development, distribution, and deployment based on associated risks.

Organisations found in breach of the AIA face steep penalties, which can amount to up to €35 million or 7% of global annual turnover, whichever is higher.

How the AIA Applies

The application of the AIA is grounded in a risk-based approach, whereby AI systems are classified into categories based on the potential harm they may cause to individuals or society. This structured framework provides clarity on the varying obligations faced by AI operators depending on the type of technology used and its intended purpose.

Prohibited AI Systems

Certain AI practices are banned outright under the AIA because they pose an unacceptable risk to fundamental rights. Examples include the untargeted scraping of facial images to build facial recognition databases, as well as systems that exploit human vulnerabilities, manipulate behaviour, or threaten public safety and rights. Emotion recognition in workplaces and educational settings, along with social scoring systems, is also prohibited.

High-Risk AI Systems (HRAIS)

AI systems that are deemed high-risk face the strictest regulations. These include systems used in critical infrastructure, public services, education, recruitment, and judicial processes. Systems in banking and insurance, and those influencing democratic processes, also fall under this category. The AIA requires high-risk systems to comply with a range of obligations, including a robust risk management framework, data governance, transparency, and human oversight to ensure safety, accuracy, and accountability.

General Purpose AI (GPAI)

General-purpose AI models, which include large language models and foundation models, are subject to tailored transparency requirements. These systems, while not automatically high-risk, have the potential to affect multiple domains; developers must therefore ensure clear documentation, regulatory reporting, and compliance with EU copyright law. Models that pose systemic risk, owing to the scale and scope of their capabilities, face additional oversight and obligations, including reporting on energy efficiency and stringent adversarial testing.

Low-Risk AI Systems

The AIA doesn’t neglect lower-risk systems either. While these systems are subject to fewer obligations, transparency remains a key requirement, particularly when they interact with individuals. Providers of low-risk AI must ensure that users are aware they are interacting with an AI system and offer basic information about how the system operates. This ensures that even less risky AI systems maintain a degree of accountability.
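To make the tiered structure concrete, here is a minimal Python sketch of how an organisation might run a first-pass triage of its AI inventory against these risk tiers. Everything in it (the RiskTier enum, the keyword lists, classify_use_case) is a hypothetical illustration, not official tooling; an actual classification requires legal analysis against Article 5 and Annex III of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned practices (e.g. social scoring)
    HIGH_RISK = "high-risk"    # full HRAIS obligations apply
    LIMITED = "limited"        # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"        # no specific obligations

# Hypothetical keyword triage; a real classification needs legal review.
PROHIBITED_USES = {"social scoring", "facial image scraping"}
HIGH_RISK_USES = {"recruitment screening", "credit scoring", "exam proctoring"}

def classify_use_case(use_case: str, interacts_with_people: bool = False) -> RiskTier:
    """First-pass triage of an AI use case against the AIA's risk tiers."""
    normalised = use_case.lower().strip()
    if normalised in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if normalised in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    if interacts_with_people:
        return RiskTier.LIMITED  # the user must be told it is an AI system
    return RiskTier.MINIMAL

print(classify_use_case("Recruitment Screening"))   # RiskTier.HIGH_RISK
print(classify_use_case("product chatbot", True))   # RiskTier.LIMITED
```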

Timeline and Enforcement

The AIA is designed to roll out in phases. Following its entry into force on 1 August 2024, most provisions will only be enforced after a two-year implementation period, from 2 August 2026. However, certain key elements of the Act, including the prohibitions on specific AI practices and the AI literacy requirements, apply earlier, from 2 February 2025. The requirements for general-purpose AI follow shortly after, with enforcement starting on 2 August 2025.

This staggered implementation allows time for the development of supporting legislation, guidelines, and standards, which will provide further clarity to organisations regarding compliance with the AIA.

Obligations for High-Risk AI Systems (HRAIS)

High-risk AI systems are subject to extensive obligations, most of which fall on providers: the parties who develop an AI system, or have one developed, and place it on the market under their own name or trademark. Other operators, including importers, distributors, and deployers, are also bound by certain duties, such as ensuring proper usage of the AI system and monitoring its performance post-deployment.

Key obligations for HRAIS providers include:

Risk Management System

Providers must establish risk management processes covering the entire AI system lifecycle to identify, assess, and mitigate risks effectively.

Data Governance and Quality

Ensuring that data used for training, validation, and testing adheres to strict governance standards is crucial. This includes verifying that data is relevant, representative, and as free from errors and bias as possible.

Technical Documentation

Providers must compile comprehensive documentation detailing the AI system’s design, function, and potential risks, ensuring it is clear and accessible to relevant stakeholders.

Record-Keeping and Transparency

The system must automatically log critical data throughout its lifecycle, and providers must store this information securely for defined periods.
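The Act mandates automatic logging but does not prescribe a format, so as an illustration only, a provider might capture each inference as a timestamped, structured record, as in this hypothetical Python sketch (the field names and logger are assumptions, not a schema defined by the AIA):

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger for a high-risk AI system.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("hrais.audit")

def log_inference(model_version: str, input_ref: str, output: str, confidence: float) -> None:
    """Record one inference event as a timestamped, structured log entry."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,  # a reference, not raw personal data
        "output": output,
        "confidence": confidence,
    }
    audit_log.info(json.dumps(record))

log_inference("credit-model-2.3", "application-8841", "declined", 0.87)
```

In practice, such records would feed the retention, security, and post-market monitoring duties described elsewhere in this article.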

Human Oversight

High-risk systems must be overseen by qualified humans who possess the required AI literacy and can intervene when necessary.

Accuracy, Robustness, and Cybersecurity

Systems must maintain a high level of accuracy and be resilient to cyberattacks, errors, or inconsistencies.

Post-Market Monitoring

Providers must continue to monitor and evaluate AI systems after deployment, collecting feedback and data to assess performance over time.

Implications for General Purpose AI (GPAI)

The rise of general-purpose AI models, which can perform a wide range of tasks across many sectors, has led to additional scrutiny under the AIA. These models, including generative AI and foundation models, are subject to enhanced transparency obligations, with stricter duties for those posing systemic risks.

Developers of GPAI must ensure that their models are compliant with the following:

Technical Documentation and Reporting

Comprehensive technical documentation is required, detailing the model’s training process, data sources, and system capabilities. Developers must also report any serious incidents involving their models to the appropriate authorities.

Systemic Risk Assessment

If a model poses systemic risks, additional evaluations are required, including adversarial testing and performance assessments.

Energy Efficiency and Cybersecurity

Developers of systemic-risk GPAI must ensure their models meet strict energy efficiency and cybersecurity standards, demonstrating a commitment to sustainability and security.

Financial Penalties

The penalties for non-compliance with the AIA are significant. Organisations found in breach of the regulation may face fines ranging from €7.5 million or 1.5% of global annual turnover up to €35 million or 7% of global annual turnover, whichever is higher, depending on the severity of the infraction and the size of the organisation. These penalties aim to deter violations and encourage adherence to the stringent requirements set out by the AIA.
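Because each cap is the fixed amount or the percentage of turnover, whichever is higher, exposure grows with company size. A quick sketch of the arithmetic, using only the two penalty tiers cited above (the tier labels are illustrative):

```python
def max_fine(global_turnover_eur: float, severity: str) -> float:
    """Upper bound of an AIA fine: a fixed amount or a percentage of
    global annual turnover, whichever is higher (tiers as cited above)."""
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),
        "lesser_breaches": (7_500_000, 0.015),
    }
    fixed, pct = tiers[severity]
    return max(fixed, pct * global_turnover_eur)

# A firm with EUR 2bn global turnover: 7% is EUR 140m, exceeding EUR 35m.
print(f"EUR {max_fine(2_000_000_000, 'prohibited_practices'):,.0f}")
```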

Conclusion

The EU AI Act is a transformative piece of legislation that will shape the development and use of AI for years to come. Its risk-based framework ensures that AI systems are scrutinised based on their potential harm, with stringent obligations for high-risk systems and appropriate transparency requirements for lower-risk models. As the AI landscape continues to evolve, the AIA offers a robust regulatory framework to guide the ethical and responsible deployment of AI technologies. Organisations developing or using AI should begin preparing for compliance now, as the timelines for implementation are fast approaching.

Want to study the legislation? Click here

Feel free to reach out if you need help with the EU AI Act in your organisation.