How to Stay Compliant with the EU AI Act

Artificial intelligence is transforming industries at an unprecedented pace, offering innovative solutions across healthcare, finance, education, and beyond. However, these advancements also bring risks, from unintentional bias and privacy breaches to systemic harm.   
The EU AI Act provides the first comprehensive regulatory framework worldwide, ensuring that AI technologies are developed and deployed responsibly. 

This post provides a summary of the EU AI Act, a timeline for its implementation, and specifics on high-risk AI systems (HRAIS), general-purpose AI models, and foundation models.

Why Do We Need Laws on AI?

As AI technologies become increasingly powerful and widespread, they bring both tremendous opportunities and significant risks. Without clear regulations, AI systems could be misused, cause unintentional harm, or operate in ways opaque to users and regulators.   

The EU AI Act establishes a clear, risk-based framework to protect individuals, businesses, and society from threats such as bias, misinformation, and privacy breaches. It also builds trust and accountability, ensuring AI developers and deployers follow ethical and legal standards. 

Who Is Affected by the EU AI Act?

The EU AI Act applies to all actors in the AI ecosystem: providers, deployers, importers, distributors, and product manufacturers.   
In practice, anyone developing, using, importing, distributing, or manufacturing AI systems in the EU falls within its scope. 

Importantly, the EU AI Act also has extraterritorial reach: providers and deployers located outside the EU must comply if their AI systems are intended for use within the EU.

The regulation defines “AI systems” broadly, covering machine learning, statistical approaches, and symbolic reasoning. This ensures that both advanced generative models and traditional rule-based AI are included. 

Timeline for Implementation

The EU AI Act entered into force on 1 August 2024 and becomes fully applicable on 2 August 2026. A phased rollout applies:

  • 2 Feb. 2025: Prohibitions on certain AI practices, plus obligations related to AI literacy, come into force.
  • 2 Aug. 2025: Governance framework and obligations for general-purpose AI (GPAI) models apply.
  • 2 Aug. 2027: Extended transition deadline for providers of high-risk AI systems embedded into regulated products.

Understanding the Risk-Based Framework

The EU AI Act categorizes AI systems into four levels of risk: 

  • Unacceptable risk: certain practices are strictly prohibited, such as manipulative or deceptive AI, social scoring, untargeted facial recognition scraping, or emotion recognition in workplaces and schools.   
  • High risk: systems with serious implications for safety or fundamental rights, such as AI in medical devices, recruitment, education, law enforcement, border control, or judicial decision support. These are subject to the strictest obligations.   
  • Limited risk: systems like chatbots or generative AI tools must meet transparency requirements, ensuring users know when they interact with AI.   
  • Minimal or no risk: most everyday applications, such as video games or spam filters; these face no additional regulatory requirements.
Figure: Risk-based framework under the EU AI Act
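
To make the four tiers concrete, here is a minimal Python sketch of the classification idea. The tier names mirror the Act; the example use cases and the classify_use_case helper are illustrative assumptions only, since real classification requires legal analysis of the Act's prohibited-practices list and Annex III, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations apply"
    LIMITED = "transparency obligations apply"
    MINIMAL = "no additional obligations"

# Illustrative mapping only: a real assessment requires legal analysis,
# not a lookup table.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "emotion recognition at work": RiskTier.UNACCEPTABLE,
    "cv screening for recruitment": RiskTier.HIGH,
    "medical device diagnostics": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify_use_case(use_case: str) -> RiskTier:
    """Look up an example use case; unknown cases default to MINIMAL here,
    which a real legal assessment must never assume."""
    return EXAMPLE_USE_CASES.get(use_case.lower(), RiskTier.MINIMAL)

for case in ("social scoring", "cv screening for recruitment", "spam filter"):
    print(f"{case}: {classify_use_case(case).name}")
```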

Compliance Requirements for High-Risk AI Systems

Providers of high-risk AI systems (HRAIS) must implement safeguards throughout the system’s lifecycle. These include: 

  • Establishing a risk management system to identify and mitigate risks.
  • Ensuring strong data governance and high-quality training datasets.   
  • Preparing detailed documentation and logging mechanisms (a minimal logging sketch follows this list).
  • Providing transparency about the system’s capabilities and limitations.   
  • Guaranteeing human oversight, so operators can supervise and intervene if needed.   
  • Ensuring accuracy, robustness, and cybersecurity against errors and threats.   
  • Setting up a quality management system for internal compliance.   
  • Conducting post-market monitoring and reporting serious incidents within 15 days. 
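
On the logging duty mentioned above: the Act requires high-risk systems to automatically record events relevant for tracing results and identifying risks, but it does not mandate a format. The sketch below is a minimal, assumed implementation using append-only JSON lines; the schema, field names, and file location are illustrative.

```python
import json
import time
from pathlib import Path

LOG_FILE = Path("hrais_event_log.jsonl")  # assumed location, for illustration

def log_event(system_id: str, event_type: str, details: dict) -> None:
    """Append one traceability record as a JSON line.

    The schema here is an assumption for illustration; the Act specifies
    what logs must make traceable, not a concrete format.
    """
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,
        "event_type": event_type,
        "details": details,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a prediction together with the human-oversight outcome.
log_event(
    system_id="cv-screening-v2",  # hypothetical system identifier
    event_type="prediction",
    details={"input_ref": "candidate-1042", "score": 0.87, "human_override": False},
)
```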


Before deployment, providers must complete a conformity assessment, affix CE marking, and register their system in the EU’s central database. 

Deployers (users) also face obligations: in some cases, they must conduct a fundamental rights impact assessment (FRIA), follow the provider’s instructions, monitor operation, and keep system logs. 

Bringing a High-Risk AI System to Market


The compliance process follows four main steps: 
 

  1. Develop the system – design with risk and compliance in mind.   
  2. Conformity assessment – verify compliance, sometimes with the involvement of a notified body.   
  3. Registration – record the system in the EU database.   
  4. Declaration and CE marking – sign a declaration of conformity before market launch. 

 

Any substantial modification to the AI system requires reassessment.
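
A provider might track these steps with a simple internal checklist. The sketch below models the four steps and the reassessment rule; the ComplianceTracker class and its behavior are illustrative assumptions, not an official tool or procedure.

```python
from dataclasses import dataclass, field

# Step names follow the four-step process described above.
STEPS = [
    "develop the system",
    "conformity assessment",
    "registration",
    "declaration and CE marking",
]

@dataclass
class ComplianceTracker:
    """Hypothetical tracker for the market-entry steps."""
    completed: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        if step not in STEPS:
            raise ValueError(f"unknown step: {step}")
        self.completed.add(step)

    def ready_for_market(self) -> bool:
        return self.completed == set(STEPS)

    def substantial_modification(self) -> None:
        # A substantial modification triggers reassessment: conformity must
        # be demonstrated again before the system stays on the market.
        self.completed.discard("conformity assessment")
        self.completed.discard("declaration and CE marking")

tracker = ComplianceTracker()
for step in STEPS:
    tracker.complete(step)
print(tracker.ready_for_market())   # True
tracker.substantial_modification()
print(tracker.ready_for_market())   # False: reassessment needed
```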

General-Purpose AI and Foundation Models

General-purpose AI (GPAI) models like ChatGPT, Gemini, or DALL·E face obligations mainly around transparency: preparing technical documentation, ensuring compliance with copyright law, and providing summaries of training data. 
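
To give a sense of what these transparency duties involve in practice, here is a minimal sketch of a machine-readable documentation record. The field names and values are assumptions loosely modeled on the obligations listed above (technical documentation, copyright policy, training-data summary), not an official EU template.

```python
import json

# Illustrative GPAI documentation record; not an official template.
gpai_documentation = {
    "model_name": "example-gpai-model",   # hypothetical model
    "provider": "Example AI SARL",        # hypothetical provider
    "technical_documentation": {
        "architecture": "decoder-only transformer",
        "intended_tasks": ["text generation", "summarization"],
    },
    "copyright_policy": (
        "Opt-outs under the EU text-and-data-mining exception are "
        "honoured during data collection."
    ),
    "training_data_summary": {
        "sources": ["licensed corpora", "public web crawl"],
        "languages": ["en", "fr", "de"],
        "collection_cutoff": "2025-01",
    },
}

print(json.dumps(gpai_documentation, indent=2))
```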

The Act also introduces rules for foundation models trained on large datasets. Some of these are designated as systemic risk foundation models, given their potential impact across multiple sectors. They face enhanced obligations, including rigorous model testing (such as red-teaming), systemic risk assessments, detailed regulatory reporting, strong cybersecurity, and even monitoring of energy efficiency. 

Penalties for Non-Compliance

Non-compliance with the EU AI Act carries heavy sanctions. For the most serious violations (prohibited practices), fines can reach €35 million or 7% of global annual turnover, whichever is higher, with lower tiers for other breaches. This makes compliance not just a legal necessity but a business-critical priority.
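
Because the two ceilings combine as "whichever is higher", the turnover-based cap dominates for large companies. A quick worked sketch, using hypothetical turnover figures:

```python
def max_fine_prohibited_practice(global_turnover_eur: float) -> float:
    """Ceiling for the most serious violations: EUR 35 million or 7% of
    worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# Hypothetical turnover figures:
print(f"{max_fine_prohibited_practice(100_000_000):,.0f}")    # 35,000,000 (flat cap is higher)
print(f"{max_fine_prohibited_practice(2_000_000_000):,.0f}")  # 140,000,000 (7% of 2 bn)
```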

Beyond the AI Act: Interplay with Other EU Laws

The EU AI Act does not exist in isolation. It interacts with other key frameworks, including the GDPR (data protection), the Cyber Resilience Act (security), and the Product Safety & Machinery Regulation (for AI embedded in physical goods).  

You can read more about how to stay compliant with the GDPR here.

A successful compliance strategy must therefore be holistic, covering all these areas. 

The EU AI Act is a landmark regulation: the first of its kind to establish a risk-based, trust-driven approach to AI. It balances innovation with protection, ensuring that harmful practices are banned, high-risk applications are tightly regulated, and transparency is guaranteed for general-purpose AI. 

For businesses, compliance is not optional. Those that act early will not only avoid penalties but also position themselves as trusted leaders in responsible AI. 
