Picture this: an AI system that can predict your mood better than you can, or a car that always takes you on the scenic route… whether you want it to or not.

AI is transforming the world around us, and with its extraordinary potential come many questions about safety, fairness and its impact on our lives. The EU’s new AI Act, overseen by the new European AI Office, tackles these questions head-on and provides a first-of-its-kind framework for responsible AI development and use.

This article provides a summary of the EU AI Act as of March 25, 2024, including: 

  • What the EU AI Act is and what it covers
  • High-risk AI development and deployment requirements
  • The consequences of EU AI Act non-compliance
  • The EU AI Act status and timeline
  • Global impact of the EU AI Act

What is the EU AI Act and what does it cover?

The AI Act seeks to balance innovation with safeguarding user rights, categorizing AI systems into four tiers based on the level of potential risk they pose.

The EU AI Act tiers are:

  • Unacceptable risk – Prohibited AI applications that threaten citizens’ rights. These include systems that categorize people based on sensitive traits like race or facial features, untargeted facial recognition technology, emotion recognition in workplaces and schools, general-purpose social scoring, predictive policing and AI designed to exploit vulnerabilities or manipulate human behavior. Essentially, Big Brother tech won’t fly under this legislation.
  • High risk – AI used in critical areas like infrastructure, education, safety, law enforcement and more. Think medical AI diagnosing diseases, AI optimizing power grids, or even AI judges in court cases. These high-stakes systems could be incredibly beneficial to society – but they also need guardrails and meticulous handling to limit the potential of harm from malfunction or misuse.
  • Limited risk – Generally lower-risk AI technologies, with transparency being a key requirement for their use – think chatbots or deepfake tech. The AI Act insists you know when you’re interacting with AI, so no mistaking your new bot buddy for a real friend (yet).
  • Minimal risk – AI in video games or spam filters falls into this category. No need to call in the regulators for these.

High-risk AI development and deployment requirements

The AI Act outlines specific obligations for providers of high-risk AI systems, including:

  • Development – High-risk systems must be designed in accordance with the Act’s requirements, avoiding forbidden practices.
  • EU AI Act conformity assessments and testing – The system undergoes evaluation (potentially involving a notified body) to ensure it meets the Act’s standards. Rigorous testing and usage auditing ensure reliability, security and compliance with the Act’s purposes – because no one wants glitchy AI running the power grid.
  • Registration – Standalone systems are registered in an EU database.
  • Declaration and marking – A declaration of conformity is signed and the system receives a clear CE marking, which allows it to be sold in the EEA.
  • Post-market monitoring – Providers and deployers monitor the system’s performance, report incidents and adjust as needed.
  • Human oversight – There must be human oversight of high-risk AI to minimize risks and underline accountability.
  • Citizen complaint process – People must be able to submit queries or complaints and receive explanations around decisions made by high-risk AI that may affect their rights.

That’s not all. While these requirements apply specifically to high-risk AI, the AI Act also includes provisions that affect both high-risk systems and other categories carrying risk:

  • Explainability – Prevents “black box” decisions and roots out bias in AI logic and processes.
  • Regulatory sandboxes – Allows companies, particularly SMEs and startups, to test and develop innovative AI under supervision.
  • Training content auditing – General-purpose AI (GPAI) technologies must meet transparency requirements and provide logs and summaries of the resources and content used to train them. Some types of GPAI will be subject to further requirements and evaluations, such as incident reporting and risk assessments.

The consequences of EU AI Act non-compliance

The EU AI Act takes violations seriously and the fines are hefty enough to make even tech giants double-check their code.

  • Tier 1: Using prohibited AI systems – The most severe fines (up to €35,000,000 or 7% of annual global turnover) apply to using AI systems deemed as unacceptable risks.
  • Tier 2: Violating high-risk AI obligations – Failing to meet the Act’s requirements for high-risk AI systems could result in fines up to €15,000,000 or 3% of annual global turnover.
  • Tier 3: Misleading regulators – Providing incorrect, incomplete, or misleading information to authorities carries fines up to €7,500,000 or 1% of annual global turnover.

All this said, the AI Act recognizes the challenges facing smaller businesses. It allows for reduced fines for SMEs and start-ups based on the severity of the offense and other factors. Overall, each member state will determine how to implement these provisions within their legal systems.
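For a sense of how the tiered caps above play out in practice, here is a minimal, illustrative sketch. The `fine_cap` helper is hypothetical (not from the Act itself); it assumes the applicable ceiling is whichever is higher of the fixed amount and the turnover percentage, which is how the Act’s penalty provisions are generally described for larger companies:

```python
def fine_cap(tier: int, annual_global_turnover_eur: float) -> float:
    """Illustrative EU AI Act maximum fine for a given violation tier.

    Hypothetical helper: assumes the cap is the higher of the fixed
    amount and the percentage of annual global turnover.
    """
    # (fixed cap in EUR, share of annual global turnover) per tier
    caps = {
        1: (35_000_000, 0.07),  # Tier 1: prohibited AI practices
        2: (15_000_000, 0.03),  # Tier 2: high-risk obligation violations
        3: (7_500_000, 0.01),   # Tier 3: misleading regulators
    }
    fixed, pct = caps[tier]
    return max(fixed, pct * annual_global_turnover_eur)

# A company with €1 billion in annual global turnover facing a Tier 1
# violation: 7% of turnover (€70M) exceeds the €35M fixed cap.
print(fine_cap(1, 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed amount dominates, which is part of why the Act separately allows reduced fines for SMEs and start-ups.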

Who’s liable?

The AI Act holds multiple parties accountable for ensuring ethical AI use. This includes providers, deployers, importers, distributors and notified bodies (responsible for assessments). Even providers of GPAI models can be held liable if their models are later incorporated into harmful systems.

EU AI Act status and timeline: deadlines for transposition and adoption

The EU AI Act is undergoing final preparations before being officially adopted, and it’s expected to be finalized before the end of the current legislative session.

Once published in the Official Journal of the European Union, the Act will officially become law 20 days later. However, different provisions will come into force on a staggered timeline.

Most of the Act’s provisions, including those governing high-risk AI, will be fully applicable 24 months after it enters into force. There are a few exceptions to this timeline:

  • Prohibited AI practices – Bans on unacceptable AI systems will go into effect sooner, only six months after the entry into force date.
  • Codes of practice – Guidelines for voluntary compliance will be applicable nine months after the entry into force date.
  • General-purpose AI – Rules specifically for large-scale AI models, including governance requirements, will apply 12 months after the entry into force date.

Keep in mind that the exact dates depend on when the legislation completes the final stage of review and publication – and that this law is designed to adapt to a field changing by the day.

Why the AI Act has global impact – and what comes next

The AI Act reflects a growing worldwide awareness of the need to balance the extraordinary possibilities of AI with ethical considerations. It addresses real-world concerns and has the potential to influence and shape responsible AI development far beyond Europe.

Here’s why this legislation sets a crucial precedent:

  • Building trust – By outlining clear expectations for safe and fair AI, the Act increases public confidence in AI systems. This assures people that safeguards are in place to protect their rights and experiences.
  • Protecting rights – It creates safeguards against AI-fueled discrimination, biased predictive policing, and ensures AI doesn’t become a tool for suppressing individual freedoms.
  • Innovation with guardrails – The Act encourages the development of safe and ethical AI, fostering a thriving, responsible technology sector.

The EU’s AI Act is a major step towards ensuring that AI benefits society while safeguarding fundamental rights. However, AI technology and regulation evolve rapidly. With its far-reaching impact, we might see a regulatory chain reaction – as other countries look to the EU’s example and rush to draft their own AI rulebooks.

We can expect revisions and further regulations to address the complexities that will undoubtedly arise as humans and AI continue to coexist. Hopefully, with proactive legislation like the EU AI Act, we’ll avoid needing that metaphorical off-switch for the entire internet someday. Watch this space.

For the latest regulatory and legislative changes, don’t forget to sign up for updates from the NAVEX Risk & Compliance Matters blog.