Shield Glossary

EU AI Act

The EU AI Act introduces a structured framework for regulating artificial intelligence across the European Union. At its core is a risk-based classification system that determines how heavily different AI applications are regulated. For businesses, developers, and deployers building or using AI systems that affect people in meaningful ways, understanding these classifications is not optional. It is a compliance imperative.

Introduction to High-Risk AI Systems

The EU AI Act is the world’s first comprehensive legal framework specifically designed to govern artificial intelligence. Its central premise is straightforward: the greater the potential harm an AI system can cause, the stricter the rules that apply to it.

High-risk AI systems are those that the EU has determined pose a significant threat to health, safety, or fundamental rights. These systems are not banned outright, but they are subject to a demanding set of obligations before they can be placed on the EU market or put into service.

The rationale for this approach is to balance innovation with protection. A blanket prohibition on powerful AI tools would stifle the economic and social benefits they can deliver. But leaving consequential systems entirely unregulated risks serious harm to individuals and to society. The high-risk category is the EU’s attempt to draw that line deliberately and consistently.

The Act applies to companies based in the EU, as well as any provider or deployer whose AI systems produce outputs used within the EU, giving it a broad extraterritorial reach comparable to the GDPR.

Classification of AI Systems

The EU AI Act organizes AI systems into four broad risk tiers, each carrying different regulatory obligations.

High-Risk AI Systems

A system is classified as high-risk if it falls into one of two categories defined in the Act.

The first category covers AI systems that are safety components of products already regulated under existing EU law, such as machinery, medical devices, or vehicles. If an AI system plays a critical safety role in one of these regulated products, it inherits a high-risk classification.

The second category is a specific list of standalone AI applications across eight sectors set out in Annex III of the Act:

  • Biometric identification and categorization of natural persons
  • Critical infrastructure management (energy grids, water, transport)
  • Education and vocational training — such as systems that determine access to educational institutions or assess students
  • Employment and HR — including CV screening tools and systems used in hiring, promotion, or termination decisions
  • Access to essential services — such as credit scoring, insurance risk assessment, and social benefits eligibility
  • Law enforcement — including systems used to assess the risk of an individual committing a crime
  • Migration and border control — such as tools used to assess visa or asylum applications
  • Administration of justice and democratic processes — including AI used in judicial decision-making

This list is subject to revision by the European Commission as AI technology and its applications evolve.
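
To make this two-path test concrete, here is a minimal Python sketch of the classification logic. The sector labels and the is_high_risk helper are illustrative simplifications invented for this example; actual classification turns on the Act's detailed definitions.

  # Illustrative simplification of the Act's two-path high-risk test.
  # The sector labels and this helper are hypothetical; the Act's
  # detailed definitions, not this sketch, govern real classification.

  ANNEX_III_SECTORS = {
      "biometrics", "critical_infrastructure", "education", "employment",
      "essential_services", "law_enforcement", "migration_border_control",
      "justice_democracy",
  }

  def is_high_risk(safety_component_of_regulated_product: bool,
                   annex_iii_sector: str | None = None) -> bool:
      # Path 1: safety component of a product under existing EU law.
      if safety_component_of_regulated_product:
          return True
      # Path 2: standalone use case in one of the Annex III sectors.
      return annex_iii_sector in ANNEX_III_SECTORS

  # A CV-screening tool used for hiring falls under the employment sector.
  print(is_high_risk(False, "employment"))  # True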

Limited Risk AI Systems

Limited risk systems face lighter-touch obligations focused primarily on transparency. The most prominent examples are chatbots and AI-generated content: users must be informed that they are interacting with an AI rather than with a human. Similarly, systems that generate deepfakes, such as synthetic images, audio, or video, must disclose that the content has been artificially created.

The goal is not to restrict these systems but to ensure users can make informed decisions about how they engage with AI-generated outputs.

Minimal Risk AI Systems

The vast majority of AI applications fall into the minimal risk category. This includes spam filters, AI-powered recommendation engines, inventory management tools, and similar systems. These face no mandatory obligations under the Act, though the EU encourages voluntary codes of conduct.

Obligations for High-Risk AI Systems

Developers and deployers of high-risk AI systems must meet a comprehensive set of requirements before deployment and on an ongoing basis. Non-compliance can result in fines of up to €15 million or 3% of global annual turnover, whichever is higher.
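
As a quick arithmetic illustration of the "whichever is higher" cap, the Python snippet below compares the fixed ceiling against the turnover-based one; the turnover figures are hypothetical.

  # Fine ceiling for high-risk obligations: EUR 15 million or
  # 3% of global annual turnover, whichever is higher.
  def fine_ceiling(turnover_eur: float) -> float:
      return max(15_000_000, 0.03 * turnover_eur)

  # Hypothetical company with EUR 2 billion in global annual turnover:
  print(fine_ceiling(2_000_000_000))  # 60000000.0 -> the 3% prong dominates
  # Hypothetical company with EUR 100 million in turnover:
  print(fine_ceiling(100_000_000))    # 15000000 -> the fixed amount dominates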

Risk Management

Providers must establish and maintain a risk management system throughout the entire lifecycle of a high-risk AI system: identifying foreseeable risks, estimating and evaluating them, and adopting measures to address them. This is a continuous process, updated as the system is used and new information emerges.

Data Quality and Governance

Training, validation, and testing datasets must meet strict quality standards. They must be relevant, sufficiently representative of the real-world populations and scenarios the system will encounter, and free from errors and bias as far as reasonably possible. This obligation directly targets one of the most common sources of harm in AI systems: biased or unrepresentative training data that leads to discriminatory outcomes.
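
The Act does not prescribe any particular test, but a first-pass representativeness check can be as simple as comparing subgroup shares in the training data against a reference population. The Python sketch below is a crude illustration; the group labels, tolerance, and data are hypothetical.

  from collections import Counter

  def representation_gaps(samples: list[str],
                          reference_shares: dict[str, float],
                          tolerance: float = 0.05) -> dict[str, float]:
      # Flag subgroups whose share of the data deviates from the
      # reference population share by more than the tolerance.
      counts = Counter(samples)
      total = len(samples)
      return {
          group: counts.get(group, 0) / total - expected
          for group, expected in reference_shares.items()
          if abs(counts.get(group, 0) / total - expected) > tolerance
      }

  # Hypothetical dataset: group A overrepresented, group B underrepresented.
  data = ["A"] * 70 + ["B"] * 10 + ["C"] * 20
  print(representation_gaps(data, {"A": 0.5, "B": 0.3, "C": 0.2}))
  # {'A': 0.19999999999999998, 'B': -0.2}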

Technical Documentation and Transparency

Before a high-risk system is placed on the market, providers must prepare detailed technical documentation demonstrating that the system meets the Act’s requirements. This documentation must be kept up to date and made available to regulators on request.

Additionally, high-risk systems must be designed so that their operation is sufficiently transparent to enable deployers to interpret outputs and use them appropriately. In practice, this means providing clear instructions for use, including the system’s intended purpose, its level of accuracy, and any known limitations or foreseeable risks.

Human Oversight

High-risk AI systems must be designed to allow effective human oversight. This includes the ability for a human operator to monitor the system’s operation, intervene when necessary, and override or shut down the system. The Act recognizes that AI systems can fail in unexpected ways, and that humans must remain capable of catching and correcting errors.
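
As a minimal sketch of what "override or shut down" can look like in software, the Python class below wraps an automated decision function with a human override path and an operator kill switch. The class and method names are hypothetical, not terminology from the Act.

  class OverridableSystem:
      # Wraps an automated decision function so that a human can
      # override individual outputs or halt the system entirely.
      def __init__(self, model):
          self._model = model
          self._halted = False

      def halt(self) -> None:
          # Operator kill switch: stop producing automated outputs.
          self._halted = True

      def decide(self, case, human_override=None):
          if human_override is not None:
              return human_override  # the human decision always wins
          if self._halted:
              raise RuntimeError("System halted pending human review")
          return self._model(case)

  system = OverridableSystem(lambda case: "approve")
  print(system.decide({"id": 1}))                         # automated: approve
  print(system.decide({"id": 2}, human_override="deny"))  # human overrides
  system.halt()  # decide() now raises until a human restores operation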

Accuracy, Robustness, and Cybersecurity

Systems must achieve appropriate levels of accuracy for their intended purpose and must be resilient against attempts to manipulate or circumvent their behavior, including adversarial attacks.
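
One simple robustness probe, shown below in Python, checks that small random perturbations of an input do not flip a classifier's output. This is only an illustration of the idea; the Act does not mandate this specific test, and the toy classifier and epsilon are hypothetical.

  import random

  def is_locally_robust(predict, x: list[float], eps: float = 0.01,
                        trials: int = 100) -> bool:
      # Return False if any small perturbation changes the prediction.
      baseline = predict(x)
      for _ in range(trials):
          perturbed = [v + random.uniform(-eps, eps) for v in x]
          if predict(perturbed) != baseline:
              return False
      return True

  # Toy threshold classifier over the sum of the input features.
  predict = lambda x: int(sum(x) > 1.0)
  print(is_locally_robust(predict, [0.2, 0.3]))    # far from boundary: True
  print(is_locally_robust(predict, [0.5, 0.499]))  # near boundary: almost surely False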

Registration

Providers of high-risk AI systems must register their systems in an EU-wide database before deploying them. This registry is publicly accessible, giving regulators, researchers, and affected individuals visibility into which high-risk systems are operating in the market.

Forbidden AI Systems

The EU AI Act does not merely regulate certain AI applications; it prohibits some outright. These are systems whose risks are considered so fundamental that no business justification can offset them.

Prohibited practices under the Act include:

  • Subliminal manipulation — AI systems that use techniques operating below the threshold of conscious awareness to influence people’s behavior in ways that are harmful.
  • Exploitation of vulnerabilities — Systems that deliberately target the vulnerabilities of specific groups, such as children or people with disabilities, to distort their behavior in harmful ways.
  • Social scoring — AI systems, whether deployed by public authorities or private actors, that evaluate or classify individuals based on social behavior or personal characteristics in ways that lead to detrimental treatment unrelated to the context in which the data was generated. This is a direct response to social credit systems deployed in other jurisdictions.
  • Real-time remote biometric identification in public spaces — The use of live facial recognition or similar systems in publicly accessible spaces for law enforcement purposes is prohibited, with narrow exceptions for targeted searches for missing persons, prevention of imminent terrorist threats, and prosecution of serious crimes.
  • Predictive policing based on profiling — AI systems that assess the likelihood of a person committing a crime based solely on profiling or personality traits, without any objective basis tied to actual behavior.
  • Emotion recognition in workplaces and schools — Systems that infer an individual’s emotional state in employment or educational settings are prohibited due to the significant power imbalances involved, with narrow exceptions for medical or safety purposes.
  • Biometric categorization revealing sensitive attributes — Systems that categorize individuals based on biometric data to infer race, political opinion, religion, or sexual orientation.

Violations of these prohibitions carry the highest fines under the Act: up to €35 million or 7% of global annual turnover.

Understanding Systemic Risk

Beyond individual high-risk classifications, the EU AI Act introduces the concept of systemic risk for the most powerful general-purpose AI (GPAI) models. These are models trained using compute above a defined threshold (currently 10²⁵ FLOPs).
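
For a back-of-envelope sense of the threshold, training compute for transformer models is often estimated as roughly 6 × parameters × training tokens, a heuristic from the scaling-law literature rather than from the Act. The Python snippet below applies it to a hypothetical model.

  THRESHOLD_FLOPS = 1e25  # presumption threshold for systemic risk

  def training_flops(params: float, tokens: float) -> float:
      # Common ~6 * N * D estimate for dense transformer training compute.
      return 6.0 * params * tokens

  # Hypothetical model: 100B parameters trained on 20T tokens.
  flops = training_flops(100e9, 20e12)
  print(f"{flops:.2e}", flops > THRESHOLD_FLOPS)  # 1.20e+25 True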

Systemic risk refers to the potential for a GPAI model to cause widespread, cascading harm across entire sectors or society at large. Because these models are general-purpose, they can be integrated into thousands of downstream applications simultaneously, amplifying the impact of any failure, bias, or misuse.

Examples of systemic risks in AI include:

  • A foundational language model with embedded biases that propagates those biases across hundreds of applications built on top of it.
  • A critical AI infrastructure model that, if compromised, could disrupt essential services across multiple sectors simultaneously.
  • Highly capable AI systems that could be repurposed to assist in the development of chemical, biological, radiological, or nuclear weapons.

Providers of GPAI models deemed to pose systemic risk face additional obligations, including adversarial testing (red-teaming), incident reporting to the European Commission, and cybersecurity measures appropriate to the scale of potential harm. The European AI Office, established within the Commission, has primary oversight responsibility for these frontier models.

Frequently Asked Questions

What is a high-risk AI system under the EU AI Act?

A high-risk AI system is one that poses a significant risk to health, safety, or fundamental rights. This includes AI used in areas such as biometric identification, employment decisions, credit scoring, law enforcement, and critical infrastructure. These systems are not banned but must comply with strict regulatory requirements before deployment.

How does the EU AI Act classify AI systems?

The EU AI Act uses a risk-based classification model with four categories:

  • High risk: Strict regulatory requirements
  • Limited risk: Transparency obligations (e.g., chatbots, deepfakes)
  • Minimal risk: No mandatory requirements
  • Unacceptable risk: Prohibited systems

This framework ensures that regulation is proportional to the potential harm of the AI system.

What obligations apply to high-risk AI systems?

High-risk AI systems must meet several requirements, including:

  • Risk management throughout the system lifecycle
  • High-quality, bias-mitigated training data
  • Technical documentation and transparency
  • Human oversight mechanisms
  • Accuracy, robustness, and cybersecurity standards
  • Registration in an EU database

Failure to comply can result in significant financial penalties.

Which AI systems are prohibited under the EU AI Act?

The Act bans AI systems considered to pose unacceptable risks, including:

  • Subliminal manipulation of behavior
  • Exploitation of vulnerable groups
  • Social scoring by public or private actors
  • Real-time biometric surveillance in public spaces (with limited exceptions)
  • Predictive policing based solely on profiling
  • Emotion recognition in workplaces or schools
  • Biometric categorization revealing sensitive attributes

These prohibitions carry the highest penalties under the Act.

Does the EU AI Act apply to companies outside the EU?

Yes. The EU AI Act has extraterritorial scope, meaning it applies to any company whose AI systems are used within the European Union, regardless of where the company is based. This is similar to the approach taken by the GDPR.