The regulation uses a risk-based classification framework that sorts every AI system into one of four tiers: unacceptable risk, high risk, limited risk, and minimal risk. Each tier triggers different obligations, from outright bans to voluntary best practices. Getting this classification right is the single most important compliance step, because every subsequent requirement flows from it.
Why the EU AI Act Matters for US Companies
The EU AI Act carries extraterritorial reach modeled on the GDPR. A US-based company falls within scope if it develops an AI system placed on the EU market, deploys an AI system whose outputs are used within the EU, or imports or distributes AI systems to EU customers. A SaaS company running its servers in Virginia but serving enterprise clients in Frankfurt is subject to the regulation just as much as a Berlin-based startup.
The enforcement timeline is already in motion. Prohibitions on unacceptable-risk AI practices took effect in February 2025. General-purpose AI model obligations and the governance infrastructure became applicable in August 2025. The most consequential deadline for many businesses, covering the full requirements for high-risk AI systems, arrives on August 2, 2026. For US companies with AI products in the European market, that leaves a narrow window to classify systems, assess gaps, and build compliance programs.
Parallel domestic developments make this doubly relevant. Colorado’s AI Act takes effect in mid-2026 with risk management and impact assessment requirements. Illinois enacted AI notification laws for employers starting January 2026. President Trump’s December 2025 Executive Order on AI signals federal interest in consolidating oversight. Understanding the EU’s risk classification system is practical preparation not just for European compliance but for the US regulatory trajectory that is following a similar pattern.
The Four Risk Tiers at a Glance
The EU AI Act organizes AI systems into four categories based on the potential harm they pose to health, safety, and fundamental rights. The logic is straightforward: the greater the risk, the heavier the regulatory burden.
| Risk Tier | Regulatory Treatment | Key Obligations | Maximum Penalty |
|---|---|---|---|
| Unacceptable | Banned entirely | Cannot be developed, deployed, or used in the EU | €35M or 7% global turnover |
| High Risk | Permitted with strict compliance | Risk management, data governance, technical documentation, human oversight, conformity assessment | €15M or 3% global turnover |
| Limited Risk | Transparency requirements | Disclose AI interaction to users; label AI-generated content | €15M or 3% global turnover |
| Minimal Risk | Largely unregulated | Voluntary codes of conduct encouraged | No specific AI Act penalties |
Most AI applications available today, from spam filters to recommendation engines, fall into the minimal risk category with no mandatory obligations. The regulation concentrates its enforcement weight on the top two tiers, which represent a relatively small percentage of all AI systems but carry the most significant consequences for individuals and society.
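Compliance teams building internal tooling often encode these tiers as a shared vocabulary across inventories and dashboards. Here is a minimal Python sketch mirroring the table above; the enum names and penalty pairs are our own encoding, not an official schema:

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers, ordered from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # permitted with strict compliance
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Maximum administrative fines per tier, as (EUR, fraction of global turnover).
# None means the AI Act itself attaches no specific penalty.
MAX_PENALTY = {
    RiskTier.UNACCEPTABLE: (35_000_000, 0.07),
    RiskTier.HIGH: (15_000_000, 0.03),
    RiskTier.LIMITED: (15_000_000, 0.03),
    RiskTier.MINIMAL: None,
}
```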
Unacceptable Risk: Banned AI Practices
These prohibitions have been enforceable since February 2, 2025. Any organization found developing or deploying these systems faces the Act’s maximum penalties.
The EU considers certain AI applications fundamentally incompatible with democratic values and human dignity. Eight categories of AI practices are banned outright:
- Subliminal manipulation: AI systems that deploy techniques below the threshold of conscious awareness to distort behavior in ways that cause significant harm. A system that influences purchasing decisions through imperceptible audio cues embedded in digital content would fall here.
- Exploitation of vulnerabilities: Systems targeting individuals based on age, disability, or socioeconomic circumstances to manipulate their behavior. An AI-powered toy that encourages dangerous actions in children is a commonly cited example.
- Social scoring: Government or government-authorized systems that evaluate or classify people based on social behavior or personality traits, leading to detrimental treatment disproportionate to the original context.
- Predictive policing based solely on profiling: AI that assesses the risk of an individual committing a crime based exclusively on profiling or personality assessment, without objective and verifiable facts linked to criminal activity.
- Untargeted facial image scraping: Building facial recognition databases through mass scraping of images from the internet or CCTV footage without targeted consent.
- Emotion recognition in workplaces and schools: AI that infers emotional states of employees or students, except where strictly necessary for medical or safety purposes.
- Biometric categorization inferring sensitive attributes: Systems that use biometric data to deduce race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation.
- Real-time remote biometric identification in public spaces for law enforcement: Prohibited, with very narrow exceptions such as searching for missing persons or preventing imminent terrorist threats.
For US companies: Audit any AI system that touches biometric data, behavioral profiling, or automated psychological assessment. If it operates in the EU market and falls into any of these eight categories, it must be discontinued immediately.
High-Risk AI Systems: The Core of the Regulation
High-risk AI represents the most extensively regulated category under the Act. These systems are not banned, but they must meet a comprehensive set of requirements before entering the EU market and throughout their operational lifecycle. The compliance deadline for most high-risk obligations is August 2, 2026.
An AI system qualifies as high-risk through one of two pathways. First, if the system serves as a safety component of a product already regulated under EU harmonization legislation (medical devices, machinery, radio equipment, civil aviation, automotive, and similar product categories) and that product requires third-party conformity assessment. Second, if the system’s intended use falls within one of eight domain categories specified in Annex III of the Act.
Annex III: The Eight High-Risk Domains
| Domain | Examples of High-Risk Use Cases |
|---|---|
| 1. Biometrics | Remote biometric identification, biometric categorization by sensitive attributes, emotion recognition systems |
| 2. Critical Infrastructure | Safety components in road traffic, water, gas, heating, electricity supply, and digital infrastructure management |
| 3. Education & Vocational Training | Determining access to education, assessing students, evaluating learning outcomes, monitoring exam integrity |
| 4. Employment & Worker Management | AI-driven recruitment, resume screening, performance evaluation, promotion decisions, task allocation, contract termination |
| 5. Essential Services | Credit scoring, insurance risk assessment and pricing (life and health), emergency call triage and dispatch prioritization |
| 6. Law Enforcement | Crime risk assessment, evidence reliability evaluation, polygraph-type tools, profiling during investigations |
| 7. Migration & Border Control | Security risk assessment of travelers, asylum application processing, residence permit evaluation, border surveillance |
| 8. Justice & Democratic Processes | AI assisting judicial authorities in researching and interpreting facts and law, election-related influence systems |
Context determines everything in this classification. The same underlying technology, such as a natural language processing model, could be minimal risk when used as a customer service chatbot but high risk when deployed to screen job applicants. Classification follows the intended use, not the technology itself.
One important nuance: an AI system listed in Annex III can claim an exemption from high-risk status if it does not pose a significant risk of harm and meets specific conditions, such as performing only a narrow procedural task, improving the result of a previously completed human activity, or serving a preparatory function without replacing human judgment. Providers who claim this exemption must formally document their assessment and register the system in the EU database.
What High-Risk Compliance Requires
The compliance obligations for high-risk AI systems are the most resource-intensive part of the Act. Seven core requirements apply throughout the system’s lifecycle:
- Risk management system. A continuous, documented process for identifying, analyzing, estimating, and evaluating risks. This is not a one-time assessment but an iterative system that runs from design through deployment and monitoring.
- Data governance. Training, validation, and testing datasets must be relevant, representative, and as free of errors as possible. Bias detection and mitigation measures are expected, not optional.
- Technical documentation. Detailed records covering the system’s design, development process, capabilities, limitations, and risk profile. This documentation must be sufficient for authorities to assess compliance.
- Record-keeping and logging. High-risk AI systems must automatically log events throughout their lifecycle, creating an audit trail that can be reviewed by regulators or deployers; a minimal sketch follows this list.
- Transparency and information to deployers. Clear instructions for downstream users that explain the system’s intended purpose, capabilities, known limitations, and the level of human oversight required.
- Human oversight measures. The system must be designed so that qualified humans can effectively oversee its operation, understand its outputs, and intervene or override when necessary.
- Accuracy, robustness, and cybersecurity. Performance must meet stated benchmarks, and the system must be resilient against errors, faults, and attempts at manipulation, including adversarial attacks.
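Of these seven, record-keeping is the most directly expressible in code. The sketch below shows one possible shape for a structured, append-only event trail; the field names and event types are illustrative assumptions, since the Act specifies what logging must achieve, not a format:

```python
import json
import time
import uuid

def log_event(log_path: str, system_id: str, event_type: str, details: dict) -> None:
    """Append one structured event to a JSON-lines audit log.

    A production implementation would add tamper-evidence (e.g. hash
    chaining), retention policies, and access controls; this sketch only
    shows the minimum shape of a reviewable trail.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,    # which AI system produced the event
        "event_type": event_type,  # e.g. "prediction", "override", "retraining"
        "details": details,        # model version, inputs/outputs summary, operator
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a human reviewer overrides a high-risk screening decision.
log_event(
    "audit.jsonl",
    system_id="resume-screener-v2",
    event_type="human_override",
    details={"model_version": "2.3.1", "original_score": 0.18, "final_decision": "advance"},
)
```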
Before placing a high-risk system on the EU market, providers must also complete a conformity assessment (either self-assessed or through a notified body, depending on the category), affix CE marking, register the system in the EU database, and issue a declaration of conformity. For US companies accustomed to less prescriptive regulatory environments, this represents a significant shift in how AI products are documented, tested, and governed.
Limited Risk: Transparency as the Primary Obligation
The limited risk tier covers AI systems that interact directly with people or generate content that could be mistaken for human-created material. The regulatory burden is lighter here: the central requirement is transparency.
Chatbots must inform users they are interacting with an AI system, unless the AI nature is obvious from the context. Deepfakes and AI-generated content, including images, audio, and video, must be labeled as artificially generated or manipulated. Emotion recognition systems and biometric categorization systems that are not prohibited must notify the people being analyzed. These transparency obligations for deployers become binding on August 2, 2026.
General-purpose AI (GPAI) models, including large language models and multimodal systems, also carry specific transparency requirements that took effect in August 2025. Providers of GPAI models must maintain technical documentation, provide downstream deployers with integration information, publish training data summaries, and comply with EU copyright law. If a GPAI model is classified as having systemic risk (currently defined by a training compute threshold of 10²⁵ floating point operations), additional obligations apply, including model evaluations, adversarial testing, serious incident reporting, and cybersecurity protections.
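Whether a model approaches the 10²⁵ FLOP threshold can be estimated before training finishes. Here is a back-of-the-envelope sketch using the common ~6 × parameters × training-tokens approximation for dense transformer training compute; the approximation comes from scaling-law practice, not from the Act, which leaves compute accounting to the provider:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the Act's systemic-risk presumption threshold

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens

# Example: a hypothetical 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")  # ~6.3e24, below the 1e25 presumption threshold
print(flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```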
Minimal Risk: The Majority of AI Applications
AI systems that do not fit into any of the categories above fall into minimal risk. This includes the vast majority of AI applications currently deployed: spam filters, AI-powered video games, inventory management systems, basic recommendation algorithms, and similar tools. No mandatory obligations apply under the AI Act, though voluntary codes of conduct are encouraged.
The absence of AI Act requirements does not mean zero regulatory exposure. Minimal risk AI systems still need to comply with existing EU legislation, including the GDPR for systems processing personal data, consumer protection directives, and sector-specific regulations. Good governance practices, such as documenting your system’s purpose and maintaining human oversight protocols, are worth implementing regardless of classification, because an AI system can shift risk categories as its use case evolves.
How to Classify Your AI System: A Practical Approach
Classification under the EU AI Act follows intended purpose, not the technical architecture of the system. A transformer model is not inherently high-risk. A computer vision algorithm is not automatically banned. The regulatory classification depends on what the system does and in what context it operates.
A straightforward classification sequence works well for most organizations (a code sketch follows the list):
- Screen for prohibited practices first. Does the system involve any of the eight banned categories? If yes, it cannot operate in the EU market.
- Check Annex III domains. Does the system’s intended use fall within one of the eight high-risk sectors? Map each AI use case against these categories specifically, not generically.
- Evaluate the product safety pathway. Is the AI system a safety component of a product regulated under EU harmonization legislation? Does that product require third-party conformity assessment?
- Assess exemption eligibility. If the system falls under Annex III, does it qualify for the narrow exemption? Document this assessment formally.
- Check transparency triggers. Does the system interact directly with users, generate synthetic content, or perform emotion recognition? These may trigger limited risk obligations.
- Default to minimal risk. If none of the above apply, the system is minimal risk under the AI Act, though other EU laws still apply.
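This sequence maps naturally onto a short decision function. The following is a simplified triage sketch, not legal advice: each boolean input stands in for the legal analysis the corresponding step actually requires.

```python
def classify_ai_system(
    prohibited_practice: bool,
    regulated_product_safety_component: bool,
    annex_iii_domain: bool,
    qualifies_for_exemption: bool,
    transparency_trigger: bool,
) -> str:
    """Apply the classification sequence in order; the first match wins."""
    if prohibited_practice:
        return "unacceptable: cannot operate in the EU market"
    if regulated_product_safety_component:
        return "high risk (product safety pathway)"
    if annex_iii_domain and not qualifies_for_exemption:
        return "high risk (Annex III)"
    if transparency_trigger:
        return "limited risk: transparency obligations apply"
    return "minimal risk under the AI Act (other EU law may still apply)"

# Example: a resume-screening tool (Annex III domain 4, no exemption).
print(classify_ai_system(False, False, True, False, False))
# -> high risk (Annex III)
```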
Organizations with multiple AI systems should build a centralized AI inventory. Map each system against the classification criteria, document the assessment rationale, and assign ownership for ongoing monitoring. Risk classifications are not static. A change in how a system is used, the data it processes, or the decisions it informs can shift its classification to a higher tier.
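One way to structure such an inventory is sketched below; the fields are a suggestion, not any regulatory schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row in a centralized AI inventory; fields are illustrative."""
    name: str
    intended_purpose: str
    risk_tier: str                 # result of the classification sequence
    classification_rationale: str  # why this tier, with Annex III mapping if any
    owner: str                     # accountable person for ongoing monitoring
    next_review: date              # classifications drift as use cases evolve
    eu_database_registered: bool = False
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="resume-screener-v2",
        intended_purpose="Rank inbound job applications",
        risk_tier="high",
        classification_rationale="Annex III domain 4 (employment); no exemption",
        owner="compliance@example.com",
        next_review=date(2026, 2, 1),
    ),
]
```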
How ISO/IEC 42001 Supports EU AI Act Compliance
ISO/IEC 42001 is the first international standard for an Artificial Intelligence Management System (AIMS). It provides a structured framework for establishing, implementing, and continuously improving AI governance within an organization. While the EU AI Act defines what must be done, ISO/IEC 42001 offers a methodology for how to do it.
The overlap between the two frameworks is substantial. The AI Act requires risk management systems for high-risk AI; ISO/IEC 42001 provides a risk assessment and treatment process aligned with ISO 31000. The Act mandates data governance; the standard includes controls for data quality, bias management, and lifecycle governance. The Act demands technical documentation and record-keeping; ISO/IEC 42001’s management system approach ensures documentation is maintained as a natural output of the governance process, not as a last-minute compliance exercise.
For US companies preparing for EU AI Act compliance, implementing ISO/IEC 42001 provides two practical advantages. First, it creates a repeatable governance structure that can be audited, scaled, and adapted as regulations evolve across jurisdictions. Second, certification against the standard signals credible commitment to responsible AI practices, which is increasingly relevant in vendor selection and enterprise procurement processes.
Key Compliance Deadlines for US Companies
| Date | What Takes Effect |
|---|---|
| Feb 2, 2025 | Prohibited AI practices banned; AI literacy obligations begin |
| Aug 2, 2025 | GPAI model obligations; governance infrastructure; penalty regime operational |
| Aug 2, 2026 | Full high-risk AI requirements; transparency obligations for deployers; remaining provisions |
| Aug 2, 2027 | Extended transition for high-risk AI embedded in regulated products (Annex I) |
The August 2, 2026 deadline is the most consequential for the majority of US companies with AI exposure in Europe. By that date, conformity assessments should be completed, technical documentation finalized, CE marking affixed for applicable systems, and EU database registration finished for high-risk systems.
Where to Start
The EU AI Act’s risk classification framework is not ambiguous about what it expects, even if individual edge cases require careful analysis. For US companies, the essential first step is building a complete inventory of AI systems, classifying each against the four risk tiers, and identifying which obligations apply. Starting with the prohibited practices audit and working down through high-risk, limited, and minimal categories creates a clear compliance roadmap.
Organizations that take this seriously now will be better positioned than those treating August 2026 as a distant deadline. Implementing an AI management system aligned with ISO/IEC 42001 provides the governance backbone to meet EU AI Act requirements and adapt to the expanding patchwork of AI regulations globally.
GAICC’s ISO/IEC 42001 training programs equip compliance professionals, AI project managers, and governance teams with the practical skills to classify AI systems, build conformity documentation, and implement the management structures the EU AI Act demands. Explore GAICC’s ISO/IEC 42001 certification courses to start building your compliance framework today.
