In Deloitte’s 2025 State of Generative AI in the Enterprise survey, 38% of respondents identified regulatory compliance as the top barrier to deploying AI, a figure that climbed 10 percentage points in a single year. For U.S. organizations building or buying AI systems, the question is no longer whether governance matters but which framework to follow. ISO/IEC 42001:2023, the first international management system standard for artificial intelligence, answers that question with a structured, certifiable approach to AI risk management. This article breaks down exactly what the standard requires, clause by clause, and explains how each requirement applies to organizations operating in the American regulatory environment.
What Is ISO/IEC 42001 and Why Does Risk Management Sit at Its Core?
ISO/IEC 42001:2023 is the world’s first certifiable AI management system standard, published in December 2023 by the International Organization for Standardization and the International Electrotechnical Commission. It establishes requirements for an Artificial Intelligence Management System (AIMS) that covers governance, ethics, transparency, and continuous improvement across the AI lifecycle.
The standard follows the same Annex SL high-level structure as ISO/IEC 27001 (information security) and ISO 9001 (quality management), which means organizations already certified against those frameworks will recognize the clause numbering and Plan-Do-Check-Act methodology. That structural familiarity is deliberate. AI risk management does not exist in isolation; it connects to data governance, information security, and enterprise risk programs already in place.
Risk management is the engine that drives the entire AIMS. Clauses 4 through 10 each touch some aspect of identifying, evaluating, treating, or monitoring AI risk, and Annex A provides 38 specific controls that organizations select based on their risk assessment results. Without a functioning risk management process, no other part of the standard works. The AI policy has no teeth, operational controls have no justification, and performance evaluation has nothing to measure against.
Key distinction for U.S. organizations: ISO/IEC 42001 provides a compliance foundation that aligns with the NIST AI Risk Management Framework while offering something NIST alone does not: third-party certifiability. A company can claim NIST alignment, but ISO/IEC 42001 certification means an accredited conformity assessment body has verified the management system through an independent audit.
How the Standard Structures AI Risk Management: A Clause-by-Clause Breakdown
ISO/IEC 42001 distributes risk management responsibilities across multiple clauses rather than confining them to a single section. This design is intentional: risk should inform every decision from scope definition to continuous improvement. Here is how each relevant clause contributes.
Clause 4: Understanding the Organization and Its Context
Before any risk can be assessed, the organization must define the boundaries of its AIMS. Clause 4 requires identifying internal factors (organizational culture, technical capabilities, existing governance structures) and external factors (regulatory landscape, industry trends, stakeholder expectations) that influence AI risk. For a U.S. healthcare company using AI for diagnostic imaging, internal factors might include the maturity of its data governance program, while external factors would include FDA guidance on AI/ML-based software as a medical device.
The scope definition under Clause 4.3 determines which AI systems fall under the management system. Getting this wrong, either too narrow or too broad, creates blind spots or unsustainable overhead. Organizations that deploy third-party AI services alongside internally developed models need to account for both categories in their scope statement.
Clause 5: Leadership and Commitment
Clause 5 places accountability for AI governance squarely on top management. The leadership team must establish an AI policy, assign clear roles and responsibilities for AI risk oversight, and ensure that governance is woven into the organization’s broader business strategy. This requirement exists because fragmented ownership is one of the most common failure points in AI risk programs. Deloitte’s research has noted that AI risk management often spans product management, data engineering, legal, compliance, and trust-and-safety teams, and without executive sponsorship, these groups operate in silos.
Clause 6: Planning (The Heart of AI Risk Management)
Clause 6 is where the standard’s risk management requirements become most granular. It breaks into several sub-clauses, each addressing a distinct stage of the risk management lifecycle.
Clause 6.1.1 (General) requires the organization to consider context factors from Clause 4 and stakeholder requirements from Clause 4.2 when planning the AIMS. The goal is to identify risks and opportunities that could affect the system’s ability to achieve its intended outcomes.
Clause 6.1.2 (AI Risk Assessment) mandates a formal, documented process for identifying and evaluating AI-specific risks. This goes beyond traditional IT risk registers. AI risks include algorithmic bias, model drift, data poisoning, adversarial attacks, lack of explainability, and environmental impact from compute-intensive training. The assessment must evaluate both the likelihood and the potential severity of each risk, then compare results against the organization’s defined risk criteria.
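To make the likelihood-and-severity evaluation concrete, here is a minimal sketch of a risk-scoring step. The 1-5 scales, the multiplication scoring, and the acceptance threshold are illustrative assumptions; ISO/IEC 42001 deliberately leaves the scoring scheme and risk criteria to the organization.

```python
from dataclasses import dataclass

# Hypothetical acceptance threshold on a 1-25 scale; the standard
# requires defined risk criteria but does not prescribe values.
ACCEPTANCE_THRESHOLD = 8

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    severity: int    # 1 (negligible) .. 5 (critical)

    def score(self) -> int:
        # Simple likelihood x severity scoring, one common convention
        return self.likelihood * self.severity

    def needs_treatment(self) -> bool:
        # Risks exceeding the defined criteria proceed to Clause 6.1.3
        return self.score() > ACCEPTANCE_THRESHOLD

register = [
    AIRisk("model drift in production scoring", likelihood=4, severity=3),
    AIRisk("training data poisoning", likelihood=2, severity=5),
    AIRisk("vendor API deprecation", likelihood=3, severity=2),
]

for risk in register:
    print(risk.name, risk.score(), risk.needs_treatment())
```

The output of a step like this feeds the risk register and determines which risks move on to treatment selection.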
Clause 6.1.3 (AI Risk Treatment) requires the organization to select treatment options for each identified risk. The four standard approaches apply: mitigate, avoid, transfer, or accept. After choosing treatments, organizations must compare their selected controls against the 38 controls listed in Annex A to confirm that no critical safeguard has been overlooked. The result is two mandatory documents: a Statement of Applicability (SoA) listing all selected and excluded Annex A controls with justification, and an AI Risk Treatment Plan detailing implementation responsibilities, timelines, and resource allocation.
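A Statement of Applicability is, at its simplest, a record per Annex A control with an inclusion decision and a documented rationale. The sketch below shows that shape; the field names are assumptions, and the control identifiers and titles are simplified theme-level placeholders rather than quotations from Annex A.

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control_id: str    # Annex A control or theme reference
    title: str
    included: bool
    justification: str  # required for inclusions AND exclusions

# Illustrative entries only; a real SoA covers all 38 controls.
soa = [
    SoAEntry("A.6", "AI system life cycle", True,
             "Applies to all internally developed models"),
    SoAEntry("A.10", "Third-party and customer relationships", False,
             "No third-party AI components in scope this period"),
]

# Auditors check that every decision carries a rationale
missing_rationale = [e.control_id for e in soa if not e.justification]
print(missing_rationale)
```

Keeping the SoA in a structured form like this makes it straightforward to verify completeness before an audit rather than during one.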
Clause 6.1.4 (AI System Impact Assessment) introduces a requirement that has no direct equivalent in ISO/IEC 27001. While the risk assessment in 6.1.2 focuses on organizational risks, the impact assessment looks outward, evaluating consequences for individuals, groups, and society. This includes examining intended uses, foreseeable misuse, and the potential for harm to people who interact with or are affected by the AI system. The published companion standard ISO/IEC 42005:2025 provides detailed guidance for conducting these assessments.
Clause 8: Operational Planning and Control
If Clause 6 is the planning phase, Clause 8 is where those plans become operational. Clause 8.1 covers the day-to-day processes that keep the AIMS running: implementing controls, managing changes, and maintaining documented procedures. Clauses 8.2, 8.3, and 8.4 require the organization to periodically re-run its AI risk assessments, update risk treatments, and conduct fresh impact assessments, particularly when significant changes occur in AI systems, data sources, or deployment contexts.
The distinction between Clause 6 and Clause 8 risk assessments matters. Clause 6 establishes the process and criteria. Clause 8 executes those processes on an ongoing basis and documents the results. Auditors look for evidence that both exist and that operational assessments reflect current conditions rather than copying the initial planning-phase output.
Clauses 9 and 10: Performance Evaluation and Continual Improvement
Clause 9 closes the feedback loop. Organizations must monitor and measure AIMS performance, conduct internal audits, and perform management reviews that assess whether risk treatments are achieving their intended outcomes. Clause 10 then requires corrective actions for any nonconformities discovered during evaluation, driving the system toward continuous improvement. Together, these clauses ensure that AI risk management is iterative rather than a one-time compliance exercise.
Risk Management Requirements at a Glance
| Clause | Risk Management Focus | Key Outputs |
|---|---|---|
| 4 (Context) | Internal/external factors affecting AI risk | AIMS scope statement, stakeholder register |
| 5 (Leadership) | Executive accountability, AI policy | AI policy document, role assignments |
| 6.1.2 (Risk Assessment) | Identify and evaluate AI-specific risks | AI risk register, risk criteria matrix |
| 6.1.3 (Risk Treatment) | Select and justify controls for each risk | Statement of Applicability, Treatment Plan |
| 6.1.4 (Impact Assessment) | Evaluate societal and individual impacts | AI impact assessment report |
| 8 (Operation) | Execute and update risk processes | Updated assessments, operational evidence |
| 9 (Performance) | Monitor, audit, and review effectiveness | Audit reports, management review records |
| 10 (Improvement) | Correct nonconformities, drive improvement | Corrective action records |
Annex A Controls: The Practical Toolkit for AI Risk Treatment
When Clause 6.1.3 tells you to select controls, Annex A is where you shop. The standard includes 38 controls organized into themes that span the entire AI lifecycle: governance policies, resource documentation, system lifecycle management, data governance, transparency and explainability, responsible use, third-party management, and stakeholder communication.
These controls are not optional suggestions. During certification, auditors verify that the organization has reviewed every Annex A control and either implemented it or documented a risk-based justification for exclusion. Annex B provides implementation guidance for each control, while Annex C lists AI-specific risk sources and organizational objectives to consider during the assessment process.
A few control themes are particularly relevant to organizations managing AI risk in the U.S. market. The A.6 controls address the AI system lifecycle, requiring documentation and traceability from design through deployment and retirement, including ongoing operation and monitoring to detect anomalies like model drift or bias amplification over time. The A.7 controls cover data for AI systems, including data quality and provenance, a critical concern for organizations subject to state-level privacy laws or sector-specific regulations like HIPAA. The A.8 controls address information for interested parties, including transparency and explainability, which increasingly matters as state legislatures pass AI disclosure requirements. And the A.10 controls cover third-party and customer relationships, extending risk management across the AI supply chain.
AI Risk Assessment vs. AI System Impact Assessment: Two Distinct Requirements
One of the most misunderstood aspects of ISO/IEC 42001 is the relationship between the AI risk assessment (Clause 6.1.2) and the AI system impact assessment (Clause 6.1.4). They are separate processes that serve different purposes.
The AI risk assessment focuses inward: what risks does AI pose to the organization’s objectives, operations, reputation, and compliance posture? Think cybersecurity vulnerabilities, intellectual property exposure through model training, vendor lock-in with third-party AI providers, or regulatory penalties from non-compliant systems.
The impact assessment faces outward: what consequences could the AI system create for people and communities? This includes examining algorithmic bias that disadvantages protected groups, economic displacement effects, environmental costs of compute-intensive models, and scenarios where system failures could cause physical harm. ISO/IEC 42005:2025, published in 2025, provides a structured methodology for conducting these outward-facing assessments.
Critical audit point: Both assessments feed into the same risk treatment plan, but they capture fundamentally different categories of consequence. Organizations that conflate them into a single exercise will likely miss critical external impacts that auditors specifically look for during certification.
How ISO/IEC 42001 Risk Requirements Align with the NIST AI RMF
U.S. organizations frequently ask whether they need ISO/IEC 42001, the NIST AI Risk Management Framework, or both. The answer depends on organizational goals, but the two frameworks complement rather than duplicate each other.
NIST AI RMF organizes risk management into four functions: Govern, Map, Measure, and Manage. ISO/IEC 42001’s Clause 5 (Leadership) maps directly to NIST’s Govern function, which emphasizes oversight, accountability, and documented risk ownership. Clause 6’s planning requirements align with Map and Measure, where organizations define the AI system context and assess risks quantitatively. Clause 8’s operational controls correspond to Manage, where treatment strategies are executed and monitored.
The critical difference: NIST provides the risk vocabulary and assessment methodology, while ISO/IEC 42001 provides the management system structure and third-party certifiability. Organizations can use NIST AI RMF as their risk assessment methodology within the ISO/IEC 42001 framework, satisfying both in a single integrated program. AWS, for example, has published detailed guidance on using ISO/IEC 42001 alongside NIST-aligned threat modeling techniques like STRIDE and PASTA to create a layered risk governance model.
Practical Steps to Meet ISO/IEC 42001 Risk Management Requirements
For U.S. organizations beginning their ISO/IEC 42001 journey, the risk management requirements can feel abstract. Here is a practical sequence that translates them into concrete actions.
- Inventory your AI use cases. Map every AI system in your organization, including third-party tools, internal models, and embedded AI features in SaaS products. For each, document the owner, purpose, data sources, and lifecycle stage.
- Define risk criteria and appetite. Establish what “acceptable risk” means for your organization. This should address financial, reputational, regulatory, ethical, and operational dimensions.
- Build an AI-specific risk register. Go beyond generic IT risks. Include AI-native threats: training data poisoning, model inversion attacks, prompt injection, output hallucination, concept drift, and fairness degradation.
- Conduct the dual assessment. Run both the organizational risk assessment (6.1.2) and the societal impact assessment (6.1.4) as distinct exercises with separate documentation.
- Map controls to Annex A. For each risk treatment, identify which Annex A controls apply. Document the rationale for every inclusion and exclusion in your Statement of Applicability.
- Integrate with existing frameworks. If your organization already holds ISO/IEC 27001 certification, reuse your ISMS backbone for governance, risk, policy, and audit processes. Align AI-specific controls to the existing management review cycle.
Third-Party AI and Supply Chain Risk Under ISO/IEC 42001
Most U.S. organizations do not build every AI system they use. They purchase models, license APIs, embed AI-powered features from SaaS vendors, and rely on pre-trained models from companies like OpenAI, Google, or Anthropic. ISO/IEC 42001 does not let the organization off the hook for these external dependencies.
Clause 8’s operational controls require organizations to govern any externally provided process, product, or service that affects the AIMS. If a vendor’s model influences customer-facing decisions, the organization must assess the risks that vendor introduces, define requirements the vendor must meet, and monitor whether those requirements are being fulfilled. Annex A reinforces this through controls that require allocation of responsibilities across the entire AI lifecycle, including the portions handled by partners and suppliers.
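One lightweight way to operationalize "define requirements and monitor fulfillment" is a per-vendor checklist compared against collected evidence each review cycle. This is a sketch under assumptions: the requirement names are illustrative, and the standard does not prescribe any particular checklist.

```python
# Hypothetical requirements an organization might impose on an
# external AI service provider under Clause 8.
VENDOR_REQUIREMENTS = {
    "no_training_on_customer_data",
    "breach_notification_sla",
    "documented_model_update_policy",
}

def unmet_requirements(evidence: dict) -> set:
    """Return requirements lacking affirmative evidence this cycle."""
    return {r for r in VENDOR_REQUIREMENTS if not evidence.get(r, False)}

# Evidence gathered during a quarterly vendor review
evidence = {
    "no_training_on_customer_data": True,
    "breach_notification_sla": True,
}

print(sorted(unmet_requirements(evidence)))
```

Any unmet requirement then becomes an input to the risk register and, if needed, a corrective action under Clause 10.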
This matters especially for organizations using generative AI. When StackAware, an AI-powered cybersecurity company, pursued ISO/IEC 42001 certification through Schellman, their risk assessment identified third-party data leakage as a significant concern because OpenAI had experienced a cross-tenant breach. They formally accepted that residual risk after evaluating the trade-offs, and documented their reasoning. That kind of deliberate, documented decision-making is exactly what auditors look for.
The U.S. Regulatory Context: Why ISO/IEC 42001 Matters Now
The United States does not currently mandate ISO/IEC 42001 certification by law. That does not mean it lacks relevance. Several regulatory signals point toward a future where structured AI risk management is expected rather than optional.
The NIST AI RMF, while voluntary, has been referenced in executive orders and federal procurement guidance. Multiple states, including Colorado, Illinois, and California, have introduced or passed AI-specific legislation addressing automated decision-making, bias auditing, and transparency requirements. Federal agencies like the FDA, SEC, and FTC have each issued sector-specific AI guidance that aligns with the kinds of risk management practices ISO/IEC 42001 requires.
Organizations that serve European customers face an additional driver. The EU AI Act, which became law in 2024, imposes mandatory risk management obligations on high-risk AI systems. ISO/IEC 42001’s clause structure maps closely to the EU AI Act’s requirements for risk assessment, data governance, human oversight, and post-market monitoring. Achieving ISO/IEC 42001 certification positions a U.S. organization to demonstrate compliance readiness across both domestic and international regulatory requirements simultaneously.
Common Mistakes Organizations Make with ISO/IEC 42001 Risk Management
After reviewing publicly available certification case studies and audit perspectives, several recurring errors stand out.
Conflating the risk assessment with the impact assessment. These are two separate processes under Clauses 6.1.2 and 6.1.4. The risk assessment evaluates threats to the organization. The impact assessment evaluates consequences for external parties. Merging them into one document almost always results in missing the societal dimension that auditors specifically check.
Treating the Statement of Applicability as a checkbox exercise. Every inclusion or exclusion of an Annex A control needs a documented, risk-based rationale. Saying “not applicable” without explaining why invites audit findings.
Performing a one-time risk assessment. ISO/IEC 42001 requires ongoing risk assessments under Clause 8, not just the initial planning exercise in Clause 6. AI systems change. Models drift. Data distributions shift. A risk register from 12 months ago may not reflect current conditions.
Ignoring third-party AI risks. Organizations that use commercial AI APIs or embedded AI features but exclude them from their AIMS scope leave a significant governance gap. If a third party can influence system behavior, the organization remains accountable for the outcome.
Building a Risk Management Practice That Lasts
ISO/IEC 42001’s risk management requirements are not a compliance checkbox to satisfy once and forget. They form a continuous cycle of identification, assessment, treatment, monitoring, and improvement that keeps pace with the AI systems it governs. For U.S. organizations facing a fragmented but rapidly evolving regulatory landscape, this structured approach provides both operational clarity and audit-ready evidence of responsible AI governance.
The most productive first step is mapping your current AI inventory against the standard’s clause requirements to identify where your existing governance, risk, and compliance programs already satisfy ISO/IEC 42001 and where gaps remain.
GAICC offers ISO/IEC 42001 Lead Implementer training that equips professionals with the practical knowledge to build, operate, and certify an AI Management System aligned with these risk management requirements. Explore the program to get started.
