That voluntary status is deceptive. The Federal Trade Commission, the Consumer Financial Protection Bureau, the Food and Drug Administration, the Securities and Exchange Commission, and the Equal Employment Opportunity Commission all reference NIST AI RMF principles in their enforcement guidance. Federal contractors face growing expectations to demonstrate NIST-aligned AI governance. And the framework’s crosswalk to ISO/IEC 42001 means that organisations adopting the AI RMF are simultaneously building toward international certification.
Through 2025 and into early 2026, NIST has expanded the framework’s ecosystem significantly. The March 2025 update addressed generative AI risks, supply chain vulnerabilities, and third-party model assessment. The Generative AI Profile (NIST AI 600-1) provides specific guidance for large language models and multimodal systems. In December 2025, NIST released the preliminary draft Cyber AI Profile (NIST IR 8596) bridging AI risk management with the Cybersecurity Framework 2.0. And SP 800-53 Control Overlays for Securing AI Systems are in active development.
This guide breaks down the framework’s architecture, explains each core function with practical implementation guidance, maps it to international standards, and provides a concrete roadmap for adoption.
What the NIST AI RMF Is and Why It Matters
The NIST AI Risk Management Framework (AI RMF 1.0, formally NIST AI 100-1) is a voluntary, technology-agnostic, and sector-neutral framework for managing risks associated with AI systems across their entire lifecycle. It was developed through an open, consensus-driven process involving government agencies, private sector organisations, academic institutions, and civil society groups.
The framework treats AI as socio-technical: risks emerge not only from models and data, but from how people build, deploy, and use AI systems. This perspective shapes everything about the AI RMF’s structure. Rather than prescribing specific technical controls, it defines outcomes that organisations should achieve, allowing flexibility in how those outcomes are reached.
Seven characteristics define trustworthy AI under the framework: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. These characteristics are not independent requirements to be satisfied individually. They interact, and sometimes trade off against each other. An AI system optimised exclusively for accuracy might sacrifice explainability; one focused entirely on privacy might reduce fairness testing capability. The AI RMF acknowledges these tensions and asks organisations to make deliberate, documented decisions about how to balance them.
The Four Core Functions: Govern, Map, Measure, Manage
The AI RMF organises risk management activities into four interconnected functions. Govern is a cross-cutting function that informs and is infused throughout the other three. Map, Measure, and Manage operate iteratively across the AI system lifecycle.
| Function | Focus |
|---|---|
| Govern | Establish organisational culture, policies, accountability structures, and risk tolerance. A cross-cutting function that informs Map, Measure, and Manage. |
| Map | Identify context, stakeholders, intended uses, and potential impacts. Produces the contextual foundation for go/no-go decisions about AI system development. |
| Measure | Quantify and assess risks using metrics, bias testing, explainability evaluation, adversarial testing, and continuous monitoring. |
| Manage | Allocate resources for risk treatment, implement controls, establish incident response, and maintain post-deployment oversight. |
Govern: Establishing the Foundation
The Govern function creates the organisational culture, policies, and accountability structures that make AI risk management possible. Without effective governance, the other three functions lack direction and authority.
Govern requires organisations to: understand and document AI-related legal and regulatory requirements (Govern 1.1); integrate trustworthy AI characteristics into policies and processes (Govern 1.2); establish risk tolerance thresholds and decision-making procedures (Govern 1.3); document and make transparent risk management processes and outcomes (Govern 1.4); ensure ongoing monitoring of AI risks (Govern 1.5); maintain inventories of AI systems prioritised by risk (Govern 1.6); and establish safe decommissioning procedures (Govern 1.7).
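To make this concrete, here is a minimal sketch of how Govern 1.3 risk tolerance thresholds might be encoded so that downstream Measure and Manage activities can check against them. All names and threshold values are illustrative assumptions, not NIST prescriptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskTolerance:
    """Illustrative risk tolerance thresholds (Govern 1.3).

    The values are hypothetical examples; each organisation must set
    its own thresholds through its governance process.
    """
    min_accuracy: float = 0.95               # below this, the system fails validation
    max_disparate_impact_gap: float = 0.20   # max allowed gap in group selection rates
    max_unresolved_high_risks: int = 0       # high-severity risks open at deployment

def within_tolerance(accuracy: float, impact_gap: float,
                     open_high_risks: int,
                     tolerance: RiskTolerance = RiskTolerance()) -> bool:
    """Check measured values against the documented tolerance thresholds."""
    return (accuracy >= tolerance.min_accuracy
            and impact_gap <= tolerance.max_disparate_impact_gap
            and open_high_risks <= tolerance.max_unresolved_high_risks)
```

The point is not the specific numbers but that tolerance is written down once, under governance ownership, and referenced everywhere else.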
Govern also addresses workforce composition (Govern 2), requiring that diversity, equity, inclusion, and accessibility are prioritised in AI risk management teams. NIST specifically links workforce diversity to the quality of AI risk identification, arguing that homogeneous teams are more likely to miss risks affecting populations unlike themselves.
Map: Understanding Context and Risks
The Map function identifies the context in which an AI system operates and the risks it introduces. After completing the Map function, organisations should have enough contextual knowledge to make an informed go/no-go decision about design, development, or deployment.
Map covers: intended purposes, potentially beneficial uses, and context of operation (Map 1); identification of likely users, affected stakeholders, and non-user impacts (Map 2); AI system benefits relative to alternatives (Map 3); mapping of risks specific to the AI system’s use case (Map 4); and documenting likelihood and magnitude of impacts (Map 5). The Map function is not a one-time exercise. It must be applied continuously as context, capabilities, risks, and potential impacts evolve.
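As an illustration of Map 4 and Map 5 outputs, a risk register entry might capture likelihood and magnitude on simple ordinal scales. The schema below is a hypothetical sketch; the AI RMF does not mandate any particular format.

```python
from dataclasses import dataclass

LIKELIHOOD = ("rare", "unlikely", "possible", "likely", "almost_certain")
MAGNITUDE = ("negligible", "minor", "moderate", "major", "severe")

@dataclass
class MappedRisk:
    """One entry in a Map-phase risk register (Map 4 and Map 5)."""
    system: str
    description: str
    affected_stakeholders: list[str]
    likelihood: str   # one of LIKELIHOOD
    magnitude: str    # one of MAGNITUDE

    def __post_init__(self):
        # Reject values outside the agreed ordinal scales
        if self.likelihood not in LIKELIHOOD:
            raise ValueError(f"unknown likelihood: {self.likelihood}")
        if self.magnitude not in MAGNITUDE:
            raise ValueError(f"unknown magnitude: {self.magnitude}")

# Hypothetical example entry
risk = MappedRisk(
    system="resume-screening-v2",
    description="Model downgrades candidates with employment gaps",
    affected_stakeholders=["applicants", "recruiters"],
    likelihood="possible",
    magnitude="major",
)
```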
Measure: Quantifying and Assessing Risk
The Measure function employs quantitative, qualitative, or mixed-method tools to analyse, assess, benchmark, and monitor AI risks. It translates the contextual understanding from Map into measurable indicators.
Key activities include: selecting metrics appropriate to the risk context and AI system type (Measure 1); conducting bias testing across demographic groups (Measure 2); evaluating explainability and interpretability (Measure 3); assessing security and resilience including adversarial testing (Measure 4); and establishing continuous monitoring processes that detect drift, degradation, and emergent risks.
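As one concrete example of bias testing under Measure 2, the sketch below computes a disparate impact ratio between two demographic groups. The four-fifths threshold is a convention from US employment law, not an AI RMF requirement, and the function names and data are ours.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive outcomes (1 = selected, 0 = not selected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of selection rates between two demographic groups.

    A ratio below 0.8 is a common red flag (the "four-fifths rule"),
    though the AI RMF leaves metric selection to the organisation.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes for two groups scored by a hiring model
ratio = disparate_impact_ratio([1, 0, 1, 1, 0, 1], [1, 0, 0, 0, 1, 0])
print(f"Disparate impact ratio: {ratio:.2f}")  # flag for review if < 0.80
```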
Manage: Taking Action on Risks
The Manage function allocates resources to address mapped and measured risks. It encompasses risk treatment plans, incident response, recovery procedures, and communication protocols.
Manage activities include: prioritising risk treatments based on severity and likelihood (Manage 1); implementing technical and procedural controls (Manage 2); establishing appeal and override mechanisms for AI decisions affecting individuals (Manage 3); and maintaining post-deployment monitoring with defined triggers for intervention, retraining, or decommissioning (Manage 4).
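A minimal sketch of Manage 1 prioritisation, assuming simple ordinal severity and likelihood scales. The scales, scores, and risk entries are hypothetical; organisations define their own.

```python
# Hypothetical ordinal scales for scoring (Manage 1)
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "almost_certain": 4}

def priority_score(severity: str, likelihood: str) -> int:
    """Simple multiplicative risk score used to order the treatment queue."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

risks = [
    ("prompt injection in support chatbot", "high", "likely"),
    ("training data drift in fraud model", "medium", "almost_certain"),
    ("stale model card documentation", "low", "likely"),
]
# Highest-scoring risks receive treatment resources first
queue = sorted(risks, key=lambda r: priority_score(r[1], r[2]), reverse=True)
for name, sev, lik in queue:
    print(f"{priority_score(sev, lik):>2}  {name}")
```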
The NIST AI RMF Playbook: From Outcomes to Actions
The AI RMF defines what organisations should achieve but intentionally avoids prescribing how. The companion NIST AI RMF Playbook bridges this gap by providing suggested actions, references, and practical guidance for each subcategory across all four functions.
The Playbook is a living resource, updated semi-annually in response to community feedback. It is not a checklist to be completed in full. Organisations can borrow as many or as few suggestions as apply to their specific use case. This flexibility is the Playbook’s greatest strength and its most common source of confusion: organisations accustomed to prescriptive compliance standards sometimes struggle with the AI RMF’s outcome-oriented approach.
The Playbook is complemented by crosswalk documents mapping the AI RMF to other frameworks including ISO 42001, the EU AI Act, and Singapore’s AI Verify; use case documents demonstrating implementations across government, industry, and academia; a roadmap outlining NIST’s priorities for future framework development; and profiles tailored to specific domains including generative AI.
Generative AI: The NIST AI 600-1 Profile
Released in July 2024, NIST AI 600-1 (the Generative AI Profile) extends the AI RMF specifically to address risks associated with large language models, multimodal systems, and other generative AI technologies. It identifies 12 risk categories unique to or amplified by generative AI:
| Risk Category | Description |
|---|---|
| CBRN Information | Generation of content related to chemical, biological, radiological, or nuclear threats |
| Confabulation | Generation of plausible but factually incorrect content (hallucinations) |
| Data Privacy | Exposure or reconstruction of training data, including personal information |
| Environmental Impact | Energy consumption and carbon footprint of training and inference |
| Homogenisation | Reduction of diversity in AI outputs across the ecosystem |
| Human-AI Configuration | Inappropriate reliance on or trust in AI-generated outputs |
| Information Integrity | Generation of misinformation, disinformation, or manipulated content |
| Information Security | New attack vectors including prompt injection and model extraction |
| Intellectual Property | Use of copyrighted material in training data and generated outputs |
| Obscene/Abusive Content | Generation of harmful, violent, or exploitative content |
| Toxic/Biased Content | Amplification of stereotypes, hate speech, or discriminatory patterns |
| Value Chain | Risks from third-party components, pre-trained models, and data supply chains |
The Generative AI Profile maps these risks to the existing AI RMF functions, creating additional subcategories within Govern, Map, Measure, and Manage that specifically address generative AI challenges. For organisations deploying large language models or building products on top of foundation models, this profile is essential reading.
The Cyber AI Profile: Bridging AI and Cybersecurity
In December 2025, NIST released the preliminary draft Cybersecurity Framework Profile for Artificial Intelligence (NIST IR 8596). This profile addresses the intersection of AI and cybersecurity from three angles: securing AI systems against attack, using AI to enhance cybersecurity defences, and defending against AI-enabled threats.
The Cyber AI Profile maps AI-specific considerations onto the NIST Cybersecurity Framework 2.0’s existing functions: Govern, Identify, Protect, Detect, Respond, and Recover. Each subcategory receives a priority rating (High, Moderate, or Foundational) to help organisations allocate resources effectively.
In parallel, NIST is developing SP 800-53 Control Overlays for Securing AI Systems (COSAiS), which provide implementation-level controls that complement the Cyber AI Profile’s outcome-oriented guidance. The concept paper was released in August 2025, with community collaboration ongoing through a dedicated Slack workspace. Together, the Cyber AI Profile and COSAiS create a comprehensive AI cybersecurity governance stack.
How the AI RMF Connects to US Regulatory Expectations
The AI RMF’s voluntary status does not mean it operates in a regulatory vacuum. Multiple federal agencies reference its principles in their enforcement and guidance activities.
| Agency | AI Governance Relevance | NIST AI RMF Connection |
|---|---|---|
| FTC | Enforcement against deceptive or unfair AI practices | References trustworthy AI principles in enforcement guidance |
| CFPB | Fair lending and automated decision-making | Expects bias testing and explainability aligned with Measure function |
| FDA | AI-enabled medical devices and clinical decision support | References NIST risk management principles for AI/ML devices |
| SEC | AI in trading, advisory, and market surveillance | Model risk management aligned with Govern and Manage functions |
| EEOC | AI in hiring, promotion, and employment decisions | Emphasises bias testing and adverse impact assessment |
| DOD | AI in defence and national security applications | Responsible AI strategy explicitly maps to NIST AI RMF |
Federal contractors face the most direct expectations. Executive orders and agency-specific guidance increasingly require demonstration of NIST-aligned AI governance as a procurement condition. For private sector organisations, the AI RMF serves as the primary reference point when federal regulators evaluate whether AI practices meet “reasonable” standards of care.
While the NIST AI RMF provides voluntary governance guidance in the United States, organisations operating globally must also consider legally binding regulatory frameworks such as the European Union’s AI regulation. To understand the structure and compliance requirements of Europe’s risk-based AI law, read our EU AI Act Beginner’s Guide: AI Regulation and Business Compliance.
NIST AI RMF and ISO/IEC 42001: The Crosswalk
NIST has published an official crosswalk mapping every AI RMF subcategory to corresponding ISO/IEC 42001 clauses and annexes. This mapping demonstrates substantial overlap between the two frameworks, particularly in governance structures, risk assessment processes, lifecycle management, and stakeholder engagement.
The key differences are structural rather than substantive. The AI RMF organises guidance through four outcome-oriented functions (Govern, Map, Measure, Manage), while ISO 42001 uses the traditional ISO management system clause structure (Clauses 4-10 plus Annexes). The AI RMF is not certifiable; ISO 42001 is. The AI RMF provides more tactical flexibility through its Playbook suggestions; ISO 42001 demands documented, auditable management system processes.
For US organisations, the practical implication is significant: work done to implement the NIST AI RMF directly contributes to ISO 42001 certification readiness. The Govern function substantially addresses ISO 42001’s Clauses 5 (Leadership), 6 (Planning), and portions of Annex B. Map aligns with AI impact assessment requirements. Measure maps to monitoring and measurement clauses. Manage connects to operational controls and incident response.
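As a toy illustration, the function-level alignment described above could drive a simple certification gap analysis. The mapping below paraphrases this section, not the official crosswalk, which maps at subcategory level and is far more granular.

```python
# Simplified, illustrative view of the function-to-clause alignment
AI_RMF_TO_ISO_42001 = {
    "Govern":  ["Clause 5 (Leadership)", "Clause 6 (Planning)", "Annex B (portions)"],
    "Map":     ["AI impact assessment requirements"],
    "Measure": ["Monitoring and measurement clauses"],
    "Manage":  ["Operational controls", "Incident response"],
}

def certification_gaps(implemented_functions: set[str]) -> list[str]:
    """List ISO 42001 areas not yet covered by implemented AI RMF functions."""
    return [clause
            for function, clauses in AI_RMF_TO_ISO_42001.items()
            if function not in implemented_functions
            for clause in clauses]

# An organisation that has implemented Govern and Map still owes
# the measurement and operational-control evidence for certification
print(certification_gaps({"Govern", "Map"}))
```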
Organisations that want the governance rigour of the AI RMF combined with the external credibility of certification should implement both, using the crosswalk to eliminate duplicated effort. NIST’s AI RMF provides the intellectual framework and tactical playbook; ISO 42001 provides the auditable management system structure and internationally recognised credential.
Because the NIST AI RMF is not a certifiable standard, many organisations complement it with ISO/IEC 42001, which provides a formal AI management system and certification pathway. You can explore how the standard operationalises AI governance in our guide How ISO/IEC 42001 Strengthens AI Governance and Compliance Frameworks.
Implementation Tiers: Assessing Maturity
The AI RMF describes four implementation tiers that reflect an organisation’s AI risk management maturity.
| Tier | Name | Description |
|---|---|---|
| Tier 1 | Partial | Limited awareness of AI risks. Risk management is reactive and ad hoc. No formalised governance. |
| Tier 2 | Risk-Informed | Formal processes exist but may not be consistent. Some AI risk awareness at management level. |
| Tier 3 | Repeatable | Organisation-wide consistent processes. Policies are integrated into operations. |
| Tier 4 | Adaptive | Continuous monitoring and dynamic adaptation. Comprehensive governance with full lifecycle coverage. |
Most organisations entering AI risk management land between Tier 1 and Tier 2. The goal is not necessarily to reach Tier 4 for every AI system. Risk-proportionate governance means that low-risk AI applications may warrant Tier 2 practices while high-risk systems deployed in regulated environments should target Tier 3 or Tier 4.
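Expressed as code, risk-proportionate tier targeting might look like the following sketch. The risk classes and the mapping are assumptions for illustration, not part of the framework.

```python
# Illustrative tier targets, following the risk-proportionate guidance above
TIER_TARGETS = {
    "low": 2,      # Risk-Informed practices may suffice
    "medium": 3,   # Repeatable, organisation-wide processes
    "high": 4,     # Adaptive, continuously monitored governance
}

def target_tier(system_risk: str, regulated: bool) -> int:
    """Pick a target implementation tier proportionate to system risk."""
    tier = TIER_TARGETS[system_risk]
    # Systems deployed in regulated environments should target at least Tier 3
    return max(tier, 3) if regulated else tier

print(target_tier("low", regulated=False))    # 2
print(target_tier("medium", regulated=True))  # 3
```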
A Practical Roadmap for Implementation
Moving from awareness to implementation requires a structured approach. Based on the AI RMF’s architecture and NIST’s companion resources, here is a seven-step roadmap.
Step 1: Inventory all AI systems. Identify every AI system in development, production, or embedded in vendor products. Include systems that use pre-trained models, open-source components, or third-party APIs. Classify each by criticality, intended use, and potential for harm.
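A minimal inventory schema, assuming the classification fields described in this step. All names and example entries are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AISystemRecord:
    """One inventory entry, classified as described in Step 1."""
    name: str
    owner: str
    source: str        # e.g. "in-house", "vendor", "open-source", "api"
    intended_use: str
    harm_potential: Criticality
    criticality: Criticality

inventory = [
    AISystemRecord("support-chatbot", "cx-team", "api",
                   "customer support triage",
                   Criticality.MEDIUM, Criticality.MEDIUM),
    AISystemRecord("resume-screener", "hr-ops", "vendor",
                   "candidate shortlisting",
                   Criticality.HIGH, Criticality.HIGH),
]
# High-criticality systems get governance attention first
high_risk = [s for s in inventory if s.criticality is Criticality.HIGH]
```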
Step 2: Establish governance structures. Form an AI risk oversight group that includes technical, legal, compliance, and business stakeholders. Assign clear ownership for each AI RMF core function. Define risk tolerance thresholds and decision-making authority.
Step 3: Map context and stakeholders. For each AI system, document the intended purpose, operational context, user groups, and affected non-user stakeholders. Identify potential harms across the system’s lifecycle.
Step 4: Define metrics and measurement. Select quantitative and qualitative metrics appropriate to each AI system’s risk profile. Implement bias testing, explainability assessment, robustness evaluation, and performance monitoring.
Step 5: Implement risk treatments. Deploy technical controls (model validation, monitoring pipelines, access controls), procedural controls (human-in-the-loop checkpoints, escalation procedures), and organisational controls (training, documentation, review cadences).
Step 6: Establish continuous monitoring. Configure automated drift detection, performance degradation alerts, and anomaly monitoring. Define triggers for model retraining, intervention, or decommissioning.
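One widely used drift signal is the population stability index (PSI). The AI RMF does not prescribe a specific metric, so treat this as one illustrative option, with hypothetical bin values and thresholds.

```python
import math

def population_stability_index(expected: list[float], observed: list[float]) -> float:
    """PSI over pre-binned distributions (fractions summing to 1).

    A common heuristic: PSI < 0.1 is stable, 0.1 to 0.25 warrants
    investigation, and > 0.25 suggests significant drift.
    """
    eps = 1e-6  # guard against empty bins
    return sum((o - e) * math.log((o + eps) / (e + eps))
               for e, o in zip(expected, observed))

baseline = [0.25, 0.35, 0.25, 0.15]   # training-time feature distribution
current  = [0.10, 0.30, 0.30, 0.30]   # distribution observed in production

psi = population_stability_index(baseline, current)
if psi > 0.25:
    print(f"PSI={psi:.3f}: trigger retraining review")
```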
Step 7: Prepare for ISO 42001 certification. Use the official NIST-ISO crosswalk to map your AI RMF implementation to ISO 42001 clauses. Identify gaps where ISO 42001 requires documented management system processes. Build the auditable evidence repository.
Common Implementation Challenges
The flexibility paradox. The AI RMF’s intentional non-prescriptiveness is its most valuable feature and its most common obstacle. Teams accustomed to prescriptive frameworks struggle with the absence of specific control requirements. The solution is to use the Playbook’s suggested actions as a starting point while building organisation-specific procedures.
Scope management. Organisations often undercount their AI systems. AI embedded in vendor products, SaaS platforms, and internal tools is frequently overlooked. A thorough inventory, including third-party and shadow AI, is essential before meaningful risk management can begin.
Measurement maturity. The Measure function requires quantitative and qualitative assessment capabilities that many organisations lack. Bias testing, explainability evaluation, and adversarial testing require specialised skills and tooling that need investment.
Cross-functional coordination. AI risk management cannot live exclusively within a data science, legal, or compliance team. The Govern function’s requirement for cross-functional accountability means that engineering, product, legal, risk, and business teams must collaborate continuously.
Looking Ahead: 2026 and Beyond
NIST’s AI governance ecosystem is expanding on multiple fronts. AI RMF 1.1 guidance addenda, expanded profiles for specific use cases, finalisation of the Cyber AI Profile, and the SP 800-53 Control Overlays for AI are all expected through 2026. The integration of AI risk management with cybersecurity and privacy frameworks reflects NIST’s recognition that AI governance cannot exist in isolation from broader enterprise risk management.
For US organisations, the strategic message is clear: the NIST AI RMF is not a static document to be read once and filed. It is the foundation of an evolving governance ecosystem that federal regulators, industry standards bodies, and international frameworks all reference and build upon. Organisations that treat it as operational infrastructure rather than a compliance exercise will find themselves better prepared for whatever regulatory developments emerge.
Ready to operationalise AI risk management? Explore GAICC’s ISO/IEC 42001 certification programmes to complement your NIST AI RMF implementation with internationally recognised certification.
