That gap between intense state activity and federal inaction defines AI governance in the United States right now. Organizations deploying AI systems face a tangle of overlapping state obligations, a shifting federal policy stance shaped by the December 2025 executive order, and a set of voluntary frameworks (most notably the NIST AI Risk Management Framework) that increasingly carry real legal weight. This guide maps the full picture: who regulates what, which laws apply to your organization, and the concrete steps that matter in 2026.
What Is AI Governance, and Why Is the U.S. Approach Unusual?
Most countries that have moved on AI regulation picked one of two approaches. The European Union wrote a single, comprehensive law, the EU AI Act, that applies uniformly across 27 member states. China imposed targeted rules on generative AI, algorithmic recommendations, and deepfakes through a series of sector-specific mandates issued by central authorities.
The United States did neither. There is no federal AI Act. No single agency owns the issue. Instead, AI governance here is a layered system: presidential executive orders that set policy direction, federal agencies enforcing existing consumer protection and civil rights law as it applies to AI, a fast-growing body of state legislation, and voluntary technical standards that organizations adopt on their own.
That layered structure isn’t accidental. It reflects a political environment where innovation-first rhetoric meets real enforcement pressure from states, where a Republican White House and Democratic-led states disagree fundamentally on how much regulation AI needs, and where the private sector is increasingly expected to self-govern because neither level of government has filled the gap.
The practical result: if your organization uses AI in hiring, customer-facing decisions, or content generation, you are almost certainly subject to multiple overlapping governance obligations right now, even without a federal law.
Federal AI Governance: Executive Orders, Agency Enforcement, and the Action Plan
Federal AI policy in the U.S. has been shaped almost entirely by executive action, not legislation. That makes it unusually volatile. Each administration can reverse the previous one’s direction, and each so far has.
The Executive Order Timeline
The first significant federal move came in February 2019, when President Trump signed Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence.” It directed agencies to prioritize AI research and development funding but imposed no obligations on the private sector.
President Biden’s October 2023 Executive Order 14110 shifted the tone sharply. It framed AI as a matter of civil rights and national security, required safety testing for powerful AI models, and directed agencies to issue guidance on AI use in employment, housing, and government services. It was the most prescriptive AI action any U.S. president had taken.
It lasted about fifteen months. In January 2025, President Trump revoked it with Executive Order 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence.” The stated goal: eliminate regulatory obstacles to AI innovation. In July 2025, the administration released the AI Action Plan, which explicitly framed AI governance as a competitive tool against China and directed federal agencies to export the U.S. “AI stack” internationally.
Then came the December 2025 order that reshaped the debate entirely.
The December 2025 Executive Order: What It Actually Does
Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence,” is the most consequential federal AI policy action of the current administration. But it’s widely misunderstood.
What the order does: It directs the Attorney General to establish an AI Litigation Task Force within 30 days to challenge state AI laws deemed inconsistent with federal policy. It instructs the Commerce Department to evaluate existing state AI laws. It makes states with “onerous” AI laws potentially ineligible for certain federal funding. And it directs the FTC to issue a policy statement by March 2026 on how existing federal law applies to AI.
What the order does not do: It does not repeal, preempt, or invalidate any state AI law. It does not create a new federal regulation. It does not establish the “minimally burdensome national policy framework” it calls for. It only signals the intent to create one.
Federal Agencies and Their AI Roles
No single agency “regulates AI” at the federal level. Instead, existing agencies apply their current authority to AI-related issues:
| Agency | AI Governance Role |
|---|---|
| FTC | Enforces consumer protection law against deceptive or unfair AI practices under Section 5 of the FTC Act. Directed by the Dec 2025 EO to issue an AI specific policy statement. |
| NIST | Develops technical standards and voluntary frameworks, including the AI Risk Management Framework (AI RMF 1.0) and the Generative AI Profile. |
| EEOC | Applies Title VII and the ADA to AI-driven employment decisions. Has issued guidance on algorithmic discrimination in hiring. |
| DOJ Civil Rights Division | Enforces civil rights law in AI-mediated decisions affecting housing, lending, and public services. |
| OMB | Issues guidance for federal agencies on AI procurement and use. Sets standards for how the government itself deploys AI. |
The key phrase in all of this: “existing authority.” These agencies aren’t enforcing AI-specific laws. They’re applying decades-old consumer protection, civil rights, and administrative frameworks to AI systems. That means enforcement actions tend to be reactive, triggered by complaints and harms, rather than proactive.
State AI Laws Taking Effect in 2026: What’s Actually Enforceable
While Washington debates frameworks, states are writing enforceable law. The pace is remarkable: the National Conference of State Legislatures reports that 38 states adopted AI measures in 2025, and multiple major laws take effect in the first half of 2026. For organizations operating across state lines, this creates real compliance obligations, not theoretical ones.
Colorado AI Act (SB 24-205): Effective June 30, 2026
Colorado’s law is the most comprehensive state AI regulation in the country, and it’s the closest thing the U.S. has to the EU AI Act’s risk-based approach. It applies to any organization developing or deploying “high-risk AI systems,” defined as AI used to make or substantially assist “consequential decisions” in employment, education, financial services, healthcare, housing, insurance, and legal services.
What it requires: Developers must document known risks, describe training data, and disclose limitations to deployers. Deployers must conduct annual impact assessments, provide consumer notice and opt-out rights, and use “reasonable care” to avoid algorithmic discrimination. Violations count as deceptive trade practices, enforced exclusively by the state attorney general. There is no private right of action.
One provision matters more than the rest: the law explicitly recognizes compliance with the NIST AI Risk Management Framework as evidence of reasonable care. That’s a safe harbor, and it’s turning a voluntary standard into something with real legal teeth.
California’s Multi-Layered AI Laws (Effective January 1, 2026)
California didn’t pass one AI law. It passed several, each targeting a different angle:
| Law | What It Requires |
|---|---|
| SB 53 (TFAIA) | Frontier AI transparency and safety: applies to developers of large-scale AI systems. Requires safety assessments and incident reporting for models exceeding defined compute thresholds. |
| SB 942 | AI content labeling: mandates disclosure when content is AI-generated, with specific watermarking and labeling requirements. |
| AB 2013 | Training data transparency: requires AI providers to disclose information about the datasets used to train their models. |
| Civil Rights Dept. Regs | Employment AI discrimination: restricts discriminatory use of AI in employment, makes bias testing relevant to discrimination claims, imposes recordkeeping requirements. |
California’s approach is fragmented by design. Each law targets a specific harm rather than creating a comprehensive framework. That makes compliance more complex for organizations, because each law has different applicability criteria, effective dates, and enforcement mechanisms.
Illinois and Texas: Two Different Philosophies
Illinois HB 3773, effective January 1, 2026, amends the state’s Human Rights Act to prohibit employers from using AI that results in discrimination against protected classes. It’s narrow in scope, focused solely on employment, but it carries the highest litigation risk of any state AI law because it includes a private right of action. Individuals can sue directly. No other major state AI law provides that.
Texas took the opposite approach. The Responsible Artificial Intelligence Governance Act (HB 149), also effective January 1, 2026, prohibits specific harmful AI uses (deepfakes, child exploitation, incitement to harm) but explicitly states that a showing of disparate impact alone is insufficient to demonstrate discriminatory intent. It creates a regulatory sandbox for testing AI systems and provides no private right of action. Business-friendly is an understatement.
Which State Laws Apply to Your Organization?
This is the question most guides skip, and it’s the one that actually matters. The answer depends on three factors: where you operate (or where your users are), how you use AI, and your company’s size.
Colorado’s law triggers based on high-risk AI use, not company size. Even small businesses must comply if they deploy AI for consequential decisions. California’s CCPA-adjacent rules have revenue and data volume thresholds. Illinois applies to any employer using AI in hiring decisions for Illinois-based employees. Texas applies broadly but has the least onerous requirements.
A company headquartered in Georgia that uses AI to screen job applicants across all 50 states? It’s subject to Illinois, Colorado, and California employment AI rules for applicants in those states. The interstate nature of AI deployment means geography matters less than it used to.
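For teams that want to turn this analysis into a first-pass screening step, the sketch below shows one way to encode it. Everything in it, the class, the rule names, and the trigger conditions, is a simplified illustration rather than a statement of what any statute actually requires; real applicability determinations belong with counsel.

```python
# Hypothetical, simplified sketch of a jurisdiction-applicability screen.
# Rule names and trigger conditions are illustrative placeholders, not
# restatements of statutory text.

from dataclasses import dataclass


@dataclass
class AIUseCase:
    makes_consequential_decisions: bool  # employment, lending, housing, etc.
    user_states: set[str]                # states where affected individuals reside
    used_in_hiring: bool
    generates_public_content: bool


def potentially_applicable_rules(use: AIUseCase) -> list[str]:
    """Return a rough list of state regimes worth a closer legal review."""
    rules = []
    if "CO" in use.user_states and use.makes_consequential_decisions:
        rules.append("Colorado AI Act (high-risk system duties)")
    if "IL" in use.user_states and use.used_in_hiring:
        rules.append("Illinois HB 3773 (employment AI, private right of action)")
    if "CA" in use.user_states and use.generates_public_content:
        rules.append("California SB 942 (AI content labeling)")
    if "TX" in use.user_states:
        rules.append("Texas HB 149 (prohibited-use review)")
    return rules


# Example: Georgia-headquartered employer screening applicants in CO, IL, GA.
print(potentially_applicable_rules(AIUseCase(True, {"CO", "IL", "GA"}, True, False)))
```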
The Federal vs. State Showdown: Will Preemption Actually Happen?
The December 2025 executive order names the Colorado AI Act specifically. That’s unusual. Executive orders rarely single out individual state laws. It signals that the administration views comprehensive state AI regulation as the primary threat to its “minimally burdensome” federal vision.
But here’s the constitutional reality: executive orders cannot preempt state law. Only Congress can do that through legislation, or courts can do it through judicial review. The president can direct the Attorney General to challenge state laws in court, can withhold federal funding, and can pressure state legislatures politically. Those are real tools. They are not, however, the same as repealing a law.
The AI Litigation Task Force has three available paths. First, it can argue that specific state AI laws violate the Commerce Clause by imposing undue burdens on interstate commerce. Second, it can argue that existing federal statutes (like the FTC Act) impliedly preempt conflicting state requirements. Third, it can use funding leverage to discourage state enforcement.
Each path faces obstacles. Commerce Clause challenges require showing that state laws discriminate against or disproportionately burden interstate commerce, a high bar for laws that regulate local harms from AI-driven decisions. Implied preemption requires showing that Congress intended to occupy the field, which is difficult when Congress has conspicuously not passed comprehensive AI legislation. Funding leverage works as political pressure but doesn’t change legal obligations.
The NIST AI Risk Management Framework: From Voluntary Standard to Legal Baseline
Calling the NIST AI Risk Management Framework “voluntary” is technically accurate and increasingly misleading. Released in January 2023, the framework was designed as guidance, not regulation. But Colorado’s AI Act changed its status by granting safe harbor to organizations that can demonstrate compliance with it. When a voluntary standard becomes a legal defense, it stops being optional for any organization that takes compliance seriously.
The Four Core Functions
The NIST AI RMF organizes risk management around four functions: Govern, Map, Measure, and Manage. Each targets a different phase of the AI lifecycle:
Govern is the organizational foundation. It covers accountability structures, risk culture, policy development, and legal compliance. If your company doesn’t have a defined owner for AI risk, someone with actual authority and not just an advisory role, Govern is where to start.
Map focuses on contextualizing AI systems within their broader environment. What is this system used for? Who does it affect? What are the potential impacts, whether technical, social, or ethical? Mapping is where organizations identify risks before they become incidents.
Measure is risk assessment: how likely is a given harm, how severe would it be, and how do you quantify something as ambiguous as algorithmic bias? The framework encourages both quantitative metrics (false positive rates, demographic parity scores) and qualitative assessment (stakeholder impact reviews).
Manage covers risk response. Once you’ve identified and measured a risk, what do you do about it? Mitigation strategies, monitoring for drift, incident response plans, and criteria for decommissioning AI systems that can’t be made safe.
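To make the Measure function concrete, here is a minimal sketch of the kind of quantitative check it calls for, using demographic parity difference and false positive rate as examples. These metrics are standard in the fairness literature; the NIST AI RMF itself does not prescribe any particular formula, so treat this as one illustrative option among many.

```python
# Minimal sketch of two quantitative fairness checks an organization might
# run under the Measure function. The metrics are common fairness measures,
# not anything mandated by the NIST AI RMF.


def selection_rate(decisions):
    """Fraction of positive outcomes (e.g., 'hire' or 'approve')."""
    return sum(decisions) / len(decisions)


def demographic_parity_difference(decisions_by_group):
    """Largest gap in selection rates across demographic groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)


def false_positive_rate(predictions, labels):
    """Share of true negatives that the model incorrectly flags as positive."""
    flags_on_negatives = [p for p, y in zip(predictions, labels) if y == 0]
    return sum(flags_on_negatives) / len(flags_on_negatives) if flags_on_negatives else 0.0


# Toy data: model decisions for two applicant groups.
decisions = {"group_a": [1, 1, 0, 1, 0], "group_b": [1, 0, 0, 0, 0]}
print(demographic_parity_difference(decisions))  # 0.6 - 0.2 = 0.4

predictions, labels = [1, 0, 1, 0], [0, 0, 1, 0]
print(false_positive_rate(predictions, labels))  # 1 of 3 true negatives flagged
```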
The Generative AI Profile (NIST AI 600-1)
NIST released the Generative AI Profile in July 2024, extending the base framework to address risks specific to large language models and multimodal AI. It focuses on four areas: governance of generative systems, content provenance (knowing where AI-generated content came from), pre-deployment testing, and incident disclosure.
The profile matters because generative AI introduces risks the original framework wasn’t designed to address: hallucinated outputs, training data contamination, prompt injection attacks, and the challenge of monitoring systems whose outputs vary with every interaction.
How NIST Fits with ISO/IEC 42001
Organizations operating internationally often ask whether they need both NIST and ISO/IEC 42001, the international AI management system standard published in 2023. The short answer: they serve different purposes. NIST provides risk management guidance. ISO 42001 provides a certifiable management system, similar to ISO 27001 but for AI. They’re complementary, not competing. An organization can use NIST AI RMF as its risk methodology within an ISO 42001 management system, satisfying both domestic safe harbor requirements and international certification expectations.
Building an Enterprise AI Governance Program That Actually Works
A 2025 AuditBoard study found that only one in four organizations has a fully operational AI governance program, despite widespread awareness of incoming regulations. The gap isn’t awareness. It’s execution. Most companies have drafted policies. Far fewer have turned those policies into daily operational practice.
The barriers are consistent across industries: unclear ownership (who is actually responsible for AI risk?), limited internal expertise, and resource constraints that make governance feel like a cost center competing with AI deployment budgets. The McKinsey State of AI 2025 report reinforces this: nearly half of organizations experienced measurable governance or ethical lapses tied to their AI initiatives.
What an Effective Program Looks Like
Six components separate organizations that are governing AI from those that are merely writing policies about it:
1. AI system inventory. You cannot govern what you haven’t cataloged. Every AI system in use, whether purchased, built in-house, or embedded in vendor tools, needs to be identified, classified by risk level, and mapped to applicable jurisdictions (a minimal sketch of such an inventory record follows this list).
2. Risk classification methodology. Not all AI use carries equal risk. A recommendation engine for blog posts and an AI system screening loan applications require fundamentally different levels of oversight. Adopt NIST’s approach: classify by potential severity of harm, not by the sophistication of the technology.
3. Impact assessment process. Required under Colorado law, aligned with EU AI Act expectations, and increasingly considered best practice everywhere. Impact assessments should be completed before deploying high risk systems and updated annually.
4. Transparency and disclosure practices. Different laws require different disclosures: consumer notice under Colorado, content labeling under California SB 942, employee notification under Illinois. Build a flexible disclosure framework that can adapt to jurisdiction-specific requirements.
5. Monitoring and incident response. AI systems drift. Models that were fair at deployment can develop bias as underlying data distributions shift. Continuous monitoring for performance degradation, demographic disparities, and security vulnerabilities is a governance requirement, not a nice-to-have.
6. Board level accountability. The Caremark line of cases holds corporate boards liable for failure to provide adequate oversight of “mission-critical” operations. For companies where AI drives material business decisions, that includes AI governance. Board members who cannot articulate their company’s AI risk posture face personal liability exposure.
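To ground components 1 and 2, here is a minimal sketch of what an inventory record with a severity-of-harm classification might look like. The field names and risk tiers are illustrative assumptions, not a schema required by any law or framework.

```python
# Illustrative sketch of an AI system inventory record with a simple
# severity-of-harm risk classification. Field names and tiers are
# assumptions for illustration, not a mandated schema.

from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"   # e.g., internal content recommendations
    LIMITED = "limited"   # user-facing but not consequential
    HIGH = "high"         # influences consequential decisions about people


@dataclass
class AISystemRecord:
    name: str
    owner: str                       # accountable person, not a committee
    source: str                      # "built in-house", "vendor", "embedded"
    affects_individuals: bool
    consequential_decision: bool     # employment, credit, housing, etc.
    jurisdictions: list[str] = field(default_factory=list)

    def risk_tier(self) -> RiskTier:
        # Classify by potential severity of harm, not technical sophistication.
        if self.consequential_decision:
            return RiskTier.HIGH
        if self.affects_individuals:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL


record = AISystemRecord("resume-screener", "VP People Ops", "vendor",
                        affects_individuals=True, consequential_decision=True,
                        jurisdictions=["IL", "CO", "CA"])
print(record.name, record.risk_tier())  # resume-screener RiskTier.HIGH
```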
The U.S. AI governance market reflects this urgency. It generated $59.2 million in 2025 and is projected to reach $354.1 million by 2033, a compound annual growth rate of 24.5%, according to Horizon Databook. That growth signals enterprise recognition that governance is a cost of doing AI, not an optional add on.
How the U.S. Approach Compares to the Rest of the World
A multinational company deploying AI in the U.S., EU, and UK faces three fundamentally different governance regimes.
| Dimension | United States | European Union | China | United Kingdom |
|---|---|---|---|---|
| Primary mechanism | State laws + exec. orders + voluntary standards | EU AI Act (single comprehensive law) | Sector-specific mandates from central authorities | Sector regulators + principles-based guidance |
| Binding? | State laws: yes. Federal EOs: policy direction only. NIST: voluntary (with CO safe harbor) | Yes, with fines up to €35M or 7% of global revenue | Yes, with administrative penalties | Partially. Existing sector laws apply; new AI-specific rules still emerging |
| Risk approach | Varies by state; CO uses risk-based; TX uses prohibited-use approach | Tiered risk categories (unacceptable, high, limited, minimal) | Technology-specific (GenAI, algorithms, deepfakes) | Pro-innovation, proportionate, sector-led |
| Central authority? | No single authority | National competent authorities + EU AI Office | Cyberspace Administration of China (CAC) | No single authority; sector regulators coordinate |
The geopolitical dimension matters here. The AI Action Plan explicitly positions U.S. governance as a competitive instrument. The administration’s goal is to export what it calls the American “AI stack” (hardware, models, software, and governance norms) to allied countries as an alternative to Chinese AI infrastructure. Governance isn’t just about domestic policy; it’s about global influence.
For organizations managing multinational compliance, the practical takeaway is sobering: there is no single governance program that satisfies all major jurisdictions. But NIST AI RMF provides the closest thing to a universal foundation, because its structure aligns with ISO 42001 internationally and provides safe harbor domestically under Colorado law.
What Organizations Should Do Right Now: Seven Steps for 2026
Waiting for regulatory clarity before acting is a strategy. It’s just a bad one. State laws are already enforceable. Federal policy is shifting quarterly. And the organizations that built governance programs early are the ones that won’t need to scramble when the next rule drops.
1. Inventory every AI system. This sounds basic. It isn’t. Most organizations significantly undercount their AI exposure because they don’t account for AI embedded in vendor tools: CRM scoring, resume-screening plugins, automated underwriting. If a tool makes or influences a decision about a person, it counts.
2. Adopt NIST AI RMF as your governance backbone. Not because it’s the only framework available, but because it carries legal weight in Colorado, aligns with ISO 42001 internationally, and provides a structured approach that scales. Even if no law required it, the framework represents the clearest articulation of what “reasonable care” looks like in AI risk management.
3. Run impact assessments for high risk systems. Colorado requires them. The EU AI Act requires them. Best practice demands them. At minimum, assess: what decisions does this system influence, who is affected, what happens when it’s wrong, and how do you know if it’s drifting toward discriminatory outcomes?
4. Build jurisdiction-specific disclosure capabilities. California needs content labeling. Colorado needs consumer notice and opt-out mechanisms. Illinois needs employee notification before AI-assisted hiring. Don’t build three separate systems. Build one flexible disclosure framework that can be configured per jurisdiction (see the configuration sketch after this list).
5. Establish monitoring and incident response. Deploy continuous monitoring for bias, performance drift, and security vulnerabilities in production AI systems. Document your incident response plan before you need it.
6. Brief your board. If AI drives material business decisions, your board needs to understand the governance structure, risk exposure, and compliance posture. Caremark duties are real. Ignorance is not a defense.
7. Track the federal-state conflict actively. Subscribe to the IAPP’s state AI governance tracker. Monitor Commerce Department evaluations of state laws. Watch for AI Litigation Task Force actions. The legal ground is shifting, and your compliance program needs to shift with it.
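As a concrete illustration of step 4, the sketch below shows a configuration-driven approach to jurisdiction-specific disclosures. The rule entries are rough shorthand for the obligations described above, not statutory text, and the jurisdiction keys are placeholders for whatever applicability analysis your counsel produces.

```python
# Hypothetical configuration-driven disclosure framework (step 4 above).
# The requirement keys are illustrative summaries, not statutory language.

DISCLOSURE_RULES = {
    "CO": {"consumer_notice": True, "opt_out": True, "content_label": False},
    "CA": {"consumer_notice": True, "opt_out": False, "content_label": True},
    "IL": {"consumer_notice": True, "opt_out": False, "content_label": False,
           "employee_notification": True},
}


def required_disclosures(jurisdictions):
    """Union of disclosure obligations across every jurisdiction touched."""
    merged = {}
    for j in jurisdictions:
        for key, needed in DISCLOSURE_RULES.get(j, {}).items():
            merged[key] = merged.get(key, False) or needed
    return {k for k, v in merged.items() if v}


print(required_disclosures(["CO", "IL"]))
# {'consumer_notice', 'opt_out', 'employee_notification'}
```

The design choice this illustrates is the one the step recommends: a single disclosure engine with per-jurisdiction configuration, rather than a separate compliance workflow for each state law.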
Where This Leaves Organizations
AI governance in the United States is not a single system. It’s an ecosystem of overlapping federal signals, enforceable state laws, and voluntary standards that are gaining legal weight. That ecosystem is messy, politically contested, and changing faster than most compliance programs can adapt. But the core calculus for organizations is simpler than it looks: the NIST AI RMF provides the governance backbone, state laws define the floor for compliance, and the federal preemption debate will play out in courts over years, not months.
The most useful thing you can do this quarter is inventory your AI systems, classify them by risk, and start building the governance infrastructure that every jurisdiction is converging toward, even if they’re arriving from different directions.
Professionals seeking structured learning and certification pathways can explore formal ISO/IEC 42001 Certification Courses to build governance expertise.
