For US companies operating in or selling to the UK market, this matters. The UK is now preparing a formal AI Bill expected in the second half of 2026, the Data (Use and Access) Act 2025 has introduced new automated decision-making provisions, and the renamed AI Security Institute is testing frontier models with increasing rigour. The regulatory landscape is shifting from voluntary principles to enforceable obligations, and organisations without a governance framework risk being caught off guard.
This article breaks down the current state of UK AI governance, explains what is changing, identifies where ISO/IEC 42001 fits into the picture, and outlines what American organisations should do now to prepare.
The UK’s Principles-Based Approach to AI Regulation
The UK government published its White Paper, A Pro-Innovation Approach to AI Regulation, in March 2023. Rather than creating a new regulatory body or a single piece of AI legislation, the White Paper established five cross-sector principles that existing regulators would apply within their own domains.
Those five principles are safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Regulators such as the Financial Conduct Authority (FCA), the Information Commissioner’s Office (ICO), the Competition and Markets Authority (CMA), and Ofcom each received instructions to interpret these principles within their specific sectors.
The model has real advantages. Financial regulators understand financial risk better than a generalist AI authority ever could. Healthcare regulators grasp clinical safety in ways that a technology-focused body would struggle to replicate. Sector-specific oversight means that AI governance is grounded in the actual context where systems operate.
It also has a significant limitation. When multiple regulators apply the same principles independently, businesses operating across sectors face inconsistent expectations, overlapping requirements, and gaps where no regulator has clear jurisdiction. A company deploying an AI-powered customer service tool that processes personal data, makes credit-related recommendations, and operates online could plausibly fall under the ICO, the FCA, and Ofcom simultaneously, each applying the same five principles in different ways.
Key Regulatory Bodies Shaping UK AI Governance
Several UK institutions play distinct roles in how AI is governed. Understanding their mandates helps US companies identify which obligations apply to their specific operations.
The Information Commissioner’s Office (ICO)
The ICO oversees data protection under the UK GDPR and Data Protection Act 2018. Since most AI systems process personal data, the ICO’s guidance on AI is particularly consequential. In June 2025, the ICO published its AI and Biometric Plan of Action for 2025 to 2026, which includes updating guidance on automated decision-making and profiling, developing a statutory code of practice on AI, and engaging with foundation model developers on data protection during training. The ICO also maintains an AI toolkit that helps organisations identify and mitigate data protection risks throughout the AI lifecycle.
The AI Security Institute (AISI)
Originally established as the AI Safety Institute in November 2023, this body was rebranded in February 2025 to reflect a stronger focus on national security threats, including model abuse for cyberattacks and weapons development. AISI has tested more than 30 frontier AI models, published its first Frontier AI Trends Report in December 2025, and launched open-source evaluation tools used by governments and companies worldwide. Its Alignment Project, backed by £27 million from partners including OpenAI, Microsoft, and Anthropic, funds 60 research projects focused on ensuring AI systems behave as intended.
The Regulatory Innovation Office (RIO)
Established in October 2024, RIO works to ensure regulatory bodies collaborate effectively and that regulation keeps pace with technological change. Its first-year report highlighted progress across priority sectors including space, drones, healthcare, and engineering biology, with plans to expand into additional areas.
Sector Regulators
The FCA, CMA, Ofcom, the Medicines and Healthcare products Regulatory Agency (MHRA), and others each apply the five AI principles within their domains. In February 2024, the government asked these regulators to publish strategic updates explaining their approach to AI. Their responses detail how principles like fairness and transparency translate into sector-specific expectations for financial services, telecommunications, and healthcare.
The Data (Use and Access) Act 2025: A Statutory Shift
Passed in June 2025, the Data (Use and Access) Act (DUAA) represents the UK’s first statutory step toward AI-specific obligations. While not an AI Act in the EU sense, it introduces provisions that directly affect how organisations deploy automated systems.
The most significant change involves automated decision-making (ADM). Under Article 22 of the UK GDPR as previously applied, solely automated decisions with legal or similarly significant effects were prohibited by default, permitted only under narrow exceptions such as explicit consent, contractual necessity, or legal authorisation. The DUAA inverts this model. Once the relevant provisions take effect in 2026, organisations will be able to rely on any lawful basis for such decisions, provided safeguards are in place: the right to human intervention, the ability to contest decisions, and transparency about the logic and criteria used.
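As a concrete illustration, the sketch below shows one way a deployer might record those safeguards alongside each automated decision. It is a minimal sketch under stated assumptions: the field names, the `contest` and `request_human_review` helpers, and the workflow are illustrative choices, not statutory terminology from the DUAA or UK GDPR.

```python
# Illustrative sketch only: schema and workflow are assumptions,
# not statutory terminology from the DUAA or UK GDPR.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AutomatedDecision:
    """Record of a solely automated decision with DUAA-style safeguards."""
    subject_id: str
    outcome: str
    logic_summary: str   # plain-language description of the criteria used
    lawful_basis: str    # e.g. "legitimate interests"
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    contested: bool = False
    human_review_requested: bool = False
    human_reviewer: str | None = None

    def contest(self) -> None:
        """Safeguard: the data subject can contest the decision."""
        self.contested = True
        self.request_human_review()

    def request_human_review(self) -> None:
        """Safeguard: route the decision to a human for intervention."""
        self.human_review_requested = True


decision = AutomatedDecision(
    subject_id="uk-resident-123",
    outcome="credit_limit_reduced",
    logic_summary="Score below threshold based on repayment history",
    lawful_basis="legitimate interests",
)
decision.contest()
assert decision.human_review_requested
```

The point of keeping a structured record like this is auditability: when a regulator or data subject asks how a decision was made and whether it could be contested, the answer is in the data rather than reconstructed after the fact.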
The Act also requires the Secretary of State to produce an economic impact assessment on the use of copyrighted works in AI development, due by March 2026. This responds to the contentious intersection of AI training data and intellectual property rights, an area where UK courts, policy makers, and creative industries remain in active disagreement.
For US companies, the DUAA means that any AI system making automated decisions about UK individuals needs documented safeguards, clear transparency mechanisms, and a process for human review. Treating this as a future concern rather than a present requirement is a strategic error.
The Coming UK AI Bill: What to Expect
In June 2025, the UK government confirmed that a comprehensive AI Bill would not be introduced before the second half of 2026. Secretary of State Peter Kyle indicated the legislation would address regulation of advanced AI models and establish rules around AI and copyright. The government also committed to forming a Parliamentary Working Group on AI and copyright law.
A Private Member’s Bill, the Artificial Intelligence (Regulation) Bill, was reintroduced in the House of Lords in March 2025 by Lord Holmes of Richmond. It proposes creating an AI Authority to coordinate regulators, enshrining the five AI principles in statute, and introducing transparency and labelling requirements. While this bill lacks government backing, its proposals signal the direction that formal legislation is likely to take.
The expected AI Bill will probably draw on lessons from the EU AI Act, including its risk-based classification system, while maintaining the UK’s preference for proportionate, sector-sensitive regulation. Organisations that have already aligned their AI governance with international standards such as ISO/IEC 42001 will find themselves substantially better prepared than those starting from scratch.
How UK AI Governance Compares: EU and US Approaches
Understanding the UK’s position requires placing it alongside the two other major regulatory frameworks that affect US businesses operating globally.
For a detailed breakdown of federal oversight, NIST guidance and emerging state-level obligations, see our guide to AI governance in the United States.
| Dimension | United Kingdom | European Union | United States |
|---|---|---|---|
| Regulatory Model | Principles-based, sector-led with forthcoming legislation | Comprehensive risk-based legislation (EU AI Act) | Voluntary frameworks (NIST AI RMF) with state-level laws |
| Primary Instrument | White Paper principles + DUAA 2025 + forthcoming AI Bill | EU AI Act (entered force Aug 2024) | Executive Order on AI (Oct 2023) + NIST AI RMF |
| Risk Classification | Not yet formalised; expected in AI Bill | Four tiers: minimal, limited, high, unacceptable | Risk-based approach in NIST framework (voluntary) |
| Enforcement | Sector regulators (ICO, FCA, CMA, Ofcom) | National supervisory authorities + EU AI Office | FTC, sector regulators, state attorneys general |
| AI Safety Body | AI Security Institute (AISI) | EU AI Office | Center for AI Standards and Innovation (formerly US AI Safety Institute) |
| Copyright & AI | Under consultation; DUAA requires impact assessment by March 2026 | Text and data mining exception with opt-out rights | Fair use doctrine; ongoing litigation |
| ISO 42001 Relevance | Recommended by government as practical governance tool | Maps to EU AI Act Articles 9-15 for high-risk systems | Voluntary adoption; complements NIST AI RMF |
Businesses operating across Asia should also review our analysis of the China AI Governance Framework to understand how standards-driven regulation differs from the UK’s principles-based approach.
Why ISO/IEC 42001 Matters for UK Market Access
ISO/IEC 42001, published in December 2023, is the first international standard specifically designed for AI Management Systems (AIMS). For US organisations operating in the UK, it serves a dual purpose: it provides a structured governance framework that satisfies current regulatory expectations, and it positions the organisation for compliance with the forthcoming AI Bill.
The UK government has explicitly encouraged adoption of international AI standards as practical tools for governance and assurance. ISO/IEC 42001 sits alongside ISO/IEC 23894 for AI risk management and ISO/IEC 24029 for robustness assessment as part of this recommended toolkit. Organisations implementing ISO 42001 can demonstrate to UK regulators that they have systematic processes for identifying AI risks, documenting governance decisions, maintaining transparency, and enabling continuous improvement.
Certification also carries weight in procurement. Major technology vendors, including Microsoft, have already certified AI services against ISO 42001. The standard is appearing in tender requirements and supply-chain assurance processes with increasing frequency. For US companies bidding on UK government contracts or partnering with UK enterprises, ISO 42001 certification is rapidly moving from differentiator to baseline expectation.
The standard follows the familiar Annex SL structure used by ISO 27001 and ISO 9001, which means organisations with existing ISO certifications can integrate AI governance into their current management systems rather than building an entirely separate compliance programme.
The AI Security Institute and Frontier Model Oversight
AISI’s December 2025 Frontier AI Trends Report provided the first public, data-driven analysis of how rapidly frontier AI capabilities are advancing. The findings are striking. In cybersecurity evaluations, AI models could complete apprentice-level tasks just 9% of the time in late 2023. By late 2025, that figure reached 50%. AISI tested the first model capable of completing expert-level cyber tasks typically requiring over ten years of human experience.
The Institute’s work extends well beyond capability testing. It conducts end-to-end biosecurity red-teaming with major AI labs, has pioneered benchmarks for detecting early signs of AI self-replication, and maintains open-source evaluation tools such as Inspect and ControlArena that are used by governments, companies, and academic researchers globally.
For US companies developing or deploying frontier AI systems in the UK market, AISI’s growing role means that pre-deployment safety evaluations may become a practical requirement rather than a voluntary engagement. The government’s stated intention to make voluntary safety agreements legally binding signals that the current model of cooperative engagement with AISI will eventually carry regulatory teeth.
Copyright, AI Training Data, and the Unresolved Debate
The intersection of AI and intellectual property remains one of the most contested areas in UK governance. The High Court's November 2025 ruling in Getty Images v Stability AI largely favoured the AI developer, holding that an AI model that does not store or reproduce copyrighted works is not an "infringing copy" under UK law. Getty's primary copyright claims had already fallen away on territorial grounds, because it could not show that the allegedly infringing training acts took place within the UK.
This decision, while significant, hardly settles the matter. The government’s AI and copyright consultation remains open, and the DUAA requires an economic impact assessment of different policy options by March 2026. The government is considering an exception to existing legislation that would permit commercial data mining with an opt-out mechanism for rights holders.
US companies training AI models on data that includes UK-originated content should monitor this area closely. The legal framework could shift substantially once the AI Bill is introduced, and retroactive compliance challenges are far more expensive than proactive governance.
Practical Steps for US Organisations
Waiting for the UK AI Bill to arrive before taking action is a mistake. The regulatory infrastructure is already operational through sector regulators, the DUAA, and AISI’s testing programme. Here is what US organisations should prioritise now.
Map your AI exposure to UK regulation. Identify every AI system that processes data from UK individuals, makes decisions affecting UK residents, or is marketed in the UK. Determine which sector regulators have jurisdiction over each system.
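A minimal inventory sketch can make this mapping concrete. In the example below, the system names, attributes, and regulator assignments are illustrative assumptions for the sketch, not legal determinations of jurisdiction.

```python
# Minimal inventory sketch: regulator assignments are illustrative
# examples, not legal determinations of jurisdiction.
AI_SYSTEM_INVENTORY = [
    {
        "system": "customer-service-chatbot",
        "processes_uk_personal_data": True,
        "makes_significant_decisions": False,
        "marketed_in_uk": True,
        "likely_regulators": ["ICO", "Ofcom"],
    },
    {
        "system": "credit-recommendation-engine",
        "processes_uk_personal_data": True,
        "makes_significant_decisions": True,
        "marketed_in_uk": True,
        "likely_regulators": ["ICO", "FCA"],
    },
]

# Flag systems that would trigger the DUAA's automated
# decision-making safeguards.
adm_in_scope = [
    s["system"] for s in AI_SYSTEM_INVENTORY
    if s["processes_uk_personal_data"] and s["makes_significant_decisions"]
]
print(adm_in_scope)  # ['credit-recommendation-engine']
```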
Implement automated decision-making safeguards. The DUAA’s new provisions require human intervention rights, contestability mechanisms, and transparency about decision logic. Build these into your AI systems before the provisions take effect.
Adopt ISO/IEC 42001 as your governance baseline. The standard provides the structured framework that UK regulators expect. It maps directly to the five AI principles, integrates with existing ISO certifications, and prepares your organisation for the AI Bill’s requirements.
Engage with AISI’s evaluation framework. If you develop or deploy frontier AI models, understand AISI’s testing methodology and prepare for the possibility that safety evaluations become mandatory.
Document your AI training data provenance. The copyright landscape is shifting. Maintain clear records of training data sources, licensing agreements, and any UK-originated content used in model development.
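One lightweight way to do this is a provenance register per source. The schema and values below are assumptions for the sketch, not a prescribed UK format; the opt-out field anticipates the text-and-data-mining exception the government is consulting on.

```python
# Illustrative provenance register: schema and values are assumptions
# for this sketch, not a prescribed UK format.
TRAINING_DATA_SOURCES = [
    {
        "source": "licensed-news-archive",
        "licence": "commercial licence, renewed 2025",
        "contains_uk_content": True,
        "opt_out_honoured": True,   # relevant if a TDM-style exception is adopted
    },
    {
        "source": "public-web-crawl-2024",
        "licence": "unlicensed / claimed fair use (US)",
        "contains_uk_content": True,
        "opt_out_honoured": False,
    },
]

# Surface sources that would need review if UK opt-out rules take effect.
needs_review = [
    s["source"] for s in TRAINING_DATA_SOURCES
    if s["contains_uk_content"] and not s["opt_out_honoured"]
]
print(needs_review)  # ['public-web-crawl-2024']
```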
Monitor sector-specific guidance. The ICO, FCA, CMA, and other regulators continue to issue updated guidance on AI within their domains. Subscribe to their communications and incorporate new requirements into your governance processes.
The Role of International Standards in Bridging Regulatory Gaps
One consistent theme across UK, EU, and US AI governance is the role of international standards as practical compliance tools. ISO/IEC 42001 provides governance structure, the NIST AI Risk Management Framework supplies risk measurement methodology, and the EU AI Act establishes legal boundaries. Together they cover the three things regulators consistently ask about, structure, measurement, and legal accountability, which is why governance professionals often describe them as a complementary ecosystem for responsible AI.
For US organisations operating across all three jurisdictions, ISO 42001 serves as a particularly effective foundation. Its Clause 5 on leadership aligns with NIST’s Govern function and the EU AI Act’s accountability requirements. Clause 6 on planning maps to NIST’s Map function and EU AI Act Article 9’s risk management mandates. Clause 8 on operational control corresponds to NIST’s Measure and Manage functions and the EU AI Act’s transparency and documentation requirements.
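That crosswalk is easier to maintain as data than as prose scattered across policy documents. The sketch below encodes the mapping from the paragraph above so audits can reference a single artifact; the groupings are an interpretive summary, not an official concordance published by ISO, NIST, or the EU.

```python
# Interpretive crosswalk based on the mapping described above; not an
# official concordance published by ISO, NIST, or the EU.
ISO_42001_CROSSWALK = {
    "Clause 5 (Leadership)": {
        "nist_ai_rmf": ["Govern"],
        "eu_ai_act": ["Accountability requirements"],
    },
    "Clause 6 (Planning)": {
        "nist_ai_rmf": ["Map"],
        "eu_ai_act": ["Article 9 risk management"],
    },
    "Clause 8 (Operation)": {
        "nist_ai_rmf": ["Measure", "Manage"],
        "eu_ai_act": ["Transparency and documentation requirements"],
    },
}

for clause, refs in ISO_42001_CROSSWALK.items():
    print(clause, "->", refs["nist_ai_rmf"], "|", refs["eu_ai_act"])
```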
Implementing ISO 42001 does not guarantee compliance with any specific regulation. What it does provide is a structured, auditable, internationally recognised framework that demonstrates to regulators in London, Brussels, and Washington that your organisation governs AI systematically rather than reactively.
Looking Ahead
The UK’s AI governance landscape is evolving from voluntary principles toward statutory obligations. The five-principle framework remains in place, but the DUAA, the forthcoming AI Bill, AISI’s expanding mandate, and sector regulators’ increasing specificity are collectively creating a more demanding compliance environment. For US organisations, the window between current voluntary expectations and future legal requirements is a strategic opportunity, not a reason for delay.
Implementing ISO/IEC 42001 now establishes the governance infrastructure that UK regulators will expect to see when the AI Bill takes effect. It demonstrates accountability, supports procurement eligibility, and reduces the cost of adapting to new requirements. The organisations that invest in structured AI governance today will spend far less time and money scrambling to comply tomorrow.
Ready to strengthen your AI governance for the UK market? Explore GAICC’s ISO/IEC 42001 certification programmes to build a management system that meets current expectations and prepares your organisation for what comes next.
