Between 2021 and 2025, China enacted more sector-specific AI regulations than any other country. While the EU spent years debating a single comprehensive law and the United States relied on executive orders and voluntary frameworks, Beijing took a different path: rapid, iterative rulemaking targeting algorithms, deepfakes, generative AI services, and data security one regulation at a time. The result is an AI governance ecosystem that is sprawling, fast-moving, and increasingly influential beyond China’s borders.
At the centre of this ecosystem sits the AI Safety Governance Framework, a technical document published by China’s National Information Security Standardization Technical Committee (TC260) in September 2024 and updated to version 2.0 just twelve months later. The Framework is not a law. It functions more like an operational manual for risk classification, ethical principles, and governance measures that feed directly into binding national standards.
For organisations operating in or selling into China, and for governance professionals tracking global regulatory convergence, understanding this Framework is no longer optional. This article breaks down its structure, traces its evolution, compares it with the EU AI Act, the NIST AI Risk Management Framework, and ISO/IEC 42001, and explains what it means for cross-border AI compliance.
How China’s AI Regulatory Approach Differs from the EU and the US
China’s regulatory philosophy sits somewhere between the EU’s top-down legislative model and America’s fragmented, agency-led approach. Rather than passing a single comprehensive AI statute (as the EU did with the AI Act in 2024), China has built its governance architecture through a series of targeted regulations, each addressing a specific application of AI technology.
Three regulations form the regulatory backbone. The Provisions on the Management of Algorithmic Recommendations for Internet Information Services, effective March 2022, require companies to register algorithms that shape content feeds and mandate transparency about how those algorithms work. The Deep Synthesis Provisions, effective January 2023, govern AI-generated synthetic media, including deepfakes, mandating content labelling and identity verification. The Interim Measures for the Management of Generative Artificial Intelligence Services, effective August 2023, apply specifically to large language models and generative AI, requiring security assessments before public release and alignment with what Chinese regulators describe as core socialist values.
This sector-specific approach has a practical consequence for compliance teams: there is no single risk-tier classification system equivalent to the EU AI Act’s four-level framework (unacceptable, high, limited, minimal risk). Instead, obligations depend on the type of AI service being offered. A recommendation algorithm triggers different requirements than a generative AI chatbot, even if both carry comparable risk profiles.
The upside of this model is specificity. Each regulation is tailored to real-world use cases that regulators have already observed in the market. The downside is complexity. Multinational organisations must track multiple overlapping regulations, each issued by different agencies including the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology (MIIT), and the State Administration for Market Regulation (SAMR).
The AI Safety Governance Framework: Structure and Core Principles
The AI Safety Governance Framework, published by TC260, is the closest thing China has to a unified governance reference document. Version 1.0 arrived in September 2024 during National Cybersecurity Awareness Week. Version 2.0 followed in September 2025 at the same annual event, reflecting the speed at which AI technology and associated risks have evolved.
The Framework is organised around four pillars: governance principles, risk taxonomy, technical countermeasures, and governance measures. Each pillar connects to the others in a structured hierarchy.
Governance Principles
Four foundational principles run through the document. “People-centred” requires that AI development prioritise human welfare. “AI for good” establishes an expectation that systems contribute positively to society. “Secure and controllable” mandates that AI remain within defined operational boundaries. “Fairness and justice” addresses bias and discrimination. These principles, first articulated in China’s 2021 New Generation AI Ethics Norms, serve as the ethical foundation for all subsequent technical requirements.
Risk Taxonomy
Version 1.0 classified risks into two broad categories: inherent technology risks (bias in training data, lack of explainability, adversarial vulnerabilities) and application risks (deepfake misuse, critical infrastructure failures, cyberattacks). Version 2.0 introduced a third category, derivative risks, covering indirect societal effects such as job displacement, environmental impact from AI data centres, erosion of creative skills, and in extreme scenarios, the possibility of AI systems developing capabilities beyond human control.
Version 2.0 also introduced a structured risk grading system based on three criteria: application scenario, level of intelligence, and application scale. Risks are classified into five levels, from low to extremely serious. TC260 has called for this grading system to be formalised into binding national standards, and in September 2025, just ten days after the Framework’s release, it issued an open call for organisations to participate in drafting a formal standard for risk categorisation.
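To make the grading logic concrete, the sketch below maps the Framework’s three criteria to its five levels. This is a hypothetical illustration only: the Framework names the criteria and the levels but does not publish a scoring formula, so the 1–5 scales, the conservative “worst criterion wins” rule, and all field names here are invented for demonstration.

```python
from dataclasses import dataclass

# Invented for illustration; the Framework defines no numeric scales.
LEVELS = ["low", "moderate", "serious", "severe", "extremely serious"]

@dataclass
class AISystemProfile:
    scenario_criticality: int  # 1-5, e.g. 5 = critical infrastructure
    intelligence_level: int    # 1-5, e.g. 5 = frontier general-purpose model
    application_scale: int     # 1-5, e.g. 5 = nationwide consumer deployment

def grade_risk(profile: AISystemProfile) -> str:
    """Map the three criteria to one of five levels (illustrative only)."""
    # Conservative aggregation: the worst single criterion sets the level.
    score = max(profile.scenario_criticality,
                profile.intelligence_level,
                profile.application_scale)
    return LEVELS[score - 1]

chatbot = AISystemProfile(scenario_criticality=2,
                          intelligence_level=4,
                          application_scale=3)
print(grade_risk(chatbot))  # -> "severe"
```

Whatever formula the forthcoming national standard adopts, the structure will be similar: several observable attributes of a system feed a deterministic classification that then triggers level-specific obligations.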
Version 2.0: What Changed and Why It Matters
The gap between Framework 1.0 and 2.0 is substantial enough to warrant close attention. Where Version 1.0 functioned primarily as a governance declaration, Version 2.0 reads more like an operational manual.
Three changes stand out. First, the treatment of frontier AI risks was significantly expanded. Version 2.0 dedicates substantial attention to loss-of-control scenarios, where AI systems might act in ways that humans cannot predict or override. It also addresses misuse in CBRN domains (chemical, biological, radiological, and nuclear), a concern that mirrors language in Western AI safety discussions.
Second, Version 2.0 adds provisions for open-source AI governance. As lightweight, high-efficiency open-source models lowered the barrier to AI deployment throughout 2024 and 2025, regulators recognised that governance frameworks needed to address models distributed outside traditional commercial channels. The Framework now recommends closer collaboration between model developers and open-source communities on risk disclosure, prohibited use cases, and security obligations.
Third, Version 2.0 explicitly addresses international coordination: Section 5.10 calls for exploring mechanisms through which countries can share information about emerging AI risks. This signals that China’s approach to AI governance, while domestically focused, is increasingly oriented toward shaping global norms.
Key Regulatory Bodies and Their Roles
China’s AI governance structure involves multiple agencies with overlapping but distinct mandates, a feature that can confuse organisations accustomed to single-regulator models.
| Body | Primary Role | Key AI Actions |
|---|---|---|
| CAC | Internet content and cybersecurity regulation | Algorithm regulations, generative AI rules, Global AI Governance Initiative |
| TC260 | Technical standards development | AI Safety Governance Framework, Basic Security Requirements for GenAI |
| MIIT | Industry regulation and AI ethics | AI ethics review regulation, AI Plus Action Plan implementation |
| SAMR | Market regulation and standardisation | AI industry standardisation guidelines, market access requirements |
The Cyberspace Administration of China has emerged as the lead agency for AI governance, driving most of the sector-specific regulations and overseeing TC260’s technical standards work. TC260 itself operates as the standards body that translates regulatory principles into measurable technical requirements, a model that China describes as a “Law plus Standard” dual-drive approach.
In October 2025, amendments to the Cybersecurity Law explicitly brought AI into national legislation for the first time, adding provisions on algorithm R&D support, training data infrastructure, AI ethics rulemaking, and risk assessment governance. These amendments take effect on 1 January 2026.
Comparing China’s Framework with the EU AI Act, NIST AI RMF, and ISO/IEC 42001
Organisations operating across jurisdictions need to understand where these frameworks converge and diverge. The structural differences are significant, but the philosophical overlap is greater than most commentary suggests.
Risk Classification
The EU AI Act uses a four-tier risk classification (unacceptable, high, limited, minimal) that is technology-agnostic and applies horizontally across all AI systems. China’s Framework 2.0 uses a five-level grading system (low to extremely serious) based on application scenario, intelligence level, and scale. The NIST AI RMF avoids prescriptive risk tiers entirely, instead providing a process-based approach through its Govern, Map, Measure, and Manage functions. ISO/IEC 42001 requires organisations to conduct risk assessments but leaves the specific classification methodology to each organisation’s context.
Regulatory Force
The EU AI Act is binding law with substantial penalties (up to 35 million euros or 7% of global annual turnover). China’s Framework is technically voluntary, but its standards are rapidly translated into binding national requirements through TC260’s standards pipeline. The NIST AI RMF is voluntary guidance. ISO/IEC 42001 is a certifiable management system standard, adopted voluntarily but increasingly referenced by regulators as evidence of due diligence.
Scope and Applicability
The EU AI Act has extraterritorial reach, covering any AI system that affects EU residents regardless of where the provider is based. China’s regulations apply to AI services offered within China, with particular emphasis on content-facing applications. The NIST AI RMF targets primarily US organisations, though its principles are referenced globally. ISO/IEC 42001 applies to any organisation of any size that develops, provides, or uses AI systems.
The Convergence Point
Despite structural differences, all four frameworks share common ground on several principles: risk-based governance, transparency requirements, human oversight, accountability structures, and the need for ongoing monitoring after deployment. Organisations that build a governance programme around these shared principles can create a foundation that adapts to multiple jurisdictional requirements without starting from scratch for each one.
What China’s Cybersecurity Law Amendments Mean for AI in 2026
On 28 October 2025, China’s top legislature passed amendments to the Cybersecurity Law (CSL) that bring AI explicitly into national law for the first time. The amendments confirm government support for algorithm R&D, mandate construction of training data resources and computing infrastructure, require accelerated rulemaking on AI ethics, and strengthen AI risk assessment and security governance.
For organisations already subject to the CSL (which includes most technology companies operating in China), the amendments add a new compliance dimension. The practical impact will depend on implementing regulations that Chinese regulators are expected to issue throughout 2026. But the signal is clear: AI governance in China is moving from soft-law guidance toward hard-law obligations at an accelerating pace.
The content labelling requirements that took effect in September 2025 offer a preview of what is to come. The Measures for Labelling AI Generated or Synthesised Content require providers to attach explicit labels to AI-generated text, images, audio, and video, and to embed implicit labels in file metadata. App distribution platforms must verify that applications disclose whether they offer AI-generated content services.
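The dual-label structure can be sketched in a few lines. Note that this is an illustrative sketch only: the Measures require an explicit (user-visible) label plus an implicit (machine-readable) label embedded in file metadata, but the field names below are hypothetical placeholders, not the fields defined in the binding labelling standard.

```python
import json

def explicit_label(text: str) -> str:
    """Prepend a user-visible AI-generation notice to generated text."""
    return "[AI-generated content] " + text

def implicit_label(provider: str, content_id: str) -> str:
    """Build a machine-readable label suitable for embedding in metadata.

    Field names are hypothetical; the binding standard defines its own schema.
    """
    return json.dumps({
        "ai_generated": True,
        "service_provider": provider,
        "content_id": content_id,
    })

print(explicit_label("The quarterly forecast suggests..."))
print(implicit_label("example-genai-service", "c9f2-0001"))
```

The design point for compliance teams is that the two labels serve different audiences: the explicit label informs end users at the point of consumption, while the implicit label lets platforms and regulators verify provenance programmatically after the content has been redistributed.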
Practical Implications for Global AI Compliance
Compliance teams at multinational organisations face a specific challenge: how to build a governance programme that satisfies Chinese requirements without duplicating effort for EU, US, and international standards compliance.
Three practical strategies emerge from the current regulatory landscape.
Anchor on ISO/IEC 42001 as a governance baseline. The standard’s management system approach (leadership commitment, risk assessment, operational controls, performance evaluation, continual improvement) maps well to requirements across all three major jurisdictions. Clause 6 on planning aligns with both China’s risk assessment expectations and the NIST AI RMF’s Map function. Clause 9 on performance evaluation mirrors China’s emphasis on ongoing monitoring and the EU AI Act’s post-market surveillance requirements.
Map jurisdiction-specific requirements as additions, not replacements. China’s content labelling obligations, algorithm registration requirements, and data localisation rules are additive to a baseline governance programme. Treat them as jurisdiction-specific controls layered on top of a common framework, not as a separate compliance workstream.
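The layering idea can be expressed as a simple data structure: a common baseline of controls plus additive, per-jurisdiction overlays. The control names below are hypothetical shorthand for illustration, not identifiers taken from any standard or regulation.

```python
# Shared baseline (ISO/IEC 42001-style management system controls).
BASELINE = {
    "risk_assessment", "human_oversight", "post_deployment_monitoring",
    "transparency_reporting", "accountability_owner",
}

# Jurisdiction-specific controls layered on top, never replacing the baseline.
JURISDICTION_OVERLAYS = {
    "cn": {"content_labelling", "algorithm_registration", "data_localisation"},
    "eu": {"conformity_assessment", "post_market_surveillance"},
    "us": {"nist_rmf_mapping"},
}

def controls_for(jurisdictions: list[str]) -> set[str]:
    """Union of the common baseline with each jurisdiction's additive overlay."""
    required = set(BASELINE)
    for j in jurisdictions:
        required |= JURISDICTION_OVERLAYS.get(j, set())
    return required

cn_eu = controls_for(["cn", "eu"])
assert BASELINE <= cn_eu  # the baseline is never replaced, only extended
print(sorted(cn_eu))
```

Modelling obligations this way keeps the governance programme auditable: adding a new market means adding an overlay, and dropping one never touches the baseline that other jurisdictions rely on.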
Monitor TC260 standards releases as leading indicators. China’s regulatory model moves from framework to standard to binding requirement in a predictable sequence. TC260’s AI Safety Standards System (V1.0), published in January 2025, mapped the entire pipeline of forthcoming technical standards. Organisations that track these releases gain early warning of compliance obligations before they become enforceable.
China’s Role in Shaping Global AI Governance Norms
China is not building its AI governance framework in isolation. In October 2023, President Xi Jinping launched the Global AI Governance Initiative, positioning China as an active participant in international AI governance discussions. In September 2024, China introduced the AI Capacity-Building Action Plan for Good and for All, focused on collaboration with developing countries on AI infrastructure, training, and governance.
In July 2025, China hosted the World AI Conference and High-Level Meeting on Global AI Governance in Shanghai, which produced the Global AI Governance Action Plan, a 13-point roadmap for international coordination. At the same event, China proposed establishing a World AI Organisation (WAIO), an international body designed to coordinate global AI development and regulation.
These moves carry strategic significance. By providing technical assistance and governance frameworks to developing nations, China is shaping the default standards that emerging AI markets adopt. For global governance professionals, this means that Chinese standards and approaches are increasingly likely to appear in regulatory discussions well beyond mainland China.
Where This Is Heading
China’s AI governance trajectory points in one direction: from soft-law frameworks toward enforceable standards and comprehensive legislation. The AI Safety Governance Framework is the current centrepiece of this evolution, but it is a waypoint, not a destination. A draft comprehensive AI law has been under development since 2024, and while its timeline has shifted from imminent to deliberate, the regulatory infrastructure being built through TC260 standards and sectoral regulations is laying the groundwork for that legislation.
For governance professionals and compliance teams, the practical takeaway is clear: build your AI governance programme on internationally recognised foundations like ISO/IEC 42001, monitor China’s standards pipeline through TC260 releases, and treat jurisdiction-specific requirements as modular additions to a core governance framework. The organisations that treat global AI compliance as an integrated challenge rather than a set of disconnected regional problems will be best positioned as regulatory expectations continue to converge.
GAICC offers ISO/IEC 42001 certification programmes that prepare professionals and organisations to align with global AI governance requirements, including cross-jurisdictional compliance strategies that address Chinese, European, and US regulatory expectations. Explore GAICC’s Lead Implementer training to build the skills needed for this evolving landscape.
