GAICC AI Conference & Awards 2026 "Governing the Future – Building Responsible, Safe and Human-centric AI"


China AI Governance Framework: What Global Businesses Need to Know in 2026

Between 2021 and 2025, China enacted more sector-specific AI regulations than any other country. While the EU spent years debating a single comprehensive law and the United States relied on executive orders and voluntary frameworks, Beijing took a different path: rapid, iterative rulemaking targeting algorithms, deepfakes, generative AI services, and data security one regulation at a time. The result is an AI governance ecosystem that is sprawling, fast-moving, and increasingly influential beyond China’s borders.

At the centre of this ecosystem sits the AI Safety Governance Framework, a technical document published by China’s National Information Security Standardization Technical Committee (TC260) in September 2024 and updated to version 2.0 just twelve months later. The Framework is not a law. It functions more like an operational manual for risk classification, ethical principles, and governance measures that feed directly into binding national standards.

For organisations operating in or selling into China, and for governance professionals tracking global regulatory convergence, understanding this Framework is no longer optional. This article breaks down its structure, traces its evolution, compares it with the EU AI Act, the NIST AI Risk Management Framework, and ISO/IEC 42001, and explains what it means for cross-border AI compliance.

How China’s AI Regulatory Approach Differs from the EU and the US

China’s regulatory philosophy sits somewhere between the EU’s top-down legislative model and America’s fragmented, agency-led approach. Rather than passing a single comprehensive AI statute (as the EU did with the AI Act in 2024), China has built its governance architecture through a series of targeted regulations, each addressing a specific application of AI technology.

Three regulations form the regulatory backbone. The Provisions on the Management of Algorithmic Recommendations for Internet Information Services, effective March 2022, require companies to register algorithms that shape content feeds and mandate transparency about how those algorithms work. The Deep Synthesis Provisions, effective January 2023, govern AI-generated synthetic media, including deepfakes, mandating content labelling and identity verification. The Interim Measures for the Management of Generative Artificial Intelligence Services, effective August 2023, apply specifically to large language models and generative AI, requiring security assessments before public release and alignment with what Chinese regulators describe as core socialist values.

This sector-specific approach has a practical consequence for compliance teams: there is no single risk-tier classification system equivalent to the EU AI Act’s four-level framework (unacceptable, high, limited, minimal risk). Instead, obligations depend on the type of AI service being offered. A recommendation algorithm triggers different requirements than a generative AI chatbot, even if both carry comparable risk profiles.

The upside of this model is specificity. Each regulation is tailored to real-world use cases that regulators have already observed in the market. The downside is complexity. Multinational organisations must track multiple overlapping regulations, each issued by different agencies including the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology (MIIT), and the State Administration for Market Regulation (SAMR).

The AI Safety Governance Framework: Structure and Core Principles

The AI Safety Governance Framework, published by TC260, is the closest thing China has to a unified governance reference document. Version 1.0 arrived in September 2024 during National Cybersecurity Awareness Week. Version 2.0 followed in September 2025 at the same annual event, reflecting the speed at which AI technology and associated risks have evolved.

The Framework is organised around four pillars: governance principles, risk taxonomy, technical countermeasures, and governance measures. Each pillar connects to the others in a structured hierarchy.

Governance Principles

Four foundational principles run through the document. “People-centred” requires that AI development prioritise human welfare. “AI for good” establishes an expectation that systems contribute positively to society. “Secure and controllable” mandates that AI remain within defined operational boundaries. “Fairness and justice” addresses bias and discrimination. These principles, first articulated in China’s 2021 New Generation AI Ethics Norms, serve as the ethical foundation for all subsequent technical requirements.

Risk Taxonomy

Version 1.0 classified risks into two broad categories: inherent technology risks (bias in training data, lack of explainability, adversarial vulnerabilities) and application risks (deepfake misuse, critical infrastructure failures, cyberattacks). Version 2.0 introduced a third category, derivative risks, covering indirect societal effects such as job displacement, environmental impact from AI data centres, erosion of creative skills, and in extreme scenarios, the possibility of AI systems developing capabilities beyond human control.

Version 2.0 also introduced a structured risk grading system based on three criteria: application scenario, level of intelligence, and application scale. Risks are classified into five levels, from low to extremely serious. TC260 has called for this grading system to be formalised into binding national standards, and in September 2025, just ten days after the Framework’s release, it issued an open call for organisations to participate in drafting a formal standard for risk categorisation.
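TC260 has not published a scoring formula for the grading system, so the sketch below is purely illustrative: it assumes each of the three criteria (application scenario, intelligence level, application scale) can be scored 1 to 5, and takes the highest single criterion as the overall level. The class name, field names, and heuristic are hypothetical, not taken from the Framework.

```python
from dataclasses import dataclass

# The five levels named in Framework 2.0, from low to extremely serious.
LEVELS = ["low", "moderate", "serious", "severe", "extremely serious"]

@dataclass
class AISystem:
    # Illustrative 1-5 scores; the Framework does not define a scoring scale.
    scenario_criticality: int   # 1 (consumer app) .. 5 (critical infrastructure)
    intelligence_level: int     # 1 (narrow model) .. 5 (frontier model)
    application_scale: int      # 1 (internal tool) .. 5 (mass public deployment)

def grade_risk(system: AISystem) -> str:
    """Map the three criteria to one of five levels (hypothetical heuristic:
    the worst single criterion drives the overall grade)."""
    score = max(
        system.scenario_criticality,
        system.intelligence_level,
        system.application_scale,
    )
    return LEVELS[score - 1]

# A widely deployed frontier chatbot lands in the top level under this heuristic.
chatbot = AISystem(scenario_criticality=2, intelligence_level=4, application_scale=5)
print(grade_risk(chatbot))
```

The max-based heuristic is one plausible design choice among several; a regulator could equally weight or sum the criteria. The point is that a three-criterion, five-level scheme is straightforward to operationalise once the binding standard defines the scales.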

Version 2.0: What Changed and Why It Matters

The gap between Framework 1.0 and 2.0 is substantial enough to warrant close attention. Where version 1.0 functioned primarily as a governance declaration, version 2.0 reads more like an operational manual.

Three changes stand out. First, the treatment of frontier AI risks was significantly expanded. Version 2.0 dedicates substantial attention to loss-of-control scenarios, where AI systems might act in ways that humans cannot predict or override. It also addresses misuse in CBRN domains (chemical, biological, radiological, and nuclear), a concern that mirrors language in Western AI safety discussions.

Second, Version 2.0 adds provisions for open-source AI governance. As lightweight, high-efficiency open-source models lowered the barrier to AI deployment throughout 2024 and 2025, regulators recognised that governance frameworks needed to address models distributed outside traditional commercial channels. The Framework now recommends closer collaboration between model developers and open-source communities on risk disclosure, prohibited use cases, and security obligations.

Third, Version 2.0 explicitly addresses international cooperation: Section 5.10 calls for exploring mechanisms to share information about emerging AI risks across borders. This signals that China’s approach to AI governance, while domestically focused, is increasingly oriented toward shaping global norms.

Key Regulatory Bodies and Their Roles

China’s AI governance structure involves multiple agencies with overlapping but distinct mandates, a feature that can confuse organisations accustomed to single-regulator models.

| Body | Primary Role | Key AI Actions |
| --- | --- | --- |
| CAC | Internet content and cybersecurity regulation | Algorithm regulations, generative AI rules, Global AI Governance Initiative |
| TC260 | Technical standards development | AI Safety Governance Framework, Basic Security Requirements for GenAI |
| MIIT | Industry regulation and AI ethics | AI ethics review regulation, AI Plus Action Plan implementation |
| SAMR | Market regulation and standardisation | AI industry standardisation guidelines, market access requirements |

The Cyberspace Administration of China has emerged as the lead agency for AI governance, driving most of the sector-specific regulations and overseeing TC260’s technical standards work. TC260 itself operates as the standards body that translates regulatory principles into measurable technical requirements, a model that China describes as a “Law plus Standard” dual-drive approach.

In October 2025, amendments to the Cybersecurity Law explicitly brought AI into national legislation for the first time, adding provisions on algorithm R&D support, training data infrastructure, AI ethics rulemaking, and risk assessment governance. These amendments take effect on 1 January 2026.

Comparing China’s Framework with the EU AI Act, NIST AI RMF, and ISO/IEC 42001

Organisations operating across jurisdictions need to understand where these frameworks converge and diverge. The structural differences are significant, but the philosophical overlap is greater than most commentary suggests.

Risk Classification

The EU AI Act uses a four-tier risk classification (unacceptable, high, limited, minimal) that is technology-agnostic and applies horizontally across all AI systems. China’s Framework 2.0 uses a five-level grading system (low to extremely serious) based on application scenario, intelligence level, and scale. The NIST AI RMF avoids prescriptive risk tiers entirely, instead providing a process-based approach through its Govern, Map, Measure, and Manage functions. ISO/IEC 42001 requires organisations to conduct risk assessments but leaves the specific classification methodology to each organisation’s context.

Regulatory Force

The EU AI Act is binding law with substantial penalties (up to 35 million euros or 7% of global annual turnover). China’s Framework is technically voluntary, but its standards are rapidly translated into binding national requirements through TC260’s standards pipeline. The NIST AI RMF is voluntary guidance. ISO/IEC 42001 is a certifiable management system standard, adopted voluntarily but increasingly referenced by regulators as evidence of due diligence.

Scope and Applicability

The EU AI Act has extraterritorial reach, covering any AI system that affects EU residents regardless of where the provider is based. China’s regulations apply to AI services offered within China, with particular emphasis on content-facing applications. The NIST AI RMF targets primarily US organisations, though its principles are referenced globally. ISO/IEC 42001 applies to any organisation of any size that develops, provides, or uses AI systems.

The Convergence Point

Despite structural differences, all four frameworks share common ground on several principles: risk-based governance, transparency requirements, human oversight, accountability structures, and the need for ongoing monitoring after deployment. Organisations that build a governance programme around these shared principles can create a foundation that adapts to multiple jurisdictional requirements without starting from scratch for each one.

What China’s Cybersecurity Law Amendments Mean for AI in 2026

On 28 October 2025, China’s top legislature passed amendments to the Cybersecurity Law (CSL) that bring AI explicitly into national law for the first time. The amendments confirm government support for algorithm R&D, mandate construction of training data resources and computing infrastructure, require accelerated rulemaking on AI ethics, and strengthen AI risk assessment and security governance.

For organisations already subject to the CSL (which includes most technology companies operating in China), the amendments add a new compliance dimension. The practical impact will depend on implementing regulations that Chinese regulators are expected to issue throughout 2026. But the signal is clear: AI governance in China is moving from soft-law guidance toward hard-law obligations at an accelerating pace.

The content labelling requirements that took effect in September 2025 offer a preview of what is to come. The Measures for Labelling AI Generated or Synthesised Content require providers to attach explicit labels to AI-generated text, images, audio, and video, and to embed implicit labels in file metadata. App distribution platforms must verify that applications disclose whether they offer AI-generated content services.
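The labelling measures distinguish explicit labels (visible to users) from implicit labels (embedded in metadata). A minimal sketch of that two-track obligation follows; the JSON field names are hypothetical, since the binding national standard defines the actual metadata schema, not this article.

```python
import json

def make_labels(provider: str, content_id: str) -> tuple[str, str]:
    """Produce both label types for a piece of AI-generated content.

    Returns a visible label string (explicit) and a JSON metadata record
    (implicit). Field names below are illustrative only; the actual schema
    is set by the national labelling standard.
    """
    # Explicit label: rendered visibly on or alongside the content.
    explicit = "AI-generated content"
    # Implicit label: machine-readable record embedded in file metadata.
    implicit = json.dumps({
        "ai_generated": True,
        "provider": provider,
        "content_id": content_id,
    }, ensure_ascii=False)
    return explicit, implicit

explicit_label, implicit_label = make_labels("ExampleAI", "img-0001")
```

In practice the implicit record would be written into format-specific metadata (EXIF for images, ID3 for audio, and so on) rather than returned as a string; the sketch only shows the explicit/implicit split the measures require.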

Practical Implications for Global AI Compliance

Compliance teams at multinational organisations face a specific challenge: how to build a governance programme that satisfies Chinese requirements without duplicating effort for EU, US, and international standards compliance.

Three practical strategies emerge from the current regulatory landscape.

Anchor on ISO/IEC 42001 as a governance baseline. The standard’s management system approach (leadership commitment, risk assessment, operational controls, performance evaluation, continual improvement) maps well to requirements across all three major jurisdictions. Clause 6 on planning aligns with both China’s risk assessment expectations and the NIST AI RMF’s Map function. Clause 9 on performance evaluation mirrors China’s emphasis on ongoing monitoring and the EU AI Act’s post-market surveillance requirements.

Map jurisdiction-specific requirements as additions, not replacements. China’s content labelling obligations, algorithm registration requirements, and data localisation rules are additive to a baseline governance programme. Treat them as jurisdiction-specific controls layered on top of a common framework, not as a separate compliance workstream.

Monitor TC260 standards releases as leading indicators. China’s regulatory model moves from framework to standard to binding requirement in a predictable sequence. TC260’s AI Safety Standards System (V1.0), published in January 2025, mapped the entire pipeline of forthcoming technical standards. Organisations that track these releases gain early warning of compliance obligations before they become enforceable.
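The second strategy, treating jurisdiction-specific obligations as additive layers rather than separate workstreams, can be sketched as a simple control registry. The control names and descriptions below are illustrative, not drawn from any standard’s official control catalogue.

```python
# Hypothetical baseline controls, loosely modelled on ISO/IEC 42001 clauses.
BASELINE = {
    "risk_assessment": "Periodic AI risk assessment (cf. ISO/IEC 42001 Clause 6)",
    "monitoring": "Post-deployment performance monitoring (cf. Clause 9)",
}

# Jurisdiction-specific controls layered on top of the shared baseline.
JURISDICTION_OVERLAYS = {
    "CN": {
        "algorithm_registration": "Register recommendation algorithms with the CAC",
        "content_labelling": "Explicit and implicit labels on AI-generated content",
    },
    "EU": {
        "post_market_surveillance": "EU AI Act post-market monitoring plan",
    },
}

def controls_for(jurisdictions: list[str]) -> dict[str, str]:
    """Merge the common baseline with the overlay for each target jurisdiction."""
    merged = dict(BASELINE)
    for j in jurisdictions:
        merged.update(JURISDICTION_OVERLAYS.get(j, {}))
    return merged

cn_programme = controls_for(["CN"])   # baseline + Chinese controls
eu_cn_programme = controls_for(["EU", "CN"])  # baseline + both overlays
```

The design point is that the baseline is built once and never forked: adding a market means adding an overlay, which keeps shared controls (risk assessment, monitoring) from being duplicated per jurisdiction.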

China’s Role in Shaping Global AI Governance Norms

China is not building its AI governance framework in isolation. In October 2023, President Xi Jinping launched the Global AI Governance Initiative, positioning China as an active participant in international AI governance discussions. In September 2024, China introduced the AI Capacity-Building Action Plan for Good and for All, focused on collaboration with developing countries on AI infrastructure, training, and governance.

In July 2025, China hosted the World AI Conference and High-Level Meeting on Global AI Governance in Shanghai, which produced the Global AI Governance Action Plan, a 13-point roadmap for international coordination. At the same event, China proposed establishing a World AI Organisation (WAIO), an international body designed to coordinate global AI development and regulation.

These moves carry strategic significance. By providing technical assistance and governance frameworks to developing nations, China is shaping the default standards that emerging AI markets adopt. For global governance professionals, this means that Chinese standards and approaches are increasingly likely to appear in regulatory discussions well beyond mainland China.

Where This Is Heading

China’s AI governance trajectory points in one direction: from soft-law frameworks toward enforceable standards and comprehensive legislation. The AI Safety Governance Framework is the current centrepiece of this evolution, but it is a waypoint, not a destination. A draft comprehensive AI law has been under development since 2024, and while its timeline has shifted from imminent to deliberate, the regulatory infrastructure being built through TC260 standards and sectoral regulations is laying the groundwork for that legislation.

For governance professionals and compliance teams, the practical takeaway is clear: build your AI governance programme on internationally recognised foundations like ISO/IEC 42001, monitor China’s standards pipeline through TC260 releases, and treat jurisdiction-specific requirements as modular additions to a core governance framework. The organisations that treat global AI compliance as an integrated challenge rather than a set of disconnected regional problems will be best positioned as regulatory expectations continue to converge.

GAICC offers ISO/IEC 42001 certification programmes that prepare professionals and organisations to align with global AI governance requirements, including cross-jurisdictional compliance strategies that address Chinese, European, and US regulatory expectations. Explore GAICC’s Lead Implementer training to build the skills needed for this evolving landscape.

Frequently Asked Questions (FAQs)

Is China's AI Safety Governance Framework legally binding?

No. The Framework itself is a technical guidance document published by TC260, not a regulation. However, TC260 systematically translates Framework provisions into formal national standards, and those standards can become binding through incorporation into regulations. The practical effect is that Framework recommendations frequently become enforceable requirements within 12 to 24 months.

How does China's approach to AI risk classification differ from the EU AI Act?

The EU AI Act uses a four-tier system (unacceptable, high, limited, minimal) that is technology-agnostic. China's Framework 2.0 uses a five-level grading system based on three criteria: application scenario, intelligence level, and application scale. The Chinese system is more granular but less prescriptive about which specific applications fall into which category, leaving sectoral regulators to adapt the system for their domains.

Does ISO/IEC 42001 help with compliance in China?

ISO/IEC 42001 provides a strong governance foundation that aligns with several Chinese requirements, particularly around risk assessment, documentation, and performance monitoring. It does not replace China-specific obligations like algorithm registration, content labelling, or data localisation. Organisations should treat ISO 42001 as a governance backbone and layer jurisdiction-specific Chinese requirements on top.

What is TC260 and why does it matter?

TC260 is the National Information Security Standardization Technical Committee, one of China's most influential standards bodies for AI. It develops the technical standards that implement regulatory principles, including the Basic Security Requirements for Generative AI Services and the AI Safety Governance Framework. TC260 standards serve as the bridge between high-level policy goals and measurable compliance requirements.

What are the most important Chinese AI regulations for foreign companies?

Three regulations carry the most compliance weight: the Algorithm Recommendations Provisions (March 2022), the Deep Synthesis Provisions (January 2023), and the Interim Measures for Generative AI Services (August 2023). The October 2025 Cybersecurity Law amendments and the September 2025 AI content labelling measures add further obligations. Companies offering AI-powered services to users in China should assess compliance across all five instruments.

How does China's Framework address frontier AI risks like AGI?

Framework 2.0 significantly expands attention to frontier risks compared to version 1.0. It addresses loss-of-control scenarios, CBRN misuse, and the broader societal consequences of increasingly capable AI systems. The Framework introduces derivative risks as a third risk category, covering indirect effects like erosion of human creativity and, in extreme cases, AI developing self-awareness beyond human control.
About the Author

Dr Faiz Rasool

Director at the Global AI Certification Council (GAICC) and PM Training School

A globally certified instructor in ISO/IEC, PMI®, TOGAF®, SAFe®, and Scrum.org disciplines. With over three years’ hands-on experience in ISO/IEC 42001 AI governance, he delivers training and consulting across New Zealand, Australia, Malaysia, the Philippines, and the UAE, combining high-end credentials with practical, real-world expertise and global reach.
