GAICC AI Conference & Awards 2026 "Governing the Future – Building Responsible, Safe and Human-centric AI"


Singapore AI Governance Framework: What US Businesses Need to Know

For US companies expanding into Asia-Pacific, Singapore’s approach to AI governance is fundamentally different from what the EU or the UK are building. There is no single AI law. No mandatory compliance regime. Instead, Singapore has constructed an ecosystem of voluntary frameworks, government-built testing tools, and interoperable standards that collectively set expectations without prescribing rigid rules. The result is a governance model that rewards proactive adoption rather than punishing non-compliance.

That voluntary nature, however, does not mean these frameworks lack teeth. Singapore’s Model AI Governance Framework has been mapped to ISO/IEC 42001, the NIST AI Risk Management Framework, and OECD AI Principles. Its AI Verify testing toolkit, the world’s first government-developed AI assurance platform, is used by companies including Google, Microsoft, DBS Bank, and Singapore Airlines. And in January 2026, Singapore released the world’s first governance framework for agentic AI at the World Economic Forum in Davos. This is a country that treats governance as competitive infrastructure, and US organisations operating in the region need to understand how that infrastructure works.

The Foundation: Singapore’s Model AI Governance Framework

Singapore published the first edition of its Model AI Governance Framework in January 2019, making it one of the earliest countries to establish structured guidance for responsible AI deployment. The second edition followed in January 2020, expanding the framework with implementation details and real-world use cases.

The framework is built on two overarching principles. First, organisations using AI in decision-making should ensure that the decision-making process is explainable, transparent, and fair. Second, AI solutions should be human-centric, with safeguards that allow human oversight when needed.

From these principles, the framework addresses four practical areas: internal governance structures and measures, determining the level of human involvement in AI-augmented decision-making, operations management covering risk assessment and data governance, and stakeholder interaction and communication.

What distinguishes Singapore’s framework from regulatory approaches elsewhere is its implementation-first philosophy. Rather than establishing abstract principles and leaving organisations to figure out compliance, the framework was co-developed with companies from the outset. Early consultation partners included AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, and Standard Chartered Bank. This meant the guidance reflected operational reality from day one.

The framework is accompanied by the Implementation and Self-Assessment Guide for Organisations (ISAGO), developed in collaboration with the World Economic Forum. ISAGO helps companies assess how well their existing practices align with the framework and provides detailed examples from across sectors and company sizes. The updated ISAGO 2.0 integrates directly with AI Verify, creating a seamless pathway from governance assessment to technical testing.

AI Verify: The World’s First Government-Built AI Testing Toolkit

Launched in May 2022, AI Verify is a governance testing framework and open-source software toolkit developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC). It is, by most measures, the most operationally mature government AI assurance tool available anywhere.

AI Verify tests AI systems against 11 internationally recognised ethics principles covering transparency and explainability, fairness and bias, safety and resilience, accountability and oversight, data governance and privacy, and model security. The toolkit combines automated technical tests with structured process checks, generating detailed reports that organisations can share with stakeholders, auditors, or regulators.

The technical significance of AI Verify lies in its interoperability. IMDA published a crosswalk mapping AI Verify to the US NIST AI Risk Management Framework in October 2023, declaring the two frameworks interoperable. In June 2024, a second crosswalk mapped AI Verify to ISO/IEC 42001:2023. Singapore also adopted the ISO standard nationally as SS ISO/IEC 42001:2024, with a national annex describing AI Verify as an example tool for meeting the standard’s requirements.

For US companies, this interoperability is highly practical. Work done to satisfy NIST AI RMF requirements can be reused to demonstrate alignment with AI Verify’s testing framework, and vice versa. An organisation that implements ISO 42001 can use AI Verify to demonstrate alignment without duplicating effort. This is precisely the kind of cross-framework efficiency that multinational operations require.

The AI Verify Foundation, established in 2023 with premier members including Google, IBM, Microsoft, and Salesforce, has grown to more than 90 member organisations by 2025. It maintains the open-source toolkit, collaborates with OECD and GPAI on harmonising testing standards, and runs international pilots including the Global AI Assurance Pilot launched in February 2025.

Singapore’s AI Verify toolkit has been mapped directly to the US NIST AI Risk Management Framework, enabling interoperability across jurisdictions. For a deeper look at how federal oversight and state AI laws operate in the US, read our guide on AI Governance in the United States: Federal Oversight, NIST Framework and State AI Laws.

National AI Strategy 2.0: The S$1 Billion Commitment

Singapore launched its first National AI Strategy (NAIS) in 2019. The updated NAIS 2.0, released in December 2023, represents a substantial escalation in ambition and investment. The government has committed more than S$1 billion (approximately US$786 million) over five years through the National AI Research and Development Plan (NAIRD), running from 2025 to 2030.

NAIS 2.0 is organised around three systems: activity drivers (industry, government, and research collaboration), people and communities (talent development and workforce skills), and infrastructure and environment (compute capacity, data resources, and governance). Ten enablers and 15 specific actions translate these systems into operational targets.

The talent ambitions are particularly noteworthy. Singapore aims to more than triple its AI practitioner workforce from approximately 4,500 to 15,000. The strategy includes an expanded AI Apprenticeship Programme, increased PhD fellowships, and an AI Accelerated Masters Programme. For US companies with operations in Singapore, this growing talent pool is a direct benefit of the national strategy.

On the governance front, NAIS 2.0 explicitly identifies a trusted environment as one of its three systems. The strategy commits to continuing the evolution of the Model AI Governance Framework, expanding AI Verify’s capabilities, and deepening international interoperability through bilateral agreements and multilateral standards alignment.

Generative AI Governance: Singapore’s Rapid Response

Singapore moved faster than most governments to establish governance guidance for generative AI. In June 2023, IMDA published a discussion paper identifying six key risks: hallucinations and content quality, copyright and IP concerns, data privacy risks, embedded biases, malicious use potential, and cybersecurity vulnerabilities.

By May 2024, this analysis had matured into the Model AI Governance Framework for Generative AI, the third framework in the series. The 2025 update integrated OECD AI Principles and GPAI Code of Practice criteria, enabling interoperability with EU, UK, and US assurance models.

The generative AI framework extends the existing governance principles into specific operational areas including content provenance and watermarking, supply chain transparency for training data, red-teaming and adversarial testing protocols, and incident response for model failures. Singapore’s AI Safety Red Teaming Challenge, first run in 2024 and expanded in 2026 to include participants from 14 Asian countries, provides practical testing across languages and cultural contexts.

Project Moonshot, launched alongside the generative AI framework, is one of the world’s first open-source LLM evaluation toolkits. It combines benchmarking with red-teaming capabilities, enabling developers and compliance teams to test large language models and their applications systematically.

The Agentic AI Framework: First in the World

On 22 January 2026, Minister for Digital Development and Information Josephine Teo unveiled the Model AI Governance Framework for Agentic AI at the World Economic Forum in Davos. This represents the fourth framework in Singapore’s governance series and the first governance model anywhere specifically designed for AI agents that can reason, plan, and execute tasks autonomously.

The framework identifies five categories of risk unique to agentic AI: erroneous actions (agents performing incorrect tasks), unauthorised actions (agents exceeding their designated scope), cascading failures (errors propagating across multi-agent systems), data leakage (agents exposing sensitive information to external systems), and amplified bias (discriminatory patterns scaling through autonomous operation).

Governance guidance is organised around four dimensions. First, assessing and bounding risks upfront by selecting appropriate use cases and placing limits on agents’ powers. Second, making humans meaningfully accountable by defining significant checkpoints requiring human approval. Third, strengthening monitoring and response by implementing oversight throughout the agent lifecycle. Fourth, ensuring robust security and data protection through technical controls.
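As an illustration only, the first two dimensions — bounding an agent’s powers upfront and requiring human approval at significant checkpoints — can be sketched in a few lines of Python. The framework itself is technology-neutral and prescribes no implementation; every name below (`ALLOWED_ACTIONS`, `gate_action`, and so on) is hypothetical:

```python
# Hypothetical sketch of risk bounding and human checkpoints for an AI agent.
# Names, actions, and thresholds are illustrative, not from the framework.

ALLOWED_ACTIONS = {"read_record", "draft_email", "schedule_meeting"}  # explicit scope
CHECKPOINT_ACTIONS = {"draft_email"}  # actions requiring human sign-off


class UnauthorisedActionError(Exception):
    """Raised when an agent attempts an action outside its designated scope."""


def gate_action(action: str, approve) -> str:
    """Apply two of the framework's four dimensions to a proposed agent action.

    1. Risk bounding: reject anything outside the allow-list.
    2. Human accountability: route flagged actions to a human approver.
    """
    if action not in ALLOWED_ACTIONS:
        raise UnauthorisedActionError(f"'{action}' exceeds the agent's scope")
    if action in CHECKPOINT_ACTIONS and not approve(action):
        return "rejected by human reviewer"
    return "executed"


# Usage: a callback stands in for a real human review queue.
print(gate_action("schedule_meeting", approve=lambda a: True))  # executed
print(gate_action("draft_email", approve=lambda a: False))      # rejected by human reviewer
```

The remaining two dimensions, lifecycle monitoring and security controls, would sit around a gate like this rather than inside it: logging every decision the gate makes, and restricting the credentials the agent can use once an action is approved.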

The roughly 20-month gap between the generative AI framework (May 2024) and the agentic AI framework is notably shorter than previous cycles. The framework was released without the extended public consultation that characterised earlier editions, reflecting the urgency of addressing rapidly advancing autonomous AI capabilities.

The Personal Data Protection Act and AI

Unlike the EU’s GDPR or the UK’s Data Protection Act, Singapore’s Personal Data Protection Act (PDPA) takes a balanced approach to consent that accommodates AI innovation while protecting individual rights. The PDPA requires organisations to obtain consent before collecting, using, or disclosing personal data, unless an exception applies.

Amendments passed in 2020 expanded the consent framework significantly. Organisations can now rely on deemed consent in certain circumstances, and the range of exceptions has widened to support data-driven innovation. The PDPC encourages the use of anonymised data in AI development wherever possible, noting that properly anonymised data falls outside the PDPA’s scope.
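The PDPC’s preference for anonymised data can be illustrated with a minimal pseudonymisation step. This is a sketch under stated assumptions, not PDPC-endorsed code: the function name and salting scheme are hypothetical, and salted hashing on its own is pseudonymisation rather than full anonymisation, so data treated this way would generally remain within the PDPA’s scope:

```python
import hashlib


def pseudonymise(record: dict, salt: str = "rotate-me") -> dict:
    """Replace a direct identifier with a salted hash before the record enters
    an AI training pipeline. Hashing is pseudonymisation, not anonymisation:
    whoever holds the salt could re-identify subjects, so stronger techniques
    (aggregation, generalisation, suppression) are needed to leave PDPA scope."""
    out = dict(record)
    nric = out.pop("nric")  # direct identifier removed from the record
    out["subject_id"] = hashlib.sha256((salt + nric).encode()).hexdigest()[:16]
    return out


# Usage: the transformed record carries no direct identifier,
# but the same input always maps to the same subject_id.
row = {"nric": "S1234567D", "loan_amount": 25000, "defaulted": False}
print(pseudonymise(row))
```

The design choice worth noting is the salt: keeping it secret and rotating it limits linkage across datasets, which is the kind of re-identification risk the PDPC’s anonymisation guidance asks organisations to assess.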

In 2024, the PDPC issued Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems, clarifying how data protection obligations apply to AI-powered systems. These guidelines address lawful bases for data processing, transparency requirements for automated decisions, and the circumstances under which individuals must be informed that AI is involved in decisions affecting them.

For US companies, the PDPA’s framework sits between the consent-heavy GDPR model and the more fragmented US state-level approach. Understanding these differences matters when designing AI systems that handle personal data across jurisdictions.

How Singapore Compares: A Multi-Jurisdiction View

Positioning Singapore’s approach alongside the US, EU, and UK frameworks reveals the distinct governance philosophy each jurisdiction has adopted.

| Dimension | Singapore | United States | European Union | United Kingdom |
| --- | --- | --- | --- | --- |
| Regulatory Model | Voluntary frameworks with government-built tools | Voluntary federal framework + state-level laws | Comprehensive risk-based legislation | Principles-based, sector-led |
| Primary Instruments | Model AI Framework + AI Verify + PDPA | NIST AI RMF + Executive Order + state laws | EU AI Act (Aug 2024) | White Paper principles + DUAA 2025 |
| Enforcement | Non-binding; PDPC enforces data protection | FTC + sector regulators + state AGs | National authorities + EU AI Office | Sector regulators (ICO, FCA, CMA) |
| AI Testing Tools | AI Verify (government-built, open-source) | No government toolkit | Conformity assessment (delegated) | AISI evaluates frontier models |
| ISO 42001 Integration | Mapped via crosswalk; national adoption | Voluntary; complements NIST RMF | Expected presumption of conformity | Recommended governance tool |
| Agentic AI Guidance | World’s first framework (Jan 2026) | No specific framework | Not yet addressed | Not yet addressed |
| Investment | S$1B+ over 5 years (NAIRD) | Federal R&D + private sector | EU-wide R&D programmes | £1B+ in compute infrastructure |

While Singapore relies on voluntary frameworks supported by government-built testing tools, the United Kingdom adopts a regulator-led, sector-specific approach to responsible AI. You can explore this model in detail in our analysis of AI Governance in the United Kingdom: Regulator Led Oversight and Responsible AI Implementation.

Why ISO/IEC 42001 Matters in the Singapore Context

Singapore’s decision to map AI Verify directly to ISO/IEC 42001 in June 2024 is one of the most significant governance developments for organisations operating across multiple jurisdictions. The crosswalk document demonstrates that companies implementing ISO 42001 can use AI Verify to validate their compliance without building separate assurance processes for each framework.

The national adoption of ISO 42001 as Singapore Standard SS ISO/IEC 42001:2024 further reinforces this alignment. The national annex explicitly positions AI Verify as a practical tool for meeting ISO 42001 requirements, creating a clear pathway from international standard to local implementation.

For US companies, this creates a powerful efficiency. An organisation that certifies against ISO 42001 gains simultaneous credibility with Singapore’s governance ecosystem (through AI Verify interoperability), US regulatory expectations (through NIST RMF mapping), and EU compliance preparation (through ISO 42001’s alignment with AI Act Articles 9-15). One standard, multiple jurisdictions.

The standard’s Annex SL structure means it integrates with ISO 27001 for information security and ISO 9001 for quality management, frameworks that many multinational companies already maintain. Adding AI governance to an existing management system is substantially less disruptive than establishing a standalone compliance programme.

Sector-Specific Governance: Financial Services

The Monetary Authority of Singapore (MAS) has taken the most advanced sector-specific approach to AI governance in the country. The Veritas framework, launched initially as a set of fairness principles for AI in financial services, has evolved into a comprehensive governance toolkit.

MAS guidelines address algorithmic fairness in credit scoring and insurance underwriting, explainability requirements for customer-facing AI decisions, model risk management for AI-driven trading and investment systems, and operational resilience for AI-dependent financial infrastructure.

For US financial institutions operating in Singapore, MAS expectations represent the most binding form of AI governance in the country. While the Model AI Governance Framework is voluntary, MAS can and does enforce its sector-specific guidelines through supervisory action. Financial institutions that have not aligned their AI systems with both MAS guidance and the broader national framework face regulatory risk.

Practical Steps for US Organisations

Singapore’s voluntary governance model means that compliance is not about checking regulatory boxes. It’s about demonstrating to partners, customers, and government agencies that your organisation takes AI governance seriously. Here is how US companies should approach it.

Map your AI systems to the Model AI Governance Framework. Use ISAGO 2.0 to assess your current governance maturity against Singapore’s framework. Identify gaps between your existing practices and the framework’s expectations across all four governance areas.

Deploy AI Verify for technical testing. The open-source toolkit is freely available. Use it to generate assurance reports that demonstrate your AI systems’ performance against the 11 ethics principles. These reports carry credibility with Singapore government agencies and enterprise partners.

Implement ISO/IEC 42001 as your governance standard. The crosswalk between AI Verify and ISO 42001 means certification gives you documented alignment with Singapore’s framework, NIST AI RMF, and EU AI Act requirements simultaneously.

Review PDPA compliance for AI data processing. Ensure you have appropriate consent mechanisms or valid exceptions for personal data used in AI systems. Pay particular attention to the 2024 Advisory Guidelines on AI recommendation and decision systems.

Prepare for agentic AI governance. If you are deploying or planning to deploy AI agents, assess your systems against the four dimensions of the January 2026 agentic AI framework. Implement risk bounding, human checkpoints, monitoring, and security controls.

Engage with sector-specific requirements. Financial institutions must align with MAS guidelines. Healthcare companies should follow Ministry of Health AI guidance. Identify which sector regulators have jurisdiction over your operations and ensure your governance reflects their specific expectations.

Singapore’s Global Influence on AI Standards

Singapore’s governance frameworks carry influence well beyond its borders. The ASEAN Guide on AI Governance and Ethics, published in February 2024, draws heavily on Singapore’s Model AI Governance Framework. The expanded guide covering generative AI followed in late 2024. For US companies with operations across Southeast Asia, Singapore’s frameworks increasingly define regional governance expectations.

Singapore has also entered bilateral AI governance agreements with the US, the UK, and South Korea. The US-Singapore bilateral AI Governance Group, established in October 2023, promotes framework interoperability and shared testing methodologies. These agreements mean that governance work done to Singapore’s standards receives recognition from partner jurisdictions.

The AI Verify Foundation’s collaboration with MLCommons on safety benchmarks, and with OECD and GPAI on assurance standards, positions Singapore at the centre of global AI governance harmonisation. For multinationals, this means Singapore-aligned governance is not a regional requirement but an internationally recognised credential.

Looking Ahead

Singapore’s AI governance model is a case study in how a small nation can shape global standards through practical, implementation-ready frameworks rather than prescriptive legislation. The progression from the 2019 Model AI Governance Framework through generative AI and agentic AI governance reflects a government that iterates governance at the speed of technology.

For US organisations, the strategic question is not whether to engage with Singapore’s governance ecosystem but how quickly. The interoperability between AI Verify, ISO/IEC 42001, and NIST AI RMF means that governance work done for Singapore operations reduces compliance effort everywhere else. The S$1 billion NAIRD investment, the growing talent pool, and the expanding regional influence through ASEAN make Singapore a governance environment that rewards early movers.

Ready to build your AI governance framework for Asia-Pacific operations? Explore GAICC’s ISO/IEC 42001 certification courses to align your management system with Singapore’s governance ecosystem and international standards simultaneously.

Frequently Asked Questions (FAQs)

Does Singapore have a mandatory AI law?

No. Singapore's AI governance operates through voluntary frameworks, not binding legislation. The Personal Data Protection Act applies to AI systems processing personal data, and sector-specific regulators like MAS can enforce their own guidelines. But there is no equivalent of the EU AI Act.

What is AI Verify and why does it matter?

AI Verify is a government-developed, open-source testing toolkit that helps organisations validate their AI systems against 11 internationally recognised ethics principles. It has been mapped to both NIST AI RMF and ISO/IEC 42001, making it a practical bridge between multiple governance frameworks.

How does Singapore's approach differ from the EU AI Act?

The EU AI Act imposes legally binding obligations with risk-based classification and enforcement penalties. Singapore relies on voluntary frameworks supported by government-built testing tools and industry collaboration. Singapore's model prioritises adoption incentives over compliance penalties.

Do US companies operating in Singapore need to follow these frameworks?

The Model AI Governance Framework is voluntary, but the PDPA's data protection requirements are mandatory for organisations handling personal data of Singapore residents. MAS guidelines are effectively mandatory for financial institutions. In practice, enterprise partners increasingly expect alignment.

How does ISO 42001 connect to Singapore's governance ecosystem?

Singapore adopted ISO 42001 as a national standard (SS ISO/IEC 42001:2024) and published a crosswalk mapping it to AI Verify. Implementing ISO 42001 demonstrates alignment with Singapore's framework while also satisfying governance expectations in the US, EU, and UK.

What is the agentic AI framework?

Released in January 2026, it is the world's first governance framework specifically designed for AI agents that can reason, plan, and act autonomously. It covers risk assessment, human accountability, monitoring, and security across four governance dimensions.

What sectors have specific AI governance requirements?

Financial services (regulated by MAS) and healthcare (guided by Ministry of Health) have the most developed sector-specific AI governance. The Cyber Security Agency also issued guidelines on securing AI systems in 2024. Other sectors follow the general Model AI Governance Framework.

Should US companies adopt Singapore's frameworks even if they're voluntary?

Yes. Adoption demonstrates governance maturity to Singapore enterprise partners, government procurement processes, and ASEAN-wide operations. The frameworks' interoperability with NIST and ISO 42001 means the effort translates directly to compliance readiness in other jurisdictions.
About the Author

Dr Faiz Rasool

Director at the Global AI Certification Council (GAICC) and PM Training School

A globally certified instructor in ISO/IEC, PMI®, TOGAF®, SAFe®, and Scrum.org disciplines. With over three years’ hands-on experience in ISO/IEC 42001 AI governance, he delivers training and consulting across New Zealand, Australia, Malaysia, the Philippines, and the UAE, combining high-end credentials with practical, real-world expertise and global reach.
