

AI Governance in India: What US Businesses Need to Know

 
 

India’s “No New AI Law” Strategy

India’s IT Secretary S. Krishnan stated the government’s position clearly at the launch of the AI Governance Guidelines: “India has consciously chosen not to lead with regulation but to encourage innovation while studying global approaches. Wherever possible, we will rely on existing laws and frameworks rather than rush into new legislation.”

This means India regulates AI primarily through technology-neutral application of existing statutes. The Information Technology Act of 2000 remains the backbone for digital governance, covering intermediary liability, cybercrime, and content moderation. The DPDPA governs personal data processing. The Consumer Protection Act of 2019 applies to AI-enabled products and services. The Bharatiya Nyaya Sanhita (the replacement for the Indian Penal Code) covers criminal liability. Sector-specific regulators, particularly the Reserve Bank of India (RBI) for financial services and the Securities and Exchange Board of India (SEBI) for capital markets, apply additional requirements within their domains.

The rationale is pragmatic. India’s government views many AI risks as manageable under existing law if that law is enforced consistently. The AI Governance Guidelines explicitly state that “a separate law to regulate AI is not needed given the current assessment of risks” but acknowledge that legal amendments may be needed to address specific gaps, particularly around copyright and AI training data.

A Private Member’s Bill, the Artificial Intelligence (Ethics and Accountability) Bill, was introduced in the Lok Sabha in December 2025. It proposes a statutory Ethics Committee, mandatory ethical reviews for high-risk systems, bias audits, and penalties up to ₹5 crore (approximately US$590,000). While not yet enacted, it signals growing parliamentary interest in binding AI accountability.

The India AI Governance Guidelines: Seven Sutras

Released on 5 November 2025, the India AI Governance Guidelines represent the country’s most comprehensive articulation of AI governance philosophy. They are structured in four parts: foundational principles, key recommendations, an action plan, and practical sector-specific guidance.

The framework rests on seven guiding principles, adapted from the RBI’s Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) report published in August 2025. The guidelines refer to these as “sutras”:

| # | Sutra | Governance Implication |
|---|-------|------------------------|
| 1 | Trust as the Foundation | Trust across the AI value chain is essential for adoption; without it, innovation stalls |
| 2 | Human Centricity | AI systems must serve human needs with appropriate oversight and safeguards |
| 3 | Responsible Innovation | Innovation should be encouraged within frameworks that address risks proportionately |
| 4 | Fairness and Equity | AI must not perpetuate discrimination; specific attention to caste, language, and gender bias |
| 5 | Accountability | Clear responsibility chains across the AI value chain, from developers to deployers |
| 6 | Understandable by Design | AI systems require explanations enabling users and regulators to comprehend operations |
| 7 | Safety, Resilience, and Sustainability | Systems must minimise risks, detect anomalies, and ensure environmental responsibility |

What makes India’s principles distinctive is the explicit attention to the country’s social context. The risk classification framework specifically addresses deepfakes targeting women, child safety, language bias affecting India’s 22 scheduled languages, and caste-based discrimination in algorithmic systems. This social-context approach reflects India’s population diversity in ways that generic international frameworks do not.

The guidelines recommend six pillars of governance: infrastructure development, capacity building, policy and regulation, risk mitigation, accountability mechanisms, and institutional architecture. Under the institutional pillar, India plans to establish an AI Governance Group (AIGG) for inter-ministerial coordination, a Technology and Policy Expert Committee, and the IndiaAI Safety Institute.

The Digital Personal Data Protection Act and AI

The DPDPA, enacted in August 2023, is India’s first comprehensive data protection legislation. Its implementing rules were notified on 13 November 2025, bringing approximately 800 million internet users under a formal privacy framework. Full compliance is required by 13 May 2027.

For AI systems, the DPDPA creates several important obligations. Consent must be explicit and informed for most personal data processing, which creates challenges for large-scale AI training. Data Fiduciaries (the DPDPA’s term for data controllers) must observe purpose limitation, data minimisation, security safeguards, and breach notification requirements. Significant Data Fiduciaries face additional obligations including impact assessments and audits.

The Act introduces the concept of a consent manager, a trusted third party that enables individuals to manage their data permissions. This is an innovative mechanism that has no direct parallel in the EU GDPR or the US regulatory landscape.

Exemptions exist for publicly available data and certain research purposes, which partially addresses AI training data concerns. However, the DPDPA’s consent-centric approach means that US companies processing personal data of Indian citizens for AI applications need robust consent mechanisms, clear purpose documentation, and data minimisation practices.
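The consent-centric obligations above can be illustrated with a minimal sketch. The DPDPA does not prescribe any technical schema, so the record structure and function names below are purely hypothetical; the point is that processing is gated on an explicit, unwithdrawn consent covering the specific purpose (purpose limitation).

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical consent record; the DPDPA does not mandate a schema.
@dataclass
class ConsentRecord:
    data_principal_id: str
    purposes: set[str]       # purposes the data principal explicitly consented to
    granted_at: datetime
    withdrawn: bool = False

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Allow processing only under explicit, unwithdrawn consent
    covering the stated purpose (purpose limitation)."""
    return not record.withdrawn and purpose in record.purposes

record = ConsentRecord("user-123", {"model_training"}, datetime(2026, 1, 5))
assert may_process(record, "model_training")
assert not may_process(record, "ad_targeting")  # outside the consented purpose
```

In practice a Data Fiduciary would also log consent notices, withdrawal events, and retention periods, but the gating logic shown here is the core of a consent-first design.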

The IndiaAI Mission: Infrastructure at Scale

Approved in March 2024 with an allocation of ₹10,300 crore (approximately US$1.2 billion) over five years, the IndiaAI Mission operates across seven pillars: compute infrastructure, datasets (AI Kosh), application development, future skills, startup financing, safe and trusted AI, and institutional capacity.

The compute infrastructure achievements are notable. As of August 2025, over 38,000 GPUs had been made available at subsidised rates through the mission. The AI Kosh platform hosts 1,500 datasets and 217 AI models. Four Indian startups are developing sovereign foundation models with government support, and Sarvam AI launched 30-billion and 105-billion parameter multilingual models at the February 2026 summit.

India’s approach to AI infrastructure has a governance dimension that matters for US companies. The emphasis on sovereign AI models and domestic compute capacity reflects a desire to reduce dependence on foreign AI systems for critical applications. US AI providers operating in India should expect increasing expectations around data localisation, model transparency, and alignment with India-specific governance standards.

The IndiaAI Safety Institute

Announced by Minister Ashwini Vaishnaw in January 2025, the IndiaAI Safety Institute operates under the Safe and Trusted Pillar of the IndiaAI Mission. It was established after consultations with Meta, Google, Microsoft, IBM, OpenAI, NASSCOM, and several Indian Institutes of Technology.

The Institute’s mandate includes developing indigenous tools and frameworks for AI safety testing, setting standards for responsible AI, collaborating with international AI safety networks, and addressing India-specific risks including multilingual AI challenges, unrepresentative training data, and socioeconomic biases.

The decision to focus on standards-setting and risk identification rather than enforcement reflects India’s broader governance philosophy. The initial budget comes from the ₹20 crore allocated to the Safe and Trusted Pillar, with future funding expected from other IndiaAI Mission components.

Sector-Specific AI Governance: Financial Services Lead

The Reserve Bank of India has taken the most advanced sector-specific approach to AI governance in the country. Its FREE-AI Committee report, published in August 2025, provides the seven sutras that the national governance guidelines subsequently adopted. The RBI’s framework addresses AI in lending decisions, fraud detection, customer service, and systemic risk monitoring.

SEBI has issued guidance on algorithmic trading and AI-driven investment advice. The Insurance Regulatory and Development Authority of India (IRDAI) is developing frameworks for AI in underwriting and claims processing. Healthcare AI governance is addressed through the National Digital Health Mission and associated guidelines.

For US financial institutions, fintech companies, and insurance providers operating in India, the RBI’s framework carries practical regulatory weight. While the national AI Governance Guidelines are voluntary, sector regulators can and do enforce their specific requirements through supervisory action.

How India Compares: Global Governance Models

India’s governance approach positions it as a distinct model among major AI jurisdictions.

| Dimension | India | United States | European Union | Singapore |
|-----------|-------|---------------|----------------|-----------|
| Regulatory Model | Existing laws + voluntary guidelines + techno-legal approach | Voluntary federal + state-level laws | Comprehensive risk-based legislation | Voluntary frameworks + government testing tools |
| Dedicated AI Law | No (relies on existing statutes) | No federal AI law | EU AI Act (Aug 2024) | No |
| Data Protection | DPDPA 2023 (full compliance May 2027) | State-level + sectoral | GDPR | PDPA |
| AI Safety Body | IndiaAI Safety Institute (announced Jan 2025) | CAISI (renamed from AISI) | EU AI Office | AI Verify Foundation |
| Key Innovation | Techno-legal compliance; social-context risk classification | NIST AI RMF | Risk-based classification | AI Verify testing toolkit |
| ISO 42001 Status | Adopted by BIS as national standard | Voluntary | Expected conformity presumption | Mapped via crosswalk; national adoption |
| Investment | ~US$1.2B (IndiaAI Mission, 5 years) | Federal R&D + private sector | EU-wide programmes | S$1B+ (NAIRD, 5 years) |
| Global South Leadership | AI Impact Summit host (Feb 2026) | N/A | N/A | ASEAN hub |

For a detailed breakdown of federal oversight, the NIST AI RMF, and state-level obligations, see our guide to AI governance in the United States.

ISO/IEC 42001 in the Indian Context

The Bureau of Indian Standards (BIS) has adopted ISO/IEC 42001:2023 as a national standard. The AI Governance Guidelines explicitly encourage self-certification, industry codes, and alignment with ISO/IEC 42001 as practical governance mechanisms.

This is a significant development for US companies. ISO 42001 certification provides documented evidence of responsible AI governance that Indian regulators, enterprise partners, and government procurement processes recognise. Given India’s reliance on existing laws rather than a dedicated AI compliance regime, ISO 42001 serves as the closest thing to a universal governance credential in the Indian market.

The standard’s alignment with international frameworks means that ISO 42001 certification simultaneously satisfies governance expectations across India, the US (through NIST AI RMF mapping), the EU (through AI Act Articles 9-15 alignment), and Singapore (through AI Verify crosswalk). For multinational operations, this cross-jurisdictional utility makes ISO 42001 the most efficient governance investment available.

The Copyright Question: Training Data in India

The AI Governance Guidelines acknowledge that copyright laws may need amendment to enable large-scale AI model training while protecting rights holders. India’s Copyright Act of 1957 does not contain a text and data mining exception comparable to those in the EU or Japan.

The guidelines advocate for a balanced approach: enabling innovation while ensuring adequate protections. The expected Digital India Act, which would comprehensively overhaul the IT Act of 2000, is likely to address AI and copyright as part of its broader digital governance framework. Public consultation on the Digital India Act is anticipated in 2026.

US companies training AI models on data that includes Indian-originated content should track these developments carefully. India’s position as the world’s largest internet market by user count makes its copyright framework commercially consequential for any company operating generative AI at scale.

Practical Steps for US Organisations

India’s governance model rewards organisations that demonstrate responsible AI practices voluntarily rather than waiting for mandatory requirements. Here is how US companies should approach the Indian market.

Ensure DPDPA compliance for AI data processing. Map all personal data flows involving Indian citizens. Implement explicit consent mechanisms, purpose limitation, and data minimisation. Prepare for Significant Data Fiduciary obligations if your data processing meets the threshold. Full compliance is required by May 2027.

Align with the seven sutras. Review your AI governance practices against India’s seven guiding principles. Pay particular attention to fairness requirements that address India-specific social context, including language diversity, caste, and gender.

Implement ISO/IEC 42001. BIS adoption of the standard makes certification the most credible governance credential in the Indian market. It demonstrates alignment with the AI Governance Guidelines while providing cross-jurisdictional compliance utility.

Engage with sector-specific regulators. Financial services companies must align with RBI’s FREE-AI framework. Identify which sector regulators have jurisdiction over your India operations and incorporate their specific requirements into your governance processes.

Prepare for the Digital India Act. The expected overhaul of the IT Act will likely introduce risk-based classifications for digital platforms, enhanced intermediary obligations, and specific AI provisions. Monitor public consultations and prepare for new requirements.

Document AI training data provenance. Copyright law reform is coming. Maintain clear records of training data sources, particularly for content originating from Indian creators or users.
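One lightweight way to keep such records is an append-only provenance log that ties each training source to its licence terms and a content hash for later audit. This is a sketch under assumed conventions, not a prescribed format; the field names are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_entry(source_url: str, licence: str, payload: bytes) -> dict:
    """Build one append-only log entry tying a training sample to its
    source, its licence terms, and a content hash for later audit."""
    return {
        "source_url": source_url,
        "licence": licence,
        "sha256": hashlib.sha256(payload).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Illustrative usage: record one document before it enters a training corpus.
entry = provenance_entry("https://example.com/article", "CC-BY-4.0", b"sample text")
print(json.dumps(entry, indent=2))
```

If copyright reform introduces a text-and-data-mining exception with conditions attached, records like these are what demonstrate that your corpus met them.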

Leverage the techno-legal approach. India’s guidelines encourage embedding compliance into AI system design through watermarking, bias detection, and content authentication. Organisations that adopt these technical controls proactively will find themselves ahead of regulatory expectations.
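Content authentication can be as simple as attaching a keyed tag to generated output so downstream parties can verify it has not been altered. The sketch below uses an HMAC over the content; the key name and functions are hypothetical, and a production system would use managed keys or standards such as C2PA rather than this bare-bones scheme.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-managed-key"  # illustrative only; use a key-management service

def sign_content(content: bytes) -> str:
    """Attach a verifiable provenance tag to generated content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check a tag in constant time to confirm content integrity."""
    return hmac.compare_digest(sign_content(content), tag)

tag = sign_content(b"AI-generated summary")
assert verify_content(b"AI-generated summary", tag)
assert not verify_content(b"tampered text", tag)  # altered content fails verification
```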

If you also operate in China or sell AI-enabled services into China, review our analysis of the China AI Governance Framework to understand how standards-driven governance differs from India’s voluntary model.

Looking Ahead

India’s AI governance is at an inflection point. The voluntary guidelines, the DPDPA’s phased implementation, the IndiaAI Safety Institute, and the momentum from the AI Impact Summit are creating a governance ecosystem that will become more structured and potentially more binding over time. The forthcoming Digital India Act, expected copyright reforms, and the Private Member’s AI Ethics Bill all suggest that India’s regulatory stance will evolve from its current light-touch posture.

For US organisations, the strategic calculation is straightforward. India is the world’s largest internet market, a major AI talent source, and an increasingly influential voice in global AI governance. ISO/IEC 42001 certification, DPDPA compliance preparation, and alignment with the seven sutras are investments that pay returns in market access, partner credibility, and reduced regulatory risk as India’s governance matures.

Ready to prepare your AI governance for the Indian market? Explore GAICC’s ISO/IEC 42001 certification programmes to build a management system recognised by Indian regulators and aligned with global governance standards.

Frequently Asked Questions (FAQs)

Does India have a dedicated AI law?

No. India governs AI through existing statutes including the IT Act 2000, the DPDPA 2023, the Consumer Protection Act 2019, and sector-specific regulations. The AI Governance Guidelines released in November 2025 are voluntary but serve as the foundational governance reference.

What is the DPDPA and how does it affect AI?

The Digital Personal Data Protection Act 2023 is India's comprehensive data protection law. It requires explicit consent for personal data processing, imposes obligations on data fiduciaries, and grants rights to data principals. Full compliance is required by May 2027. AI systems processing personal data of Indian citizens must comply.

What are the seven sutras?

The seven foundational principles are trust, human centricity, responsible innovation, fairness and equity, accountability, understandability by design, and safety, resilience, and sustainability. They were adapted from the RBI's FREE-AI Committee report of August 2025.

What is the IndiaAI Safety Institute?

Announced in January 2025, it operates under the IndiaAI Mission's Safe and Trusted Pillar. It focuses on developing indigenous AI safety tools, setting standards, and collaborating with international safety networks to address India-specific risks.

How does ISO 42001 apply in India?

BIS has adopted ISO/IEC 42001 as a national standard. The AI Governance Guidelines encourage alignment with ISO 42001 through self-certification and industry codes. Certification demonstrates responsible governance to Indian regulators and enterprise partners.

What was the AI Impact Summit 2026?

Held in New Delhi from 16 to 21 February 2026, it was the fourth global AI summit and the first hosted by a Global South nation. Over 100 countries participated and 88 signed a declaration committing to inclusive AI development. India released updated governance guidelines during the summit.

Do US companies need to comply with Indian AI regulations?

If you process personal data of Indian citizens, the DPDPA applies regardless of where your organisation is based. Sector regulators like RBI enforce their own requirements. The AI Governance Guidelines are voluntary but increasingly expected by enterprise partners and government procurement.

How does India's approach differ from the EU AI Act?

The EU AI Act imposes binding risk classifications and compliance penalties. India relies on existing laws with voluntary governance guidelines, prioritising innovation and flexibility. India's social-context risk classification addresses country-specific concerns like caste bias and linguistic diversity.
About the Author

Dr Faiz Rasool

Director at the Global AI Certification Council (GAICC) and PM Training School

A globally certified instructor in ISO/IEC, PMI®, TOGAF®, SAFe®, and Scrum.org disciplines. With over three years’ hands-on experience in ISO/IEC 42001 AI governance, he delivers training and consulting across New Zealand, Australia, Malaysia, the Philippines, and the UAE, combining high-end credentials with practical, real-world expertise and global reach.
