India’s “No New AI Law” Strategy
India’s IT Secretary S. Krishnan stated the government’s position clearly at the launch of the AI Governance Guidelines: “India has consciously chosen not to lead with regulation but to encourage innovation while studying global approaches. Wherever possible, we will rely on existing laws and frameworks rather than rush into new legislation.”
This means India regulates AI primarily through technology-neutral application of existing statutes. The Information Technology Act of 2000 remains the backbone for digital governance, covering intermediary liability, cybercrime, and content moderation. The DPDPA governs personal data processing. The Consumer Protection Act of 2019 applies to AI-enabled products and services. The Bharatiya Nyaya Sanhita (the replacement for the Indian Penal Code) covers criminal liability. Sector-specific regulators, particularly the Reserve Bank of India (RBI) for financial services and the Securities and Exchange Board of India (SEBI) for capital markets, apply additional requirements within their domains.
The rationale is pragmatic. India’s government views many AI risks as manageable under existing law if that law is enforced consistently. The AI Governance Guidelines explicitly state that “a separate law to regulate AI is not needed given the current assessment of risks” but acknowledge that legal amendments may be needed to address specific gaps, particularly around copyright and AI training data.
A Private Member’s Bill, the Artificial Intelligence (Ethics and Accountability) Bill, was introduced in the Lok Sabha in December 2025. It proposes a statutory Ethics Committee, mandatory ethical reviews for high-risk systems, bias audits, and penalties up to ₹5 crore (approximately US$590,000). While not yet enacted, it signals growing parliamentary interest in binding AI accountability.
The India AI Governance Guidelines: Seven Sutras
Released on 5 November 2025, the India AI Governance Guidelines represent the country’s most comprehensive articulation of AI governance philosophy. They are structured in four parts: foundational principles, key recommendations, an action plan, and practical sector-specific guidance.
The framework rests on seven guiding principles, adapted from the RBI’s Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) report published in August 2025. The guidelines refer to these as “sutras”:
| # | Sutra | Governance Implication |
|---|---|---|
| 1 | Trust as the Foundation | Trust across the AI value chain is essential for adoption; without it, innovation stalls |
| 2 | Human Centricity | AI systems must serve human needs with appropriate oversight and safeguards |
| 3 | Responsible Innovation | Innovation should be encouraged within frameworks that address risks proportionately |
| 4 | Fairness and Equity | AI must not perpetuate discrimination; specific attention to caste, language, and gender bias |
| 5 | Accountability | Clear responsibility chains across the AI value chain, from developers to deployers |
| 6 | Understandable by Design | AI systems require explanations enabling users and regulators to comprehend operations |
| 7 | Safety, Resilience, and Sustainability | Systems must minimise risks, detect anomalies, and ensure environmental responsibility |
What makes India’s principles distinctive is the explicit attention to the country’s social context. The risk classification framework specifically addresses deepfakes targeting women, child safety, language bias affecting India’s 22 scheduled languages, and caste-based discrimination in algorithmic systems. This social-context approach reflects India’s population diversity in ways that generic international frameworks do not.
The guidelines recommend six pillars of governance: infrastructure development, capacity building, policy and regulation, risk mitigation, accountability mechanisms, and institutional architecture. Under the institutional pillar, India plans to establish an AI Governance Group (AIGG) for inter-ministerial coordination, a Technology and Policy Expert Committee, and the IndiaAI Safety Institute.
The Digital Personal Data Protection Act and AI
The DPDPA, enacted in August 2023, is India’s first comprehensive data protection legislation. Its implementing rules were notified on 13 November 2025, bringing approximately 800 million internet users under a formal privacy framework. Full compliance is required by 13 May 2027.
For AI systems, the DPDPA creates several important obligations. Consent must be explicit and informed for most personal data processing, which creates challenges for large-scale AI training. Data Fiduciaries (the DPDPA’s term for data controllers) must observe purpose limitation, data minimisation, security safeguards, and breach notification requirements. Significant Data Fiduciaries face additional obligations including impact assessments and audits.
The Act introduces the concept of a consent manager, a trusted third party that enables individuals to manage their data permissions. This is an innovative mechanism that has no direct parallel in the EU GDPR or the US regulatory landscape.
Exemptions exist for publicly available data and certain research purposes, which partially addresses AI training data concerns. However, the DPDPA’s consent-centric approach means that US companies processing personal data of Indian citizens for AI applications need robust consent mechanisms, clear purpose documentation, and data minimisation practices.
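The consent and purpose-limitation obligations above can be illustrated with a minimal sketch. This is a hypothetical illustration, not an official DPDPA schema or API: the class name, field names, and purposes are all assumptions chosen for the example.

```python
from dataclasses import dataclass

# Hypothetical consent record for a Data Principal (the DPDPA's term for
# the individual whose data is processed). Fields are illustrative, not
# an official schema.
@dataclass
class ConsentRecord:
    principal_id: str
    purposes: set[str]       # purposes the individual explicitly consented to
    withdrawn: bool = False  # consent can be withdrawn at any time

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Purpose limitation: process only for a purpose the Data Principal
    has explicitly consented to, and only while consent stands."""
    return (not record.withdrawn) and (purpose in record.purposes)

record = ConsentRecord("dp-001", {"loan_underwriting"})
may_process(record, "loan_underwriting")  # True
may_process(record, "model_training")     # False: not a consented purpose
```

The point of the sketch is that under a consent-centric regime, the permitted purpose must travel with the data: an AI training pipeline cannot reuse records collected for one purpose without a separate consent check.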
The IndiaAI Mission: Infrastructure at Scale
Approved in March 2024 with an allocation of ₹10,300 crore (approximately US$1.2 billion) over five years, the IndiaAI Mission operates across seven pillars: compute infrastructure, datasets (AI Kosh), application development, future skills, startup financing, safe and trusted AI, and institutional capacity.
The compute infrastructure achievements are notable. As of August 2025, over 38,000 GPUs had been made available at subsidised rates through the mission. The AI Kosh platform hosts 1,500 datasets and 217 AI models. Four Indian startups are developing sovereign foundation models with government support, and Sarvam AI launched 30-billion and 105-billion parameter multilingual models at the February 2026 summit.
India’s approach to AI infrastructure has a governance dimension that matters for US companies. The emphasis on sovereign AI models and domestic compute capacity reflects a desire to reduce dependence on foreign AI systems for critical applications. US AI providers operating in India should expect increasing expectations around data localisation, model transparency, and alignment with India-specific governance standards.
The IndiaAI Safety Institute
Announced by Minister Ashwini Vaishnaw in January 2025, the IndiaAI Safety Institute operates under the Safe and Trusted Pillar of the IndiaAI Mission. It was established after consultations with Meta, Google, Microsoft, IBM, OpenAI, NASSCOM, and several Indian Institutes of Technology.
The Institute’s mandate includes developing indigenous tools and frameworks for AI safety testing, setting standards for responsible AI, collaborating with international AI safety networks, and addressing India-specific risks including multilingual AI challenges, unrepresentative training data, and socioeconomic biases.
The decision to focus on standards-setting and risk identification rather than enforcement reflects India’s broader governance philosophy. The initial budget comes from the ₹20 crore allocated to the Safe and Trusted Pillar, with future funding expected from other IndiaAI Mission components.
Sector-Specific AI Governance: Financial Services Lead
The Reserve Bank of India has taken the most advanced sector-specific approach to AI governance in the country. Its FREE-AI Committee report, published in August 2025, provides the seven sutras that the national governance guidelines subsequently adopted. The RBI’s framework addresses AI in lending decisions, fraud detection, customer service, and systemic risk monitoring.
SEBI has issued guidance on algorithmic trading and AI-driven investment advice. The Insurance Regulatory and Development Authority of India (IRDAI) is developing frameworks for AI in underwriting and claims processing. Healthcare AI governance is addressed through the National Digital Health Mission and associated guidelines.
For US financial institutions, fintech companies, and insurance providers operating in India, the RBI’s framework carries practical regulatory weight. While the national AI Governance Guidelines are voluntary, sector regulators can and do enforce their specific requirements through supervisory action.
How India Compares: Global Governance Models
India’s governance approach positions it as a distinct model among major AI jurisdictions.
| Dimension | India | United States | European Union | Singapore |
|---|---|---|---|---|
| Regulatory Model | Existing laws + voluntary guidelines + techno-legal approach | Voluntary federal + state-level laws | Comprehensive risk-based legislation | Voluntary frameworks + government testing tools |
| Dedicated AI Law | No (relies on existing statutes) | No federal AI law | EU AI Act (Aug 2024) | No |
| Data Protection | DPDPA 2023 (full compliance May 2027) | State-level + sectoral | GDPR | PDPA |
| AI Safety Body | IndiaAI Safety Institute (announced Jan 2025) | CAISI (renamed from AISI) | EU AI Office | AI Verify Foundation |
| Key Innovation | Techno-legal compliance; social-context risk classification | NIST AI RMF | Risk-based classification | AI Verify testing toolkit |
| ISO 42001 Status | Adopted by BIS as national standard | Voluntary | Expected conformity presumption | Mapped via crosswalk; national adoption |
| Investment | ~US$1.2B (IndiaAI Mission, 5 years) | Federal R&D + private sector | EU-wide programmes | S$1B+ (NAIRD, 5 years) |
| Global South Leadership | AI Impact Summit host (Feb 2026) | N/A | N/A | ASEAN hub |
For a detailed breakdown of federal oversight, the NIST AI RMF, and state-level obligations, see our guide to AI governance in the United States.

ISO/IEC 42001 in the Indian Context
The Bureau of Indian Standards (BIS) has adopted ISO/IEC 42001:2023 as a national standard. The AI Governance Guidelines explicitly encourage self-certification, industry codes, and alignment with ISO/IEC 42001 as practical governance mechanisms.
This is a significant development for US companies. ISO 42001 certification provides documented evidence of responsible AI governance that Indian regulators, enterprise partners, and government procurement processes recognise. Given India’s reliance on existing laws rather than a dedicated AI compliance regime, ISO 42001 serves as the closest thing to a universal governance credential in the Indian market.
The standard’s alignment with international frameworks means that ISO 42001 certification simultaneously satisfies governance expectations across India, the US (through NIST AI RMF mapping), the EU (through AI Act Articles 9-15 alignment), and Singapore (through AI Verify crosswalk). For multinational operations, this cross-jurisdictional utility makes ISO 42001 the most efficient governance investment available.
The Copyright Question: Training Data in India
The AI Governance Guidelines acknowledge that copyright laws may need amendment to enable large-scale AI model training while protecting rights holders. India’s Copyright Act of 1957 does not contain a text and data mining exception comparable to those in the EU or Japan.
The guidelines advocate for a balanced approach: enabling innovation while ensuring adequate protections. The expected Digital India Act, which would comprehensively overhaul the IT Act of 2000, is likely to address AI and copyright as part of its broader digital governance framework. Public consultation on the Digital India Act is anticipated in 2026.
US companies training AI models on data that includes Indian-originated content should track these developments carefully. India’s position as the world’s largest internet market by user count makes its copyright framework commercially consequential for any company operating generative AI at scale.
Practical Steps for US Organisations
India’s governance model rewards organisations that demonstrate responsible AI practices voluntarily rather than waiting for mandatory requirements. Here is how US companies should approach the Indian market.
Ensure DPDPA compliance for AI data processing. Map all personal data flows involving Indian citizens. Implement explicit consent mechanisms, purpose limitation, and data minimisation. Prepare for Significant Data Fiduciary obligations if your data processing meets the threshold. Full compliance is required by May 2027.
Align with the seven sutras. Review your AI governance practices against India’s seven guiding principles. Pay particular attention to fairness requirements that address India-specific social context, including language diversity, caste, and gender.
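A common starting point for the fairness reviews mentioned above is a selection-rate comparison across demographic groups. The sketch below computes a disparate impact ratio, a widely used bias metric; it is offered as an illustrative technique, not a metric mandated by India's guidelines, and the groups and threshold are assumptions.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group, from (group, approved) pairs
    where approved is 1 or 0."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate;
    values well below 1.0 flag a potential fairness issue."""
    return min(rates.values()) / max(rates.values())

# Toy audit data: group labels and binary approval outcomes.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 3/4 approved
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]   # group B: 1/4 approved
rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)  # ~0.33, well below parity
```

In an India-facing audit, the group variable would cover the dimensions the guidelines single out, such as language, caste, and gender, rather than the placeholder labels used here.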
Implement ISO/IEC 42001. BIS adoption of the standard makes certification the most credible governance credential in the Indian market. It demonstrates alignment with the AI Governance Guidelines while providing cross-jurisdictional compliance utility.
Engage with sector-specific regulators. Financial services companies must align with RBI’s FREE-AI framework. Identify which sector regulators have jurisdiction over your India operations and incorporate their specific requirements into your governance processes.
Prepare for the Digital India Act. The expected overhaul of the IT Act will likely introduce risk-based classifications for digital platforms, enhanced intermediary obligations, and specific AI provisions. Monitor public consultations and prepare for new requirements.
Document AI training data provenance. Copyright law reform is coming. Maintain clear records of training data sources, particularly for content originating from Indian creators or users.
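The provenance records recommended above can be as simple as a per-item manifest entry. This sketch is one possible shape under stated assumptions: the field names and licence values are illustrative, and real pipelines would persist the entries to an auditable store.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_entry(path: str, source: str, licence: str, data: bytes) -> dict:
    """One manifest entry per training-data item: where it came from,
    under what terms, and a content hash so the record can be verified
    later if the copyright framework changes."""
    return {
        "path": path,
        "source": source,
        "licence": licence,
        "sha256": hashlib.sha256(data).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical example item; path, source, and licence are placeholders.
entry = provenance_entry("corpus/hi/article-0001.txt",
                         "licensed news archive", "CC-BY-4.0",
                         b"example document text")
print(json.dumps(entry, indent=2))
```

Keeping a content hash alongside the source and licence means that if India's copyright reform later requires demonstrating what a model was trained on, the records can be verified rather than merely asserted.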
Leverage the techno-legal approach. India’s guidelines encourage embedding compliance into AI system design through watermarking, bias detection, and content authentication. Organisations that adopt these technical controls proactively will find themselves ahead of regulatory expectations.
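Of the techno-legal controls named above, content authentication is the easiest to sketch. The example below uses a keyed HMAC tag so that AI-generated output can later be checked for tampering; it is a minimal illustration, not a production provenance scheme (standards such as C2PA cover signed metadata in full), and the key shown is a placeholder.

```python
import hashlib
import hmac

# Hypothetical signing key; in production this would come from a
# key-management system, never from source code.
SIGNING_KEY = b"demo-key-not-for-production"

def sign_content(content: bytes) -> str:
    """Attach a verifiable authentication tag to AI-generated content."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Detect tampering: recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_content(content), tag)

tag = sign_content(b"AI-generated summary")
verify_content(b"AI-generated summary", tag)  # True
verify_content(b"tampered summary", tag)      # False
```

Embedding a check like this at generation time, rather than bolting it on afterwards, is the essence of the techno-legal approach the guidelines describe.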
If you also operate in China or sell AI-enabled services into China, review our analysis of the China AI Governance Framework to understand how standards-driven governance differs from India’s voluntary model.
Looking Ahead
India’s AI governance is at an inflection point. The voluntary guidelines, the DPDPA’s phased implementation, the IndiaAI Safety Institute, and the momentum from the AI Impact Summit are creating a governance ecosystem that will become more structured and potentially more binding over time. The forthcoming Digital India Act, expected copyright reforms, and the Private Member’s AI Ethics Bill all suggest that India’s regulatory stance will evolve from its current light-touch posture.
For US organisations, the strategic calculation is straightforward. India is the world’s largest internet market, a major AI talent source, and an increasingly influential voice in global AI governance. ISO/IEC 42001 certification, DPDPA compliance preparation, and alignment with the seven sutras are investments that pay returns in market access, partner credibility, and reduced regulatory risk as India’s governance matures.
Ready to prepare your AI governance for the Indian market? Explore GAICC’s ISO/IEC 42001 certification programmes to build a management system recognised by Indian regulators and aligned with global governance standards.