In a defining step toward shaping the future of responsible technology, the Government of India has unveiled the India AI Governance Guidelines, a comprehensive blueprint designed to ensure that artificial intelligence (AI) drives innovation safely, inclusively, and responsibly.
Released by the Ministry of Electronics and Information Technology (MeitY), the Guidelines mark a pivotal moment in India’s AI journey. They position the country as a global thought leader in balancing AI innovation with accountability and ethics, aligning with the vision of Viksit Bharat 2047 and the national principle of AI for All.
Seven Sutras: India’s Ethical Foundation for AI
At the heart of the document lie seven guiding sutras:
- Trust is the Foundation – Building confidence across developers, deployers, and citizens.
- People First – Ensuring human-centric design, oversight, and empowerment.
- Innovation over Restraint – Encouraging responsible progress over excessive caution.
- Fairness & Equity – Preventing bias and promoting inclusive growth.
- Accountability – Defining clear responsibilities across the AI value chain.
- Understandable by Design – Promoting transparency and explainability.
- Safety, Resilience & Sustainability – Building systems that are robust and environmentally conscious.
These foundational principles serve as India’s ethical compass, promoting human-centric AI systems that are transparent, equitable, and trustworthy. They seek to embed human oversight, social inclusion, and environmental consciousness into the design and deployment of AI technologies.
A Three-Domain Framework
The Guidelines operate across three key domains: Enablement, Regulation, and Oversight, supported by six strategic pillars:
- Infrastructure: Expanding access to GPUs, national datasets, and leveraging Digital Public Infrastructure (DPI).
- Capacity Building: Enhancing AI literacy, upskilling citizens, and training regulators.
- Policy & Regulation: Reviewing existing laws to ensure agility and balance in rulemaking.
- Risk Mitigation: Developing India-specific frameworks and a national AI Incidents Database.
- Accountability: Establishing graded liability systems and grievance redressal mechanisms.
- Institutions: Creating the AI Governance Group (AIGG), Technology & Policy Expert Committee (TPEC), and AI Safety Institute (AISI) to oversee implementation.
Together, these form a “whole-of-government” approach to AI governance, uniting ministries, regulators, and experts under one coordinated vision. India’s existing strengths in Digital Public Infrastructure, such as Aadhaar, UPI, and DigiLocker, are placed at the centre of this approach, providing a scalable and secure foundation for AI development and adoption.
One of the most pragmatic aspects of the framework is its clear stance that India does not require a standalone AI law—at least not yet. Instead, it argues that the current legal ecosystem, including the Information Technology Act, the Digital Personal Data Protection Act, and consumer protection and criminal laws, can already address most AI-related risks. These range from deepfakes and data misuse to algorithmic bias and misinformation. The report nonetheless calls for targeted legal amendments to clarify the classification of AI actors, define liability within the AI value chain, and address contentious copyright issues related to AI training on protected data.
This “no new law yet” approach reflects India’s belief in agile regulation, one that evolves with technology rather than constraining it. It allows innovation to flourish while maintaining the flexibility to intervene when risks become apparent.
In keeping with India’s legacy of technology-enabled governance, the Guidelines adopt a techno-legal approach. Concepts such as DEPA for AI Training, privacy-preserving architectures, and watermarking for content authentication are proposed to embed compliance, auditability, and accountability directly into the design of AI systems. These measures echo India’s broader digital philosophy: that law and technology should reinforce one another to ensure trust at scale.
A central focus of the framework is risk mitigation. The Guidelines categorise AI risks into areas such as malicious use, bias and discrimination, transparency failures, systemic threats, and national security concerns. To address these, they propose a national AI Incidents Reporting Mechanism to collect and analyse data on real-world harms, supported by voluntary compliance frameworks and human oversight requirements in critical sectors.
Institutionally, the Guidelines envision a coordinated governance ecosystem anchored by three key entities: the AI Governance Group (AIGG) to drive national policy and coordination; the Technology and Policy Expert Committee (TPEC) to provide strategic and technical guidance; and the AI Safety Institute (AISI) to conduct safety testing, risk research, and international collaboration. These bodies will work alongside regulators such as the RBI, SEBI, TRAI, and CCI to ensure accountability while supporting innovation across sectors.
Globally, the Guidelines highlight the growing importance of AI diplomacy as a pillar of India’s foreign policy. They call for deeper engagement in multilateral forums such as the G20, OECD, and UNESCO, and for India to lead by example through the AI Impact Summit 2026. The report also acknowledges emerging challenges from next-generation “agentic” AI systems that can act autonomously, recommending foresight research and horizon scanning to ensure governance frameworks remain future-ready.
The Action Plan outlined in the report sets out a phased approach. In the short term, the focus will be on establishing institutional frameworks, developing risk assessment models, and launching public awareness initiatives. The medium-term priorities include piloting regulatory sandboxes and amending existing laws, while the long-term goal is to draft AI-specific legislation if emerging risks demand it.
The India AI Governance Guidelines mark a shift from reactive regulation to anticipatory governance, one that blends ethics, innovation, and foresight. By declining, for now, to enact a new AI law and instead strengthening existing frameworks through coordinated oversight, India has signalled that responsible innovation does not mean restriction. It means readiness.
