INDIA TO UNVEIL ITS AI GOVERNANCE FRAMEWORK BY SEPTEMBER 28, 2025: MINISTER ASHWINI VAISHNAW (20.09.25)

Minister Shri Ashwini Vaishnaw announced that India will unveil a national AI governance framework by September 28, 2025, with the aim of defining “safety boundaries” around AI, putting in place checks and balances to protect citizens from AI-related harms, and aligning India’s domestic rules with emerging global norms.


“We are focusing on human-centric and inclusive growth, making technology accessible to all, and creating a governance framework acceptable to large parts of the world.” - Minister Shri Ashwini Vaishnaw


The announcement came at the inauguration of the logo and a key flagship initiative for the AI Summit 2026, where the Minister of Electronics and Information Technology, Shri Ashwini Vaishnaw, said the framework would define “safety boundaries” around AI, put in place checks and balances to protect citizens from AI-related harms, and align India’s domestic rules with emerging global norms.

Amid growing concerns around AI and global calls for clearer regulation of artificial intelligence systems, particularly generative AI and deepfakes, as well as concerns about misuse, bias, transparency, and accountability, this framework will set the tone for how India intends to regulate artificial intelligence. The framework will work in tandem with other policy developments, including administrative rules under the Digital Personal Data Protection (DPDP) Act and the regulation of online gaming.


WHAT WE KNOW SO FAR

Timeline & Scope

  • The framework is to be released by September 28, 2025, as stated by the Minister of Electronics and IT, Shri Ashwini Vaishnaw.
  • It will not be fully prescriptive. Some portions may later be converted into law; others will remain under regulatory or administrative practice.
  • Alongside the AI framework, administrative rules for the Digital Personal Data Protection Act (DPDP Act, 2023) will also be notified by that date.
  • Rules under the Promotion and Regulation of Online Gaming Act, 2025 are also expected, following additional stakeholder consultations.

KEY PRIORITIES

From the public statements:

  • Citizen safety & AI harm: The framework aims to clearly define boundaries where AI might cause harm, and establish mechanisms to deal with such harms.
  • Checks and balances: Oversight, transparency, auditing, and the management of misuse risks (e.g. deepfakes) are to be addressed.
  • Balancing innovation with regulation: Authorities say the framework will allow innovation but ensure responsible development and deployment.

CONSULTATIONS & PROCESS

  • Over 3,000 consultations have reportedly been conducted in preparation for the governance framework.
  • The Ministry of Electronics & IT (MeitY), along with the Principal Scientific Adviser to the Government of India, has been involved in crafting the framework.


WHY THIS MATTERS

Responding to Emerging Risks

With AI systems (especially generative models and synthetic media) becoming more capable and accessible, the potential for misuse, including misinformation, manipulation, discrimination, and violations of privacy, has increased. India, like many countries, is facing instances of deepfakes and AI-generated synthetic content, which pose serious challenges to personal reputation, public trust, and law enforcement. The framework attempts to pre-empt some of these risks.

Aligning with Global Norms

India’s move to define safety boundaries and checks and balances echoes international trends such as the EU’s AI Act and the OECD’s AI Principles. By signalling its willingness to align with global governance norms, India is positioning itself as a responsible player in the AI space on the international stage.

Legal and Regulatory Implications

Many parts of the framework will likely remain non-binding or advisory initially, but components related to safety and citizen protection may move into law. This means that AI developers, platforms, and service providers will have to anticipate evolving legal obligations.


WHAT IS STILL UNCLEAR?

While the broad outlines are visible, there remain several important questions:

  • Definition of “AI harm”: Which kinds of harms will be covered (reputational, physical, psychological, economic), and how will thresholds be defined?
  • Scope of applicability: Will the framework apply to the private sector, the public sector, cross-border AI deployments, and foreign models used in India?
  • Enforcement mechanisms: Which bodies will enforce the framework? What penalties or redress will exist?
  • Interaction with existing laws: The DPDP Act handles personal data; how will it interact with the new AI framework, especially on issues such as synthetic data, model training, and algorithmic bias?
  • Transparency and auditability: Will there be mandatory disclosure for model architectures, training data provenance, or model behaviour?
  • Global coordination: Since AI technologies cross borders, how will India collaborate with other nations and international bodies to ensure harmonised standards, particularly for APIs and open-source models?

BROADER CONTEXT: INDIA’S AI GOVERNANCE ECOSYSTEM

This initiative doesn’t come in isolation. India has been progressively building its AI governance and policy ecosystem:

  • The DPDP Act, 2023, which addresses data protection and privacy, is already in place.
  • The Promotion and Regulation of Online Gaming Act, 2025, recently passed, is another sector-specific law that addresses concerns such as misuse, addiction, psychological harm, and financial risk.
  • Under the IndiaAI Mission, MeitY has been working on AI labs, infrastructure, data labs, and other initiatives aimed at building capacity, skilling, and safe development practices. The new governance framework is therefore intended to slot into a larger policy architecture that balances enabling AI innovation (labs, compute, data, research) with oversight.

CONCLUSION

India is poised at a crucial juncture in its AI policy journey. By launching an AI governance framework by September 28, the government is signalling seriousness about managing the risks of AI while retaining space for innovation. The devil, however, will be in the details: how “AI harm” is defined, how oversight is structured, which rules become law, and how the framework weaves into existing laws and international norms.

For JustAI readers, this is a moment to prepare: whether in strategy, research, legal compliance, or public advocacy. As the framework becomes public, the next few weeks will reveal whether India’s approach can balance safety and innovation with clarity and enforceability.