India Moves to Regulate Deepfakes, Proposes Amendment to IT Rules, 2021 (25.10.25)

MeitY proposes amendment to IT Rules, 2021 to regulate deepfake media

The Indian government has taken a decisive step toward tightening control over synthetic media and AI-generated misinformation. On 22 October 2025, the Ministry of Electronics and Information Technology (MeitY) released a draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, seeking to bring “synthetically generated information” — commonly known as deepfakes — within the fold of legal regulation. The proposed rules aim to introduce the country’s first formal regulatory framework for synthetically generated information, a term that encompasses AI-created or manipulated media that appears authentic to the viewer.

The ministry’s draft, now open for public consultation until 6 November 2025, marks a crucial moment in India’s evolving digital-governance landscape. For the first time, the government has directly addressed the risks emerging from AI-generated content, signalling a move to balance innovation with accountability in the era of deep learning and algorithmic media creation.


Why Is the Amendment Required?

The ministry’s explanatory note acknowledges that the availability of generative AI tools has triggered a surge in deepfake videos, synthetic images, and impersonation-based content capable of manipulating elections, spreading misinformation, or damaging reputations. Such content, often indistinguishable from reality, has exposed significant gaps in the current regulatory system, which was designed for traditional user-generated data rather than machine-generated fabrications.

By extending the due-diligence obligations under the IT Rules to platforms that create, modify, or host synthetic content, the government seeks to pre-empt potential misuse while ensuring that India’s digital ecosystem remains, in MeitY’s words, “open, safe, trusted, and accountable.”


Inside the Draft: Defining Synthetic Information and Mandating Labels

The proposed amendment introduces, for the first time, a legal definition of “synthetically generated information”, covering any content created or altered using a computer resource that “reasonably appears to be authentic or true.” This seemingly simple phrasing carries broad implications: it brings AI-generated audio, video, and visual media squarely under the compliance umbrella of the IT Rules, 2021.

In practical terms, MeitY proposes a labelling and metadata regime requiring that all such synthetic content be clearly marked to distinguish it from authentic media. The draft mandates that these identifiers, whether visual or auditory, must cover at least 10 percent of the display area or, for audio, the initial 10 percent of its duration, ensuring users can easily recognise manipulated material. Platforms enabling the creation or modification of AI-based media will have to embed unique identifiers or metadata tags that remain visible or traceable across the lifecycle of the content.
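To make the 10 percent threshold concrete, here is a minimal Python sketch of how a platform might compute the smallest permissible label footprint for a video frame or an audio clip. The function names, data types, and the reading of “display area” as frame pixels are our own illustrations; the draft prescribes the threshold, not an implementation.

```python
from dataclasses import dataclass

MIN_COVERAGE = 0.10  # draft threshold: identifier must cover at least 10 percent


@dataclass
class VideoFrame:
    width_px: int
    height_px: int


def min_label_area_px(frame: VideoFrame) -> int:
    """Smallest display area (in pixels) a visual identifier may occupy."""
    return int(frame.width_px * frame.height_px * MIN_COVERAGE)


def min_audio_disclosure_s(duration_s: float) -> float:
    """Length of the initial audio segment an audible identifier must cover."""
    return duration_s * MIN_COVERAGE


if __name__ == "__main__":
    frame = VideoFrame(width_px=1920, height_px=1080)
    print(f"Visual label: at least {min_label_area_px(frame):,} px of a "
          f"{frame.width_px}x{frame.height_px} frame")
    print(f"Audio disclosure: first {min_audio_disclosure_s(60.0):.1f} s of a 60 s clip")
```

On a standard 1920x1080 frame, the rule translates to roughly 207,000 pixels of mandatory labelling, which is far from a discreet watermark.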

Furthermore, significant social media intermediaries (SSMIs)—large platforms such as video-sharing and messaging services—will need to implement technical verification systems to determine whether uploaded material is synthetically generated. Users may be required to declare whether their content is AI-created, after which the platform must attach an appropriate label or watermark. These measures together are intended to strengthen transparency, limit impersonation risks, and promote informed consumption of online content.
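How the declaration-and-verification duty might translate into platform engineering is easier to see in code. The sketch below, again illustrative rather than prescriptive, stubs out the flow the draft describes: the user declares, the platform runs its own check, and a label plus metadata tag is attached before publication. The types, field names, and the placeholder detector are all assumptions; the draft does not specify an API or a detection technique.

```python
from dataclasses import dataclass, field


@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool  # declaration collected at upload time
    metadata: dict = field(default_factory=dict)


def looks_synthetic(upload: Upload) -> bool:
    """Stub for the platform's own technical verification step.

    A production system would run provenance checks or ML detectors here;
    this placeholder simply trusts the user's declaration.
    """
    return upload.user_declared_synthetic


def apply_labelling(upload: Upload) -> Upload:
    """Attach a label and metadata tag if the content is, or appears, synthetic."""
    if upload.user_declared_synthetic or looks_synthetic(upload):
        upload.metadata["synthetic_content"] = True
        upload.metadata["label"] = "Synthetically generated information"
    return upload


if __name__ == "__main__":
    tagged = apply_labelling(Upload(content_id="vid-001", user_declared_synthetic=True))
    print(tagged.metadata)
```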


Deepfake Governance within the Larger AI Policy Landscape

The proposed amendments do not arrive in isolation. They align with India’s ongoing efforts to build a coherent framework for AI governance, complementing the forthcoming National AI Mission and the broader Digital India programme. While other jurisdictions, such as the European Union and the United States, debate dedicated AI acts or model bills, India appears to be embedding AI accountability within existing digital-governance structures rather than creating standalone legislation.

This integrated approach has advantages. It enables faster rule-making under established IT mechanisms and ensures that AI-generated content is subject to the same legal duties as user-generated content, including takedown and due-diligence requirements. However, it also blurs the boundary between AI-specific and platform-specific obligations, potentially complicating enforcement and compliance strategies for intermediaries.

For India’s rapidly growing AI ecosystem, this move sends a clear message: ethical design, transparency, and traceability are no longer optional. Companies building or deploying generative-AI systems must now consider compliance by design, embedding labelling features and audit trails into their products to avoid regulatory risk.


Intent Behind the Proposed Amendment: Accountability vs. Innovation

While the intent behind the draft amendment is clear, its implementation challenges may prove complex. Embedding immutable metadata or visible identifiers across multiple formats (text, audio, video) will demand significant technical upgrades from platforms, especially smaller intermediaries lacking advanced AI-detection tools.

The threat of losing “safe harbour” protection under Section 79 of the Information Technology Act, 2000 for non-compliance could drive intermediaries to adopt over-cautious moderation practices, risking over-blocking or removal of legitimate creative content. Civil-liberties experts have already cautioned that without procedural safeguards and clear appeal mechanisms, stricter takedown provisions could inadvertently chill free expression online.

Moreover, the draft’s definition of synthetic information, while broad, centres on visual and audio media, leaving ambiguity around text-based generative outputs—such as AI-written essays, news, or code. As generative-AI tools diversify, regulators may soon face pressure to clarify how these textual outputs fit within the proposed framework.

Another unresolved issue lies in cross-platform interoperability. Global intermediaries operate across multiple jurisdictions, each with different labelling and traceability standards. India’s insistence on visible identifiers covering 10 percent of display space may not align with technical frameworks used internationally, creating compliance friction for global platforms.


A Step Towards Transparent AI Use — If Executed Right

Despite these concerns, the proposed amendment represents an important normative shift. It acknowledges that AI systems have moved beyond mere innovation and into the domain of social impact and public risk. By introducing traceability and disclosure obligations, the government is attempting to rebuild digital trust at a time when misinformation, impersonation, and algorithmic manipulation threaten the integrity of online discourse.

The amendment also follows a global trend: nations are increasingly demanding machine-readable provenance markers for AI-generated content, echoing initiatives such as the Content Authenticity Initiative and the EU AI Act’s labelling obligations. If implemented effectively, India’s draft could serve as a regulatory model for other emerging economies, demonstrating how AI accountability can be pursued without stalling innovation.
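For readers curious what a machine-readable provenance marker looks like in practice, the fragment below sketches a check against a hypothetical JSON manifest. The key names are invented stand-ins; real schemes such as the C2PA manifests underpinning the Content Authenticity Initiative use signed, standardised structures rather than this toy layout.

```python
import json


def declares_ai_generation(sidecar_json: str) -> bool:
    """Return True if a (hypothetical) provenance manifest flags AI generation."""
    try:
        manifest = json.loads(sidecar_json)
    except json.JSONDecodeError:
        return False
    return manifest.get("provenance", {}).get("generator_type") == "ai"


if __name__ == "__main__":
    sample = json.dumps({"provenance": {"generator_type": "ai", "tool": "example-model"}})
    print(declares_ai_generation(sample))  # True
```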


Conclusion

The draft amendments to the IT Rules mark a critical inflection point for India’s digital governance framework. Up until now, content regulation, intermediary liability and platform enforcement have often been reactive and foisted onto platforms through ad-hoc notices and advisories. With these changes, we see a proactive, technology-aware, and regulation-embedded turn, one that acknowledges generative AI not just as a tool but as a source of regulatory risk.

By defining synthetic media, mandating labelling, embedding identifiers and tightening takedown regimes, the government is signalling that the era of unchecked online intermediaries and untraceable AI-generated content is nearing its end. For the tech ecosystem, this means compliance is moving from desirable to mandatory. For users, it promises more transparency (though not necessarily more simplicity). For regulators, it underscores the challenge of keeping pace with AI’s rapid evolution.

Yet the regulatory environment must walk a fine line: ensuring accountability without stifling innovation; protecting rights without enabling over-censorship; building oversight without creating chilling effects. The next few months, spanning stakeholder feedback, the finalisation of rules, and implementation timelines, will determine whether this shift yields a robust, balanced regime or simply more compliance burdens.

For the AI and law community that we at JustAI closely monitor, these developments presage several lines of enquiry: the efficacy of labelling regimes, the architecture of metadata tracing, the liability contours for AI-tool providers, and the impact on generative-AI deployment in India. In that sense, the rules are more than regulatory change: they are a test-bed for how a major democracy seeks to govern the next frontier of synthetic content and platform accountability.

We will continue to dissect the final rules, monitor the consultation outcomes, and track how platforms respond. The era of deepfakes may not be over, but for now, the field of play is changing.

Find the official notice and PDF of the draft amendments here.