INDIA’S BOLD MOVE: GOVERNMENT PERMISSION REQUIRED FOR AI MODEL DEPLOYMENT, STRICTER RULES FOR SYNTHETIC MEDIA

Authored by: Ms. Tanima Bhatia

Introduction

In a significant development, India’s Ministry of Electronics and Information Technology (MeitY) has issued an advisory mandating that all under-testing or unreliable Artificial Intelligence (AI) models must obtain explicit Government permission before deployment. This move comes in the wake of concerns about biased AI responses, particularly highlighted by recent incidents involving Google’s Gemini AI. The advisory, issued on Friday, also emphasizes the need for platforms and intermediaries to label synthetically created media and text, adding a unique identifier or metadata for easy identification. Let’s delve into the details and implications of this groundbreaking decision.

Stricter regulations on AI models:

The advisory outlines a clear directive that any AI model, including Large Language Models (LLMs), Generative AI, or algorithms labelled as “Under-testing” or “Unreliable”, must secure explicit approval from the Indian Government before becoming accessible to users. This reflects the Government’s commitment to ensuring the responsible deployment of AI technologies, with a particular focus on preventing biases, discrimination, or threats to electoral processes.

Measures to prevent bias and discrimination:

Highlighting the importance of ethical AI deployment, the advisory instructs intermediaries to ensure that their AI tools do not allow for bias or discrimination. This aligns with the broader goal of fostering a fair and transparent online environment. Notably, the Government is keen on preventing any compromise to the integrity of electoral processes, emphasizing the need for responsible AI practices among technology platforms.

Labelling of synthetically created content:

To address concerns surrounding synthetically created media and text, MeitY has directed intermediaries to label such content or embed it with a unique identifier or metadata. This step aims to make it easier to identify artificially generated content, which can be prone to misinformation or manipulation. The advisory requires immediate compliance, with intermediaries expected to submit an “Action Taken-Cum-Status Report” within 15 days.
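As an illustration only: the advisory does not prescribe a specific metadata schema, so the field names, hashing scheme, and function below are assumptions. A platform might attach a unique identifier and basic provenance metadata to generated content along these lines:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def label_synthetic_content(content: str, generator: str) -> dict:
    """Attach a hypothetical provenance label to AI-generated text.

    All field names here are illustrative assumptions, not a format
    mandated by the MeitY advisory.
    """
    return {
        "content": content,
        "synthetic": True,                                # flags AI-generated content
        "label_id": str(uuid.uuid4()),                    # unique identifier
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "generator": generator,                           # tool used to create the content
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

record = label_synthetic_content("An AI-written caption.", "example-llm-v1")
print(json.dumps(record, indent=2))
```

In practice, labelling images or video would more likely use embedded metadata or watermarking standards rather than a JSON sidecar like this, but the principle is the same: the identifier travels with the content so that its synthetic origin can be traced.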

Google’s Gemini AI incident:

The advisory follows a recent incident involving Google’s Gemini AI, where a response to a political query raised concerns about compliance with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, highlighted the need for platforms to ensure proper training of AI models, emphasizing that racial and other biases will not be tolerated.

Strategic communication with users:

In line with previous discussions with industry stakeholders, the advisory urges intermediaries and platforms to clearly inform users about the consequences of engaging with unlawful information on their platforms. These include disabling access to non-compliant information, suspension or termination of user accounts, and legal consequences. The Government aims to enhance user awareness and accountability through transparent communication.

Ongoing amendments to IT rules:

The advisory aligns with ongoing considerations within MeitY to amend the IT Rules, potentially requiring intermediaries to remind users of disallowed content every 15 days. Minister Chandrasekhar had previously hinted at possible amendments related to algorithmic bias within the context of the Digital India Act, suggesting that changes to the IT Rules would follow if new legislation takes time to materialize.

Addressing defects and synthetically created content:

MeitY’s advisory also addresses the emerging challenge of deepfakes, urging intermediaries to label or embed synthetically created content with unique identifiers. The goal is to identify the original creator and the tools used to generate such content. Although the advisory does not explicitly define “deepfake”, it emphasizes the importance of preventing the hosting of such content.

Ongoing engagement with tech companies:

The Government’s proactive stance on AI deployment and deepfakes is evident through multiple meetings with social media and technology companies. Ministers Ashwini Vaishnaw and Rajeev Chandrasekhar have engaged with industry stakeholders on these issues, emphasizing the need for responsible practices and addressing concerns related to misinformation and AI-powered content.

Conclusion:

India’s recent advisory on AI model deployment and synthetic content marks a significant step towards regulating emerging technologies responsibly. By requiring explicit Government permission for under-tested AI models and advocating for transparent communication with users, the Government aims to create a safer and more accountable online environment. As technology continues to advance, such regulatory measures have become crucial in ensuring the ethical and responsible use of AI in the digital landscape.