INDIA RANKS 2nd GLOBALLY IN ACCESSING AI-GENERATED SEXUALLY EXPLICIT DEEPFAKE CONTENT (03.01.25)

Authored by Dr. Yatin Kathuria

A recent survey has revealed a concerning trend in the rise of deepfake content, with India ranking second globally in accessing websites that create sexually explicit deepfakes. The survey, conducted by The Yomiuri Shimbun, highlighted that more than half of these websites were launched in 2024, coinciding with a significant increase in the creation and sharing of deepfake content online.

INTRODUCTION

A survey conducted by The Yomiuri Shimbun revealed that the United States ranks first globally for accessing websites that create sexually explicit deepfakes, with approximately 59.73 million visits to these websites, followed by India with 24.57 million visits and Japan with 18.43 million visits over a one-year period from December 2023 to November 2024. Russia and Germany also saw substantial traffic, with 17.59 million and 16.86 million visits, respectively. The survey noted that most users access these deepfake websites via smartphones, indicating a widespread and easily accessible means of creating and sharing explicit content. This trend underscores the need for robust legal frameworks and technological solutions to combat the misuse of AI in creating harmful content.

An expert from Japan has raised alarms over the issue, advocating for stricter enforcement of rules and regulations to address the proliferation of deepfake content. The report identified 41 websites that allow users to upload images and modify them to create explicit deepfakes, with instructions primarily in English and Russian, and some in Japanese. A report by U.S. cybersecurity firm Security Hero revealed that 95,820 deepfake videos were identified online in 2023, a figure five and a half times higher than in 2019. Of these, 98% were sexually explicit, highlighting the rapid growth and concerning nature of deepfake content.

The Rise of AI Kiss-Generating Apps

One of the more surprising innovations in this space is the rise of AI kiss-generating apps, which use images of two people to create personalized kissing videos. These apps are revolutionizing the way we think about love, affection, and emotional connection in the digital age, enabling users to generate virtual kisses from photos and blending technology with romance in an entirely new way.

AI kiss-generating apps allow users to take pictures of themselves or a loved one and use these images to create kissing videos. Using sophisticated AI and deep-learning algorithms, these apps analyze the photos and recreate a simulated kiss between the two people featured. The kiss is not just a static image; it is brought to life in video format, allowing the two individuals to appear to interact intimately in a virtual setting.

Some of these apps even allow users to adjust the intensity, duration, and style of the kiss, creating a tailored experience that can feel as close to the real thing as a screen allows. Where physical interaction is not possible, AI kiss apps provide a new avenue for couples, friends, or even strangers to share an intimate, affectionate moment virtually.

While the rise of AI kiss apps is exciting, it also raises a number of ethical and privacy concerns. These concerns largely stem from the use of personal images and the creation of intimate moments in a virtual space. AI kiss apps typically require users to upload personal photos, which raises questions about privacy. How are these images stored? Are they shared with third parties? Is there a risk of misuse or theft? These questions must be addressed by developers to ensure that users’ data remains secure.

There is also the risk of these technologies being misused, particularly in situations where they are used to simulate affection or intimacy inappropriately. AI-generated kissing videos could be manipulated or used without consent, leading to potential exploitation or emotional harm.

Legal Framework Governing Deepfakes in India

The challenges posed by deepfakes highlight the need to regulate their use through legal mechanisms. In India, there is currently no specific legislation addressing deepfakes. However, India can take inspiration from countries like the United States, where federal and state lawmakers have introduced various measures to tackle the problem, including:

  • Malicious Deep Fake Prohibition Act (2019): This bill sought to criminalize the creation and distribution of deepfake content intended to deceive the public, especially during elections.
  • Identifying Outputs of Generative Adversarial Networks (IOGAN) Act (2020): Directed the National Science Foundation and the National Institute of Standards and Technology to support research on generative adversarial networks and on methods for identifying their manipulated or synthesized outputs.
  • DEEPFAKES Accountability Act (2023): Aimed to protect national security against the threats posed by deepfake technology and provide legal recourse to victims of harmful deepfakes.

In India, existing Information Technology laws provide some recourse:

  • Section 66E of the IT Act, 2000: Pertains to deepfake offenses involving the capturing, dissemination, or transmission of an individual’s images in mass media without consent, carrying penalties of imprisonment of up to three years, a fine of up to ₹2 lakh, or both.
  • Section 66D of the IT Act, 2000: Addresses the use of communication devices or computer resources with malicious intent to deceive or assume another person’s identity, with penalties of imprisonment of up to three years and a fine of up to ₹1 lakh.
  • Section 66F (Cyber Terrorism) of the IT Act, 2000: Can be invoked for deepfake cybercrimes that manipulate public sentiment and exert political influence.

Accountability of intermediaries (platforms where deepfake content is uploaded) is also addressed under:

  • Section 79 of the IT Act, 2000: Intermediaries are required to remove infringing content upon becoming aware of its existence or on receiving a judicial order. In Myspace Inc. v. Super Cassettes Industries Ltd. (2017), the Delhi High Court held that intermediaries must remove infringing material upon notice from private parties, even without a court order.
  • IT Rules of 2021: Social media intermediaries must designate individuals to monitor content and establish a grievance redressal mechanism.

Indian Government Directive on Deepfakes

In November 2023, the Indian government issued a directive to social media intermediaries to remove deepfakes within 24 hours of a complaint. This requirement is outlined in the IT Rules of 2021, and the directive came after deepfake videos of two actors surfaced online within a span of one week. This step is part of the ongoing efforts to curb the spread of harmful deepfake content and protect individuals’ rights and privacy.

Conclusion

The increasing use of AI to create explicit deepfakes is a global issue that requires immediate attention. With India and other countries witnessing significant traffic to these websites, it is crucial to implement stringent measures to prevent the misuse of AI and protect individuals from the harmful effects of deepfake content. Robust legal frameworks, technological solutions, and awareness campaigns are essential to address this growing concern.