In a shocking incident that underscores the growing menace of AI-generated deepfakes, a 19-year-old college student from Faridabad died by suicide after being allegedly blackmailed with morphed obscene images and videos created using artificial intelligence. The case has sparked widespread outrage and renewed debate over the urgent need to regulate misuse of AI technologies in India.
When AI Becomes a Weapon of Silence
In a grim reminder of how generative AI can be weaponised, Rahul Bharti (name as reported), a 19-year-old college student in Faridabad, Haryana, died by suicide after being allegedly blackmailed with AI-generated obscene images and videos of himself and his three sisters.
According to his father, Rahul had for about two weeks been receiving AI-morphed nude photos and videos of himself and his sisters after his phone was hacked. The perpetrator, identified in the complaint as “Sahil”, demanded ₹20,000 and threatened to release the content on social media if the amount was not paid. In his final conversations, the blackmailer reportedly also taunted Rahul to kill himself, naming specific substances and methods.
Rahul consumed sulpha tablets at around 7 pm on a Saturday; his family rushed him to hospital, but he died during treatment. The case is being investigated by the Old Faridabad police station. His mobile phone is being forensically examined, and at least two persons have been named in the FIR for abetment of suicide under Section 108 of the Bharatiya Nyaya Sanhita, 2023.
A New Frontier of Cyber-Extortion
What makes this case especially concerning for the AI governance community is the use of AI-generated deepfakes as the vehicle for extortion, rather than stolen personal data or genuine intimate imagery.
In effect:
- The hacker (or hackers) created morphed nude photos and videos of Rahul and his sisters using artificial intelligence.
- These were then used as leverage: pay up, or we’ll publish them and ruin your family’s reputation. The victim’s father says the accused also encouraged suicide.
- The victim’s behaviour changed: he withdrew, stopped eating, and isolated himself — classic signs of acute distress under prolonged harassment.
This is not just a personal tragedy; it is a profound warning about the governance risks in a world where deepfakes are cheap to produce, difficult to trace, and carry high emotional and reputational costs for the victim.
As the investigating officer put it: “This case is a serious example of cybercrime and the misuse of AI technology.”
A Pattern of Technological Exploitation
This tragedy adds to a growing list of deepfake-abuse incidents across India, from doctored celebrity images to fabricated revenge pornography. But this case is perhaps the most chilling example of how such technology can be weaponised to push a young person into despair.
Experts note that AI-driven image morphing and deepfake porn are emerging as a new category of cyber-extortion, targeting both men and women. The victims are often unaware that the images are fabricated and feel trapped by shame and fear of social stigma.
“Deepfake technology has taken cyber harassment to a new level,” says Delhi-based cyber law expert Advocate Rakshit Tandon. “In the past, blackmailers used hacked data or real photos. Now, with AI, they can create synthetic images that are almost impossible to disprove instantly. The psychological damage is immense.”
The Legal and Policy Gap
While Indian law criminalises extortion, online harassment, and publishing obscene material under the IT Act and IPC/BNS provisions, there is no explicit legislation addressing deepfake creation or distribution. The absence of targeted regulation leaves victims vulnerable and complicates prosecution.
The government’s recently announced amendments to India’s IT Rules, intended to curb deepfakes, could fill some of these gaps, but as this case shows, enforcement remains a major challenge. Identifying the creator of an AI-generated image requires specialised forensic tools and international cooperation, especially when data is hosted on foreign servers or encrypted platforms.
For police forces still catching up with conventional cybercrime, deepfake-related offences represent a new and technically demanding frontier. “Most police stations lack the capacity to verify AI-generated content. Without expert intervention, it’s difficult to establish that the images were fabricated,” says a senior cybercrime official in Gurugram.
The Human Cost of AI Misuse
Behind the legal complexities lies the human toll. Rahul’s death is not an isolated case of digital harassment; it reflects a broader crisis of mental health, social shame, and technological vulnerability.
The family told media outlets that Rahul had become unusually quiet and fearful in the days before the incident. “He stopped eating properly. He kept saying someone would destroy our lives,” his father said, fighting tears.
The incident also exposes the lack of institutional support for victims of digital blackmail. While cybercrime helplines and grievance portals exist, few victims have the emotional or legal strength to navigate the process, especially when fake sexual content is involved.
A Tragic Reminder
Rahul Bharti’s death is a tragic reminder that technology, when misused, can turn from innovation to intimidation in an instant. His case should not be remembered as another cybercrime statistic but as a wake-up call for policymakers, AI developers, and citizens alike.
As India moves toward becoming an AI-driven economy, safeguarding the dignity, privacy, and safety of individuals must become as much a priority as promoting digital innovation. The Faridabad case shows what is at stake: human lives caught in the crossfire between progress and its perils.
