India’s Supreme Court has flagged its first major case of alleged AI misuse in court filings, after a litigant submitted a rejoinder packed with fabricated case laws in a high‑stakes insolvency dispute between Omkara Assets Reconstruction and Gstaad Hotels. The episode has intensified judicial and global concerns that unverified AI‑generated content could distort legal reasoning and undermine trust in the justice system.
What happened in the Supreme Court?
The controversy erupted when a rejoinder filed on behalf of Gstaad Hotels promoter Deepak Raheja allegedly cited “hundreds” of non‑existent or distorted precedents, many of them suspected to be AI‑generated. Senior advocate Neeraj Kishan Kaul, appearing for Omkara Assets Reconstruction, told the Bench of Justices Dipankar Datta and A G Masih that several cited judgments “do not exist at all”, and that some real cases were misreported with fabricated questions of law to suit the litigant’s position.
Senior advocate C A Sundaram, representing Raheja, admitted in court that the rejoinder had been prepared with the help of AI tools and described the situation as “a terrible error”, while the advocate‑on‑record tendered an unconditional apology and sought to withdraw the document.
The Bench refused to simply ignore the issue, remarking that it “cannot be brushed aside”, even as it decided to proceed with the corporate insolvency appeal on its merits rather than summarily shutting out the erring party.
The underlying corporate dispute
The AI controversy is layered onto an ongoing insolvency battle in which Omkara Assets Reconstruction has proceeded against Gstaad Hotels and related entities under Section 7 of the Insolvency and Bankruptcy Code (IBC). The National Company Law Tribunal (NCLT) in Mumbai had earlier admitted Omkara’s petitions, and the National Company Law Appellate Tribunal (NCLAT) subsequently upheld this admission, allowing insolvency proceedings to move forward against Gstaad Hotels and Neo Capricorn Plaza.
It is this NCLAT order that has been challenged before the Supreme Court, which is now simultaneously examining the merits of the insolvency case and the implications of the allegedly AI‑fabricated case laws used in the rejoinder. The outcome could shape not only the fate of the hotel group and its creditors, but also signal how India’s top court expects AI‑assisted pleadings to be handled in future disputes.
Why do AI ‘hallucinations’ alarm judges?
Senior counsel for Omkara warned that it is “not about AI per se, but about fabrication of case law”, stressing that judges already handle 70–80 matters a day and cannot realistically cross‑check every citation if lawyers start relying on unchecked AI outputs. The fear is that if even a few non‑existent precedents slip through and influence reasoning, the consequences could be disastrous for the consistency and credibility of the legal system.
Judges and legal commentators in India have recently cautioned that generative AI tools are prone to “hallucinations”, where the system confidently produces plausible‑sounding but false judgments, citations or factual narratives. In public remarks, members of the higher judiciary have repeatedly underlined that AI can assist in research or translation but cannot replace human legal analysis, and that any AI‑assisted material must be independently verified by lawyers before being placed on record.
Global pattern of fake AI case citations
India’s episode fits a growing international pattern in which courts are confronting fabricated citations generated by AI‑powered tools. In the United States, the widely discussed 2023 case Mata v. Avianca in a New York federal court saw lawyers sanctioned after they filed a brief containing fake judicial decisions produced by a generative AI system, prompting judges to warn that all AI‑assisted research must be thoroughly verified under Rule 11 obligations. Follow‑up commentary and later cases in US courts suggest that despite those sanctions, AI‑related negligence has persisted, with some filings and even expert reports still carrying hallucinated case references.
Legal ethics bodies and bar associations across jurisdictions have begun issuing guidance that when lawyers use AI tools, they remain fully responsible for the accuracy of citations, the confidentiality of client data, and the avoidance of misleading the court. Several US judges now require counsel to disclose whether AI was used in drafting or research, and to certify that all authorities cited in AI‑assisted documents have been checked against official databases.
How does this interact with AI rules for courts?
India’s judiciary has cautiously embraced AI for back‑office tasks such as research, transcription and translation, but policy efforts so far have consistently framed these systems as assistive, not decision‑making, tools. Initiatives like the Supreme Court’s AI committee and High Court‑level policies emphasise that any AI use must be subject to human oversight, with judges and lawyers remaining accountable for final reasoning, orders and judgments.
The Supreme Court’s decision to take the Gstaad–Omkara “AI reply” seriously, without yet shutting the door on the litigant’s appeal, reflects a balancing act: signalling zero tolerance for fabricated case law while acknowledging that the core dispute still deserves adjudication on evidence and valid precedent. As India drafts broader AI governance frameworks and the legal community experiments with generative tools, this case could become a reference point for future advisories on how far AI can be used in pleadings and what verification standards advocates must meet.
As the Supreme Court resumes its hearing in the Omkara–Gstaad matter, the proceedings will be closely watched by litigants, lawyers and technologists alike as an early test of how India’s highest court responds when AI‑generated “legal research” crosses the line into fabrication.
