When judges begin receiving official directions on how to use ChatGPT, the message is clear: artificial intelligence has entered the courtroom, though not without rules.
The Judiciary of England and Wales has issued detailed Artificial Intelligence (AI) Guidance for Judicial Office Holders, outlining when and how AI can be used by judges, tribunal members, and their staff. Released on 31 October 2025, the new guidance replaces the April edition and sets a firmer tone on confidentiality, accuracy, and personal accountability.
Far from an endorsement of AI, the document reads as a cautionary framework—acknowledging the technology’s usefulness while warning against its uncritical adoption. Its central principle remains uncompromising: no technological tool may compromise the integrity of justice.
Guardrails for an AI-Assisted Judiciary
The 2025 guidance marks a turning point in the judiciary’s engagement with technology. While courts in the UK already use AI-assisted tools in document review and case management, this is the first time judicial officers have been given formal boundaries for using generative AI platforms such as ChatGPT, Google Gemini, or Meta AI.
The guidance calls for a sober understanding of how these systems actually work. Large Language Models, it explains, “do not retrieve facts from verified databases” but generate sentences based on statistical prediction. This makes them capable of fluency but not of truth.
Put simply, AI chatbots can write persuasively—and be wrong. Judicial officers are advised to treat their output as “non-definitive” and always verify facts through official legal sources. The document even cautions that some AI tools display an “Americanised view of law,” drawing heavily from US legal data rather than UK jurisprudence.
Confidentiality Is Non-Negotiable
The guidance is blunt on privacy: do not enter anything confidential into an AI tool.
Judicial officers are reminded that information typed into public chatbots “should be seen as published to all the world.” Even when chat history is turned off, it should be assumed that data may still be retained or disclosed.
To prevent breaches, judges are instructed to disable data-sharing features, use only official devices, and report any accidental disclosure as a data incident. The warning extends even to mobile app permissions: officers are urged to deny AI apps access to contacts, files, and other device data.
This insistence on privacy reflects a deeper concern: if judicial data fuels commercial AI systems, public confidence in impartial justice could erode beyond repair.
Accuracy, Bias, and Human Judgment
The new framework goes beyond cautionary notes—it reiterates a judicial philosophy: accountability cannot be automated. Judges remain personally responsible for everything issued in their name, even if AI assisted in drafting or research.
The guidance describes “hallucinations”—a term borrowed from AI research—as a real and recurring risk. Fabricated citations, misquoted legislation, or invented precedents are not rare, it warns. As such, every AI-assisted output must be verified manually before use.
The document also raises the issue of bias, noting that AI systems inherit distortions from the data they are trained on. Judicial officers are urged to remain alert to cultural and demographic bias, referencing the Equal Treatment Bench Book as a resource for ensuring fairness when dealing with AI-generated text or evidence.
Anticipating AI Use in Litigation
In a prescient move, the guidance acknowledges that AI-generated material is already finding its way into courtrooms—sometimes through legal professionals, and increasingly through self-represented litigants.
Judges are encouraged to make inquiries when submissions appear to contain AI-generated text, which is often identifiable by American spellings, irrelevant case citations, or highly polished but inaccurate prose. In such cases, litigants should be reminded that they remain responsible for the accuracy of what they submit.
The guidance also highlights the emergence of “white text” (hidden machine-readable prompts) and deepfakes as new threats to judicial integrity, warning that forged or manipulated materials may now reach courts in digital disguise.
Where AI Can and Cannot “Assist”
The guidance draws a clear line between permissible and prohibited uses of AI in judicial work.
AI tools may be used for:
- Drafting administrative communications (emails, memos, presentations).
- Summarising large bodies of text for internal review.
- Assisting with meeting transcription and scheduling.
But they must not be used for:
- Conducting legal research to discover new information.
- Analysing legal questions or drafting judicial reasoning.
- Reviewing evidence without direct human engagement.
In short, AI can help judges manage workload—but not make decisions. It can summarise, not substitute.
A Judiciary that Leads, Not Follows
While global debates around AI in justice often focus on automation, the UK judiciary’s approach is deliberately conservative, rooted in human oversight, ethical prudence, and public transparency.
By publicly releasing this document, the judiciary signals that its legitimacy depends not only on adopting technology, but on constraining it. The goal is not to reject AI, but to domesticate it—to ensure that efficiency never eclipses independence.
The closing line of the guidance encapsulates this balance:
“Judges must always read the underlying documents. AI tools may assist, but they cannot replace direct judicial engagement with evidence.”
In an age where algorithms increasingly mediate knowledge, this guidance reaffirms a principle older than the law itself: justice must be humanly reasoned, and humanly accountable.
