In a move that could reshape how technology intersects with justice, New York has become the first U.S. state to roll out a comprehensive policy regulating artificial intelligence in its court system. Announced on October 10 by Chief Administrative Judge Joseph A. Zayas, the rulebook sets out how judges and court personnel can and cannot use AI, marking a major shift for a judiciary grappling with the promises and pitfalls of generative technology. From ChatGPT to Microsoft Copilot, the policy outlines what is permissible, what is off-limits, and why human judgment must always remain in control.
“The use of AI requires strict adherence to the court system’s fundamental and longstanding values, relying on our integrity, attention to detail, and tireless scrutiny and fairness. While AI can enhance productivity, it must be utilized with great care. It is not designed to replace human judgment, discretion, or decision-making.”
Policy Origins and Rationale
The new AI policy is the product of extensive study by a committee that Judge Zayas convened in April 2024. The committee examined the complex challenges posed by AI use in judicial contexts, including accuracy, bias, ethical implications, and security. Judge Zayas emphasized that the policy "provides a strong base, guiding the court system on how to best leverage AI's potential to help fulfill the judiciary's core mission," while warning that AI should never replace human judgment, discretion, or decision-making.
Key Principles and Scope
- Universal Application: The policy applies to all judges, justices, and nonjudicial employees, covering any work performed on court-owned or personal devices when related to court business.
- Guardrails for Generative AI: It sets out important guardrails around fairness, accountability, and security, particularly focusing on generative AI—which can produce human-like text and documents based on user prompts.
- Mandatory Training: All court personnel with computer access must complete initial and ongoing AI training to keep up with technological advances and ensure responsible use.
Approved Use Cases and Tools
The policy clearly distinguishes between public and private AI models. Only UCS-approved AI tools may be used, with a strong preference for platforms that operate within a private, secure environment maintained by the court system, such as Microsoft Azure AI Services, Microsoft 365 Copilot Chat, and GitHub Copilot for enterprise use. Use of public generative models (like ChatGPT) is allowed only under strict conditions, and entering confidential or sensitive information into such platforms is strictly prohibited.
Generative AI may assist in drafting documents, summarizing large datasets, improving language for public communication, and generating ideas for administrative tasks. However, all output must be thoroughly reviewed for accuracy, inclusivity, and the absence of bias or harmful stereotypes.
Risks, Safeguards, and Restrictions
- Accuracy and Reliability: AI-generated content, particularly from generative models, is prone to inaccuracy, hallucinations, and even fabrication of facts or legal citations. The policy mandates independent verification and careful review of any AI-generated text.
- Bias and Inclusivity: AI systems may perpetuate biases present in their training data. The policy instructs users to ensure output does not reflect unfair bias, stereotypes, or prejudice.
- Confidentiality: Court personnel are forbidden from inputting confidential, privileged, or personally identifiable information into any generative AI system that is not a private model controlled by the court system. Documents filed with the courts, even if publicly accessible, also cannot be uploaded to generative AI platforms outside UCS oversight.
- Ethical Oversight: Judges and staff are reminded that their use of AI must always align with the professional ethical obligations incumbent on their roles. The policy underscores that AI tools must never be engaged in decision-making functions ethically reserved for judges themselves.
Training, Oversight, and Enforcement
AI use within the court system is subject to rigorous ongoing training requirements. Initial AI training is mandatory before any generative AI product is accessible for official court use, and continuing education ensures personnel remain informed of new risks and developments. Only approved AI products may be installed on court-owned devices, and any software requiring paid or subscription access must be provided through official channels.
The court system retains discretion over actual AI use: approval of an AI tool does not guarantee suitability for every task, and supervisors can further limit access as needed.
Context and National Impact
New York’s move places it alongside states like California, Delaware, Illinois, and Arizona, which have also issued court policies on AI in recent years. This wave of regulation comes amid a growing number of cases in which lawyers and other court personnel have faced fines and sanctions for the misuse of AI, especially for submitting documents containing fabricated or inaccurate citations. Nationwide, the legal sector is rapidly adapting, with both courts and state bars investing in AI education for judges, lawyers, and court staff.
Toward Ethical, Responsible AI in Justice
New York’s interim AI policy is a major milestone, emphasizing technological advancement in tandem with legal tradition and ethical responsibility. With ongoing review and a commitment to continuous learning, the state is setting a standard for responsible AI integration—one that other states and legal systems are likely to watch closely.
First Deputy Chief Administrative Judge Norman St. George added: “We have a duty to carefully explore and fully understand AI’s strengths and limitations, so that we may use it responsibly, intelligently, and optimally, in furthering the delivery of justice across the State.”
The policy both addresses immediate risks, such as confidentiality breaches and fabricated legal citations, and lays a foundation for longer-term, adaptive governance as AI technologies evolve and their role in the legal system deepens.