03/30/2026 | Press release
More than half of responding judges report using at least one AI tool in their judicial work
Shanice Harris
EVANSTON, Ill. - A new Northwestern study surveying federal judges across the U.S. about their use of, and outlook on, artificial intelligence in and outside of the courtroom found that more than 60% of responding judges reported using at least one AI tool in their judicial work. While judges reported broad adoption of AI tools, only 22.4% reported using them on a weekly or daily basis.
The research team, led by Daniel Linna, director of Law and Technology Initiatives and senior lecturer at Northwestern Pritzker Law, and V.S. Subrahmanian, Walter P. Murphy Professor of Computer Science and director of the Northwestern Security & AI Lab at McCormick School of Engineering, conducted a stratified random sample survey of bankruptcy, magistrate, district court and court of appeals judges. The Qualtrics survey asked the participants about their current use of AI tools, judicial use cases and perspectives on AI's potential impact on the judiciary.
"To the best of our knowledge, this study is the first based on a random sample of federal judges regarding their AI use," Linna said. "The advantage of a random sample is that, subject to the limitations of any survey, it provides a good foundation for extrapolating our findings to the full population of federal judges."
The study, "Artificial Intelligence in Federal Courts: A Random-Sample Survey of Judges," was published today (March 30) by the Sedona Conference, with the New York City Bar as a co-publisher.
"Even though some judges have been wary, there are plenty who think AI creates opportunities for improving access to the courts, access to justice and the quality of judicial decisions - but that requires intentionality," Linna said. "We need to think about how we bring these technologies into the courts, offer AI training to judges and analyze the benefits and risks."
The numbers
The study population was made up of active federal judges serving as of August 2025. The stratified random sample consisted of 92 bankruptcy judges, 177 magistrate judges, 182 district court judges and 51 court of appeals judges, totaling 502 judges selected for the study. The list was compiled using Ballotpedia, Almanac of the Federal Judiciary and the Federal Judicial Center's Biographical Directory. From Dec. 2 to 19, 2025, the researchers collected 112 responses.
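The stratified design described above can be sketched in a few lines of code. The per-stratum sample sizes (92, 177, 182 and 51, totaling 502) are taken from the study; the sampling frame below is a hypothetical stand-in, since the actual lists of judges are not reproduced here.

```python
import random

# Per-stratum sample sizes reported in the study
SAMPLE_SIZES = {
    "bankruptcy": 92,
    "magistrate": 177,
    "district": 182,
    "appeals": 51,
}

def stratified_sample(population_by_stratum, sample_sizes, seed=0):
    """Draw a fixed-size simple random sample within each stratum.

    population_by_stratum: dict mapping stratum name -> list of judge IDs
    sample_sizes: dict mapping stratum name -> number of judges to select
    """
    rng = random.Random(seed)
    return {
        stratum: rng.sample(population_by_stratum[stratum], n)
        for stratum, n in sample_sizes.items()
    }

# Hypothetical frame: each stratum contains more judges than are sampled
population = {s: [f"{s}-{i}" for i in range(n * 2)]
              for s, n in SAMPLE_SIZES.items()}
selected = stratified_sample(population, SAMPLE_SIZES)
assert sum(len(v) for v in selected.values()) == 502
```

Sampling a fixed number from each stratum, rather than from the pool of all judges at once, guarantees that each type of judgeship is represented in the study in known proportions.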
The survey asked about the following large language models: ChatGPT (OpenAI), Claude (Anthropic), Copilot (Microsoft), Gemini (Google), Grok (xAI) and Perplexity. It also included the following "AI for Law" tools: CoCounsel (Thomson Reuters), Westlaw AI-Assisted or Deep Research (Thomson Reuters), Protégé or Lexis+ AI (LexisNexis), Vincent AI (vLex), Harvey and Legora. Of the 112 judges who responded to the survey, more than 60% reported using at least one of these AI tools in their judicial work, while about 38% reported never having used any of the listed tools in their work. Nearly one in four judges (22.4%) reported using AI tools on a weekly or daily basis.
"AI has many potential applications for knowledge work," Subrahmanian said. "Our study shows that a significant number of federal judges are already using AI tools."
Judges are more likely to use "AI for Law" tools than general-purpose AI platforms. Researchers found that judges use AI tools mostly for conducting legal research (30%) and reviewing documents (15.5%). Judges reported that others in their chambers likewise use AI tools mostly for conducting legal research (39.8%) and reviewing documents (16.7%).
Linna said personal and professional use of AI are correlated.
"If a judge uses AI in their personal life, they are more likely to use it in their professional life," he said. Overall, the study found that 38% of judges use AI daily or weekly outside of work, while just over half reported that they rarely (26.9%) or never (25.9%) do.
Researchers also found that most judges either are not offered AI training or are unsure whether it is available: 45.5% said AI training was not provided by the court administration, and 15.7% were not certain. One in three judges permit (25.9%) or permit and encourage (7.4%) the use of AI in their chambers. Approximately 20% of judges formally prohibit AI use, 17.6% discourage but do not formally prohibit it, and 24.1% have no official policy on AI use.
"AI is here; it's not going anywhere. We need training, best practices and clear policies on how the technology is implemented," Linna said.
In their survey responses, judges were nearly evenly divided between being optimistic about AI's potential for the judiciary and being concerned.
"Judges' responses reveal that many have thought carefully about the benefits and risks of using AI," Subrahmanian said. "This information is valuable for judges and court administrators, policymakers, legal practitioners and researchers."
The researchers plan to continue and expand their study of the courts. Linna and Subrahmanian also plan to continue their training sessions and workshops for judges, and their collaborations with judges on how best to implement AI in courts responsibly.
"We want to understand the challenges that judges face and do research that contributes to solving big problems in the world," Linna said. "Judges and lawyers should be at the forefront of ensuring that AI contributes to providing access to the law for everyone and promoting the Rule of Law."
In addition to Linna and Subrahmanian, co-authors of the study include Northwestern undergraduate student Anika Jaitley, Pritzker Law student Siyu Tao and U.S. District Judge Xavier Rodriguez, Western District of Texas.