The University of Auckland


Lawyers should keep their own counsel when it comes to using AI


7 November 2025

Politics and law, Auckland Law School

Commentary: Judges around the world have pulled lawyers up for submitting erroneous AI-generated material to the court. Joshua Yuvaraj cautions lawyers on the use of AI.

If you've ever gone to a lawyer, it's likely you've at least questioned how much you paid. Part of the promise of artificial intelligence is that it can streamline legal practice, making advice cheaper and quicker for clients. Since ChatGPT stormed into public consciousness in 2022, firms worldwide have rushed to integrate AI - automating research, document review, drafting and more.

The benefits may not be all they're mooted to be, however. In a forthcoming Monash University Law Review paper, I show that generative AI tools - and arguably other forms of AI - suffer from two fundamental structural flaws.

First, AI models have no conception of reality: they do not comprehend whether the facts they produce are accurate. That they make mistakes isn't surprising. One study found that even tools made for law firms 'hallucinated' - produced inaccurate or false information, like made-up cases - between 17 and 33 percent of the time. It's an astounding level of inaccuracy when we as the public demand so much more from lawyers.

Second, AI tools often lack transparency. They tend to operate as 'black boxes', so you don't know for certain how a tool reached a decision. All you 'know' for sure is the question and the answer. But we need transparency precisely because of the reality flaw - if the model has no conception of reality, then we need to see its working.

All of this is problematic because lawyers are bound by the strictest of professional values. Integrity is sacrosanct; lawyers must stand by the accuracy of anything they produce. This means they must exhaustively verify anything an AI model produces.

The risk of not properly verifying AI content is very real for clients and lawyers. Judges around the world have pulled lawyers up for submitting erroneous AI-generated material to the court - fabricated cases, misquotations of real cases, and more. They have repeatedly emphasised that lawyers must ensure all content submitted to the court is accurate. One UK judge even said that lawyers might be criminally liable for submitting false AI-generated information to the court.


It isn't a stretch to imagine negligence lawsuits brought against lawyers over AI-generated advice containing mistakes. We already have a prototype in Deloitte's report for the Australian Government, which contained (allegedly) AI-generated errors and for which the company had to partially refund the government.

The problem then is that with proper verification, many of the efficiency gains AI is purported to give lawyers may be rendered negligible. In my paper, The Verification-Value Paradox: A Normative Critique of Gen AI in Legal Practice, I offer a hypothesis: any increase in efficiency will be met by a correspondingly greater cost of verification, which means AI tools will often have negligible value for the tasks they are advertised as automating (research, drafting, document review). This is because the more we trust AI, the more costly its mistakes are to clients, and the more important it becomes to verify its outputs.

Lawyers may still want to use AI. To this I offer two considerations. First, imagine a client's embarrassment if, having paid top dollar to a firm for bespoke advice or a defence in complex criminal proceedings, they find out their lawyer has a) used AI without telling them, and/or b) not vetted the content, so that it contains mistakes that could cost them millions, or even prison time. The very real risk of reputational damage should make lawyers think twice before jumping aboard the AI hype train.

Second, the real issue isn't whether lawyers should use AI or not. It's about what kind of people we want our lawyers to be. We want lawyers to be committed to the truth above all, so that they would baulk at even the chance something they write or say might be inaccurate. And we expect lawyers to be servants, not self-serving. Law is about serving others first - which cuts against the grain of the shortcut-taking that has seen so many lawyers around the world caught using AI in court proceedings.

This doesn't mean lawyers should never use AI. Technology can be useful in some contexts. But it does mean we should think long and hard about the costs of using AI, and who we want lawyers to be in an increasingly uncertain world.

Dr Joshua Yuvaraj is a Senior Lecturer at the University of Auckland and co-director of the New Zealand Centre for Intellectual Property.

This article reflects the opinion of the author and not necessarily the views of Waipapa Taumata Rau, University of Auckland.

This article was first published on Newsroom as 'Lawyers, think hard before you use AI', 7 November 2025.

Media contact

Margo White | Research communications editor | Mob: 021 926 408 | Email: [email protected]
