Legal Advice from ChatGPT – Can AI replace solicitors for free?
Posted on 6th February 2025
With a wealth of legal information available online, individuals are often keen to save money by representing themselves. Many see generative AI such as ChatGPT as a solution, providing free legal ‘advice’ or even template arguments.
But people considering an AI-driven DIY approach should be careful. Several recent cases have highlighted some major risks.
AI ‘hallucinates’ fictional cases
In a civil case in Manchester in 2023, a claimant representing themself asked ChatGPT to find cases to support their argument. ChatGPT provided four cases. Three were real, but the paragraphs cited from them were fabricated; the fourth case was entirely made up.
It isn’t just claimants representing themselves who have been caught out. Legal professionals have made the same mistake, in a Canadian child custody case and in Australian immigration and family law cases.
The Solicitors Regulation Authority (SRA) has published guidance for lawyers using AI, which explains: “This might have happened because the system had learned that legal arguments include case and statute references in specific formats, but not that those references needed to be genuine.”
One lawyer in America, working on a personal injury claim, seems to have been aware of the issue and asked the AI programme to confirm that the cases were real. The AI said “yes” and even gave references to the cases on legitimate databases such as LexisNexis and Westlaw. The cases and references were in fact made up.
This problem isn’t limited to ChatGPT or other widely available tools. One of the Australian lawyers was using specialist legal software with an AI component.
Other risks
Hallucinated cases are not the only risk of using AI. As the SRA guidance points out, AI programmes work based on patterns in data “but do not have a concept of ‘reality’.”
AI is only as good as the information provided to it. While there is a wealth of legal information online, not all of it is up to date or of equal value. Without human understanding, AI may struggle to identify which sources are reliable and up to date.
The SRA guidance also highlights the risk of AI following “the wrong patterns” in the data. It gives the example of geographical data about high-risk businesses: the algorithm may treat the demographic make-up of those areas as the risk factor, rather than the type of business operating there.
Verifying the information
AI is a useful support tool but cannot replace a professional.
The legal software provider involved in the Australian case emphasised that “verifying the work was a key part of a lawyer’s ethical obligations,” and offers a human verification process. The lawyer received the correct documents four hours after requesting verification but, for reasons that remain unclear, did not use them in court.
The other legal professionals involved admitted that they had not checked the references given by ChatGPT. Had they reviewed the citations, they would likely have spotted the errors; had they searched for the cases themselves, they would have realised the information was false.
But could the litigant-in-person be expected to spot the errors, without the same legal knowledge, experience and familiarity with case law? It seems unlikely. They could have attempted to research the cases given, but this would be difficult without access to legal publishers or experience of finding cases. Even if they did find the real cases, they might have struggled to confirm whether the paragraphs cited were correct and really did support their claim.
The consequences
The judge accepted that the litigant-in-person had not meant to mislead the court, and so did not penalise them. By contrast, several of the lawyers involved have been investigated over their conduct.
In each of the cases, it was noted that the court and the other party wasted considerable time and expense trying to determine whether the cases were real. In the Australian family case, the solicitor involved made a payment to the other side for wasted costs. In the Canadian case, a request for special costs was declined because the judge felt the fictitious cases were unlikely to have gone unnoticed, given that the opposing lawyer was “well-resourced.”
But judges have raised concerns about the potential effects of false case law generated by AI, including miscarriages of justice. Both legal professionals and the public are becoming more aware of the problems with AI, so it may become harder to argue that relying on it was a naive mistake. The courts will need to prevent AI being used as an excuse for poor conduct, and we could soon see them imposing higher standards and enforcing costs orders or other sanctions.