It is no secret that the use of Artificial Intelligence (AI) has become ever more present within the legal profession and a prevalent topic of debate. In fact, it has reached the point where both the Bar Council [1] and the Law Society [2] have published detailed guidance for legal practitioners to refer to when using, or considering using, AI. This technology has undoubtedly made some aspects of a lawyer's work easier, but it can also be unreliable, make costly mistakes and have deep, long-standing implications for lawyers and our legal systems.
The AI law firm:
For the first time ever, the Solicitors Regulation Authority (SRA) has authorised an AI-driven law firm. Garfield Law allows clients to use its AI system to recover invoices of up to £10,000 through the small claims court in England and Wales [3]. This is immensely helpful for lay clients, since it puts at their fingertips a relatively straightforward system for processing and recovering unpaid debts, guiding them through the small claims court process all the way to trial.
This is quite astonishing, but it does raise the question of how it will actually work in practice. For starters, it is not fully AI: there is a human element behind it all. Under the SRA's rules there will be a named human solicitor who is ultimately accountable, and liable, for the firm's work and for ensuring that the work meets the required standards. The firm has adequate safeguards in place, including, but not limited to, quality-checking of work and client confidentiality. It is evident that the SRA completed all the necessary due diligence before authorising Garfield Law to provide regulated legal services in England and Wales, and it would not have done so had it not been fully satisfied.
The lack of a human element in AI firms:
Nevertheless, whilst this model seems quite promising, it does not come without its drawbacks. Firstly, from a client's perspective, actually talking to a person who can empathise with them is not something AI can offer. It is all very well having a program into which you can upload a document and receive a brief explanation of it, but it is not far-fetched to imagine that clients will sometimes want a human being on the other side whom they can question as issues arise. This is something AI cannot really do, especially when it comes to providing that human-to-human reassurance and service.
AI-hallucinations:
Secondly, this raises broader questions about the phenomenon often referred to as "AI hallucinations": AI chatbots sometimes generate incorrect or misleading results. In the case of Garfield Law, the Law Society has already stated that its system will be unable to propose relevant case law, so that should not be an issue here.
However, this has not been avoided in other instances. Recently, a solicitor and a barrister were referred to their respective disciplinary bodies (the SRA and the BSB) for using AI to write submissions to the court and relying on fake case law (generated by AI) to back up those submissions. In that case, R (Ayinde) v The London Borough of Haringey, both counsel and instructing solicitors were found to have knowingly misled the court by relying on five fake case authorities in a judicial review claim. As a result, Mr Justice Ritchie (the judge hearing the case) considered that "providing a fake description of five fake cases, including a Court of Appeal case, qualifies quite clearly as professional misconduct"[4], reflecting the seriousness of these issues and why the professional regulators became involved. More on this later.
A common (mis)practice:
This is not, regrettably, an isolated case; there are other examples of this poor and dangerous practice. In May 2025, lawyers from the firm Butler Snow included AI-generated case citations in two court filings before US District Judge Anna Manasco in Alabama[5]. In June 2023, a New York attorney submitted a legal brief with AI-written content to the US District Court for the Southern District of New York[6]. In Canada, a lawyer who was found to have included AI-generated content in court filings was ordered personally to pay the costs of the hearings[7]. The pattern is clear: in all of the above cases, the lawyers in question had included, in full or in part, AI-generated material or cases.
Herein lies the issue: whether it is a fully automated AI firm or a lawyer using AI in their own work, the technology is not flawless, and its mistakes can have costly consequences. These mishaps represent serious violations of core legal, ethical and professional obligations, so viewing them as mere minor technological problems is too narrow an approach. We face a common problem that must be adequately addressed by regulators and courts alike; across different jurisdictions, it is necessary to sanction and admonish the misuse of AI and to become more conscious of the perils of falling for its deceptions.
Broader implications of the use of and over-reliance on AI:
This also raises broader questions about the risk of undermining public trust in members of the profession. How are members of the public meant to trust barristers and solicitors if AI is being used when the client is paying for the lawyer's and firm's experience, expertise and considered views on a given set of legal issues? As Dr Harkess clearly stated: "The reputational damage, financial penalties, and career embarrassments show the very real costs of falling prey to AI deception."[8] If lawyers use AI tools without verifying their output, ensuring its accuracy and checking trustworthy legal sources, then not only are we failing to uphold the high standards that we so proudly boast of, but we also mislead clients and the courts with tools that are at best inaccurate and, at worst, simply wrong.
Current Position regarding AI:
The case of Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin)[9] provides quite a detailed explanation and analysis of the matter at hand. It involved two cases, listed together, arising out of the use of artificial intelligence by both solicitors and barristers to create documents containing false information that was then put before the court [10]. Dame Victoria Sharp P and Mr Justice Johnson delivered a detailed and well-explained judgment. They not only set out the existing guidance on the use of AI for legal professionals but also highlighted the dangers of using AI in court proceedings in the context of both cases before the court: namely, the fact that AI hallucinations occur[11], as well as the potential legal and regulatory consequences for legal professionals who use AI without then checking their work against adequate, reputable sources[12]. They further highlighted the broader effects that the misuse of AI has on the administration of justice and on public confidence in the justice system[13]. The crux of the current problem with AI in the legal profession is perfectly encapsulated in the Justices' concluding remarks: merely promulgating AI guidance "on its own is insufficient to address the misuse of artificial intelligence. More needs to be done to ensure that the guidance is followed and lawyers comply with their duties to the court"[14].
So, to conclude: while AI has undeniably proven to be a valuable and efficient tool for lawyers in various aspects of their work, it has become apparent that it must be used cautiously. Relying solely on AI without human oversight, that is, without ensuring accuracy and validating its output against reputable sources, can have serious professional and ethical consequences. We must be aware that AI can put us at odds with our obligations to the court, our clients and our regulatory bodies. It is therefore imperative that we strike a balance between the ease of using AI and upholding the high standards and ethical rules that underpin our profession as lawyers.
By: Álvaro José Gutierrez Calero
[1] https://www.barcouncil.org.uk/resource/new-guidance-on-generative-ai-for-the-bar.html
[2] https://www.lawsociety.org.uk/topics/ai-and-lawtech/generative-ai-the-essentials
[3] https://www.garfield.law/#about
[4] https://www.lawgazette.co.uk/news/appalling-high-court-judge-alerts-regulators-over-fake-case-authorities/5123200.article
[5] https://www.reuters.com/legal/government/trouble-with-ai-hallucinations-spreads-big-law-firms-2025-05-23/
[6] Mata v Avianca, Inc (2023) 678 F Supp 3d 443, 443-444
[7] Zhang v Chen [2024] BCSC 285, [42]-[44]
[8] Jason Harkess, 'The AI That Lied to the Court: How Legal Professionals Worldwide Are Being Betrayed by Technology'; https://www.linkedin.com/pulse/ai-lied-court-how-legal-professionals-worldwide-being-harkess-i5izc/
[9] https://www.judiciary.uk/judgments/ayinde-v-london-borough-of-haringey-and-al-haroun-v-qatar-national-bank/
[10] Paragraphs 2 and 3 of the Judgment
[11] Paragraphs 36-38 of the Judgment
[12] Paragraphs 7 and 17-22 of the Judgment
[13] Paragraphs 9, 25 and 31 of the Judgment
[14] Paragraph 82 of the Judgment