The recent High Court case of Olsen & Anor v Finansiel Stabilitet [2025] EWHC 42 has highlighted the potential risks for legal professionals who use AI to prepare cases. By contrast, in a February 2025 speech at the LawtechUK Generative AI Event, the Master of the Rolls, Sir Geoffrey Vos, said that “We should not be using silly examples of bad practice as a reason to shun the entirety of a new technology.”
Vos, who is a proponent of the use of generative AI (such as ChatGPT), told his audience:
“I think it is imperative to build bridges in the legal community between the AI sceptics and the AI enthusiasts. There is no real choice about whether lawyers and judges embrace AI – they will have to – and there are very good reasons why they should do so – albeit cautiously and responsibly, taking the time that lawyers always like to take before they accept any radical change.”
So, where do we stand when it comes to using AI within the legal sector to prepare cases?
What Is An LLM Hallucination?
When a Large Language Model (LLM) hallucinates, it produces content that does not make sense, is factually incorrect, or is not faithful to the source content on which it was trained. Ultimately, hallucinations are a byproduct of the way in which LLMs use probability to generate responses from patterns derived from enormous training datasets. One of the leaders in AI, IBM, has explained that “AI hallucinations are similar to how humans sometimes see figures in the clouds or faces on the moon. In the case of AI, these misinterpretations occur due to various factors, including overfitting, training data bias/inaccuracy and high model complexity”.
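To give a feel for why this happens, the short sketch below is a deliberately simplified, hypothetical illustration in Python: the vocabulary, probabilities and the generate function are all invented for this article and do not represent any real model or any real citation. It simply shows how sampling the statistically most plausible next word, with no check against reality, can assemble a fluent but entirely fictitious case reference.

```python
import random

# Toy illustration only: a real LLM learns billions of such conditional
# probabilities from its training data. The tokens and numbers below are
# invented purely for illustration.
next_token_probs = {
    "the case of": {"Smith": 0.5, "Brown": 0.3, "Patel": 0.2},
    "Smith":  {"v": 1.0},
    "Brown":  {"v": 1.0},
    "Patel":  {"v": 1.0},
    "v":      {"Jones": 0.6, "Taylor": 0.4},
    "Jones":  {"[2021]": 0.7, "[2019]": 0.3},
    "Taylor": {"[2021]": 0.7, "[2019]": 0.3},
    "[2021]": {"EWCA": 0.6, "EWHC": 0.4},
    "[2019]": {"EWCA": 0.6, "EWHC": 0.4},
    "EWCA":   {"Civ": 0.9, "Crim": 0.1},
    "EWHC":   {"123": 1.0},
    "Civ":    {"999": 1.0},
    "Crim":   {"42": 1.0},
}

def generate(prompt: str, max_steps: int = 6) -> str:
    """Sample one token at a time, always picking a statistically plausible
    continuation. Nothing here checks whether the resulting citation exists,
    which is how fluent but fictitious case references can emerge."""
    tokens = [prompt]
    context = prompt
    for _ in range(max_steps):
        dist = next_token_probs.get(context)
        if dist is None:
            break
        options, weights = zip(*dist.items())
        context = random.choices(options, weights=weights)[0]
        tokens.append(context)
    return " ".join(tokens)

print(generate("the case of"))
# e.g. "the case of Smith v Jones [2021] EWCA Civ 999" - plausible, but made up
```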
Olsen & Anor v Finansiel Stabilitet
Olsen & Anor v Finansiel Stabilitet [2025] EWHC 42 involved the enforcement of a Danish judgment by Finansiel Stabilitet, a Danish financial institution, against Mr and Mrs Olsen in England for a sum of about €5.8m and 1.25m Danish Kroner.
Mr and Mrs Olsen, who acted in person, successfully appealed, demonstrating that the judgment in question had, in fact, expired in accordance with Danish law and, therefore, could not be enforced in this jurisdiction. The difficulty was that Mr and Mrs Olsen relied on Flynn v Breitenbach [2020] EWCA Civ 1336, a case that does not exist. One theory is that the citation may have been hallucinated by an LLM; however, Mr and Mrs Olsen told the court it came from a “European business network” of law firms and banks.
While they gained no advantage from citing the case, and it remains unclear whether it originated from an LLM hallucination, LLMs are known to have hallucinated case citations in other proceedings.
Other Cases Of LLM Hallucinations
In one such case, an Australian lawyer was referred to the Office of the NSW Legal Services Commissioner (OLSC) after he confirmed he had used ChatGPT to create court filings containing non-existent citations. It is believed that ChatGPT had ‘hallucinated’ cases and quotes, which the lawyer then used in his court submissions without checking the details. In a further example, a Canadian lawyer was reprimanded for citing fake cases ‘invented’ by ChatGPT. The judge ordered the lawyer to personally compensate her client’s ex-wife’s lawyers for the time it took them to discover that the cases she had hoped to cite had been made up by ChatGPT.
Final Words
The legal profession is facing a new challenge, and some may argue a new opportunity, in the use of LLMs to prepare cases, together with the accompanying risk of hallucinations. Solicitors are ultimately accountable for the professional expertise they provide. It is therefore essential that, where LLMs are used, the accuracy of the output, along with any citations and sources, is fully verified.
We have been helping solicitors and other legal professionals with disciplinary and regulatory advice for nearly 30 years. If you have any questions relating to an SRA investigation or an SDT appearance, please call us on 0151 909 2380 or complete our Free Online Enquiry.