A South African court case made headlines for all the wrong reasons in January 2025. The legal team in Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others had relied on case law that simply didn’t exist. It had been generated by ChatGPT, a generative artificial intelligence (AI) chatbot developed by OpenAI.

Only two of the nine case authorities the legal team submitted to the High Court were genuine. The rest were AI-fabricated “hallucinations”. The court called this conduct “irresponsible and unprofessional” and referred the matter to the Legal Practice Council, the statutory body that regulates legal practitioners in South Africa, for investigation.

It was not the first time South African courts had encountered such an incident. Parker v Forsyth in 2023 also dealt with fake case law produced by ChatGPT. But the judge was more forgiving in that instance, finding no intent to mislead. The Mavundla ruling marks a turning point: courts are losing patience with legal practitioners who use AI irresponsibly.

We are legal academics who research the growing use of AI, particularly generative AI, in legal research and education. While these technologies offer powerful tools for enhancing efficiency and productivity, they also present serious risks when used irresponsibly.

Aspiring legal practitioners who misuse AI tools without proper guidance or ethical grounding risk severe professional consequences, even before their careers begin. Law schools should equip students with the skills and judgment to use AI tools responsibly. But most institutions remain unprepared for the pace at which AI is being adopted.

Very few universities have formal policies or training on AI. Students are left to navigate this rapidly evolving terrain without guidance. Our work calls for a proactive and structured approach to AI education in law schools.

When technology becomes a liability

The advocate in the Mavundla case admitted she had not verified the citations and relied instead on research done by a junior colleague. That colleague, a candidate attorney, claimed to have obtained the material from an online research tool. While she denied using ChatGPT, the pattern matched similar incidents around the world in which lawyers unknowingly filed submissions citing AI-fabricated case law.

In the 2024 American case of Park v Kim, the attorney cited non-existent case law in her reply brief and admitted it had been generated using ChatGPT. In the 2024 Canadian case of Zhang v Chen, the lawyer filed a notice of application containing two non-existent case authorities fabricated by ChatGPT.

The court in Mavundla was unequivocal: no matter how advanced technology becomes, lawyers remain responsible for ensuring that every source they present is accurate. Workload pressure or ignorance of AI’s risks is no defence.

The judge also criticised the supervising attorney for failing to check the documents before filing them. The episode underscored a broader ethical principle: senior lawyers must properly train and supervise junior colleagues.

The lesson here extends far beyond one law firm. Integrity, accuracy and critical thinking are not optional extras in the legal profession. They are core values that must be taught and practised from the beginning, during legal education.

The classroom is the first courtroom

The Mavundla case should serve as a warning to universities. If experienced legal practitioners can be caught out by AI-fabricated law, students still learning to research and reason can too.

Generative AI tools like ChatGPT can be powerful allies – they can summarise cases, draft arguments and analyse complex texts in seconds. But they can also confidently fabricate information. Because AI models don’t always “know” when they are wrong, they produce text that looks authoritative but may be entirely false.


For students, the dangers are twofold. First, over-reliance on AI can stunt the development of critical research skills. Second, it can lead to serious academic or professional misconduct. A student who submits AI-fabricated content could face disciplinary action at university and reputational damage that follows them into their legal career.

In our paper we argue that, instead of banning AI tools outright, law schools should teach students to use them responsibly. This means developing “AI literacy”: the ability to question, verify and contextualise AI-generated information. Students should learn to treat AI systems as assistants, not authorities.


In South African legal practice, authority traditionally refers to recognised sources such as legislation, judicial precedent and academic commentary, which lawyers cite to support their arguments. These sources are accessed through established legal databases and law reports, a process that, while time-consuming, ensures accuracy, accountability and adherence to the rule of law.

From law faculties to courtrooms

Legal educators can embed AI literacy into existing courses on research methodology, professional ethics and legal writing. Exercises could include verifying AI-generated summaries against real judgments or analysing the ethical implications of relying on machine-produced arguments.

Teaching responsible AI use is not simply about avoiding embarrassment in court. It is about protecting the integrity of the justice system itself. As seen in Mavundla, one candidate attorney’s uncritical use of AI led to professional investigation, public scrutiny and reputational damage to the firm.

The financial risks are also real. Courts can order lawyers to pay costs out of their own pockets when serious professional misconduct occurs. In the digital era, where court judgments and media reports spread instantly online, a lawyer’s reputation can collapse overnight if they are found to have relied on fake or unverified AI material. Courts themselves would also benefit from training in detecting AI-generated fake cases.

The way forward

Our study concludes that AI is here to stay, and so is its use in law. The challenge is not whether the legal profession should use AI, but how. Law schools have a critical opportunity, and an ethical duty, to prepare future practitioners for a world where technology and human judgment must work side by side.

Speed and convenience can never replace accuracy and integrity. As AI becomes a routine part of legal research, tomorrow’s lawyers must be trained not just to prompt, but to think.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Jacques Matthee, University of the Free State and Grey Stopforth, University of the Free State


The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.