What happens when your AI legal assistant fabricates a case law citation, and you don’t catch it? Welcome to the world of AI hallucinations: confident, convincing falsehoods generated by machines that don’t know better. AI is now embedded in our daily lives, from business operations to healthcare, education, and creative industries. But what happens when this AI co-pilot gets it wrong? When the machine confidently generates information that is entirely false?
It’s not a glitch. It’s how generative AI works – predicting words based on patterns, not truth. Tools like ChatGPT, Gemini, and Claude can “hallucinate” anywhere from 33% to 50% of the time, according to OpenAI. The consequences? In everyday life, annoying. In the legal world? Potentially catastrophic.
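The “patterns, not truth” point can be seen in miniature with a toy next-word predictor: a simple bigram model, far cruder than any real LLM, trained here on a few made-up example sentences. It strings together words that commonly follow each other in its training text, with no mechanism at all for checking whether the result is true.

```python
import random

# A minimal sketch (illustrative only, not how production LLMs are built):
# learn which word tends to follow which, then generate by pattern alone.
corpus = (
    "the court cited the case . "
    "the court dismissed the case . "
    "the case cited no precedent . "
).split()

# Build a table: word -> list of words observed to follow it.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, length=6, seed=0):
    """Chain words purely by observed co-occurrence, truth not considered."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        nxt = follows.get(words[-1])
        if not nxt:
            break
        words.append(random.choice(nxt))
    return " ".join(words)

print(generate("the"))
```

Whatever sentence this produces will look fluent, because every word pair was seen in training, yet nothing prevents it from asserting something the training text never said, which is the essence of a hallucination.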
Fake Facts, Real Consequences – The Growing Concern of AI Hallucinations
The consequences of AI hallucinations are not just theoretical; they are happening now. In a recent case in South Africa, lawyers submitted court documents generated with ChatGPT, including two completely made-up cases. And that is not an isolated incident: another recent case in Pietermaritzburg saw a lawyer submit arguments that cited no fewer than seven hallucinated authorities and mischaracterised two real cases. The Legal Practice Council was not amused, and these lawyers now face potential misconduct charges. This highlights the serious risks that AI hallucinations pose, especially in high-stakes environments like the legal system.
AI at Work: What Could Go Wrong?
As more businesses integrate AI into their workflows, the potential risks escalate. AI hallucinations are just one part of a broader range of concerns. Risks include privacy breaches, defamation, and copyright violations, especially when AI systems are used to generate content like marketing materials or legal documents. The rise of deepfake technology further complicates matters, as it opens the door for AI-generated content to be misused, potentially leading to severe legal consequences, including fines and imprisonment.
In a rapidly evolving landscape, the legal framework surrounding AI in South Africa is still developing. Existing laws, such as the Protection of Personal Information Act (POPIA), are inadequate to address the unique challenges posed by AI. There is an urgent need for clear and comprehensive regulations that not only protect data but also ensure accountability in AI-driven decisions.
Demystifying AI: The Purpose of AI Empowered
Recognising the increasing significance of AI, I will be sharing further insights at the upcoming AI Empowered, inspired by EO Cape Town Summit, taking place on August 7 and 8, 2025, at the Cape Town International Convention Centre. The event aims to demystify AI for business leaders, creatives, and anyone interested in understanding its legal and ethical implications.
The AI Empowered summit will provide a platform for thought leaders and innovators to discuss AI’s role in business, education, and society, focusing on its safe and responsible use. My keynote will delve into the legal considerations every organisation must understand when incorporating AI into their operations, from managing AI risks to ensuring compliance with data protection laws.
The Future of AI: Human Oversight is Key
While AI presents numerous benefits, including enhanced efficiency and innovation, human oversight of AI systems remains crucial. It is what ensures AI is used ethically and responsibly, and that its potential is harnessed in a way that benefits society as a whole.
As AI continues to advance, organisations must be vigilant in their approach to adopting these technologies. My message is clear: understanding the risks, from AI hallucinations to privacy concerns, is key to ensuring a safe and legally compliant future with AI.
Key Takeaways for a Legally-Safe AI Future
- Double-check AI-generated content, especially for legal, medical, or reputational use
- Don’t rely on AI alone for legal research; verify every citation against primary sources
- Establish internal AI policies for safe usage
- Stay informed on laws, tools, and risks, all of which are changing fast
By Emma Sadleir, Founder of The Digital Law Company