|
This story is hilarious! In a nutshell:
- a lawyer used genAI🤖 to gather relevant court cases supporting their argument;
- the judge became suspicious🕵️ and found that the cases the lawyer quoted (4 out of 4) did not exist or were very different;
- the judge went ballistic😡... (fines, mandatory training, etc.)
If we regard genAI as a 'person', coming up with fake precedents is simply unacceptable / unethical / etc. However, genAI is NOT a person.
Meanwhile, if we look at genAI merely as 'glorified autocomplete', this may even be normal behaviour. How many times do we use genAI for roleplay🎭, for creating some filler text, some legal hocus-pocus for a movie script? In such contexts, making up some precedents could have been the desired outcome. How could the genAI know that in this situation, it must refer to a real case? Was this just a simple hallucination👻? Was it a bad prompt, or some misunderstanding? Was this the 'correct' (but unwanted) behaviour for the genAI? -- I do not know.
Of course, submitting important info without checks is always a bad idea. Responsibility cannot be pushed onto the AI; the human is responsible.
Btw, this is a great example of a story where it is so easy to be smart -- after the fact...
This post was first published on LinkedIn here on 2025-08-02.