
This story is hilarious! In a nutshell:

  1. a lawyer used genAI🤖 to gather relevant court cases supporting their argument;
  2. the judge became suspicious🕵️ and found that the cases the lawyer cited (4 out of 4) either did not exist or said something very different;
  3. the judge went ballistic😡... (fines, mandatory training, etc.)

If we consider genAI a 'person', then coming up with fake precedents is simply unacceptable / unethical / etc. However, genAI is NOT a person.

Meanwhile, if we look at genAI merely as 'glorified autocomplete', this may even be normal behaviour. How many times do we use genAI for roleplay🎭, for creating filler text, or for some legal hocus-pocus in a movie script? In such contexts, making up precedents could well have been the desired outcome. How could the genAI know that in this situation it must refer to real cases? Was this just a simple hallucination👻? Was it a bad prompt, or some misunderstanding? Was this the 'correct' (but unwanted) behaviour for the genAI? -- I do not know.
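
To make this concrete, here is a minimal sketch, assuming the OpenAI Python SDK -- the model name and prompt wording are purely illustrative assumptions, and the point is only that the model sees the prompt, not the stakes:

```python
# A minimal sketch of the point above: the SAME request, under two different
# system prompts. Uses the OpenAI Python SDK; the model name and the prompt
# wording are illustrative assumptions, not recommendations.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

QUESTION = "List court cases supporting the argument that X is liable for Y."

# Context 1: fiction -- here, invented case names are the DESIRED outcome.
movie_script = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are helping write a courtroom movie script. "
                    "Feel free to invent plausible-sounding case names."},
        {"role": "user", "content": QUESTION},
    ],
)

# Context 2: real legal work -- the same generative behaviour becomes a
# hallucination the moment the answer is filed with a court.
legal_research = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Cite only cases you are certain exist, with full "
                    "citations; if you are not certain, say so."},
        {"role": "user", "content": QUESTION},
    ],
)

print(movie_script.choices[0].message.content)
print(legal_research.choices[0].message.content)
```

Even the second prompt reduces, but does not eliminate, the risk of invented citations -- which is exactly why the human check still matters.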

Of course, submitting important information without checking it is always a bad idea. Responsibility cannot be pushed onto the AI; the human is responsible.
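
In practice, part of that checking could even be automated. Below is a hypothetical sketch of the idea in Python: search a public case-law database (CourtListener here; the endpoint URL and response fields are my assumptions, check the real API docs) for every AI-suggested citation and flag anything that cannot be found:

```python
# Hypothetical sketch: before filing anything, look every AI-suggested
# citation up in a public case-law database. The CourtListener endpoint and
# its response fields are assumptions here -- verify against the real API docs.
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def found_in_database(case_name: str) -> bool:
    """Return True if a search for the case name yields at least one hit."""
    resp = requests.get(SEARCH_URL, params={"q": case_name}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# 'Example v. Example' is a hypothetical placeholder, not a real citation.
ai_suggested_cases = ["Example v. Example, 123 F.3d 456"]
for case in ai_suggested_cases:
    if not found_in_database(case):
        print(f"WARNING: could not find '{case}' -- a human must verify it!")
```

Note that such a lookup only catches cases that do not exist at all; it cannot tell you whether a real case actually says what the AI claims it says (point 2 of the story!), so a human still has to read them.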

Btw, this is a great example of a story where it is so easy to be smart -- after the fact...

This post was first published on LinkedIn here on 2025-08-02.

 
This is my personal website, opinions expressed here are strictly my own, and do not reflect the opinion of my employer. My English blog is experimental and only a small portion of my Hungarian blog is available in English. Contents of my blog may be freely used according to Creative Commons license CC BY.