For the first time in Argentina, a court has caught a lawyer who fabricated case law citations using artificial intelligence.

The Civil and Commercial Court of Appeals of Rosario has reprimanded a lawyer who fabricated citations using artificial intelligence (AI) in a court case. The lawyer acknowledged that the case law he cited had been created with a generative AI program (such as ChatGPT) and that he had not realized it was a hallucination, that is, a fabricated case with no real-world counterpart.
The case was widely discussed in the legal world, not because it was the first to involve AI, but because it led a judge to issue a formal warning to a lawyer. The filing included case law citations (references to previous rulings or judgments used to support an argument) that could not be found in any prior record.
According to Clarín, Judge Oscar Pucinelli spent several hours searching unsuccessfully for the source of the lawyer's citations. This led the judge to ask the lawyer to identify the sources, but the problem was that they did not exist: the lawyer acknowledged that he had used AI to generate the citations and had copied and pasted them "in good faith," that is, without knowing they were invented (but also without verifying them).
"The court stated that the lawyer likely acted in good faith, but emphasized his professional responsibility: although no formal sanction was imposed, the Rosario Bar Association was notified to adopt preventive measures. The ruling expressly cites the Bar Association's Standards of Professional Ethics, particularly the rule of probity [honesty], which prohibits incomplete, approximate, or untruthful citations," Luis García Balcarce, a lawyer specializing in digital rights, explained in an interview with this media outlet.
The judge warns against the use of AI in legal documents. Photo: Screenshot of the ruling
The general problem is that these systems are used without considering what these tools are and aren't good for. "The ruling doesn't condemn the use of artificial intelligence in the practice of law, but it does set clear limits. As the court points out, the problem isn't the technology itself, but its thoughtless use. The decision to formally instruct the Bar Association to educate its members about these risks sets an important precedent: it's not just about correcting an individual error, but about preventing this practice from becoming widespread," he adds.
It's not uncommon for chatbots based on artificial intelligence models to fabricate information. This phenomenon, known as "hallucinations," is studied by specialists who analyze why incorrect or nonexistent information can be generated.
Legal cases involving fabricated quotes have been detected worldwide. Photo: Reuters
These types of cases are flooding the legal world. The first and most well-known was decided on June 22, 2023, in the United States, when a lawyer suing the Colombian airline Avianca cited nonexistent case law generated with ChatGPT. Today, there is even a website that compiles such cases from around the world.
“What happened in our country is not an isolated phenomenon. In various jurisdictions around the world, courts have already sanctioned professionals who incorporated nonexistent precedents or false citations generated by artificial intelligence systems. The Mata v. Avianca case in New York marked a turning point, and since then, judicial warnings, fines, and even ethics investigations have multiplied,” recalls Lucas de Venezia, a lawyer (UCA) specializing in Law and Artificial Intelligence.
For many lawyers this is already an "epidemic": "This has actually happened on several occasions, all over the world. In the United States there have been more than 50 sanctions against lawyers for citing cases hallucinated by AI. This is the first case detected in Argentina, which sheds light on the legal duty lawyers have to act honestly before the court and not lie to it. Part of 'not lying to it' is checking what is written," says Pablo Palazzi, law professor at the University of San Andrés and partner in the technology law practice at Allende & Brea.
The usual explanation is limited time and an excessive workload, although in the legal world the episode was widely read as plain professional laziness.
If we add to this the use of a tool whose operation is not fully understood, the result can be professionally catastrophic, for both lawyers and litigants.
The system produces text by prediction and can be "mistaken." Illustration: ChatGPT
Artificial intelligence systems can produce information that doesn't correspond to reality, a phenomenon known as hallucinations. Why does this happen? Thinking in terms of "modules" is one way to approximate how these systems operate.
“I speculate that this lawyer asked for assistance in preparing the arguments for a case. Then the AI system invented it. Why? These AI systems format their response into a kind of template that must contain the content in each module. In their eagerness to construct assertive, concrete responses, they have to fill in these boxes,” Ernesto Mislej, co-founder of 7Puentes, an Argentine company made up of scientists and engineers who apply artificial intelligence to various businesses, explained in an interview with Clarín.
"It's like being asked to take a final exam on a subject you have no idea about: instead of learning about the subject, you learn how to answer an oral exam . It's a defense where, at the beginning, there's a question about an author, so the first thing you do is recognize the author, then you recognize their work, and then you answer with that information about the author and their work. That's not knowing the subject, it's knowing how to answer an exam on a topic ," the specialist exemplifies in relation to how these neural networks work.
Following this modus operandi, Mislej ventures that “surely, in this structure of the request to draft a judicial document, one part would say 'this is the place to cite case law with citations', and then the AI doesn't know what to answer because it doesn't have those citations (because they don't exist). The way to answer would be to use known, logical, plausible citations, but when they don't exist, the system composes something because it has to 'put things in that little box', and that's when it hallucinates.”
The problem is that "in the legal field, this can include nonexistent cases, apocryphal doctrines, or laws that were never enacted, since language models are designed to generate coherent and convincing text, but not necessarily true text, which represents a serious risk in contexts where precision is essential," Balcarce adds.
“Technological fascination, without human oversight or verification, erodes lawyers' credibility and threatens the integrity of the judicial process. It's not about demonizing the tool, but rather understanding that law is a language of authority that doesn't allow shortcuts or fictions,” agrees de Venezia, also a university professor.
Chatbots can also leak information. Photo: Bloomberg
With the widespread adoption of generative artificial intelligence in the workplace, many quick fixes for repetitive tasks have emerged, but so have new risks: not only hallucinations, but also ethical issues surrounding its use.
“We need to place much more emphasis on the responsible use of these tools and on knowing where to use them. Not everything has to be automated, not everything that generative AI returns is reliable, and above all, they are not information-search tools. Even so, if they are used for research, each output these tools return must be checked: ask what the source is and where it comes from,” Carolina Martínez Elebi, a graduate in Communication Sciences and professor at the University of Buenos Aires, told Clarín.
In this regard, it's important to remember that these models are trained on information available on the web, among other sources and large volumes of data.
“We know that the web is currently full of junk sites, even containing misinformation. So if the AI we use feeds on that web, it will also return junk,” continues Elebi, a consultant and author of the website DHyTecno.
A final, and not minor, issue is information security. What happens to what is uploaded to chatbots like ChatGPT? The main problem is that the average user doesn't realize that when they make a query, it is processed remotely on the servers of companies like OpenAI, Google, or Microsoft, and they often lose control over that data.
"We must keep in mind that lawyers working with sensitive cases are also using information from their own cases to analyze generative artificial intelligence: there we have another problem that goes into another area, that of data protection," warns Elebi.
In fact, the specialist recalls that "news stories frequently appear in which chats between users and the platform are leaked and published on the web, where they are easily accessible. There are many discussions to be had and a lot of information that professionals from different fields need to have when using these tools, and ethical responsibility must always come first."
Ultimately, the technology has already been invented, and there is no going back. The challenge is understanding what "monster" you're dealing with when using these chatbots, so that their use adds value rather than subtracting it.
Clarín