
ChatGPT's input field: You shouldn't take everything the software spits out at face value

Photo: Frank Rumpenhorst / dpa

A New York lawyer's idea of using the chatbot ChatGPT to research a case has gone spectacularly wrong. A motion he filed included references to cases such as "Petersen v. Iran Air" and "Martinez v. Delta Airlines" that never existed. According to the lawyer, the purported judgments, complete with supposedly matching docket numbers, had been supplied by ChatGPT.

His credulity toward the chatbot could have serious professional consequences for the experienced lawyer – quite apart from the scorn and ridicule heaped on him on platforms such as Twitter. According to the New York Times, the presiding judge has scheduled a hearing for early June to consider possible sanctions.

The background to the farce is a case in which a passenger sued the airline Avianca, claiming his knee was injured by a serving cart on one of its planes. The airline moved to have the lawsuit dismissed. In a countermotion filed in March, the plaintiff's law firm cited various previous decisions. For six of them, however, Avianca's lawyers could find no evidence that they existed.

Software that could revolutionize lawyers' everyday work

In a sworn statement, the plaintiff's lawyer has now declared that he did not intend to deceive the court but had merely relied on ChatGPT's assurances that the cited cases were authentic. The chatbot had also produced texts of the supposed judgments, which his law firm submitted to the court in April. These documents, in turn, contained references to cases that proved to be fictitious. In the US, there are case-law databases the lawyer could have used to verify ChatGPT's information himself.

For several months now, chatbots such as ChatGPT have been fueling new hype around applications based on artificial intelligence. The software is trained on vast amounts of data. Experts warn, however, that because of the way they work, the programs also produce fabricated information that can look genuine to users. At the same time, law is frequently cited as one of the professions that such AI technology could change most profoundly, because the programs can rapidly sift through information and phrase texts so that they read as if written by humans.

According to the New York Times, the lawyer behind the fake citations, who is now being ridiculed well beyond his own profession, has more than three decades of professional experience in New York. According to the report, he has promised not to rely on ChatGPT in the future without independently verifying the authenticity of its information. He declined to comment on the newspaper's article.

mbö/dpa