A US lawyer is facing possible sanctions after he used the popular chatbot ChatGPT to write a brief and discovered that the artificial intelligence (AI) application had invented a whole series of supposed legal precedents.

According to The New York Times, the lawyer in trouble is Steven Schwartz, counsel in a case being heard in a New York court: a lawsuit against the airline Avianca filed by a passenger who claims he was injured when he was struck by a service cart during a flight.

Schwartz represents the plaintiff and used ChatGPT to prepare a brief opposing a defense request to have the case dismissed.

In the ten-page document, the lawyer cited several court decisions to support his argument, but it was soon discovered that OpenAI's well-known chatbot had invented them.

"The Court is facing an unprecedented situation. A filing submitted by plaintiff's attorney opposing a motion to dismiss (the case) is replete with citations from nonexistent cases," Judge Kevin Castel wrote this month.

On Friday, Castel issued an order convening a hearing on June 8 at which Schwartz must try to explain why he should not be sanctioned for having attempted to rely on entirely fictitious precedents.

He did so a day after the lawyer himself filed an affidavit in which he admitted having used ChatGPT to prepare the brief and acknowledged that the only verification he had carried out was to ask the application whether the cases it cited were real.

Schwartz justified himself by saying that he had never used a tool of this type before and that, therefore, he "was not aware of the possibility that its content could be false."

The lawyer stressed that he had no intention of misleading the court and fully exonerated another lawyer at the firm who is also exposed to possible sanctions.

The document, seen by Efe, closes with an apology in which Schwartz deeply regrets having used artificial intelligence to support his research and promises never to do so again without fully verifying its output.
