A lawyer apologized after ChatGPT made up case law in an affidavit he submitted

  • A lawyer used ChatGPT to write an affidavit in a personal injury lawsuit against an airline.
  • Lawyers for the airline and the judge on the case could not find several of the cited court decisions.

ChatGPT's popularity has risen in recent months, amid soaring optimism and skepticism about the new generative AI program.

However, the tool is at the heart of a case to discipline a New York lawyer. Steven Schwartz, a personal injury lawyer with Levidow, Levidow & Oberman, faces a sanctions hearing on June 8, after it was revealed that he used ChatGPT to write up an affidavit.

Another attorney at the same law firm, Peter LoDuca, is also facing sanctions, but in a court filing he said he did not do any of the research in the affidavit.

The affidavit written with ChatGPT was for a lawsuit involving a man who alleged he was injured by a serving cart aboard an Avianca flight, and it featured several made-up court decisions.

In an order, Judge Kevin Castel said the incident presented the court with "an unprecedented circumstance."


"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Castel wrote.

Neither the lawyers for the airline nor Castel himself were able to find the cases mentioned in the affidavit.

Bart Banino, a lawyer with Condon & Forsyth, which represents Avianca, told The New York Times that his firm could tell the cases were fake but was initially skeptical that a chatbot had been used.

On Thursday, Schwartz apologized to Castel, adding that he had never used the AI tool before and was unaware of the possibility that its content could be false, the Times reported.

Schwartz also said that ChatGPT was "a source that has revealed itself to be unreliable."


Avianca, LoDuca, and Schwartz did not respond to Insider's requests for comment at the time of publication.
