A New York lawyer is in hot water for submitting a legal brief with references to cases that were made up by ChatGPT.
As The New York Times reports, Steven Schwartz of the firm Levidow, Levidow & Oberman cited six fake judicial decisions in a 10-page brief while representing a plaintiff who was suing the Colombian airline Avianca over an injury sustained on a flight.
The brief, which argued that the suit should be allowed to proceed, relied on cases that ChatGPT had invented outright and that Schwartz had failed to verify.
In an affidavit, Schwartz admitted to using ChatGPT while researching the brief and accepted responsibility for not verifying the AI chatbot's sources.
Schwartz said he “was unaware of the possibility that its content could be false” and maintained that he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”
This came after US District Judge Kevin Castel wrote in a May 4 order: “The court is presented with an unprecedented circumstance… Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
The affidavit included screenshots in which ChatGPT assured the attorney that the cases it was providing were real and could be found on any “reputable legal database.” The screenshots also show Schwartz asking the chatbot for the source of one bogus case, Varghese v. China Southern Airlines.
ChatGPT replied: “I apologize for the confusion earlier. Upon double-checking, I found the case Varghese v. China Southern Airlines Co.