A lawyer used ChatGPT to prepare a court filing. It went horribly awry.

A lawyer who relied on ChatGPT to prepare a court filing on behalf of a man suing an airline is now all too familiar with the artificial intelligence tool’s drawbacks — including its propensity to invent facts.

Roberto Mata sued Colombian airline Avianca last year, alleging that a metal food and beverage cart injured his knee on a flight to Kennedy International Airport in New York. When Avianca asked a Manhattan judge to dismiss the lawsuit based on the statute of limitations, Mata’s lawyer, Steven A. Schwartz of the law firm Levidow, Levidow & Oberman, submitted a brief based on research done by ChatGPT, Schwartz said in an affidavit.

While ChatGPT can be useful to professionals in numerous industries, including the legal profession, it has proved itself to be both limited and unreliable. In this case, the AI invented court cases that didn’t exist, and asserted that they were real.

The fabrications were revealed when Avianca’s lawyers approached the case’s judge, Kevin Castel of the Southern District of New York, saying they couldn’t locate the cases cited in Mata’s lawyers’ brief in legal databases.

The made-up decisions included cases titled Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines.

“It seemed clear when we didn’t recognize any of the cases in their opposition brief that something was missing,” Avianca’s lawyer Bart Banino, of Condon & Forsyth, told CBS MoneyWatch. “We figured it was some sort of chatbot of some kind.”

Schwartz responded in an affidavit last week, saying he had “consulted” ChatGPT to “supplement” his legal research, and that the AI tool was “a source that has revealed itself to be unreliable.” He added that it was the first time he’d used ChatGPT for work and “therefore was unaware of the possibility that its content could be false.”

He said he even pressed the AI to confirm that the cases cited were real. ChatGPT confirmed they were. Schwartz then asked the AI for its source.

ChatGPT’s response? “I apologize for the confusion earlier,” it said. The AI then claimed the Varghese case could be located in the Westlaw and LexisNexis databases.

Judge Castel has set a hearing regarding the legal snafu for June 8 and has ordered Schwartz and the law firm Levidow, Levidow & Oberman to argue why they should not be sanctioned.

Levidow, Levidow & Oberman could not immediately be reached for comment.
