The Embarrassed ChatGPT Attorney


New York Times, June 9, 2023:

The ChatGPT Lawyer Explains Himself (Sort of)

[Excerpts; blogger comments in bold red italics]

By Benjamin Weiser and Nate Schweber

In a cringe-inducing court hearing, a [New York City] lawyer who relied on A.I. to craft a motion full of made-up case law said he “did not comprehend” that the chat bot could lead him astray.

Steven A. Schwartz told a judge considering sanctions that the episode had been “deeply embarrassing.”

As the court hearing in Manhattan began, the lawyer, Steven A. Schwartz, appeared nervously upbeat, grinning while talking with his legal team. Nearly two hours later, Mr. Schwartz sat slumped, his shoulders drooping and his head rising barely above the back of his chair.

For nearly two hours Thursday, Mr. Schwartz was grilled by a judge in a hearing ordered after the disclosure that the lawyer had created a legal brief for a case in Federal District Court that was filled with fake judicial opinions and legal citations, all generated by ChatGPT. The judge, P. Kevin Castel, said he would now consider whether to impose sanctions on Mr. Schwartz and his partner, Peter LoDuca, whose name was on the brief.

At times during the hearing, Mr. Schwartz squeezed his eyes shut and rubbed his forehead with his left hand. He stammered and his voice dropped. He repeatedly tried to explain why he did not conduct further research into the cases that ChatGPT had provided to him.

“God, I wish I did that, and I didn’t do it,” Mr. Schwartz said, adding that he felt embarrassed, humiliated and deeply remorseful.

“I did not comprehend that ChatGPT could fabricate cases,” he told Judge Castel.

In contrast to Mr. Schwartz’s contrite postures, Judge Castel gesticulated often in exasperation, his voice rising as he asked pointed questions. Repeatedly, the judge lifted both arms in the air, palms up, while asking Mr. Schwartz why he did not better check his work.

As Mr. Schwartz answered the judge’s questions, the reaction in the courtroom, crammed with close to 70 people who included lawyers, law students, law clerks and professors, rippled across the benches. There were gasps, giggles and sighs. Spectators grimaced, darted their eyes around, chewed on pens.

A New Generation of Chatbots

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. . . .

ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

“I continued to be duped by ChatGPT. It’s embarrassing,” Mr. Schwartz said.

An onlooker let out a soft, descending whistle.

The episode, which arose in an otherwise obscure lawsuit, has riveted the tech world, where there has been a growing debate about the dangers — even an existential threat to humanity — posed by artificial intelligence. It has also transfixed lawyers and judges.

“This case has reverberated throughout the entire legal profession,” said David Lat, a legal commentator. “It is a little bit like looking at a car wreck.”

The case involved a man named Roberto Mata, who had sued the airline Avianca claiming he was injured when a metal serving cart struck his knee during an August 2019 flight from El Salvador to New York.

Avianca asked Judge Castel to dismiss the lawsuit because the statute of limitations had expired. Mr. Mata’s lawyers responded with a 10-page brief citing more than half a dozen court decisions, with names like Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines, in support of their argument that the suit should be allowed to proceed.

After Avianca’s lawyers could not locate the cases, Judge Castel ordered Mr. Mata’s lawyers to provide copies. They submitted a compendium of decisions.

It turned out the cases were not real.

Mr. Schwartz, who has practiced law in New York for 30 years, said in a declaration filed with the judge this week that he had learned about ChatGPT from his college-aged children and from articles, but that he had never used it professionally.

He told Judge Castel on Thursday that he had believed ChatGPT had greater reach than standard databases.

“I heard about this new site, which I falsely assumed was, like, a super search engine,” Mr. Schwartz said.

Programs like ChatGPT and other large language models in fact produce [seemingly] realistic responses by analyzing which fragments of text should follow other sequences, based on a statistical model that has ingested billions of examples pulled from all over the internet.
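The mechanism described above can be illustrated with a deliberately tiny sketch. The bigram model below is not how GPT-scale systems are actually built (they use neural networks over vast corpora), but it shows the core idea the article describes: the program picks the next word purely by statistics on which words followed which, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict
import random

# Toy corpus standing in for the "billions of examples" a real model ingests.
corpus = "the court ruled that the case was dismissed and the court adjourned".split()

# Count which word follows each word -- a bigram model, vastly simplified.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev, rng):
    """Pick the next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights)[0]
```

Note that nothing in this procedure checks facts: the model only knows that, say, "court" was sometimes followed by "ruled" and sometimes by "adjourned". Fluent-sounding output and fabricated case citations come from exactly the same process.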

Irina Raicu, who directs the internet ethics program at Santa Clara University, said this week that the Avianca case clearly showed what critics of such models have been saying, “which is that the vast majority of people who are playing with them and using them don’t really understand what they are and how they work, and in particular what their limitations are.”

Comment: “Limitations” indeed. Steven A. Schwartz presumably graduated from law school and passed a tough bar exam. Also presumably he is familiar with what “preponderance of evidence” and “proof beyond a reasonable doubt” mean. Further, in three decades of practice, he has likely prepared for and conducted numerous cross-examinations, and researched many legal briefs.

All these practices were in place long before computers and AI came along. There were also personal assistants such as clerks and paralegals.

(In my own experiment with ChatGPT, reported here in March, the bot erroneously named me as the author of a nonexistent Pendle Hill pamphlet; who figured that out? Me. The technology needs some work before it will displace my own elderly eyes and my ability to spell Google.)

What this report really exposes is professional laziness & sloth. It’s like a star Yankees slugger sending a batboy out to catch a routine pop fly, because the star can’t be bothered to trot out and raise his custom-made Trump-autographed glove; then finding out the TV cameras were all turned on & aimed at him.

Rebecca Roiphe, a New York Law School professor who studies the legal profession, said the imbroglio has fueled a discussion about how chatbots can be incorporated responsibly into the practice of law.

“This case has changed the urgency of it,” Professor Roiphe said. “There’s a sense that this is not something that we can mull over in an academic way. It’s something that has affected us right now and has to be addressed.”

The worldwide publicity spawned by the episode should serve as a warning, said Stephen Gillers, who teaches ethics at New York University School of Law. “Paradoxically, this event has an unintended silver lining in the form of deterrence,” he said.

There was no silver lining in courtroom 11-D on Thursday. At one point, Judge Castel questioned Mr. Schwartz about one of the fake opinions, reading a few lines aloud.

“Can we agree that’s legal gibberish?” Judge Castel said.

Comment: Unfortunately, the Times did not quote any of the “legal gibberish,” so lay readers can’t compare it with samples of normal “legal gibberish.”

After Avianca had the case moved into the federal court, where Mr. Schwartz is not admitted to practice, Mr. LoDuca, his partner at Levidow, Levidow & Oberman, became the attorney of record.

In an affidavit last month, Mr. LoDuca told Judge Castel that he had no role in conducting the research. Judge Castel questioned Mr. LoDuca on Thursday about a document filed under his name asking that the lawsuit not be dismissed.

“Did you read any of the cases cited?” Judge Castel asked.

“No,” Mr. LoDuca replied.

“Did you do anything to ensure that those cases existed?”

No again.

Lawyers for Mr. Schwartz and Mr. LoDuca asked the judge not to punish their clients, saying the lawyers had taken responsibility and there was no intentional misconduct.

Comment: I agree the lawyers shouldn’t go to jail. But it would not be amiss, after throwing out their lawsuit as incompetent gibberish, for the judge to send these two invoice-padding ninnies to retirement workshops.

In the declaration Mr. Schwartz filed this week, he described how he had posed questions to ChatGPT, and each time it seemed to help with genuine case citations. He attached a printout of his colloquy with the bot, which shows it tossing out words like “sure” and “certainly!”

After one response, ChatGPT said cheerily, “I hope that helps!”

Comment: There is still much paranoid commentary about the danger of ChatGPT costing many skilled workers their jobs. That tsunami has not arrived yet; but if, by this incident, ChatGPT helps weed out these two, that would be a start.

Benjamin Weiser is a reporter covering the Manhattan federal courts. He has long covered criminal justice, both as a beat and investigative reporter. Before joining The Times in 1997, he worked at The Washington Post.

One thought on “The Embarrassed ChatGPT Attorney”

  1. NEVER trust a strange computer. Ever since Microsoft first released Windows, they’ve been making them stranger and stranger!
