ChatGPT Goes to Trial — BuzzMachine

I attended a show-cause hearing for two lawyers and their firm who submitted nonexistent citations, and then entirely fictitious cases fabricated by ChatGPT, to federal court, and then tried to blame the machine. “This case is schadenfreude for any lawyer,” said the lawyers’ lawyer, misusing a word much as ChatGPT would. “There, but for the grace of God, go I…. Lawyers have always had difficulties with new technology.”

The judge, P. Kevin Castel, would have none of it. At the end of the two-hour hearing, in which he meticulously and patiently questioned each of the lawyers, he said that “it’s not fair to criticize people’s words,” but noted that the lawyers’ actions were “repeatedly described as wrong.” The mistake might have been the first filing, with its nonexistent citations. But “that’s the beginning of the narrative, not the end,” for time and time again the lawyers failed to do their job: to correct course once opposing counsel and the court drew their attention to the fiction, even to Google the cases ChatGPT fabricated to verify their existence, let alone to read the “gibberish” (in the judge’s description) that ChatGPT fabricated. And, in the end, they did not take full responsibility for their own actions.

Time and time again, Steven Schwartz, the attorney who used ChatGPT to do his job, testified in court that he “could just never imagine that ChatGPT would fabricate cases…. It never occurred to me that it would be making up cases.” He thought it was a search engine, a “super search engine.” And search engines can be trusted, yes? Technology can’t be wrong, right?

Now, it is true that one can criticize the creators of some large language models for giving people the impression that generative AI is credible when we know it is not, and especially Microsoft for later connecting this technology to its search engine, Bing, surely fooling more people. But Judge Castel’s point stands: it was the lawyer’s responsibility, to himself, his client, the court, and the truth itself, to verify the work of the machine. This is not a story of technology failing but of humans failing, as most such stories are.

Technology got blamed for a lot this day. The lawyers blamed their legal search engine, Fastcase, for not giving this personal-injury firm, accustomed to state courts, access to federal cases (a billing issue). They blamed Microsoft Word for a botched cut-and-paste notarization. In a beautifully Gutenberg-era moment, Judge Castel asked them about the odd font combination (Times Roman and some sans serif) in the bogus cases, and the lawyer blamed that on computer cut-and-paste as well. The lawyers’ attorney said that with ChatGPT, Schwartz “was playing with live ammunition. He didn’t know because the technology lied to him.” When Schwartz went back to ChatGPT to “find” the cases, it “doubled down. It kept lying to him.” It made them up out of digital ether. “The world now knows the dangers of ChatGPT,” said the lawyers’ lawyer. “The court has done its work warning the public of these risks.” The judge interjected: “I didn’t set out to do that.” Because the problem here is not the machine; it is the men who used it.

The courtroom was packed, sending some spectators to an overflow room to listen. There were reporters present, whose presence the lawyers noted as they lamented their public humiliation. The room was also filled with young, dark-suited law students and legal interns. I hope they (and the journalists, too) listened well to the judge about one’s real obligations to the truth.

ChatGPT is designed to tell you what you want it to say. It is a personal propaganda machine that strings together words to satisfy the ear, with no expectation that they are correct. Kevin Roose of The New York Times asked ChatGPT to reveal a dark soul and then was surprised and disturbed when it did exactly as he asked. Same for attorney Schwartz. In his questioning of the attorney, the judge noted an important nuance: Schwartz did not ask ChatGPT for an explanation of the law and the case law at issue in this matter of an airline passenger’s knee and a wandering snack cart: bankruptcy, statutes of limitation, and international treaties, somewhat arcane subjects, especially for a personal-injury attorney who typically practices in state courts. “You were not asking ChatGPT for an objective analysis,” the judge said. Instead, Schwartz admitted, he asked ChatGPT to give him cases that bolstered his argument. Then, when the opposing lawyer and the judge doubted the existence of those cases, he went back to ChatGPT, and it showed him the cases, gibberish and all. And in a flash of apparent disbelief, when he asked ChatGPT, “Are the other cases you provided fake?”, it replied as he no doubt hoped: “No, the other cases I provided are real.” It said they could be found in reputable legal databases such as LexisNexis and Westlaw, which Schwartz did not consult. The machine did as it was told; the lawyer did not. “It was following your orders,” the judge said. “ChatGPT was not supplementing your research. It was your research.”

Schwartz, choking up, apologized to the court, his colleagues, and his opponents, though, as the judge pointed out, he left his own ill-served client out of that litany. Schwartz took responsibility for using the machine to do his work, but he did not take responsibility for the work he failed to do: verifying the nonsense strings of words it spit out.

I have some empathy for Schwartz and his colleagues, as they will likely long be the butt of jokes about the firm of Nebbish, Nebbish & Luddite and the dangers of technological progress. All the firm’s associates are now taking continuing-legal-education courses on the proper use of artificial intelligence (and there are plenty already). Schwartz was unlucky enough to come upon this new tool when it was but three months old in the world, and he was simply the first to find a new way to break it. His lawyers argued before the judge that he and his colleagues should not be sanctioned because they did not act in bad faith. The judge took the matter under advisement, but I suspect he might not agree, given the lawyers’ failure to follow through once their work was called into question.

I also have some anthropomorphic sympathy for ChatGPT, for it is a wronged party in this case: wronged by the lawyers and their blame, wronged by media and their misrepresentations, wronged by companies, Microsoft especially, that fail to tell users just what it is. Schwartz wrongly assumed that ChatGPT is a search engine that can supply facts. It cannot. It supplies language that sounds believable but should not be believed. That is what it is designed to do. That is what it does, impressively. Its misuse is not its fault.

I have come to believe that journalists should stay away from ChatGPT et al. for creating that product we call content. Yes, AI has long been used to produce stories from limited, structured data: sports scores and financial results. That works well because, in those cases, stories are just another form of data visualization. Generative AI is something else again. It picks each next word to follow the words before it based not on facts but on probability. I have said I see uses for this technology in journalism: extending literacy, helping people who are intimidated by writing and illustration to tell their own stories rather than having those stories mined and exploited by journalists, for example. We should study and test this technology in our field. We should learn what it can and cannot do from experience, rather than misrepresent its capabilities or dangers in our reporting. But we must not let it do our work for us.
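That word-by-word probability picking can be sketched in miniature. The toy bigram table and greedy selection below are invented for illustration only; real large language models condition on far more context and sample from learned distributions, but the principle is the same: the next word is chosen for likelihood, not truth.

```python
# Toy sketch of probability-driven text generation (invented data,
# not ChatGPT's actual model). Each word maps to candidate next
# words with made-up probabilities; we greedily take the likeliest.

BIGRAM_PROBS = {
    "the":   {"court": 0.5, "case": 0.3, "snack": 0.2},
    "court": {"ruled": 0.6, "found": 0.4},
    "case":  {"law": 0.7, "was": 0.3},
}

def next_word(word):
    """Return the most probable continuation, or None if unknown."""
    candidates = BIGRAM_PROBS.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(start, max_words=5):
    """Chain greedy next-word picks until no continuation exists."""
    words = [start]
    while len(words) < max_words:
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # a fluent-sounding phrase, chosen by probability alone
```

Nothing in the table encodes whether any of these word sequences is true; the program only knows which words tend to follow which, which is exactly why fluency is no guarantee of fact.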

Besides, the world already has more than enough content. The last thing we need is a machine that spits out even more. What the world needs from journalism is research, information, service, solutions, accountability, empathy, context, history, humanity. I dare say to my journalism students, who are learning to write stories, that writing stories is not their job; it is just a useful skill. Their job as journalists is to serve communities, and that starts with listening and talking with people, not machines.

Image: Lady Justice throws her scales for the machine, by DreamStudio
