Stanford AI expert’s credibility shattered by fake, AI-created sources: judge

A federal court judge has thrown out expert testimony from a Stanford University artificial intelligence and misinformation professor, saying his submission of fake information made up by an AI chatbot “shatters” his credibility.

In her written decision Friday, Minnesota district court Judge Laura Provinzino cited “the irony” of professor Jeff Hancock’s mistake.

“Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI — in a case that revolves around the dangers of AI, no less,” the judge wrote.

More irony: Hancock, a professor of communications, has studied irony extensively.

Hancock, founding director of the Stanford Social Media Lab, was hired by the Minnesota Attorney General’s office to produce a sworn expert declaration to defend the state’s law criminalizing election-related, AI-generated “deepfake” photos from a lawsuit by a state legislator and a satirist YouTuber. California approved a similar law last fall.

YouTuber Christopher Kohls, who sued Minnesota, also sued — alongside Elon Musk’s X social media company — California Attorney General Rob Bonta over California’s law, and a judge temporarily blocked it in October. Provinzino last week declined to issue a similar block requested by Kohls and the Minnesota lawmaker.

In November, lawyers for Kohls and the legislator told the Minnesota court that Hancock’s 12-page declaration cited “a study that does not exist,” authored by “Huang, Zhang, Wang” and likely “generated by an AI large language model like ChatGPT.”

In December, Hancock admitted in a court filing he had used ChatGPT, blamed the bot for that error and two other AI “hallucinations” he had subsequently discovered in his submission, and apologized to the court.

He had used ChatGPT 4.0 to help find and summarize articles for his submission, but the errors likely occurred because he inserted the word “cite” into the text he gave the chatbot, to remind himself to add academic citations to points he was making, he wrote. The bot apparently took “cite” as an instruction and fabricated citations, Hancock wrote, adding that the bot also made up four incorrect authors for research he had cited.

Hancock, a prolific, high-profile researcher whose work has received some $20 million in grant support from Stanford, the U.S. National Science Foundation and others over the past two decades, charged $600 an hour to prepare the testimony the judge tossed, according to court filings.

Judge Provinzino noted that Minnesota Attorney General Keith Ellison was seeking to introduce in court a version of Hancock’s testimony with the errors removed, and she said she did not dispute Ellison’s assertion that the professor was qualified to present expert opinions about AI and deepfakes.

However, the judge wrote, “Hancock’s citation to fake, AI-generated sources in his declaration — even with his helpful, thorough, and plausible explanation — shatters his credibility with this Court.”

At minimum, Provinzino wrote, “expert testimony is supposed to be reliable.”

Such errors cause “many harms” including wasting the opposing party’s time and money, the judge wrote.

The Minnesota Attorney General’s office did not respond to questions, including how much Hancock billed and whether the office would seek a refund.

Hancock did not respond to questions.

At Stanford, students can be suspended and ordered to do community service for using an AI chatbot to “substantially complete an assignment or exam” without instructor permission. The school has repeatedly declined to respond to questions, as recently as Wednesday, about whether Hancock would face disciplinary measures.

The professor’s legal smackdown highlights a common problem with generative AI, a technology that has taken the world by storm since San Francisco’s OpenAI released its ChatGPT bot in November 2022. Chatbots and AI image generators often “hallucinate,” which in text can involve creating false information, and in images, absurdities like six-fingered hands.

Hancock is not alone in submitting a court filing containing AI-generated errors. In 2023, lawyers Steven A. Schwartz and Peter LoDuca were fined $5,000 each in federal court in New York for submitting a personal-injury lawsuit filing citing fake past court cases invented by ChatGPT.

With chatbot use spreading fast in many fields, including the legal profession, Provinzino in her ruling sought to turn Hancock’s imbroglio into a teachable moment.

“The Court does not fault Professor Hancock for using AI for research purposes. AI, in many ways, has the potential to revolutionize legal practice for the better,” the judge wrote.

“But when attorneys and experts abdicate their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers, the quality of our legal profession and the Court’s decisional process suffer.

“The Court thus adds its voice to a growing chorus of courts around the country declaring the same message: verify AI-generated content in legal submissions!”
