Court declaration by Stanford AI fakery expert contained apparent AI fakery, lawyers claim

A Stanford professor serving as an expert in a federal court lawsuit over fakery created by artificial intelligence submitted a sworn declaration containing false information likely made up by an AI chatbot, a legal filing claims.

The declaration submitted by Jeff Hancock, professor of communication and founding director of the Stanford Social Media Lab, “cites a study that does not exist,” the Nov. 16 filing by the plaintiffs in the case alleged. “Likely, the study was a ‘hallucination’ generated by an AI large language model like ChatGPT.”

Hancock and Stanford did not immediately respond to requests for comment.

The lawsuit was brought in U.S. District Court in Minnesota by a state legislator and a satirical YouTuber seeking a court order declaring unconstitutional a state law that criminalizes election-related, AI-generated “deepfake” photos, video and sound.

Hancock, according to the court filing Saturday, was brought in as an expert by Minnesota’s attorney general, a defendant in the case.

The filing by the lawmaker and YouTuber questioned Hancock’s reliability as an expert witness and argued that his report should be thrown out because it might contain more undiscovered AI fabrications.

In his 12-page submission to the court, Hancock said he studies “the impact of social media and artificial intelligence technology on misinformation and trust.”

Submitted with Hancock’s report was his list of “cited references,” court records show. One of those references — to a study by authors named Huang, Zhang and Wang — caught the attention of lawyers for state Rep. Mary Franson and YouTuber Christopher Kohls, who is also suing California Attorney General Rob Bonta over a law allowing damages-seeking lawsuits over election deepfakes.

Hancock cited the study, purportedly appearing in the Journal of Information Technology & Politics, to support a point he made in his submission to the court about the sophistication of deepfake technology. The publication is real. But the study is “imaginary,” the filing by lawyers for Franson and Kohls alleged.

The journal volume and article pages cited by Hancock do not address deepfakes, but instead cover online discussions by presidential candidates about climate change, and the impact of social media posts on election results, the filing said.

Such a citation, with a plausible title and purported publication in a real journal, “is characteristic of an artificial intelligence ‘hallucination,’ which academic researchers have warned their colleagues about,” the filing said.

Hancock has declared under penalty of perjury that he “identified the academic, scientific, and other materials referenced” in his expert submission, the filing said.

The filing raised the possibility that the alleged AI falsehood was inserted by the defendants’ legal team, but added, “Hancock would have still submitted a declaration where he falsely represented to have reviewed the cited material.”

Last year, lawyers Steven A. Schwartz and Peter LoDuca were fined $5,000 in federal court in New York for submitting a personal-injury lawsuit filing that contained fake past court cases invented by ChatGPT to back up their arguments.

“I did not comprehend that ChatGPT could fabricate cases,” Schwartz told the judge.

 
