The Minnesota Reformer reports on a remarkable expert declaration filed in Kohls v. Ellison by an "AI and misinformation" expert. The declaration ironically cites what appear to be imaginary journal articles, apparently "hallucinated" by an AI model such as ChatGPT.
A leading misinformation expert is being accused of citing non-existent sources to defend Minnesota’s new law banning election misinformation.
Professor Jeff Hancock, founding director of the Stanford Social Media Lab, is “well-known for his research on how people use deception with technology,” according to his Stanford biography.
At the behest of Minnesota Attorney General Keith Ellison, Hancock recently submitted an affidavit supporting new legislation that bans the use of so-called “deep fake” technology to influence an election. The law is being challenged in federal court by a conservative YouTuber and Republican state Rep. Mary Franson of Alexandria for violating First Amendment free speech protections.
…
If the citations were generated by artificial intelligence software, it’s possible that other parts of Hancock’s 12-page declaration were as well. It’s unclear whether the non-existent citations were inserted by Hancock, an assistant, or some other party. Neither Hancock nor the Stanford Social Media Lab replied to repeated requests for comment. Nor did Ellison’s office.
Frank Bednarz, an attorney for the plaintiffs in the case, said that proponents of the deep fake law are arguing that, “unlike other speech online, AI-generated content supposedly cannot be countered by fact-checks and education.”
However, he added, “by calling out the AI-generated fabrication to the court, we demonstrate that the best remedy for false speech remains true speech — not censorship.”
Read more at Minnesota Reformer.