X Corp., Elon Musk's company, is challenging a Minnesota state law that bans the use of deepfakes to influence elections, arguing that the statute infringes on First Amendment free speech rights.
The company's federal lawsuit asserts that the state statute, enacted in 2023, is preempted by Section 230 of the Communications Decency Act, the 1996 federal law that shields social media platforms from liability for user-generated content.
The company argued that while the deepfake ban may be well-intentioned, it threatens innocuous political speech, including satire, and exposes social media platforms to criminal liability over how they moderate such content.
“Rather than safeguarding democracy, this legislative measure stands to weaken it,” the lawsuit contends.
The Minnesota statute imposes criminal penalties, including imprisonment, on anyone who disseminates a deepfake within 90 days of a party convention or after early voting has begun, provided the person knows the material is fabricated or acts with reckless disregard for its authenticity.
Liability attaches only when the material is spread with intent to injure a candidate or influence an election's outcome; the law defines deepfakes as highly realistic fabrications produced with artificial intelligence or similar technology.
Democratic state Sen. Erin Maye Quade, who sponsored the legislation, pointed to Musk's financial involvement in politics and said the Minnesota law prevents the use of deepfakes for harmful election interference.
She called the lawsuit unnecessary and a waste of the Attorney General's Office's resources.
Minnesota Attorney General Keith Ellison, a Democrat, is charged with defending the law's constitutionality in court. His office said it is reviewing the lawsuit and will respond at the appropriate time.
The Minnesota law was already facing a legal challenge from content creator Christopher Kohls and Republican state Rep. Mary Franson, critics of the statute who favor AI-generated political satire.
Their case, which seeks to overturn the law, is currently on hold pending an appeal.
In that case, the Attorney General's office has maintained that deepfakes threaten electoral integrity and the foundations of democracy, and that the law includes built-in protections for satire and parody.
X Corp., formerly Twitter, is reportedly the only social media platform contesting Minnesota's law in court; it has mounted a similar First Amendment challenge to California's political deepfakes law.
The company points to features such as Community Notes, which lets users add context to potentially misleading posts, along with its Authenticity Policy and its Grok AI tool, as existing mechanisms for policing deceptive content.
Alan Rozenshtein, a University of Minnesota law professor, cautioned that the free speech question should be separated from opinions about Musk himself.
He suggested the law is unlikely to survive judicial review.
Rozenshtein noted that there is no First Amendment exception for misleading political speech, which remains constitutionally protected, and that the threat of criminal penalties would push social media companies to err heavily on the side of removal.
The likely result, he said, is the takedown of large amounts of content, chilling creative and humorous work along the way.
While acknowledging that deepfakes can cause real problems, Rozenshtein said he wants concrete evidence of their harm before free speech is restricted. He added that the deeper issue is the demand for misinformation, a challenge to the health of democracy that legislation targeting how content is created cannot solve on its own.