Robby Starbuck sues Meta over AI-generated responses


    In Los Angeles, a new legal battle has emerged: conservative activist Robby Starbuck has sued Meta for defamation, accusing the tech giant’s AI chatbot of spreading false information about him. Starbuck argues that the AI falsely implicated him in the January 6, 2021, U.S. Capitol riot, a claim he denies, asserting he was in Tennessee that day.

    Starbuck’s troubles began in August 2024, when he challenged “woke DEI” (diversity, equity, and inclusion) policies at Harley-Davidson and a dealership used the AI’s statements against him. He discovered false information that, he claims, has since damaged his reputation and put his family’s safety at risk. Consequently, the political commentator has filed suit in Delaware Superior Court seeking more than $5 million in damages.

    In response, a Meta spokesperson said the company continually improves its models and has released updates addressing the issue Starbuck raised. The lawsuit joins a growing number of cases accusing AI platforms of distributing false information; in 2023, for example, a Georgia-based radio host sued OpenAI over similar defamation claims involving ChatGPT.

    James Grimmelmann, a professor of digital and information law at Cornell Tech and Cornell Law School, said AI companies could be held accountable in such defamation cases. He underscored that disclaimers alone cannot absolve liability: a company cannot wave off damaging assertions in its AI’s outputs simply by labeling those outputs unreliable.

    Grimmelmann also drew a parallel between AI defamation and copyright infringement: in both areas, tech companies argue they are not responsible for policing AI-produced content. Seeking to limit their liability, they claim that enforcement could hobble an AI’s functionality or force it to be shut down entirely.

    Addressing AI inaccuracies is genuinely challenging, Grimmelmann noted, and Meta is struggling with the problem: Starbuck’s complaints have persisted despite attempted fixes. Starbuck notified Meta of the errors, urging the company to retract them, investigate their source, and deploy safeguards against future inaccuracies. According to Starbuck, Meta failed to take meaningful responsibility and instead simply erased his name from AI-generated responses without addressing the root problem.

    In the wake of the lawsuit, Joel Kaplan, Meta’s chief global affairs officer, acknowledged the errors and labeled the situation “unacceptable.” He promised to work closely with Meta’s product team to understand the incidents and explore resolutions.

    Adding fuel to the fire, Starbuck accused the AI of making other baseless claims, including that he engaged in Holocaust denial and pleaded guilty to a crime, a striking falsehood given his clean record. Meta’s subsequent decision to “blacklist” his name did little to resolve the issue, since the AI could still surface information about him through news coverage that mentioned him.

    In closing, Starbuck warned that any person or candidate could be targeted next, emphasizing the potential for misleading AI outputs to sway elections and damage individual reputations. His lawsuit signals the growing scrutiny of AI misinformation and mounting demands that tech companies build accountability and transparency into their systems.