
Meta rolls back hate speech rules, with Zuckerberg pointing to ‘recent elections’ as a driving factor.

As it prepares for a second Trump administration, Meta has made significant changes to the content moderation policies across its platforms. Not only has the company eliminated its fact-checking program, but it has also eased restrictions on hate speech and abusive behavior, particularly in matters of sexual orientation, gender identity, and immigration status. The shift mirrors changes made earlier at Elon Musk’s platform, X.

Concerns are rising among advocates for marginalized communities, who fear that Meta’s scaled-back moderation will translate into real-world harm. Meta CEO Mark Zuckerberg said the changes were motivated by a desire to align with mainstream discourse, citing recent elections as a contributing factor. He stated that the company aims to “remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse.”

One notable addition to Meta’s community standards permits claims of mental illness or abnormality based on gender or sexual orientation, under the guise of political and religious debate over transgender issues and homosexuality. In practice, users of Facebook, Instagram, and Threads may now label gay people as mentally ill. The platforms still prohibit certain slurs and harmful stereotypes historically linked to intimidation, such as blackface and Holocaust denial.

Meta has also removed from its policy rationale a key passage explaining its hate speech bans: the assertion that hate speech can foster an atmosphere of intimidation and exclusion and, in extreme cases, may incite violence offline.

Experts such as Ben Leiner, a lecturer at the University of Virginia’s Darden School of Business, see the policy changes as an attempt to curry favor with the anticipated administration while cutting content moderation costs. Leiner warns that the changes could cause real harm, not just in the U.S., where hate speech and misinformation on social media are already rising, but also internationally, where platforms like Facebook have exacerbated ethnic tensions, as seen in Myanmar.

Meta itself admitted in 2018 that it had fallen short in preventing its platform from being used to incite violence in Myanmar, where it contributed to the persecution of the Rohingya Muslim community.

Arturo Béjar, a former engineering director at Meta, is more concerned about the changes to the company’s harmful-content policies than about the end of fact-checking. He emphasized that rather than proactively enforcing its rules against online bullying, harassment, and self-harm content, Meta will now rely on user reports before taking action. The company says it will focus its automated systems on the most severe violations, such as terrorism and child exploitation.

Béjar warns that this reactive rather than proactive approach means considerable damage may already be done by the time harmful content is reported. He voiced deep apprehension about what the changes could mean for young users, stating, “Meta is shirking their duty to maintain safety, and we will remain in the dark about the adverse effects of these changes since Meta is not transparent about the challenges faced by youths. They actively work against legislative measures that could provide support.”
