Meta Platforms Inc. will stop employing full-time fact-checkers in the U.S. on the first day of business next week. This decision is part of a broader shift in the company’s content moderation strategy, which aims to prioritize free speech while drawing inspiration from the Community Notes initiative at Elon Musk’s social media platform, X.
Change in Content Moderation Strategy
Joel Kaplan, Meta’s Chief Global Affairs Officer, announced in a blog post that the company would scrap its separate, long-running fact-checking initiatives. The decision comes amid an alarming rise in the spread of deceptive information; Insider reported that experts attributed the increase to the gradual dismantling of fact-checking safeguards. By January, Meta had already begun relaxing its content moderation standards, just in time for the most consequential political moment yet: President Trump’s inauguration.
Meta has long stated a goal of reducing hateful conduct. Under its revised policy, however, the company now permits allegations of mental illness or abnormality when they are based on gender or sexual orientation, citing the ongoing political and religious debate over transgenderism and homosexuality. The recent changes are an intentional move toward a more hands-off approach: unlike the company’s prior strategy, it allows inflammatory issues and ideas to spread widely without rigorous scrutiny.
The new direction reflects Meta’s understanding of its most valuable asset: user engagement. The platform’s news feed algorithm rewards content that provokes outrage, and the company uses that engagement to its advantage, referring to it as its “greatest currency.” This strategy aligns with Kaplan’s announcement that Community Notes will be introduced to Meta platforms. He said the first Community Notes will gradually begin to appear across Facebook, Threads, and Instagram, and reiterated that no penalties are tied to the rollout.
Policy Changes and Free Speech Reaffirmed
Meta founder and CEO Mark Zuckerberg framed the shift around the recent elections, calling them a “cultural tipping point” toward re-establishing the primacy of speech. That was the central message behind the company’s recent spate of policy changes.
Kaplan stressed that Meta is lifting most restrictions on public discussion of sensitive subjects such as immigration and gender identity, topics that typically ignite the most heated partisan rhetoric and opposition.
Meta is beginning this latest chapter with no U.S.-based fact-checkers. Many observers, including us, are eager to see how the change will affect the dissemination of fact and fiction across its far-reaching platforms.
What The Author Thinks
Meta’s shift away from fact-checking and focus on free speech in this context raises serious concerns about the future of responsible content moderation. While promoting user engagement may help Meta’s business model, it could also exacerbate the spread of misinformation and deepen political divides. The move reflects an industry-wide struggle between maintaining platforms as spaces for free expression and protecting users from harmful content.
Featured image credit: Tech Wire Asia