TikTok Cuts Hundreds of Content Moderators Amid AI Shift

By Hilary Ong

Aug 27, 2025


TikTok is preparing to lay off hundreds of content moderators as it continues to shift its trust and safety operations toward artificial intelligence. The cuts will primarily affect a UK-based team of about 2,500 people, while additional roles across South and Southeast Asia will also be impacted.

The ByteDance-owned platform already relies heavily on automated systems. Over 85% of content removed for guideline violations is flagged and taken down by AI, the company told the Wall Street Journal.

A Broader Pattern of Cuts

This isn’t the first time TikTok has scaled back human moderation. In late 2024, the platform eliminated 500 positions, mainly in Malaysia. In July 2025, the German trade union ver.di reported that around 150 Berlin-based employees were also set to be replaced by AI models.

While the company has not explained why the UK team is being targeted now, the timing raises questions. The Financial Times noted that the decision came just one week before London-based staff were due to vote on unionization, which managers were reportedly resisting.

The layoffs come as the UK’s new Online Safety Act goes into effect. Under the law, online platforms can face penalties of up to 10% of global turnover or £18 million if they fail to protect minors from harmful content.

A TikTok spokesperson told the FT that the cuts are part of a reorganization designed to “strengthen our global operating model for Trust and Safety,” focusing on consolidating operations in fewer regions while adopting more technological tools.

Pushback From Labor Groups

Critics argue the move is more about cost-cutting than efficiency. “TikTok doesn’t want to have human moderators,” said John Chadfield of the Communication Workers Union. “Their goal is to have it all done by AI, which makes them sound smart and cutting-edge, but in reality, they’re just offshoring the work.”

Author’s Opinion

While TikTok frames the shift as progress, moderation powered almost entirely by AI is risky. Algorithms can sift through enormous volumes of content quickly, but they often miss cultural context, satire, and nuanced harm. Cutting human moderators may save money, but it also removes the human judgment needed for the edge cases where AI fails. If platforms want to protect users and maintain credibility, AI should be a tool for human moderators, not a substitute for them.


Featured image credit: Solen Feyissa via Unsplash


Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
