According to a new report from the human rights campaign group Global Witness, TikTok’s algorithm actively recommends pornography and highly sexualized content to accounts registered as belonging to children. Researchers created dummy child accounts and activated the platform’s safety settings, yet they still received explicit search suggestions. These suggested search terms subsequently led to sexually explicit material, including videos of penetrative sex.
Research Methodology and Findings
In late July and early August of this year, Global Witness researchers established four accounts on TikTok, pretending to be 13-year-olds. They used false dates of birth and were not required to provide any further information to verify their identity. Crucially, they also enabled the platform’s “restricted mode,” which TikTok claims prevents users from seeing “mature or complex themes, such as… sexually suggestive content.”
Without initiating any searches themselves, the investigators found overtly sexualized search terms being recommended in the app’s “you may like” section. These suggested terms led to content showing women simulating masturbation. Other videos featured women flashing their underwear in public or exposing their breasts. At its most extreme, the suggested content included explicit pornographic films of penetrative sex. Researchers noted that these videos were often embedded within otherwise innocuous content, apparently in an effort to evade the platform’s content moderation systems.
Ava Lee from Global Witness expressed deep concern over the findings, calling them a “huge shock” to the research team. She stated, “TikTok isn’t just failing to prevent children from accessing inappropriate content – it’s suggesting it to them as soon as they create an account.” Global Witness, which typically focuses on how large technology companies impact discussions around human rights, democracy, and climate change, stumbled upon this issue while conducting separate research in April of this year.
TikTok’s Response and Repeated Failures
After the April discovery, Global Witness informed TikTok of its findings, and the company claimed to have taken immediate action to resolve the problem. However, when the researchers repeated the exercise in late July and early August, as described above, they found the app once again recommending sexual content to the fake child accounts.
TikTok maintains that it is “fully committed to providing safe and age-appropriate experiences,” noting that it has more than 50 features designed to protect teenagers. The platform also asserts that it removes nine out of every ten videos that violate its guidelines before they are even viewed. When Global Witness presented the findings from the second round of research, TikTok responded by saying it took action to “remove content that violated our policies and launch improvements to our search suggestion feature.”
The second research project by Global Witness was conducted after the Online Safety Act’s Children’s Codes came into effect on July 25 this year. These Codes impose a legal duty on platforms to protect children online, including a requirement to use “highly effective age assurance” to prevent children from seeing pornography. Platforms must also adjust their algorithms to block content encouraging self-harm, suicide, or eating disorders. Given these legal obligations, Ava Lee of Global Witness argued that “Everyone agrees that we should keep children safe online… Now it’s time for regulators to step in.” During the investigation, researchers also observed that other users appeared confused by the sexualized search terms being recommended, with one commenter asking, “can someone explain to me what is up w my search recs pls?” and another inquiring, “what’s wrong with this app?”
What The Author Thinks
The recurring failure of TikTok’s safety measures, even after being explicitly warned by a campaign group, suggests a fundamental flaw: either the moderation system is hopelessly ineffective, or the core engagement algorithm actively prioritizes extreme, boundary-pushing content, even for children’s accounts, to maximize time spent on the app. This is a severe indictment of the platform’s commitment to child safety and underscores that self-regulation is insufficient. New legal frameworks, like the Children’s Codes, are clearly necessary, but unless they include substantial, punitive enforcement mechanisms for deliberate or negligent algorithmic failures, companies will continue to treat the safety of minors as a secondary concern behind growth metrics.
Featured image credit: greenwish _ via Pexels