Meta on Wednesday introduced enhanced safety measures aimed at teen users, including upgraded protections for direct messaging designed to prevent “exploitative content.”
Teens now see more information about the accounts messaging them, such as when an account was created, along with additional safety tips to help them spot potential scammers. The platform also lets teens block and report a suspicious account in a single streamlined action.
“In June alone, teens blocked accounts 1 million times and reported another 1 million after seeing a Safety Notice,” Meta stated.
Tackling Exploitation and Spam
This update is part of Meta’s wider effort to safeguard young users amid growing criticism from policymakers over the company’s handling of sexual exploitation risks. Earlier this year, Meta removed nearly 135,000 Instagram accounts engaged in sexualizing children, along with 500,000 linked accounts across Instagram and Facebook.
Meta now automatically places teen accounts, as well as adult-managed accounts representing children, into its strictest messaging and comment settings, which filter offensive content and limit contact from unknown users.
Although Instagram requires users to be at least 13 years old, adults may run accounts representing younger children as long as the account bio clearly states that an adult manages the account.
Broader Industry and Legal Context
Meta faces mounting pressure as states and Congress push for stricter regulation of social media to protect children. The Kids Online Safety Act, which would require platforms to uphold a “duty of care” to prevent harm to minors, was reintroduced in Congress in May.
Recently, Snapchat was sued by New Mexico for allegedly enabling predators to target children through sextortion schemes. Snapchat has denied the allegations, calling the lawsuit “patently false.”
Meta also announced the removal of about 10 million impersonating profiles in the first half of 2025, targeting spam and fake content.
What The Author Thinks
Meta’s new safety features show progress in addressing the serious risks facing teens online, but technology alone won’t solve the problem. Platforms must combine robust tools with clear policies, transparent enforcement, and ongoing collaboration with regulators and child protection groups. True safety for young users requires sustained commitment beyond feature updates.
Featured image credit: Heute