Meta announced new protections for adult-managed Instagram accounts that primarily feature children. These accounts will be automatically placed under the platform’s strictest message settings to reduce unwanted contact, and the “Hidden Words” feature will filter offensive comments.
The stricter rules also apply to accounts run by parents or talent managers representing children. Meta acknowledged that while most of these accounts are used positively, there are cases of abuse where sexualized comments and inappropriate messages have been left, violating Instagram’s policies.
Preventing Suspicious Interactions
To further protect these accounts, Meta will limit the ability of suspicious adults—such as those previously blocked by teens—to discover or interact with these child-focused profiles. The platform will reduce such adults’ visibility in Instagram Search and avoid recommending them to these accounts.
This move is part of broader efforts by Meta and Instagram to address mental health and safety concerns tied to social media, especially following warnings from the U.S. Surgeon General and some state-level regulations requiring parental consent for minors’ access.
The new measures affect family vloggers and parents who manage “kidfluencer” accounts. Critics argue these accounts risk exploiting children by publicly sharing their lives. A recent investigation revealed that many parents are aware of, or participate in, the commercialization of their child’s content, which often attracts a large number of male followers.
Meta will notify affected accounts of the updated safety settings and prompt a review of privacy controls. The company also disclosed it has removed nearly 135,000 Instagram accounts involved in sexualizing child-focused content, along with 500,000 associated accounts on Instagram and Facebook.
Enhanced Protections for Teen Users
Meta is also enhancing safety in Instagram’s Teen Accounts, featuring built-in protections tailored for young users. Teens will see new safety tips encouraging them to evaluate profiles carefully and be cautious about what they share. The platform now shows the month and year each account joined Instagram at the start of new chats.
A combined block-and-report feature has been introduced to streamline the process of dealing with suspicious or harmful accounts. These features aim to help teens identify and avoid potential scams.
Meta reported that in June alone, teens blocked accounts 1 million times and reported another million after viewing safety notifications. Additionally, 99% of users have kept the nudity protection filter enabled, and over 40% of blurred images received in direct messages were left blurred rather than viewed.
Author’s Opinion
Social media platforms face the difficult task of balancing openness with safety, especially when vulnerable users like children and teens are involved. Meta’s latest steps show progress, but true protection requires ongoing transparency, stricter enforcement, and better tools for users and parents alike. Without constant vigilance, risks remain high for exploitation and abuse.
Featured image credit: Brett Jordan via Pexels