Meta announced on Tuesday that it will now automatically restrict the content visible to teenage users on Instagram to what they would typically encounter in a movie rated PG-13. This major policy shift is the company’s latest response to waves of criticism from lawmakers and the public regarding its handling of child safety and related mental health concerns on the platform.
Content Filtering and Account Restrictions
Under the new content guidelines, Meta will hide certain accounts from teenagers, specifically those that share sexualized content or media related to drugs and alcohol. Posts containing swear words will no longer be recommended to teen users, although teens can still find such posts through search. Accounts whose names or biographies link to adult-themed websites such as OnlyFans or to liquor stores will be hidden from teens entirely. Teen users will no longer be able to follow these accounts, and those who already do will be unable to see or interact with the adult-leaning content they share. The restricted accounts will also be blocked from following teens, sending them private messages, or commenting on their posts.
Meta executives explained during a media briefing that while the company’s previous guidelines already met or exceeded PG-13 standards, parents were confused about what content teens could view. The company chose to standardize its policies against movie ratings that parents could better understand, stating in a blog post, “We decided to more closely align our policies with an independent standard that parents are familiar with… so teens’ experience in the 13+ setting feels closer to the Instagram equivalent of watching a PG-13 movie.”
Policy Background and Corporate Scrutiny
The move comes after years of scrutiny over the platform’s effect on young people. The company, then known as Facebook, faced intense criticism in 2021 when The Wall Street Journal published internal research showing the platform’s harmful effects on teenage girls specifically. Subsequent reports also demonstrated how easily teenagers could use Instagram to find drugs, sometimes even through ads run by the company.
Over the past year, Meta has rolled out several features to increase parental transparency and control. These include safety tools debuted in July that made it easier for teenage Instagram users to block and report accounts. Meta also recently faced scrutiny from the watchdog Tech Transparency Project, which alleged the company’s sponsorship of the National Parent Teacher Association “gives a sheen of expert approval” to its efforts to keep young users engaged. Meta confirmed its new Instagram content guidelines will begin rolling out Tuesday in the U.S., U.K., Australia, and Canada before expanding to other regions.
Author’s Opinion
Instagram head Adam Mosseri is correct in identifying that the most critical defense against the proliferation of convincing AI-generated content is not technology, which is easily defeated, but fundamental digital literacy and human skepticism. By placing the onus on parents and society to teach children to actively question the source and incentives behind every piece of online content, he is advocating for a profound, and necessary, cultural shift. The platform’s attempt to use crowdsourced fact-checking for AI labeling is a practical recognition that human consensus, however messy, is the only sustainable guardrail against a technology designed to make objective truth seamless and cheap to fabricate.
Featured image credit: Rubaitul Azad via Unsplash