
Instagram will begin notifying parents if their teen repeatedly searches for terms related to suicide or self-harm within a short period of time, the company said on Thursday. The feature will roll out in the coming weeks to parents who are enrolled in Instagram’s parental supervision program.
The Meta-owned platform said it already blocks users from searching for suicide and self-harm content. The new alerts are intended to inform parents if a teen continues attempting such searches, so they can step in and offer support.
How The Alerts Work
Searches that may trigger a notification include phrases encouraging suicide or self-harm, terms suggesting a teen may be at risk, and direct keywords such as “suicide” or “self-harm.”
Parents will receive alerts through email, text message, or WhatsApp, depending on the contact information they have provided. They will also see an in-app notification. Each alert will include guidance and resources to help parents approach a conversation with their teen.
Instagram said it aims to avoid excessive notifications, noting that overuse could reduce the effectiveness of the alerts. The company said it analyzed search behavior and consulted experts from its Suicide and Self-Harm Advisory Group to determine a threshold that requires several searches within a short timeframe before triggering a notification.
The company acknowledged that the system may sometimes notify parents even when there is no immediate cause for concern, but said it believes this approach balances caution with practicality. Instagram added that it will continue monitoring feedback and adjust the feature if needed.
Legal Pressure And Ongoing Scrutiny
The announcement comes as Meta faces multiple lawsuits in the United States over claims that its platforms have harmed teens.
This week, Instagram head Adam Mosseri testified in a case in the U.S. District Court for the Northern District of California, where plaintiffs' attorneys questioned the company's timeline for rolling out safety tools, including a nudity filter for teens' private messages.
In a separate case in Los Angeles County Superior Court, internal Meta research presented during testimony found that parental supervision tools had limited impact on reducing compulsive social media use among children. The study also indicated that children experiencing stressful life events were more likely to struggle with regulating their usage.
Instagram said the new alert system will roll out next week in the United States, United Kingdom, Australia, and Canada, with additional regions to follow later this year.
In the future, the company plans to extend notifications to situations where a teen attempts to engage Instagram’s AI in conversations about suicide or self-harm.
The feature marks another step by Meta to expand parental oversight tools amid growing regulatory and legal scrutiny over teen safety online.
Featured image credits: Wikimedia Commons
