
Australia is set to enforce a nationwide ban on social media use for children under 16 from 10 December, a move that has prompted sustained opposition from major technology companies and placed the country at the centre of a global debate over how far governments should go in regulating young people's access to online platforms.
Growing Scrutiny of Social Media’s Impact on Teenagers
Stephen Scheeler, who served as Facebook’s Australia chief in the early 2010s, said early optimism about social media’s role in public life had faded by the time he left the company in 2017. He said belief in social media as a tool for connection and learning had been tempered by concerns over harm.
“There’s lots of good things about these platforms, but there’s just too much bad stuff,” Scheeler told the BBC.
Teenagers have become a key market for social media companies, attracting scrutiny from governments and health experts who argue that platform design contributes to poor mental health and wellbeing. Governments in jurisdictions including the European Union and the U.S. state of Utah have trialled restrictions on children’s access. Australia’s approach goes further than any previous effort.
Details of Australia’s Social Media Law
The new law requires social media companies to take “reasonable steps” to prevent users under 16 from having accounts. Australia is also the first jurisdiction to deny exemptions based on parental consent, making the policy the strictest of its kind globally.
Companies face fines of up to A$49.5m ($33m; £24.5m) for serious breaches. Meta said in a statement to the BBC that while it would meet its legal obligations, it believed legislation should empower parents to approve and verify app access rather than apply a blanket ban.
Communications Minister Anika Wells said the government rejected industry arguments that parents alone should control access. She said companies had had years to improve their practices and had failed to act sufficiently. She added that governments from the EU, Fiji, Greece, and Malta had contacted her for guidance, while Denmark and Norway are working on similar laws and Singapore and Brazil are observing developments closely.
Tech Industry Opposition and Lobbying Efforts
Social media firms have openly opposed the Australian law for more than a year. They argue the ban could reduce online safety, limit access to information, and raise unresolved questions about age verification technology.
Paul Taske of NetChoice, a trade group representing major technology firms, said Australia was engaging in “blanket censorship” that would leave young people less informed and less prepared for adulthood.
Behind the scenes, executives sought direct engagement with the government. Wells said Snapchat chief executive Evan Spiegel had met with her personally. She also said YouTube enlisted the children’s entertainment group The Wiggles as part of its lobbying efforts.
Meta and Snap have both argued that Apple and Google should assume responsibility for age verification through their app stores.
Legal Pressure and Whistleblower Claims in the United States
The Australian law follows a series of lawsuits and whistleblower disclosures in the United States that accuse social media companies of prioritising profit over user safety.
A major U.S. trial is scheduled to begin in January, consolidating hundreds of cases alleging that Meta, TikTok, Snapchat, and YouTube designed their platforms to be addictive and concealed evidence of harm. Meta chief executive Mark Zuckerberg and Snap chief Evan Spiegel have been ordered to testify.
In a separate case, state prosecutors alleged that Zuckerberg overruled proposals to remove Instagram face-altering filters that experts say worsen body image issues. Former Meta employees including Sarah Wynn-Williams, Frances Haugen, and Arturo Béjar have provided testimony to Congress alleging various internal failures.
Meta has said it has worked to build tools to improve teen safety.
Recent Content Moderation and Misinformation Controversies
The industry also faces scrutiny over misinformation, hate speech, and violent content. Graphic footage of the assassination of Charlie Kirk spread rapidly across several platforms earlier this year. Elon Musk’s X has sued U.S. states over laws requiring platforms to disclose how they address hate speech. Meta faced criticism after announcing it would remove third-party factcheckers.
A bipartisan group of U.S. lawmakers has increased pressure on platform executives. During a congressional hearing last year, Zuckerberg was urged to apologise to families affected by online harm. Among them was Tammy Rodriguez, whose 11-year-old daughter died after sexual exploitation on Instagram and Snapchat.
Market Pressures in Australia and Product Changes
Australia is a significant market for social media companies. Snapchat told a parliamentary hearing in October that it estimated about 440,000 users aged 13 to 15 in the country. TikTok reported around 200,000 under-16 users. Meta said it had about 450,000 under-16 users across Facebook and Instagram.
As the ban approached, companies introduced youth-focused safety products. YouTube announced AI tools to estimate user age. Snapchat expanded child-specific accounts with default safety settings. Meta launched Instagram Teen accounts with restricted privacy and content controls, accompanied by a wide marketing campaign.
Marketing professor Pinar Yildirim of the University of Pennsylvania’s Wharton School said these efforts were designed to reduce harm and protect access to young users in large global markets.
Ongoing Criticism of Safety Measures
Critics remain sceptical of platform-led reforms. Arturo Béjar, a Meta whistleblower, co-authored a September study that found nearly two-thirds of the new safety controls on Instagram Teen accounts were ineffective.
Béjar said companies were not addressing the full extent of known harm to teenagers.
Analysts say technology firms may comply with the Australian law while also allowing technical and legal challenges to test its effectiveness. University of Southern California professor Nate Fast said the Australian case could serve as a precedent for other countries.
Scheeler said the fines could be treated as an operational cost by large firms. Carnegie Mellon University marketing professor Ari Lightman said the penalties would be financially manageable for major platforms seeking long-term growth in youth markets.
Scheeler said the Australian policy represents a turning point in social media governance, even if its long-term outcome remains uncertain.
Featured image credits: Mariia Shalabaieva via Unsplash
