Instagram, Facebook, and Threads to lock out hundreds of thousands of teens as Australia enforces world-first age restriction from December 10
Tech giant Meta announced on Thursday that it has begun removing users under the age of 16 in Australia from Instagram, Facebook, and Threads, as the country prepares to enforce a world-first social media age ban.
Under the new law, major platforms including TikTok and YouTube are required to block underage users by December 10, with companies facing penalties of up to A$49.5 million (US$32 million) if they fail to take what regulators call “reasonable steps” to comply.
A Meta spokesperson said the company is moving aggressively to meet the deadline but stressed that full compliance will require ongoing work.
“While we are working hard to remove all users who we understand to be under the age of 16 by 10 December, compliance with the law will be an ongoing and multi-layered process,” the spokesperson said.
Younger users will be able to download their data before removal, and Meta says their accounts and content will be restored once they turn 16.
“Before you turn 16, we will notify you that you will soon be allowed to regain access to these platforms, and your content will be restored exactly as you left it,” the company said.
The ban is expected to affect hundreds of thousands of Australian teenagers, with Instagram alone estimated to have about 350,000 users aged 13 to 15.
Some platforms, including Roblox, Pinterest, and WhatsApp, have been granted exemptions for now, though the list is still under review.
Tech Pushback & Policy Clash
While Meta says it will comply, the company also urged the Australian government to shift responsibility for age verification to app stores, arguing that centralised screening would spare users from having to repeatedly verify their age across different apps.
“The government should require app stores to verify age and obtain parental approval whenever teens under 16 download apps,” the spokesperson said, adding that platforms could then apply those verified ages to ensure safer, age-appropriate experiences.
YouTube has also criticised the law, claiming the restrictions could make minors “less safe” because under-16s without accounts would lose access to the platform’s safety filters, even though they can still view content.
Australia’s Communications Minister, Anika Wells, dismissed YouTube’s argument as “weird”, saying platforms should be responsible for keeping children safe.
“If YouTube is reminding us all that it is not safe and there’s content not appropriate for age-restricted users on their website, that’s a problem YouTube needs to fix,” she said.
Wells added that some Australian teenagers had died by suicide after algorithms amplified content that harmed their self-esteem.
“This specific law will not fix every harm occurring on the internet,” she said, “but it will make it easier for kids to chase a better version of themselves.”
Legal Challenge & Enforcement Fears
The Digital Freedom Project, an internet rights group, has lodged a High Court challenge, describing the law as an “unfair” infringement on free speech.
Authorities acknowledge that enforcing the ban will be complex. Australian regulators warn that determined teens may attempt to upload fake identification documents or use AI to make themselves appear older in photographs.
Platforms will be responsible for detecting such attempts, though the government concedes that “no solution is likely to be 100 percent effective.”
The world is watching Australia’s experiment closely.
Malaysia has signalled plans for a similar under-16 ban next year, while New Zealand is preparing to introduce comparable restrictions.