
Meta announced on Tuesday that Instagram will soon restrict teen users to viewing only PG-13-rated content by default, as part of its most comprehensive safety overhaul yet for younger audiences.
Under the new system, teen-specific accounts will automatically filter out photos, videos, and posts containing sexually suggestive material, drug use, or dangerous stunts — bringing Instagram’s safety standards closer to those of PG-13-rated films.
“This includes hiding or not recommending posts containing strong language, risky stunts, or material that could encourage dangerous behavior — such as marijuana paraphernalia,” Meta said in a blog post, calling the update its “most significant change” since teen accounts were introduced last year.
Parents Gain More Control Over Teen Accounts
In addition to the default content restrictions, Meta will introduce a new “Limited Content” setting — a stricter option that lets parents further limit what their children can view or interact with.
Teens won’t be able to change these safety settings without parental approval, giving parents more authority over their child’s online environment.
Meta said the move is designed to protect teens from exposure to harmful material while addressing growing public concern over the platform’s impact on youth mental health.
The changes follow mounting criticism, as well as research suggesting that Instagram’s existing safeguards have failed to fully protect teens from inappropriate content.
A recent independent study found that teen accounts created by researchers were still being recommended sexualized and self-harm-related content, including explicit text descriptions, cartoon depictions of sexual acts, and body image–related posts that could harm vulnerable users.
Meta acknowledged the concerns, saying the latest measures go “further than ever before” by cutting off teens’ access to accounts that frequently share or promote age-inappropriate material — including those referencing OnlyFans or similar adult platforms.
Teens who already follow such accounts will automatically lose access to those accounts’ content and will be unable to message them, comment on their posts, or otherwise interact with them. Likewise, those accounts will be blocked from contacting teen users.
Meta said the new PG-13 framework will also extend to search, chats, and AI-powered experiences on Instagram.
“AIs shouldn’t deliver responses that would be inappropriate in a PG-13-rated movie,” the company said, noting that content filters will now cover a broader range of sensitive topics — from suicide and eating disorders to alcohol and gore, even when such words are misspelled.
The company already blocks searches related to self-harm and eating disorders but said this update will widen the safety net to catch additional harmful content.
Meta described the update as part of its ongoing effort to “create the safest online experience possible for teens” while maintaining an environment for self-expression and creativity.
The new protections will roll out globally in the coming weeks, as Meta continues to face scrutiny from regulators and parents alike over its handling of youth safety, algorithmic recommendations, and mental health impacts.