Instagram Tightens Teen Controls Amid Safety Scrutiny

New global policy blocks under-18s from viewing mature content without parental approval as Meta faces mounting scrutiny over youth safety.

[Image source: Chetan Jha/MITSMR Middle East]

    Meta Platforms Inc. is rolling out new restrictions on Instagram aimed at limiting what teenagers under 18 can see, placing them by default in a PG-13–like content setting that they can’t opt out of without parental approval.

    The company said in a blog post that the updated setting reflects guidelines similar to US movie ratings and will be enforced globally on teen accounts.

    It expands on controls first introduced in 2023, when Meta created supervised experiences for users under 16 with limited contact and content exposure.

    Under the new rules, teens will be blocked from following accounts that share age-inappropriate content or suggest mature themes in their bios.

    If teens already follow such accounts, they will no longer see those accounts' posts, be able to message them, or view their comments.

    Search restrictions have also been tightened. Instagram will expand its existing blocklist, originally aimed at terms related to suicide and eating disorders, to include words like “alcohol” and “gore,” as well as common misspellings used to evade filters.

    Content will be screened across Reels, Feed, Explore, Stories, and even direct messages.

    Meta said its AI chatbot will also be updated to prevent age-inappropriate responses, in line with the PG-13 threshold.

    For families seeking stricter guardrails, a new “Limited Content” filter will further reduce what teens can access.

    The company said it reviewed over 3 million parental content ratings to calibrate the new settings.

    An Ipsos survey commissioned by Meta found that 95% of US parents supported the changes, and 90% said the filters would help them better understand their teens’ online consumption.

    The update comes amid rising scrutiny of social media’s impact on adolescent mental health.

    Meta, alongside Google, ByteDance, and Snap, was named in a master complaint last year alleging their platforms contributed to a youth mental health crisis and facilitated harmful behaviors.

    Multiple US states have also sued Meta, accusing it of building addictive products targeted at children.

    The new content settings will begin rolling out to Teen Accounts in the US, UK, Canada, and Australia this week, with a full global rollout expected by the end of 2025.

    Updated Guidelines

    Meta says the changes are designed to make teens' default experience on Instagram closer to watching a PG-13 movie. The updates include:

    • Accounts: Teens are restricted from following accounts that regularly share age-inappropriate content, or whose name or bio suggests the account is inappropriate for teens. If teens already follow such an account, they will no longer be able to see or interact with its content, send it direct messages, or see its comments on other posts.
    • Search: Search terms related to certain sensitive topics, such as suicide, self-harm, and eating disorders, were already on Instagram's blocklist. Going forward, teens won't see results for a wider list of mature terms, such as "alcohol" and "gore." Meta also plans to block misspelled versions of these words.
    • Content Experience: Teen accounts will now be protected from content that violates the updated guidelines across all surfaces, including Explore, Reels, Feed, and Stories, even from accounts they follow. Teens also won't be able to open links to such content sent via DMs.
    • AI: Meta's AI chatbot will be updated so that it cannot give age-inappropriate responses that would feel out of place in a PG-13 movie.
