
More than a year ago, Meta announced that it would roll out special Teen Accounts on Instagram designed to safeguard teenage users. Unfortunately, a new report shared with TIME suggests that the accounts and their accompanying features, just launched worldwide last month, have done little to make the app safer.
The study was commissioned by the child advocacy groups ParentsTogether Action, the HEAT Initiative, and Design It for Us, and the results are shocking. According to the report, nearly 60% of teens aged 13 to 15 who received unwanted messages said the messages came from users they believed to be adults. The same percentage said they had encountered inappropriate content in their feeds, and most chillingly, nearly 40% of the kids who received unwanted messages said the messages from adults were sexual or romantic in nature.
Shelby Knox, director of online safety campaigns at ParentsTogether, criticized Meta’s lack of progress, saying, “Parents were promised safe experiences. We were promised that adults wouldn’t be able to get to our kids on Instagram.”
The Report Is The Second To Be Made Public In Recent Weeks
In late September 2025, Reuters reported on a similar study by researchers at Northeastern University, which found that most of the 47 child safety features Instagram promised were flawed. Of the 47 features Meta rolled out, only eight worked as promised; the remaining 39 either reduced harm while carrying significant limitations or were ineffective or no longer available. Restrictions like sensitive-content controls, time-management tools, and features meant to protect kids from inappropriate contact did not work as advertised.
The researchers also found that adults could easily message teenagers they were not connected to, and that Instagram’s follow recommendations steered teens toward adults they didn’t know. Sexual content, violent content, and self-harm and body-image content, all supposedly blocked by Meta’s sensitive-content filters, were still showing up in teenagers’ feeds. Even more concerning, the study found that despite Meta’s policy requiring users to be at least 13 years old, many pre-teens were seeing this kind of content amplified by Instagram’s recommendation algorithm.
What Meta Promised Users

Meta announced in September 2024 that it was rolling out new features to make Instagram safer for younger users. These included special “Teen Accounts” that would filter harmful content and restrict messages from users teens don’t follow and aren’t connected to. At the time, Meta promised “built-in protections for teens, peace of mind for parents.” The new measures, however, appear not to have stemmed the tide of inappropriate content or contact for teen accounts. Researchers even found that 56% of teens in the study didn’t report the content or messages because “they were used to it.”
Meta disputes the findings of this report and others that criticize its safety features. Meta spokesperson Liza Crenshaw said in a statement to TIME:
“This deeply subjective report relies on a fundamental misunderstanding of how our teen safety tools work. Worse, it ignores the reality that hundreds of millions of teens in Teen Accounts are seeing less sensitive content, experiencing less unwanted contact, and spending less time on Instagram at night. We’re committed to continuously improving our tools and having important conversations about teen safety—but this [report] advances neither goal.”
Former Meta Employee Agrees That The Algorithm Isn’t Doing Enough
A former senior engineering and product leader at Meta, Arturo Bejar, told TIME that suggestive content is prevalent because Instagram’s algorithm rewards it. Even young users who aren’t aware of what that content is will see it in their feeds. “The minors didn’t begin that way, but the product design taught them that. At that point, Instagram itself becomes the groomer.”
The HEAT Initiative’s CEO, Sarah Gardner, said that Instagram “is absolutely falling flat on delivering the safeguards that it says it does.”
The day after the September report was released, Meta announced that it had converted millions of underage users’ accounts into Teen Accounts. It also expanded the program worldwide through its Facebook and Messenger apps and announced new partnerships with schools and educators, including a new online safety curriculum for middle schoolers.