
Instagram Introduces Parental Alerts For Self-Harm Content


Steph Bazzle


Self-harm is a prominent issue in adolescence. About 1 in 8 teens admits to engaging in it at some point, which means that most teens, whether or not they consider it themselves, are likely to know someone who does or to encounter content or information about self-harm before they reach adulthood.

If your child is a social media user, especially one who follows other teens’ accounts, they may encounter self-harm-related content, including content that promotes self-harm and content that promotes healing or replacing those habits.

Now, if your child seeks this content out on Instagram, Meta will make an effort to contact you with a warning.

What’s Meta’s Plan For Self-Harm Content?


Meta introduced teen accounts in 2024, adding safety features for accounts the site identified as belonging to a user under the age of 16, based on the birthdate they gave when joining the site.

The new feature applies only to teen accounts, which already give parents limited access to their teens’ online activities, such as seeing who their children are messaging. That means if your child is over 16, or if their profile is set up as though they were, you won’t get these warnings.

Instagram already redirects searches for suicide and self-harm content, giving kids access to crisis lines and other content intended to support them in safer decisions. Now, if there are multiple searches within a short period, Meta will send a message to the account’s parent, according to the company’s announcement.

“The alerts will be sent to parents via email, text, or WhatsApp, depending on the contact information available, as well as through an in-app notification. Tapping on the notification will open a full-screen message explaining that their teen has repeatedly tried to search Instagram for terms associated with suicide or self-harm within a short period of time. Parents will also have the option to view expert resources designed to help them approach potentially sensitive conversations with their teen.”

Offering Parents Resources For Helping Their Teen

Meta isn’t planning just to send a message that says, “Hi, parent, your child is searching for self-harm tips on Instagram. Just thought you’d like to know.”


Instead, their proposal includes offering parents resources to help them have a productive, helpful conversation with their teen. The company also offers information on how to connect with a professional for further help.

The feature is rolling out in Instagram’s search function first, but it won’t stop there.

Meta says that kids are already using their AI tools to seek support for their mental health.

It’s true that kids are relying heavily on these products for mental health support. A UK study, for instance, found that about 1 in 4 teens has used AI chatbots for mental health support, according to EdSource, and among those who had been either victims or perpetrators of serious violence, those numbers are higher.

So, Meta’s next planned step is to add this feature to their chatbot as well. That means a teen whose AI chats hit on these sensitive topics enough times will also trigger a message to their parents.

Is This The Best Feature To Protect Teens?


Instagram’s new feature has drawn some criticism and backlash. Not everyone agrees that sending a message to parents is the best way to combat suicide and self-harm.

There are some legitimate concerns.

First among them is the risk of false positives, in which an algorithmic error triggers an alert telling parents their child is engaging with harmful material when no such engagement occurred.

However, the ‘false negative,’ or a false sense of security, may be more worrying. Parents should not consider this feature a replacement for their own diligence. Teens, recognizing the potential consequences of using Meta platforms to access self-harm content, may turn to other platforms instead, or shift to new coded phrasing to dodge algorithms.


Others argue that this is a case of Meta shifting responsibility off its platforms and onto parents. Ged Flynn, chief executive of the charity Papyrus Prevention of Young Suicide, told the BBC that Meta’s newest feature fails to address the root problem: the material that its platforms allow and promote through algorithms.

“Parents contact us every day to say how worried they are about their children online. They don’t want to be warned after their children search for harmful content, they don’t want it to be spoon-fed to them by unthinking algorithms.”

The American Psychological Association (APA) has called for stricter regulation of chatbots, both to make those used for mental health purposes safer and to ensure that the public understands these tools’ limitations.

A Step In The Right Direction

Instagram’s teen account feature for self-harm material has shortcomings. It won’t work with accounts that haven’t been set up for parental supervision. For kids who do have a teen account, it won’t prevent them from making a secret second account with a false birthdate. It certainly won’t prevent them from accessing unsafe content on other platforms or by sidestepping keywords.

However, if it works as planned, it may be one step towards making the internet slightly safer for teens. At the moment, there’s no hard evidence that safety features on social platforms actually reduce self-harm in teens.

We do know that kids are engaging with this content online, and that it may increase risks. We know that kids have used AI chatbots to address their mental health concerns and ended up worse off, to the extent that lawsuits are now in progress after multiple teens took their own lives.


And, of course, we know that parents can find it difficult to truly know what their kids are doing online, especially when kids have smartphones that give them internet access from anywhere.

With all that in mind, this feature could be a step in the right direction.
