Meta’s internal documents about child safety suggest that the social media giant engaged in a broad pattern of deceit to downplay risks to young users while being aware of serious harms on its platforms, a court filing has alleged on the basis of newly unredacted information.
The legal brief was filed in November this year in the United States District Court for the Northern District of California, as part of proceedings in a sweeping 5,807-page lawsuit brought by a range of plaintiffs, including school districts, parents, and state attorneys general, against social media companies such as Meta, Google’s YouTube, Snap, and TikTok.
The newly released legal brief collectively accuses these tech giants of failing to take action and misleading authorities despite knowing that their respective platforms caused various mental health-related harms to children and young adults.
The brief has drawn particular attention for citing several research studies carried out internally by Meta that allegedly showed how millions of adult strangers were contacting minors on its sites, how its products exacerbated mental health issues in teens, and how posts related to eating disorders, suicide, and child sexual abuse were frequently detected but rarely removed, among other allegations first reported by Time.
The brief has reportedly been compiled from testimonies by current and former Meta executives as well as company research, presentations, and internal communications, though these underlying documents remain under seal. A common theme running through the allegations is that Meta employees proposed ways to curb the child safety issues that have long plagued its platforms, but were repeatedly blocked by executives who feared that new safety features would hamper teen engagement or user growth.
While the lawsuit is playing out in the United States, its implications may stretch beyond it. Notably, India is the largest market for Meta’s platforms, including Instagram, Facebook, and WhatsApp, which have been integrated with Meta AI.
The brief also comes days after Meta scored a major victory, with a US court ruling in its favour in a high-profile antitrust lawsuit brought by the US Federal Trade Commission (FTC), which had accused the company of holding a monopoly in social networking. Here is a closer look at the allegations outlined in the legal brief, Meta’s response, and the steps the company says it has taken to reduce online harm for teens.
In late 2019, Meta allegedly initiated a study codenamed ‘Project Mercury’ that sought to “explore the impact that our apps have on polarization, news consumption, well-being, and daily social interactions”. However, the preliminary findings of the study showed that people who stopped using Facebook “for a week reported lower feelings of depression, anxiety, loneliness, and social comparison”.
This finding was based on a random sample of users who stopped using Facebook and Instagram for a month, as per the legal brief. It further alleged that Meta chose to stop the research after executives were disappointed by the preliminary findings of Project Mercury.
Fearing a migration of young users to rival platforms such as TikTok, Meta devised a strategy to retain these users by launching a campaign to connect with school districts and paying organisations such as the National Parent Teacher Association and Scholastic to conduct outreach to schools and families, as per the legal brief.
The document further alleged that Meta used location data to push notifications to students in “school blasts” to boost engagement among young users during the school day. “One of the things we need to optimise for is sneaking a look at your phone under your desk in the middle of Chemistry :),” an employee allegedly said.
In 2019, Meta researchers recommended making all teen accounts on Instagram private by default in order to prevent adult strangers from connecting with children, according to the brief. The company’s policy, legal, communications, privacy, and well-being teams all supported the same recommendation.
But Meta’s growth team allegedly did not support the recommendation, as it would likely reduce engagement and result in a loss of 1.5 million monthly active teen users on Instagram each year. As a result, the company did not implement the recommendation to make all teen accounts private by default that year.
In the meantime, inappropriate interactions between adults and children were allegedly 38 times more frequent on Instagram than on Facebook Messenger. The launch of Instagram Reels further allowed teen users to broadcast short videos to a wide audience, including adult strangers, the brief alleged.
Additionally, an internal study in 2022 allegedly found that Instagram’s Accounts You May Follow feature recommended 1.4 million potentially inappropriate adult-owned accounts to teenage users in a single day. Default private settings for all teen accounts on Instagram were announced only in 2024.
When Vaishnavi Jayakumar, Instagram’s former head of safety and well-being, joined Meta in 2020, she was allegedly shocked to learn that the company had a “17x” strike policy for accounts that reportedly engaged in the “trafficking of humans for sex.” This is based on testimony that was reportedly included in the legal brief.
“You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended,” Jayakumar reportedly testified, adding that “by any measure across the industry, [it was] a very, very high strike threshold.”
Meta’s policy states that users under 13 are not allowed on its platforms. However, as per the brief, Meta knew that children under 13 were using the company’s products: internal research showed that there were four million users under 13 on Instagram in 2015, and by 2018, roughly 40 per cent of children aged 9 to 12 said they used Instagram every day.
In 2019, Instagram head Adam Mosseri announced that the platform was testing a new feature that lets users hide likes on posts. The announcement came after an internal research study codenamed Project Daisy allegedly found that hiding likes would make users “significantly less likely to feel worse about themselves,” as per the brief.
But its testing showed that the feature would negatively affect platform metrics, including ad revenue, according to the plaintiffs’ brief. As a result, Meta allegedly backtracked and made the ‘hide likes’ feature optional for users.
Another internal Meta study concluded that beauty filters on Instagram exacerbated the “risk and maintenance of several mental health concerns, including body dissatisfaction, eating disorders, and body dysmorphic disorder,” with children being particularly vulnerable.
These findings led Meta to ban beauty filters on Instagram in 2019. But the filters were rolled out again the following year after the company realised that banning them could negatively impact the platform’s growth, plaintiffs alleged in the brief.
Meta uses AI tools to monitor its platforms for harmful content. However, the company allegedly did not take down such content even after determining with “100% confidence” that it violated Meta’s policies against child sexual-abuse material or eating-disorder content.
Posts glorifying self-harm were not automatically deleted unless Meta’s systems were at least 94 per cent certain they violated platform policy, according to the plaintiffs’ brief. As a result, such posts remained on its platforms where teen users could come across them. An internal 2021 survey also found that more than 8 per cent of respondents aged 13 to 15 reported having seen someone harm themselves, or threaten to do so, on Instagram during the past week.
As part of an internal 2018 study, Meta surveyed 20,000 Facebook users in the US and found that 58 per cent showed signs of problematic use, with 55 per cent showing mild-level signs and 3.1 per cent showing severe-level signs.
In response, Meta’s safety team allegedly proposed features designed to lessen such addiction or ‘problematic use’. But these proposed features were set aside or watered down, plaintiffs alleged in the brief. For instance, one employee suggested a ‘quiet mode’ feature on Instagram and Facebook, which was allegedly shelved because the company was concerned that it would negatively impact metrics related to growth and usage.
“We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture,” Meta spokesperson Andy Stone was quoted as saying by CNBC.
“The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens—like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens’ experiences,” he added.
On the findings of the Project Mercury study, Stone wrote in a post on Bluesky, “This is a confirmation of other public research (“deactivation studies”) out there that demonstrates the same effect.”
“A pilot of the study ran. Researchers analysed the results and found the study didn’t overcome those expectation effects. That was the “company’s disappointment” (not the reductive way you frame it in your story) and the reason the project didn’t continue,” he said.
In February this year, Instagram announced it would be rolling out ‘Teen Accounts’ in India. A Teen Account on Instagram comes with enhanced privacy and parental controls. Accounts created by users between the ages of 13 and 18 are categorised as Teen Accounts, and their profiles are private by default. Meta has said it has continued to roll out new features to make the Teen Account experience safer.
In July this year, the company announced a new feature that lets Teen Account holders block and report users from within their direct messages (DMs). These users can also limit sensitive content, turn off notifications at night, and disable incoming messages from unconnected adults.
To be sure, the additional protections that come with Teen Accounts have been expanded to Meta’s other platforms, including Facebook, Messenger, and Threads.