
Australia social media ban on users aged under 16 kicks in: Why it could trigger a global crackdown

Australia had initially granted YouTube an exemption from the ban, citing educational value, but reversed this in July 2025 after a key regulator found it was the most cited platform for harmful content exposure among kids.


Australia has become the first country in the world to enforce a minimum age for social media use, requiring platforms such as Instagram, YouTube and Snap to block more than a million accounts of users below the age of 16. The Australian legislation, which has drawn criticism from tech companies but support from parents, is likely to set a template for a broader global push to tighten regulation of young users’ online safety.

According to the new law, called the ‘Online Safety Amendment (Social Media Minimum Age) Act’, age-restricted platforms will be expected to take “reasonable” steps to identify existing accounts held by under-16s and deactivate or remove them, prevent under-16s from opening new accounts, and close off any workarounds that may allow under-16s to bypass the restrictions. Platforms also need processes to correct errors if someone is mistakenly missed by, or included in, the restrictions, so that no one’s account is removed unfairly.

The Australian approach differs significantly from India’s. Here, the data protection framework states that tech companies offering services to those under 18 years will have to seek consent from parents. India’s framework also prohibits behavioural tracking and targeted advertising to children. It has been notified but is yet to come into effect.

Australian govt’s rationale

According to the Australian government, the restrictions aim to protect young people from “pressures and risks” that users can be exposed to while logged in to social media accounts. These come from design features that encourage them to spend more time on screens, while also serving up content that can harm their health and wellbeing. A government regulator, through a survey, had earlier found that more than half of young Australians have faced cyberbullying on social media platforms.

The regulation has left Big Tech scrambling: the major platforms have publicly opposed the law while maintaining that they will comply with it. Local reporting suggests that Meta has already started deactivating accounts of users under the age of 16. While the law does not penalise young Australians who try to access social media after its enforcement, platforms that fail to block them risk fines of up to $33 million.

To be sure, the Australian government has excluded dating websites, gaming platforms and AI chatbots from the law, even though AI chatbots have recently come under the scanner for allowing children to have “sensual” chats. Apart from tech companies, the Australian Human Rights Commission has also said a blanket ban on social media for under-16s may not be the “right response”, as it could curtail their right to free speech.

Which platforms are covered?

From December 10, Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X and YouTube will be required to take reasonable steps to prevent Australians under 16 from having accounts on their platforms. The Australian government may revisit the list depending on the evolving situation, and if young users rush to other platforms that are currently not covered.



More generally, age restrictions will apply to social media platforms that meet three specific conditions: the sole purpose, or a significant purpose, of the service is to enable online social interaction between two or more end-users; the service allows end-users to link to, or interact with, some or all of the other end-users; and the service allows end-users to post material on the service.

Under-16s at risk

According to the government, being logged into a social media account increases the likelihood that under-16s will face pressures and risks that can be hard to deal with, including cyberbullying, stalking, grooming, and harmful and hateful content. These risks stem from platform design features that encourage young users to spend more time on screens while serving up content that can harm their health and wellbeing. An Australian online safety regulator found that a large percentage of children in the country had experienced harmful content online. (See box)

How have tech firms reacted?

While companies are complying with the law, they resisted its implementation during the consultation phase.


YouTube said that as the law requires kids to use the platform without an account, “it removes the very parental controls and safety filters built to protect them — it will not make kids safer on our platform”. Meta called the law “inefficient” and said it will “fail to achieve its stated goals of making young people safer online and supporting those who experience harm from their use of technology”.

Snap said disconnecting teens from their friends and family doesn’t make them safer, but may push them to less safe, less private messaging apps. X said that it was concerned about the potential impact the law may have on the human rights of children and young people, including their freedoms of expression and access to information.

How does it compare to India’s?

While India does not have a law specifically regulating children’s use of social media platforms, the Digital Personal Data Protection Act, 2023 requires tech companies to implement a mechanism for collecting “verifiable” parental consent before processing children’s personal data, though it does not prescribe a particular technical measure for collecting such consent. India’s law defines a child as an individual below the age of 18.

The law also directs companies not to process children’s personal data in ways that could have a detrimental effect on a child’s well-being, and not to engage in tracking, behavioural monitoring or targeted advertising directed at children.


Soumyarendra Barik is Special Correspondent with The Indian Express and reports on the intersection of technology, policy and society. With over five years of newsroom experience, he has reported on issues of gig workers’ rights, privacy, India’s prevalent digital divide and a range of other policy interventions that impact big tech companies. He once also tailed a food delivery worker for over 12 hours to quantify the amount of money they make, and the pain they go through while doing so. In his free time, he likes to nerd about watches, Formula 1 and football.
