Karnataka Chief Minister Siddaramaiah’s announcement mooting a “ban” on the use of social media by children under the age of 16 marked the latest instance of a government stepping into the global debate around the impact of digital platforms on teenagers.
What the announcement did not specify was how such a ban would be implemented.
The proposal raises several questions: how would platforms identify underage users? Who would enforce it – the state government or the platforms themselves? And can a state government regulate access to global digital platforms?
While the Centre has been mulling age-based restrictions, sources in the Ministry of Electronics and Information Technology (MeitY) said they remain unsure how state-specific bans can be implemented. The ministry is currently studying the framework of the Australian law, a government official said.
Australia’s Online Safety Amendment (Social Media Minimum Age) Act, in effect since December 2025, does not penalise under-16s who access an age-restricted social media platform, nor their parents or carers. The government has characterised the policy as “not a ban; it’s a delay to having accounts.”
However, “age-restricted social media platforms may face penalties if they don’t take reasonable steps to prevent under-16s from having accounts”, its government website notes. A court can impose civil penalties on platforms that fail to take such steps. These platforms are expected to remove accounts of under-16s using multiple age-determination technologies: age verification at the device, operating system, or app store level; verification through an Australian bank account ID or government-issued ID; or facial age estimation from image or video selfies at registration or login.
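The “reasonable steps” standard described above implies layering several imperfect age signals rather than relying on any single check. The sketch below is purely illustrative: the signal names, thresholds, and escalation logic are assumptions, not any platform’s or regulator’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

MINIMUM_AGE = 16  # the Australian law's threshold

# Hypothetical signals a platform might collect; field names are
# illustrative, not a real API.
@dataclass
class AgeSignals:
    self_declared_age: Optional[int] = None     # birthdate typed at sign-up
    os_reported_minor: Optional[bool] = None    # device/OS/app-store parental flag
    id_verified_age: Optional[int] = None       # bank-account or government ID check
    face_estimated_age: Optional[float] = None  # facial age estimation from a selfie

def take_reasonable_steps(signals: AgeSignals) -> str:
    """Combine age-determination signals, strongest first.

    Returns "allow", "block", or "needs_verification" when no
    reliable signal is available.
    """
    # 1. A verified ID is the strongest signal.
    if signals.id_verified_age is not None:
        return "allow" if signals.id_verified_age >= MINIMUM_AGE else "block"
    # 2. A device- or OS-level minor flag set by a parent.
    if signals.os_reported_minor:
        return "block"
    # 3. Facial estimation, with a margin for its known error band.
    if signals.face_estimated_age is not None:
        if signals.face_estimated_age >= MINIMUM_AGE + 2:
            return "allow"
        if signals.face_estimated_age < MINIMUM_AGE:
            return "block"
        return "needs_verification"  # borderline estimate: escalate to ID check
    # 4. Self-declaration alone is easy to falsify; only trust it to block.
    if signals.self_declared_age is not None and signals.self_declared_age < MINIMUM_AGE:
        return "block"
    return "needs_verification"
```

The point of the ordering is that a platform can demonstrate it acted on the most reliable evidence available, escalating to stronger checks only when cheaper signals are inconclusive.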
These platforms are identified as those that encourage users to spend hours on them through personalised, endless content feeds and time-sensitive rewards, such as Snapchat. Also included are those using emerging AI-driven features, such as content modification tools. Excluded, at present, are messaging-centric platforms such as WhatsApp, Pinterest, Messenger, and Discord.
Social media companies have already begun adapting to the law. Snapchat told users that “if you are under 16 years old, your account will be locked… you will not be able to maintain or create a new Snapchat account.” The company added that “approval from a parent or guardian is not an option for age verification in Australia.”
Are state-level restrictions technically possible?
Karnataka’s proposal has also raised questions about legislative competence. Critics argue that internet regulation falls within the Union government’s domain. Others point out that states may still legislate to address the tangible effects of digital platforms.
A similar issue came up before the Madras High Court in a challenge to the Tamil Nadu Prohibition of Online Gaming and Regulation of Online Games Act, 2022. Online gaming companies had argued that internet-based activities fall under the Union List and therefore could not be regulated by states. The HC rejected this, saying “these games have posed serious mental and physical health risks to the citizens in the state. It is a case of public health, and the state has full competence to pass legislation to govern matters affecting public health.”
The court held that a law does not become invalid merely because it incidentally touches on subjects under the Union’s powers, adding that, “if an enactment substantially falls within the powers expressly conferred by the Constitution upon the legislature which enacted it, it cannot be held to be invalid merely because it incidentally encroaches on matters assigned to another legislature.”
Since the legislation sought to address harms such as addiction, financial distress and suicides linked to online gaming, the court held that its core objective was the protection of public health, placing it within the legislative competence of the state.
If a state were to impose such restrictions, tech platforms could limit access based on specific geographical locations.
Nehaa Chaudhari, partner at law and policy firm Ikigai Law, cites the legislative framework that evolved around regulating online gaming companies and platforms.
“Before the central law, different states were regulating online gaming differently, and companies had to comply with this patchwork of laws. The gaming companies were deploying a variety of technological tools, including geo-fencing/geo-blocking, based on location and IP address, to ensure their services weren’t available in banned states,” she said. “I imagine it will need to play out similarly for social media platforms as well, where companies may need to deploy a combination of such tools to see where you’re using the service from, geographically, along with a self-declaration, perhaps a checkbox when you log in—or other similar measures—to clarify that you are not in the state physically where usage has been banned/restricted. However, a lot of these nitty-gritties also depend on how such laws will be framed.”
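The combination Chaudhari describes, an IP-based location lookup plus a user self-declaration, can be sketched in a few lines. This is a minimal illustration only: the lookup table, the restricted-state list, and the function names are hypothetical, and a real GeoIP service would be far less reliable (VPNs, mobile carriers, shared devices).

```python
# Illustrative geo-blocking sketch: deny service if EITHER the IP-derived
# location or the user's own declaration places them in a restricted state.
RESTRICTED_STATES = {"Karnataka"}  # hypothetical example

def lookup_state_from_ip(ip: str) -> str:
    """Stand-in for a GeoIP lookup; a real service maps an address to a
    region, imperfectly. The table below is fabricated demo data."""
    demo_table = {"49.205.0.1": "Karnataka", "103.21.0.7": "Maharashtra"}
    return demo_table.get(ip, "unknown")

def may_serve(ip: str, declares_outside_restricted_state: bool) -> bool:
    """Serve the user only if neither signal places them in a restricted state."""
    if lookup_state_from_ip(ip) in RESTRICTED_STATES:
        return False
    if not declares_outside_restricted_state:
        return False
    return True
```

Note the conservative design: either signal alone is enough to block, mirroring how gaming companies combined geo-fencing with self-declaration checkboxes rather than trusting one mechanism.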
The enforcement challenge
Even with existing laws, enforcing age restrictions online is far from straightforward. Most platforms still rely on users entering their birthdate during registration — easy to bypass. It is well known that many teenagers enter false ages while signing up on online platforms.
More advanced verification systems are not fully reliable either. Facial age estimation tools are not always accurate, and behavioural and voice-based verification systems can be manipulated. Minors may also access platforms from shared family devices. “In such a context, it will not be possible to block/restrict access based on geo-fencing, or even IP addresses, for instance, since parents and kids will be using the same devices,” says Chaudhari.
There is also the risk of minors bypassing restrictions by using VPNs. Such loopholes, she said, only exacerbate the real risk of minors “accessing the internet in unsafe ways, or finding unsafe parts of the internet to access”.
Because of these limits, most regulatory frameworks require companies to take reasonable steps to keep out underage users rather than eliminate them entirely.
One possible approach, similar to the Australian framework, would be to place enforcement responsibility on the social media companies themselves, thus requiring them to prove their systems can prevent users below a certain age from creating or keeping accounts.
Policy experts say that if safeguards fail, governments might use financial penalties against platforms. This is often seen as more practical than criminal liability, especially for global tech companies that operate in many places without a strong physical presence in a specific state.
The focus thus shifts from monitoring individual users to requiring platforms to create systems that limit access for underage users.
Who bears responsibility?
The larger question behind these policy debates is who ultimately bears responsibility for protecting young users online.
That question is also being examined in courts in the United States. Earlier this year, TikTok settled with a 19-year-old woman whose lawsuit was expected to become the first jury test of whether social media companies could be held liable for mental health harm. The core trial, however, remains: Meta’s Instagram and Google’s YouTube now face proceedings that focus not on the content posted on their platforms, but on how those platforms were built.
The lawsuit alleges that companies deliberately designed features to keep young users engaged. “Borrowing heavily from the behavioral and neurobiological techniques used by slot machines and exploited by the cigarette industry, Defendants deliberately embedded in their products an array of design features aimed at maximizing youth engagement to drive advertising revenue,” the complaint says.
Among the features cited are endless scrolling feeds, recommendation algorithms that learn user behaviour, notifications that repeatedly draw users back, and social metrics such as likes and streaks.
The harm, the plaintiffs argue, was the “foreseeable outcome of how these platforms were built.”
How are government-imposed restrictions different from parental controls?
Parental controls are decided by families and guardians, in what they deem the best interests of their children. Government-imposed restrictions, on the other hand, have the state deciding what is in children’s best interests, framing access as a public health and public safety issue.
Chaudhari notes, “You find supporters and critics on both sides. Some will say that state control takes away parental autonomy, while others will say state-imposed restriction ensures baseline protection for kids, especially in cases where parents are ill-equipped/unwilling/unable to exercise control, which may be in the best interests of their children.”
Open questions
Several implementation questions remain unresolved. As Chaudhari points out, for social media regulation by a particular state, “there are initial questions around legislative competence between the state and the centre which will need to be resolved.”
Chaudhari cautions against blanket age bans. “For many kids from marginalised groups, social media is a safe space to build communities, find allies, and find peer support. Blanket age bans don’t work for groups like these.”
Additionally, ensuring compliance will require platforms to make engineering and technological tweaks, which can come at a high financial cost for the companies, Chaudhari notes, as was the case with online gaming companies.