Australia’s internet regulator has accused the world’s biggest social platforms of failing to adequately implement the country’s prohibition on under-16s accessing their services, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including allowing banned users to repeatedly attempt age verification and insufficient measures to stop new account creation. In its first compliance report since the ban took effect, the regulator found numerous deficiencies and has now moved from monitoring to active enforcement, warning that platforms must demonstrate they have implemented “appropriate systems and processes” to stop under-16s from using their services.
Regulatory Breaches Revealed in First Major Review
Australia’s eSafety Commissioner has documented a troubling pattern of non-compliance among the world’s biggest social media platforms in her inaugural review since the ban took effect on 10 December. The report finds that Meta (which operates Facebook and Instagram), Snap, TikTok and YouTube have collectively neglected to establish adequate safeguards to prevent minors from using their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification processes, highlighting that some platforms have permitted children who originally declared themselves under 16 to later assert they were older, thereby undermining the law’s intent.
The findings mark a notable intensification in the regulatory response, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has stressed that merely demonstrating some children still hold accounts is insufficient; platforms must instead provide concrete evidence that they have established robust systems and processes designed to stop under-16s from opening accounts in the first place. This shift signals the government’s commitment to holding tech giants responsible, with potential penalties looming for companies that fail to meet their statutory obligations. The poor practices identified include:
- Enabling previously banned users to re-verify their age and regain account access
- Allowing repeated attempts at the same age assurance method without consequences
- Inadequate safeguards to prevent under-16s from creating new accounts
- Limited reporting tools for parents and members of the public
- Absence of clear information about compliance actions and account removals
The Extent of the Challenge
The considerable scale of social media activity amongst Australian young people underscores the regulatory challenge facing both the authorities and the platforms in question. With numerous accounts already restricted or removed since the implementation of the ban, the figures paint a picture of extensive early non-compliance. The eSafety Commissioner’s findings suggest that the technical and procedural obstacles to enforcing age restrictions have proven far more complex than anticipated, with platforms struggling to distinguish genuine age declarations from fraudulent ones. This complexity has left enforcement authorities grappling with the core issue of whether current age verification technologies are adequate to the task.
Beyond the technical obstacles lies a wider issue about the readiness of companies to place compliance ahead of user growth. Social media companies have long resisted stringent age verification measures, citing data protection worries and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms may not be making sufficient effort to deploy the legally mandated infrastructure. The shift towards active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance infrastructure, or they risk facing substantial fines that could transform their operations in Australia and possibly affect compliance frameworks internationally.
What the Numbers Reveal
In the opening month after the ban’s implementation, Australian officials indicated that 4.7 million accounts had been restricted or deleted. Whilst this statistic initially appeared to prove regulatory success, subsequent analysis reveals a more complex picture. The sheer volume of account deletions implies that many under-16s had been able to set up accounts in the first place, indicating that preventive controls were inadequate. Additionally, the data raises questions about whether the deletions reflect genuine enforcement or merely users voluntarily removing their own profiles in light of the new restrictions.
The limited transparency surrounding these figures has frustrated independent observers seeking to assess the ban’s actual effectiveness. Platforms have revealed little data about their compliance procedures, effectiveness metrics, or the characteristics of suspended accounts. This lack of clarity makes it challenging for regulators and the general public to determine whether the ban is operating as planned or whether younger users are simply finding different means to access social media. The Commissioner’s push for comprehensive proof of consistent enforcement practices reflects increasing concern about platforms’ unwillingness to share complete details.
Industry Response and Pushback
The social media giants have responded to the regulator’s enforcement action with a mixture of compliance assurances and doubts regarding the ban’s practicality. Meta, which runs Facebook and Instagram, emphasised its commitment to complying with Australian law whilst simultaneously arguing that precise age verification continues to be a significant industry-wide challenge. The company has called for an alternative strategy, proposing that strong age verification systems and parental consent requirements implemented at the app store level would be more effective than platform-level enforcement. This position reflects broader industry concerns that the existing regulatory system places an unrealistic burden on individual platforms.
Snap, the developer of Snapchat, has adopted a more assertive public position, announcing that it had suspended 450,000 accounts since the ban took effect and claiming that it continues to lock more each day. However, sector analysts question whether such figures reflect authentic adherence or merely reactive account management. The fundamental tension between platforms’ commercial structures—which traditionally depended on maximising user engagement and expansion—and the regulatory requirement to systematically remove an entire age demographic remains unresolved. Companies have long resisted stringent age verification, citing privacy issues and technical constraints, creating an impasse between authorities and platforms over who bears responsibility for implementation.
- Meta contends age verification ought to take place at app store level rather than on individual platforms
- Snap claims to have locked 450,000 user accounts since the ban’s implementation in December
- Industry groups highlight privacy concerns and technical obstacles as barriers to effective age verification
- Platforms assert they are making their best effort whilst challenging the ban’s overall effectiveness
Wider Considerations About the Prohibition’s Effectiveness
As Australia’s under-16 social media ban enters its implementation stage, key concerns remain about whether the legislation will accomplish its stated objectives or merely push young users towards unregulated platforms. The regulator’s first compliance report reveals that despite months of implementation, substantial gaps remain—children continue finding ways to bypass age verification systems, and platforms have struggled to stop new underage accounts from being created. Critics contend that the ban’s success depends not merely on regulatory oversight but on whether young people will genuinely abandon mainstream platforms or simply migrate to other platforms, secure messaging apps, or VPNs designed to conceal their age and location.
The ban’s worldwide effects add another layer of complexity to assessments of its impact. Countries including the United Kingdom, Canada, and multiple European countries are observing Australia’s experiment closely, considering similar laws for their own citizens. If the ban proves ineffective at reducing children’s online activity or cannot protect them from damaging material, it could weaken the case for comparable regulations elsewhere. Conversely, if enforcement becomes sufficiently robust to truly restrict underage participation, it may encourage other governments to implement similar strategies. The outcome will likely influence global regulatory trends for years to come, meaning Australia’s implementation efforts will be scrutinised far beyond its borders.
Who Benefits and Who Loses Out
Mental health campaigners and child safety organisations have backed the ban as an essential measure against algorithmic manipulation and exposure to harmful content. Parents and educators maintain that removing young Australians from platforms designed to maximise engagement could lower anxiety levels, enhance sleep quality, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the risks to mental health associated with social media use amongst adolescents, adding weight to these concerns. However, the ban also removes legitimate uses of social media for young people—maintaining friendships, obtaining educational material, and participating in online communities around common interests. The regulatory framework assumes harm exceeds benefit, a calculation that some young people and their families question.
The ban’s concrete implications extend beyond individual users to affect content creators, small businesses, and community organisations reliant on social media platforms. Young people who might have taken up creative careers through platforms like TikTok or Instagram now encounter legal barriers to participation. Small Australian businesses that rely on social media marketing are cut off from younger demographic audiences. Community groups, charities, and educational organisations find it difficult to engage young people through channels they previously utilised effectively. Meanwhile, the ban unintentionally favours large technology companies with resources to build age verification infrastructure, potentially strengthening their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend far beyond the simple goal of child protection.
What Lies Ahead for Compliance Monitoring
Australia’s eSafety Commissioner has indicated a significant shift from passive monitoring to active enforcement, marking a key milestone in the execution of the youth access prohibition. The regulator will now gather evidence to determine whether platforms have neglected to implement “reasonable steps” to prevent underage access, a regulatory requirement that goes further than simply noting that children remain on these services. This strategy demands demonstrable proof that platforms have implemented proper safeguards and processes meant to keep out minors. The Commissioner’s office has signalled it will launch probes methodically, building cases that could lead to considerable sanctions for failure to comply. This move from oversight to action reveals mounting concern with the platforms’ current efforts and indicates that voluntary cooperation by itself is insufficient.
The rollout phase raises critical issues about the sufficiency of sanctions and the concrete procedures for holding tech giants accountable. Australia’s legislation provides compliance mechanisms, but their success relies on the eSafety Commissioner’s readiness to undertake regulatory enforcement and the platforms’ capacity to respond substantively. Global regulators, especially those in the UK and EU, will closely monitor Australia’s regulatory approach and its consequences. An effective regulatory push could establish a template for other countries considering equivalent prohibitions, whilst inadequate results might undermine the entire regulatory framework. The next phase will be critical in determining whether Australia’s innovative statutory framework produces real safeguards for adolescents or becomes largely performative in its effect.
