Age Restrictions on Internet Platforms: Platform Compliance and Accountability
Access and Accessibility, Privacy and Surveillance ▪ April 21, 2026
Introduction
In December 2025, Open Terms Archive tracked updates by Snapchat, TikTok, and X restricting access to these platforms for children under 16 in Australia. In January 2026, Telegram updated its minimum age requirement in Australia as well. These changes respond to the Social Media Minimum Age (SMMA) requirements introduced into Australia’s Online Safety Act by a 2024 amendment, which made Australia the first country to ban social media for children under the age of 16. Many other nations are now following suit by passing or considering similar laws. The United Kingdom (UK), for instance, has also proposed a ban on social media for children under 16, and its Online Safety Act (2023) already requires age verification for harmful content. Open Terms Archive spotted a change in X’s Community Guidelines regarding its automated content enforcement mechanism, made in compliance with the UK’s Online Safety Act.
Other countries rapidly moving towards age-restriction laws include the United States (US), Denmark, France, Norway, New Zealand, and Turkey. But how are internet platforms responding to these major regulatory changes, especially when there is little consistency across jurisdictions between the laws already implemented and those still proposed? Platforms have a lot to lose in this scenario in terms of liability, compliance costs, reputational damage, and penalties. On the other hand, they also stand to gain immensely through the deployment of data-intensive age-assurance mechanisms. Are platforms looking to tread the fine line of avoiding liability while capturing more data?
Reasons for the global policy shift at this time
The momentum behind age-restriction regulation has been primarily attributed to concerns over adolescent mental health, citing excessive screen time, exposure to harmful content, and online abuse. Developments in AI and the resulting incidents, such as the Grok scandal involving deepfakes of minors, have also significantly pushed this regulatory shift worldwide. In the US, a lawsuit was filed against YouTube and Meta over a minor becoming addicted to their feeds, thereby harming her mental health. Another lawsuit was filed by the state of New Mexico against Meta for misleading users about the safety of its platforms and thereby enabling child exploitation. Both cases concluded in favour of the user and the state respectively.
There has been substantial political momentum around this issue, as it can deliver a vital electoral advantage for politicians. Requiring government ID, or linking national digital identity systems for verification, can also allow authoritarian states to gather granular data on people’s social media use. The rise of right-wing governments globally is another major factor in the push for restrictive laws. In this framing, social media platforms are seen only as spaces that enable addiction, abuse, and violence, a perception founded more in moral policing than in ethical considerations around rights.
The regulatory frameworks around age restrictions vary extensively across countries in terms of age limits, the platforms covered, the role of parental consent, and the age verification and assurance mechanisms required. These variations reflect region-specific perceptions of minors’ autonomy, the role of parental mediation, patterns of internet platform use, and the significance attached to data protection. The vast differences between these laws could lead to a fragmented global policy landscape that is ineffective and creates significant privacy, security, and access problems. It can also result in inconsistent content access and service availability, likely impacting marginalised populations disproportionately.
Platform strategies and response to age restriction laws
Compliance actions by platforms
While the concern around internet platforms is primarily about their extreme pervasiveness and consequent harmful impact on children, the proposed age-verification solutions are themselves steeped in intrusive methods. These methods have been heavily criticised for their privacy risks by civil society, researchers, and activists. For example, connecting digital IDs for age-gating will generate centralised personal data repositories linked to third-party systems, exposing them to risks like data breaches, cyberattacks, and censorship. Internet platforms have so far verified user age through self-declaration, credit card verification, or phone number checks. With the tightening of global age-verification laws, platforms are considering or implementing methods that include AI-powered tools such as behavioural-signal detection and facial age estimation, biometric authentication, and third-party age-assurance services; a simplified sketch of how such a layered check might escalate appears below.
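The escalation logic common to these approaches can be sketched roughly as follows. This is a hypothetical illustration, not any platform’s documented pipeline: the minimum age constant, confidence thresholds, and two-year estimation buffer are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional

MINIMUM_AGE = 16  # assumption: the Australian SMMA threshold


@dataclass
class AssuranceResult:
    method: str           # which check produced the estimate
    estimated_age: float  # point estimate in years
    confidence: float     # 0.0-1.0, as reported by the check


def verify_age(declared_age: int,
               behavioural: Optional[AssuranceResult],
               facial: Optional[AssuranceResult],
               id_check_passed: Optional[bool]) -> bool:
    """Layered check: escalate to more intrusive methods only when
    cheaper signals contradict the self-declared age."""
    # 1. Self-declaration is the baseline; an underage declaration is final.
    if declared_age < MINIMUM_AGE:
        return False
    # 2. Behavioural signals (viewing history, account longevity, etc.)
    #    flag likely underage users for escalation.
    if behavioural and behavioural.confidence > 0.8 \
            and behavioural.estimated_age < MINIMUM_AGE:
        # 3. Facial age estimation via a third-party provider; the buffer
        #    above the legal limit absorbs estimation error.
        if facial and facial.estimated_age >= MINIMUM_AGE + 2:
            return True
        # 4. Hard verification: government ID or equivalent document.
        return bool(id_check_passed)
    return True
```

The structural point the sketch captures is that each escalation step collects more sensitive data than the last, which is precisely what privacy critics object to.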
Meta has made several changes to its technical architecture in response to the age-verification laws. In Australia, it removed access to around 550,000 underage accounts across Instagram, Facebook, and Threads after the SMMA requirements came into force. If a user below 16 attempts to change their date of birth on Instagram, the platform asks them to take a “video selfie” to prove they are over 16; these are assessed using Yoti, a third-party age-assurance service. Meta has also begun integrating AgeKey, a reusable age-assurance credential that users set up once, store on their devices, and can then present to multiple participating platforms for verification; a sketch of this pattern follows.
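AgeKey’s underlying protocol has not been published; the sketch below only illustrates the general pattern of a device-stored, reusable age credential. The shared-secret HMAC signing is a deliberate simplification (a production scheme would use public-key cryptography and device attestation), and all names here are invented.

```python
import hashlib
import hmac
import json
import time

# Assumption: a single assurance provider signs claims with a secret key.
PROVIDER_KEY = b"age-assurance-provider-secret"


def issue_age_key(over_16: bool) -> str:
    """Issued once by the assurance provider after a selfie or ID check,
    then stored on the user's device and reused across platforms."""
    claim = json.dumps({"over_16": over_16, "issued_at": int(time.time())},
                       sort_keys=True)
    sig = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig


def platform_accepts(token: str) -> bool:
    """A participating platform validates the signature only -- it never
    sees the selfie or ID document, just the boolean age claim."""
    claim, _, sig = token.rpartition(".")
    expected = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and json.loads(claim)["over_16"]
```

The privacy trade-off is that one provider now holds the verification evidence for users across every participating platform, which is exactly the centralisation risk flagged above.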
In the US and UK, Meta added more safeguards to its existing Teen Accounts feature to ensure age-appropriate content for teens, and introduced parental supervision to limit teens’ use of the platforms. It has started using AI for age detection based on users’ activity, alongside facial age estimation technology.
Google implemented age-assurance systems across Search, YouTube, and the Play Store to comply with the regulations in different jurisdictions. Users can verify through their Google Account that they meet the minimum age requirement for their region using a government ID or credit card, and underage users can set up parental supervision for their accounts. In Australia, Google blocked viewers and creators below 16 from accessing their YouTube accounts, clarifying that they will still be able to watch YouTube while signed out. Channels run by under-16 creators will no longer be viewable by anyone.
For YouTube, Google has applied an age-estimation model in Australia, Brazil, Singapore, Switzerland, the United Kingdom, the US, and countries in the European Economic Area to determine whether a user is underage, regardless of the birthdate on their account. Google began rolling out this AI age-estimation technology for YouTube in the US in July 2025, estimating a user’s age from search patterns, viewing history, and the longevity of the account (a toy version of this idea is sketched below). On the Play Store, Google applied age verification for US residents in October 2025 in response to state-level legislation, using options such as government ID, facial age estimation through a selfie, credit card verification, email-based checks, and third-party verification using VerifyMy. It has also made age ratings mandatory for apps on the Play Store.
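Google has not disclosed how its model weighs these signals; the heuristic below merely illustrates the idea of combining account longevity with viewing patterns. The categories, weights, and threshold are all invented for illustration.

```python
from datetime import date

# Invented categories standing in for whatever features the real
# (proprietary, ML-based) model derives from viewing history.
TEEN_SKEWED_CATEGORIES = {"gaming", "school help", "teen music"}


def likely_under_16(account_created: date,
                    watched_categories: list[str]) -> bool:
    """Flag accounts whose behaviour suggests an underage user,
    regardless of the declared birthdate."""
    longevity_years = (date.today() - account_created).days / 365.25
    teen_share = (sum(c in TEEN_SKEWED_CATEGORIES for c in watched_categories)
                  / max(len(watched_categories), 1))
    # Newer accounts with heavily teen-skewed viewing score higher;
    # flagged users would be routed to age assurance (ID, selfie, etc.).
    score = 0.6 * teen_share + 0.4 * max(0.0, 1 - longevity_years / 5)
    return score > 0.7
```

Whatever the real model looks like, the structural point stands: behavioural estimation requires continuously profiling what every user watches and searches, which is the data-collection concern raised throughout this piece.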
Snapchat blocked around 450,000 accounts in Australia, preserving them for three years or until the user turns 16. Users are now required to verify their age to continue accessing the platform, using bank account verification, scanning of government-issued IDs, or AI-powered facial age estimation, all handled by a third-party provider, k-ID. Users under 18 cannot change their birthday, and those over 18 have a limited number of attempts, preventing minors from simply editing their birth year to bypass restrictions (the rule reduces to the small policy check sketched below). While it has not applied all of these checks outside Australia yet, Snapchat has extended the use of methods like behavioural signals, facial scans, and ID linking in other jurisdictions.
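As a minimal sketch, assuming an attempt limit of two (Snapchat has not published the exact number), the birthday-edit policy looks like this:

```python
MAX_EDITS_OVER_18 = 2  # assumption: actual limit not publicly documented


def may_edit_birthday(current_declared_age: int, edits_used: int) -> bool:
    """Under-18s can never change their birthday; adults get only a
    limited number of edits, so a minor who lied upward cannot keep
    adjusting the date to dodge new checks."""
    if current_declared_age < 18:
        return False
    return edits_used < MAX_EDITS_OVER_18
```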
Platforms such as Discord and Roblox are currently not included in the social media ban in Australia, but are taking steps to incorporate age-verification methods. Despite these compliance actions by the platforms, enforcement agencies have flagged serious gaps in their implementation.
As part of its ongoing assessment of social media platforms’ compliance with the SMMA requirements, the eSafety Commissioner (the Australian government’s online safety regulator) has flagged significant concerns in its first report about inadequate safeguards to deter under-16s from gaining access to the platforms. Investigating Facebook, Instagram, Snapchat, TikTok, and YouTube for compliance, the report highlights Big Tech companies’ reluctance to enforce the policy more effectively. As the regulator gathers evidence to substantiate these compliance gaps, social media platforms stand to face fines of up to A$49.5 million.
In March 2026, the European Commission opened an investigation into Snapchat’s compliance with child-protection rules under the Digital Services Act (DSA), citing gaps in preventing under-13 access. The UK media regulator, Ofcom, and the data protection authority, the Information Commissioner’s Office (ICO), also asked Meta, Snapchat, TikTok, YouTube, Roblox, and X to strengthen their age checks for under-13s in the UK.
Strategic responses and narrative building
The initial public stance of most platforms in the face of Australia’s age-verification announcement was explicit opposition, along with a reluctant promise to comply. Meta and Google voiced their apprehensions about the law and stressed the difficulty of implementing it. At parliamentary hearings in October 2025, TikTok and Snap said they were against the ban but would comply with it. Kick committed to introducing various measures while continuing to engage constructively with authorities. Reddit echoed a similar sentiment just prior to the ban, saying it was deeply concerned that the law undermines people’s rights to free expression and privacy.
In its argument, Meta cited concerns such as the isolation of vulnerable teens from the support of online communities, the inconsistency of age-verification methods across the industry, teens moving to less regulated and unsafe alternative platforms, and a lack of interest from teens and parents in compliance. Google expressed its disagreement with the social media ban, calling it rushed and extremely difficult to enforce. It argued that the rules, which require age verification for platforms like YouTube, are too broad, risk infringing on user privacy through mandatory identification checks, and may make platforms less safe by removing existing, more nuanced parental controls. The platforms are using rights-based language to argue against the laws, yet these concerns from the two tech giants ring hollow given their lack of accountability towards minors so far.
Platforms like YouTube and Snapchat claimed to fall outside the scope of the social media definition under the Australian law. YouTube is increasingly being viewed on TV screens and is gradually positioning itself as a video sharing and viewing platform; yet it is also unwilling to be regulated the way TV content is. Snapchat argued that it has always been a messaging app and should be exempted from the law on that basis. It emphasised that it disagrees with the interpretation of Snapchat as a social media platform, and believes the law has been unevenly applied and risks undermining community confidence in it.
Another argument against the regulations is that they are ineffective because minors will still be able to access harmful content while logged out. Both Meta and Google stated that their own safeguards for minors are more effective than letting them access content in logged-out mode. Meta warned that teens will still be exposed to the “algorithmic experience” when not logged in, advocating instead for its Teen Accounts feature, which provides “built-in protection” for teens. However, this argument exposes the flaws in the platforms’ own functionality and features, such as the availability of harmful content, endless feeds, and algorithmic influence.
Several of these platforms attempted to project an image of trustworthiness through these arguments, while also avoiding the legal responsibility of removing minors from their apps. Google emphasised its priority for safety in its public stance, focusing on how safety features would be affected by the ban rather than advocating for the beneficial content that minors would lose access to. This stance invited a response from Australian Communications Minister Anika Wells, who suggested that if YouTube is mindful of these safety issues and has itself emphasised that harmful content will remain viewable in the logged-out state, then it should address those issues on its platform.
Platforms also gave contradictory reasons for their disagreement with the ban. With its early steps towards AI age estimation in the US, Google claimed to be at the forefront of delivering safety protections for teens while preserving their privacy through technology. At the same time, it lobbied to be exempted from the Australian law by highlighting the unreliability of that very technology. Big Tech companies are racing to exploit the AI hype and to deploy it for data collection wherever possible; they have now implemented several AI-based age and behavioural detection tools, overlooking concerns about their invasive reach. Their shifting position on AI tools in the age-verification context reveals a strategy of avoiding liability while still gaining data.
Meta, along with other major platforms, strongly pushed for legislation at the app-store level in order to ensure consistent and effective safeguards for minors, a move contested by Google. This once again demonstrated the platforms’ approach of evading legal liability.
Conclusion: Political and monetary advantages for platforms and state actors
While the enforcement of age-verification regulations means significant changes to technical architecture and the risk of heavy fines for non-compliance, platforms are also trying to steer this shift in their favour through data collection and reputational change. As many platforms have introduced or extended parental controls via family dashboards, more users will be driven to purchase family plans. Age verification will fundamentally allow platforms to collect age-segregated insights, which will become more granular as they apply AI-powered behavioural detection. Many platforms already use AI-derived data for personalised ads and content; age-based behavioural data will feed further-refined algorithms. The cost of compliance with the age-restriction laws is too high for many smaller market players, so as these laws are implemented more widely across the industry, bigger platforms will gain a competitive advantage. These platforms will also gain a reputational benefit built on parental trust and legal compliance.

Amid growing lawsuits over harm caused to children through chatbots and deepfakes, internet platforms can use age-gating as a way to dodge liability: they can shift the blame onto minors and parents for bypassing age restrictions, avoiding their own responsibility in the process.
Besides internet platforms, politicians, and in some cases the state, also have much to gain from age-verification laws. These laws and verification methods give both the state and internet platforms more control over children’s, and indeed everyone’s, data. The laws will also give the state more control over political narratives by reducing younger people’s access to diverse political ideas. In many countries where access to the internet is already compromised, further restriction will only hinder younger people’s access to information, free speech, and opportunity.