TL;DR:
- Meta wants app stores to verify users’ ages under new child protection laws; Apple and Google oppose.
- State laws in Utah and Louisiana back Meta’s stance, but privacy concerns fuel resistance.
- Apple and Google say app-level age checks are safer, while Meta likens its proposal to liquor ID checks.
- Tensions grow as Meta exits Chamber of Progress to push for app store responsibility on age-gating.
As more U.S. states pass laws demanding stricter online age verification to protect minors, tech giants are clashing over who should shoulder the responsibility.
Meta is calling for Apple and Google to verify users’ ages at the app store level. However, Apple and Google firmly disagree, arguing that individual apps should handle age checks to safeguard user privacy.
This debate comes amid a growing wave of child protection laws in states like Utah, Louisiana, and Texas. These laws require digital platforms to confirm users’ ages and secure parental consent before granting minors access to their services. In June 2025, the U.S. Supreme Court upheld such laws, setting a precedent that has encouraged other states, including South Carolina and Ohio, to consider similar measures.
The Fight Over Online Age Gates
Meta’s proposal, comparing app stores to liquor stores that check ID before purchase, has drawn sharp criticism. The Facebook and Instagram parent company argues that centralized age checks at the app store level would simplify compliance and ensure consistency. However, both Apple and Google caution that such a move would undermine user privacy and increase security risks.
Instead, Apple and Google advocate for app-specific verification systems. Their stance is echoed by privacy advocates, who argue that handing control to centralized platforms could create surveillance concerns and data security liabilities. Still, some state-level regulations appear to support Meta’s position, underscoring the legal fragmentation across the U.S.
UK Online Safety Act Raises Stakes for Global Platforms
Beyond the U.S., similar pressure is mounting overseas. In the UK, the Online Safety Act requires platforms to prevent minors from accessing adult content.
The UK’s communications regulator, Ofcom, has begun enforcing the law, warning that violators face fines up to £18 million or 10% of global revenue.
Global tech firms like Reddit, TikTok, and Elon Musk’s X have introduced AI-driven “age assurance” tools, including facial recognition, credit card checks, and machine-learning age inference systems. While Meta and YouTube have also adopted AI to detect deceptive age claims, these methods have yet to be endorsed by regulators. Ofcom has announced that it will assess these tools for effectiveness by September 2025.
Meta’s Exit from Chamber Signals Broader Strategy Shift
Amid the intensifying debate, Meta has exited the Chamber of Progress, a prominent tech industry lobbying group. The move is widely seen as a bid to push its own agenda and rally support for app store accountability.
Meanwhile, groups backed by Apple continue to oppose age verification laws that could extend beyond app-level enforcement.
A federal proposal by Senator Mike Lee aims to create uniform national rules mirroring Meta’s preferred approach. But with tech firms deeply divided, the path forward remains uncertain. For now, the legal patchwork across states and countries ensures that age verification will remain a contentious issue in digital policy circles.