TL;DR: The Online Safety Act age verification defenders are clutching at straws, ignoring fundamental human rights violations and the Act’s absurd failure to distinguish between Spotify and porn sites. Their arguments fall apart under scrutiny.
Since I wrote my previous post about the Online Safety Act in March, there have been a few more developments, key among them the rollout of age verification. Let’s break down the defenders’ justification for this legislative car crash.
The Privacy Smokescreen: ECHR Article 8 Under Attack
The Online Safety Act apologists love to claim the legislation respects privacy rights. They’re either lying or haven’t read their own law properly.
Article 8 of the European Convention on Human Rights states clearly:
“Everyone has the right to respect for his private and family life, his home and his correspondence.”
The OSA tramples all over this fundamental right in multiple ways:
Mass Surveillance Requirements: The Act’s provisions for content scanning effectively mandate mass surveillance of private communications. When WhatsApp or Signal are forced to scan your messages for “harmful content,” that’s not protecting children – that’s creating a surveillance state that would make the Stasi proud.
No Warrant Required: Unlike traditional law enforcement, Online Safety Act compliance doesn’t require judicial oversight or warrants. Companies must proactively scan and report, turning private platforms into extensions of state surveillance without the protections we’ve fought centuries to establish.
Chilling Effect on Expression: When people know their private messages might be scanned and reported, they self-censor. This isn’t theoretical – it’s a documented phenomenon (Penny: Online Surveillance & Wikipedia use, OpenDemocracy: Writers silenced by surveillance) that fundamentally undermines both privacy and free expression rights.
The European Court of Human Rights has repeatedly ruled that mass surveillance programmes violate Article 8. The OSA’s requirements constitute exactly this type of prohibited mass surveillance, dressed up in child protection rhetoric.
Data Anxiety: The Online Safety Act Age Verification
Age verification companies like Persona (which LinkedIn and Twitter rely on) are sitting ducks for malicious actors. Tech insiders predict a breach by a malicious actor (just like the Tea app data breach) within three months – many reckon it’ll happen before Christmas.
The government’s mixed messaging is the real problem here. They’re demanding Online Safety Act age verification whilst simultaneously claiming to protect privacy. You can’t have both – it’s like asking for a waterproof sieve.
Yes, that’s right. The Govt wants you to hand over your highly sought-after data to third parties who can’t guarantee its safety.
The Absurd Categorisation Problem of OSA: Spotify Meets PornHub
Here’s where the Online Safety Act’s stupidity really shines through. The legislation applies the same regulatory framework to music streaming services like Spotify as it does to explicit adult websites. Let that sink in.
Under the OSA’s broad definitions, Spotify faces the same compliance burdens as hardcore pornography sites because:
User-Generated Content: Spotify allows user-generated playlists and comments
“Harmful Content” Definitions: Song lyrics discussing sex, drugs, or violence could trigger compliance requirements
Age Verification: The same age verification systems required for porn sites could theoretically apply to music platforms
This isn’t hyperbole – it’s what happens when politicians write legislation without understanding how the internet actually works.
Real-World Consequences:
- Spotify needs the same Online Safety Act age verification as PornHub
- YouTube Music gets treated the same as adult video platforms
- BBC Sounds could need the same compliance framework as OnlyFans
The defenders claim this is necessary for “comprehensive protection.” What it actually demonstrates is the government’s complete failure to understand the digital landscape it’s attempting to regulate.
Here’s a prime example of that defence – the claim that Spotify needs age verification:
Mark (stop_the_prop) here is wrong on so many levels, and a basic Google / AI search would expose why.
- There are no books which legally carry an age restriction in the UK
- There is no music which legally carries an age restriction
- He fails to accept any level of parenting responsibility
- He fails to accept that children are playing PEGI 18 games, like Call of Duty & GTA
The Technical Impossibility Defence Falls Apart
OSA supporters often argue that “technical solutions will emerge” to solve the encryption problem. This reveals a fundamental misunderstanding of how encryption works.
Mathematics Doesn’t Care About Your Feelings: End-to-end encryption either works or it doesn’t. You cannot create a “child protection backdoor” that only the good guys can use. Every security expert knows this, but politicians prefer fantasy to physics.
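A toy sketch makes the point concrete (deliberately simplified XOR, not real cryptography, and all names here are illustrative): a decryption key is just bytes, and the mathematics cannot tell whether the person holding a copy is a regulator or a criminal.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy one-time-pad-style XOR cipher; NOT real cryptography,
    # used only to illustrate that decryption is purely a function
    # of holding the key.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"private message"
key = secrets.token_bytes(len(message))
ciphertext = xor_bytes(message, key)

# The intended recipient decrypts with the key...
assert xor_bytes(ciphertext, key) == message

# ...but so does ANYONE else who obtains a copy. An escrowed
# "child protection" key is mathematically identical to the
# user's key – the maths has no concept of "authorised holder".
escrowed_copy = key          # the "good guys'" backdoor copy
stolen_copy = escrowed_copy  # the moment it leaks, attackers hold it too
assert xor_bytes(ciphertext, stolen_copy) == message
```

The same holds for real ciphers like AES: any extra party who can decrypt is, by definition, another keyholder, and every additional keyholder is another point of compromise.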
The Scanning Paradox: Defenders claim client-side scanning solves the encryption problem. It doesn’t – it just moves the privacy violation to your device. Your phone becomes the surveillance tool instead of the platform’s servers.
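Here’s a hypothetical sketch of where client-side scanning sits in a messaging pipeline (every name – the scan function, the blocklist, the stand-in encryption – is illustrative, not any vendor’s real API). The key observation is the ordering: the plaintext is inspected and reported before encryption ever runs, so the “end-to-end” guarantee is bypassed on the endpoint itself.

```python
# Hypothetical client-side scanning pipeline – illustrative only.
BLOCKLIST = {"example-banned-phrase"}
reports: list[str] = []  # messages forwarded to the authority, in plaintext

def client_side_scan(plaintext: str) -> bool:
    # Runs on the user's OWN device, against the PLAINTEXT.
    return any(term in plaintext.lower() for term in BLOCKLIST)

def encrypt(plaintext: str) -> bytes:
    # Stand-in for real end-to-end encryption.
    return plaintext.encode()[::-1]

def send(plaintext: str) -> bytes:
    if client_side_scan(plaintext):
        # Reported BEFORE any encryption happens – the content is
        # exposed regardless of how strong the cipher is.
        reports.append(plaintext)
    return encrypt(plaintext)

send("hello")                          # passes unreported
send("an example-banned-phrase here")  # reported in plaintext
assert reports == ["an example-banned-phrase here"]
```

Note that nothing in this pipeline depends on the blocklist being benign: whoever controls the list controls what your device reports, which is precisely the objection.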
International Reality Check: When UK-based services implement these requirements, users simply switch to non-UK alternatives. The defence that “international cooperation will solve this” ignores the reality that countries like Russia and China would love nothing more than backdoors in Western encryption systems. See HM Govt’s climbdown over Apple’s Advanced Data Protection tool.
The “Think of the Children” Emotional Manipulation
Every authoritarian law hiding behind child protection follows the same playbook. The OSA defenders are no different.
Existing Laws Already Cover This: Child sexual abuse material is already illegal. Grooming is already illegal. The criminal law already provides comprehensive protection for children online.
Creating New Vulnerabilities: By weakening encryption and mandating surveillance, the OSA actually makes children less safe by creating systems that can be exploited by actual predators and foreign governments.
Displacement Effect: Even if the Online Safety Act worked perfectly (which it won’t), it would simply push harmful activity to platforms outside UK jurisdiction (as previously discussed in our initial post). The children aren’t protected – the problem just becomes invisible to UK authorities.
Online Safety Act & OFCOM Enforcement Fantasy
Defenders claim OFCOM has the tools to make this work. This is delusional optimism at its finest.
Resource Reality: OFCOM received no additional funding to regulate the entire internet. They’re expected to oversee thousands of platforms with the same budget they used for traditional broadcasting.
Technical Expertise Gap: OFCOM regulates radio frequencies and TV licensing. They now need to understand complex cryptographic systems, international content delivery networks, and rapidly evolving digital platforms. Good luck with that.
Jurisdictional Limitations: When non-UK platforms ignore OFCOM’s demands (which they will), what’s the enforcement mechanism? Strongly worded letters? The government can block websites, but as we’ve established, UK users know how to use VPNs – 76% according to Forbes.
The Innovation Killer
Perhaps most damaging is how the OSA will stifle UK digital innovation. Defenders dismiss this concern, but the evidence is clear:
Startup Death Sentence: Small UK tech companies cannot afford multi-million pound compliance systems. They’ll either move overseas or never start at all.
Investment Flight: Why would international investors fund UK digital startups when they face unique regulatory burdens not found elsewhere?
Brain Drain: The UK’s best digital talent will follow the opportunities, which are increasingly elsewhere.
International Mockery
While OSA defenders claim the UK is “leading the world” in online safety, the reality is international mockery:
US Tech Giants: Complying minimally while moving operations offshore
EU Officials: Quietly pleased that UK tech sector self-destruction helps European competitors
Digital Rights Groups: Using the OSA as a cautionary tale of regulatory overreach
The Way Forward: Admitting That the Online Safety Act Is Failing
The OSA defenders need to face reality. This legislation is:
- Technically impossible to implement effectively
- Legally questionable under human rights law
- Practically useless for protecting children
- Economically damaging to UK digital innovation
Instead of doubling down on failure, we need:
Honest Assessment: Acknowledge the Act’s fundamental flaws.
Rights-Respecting Alternatives: Focus on education, parental controls, and existing criminal law.
International Cooperation: Work with allies on evidence-based approaches that don’t require surveillance backdoors.
Technical Reality: Consult people who actually understand how the internet works.
Conclusion: Stop Defending the Indefensible
The Online Safety Act 2023 represents everything wrong with modern political approaches to technology – ignorant, authoritarian, and counterproductive.
Defenders who claim it protects privacy while mandating surveillance, or argue it’s proportionate while treating Spotify like a porn site, have abandoned logical thinking in favour of political tribalism.
The internet doesn’t recognise borders, and it certainly doesn’t care about poorly written British legislation. The OSA will fail because it’s based on fundamental misunderstandings of technology, law, and human behaviour.
Instead of defending this legislative disaster, we should be demanding better.
Our children deserve real protection, not security theatre that makes everyone less safe while destroying digital rights. The sooner we admit the OSA is a failure, the sooner we can start building something that actually works.
Tools
- Tor Browser – built on onion-routing technology originally developed by the US Naval Research Laboratory
- A VPN – we’d recommend Windscribe or ProtonVPN