UK governments of all persuasions have form in failing to understand the internet when attempting to regulate it by legislation. Past efforts to regulate our digital lives either proved unworkable or had ‘unintended consequences’, and were ultimately dropped or had to be amended by subsequent legislation. The root cause of these failings was a refusal to listen to the expert voices warning the government that its plans would not work. To this list of failures, we can now add the new online safety bill.
The bill takes giant steps towards arbitrary government censorship of our online activities. It includes worryingly vague and subjective definitions, places excessive power in ministerial hands, and threatens encryption, confidential communication and journalistic activity. Most of these failings stem from attempting to mandate technological solutions to problems that cannot be solved by technology because they involve subjective human judgement.
Solving the ‘legal but harmful content’ conundrum
The government claims that the online safety bill represents a “milestone in the fight for a new digital age, which is safer for users and holds tech giants to account”. A central provision of the bill requires service providers to identify and remove, or limit the reach of, content that isn’t actually unlawful but which could be considered ‘harmful’ to adults or children. This includes user-generated content, such as social media posts or comments on published articles.
The question of what is and isn’t ‘harmful’ was to have been delegated to the communications regulator Ofcom. However, the furore over the ultimately withdrawn attempt to install Daily Mail editor Paul Dacre as chair of Ofcom (and the similar concerns over the current nominee Michael Grade) demonstrated clearly that this position is a political appointment. Hence, Ofcom’s independence is nominal at best.
Should the bill become law, Ofcom would be required by its political masters to make arbitrary decisions as to what is and is not ‘legal but harmful’. In practice, the impossibility of pinning down what the vague wording ‘legal but harmful content’ means will lead to broad powers being granted to Ofcom to require any website to scour for and remove such content.
Since it is in practice impossible to define what ‘legal but harmful’ means, the draft bill delegates that power to government ministers, who can exercise it for any reason, including arbitrary political ones. The bill empowers ministers to direct Ofcom to require online companies, such as Facebook and Google, to enforce these content moderation rules immediately. The bill offers token gestures towards parliamentary oversight of this process but, as we have seen previously with heavily criticised legislation, the use of the government whip normally means such proposals sail through the Commons despite attempts at scrutiny.
Service blocking and restriction orders
The bill gives Ofcom the power not just to remove content, but to apply to the courts to restrict public access to an online service, restrict a company’s ability to do business in the UK, or even block access to it in the UK completely. Nor are we just talking about fines: criminal conviction, and even imprisonment, are also explicitly contemplated, and these powers could be exercised over anything from ‘illegal content’ to failure to tick a (metaphorical) compliance box.
The government claims the bill is about “reining in the tech giants”, but this is not entirely true. Whilst giants such as Google, Amazon and Facebook will face stricter compliance provisions than smaller entities, the bill’s powers apply to every online service with ‘user-to-user’ capabilities. A small family-run business whose online shop lets users exchange comments about the wares on offer would fit the definition.
The target is ‘dangerous’ or ‘harmful’ content such as terrorist or child abuse material, and the bill would empower Ofcom to dictate, via a ‘technology notice’, that a company must install an approved scanning technology of Ofcom’s choosing to detect these types of content. However, automated scanning often just isn’t viable. Is a technical discussion about preparing explosives ‘terrorist material’? Is a photo of children at bath time ‘child sexual abuse material’? What about teenagers researching sex education material as part of their school curriculum?
It all depends on the context, which is something only a human can determine – yet it is also impractical, for commercial or simple human-resource reasons, to employ enough suitably skilled people to make that determination. These burdens would make it impractical for new service providers to enter the UK market, cementing the control of the very large providers the bill claims to tackle.
Implications for encryption
Popular messaging services such as WhatsApp, Telegram and Signal employ end-to-end encryption technologies to ensure messages cannot be intercepted and read. A similar principle applies to a growing proportion of our web browsing activity – not just financial transactions such as online banking or shopping, but everyday browsing too, as a matter of accepted good practice.
Government attempts to portray encryption as used solely to hide distasteful or illegal activities, such as child abuse, have been heavily criticised but were intended to pressure Facebook into limiting the rollout of end-to-end encryption. Now the online safety bill goes further, explicitly requiring companies to scan our private and personal messages for evidence of criminal offences.
Comparisons with Orwell’s work are inevitable. So too is the hypocrisy of the proposal, given that Conservative MPs operate encrypted, secret WhatsApp groups. Nadine Dorries praises efforts to roll out secret, encrypted communications channels to Ukrainians, whilst at the same time the government funds campaigns against these channels and tries to ban them in the UK.
Face the press
The online safety bill contemplates exemptions for the press to say things that the rest of us could not under the bill’s core ‘legal but harmful’ proposals. This in turn leads to a thorny question for Ofcom: who are ‘the press’? In practice, it may mean that a publisher has to show they are a member of a regulatory body such as IMPRESS. The government appears to be quite happy to protect its allies in the established press (the Daily Mail, The Daily Telegraph and The Sun, for instance). Bloggers and citizen journalists such as North West Bylines, on the other hand, receive no such dispensation.
Age verification returns
Previous attempts to require some form of age verification before accessing ‘adult’ content had to be dropped. But since the entire bill appears to be driven by the principle of ‘never let a bad idea die’, this too is back.
In theory, age verification might be a noble goal, but in practice it is laden with problems. We already know the government wants to massively water down data protection laws. The ‘honeypot’ of data created by those wishing to exercise their right to view content such as pornography will prove highly attractive to extortionists, with many past examples of similar extortion attempts. The only remedy for victims of fraud (other than criminal prosecution) would be a retrospective claim for data breaches against the services in question. This failure to provide specific privacy protections was a major reason the online age verification proposals failed last time.
Fail to plan, plan to fail
The proposals detailed in the bill are not only offensive as a further attempt to impose authoritarian control on UK citizens; they also represent the latest in a long line of government measures that tackle the wrong causes of problems in the online world. In this case, the proliferation of ‘harmful’ views online is ultimately the result of Facebook-style business models that actively encourage division and hatred in order to boost page views and thereby advertising revenue.
The onerous requirements of the online safety bill would only entrench the status of the wealthy tech giants the government claims to be so concerned about. Once on Facebook, you can’t move to another service without losing all your friends and interests lists. Your data is ‘siloed’, and Facebook actively prevents would-be competitors from developing interoperable services. If we could easily move our personal data between competing social media platforms, then the business models – and by extension the ‘harms’ the bill ostensibly seeks to address – would be neutralised by that competition. But then, is the bill really intended for its stated purpose? Comparisons with other state-sponsored control and censorship strategies, such as China’s ‘Great Firewall’, seem not unreasonable.
As with other misguided legislation, such as the police and crime bill, the nationality and borders bill, and the elections bill, is the online safety bill just another tactic in a sustained attack on fundamental rights and democracy in the UK?