Last week the UK government published a draft of its proposed Online Safety Bill, having first introduced formal proposals for it in early 2020. With this post we aim to shed some light on the bill’s potential impact and explain why we think that - despite its good intentions - it may actually set a dangerous precedent for our rights to privacy, freedom of expression and self-determination.

The proposed bill aims to provide a legal framework for addressing illegal and harmful content online. This focus on “not illegal, but harmful” content is at the centre of our concerns - it puts the responsibility on organisations themselves to decide, arbitrarily and without any legal backing, what might be harmful. The bill does not actually define “harmful”, relying instead on service providers to assess and decide this for themselves. The requirement to identify what is “likely to be harmful” applies to all users, children and adults alike. Our question here is - would you trust a service provider to decide what might be harmful to you and your children, with zero input from you as a user?

Additionally, the bill incentivises the use of privacy-invasive age verification processes, which come with their own set of problems. This complete disregard for people’s right to privacy reflects the privileged perspective of those in charge of drafting this bill, and fails to acknowledge how genuinely harmful it would be for certain groups of the population to have their real-life identity associated with their online identity.

Our view of the world, and of the internet, is markedly different from the one presented by this bill. Now, this categorically does not mean we don’t care about online safety (it is quite literally our bread and butter) - we just fundamentally disagree with the approach taken.

Whilst we sympathise with the government’s desire to show action in this space and to do something about children’s safety (everyone’s safety, really), we cannot possibly agree with the methods.

Back in October 2020 we presented our proposed approach to online safety - ironically, also in response to a government proposal, albeit one about encryption backdoors. In it, we briefly discussed the dangers of absolute determinations of morality made from a single cultural perspective:

As uncomfortable as it may be, one man’s terrorist is another man’s freedom fighter, and different jurisdictions have different laws - and it’s not up to the Matrix.org Foundation to play God and adjudicate.

We now find ourselves reading a piece of legislation that essentially demands these determinations from tech companies. The beauty of the human experience lies in its diversity, and when we force technology companies to make calls about what is right or wrong - or what is “likely to have adverse psychological or physical impacts” on children - we end up in the dangerous position of centralising and regulating relative morals. Worst of all, when the consequence of getting it wrong is criminal liability for senior managers, what do we think will happen?

Regardless of how omnipresent it is in our daily lives, technology is still not a solution for human problems. Forcing organisations to act as judge and jury of human morals for the sake of “free speech” will, ironically, have severe consequences for free speech, as organisations recalibrate their risk profiles for fear of liability.

Forcing a “duty of care” responsibility on organisations which operate online will not only drown small and medium-sized companies in administrative tasks and costs, it will further entrench the existing monopolies of Big Tech. Plainly, Big Tech can afford the regulatory burden - small start-ups can’t. Future creators will have their wings clipped from the outset, and we may miss out on new ideas and projects for fear of legal repercussions. This is a threat to the technology sector, particularly to those building on emerging technologies like Matrix. In some ways, it is a threat to democracy and to some of the very freedoms this bill claims to protect.

These are, quite frankly, steps towards an authoritarian dystopia. If Trust & Safety managers start censoring something as natural as a nipple on the off chance it might cause “adverse psychological impacts” on children, whose freedom of expression are we actually protecting here?

More specifically, on the issue of content moderation: the impact assessment the government published alongside this bill predicts that the additional costs for companies directly attributable to the bill will run into the billions over the course of 10 years. The cost to the government? £400k, under every proposed policy option. Our question is - why are these responsibilities being placed on tech companies, when this is evidently a societal problem?

We are not saying it is up to the government to single-handedly end the existence of Child Sexual Abuse and Exploitation (CSAE) or extremist content online. What we are saying is that ending it takes more than content filtering, risk assessments and (faulty) age verification processes. The first thing that comes to mind is more funding for tech literacy organisations and schools, to give children (and parents) the tools to stay safe. Further investment in law enforcement cyber units and the judicial system, improving tech companies’ routes for abuse reporting, and letting actual judges do the judging all seem pretty sensible too. What is absolutely egregious is degrading the digital rights of the majority because of the wrongdoings of a few.

Our goal with this post is not to be dramatic or alarmist. However, we want to add our voices to those of the countless digital rights campaigners, individuals and organisations that have been raising the alarm since the early days of this bill. Just as with coercive control and abuse, the degradation of our rights does not happen all at once. It is a slippery slope that starts with something as (seemingly) innocuous as mandatory scanning for CSAE content and ends with authoritarian surveillance infrastructure. It is our duty to put a stop to this before it even begins.

