Article written for CapX - published 13 January 2022
This month marks the one-year anniversary of the Capitol Hill riots in which five people died. It’s a stark reminder that dangerous behaviour online can metastasise into real-world violence. Plenty of other examples can be found closer to home, like the racist abuse of England footballers after the final of the European Championships, social media ads promoting human trafficking across the Channel, and of course conspiracy theories about Covid-19 and 5G that led to assaults on telecoms workers and arson attacks on phone masts.
The internet has revolutionised our ability to communicate, do business, stay connected to family and friends, and share information and ideas. Most of this is positive, but it’s also become clear that, as in any other major industry, consumer protection regulation is needed to curb the worst excesses of behaviour found on social media platforms and search engines.
Last summer I was elected to chair the Joint Committee on the Draft Online Safety Bill, which was created to stress-test the Government’s plans to establish a new regulatory framework to make the UK ‘the safest place in the world to be online’. We took evidence from a wide range of witnesses, including businesses, trade bodies, civil rights groups, and current and former tech employees. I read with particular interest the 2020 Centre for Policy Studies paper ‘Safety without Censorship’, which called for the Government to rethink its approach to the bill and find the right balance between safety and civil liberties.
In the current draft of the bill, the ‘safety duties to protect adults’ set out in Clause 11 ask platforms to take action not just against the hosting and promotion of criminal content, but also against some content that is considered ‘legal but harmful’. There was wide agreement in the evidence the committee received that this definition lacked clarity, both for users and for the platforms themselves, and could lead to overreach, with social media companies removing legitimate content. We have therefore recommended that the Government remove Clause 11.
In its place, our report published last month proposes basing the regulatory framework on existing offences in law and on new offences proposed by the Law Commission. This would include, for example, racist abuse, religious hatred, and the promotion of known financial frauds. We also support the creation of new offences to address cyberflashing, the deliberate sending of flashing images to people with epilepsy, and the promotion of self-harm and suicide. In short, if it’s illegal offline, it should be regulated online. On this basis we think the Safety Codes established by the regulator should be mandatory, so that safety standards online are based on UK law rather than on terms of service written in Silicon Valley.
Under this system the regulated service providers would have a duty to mitigate content that breaches the codes of practice. This could mean removing that content, but also ensuring that the systems of social media platforms are not being used to actively promote it to other users. Freedom of speech is not the same as freedom of reach, and whilst people have a right to express their opinions, they have no more right to demand that social media platforms amplify them than to demand that a newspaper print them.
Making sure the Online Safety Bill is not a ‘Censor’s Charter’ was another of our priorities. The committee has recommended that protecting freedom of speech should be one of the core safety objectives of the regulator. If companies remove content or accounts without clear grounds for doing so, they should be held to account for that by the regulator. We recommended an automatic exemption from the codes of practice for news organisations, as they already have clear editorial guidelines and systems through which people and organisations can seek redress. We also recommended a general presumption that content from news organisations will not be removed by the regulated online service providers.
The Online Safety Bill can be world-leading legislation, clearly setting out the obligations on big tech companies to keep people safe online and creating an independent regulator with the power to take action against them for non-compliance. Some have questioned whether oppressive regimes might use similar powers to suppress the voices of opposition in their countries. However, in an unregulated internet, this is happening already. Around the world, journalists are victims of cyber espionage, opposition groups are targeted by hate campaigns that often lead to physical violence, and alternative voices are drowned out by orchestrated, state-sponsored campaigns of disinformation. Often the big tech platforms do nothing about it at all.
The Joint Committee took evidence from Maria Ressa, co-laureate of the 2021 Nobel Peace Prize. Maria is under constant threat from the Philippine government, which has cooked up spurious ‘cyber-libel’ charges against her for her journalistic work at Rappler.com. She made clear her support for our bill, and told us ‘we definitely need legislation’, adding that ‘doing nothing pushes the world closer to fascism’.