Introductory remarks delivered to the 4th meeting of the International Grand Committee on disinformation and data privacy, 2 December 2020
Next year, the UK Parliament will debate a Government Bill to regulate Online Harms and establish that social media companies have a duty of care to their users, which includes the responsibility to act against harmful content. Some of this is already illegal, like images of child abuse, but there will also be requirements for the take-down of content that is harmful but not necessarily illegal, including disinformation. An independent regulator will be given powers to oversee compliance, which may include obligations for companies to go further than their own terms of service. Just as the UK Information Commissioner can prosecute firms for breaches of data protection law, an online harms regulator could act against them for failing in their duty of care.
In other industries the principle that companies follow regulatory guidelines to protect citizens from harm is well established. In the UK, Ofcom regulates broadcasters on grounds of taste and decency, and the fair balance of opinions in their programmes. The Financial Conduct Authority and the Gambling Commission ensure that banking products and gaming machines don’t expose users to unfair risk. It’s time for the tech sector to catch up. It shouldn’t just be left to people like Mark Zuckerberg to determine when Instagram should remove images that could lead teenagers to self-harm, how quickly the company should respond to a terrorist attack being broadcast on Facebook Live, or whether it’s OK for Steve Bannon to call for the beheading of public servants on his Facebook Page. Facebook say they remove 95% of harmful content before anyone reports it, but that claim has never been subject to independent scrutiny, and the company frequently denies academic researchers the access they would need to provide it. There needs to be external auditing of how effectively they remove harmful content.
We should remember, too, that the real harm we are regulating here has not been caused by people crossing the line between freedom of speech and the harm that speech can cause others, but by a business model that makes money by amplifying content based on engagement, regardless of whether that content is harmful. Freedom of reach is not the same as freedom of speech.
We saw as well in Germany in 2016, from an internal Facebook report, that 60% of people who joined Facebook groups promoting extremist content did so at the recommendation of the company. That is clearly irresponsible and should be a breach of their duty of care.
In the next year we have a massive public health challenge, to vaccinate the world against COVID-19. Today in the UK, the Pfizer vaccine has received regulatory approval and the first doses will be administered next week. One of the greatest risks to the success of this programme is anti-vaccine disinformation warning people not to take it. This is one of the clearest examples of the real-world harms that fake news can cause.
Recent analysis by CounterAction found large quantities of anti-vaccine disinformation on Facebook. This included more than 30,000 posts in Germany which, for example, compared vaccinations with the Holocaust, claimed there would be a “vaccine genocide”, and asserted that the vaccine would cause cancer. Their analysis found two million Germans were members of groups sharing such content.
According to the campaign group Avaaz the top 10 websites spreading health disinformation on Facebook have almost four times as many estimated views as content shared from the websites of the world’s 10 leading health institutions. Here, Facebook’s own algorithms are pushing anti-vaxx content over authentic health information.
The impact of this is declining trust in the vaccine. According to research by the Hamburg Center for Health Economics, 70% of Germans said they would take the vaccine in April 2020, but last month that had fallen to 57%. The same research showed that 69% of people in the UK would now take it, but in France it was just 46%.
This is not just a public health challenge, but an example of why legislating to combat harmful disinformation is so necessary.