Anti-vaccination disinformation is harmful and must be addressed in the government’s Online Harms Bill

Written for The House Live / PoliticsHome. Published 18 November 2020

We need social media companies to work proactively to identify and remove harmful disinformation, including anti-vaccine conspiracy theories, rather than shirk this responsibility.

The prospect of mass vaccination against the coronavirus is now close to being a reality, with news that the Pfizer and Moderna vaccine trials have demonstrated efficacy rates of 90% and 95% respectively, and with trials progressing in Oxford and elsewhere. But the shadow of so-called ‘anti-vaxx’ disinformation looms over us all.

For a vaccine to be an effective solution to Covid-19, the uptake rate needs to be as high as possible. Yet research conducted by YouGov for the Center for Countering Digital Hate in June found that 31% of Britons might refuse a Covid-19 vaccine once one becomes available.

This is concerning but perhaps not surprising. According to the campaign group Avaaz, the top 10 websites spreading health disinformation on Facebook attract almost four times as many estimated views as organic content from the websites of the world’s 10 leading health institutions. The platforms’ own algorithms are pushing anti-vaxx content ahead of authentic health information.

This summer, the Center for Countering Digital Hate reported more than 900 pieces of anti-vaccine disinformation that appeared on Facebook, Instagram and Twitter; the companies failed to remove 95% of them. This is all the more serious when we consider that social media algorithms amplify and promote such content to users.

An internal Facebook investigation, leaked and published this year, showed that 60% of people in Germany who joined groups on the platform sharing extremist content did so at the recommendation of the company’s own systems. The business model of these platforms is built on holding people’s attention and monetising it, often regardless of whether the content served up is in users’ interests.

Last week the Secretaries of State for Health & Social Care and for Digital, Culture, Media & Sport announced that, following recent discussions with the tech giants, Facebook, Twitter and Google had committed to the principle that ‘no user or company should directly profit from Covid-19 vaccine mis/disinformation’, and had guaranteed ‘a timely response to disinformation content flagged to them by the government’.

The cross-departmental Counter Disinformation Unit will flag dangerous content found on the tech giants’ platforms to them, and the companies have yet again promised to take a firm stance on the issue. But crucially, they have once again shirked the responsibility of proactively finding and taking down harmful content themselves.

We need the companies to work proactively to identify and remove harmful disinformation, including anti-vaccine conspiracy theories. This category of disinformation should fall within the scope of harmful content identified in the government’s proposed Online Harms Bill, which will create a legal duty of care requiring the companies to act against harmful content.

There has always been a tension between people’s right to freedom of speech and the harm that speech can cause others. Whilst people are free to express their views, they do not have a right to have those opinions amplified to millions of people on social media via opaque algorithms, especially when they are harmful, baseless conspiracy theories.
