The coronavirus pandemic has been punctuated by outbursts of online hate that have had real-world consequences. The most alarming examples are the 6 January riot in Washington, which was spurred by rightwing groups organising online, and the racist abuse of England footballers during Euro 2020 that culminated in a mural of Marcus Rashford being defaced in Manchester.
Following the killing of Sir David Amess, there have been renewed concerns about whether lockdowns have created the conditions for a surge in hate, as frustrated extremists or people vulnerable to radicalisation hunkered down over their laptops and mobile phones.
The Report Harmful Content platform, run by the UK Safer Internet Centre, reported a 225% increase in online hate speech incidents in the UK last year.
According to Imran Ahmed, chief executive of the Center for Countering Digital Hate, a US- and UK-based campaign group, the pandemic has given an edge to the existing culture of online abuse and hate associated with social media and video-sharing platforms. “It’s just a little bit nastier than ever before,” says Ahmed.
He says coronavirus has strengthened a link between “disgust sensitivity” and xenophobia, whereby a sense of disgust – triggered by disease, for instance – makes people find their own group more attractive and those outside it less so. “The pandemic has driven all types of movements which are based on protection of identity groups, those groups people feel kinship to.” Pointing to the US, he adds: “If you look at the surge in rightwing, authoritarian identity movements, that has been really, really stark.”
According to Ofcom, the UK communications regulator, British adults were online for longer than their counterparts in other European countries last year, spending on average three hours and 37 minutes online each day, a rise of 8% on 2019.
Even without exact platform-by-platform data on instances of hate crime, the political and regulatory environment for social media companies after lockdown has hardened. In the US the competition regulator is trying to break up Facebook, in the EU the competition commissioner also says big tech firms should be broken up, while in the UK the draft online safety bill will impose a duty of care on social media companies to protect users from harmful content.
There have been renewed calls to tackle abuse from anonymous social media accounts since the killing of Amess on Friday. The UK’s home secretary, Priti Patel, hinted at a crackdown on online anonymity on Sunday, although attempting to ban it entirely would raise objections on freedom of speech grounds. Companies under the scope of the online safety bill, such as Facebook and Twitter, will be expected to tackle anonymous abuse as part of their duty of care.
Damian Collins, the Conservative MP chairing the joint committee scrutinising the online safety bill, said anonymity should be retained, but social media companies must have a means of identifying abusers.
“I don’t think anonymity should be taken away,” said Collins. “But if users exploit it to break hate speech laws or those against incitement of violence, I think social media companies should have enough information on who they really are so that they are able to clearly identify them to the police.”
Moderation of harmful and radicalising content will remain a point of focus. The Wall Street Journal published another exposé of Facebook’s safety systems on Sunday, citing internal documents indicating that the platform was struggling to deal with hateful content. According to one internal document, Facebook’s automated systems removed posts that generated only 2% of the views of hate speech on the platform.
In response, Facebook said the statistics did not reflect its full range of measures to tackle hate and that over the past year it had halved the number of views of content containing hate speech, to five in every 10,000 content views. Nonetheless, regulatory and political pressure on online hate, and on the companies that host it, is set to increase.