The Times: Independent regulator is only way forward for Mark Zuckerberg and Facebook. Like?

Article written for The Times - published 11 May 2021

What’s the appropriate Facebook penalty for inciting an insurrection at your nation’s parliament, leading to the deaths of five people including one police officer, the hospitalisation of 15 more, and the injury of a further 123?

Last week the Facebook Oversight Board decided that it had been appropriate for the company to suspend Donald Trump’s Facebook accounts on January 6, after attendees of his Save America rally stormed the United States Capitol in Washington DC.

The board said: “Mr Trump created an environment where a serious risk of violence was possible. At the time of Mr Trump’s posts, there was a clear, immediate risk of harm and his words of support for those involved legitimised their violent actions.”

The question remains, though, as to how long this suspension should last; and the board has given Facebook a further six months to come up with an answer. However, as Facebook communications chief Nick Clegg said after the decision, the board’s “recommendations are not binding”: the company could just ignore that request.

What would it take for Trump’s indefinite suspension to be turned into a lifetime ban? Would that require more deaths, or an actual full-blown and successful coup d’état? We eagerly await Facebook’s response.

The ridiculous non-event that was the announcement of the oversight board’s ruling has demonstrated why decisions like these cannot be left to informal structures created and funded by big tech companies.

First, it could be almost a year before Facebook reaches a final resolution on how long Trump’s accounts should be suspended for. Second, the board was restricted to considering just the act of suspending Trump’s account, rather than Facebook’s failure to act long before the violence started.

The insurrection on January 6 was the culmination of a campaign of lies asserting that the election had been stolen. This started during the election, and was greatly expanded after it became clear that Trump had lost. In the period from election night on November 3 until the insurrection, Trump made 400 comments falsely claiming the election result was fraudulent, including calling for wild protests.

When Trump’s former chief strategist Steve Bannon used Facebook to call for the decapitated heads of public officials who weren’t “with the programme” to be placed on spikes, and cited how in the American Revolutionary War people were executed for collaborating with the British, Mark Zuckerberg responded that it “did not cross the line” that would require his account to be suspended.

So does Facebook think it is OK for someone to organise an insurrection on its platform, but that it only becomes a problem when people turn up and the violence starts?

Facebook knows it has a problem with content that can cause harm, and its own research has shown that not only does it allow such content to spread on its systems, but that its recommendation tools actively promote it to users. The company has to be held responsible for the profit it makes from hate, and the harm this is causing in society.

As we await the publication of the UK government’s proposed Bill on Online Harms, the predictable failure of the oversight board to be an effective tool for self-regulation highlights why legislation is required.

It should be for an independent regulator, with powers laid out in law, to set the guidelines for action against harmful content, and to assess whether or not tech companies have acted in accordance with their responsibilities. Decisions on when to act against harmful content should not be left to social media platforms on their own.

Equally, the action of de-platforming a leading political figure should not just be based on the personal preferences of men such as Zuckerberg. We need a regulator with the power to investigate, including the ability to gain access to data and information from within the companies, and with the power to impose meaningful penalties where appropriate.
