A 1964 Ku Klux Klan meeting led by Clarence Brandenburg in Hamilton County, Ohio, recorded by a Cincinnati television station, has become central to two big debates this year. First, should President Donald Trump have been impeached for his part in the insurrection at the US Capitol on 6 January, which sought to prevent Congress from confirming the election of Joe Biden as America’s next President? Second, were Facebook and Twitter right to de-platform him by closing down his social media accounts?
Following that meeting of the Klan, Brandenburg was convicted of advocating unlawful violence after a racist and anti-Semitic speech calling for “revengeance” and for people to march on the Washington DC political establishment that “continues to suppress” white people. He appealed his conviction in a case that was eventually heard by the Supreme Court in 1969, which found in his favour and ruled that the First Amendment of the Constitution protects a citizen’s right to free speech – unless that speech is “directed to inciting or producing imminent lawless action and is likely to incite or produce such action”. Brandenburg’s defence rested on the lack of immediacy and the uncertainty over whether people would have acted on his words.
Unsurprisingly, Facebook’s definition of harmful speech on its platform is based on this ruling. In the last year it has removed content calling for people to set fire to 5G mobile phone masts, based on the false claim that their signals spread COVID-19, and has also removed posts dangerously advocating the use of poisonous substances as a panacea against the virus. However, it took no action against the campaign by Donald Trump and his supporters asserting that the Presidential election had been stolen from him, a movement that led to the deaths of five people in the violent scenes when his supporters stormed the Capitol on 6 January.
When Trump’s former chief strategist Steve Bannon used Facebook to call for the decapitated heads of public officials who weren’t “with the program” to be placed on spikes, citing how people were executed for collaborating with the British in the American Revolutionary War, Mark Zuckerberg responded that it “did not cross the line” that would require his account to be suspended.
In the period from election night on 3 November 2020 until the insurrection, Donald Trump made over four hundred comments falsely claiming the election result was fraudulent. He called for “wild” protests and told supporters at his ‘Save America’ rally on 6 January, “We fight like hell. And if you don’t fight like hell, you’re not going to have a country anymore.” At the same event his personal lawyer, the former New York City Mayor Rudy Giuliani, called for “trial by combat”, and his son Donald Trump Jr. warned members of Congress, “We’re coming for you.” In February, at Donald Trump’s impeachment trial in the US Senate, the Democratic Congresswoman Diana DeGette recalled the words of one of the protestors: “the attack was done for Donald Trump, at his instructions and to fulfil his wishes.”
According to Professor Richard Wilson of the University of Connecticut School of Law, author of ‘Incitement on Trial’, as a result of Trump’s actions that day, “He should be criminally indicted for inciting insurrection against our democracy.” However, Mark Zuckerberg at Facebook and Jack Dorsey at Twitter didn’t need to wait for a decision from the courts. Protected in law from liability for any action they take to moderate speech on their platforms, they kicked Trump off. I believe they should have acted well before 6 January to prevent their systems being used to boost the ‘Stop the Steal’ campaign. In the social media age, we need to be concerned not just about the immediate advocacy of an unlawful act, but also about the growing amplification of a constant drumbeat of false and harmful messaging that can radicalise its recipients. There is also a legitimate question about whether it should be left to the chief executive of a social media company alone to decide when to silence the President of the United States.
However, concerns about whether social media is being used to incite harmful behaviour go well beyond politics. These platforms have been used to coordinate genocide in Myanmar and lynchings in India. In 2020, we saw declining levels of trust, particularly in Europe, in the likely efficacy of a COVID-19 vaccine, following waves of anti-vax conspiracy theories online. In recent years there have been significant increases in self-harm amongst pre-teen girls, reported hate crimes, problem gambling and deaths from drugs. Inducements for all of these are widespread across the internet.
I’ve advocated since the publication of the Digital, Culture, Media and Sport select committee report on disinformation and fake news in 2018 that we need a regulatory structure with the legal authority to hold social media companies to account, both when they fail to remove known harmful content and when their systems are found to be actively promoting it. This year in the UK, the government will present its Online Harms Bill to parliament to create such a structure. In the European Union, the Digital Services Act published by the European Commission also proposes a system of oversight and regulation for the big tech companies. In addition, the various competition investigations around the world into whether companies like Google and Facebook have abused their market power, and the recent legislation passed in Australia requiring them to make a bigger financial contribution to support the news media industry, suggest a growing appetite to rein in the power of the tech titans.
With Joe Biden safely installed in the White House, there is growing interest in the approach his administration will take to regulating social media. A joint essay published last year by the new Deputy Chief of Staff, Bruce Reed, and Jim Steyer, the Chief Executive of Common Sense Media, suggests a much more interventionist approach from the new government. In it, Bruce Reed called for the tech platforms to be held accountable for any content they make money from, writing: “If they sell ads that run alongside harmful content, they should be considered complicit in the harm. Likewise, if their algorithms promote harmful content, they should be held accountable for helping redress the harm.” Similarly, Tim Wu, the Columbia University professor and author of ‘The Curse of Bigness: Antitrust in the New Gilded Age’, has been hotly tipped to join the White House National Economic Council. Wu has previously called Facebook a “recidivist company” and said “we need a shake-up in big tech” to promote competition and the consumer interest. In the words of Sam Cooke, one of Joe Biden’s favourite singer-songwriters, it feels like a change is gonna come.