Nick Clegg has announced that Facebook has made some ‘big changes’ to its policies in order to make political communication more transparent, and to prevent foreign interference in elections, as was seen in America in 2016 when a Russian agency targeted voters with fake adverts. However, there is a lot more it should be doing, and its policies remain much weaker than those adopted by Google and Twitter.
In his article in The Daily Telegraph, Sir Nick sets the context for these changes by saying that, ‘America was deeply divided long before the events of the last few months.’ That’s true, but there are people at Facebook who believe that the company is contributing to these divisions. The Wall Street Journal recently revealed that an internal Facebook report from 2016 found that, ‘64% of all extremist group joins are due to our recommendation tools,’ driven by the company’s algorithms. ‘Our recommendation systems grow the problem.’ Two years later, another internal report repeated the warning, stating: ‘Our algorithms exploit the human brain’s attraction to divisiveness.’ These reports underline why what happens on Facebook plays an increasingly important part in shaping the way many people see the world around them.
Nick Clegg has also stated that Facebook is setting new policies on political messaging and election campaigning, ‘in the absence of government regulation.’ I agree with him that there are aspects of our election law that need to be updated for the digital age, but we should remember that it is already against the law in the UK and America for a foreign person or organisation to pay for advertising for political purposes. What the Russians did in 2016 was illegal: the problem was that Facebook was under no legal obligation to prevent it happening. That’s why its systems didn’t pick up someone from St Petersburg paying in roubles to run adverts targeting voters in Texas. In fact, when I asked Facebook in 2018 whether they thought any similar interference had happened previously in the UK, the company made it quite clear that it was not obliged to look for evidence of this, but would respond to intelligence which proved it was there. The problem here goes to the heart of the debate on what the responsibilities of social media companies should be for the activity that takes place on their platforms. In America, these companies have consistently lobbied to maintain their ‘safe harbour’ status, whereby they do not have legal liability for the content that they curate and promote to their users. Since Donald Trump became President, these provisions, which are contained in Section 230 of the 1996 Communications Decency Act, have also been incorporated into the new US trade agreements with Canada, Mexico and Japan; the USA would like the same arrangement with the UK, all with the support of the big tech companies. This is something that we cannot allow to happen. So rather than Facebook having to act ‘in the absence of government regulation’, this kind of intervention is something that it would do everything in its power to resist.
When we look at the specifics of the new policies that Facebook has announced, there are some positive changes, but they fall short of addressing the fundamental problems. First, it is right to ‘be blocking all ads in the US during the election period from state-controlled media organisations from other countries.’ If by this it means, for example, Russian news agencies like RT and Sputnik, then that is welcome; however, much of their content is shared not through advertising but by people posting and sharing news stories or promoting them through Facebook groups. Previous investigations have found examples of Facebook groups with hundreds of thousands of members, each sharing content in support of extreme political voices. Also, the fake Russian ads that ran during the 2016 election in America were not placed by ‘state-controlled media organisations’, but by shadowy fake campaigners.
Facebook has previously announced that to run a political advert you have to have proven your identity and location in the process of setting up a Facebook Page. Also, it is their policy that ‘All political ads must have a “paid for by” disclaimer attached to them – a label which will remain on the ad even if it is shared – and information on which voters are being targeted by the ads, and how many saw them, is logged in an ads library for everyone to see.’ This is all well and good - but Elizabeth Denham, UK Information Commissioner, has previously stated, in evidence to the Digital, Culture, Media and Sport (DCMS) select committee, her concerns that it is relatively easy for the true funder of a political campaign to hide their real identity behind a front person who has set up the account. We have also seen technical problems with the Facebook political ad library which crashed for a time during the 2019 UK general election.
The one genuinely new measure that Facebook has announced is that it will be giving users the choice to turn off political adverts, ‘so that they don’t see them in their feed – not just in the US but globally from the autumn.’ If by this it means that by changing your personal settings you can stop receiving any political ads, rather than just blocking an individual advertiser, which you could already do, then this is progress. I had asked the company’s Chief Technology Officer, Mike Schroepfer, when he gave evidence to the DCMS select committee in April 2018, why at that time Facebook wouldn’t allow users to do this, and his response was that, ‘if there are advertisers who want to reach all people, it is hard to say, “No, you can’t reach that particular person.”’
However, the big reform that Facebook should make, and which YouTube has already made, has been avoided: to stop the Cambridge Analytica-style micro-targeting of users with political adverts based on detailed analysis of their data. That means even when you choose not to reveal your political affiliations on Facebook, the company can guess what they are anyway and sell that information to advertisers. Twitter, of course, has gone further still and banned all political adverts from its platform.
Last week in his open letter to Facebook, former Vice President Joe Biden called for the company to ‘promptly remove false, viral information’ and ‘to prevent political candidates and PACs (Political Action Committees) from using paid advertising to spread lies and misinformation.’ Facebook is again not prepared to act on disinformation unless ‘it will cause imminent harm or suppress voting’ which, as Nick Clegg has himself admitted, is hard to prove. Social media companies should not be required to arbitrate on political debate, but they should have a responsibility to act when people are abusing their systems and advertising tools to deliberately target people at scale with information that is demonstrably not true. Last year, a deep fake film was released of the Speaker of the US House of Representatives, Nancy Pelosi, at a press conference, which had been deliberately distorted to make her sound drunk or impaired. YouTube agreed to remove this film, but Facebook did not. If a similar film were released in the final weeks of the American election campaign to damage one of the candidates, does Facebook really believe that this is a matter of free speech - and should it be left to Facebook to make that decision?
Fundamentally, not enough has changed, and other companies are now starting to take steps against harmful disinformation that Facebook will not. If you were a bad actor wishing to sow division and confusion with campaigns of fear and lies, Facebook would be your social media platform of choice.