The senior Facebook executive Andrew Bosworth wrote in a now-infamous internal company memo: “the ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good”.
The trouble the tech giants face now is that, increasingly, governments and parliaments around the world no longer believe this to be the case.
These companies make money from users’ engagement with their services, and have developed business models based on holding that attention for as long as possible, whatever it takes.
Tuesday’s quarterly results call with Facebook’s investors proved that their model continues to be profitable. It is this approach that has allowed social media to become a quagmire of hate speech, anti-vaccine conspiracy theories and harmful content.
They have allowed their systems to be used by foreign states to interfere in the elections of other countries, and even for the organisation of an insurrection in Washington DC on 6 January that led to the storming of the US Capitol building.
Consider a recent high-profile example in this country: the abuse on social media directed at black members of the England men’s football team after their defeat against Italy in the final of the European Championships. The problem wasn’t just that they were the targets of racist language and monkey emojis, but that the recommendation tools of the platforms were actively promoting this content to other users.
This example represents one of a series of questions to be answered by a new law to be put through Parliament, the Online Safety Bill. This draft law will finally put a legal framework around hate speech and harmful content and empower an independent regulator to hold the tech giants to account for their role in hosting and promoting it. This is a once-in-a-generation piece of legislation that will update our laws for the digital age.
A new joint parliamentary committee, comprising members of both the Lords and Commons, and which I chair, is now charged with examining this Bill line by line to make sure it is fit for purpose. However, we need the help of the public and expert bodies with this work.
Our social media feeds are personalised to content the algorithm thinks we want to see and engage with. One of the ways the public can help this committee is to tell us what you see on your feeds that you think might be harmful. This could be something that is recommended by, for example, the “for you” feature on TikTok, YouTube’s “next up” or content ranked near the top of your Facebook “news feed”.
Unless content is reported, these incidents – whether they be terrible images of self-harm or violence – can be unknowable to regulators.
Part of the problem with creating new laws in this area is that the tech companies don’t always allow independent researchers to access this content or to understand how it spreads. So if you see harmful content, take a screenshot and send it to us at email@example.com. We won’t share your identity, but it will help our research.
In the social media age, we have not yet got the balance right between stopping harm and protecting freedom of speech. Now we have the opportunity to fix it.