Keynote Speech - European Data Summit 2022

0935hrs (CET), Friday 2 December 2022, at the Representation of the European Commission in Germany, Unter den Linden 78, Berlin.

Thank you to Dr Pencho Kuzev and the Konrad Adenauer Foundation for inviting me to speak at this European Data Summit. I have been pleased to attend numerous meetings organised by the Foundation in my twelve years as a member of the UK parliament. Their work to support dialogue around the world with policy makers and parliamentarians in Germany is greatly valued by everyone who has engaged in such events. I would say as well, particularly as we are meeting here in Berlin, that I believe sharing insights, ideas and experience as we develop public policy to meet the challenges of the digital age will be hugely beneficial to all. I will certainly do all I can to promote such discussions, not just between the UK and Germany, but between the UK and the institutions of the European Union, as a good neighbour on the continent that we all share. 

This is an important moment to be discussing the regulation of user-to-user services, like Facebook, Twitter, TikTok and YouTube. Online platforms are increasingly the public square of our democracies, the place where people go for news and to exchange ideas. They allow people to speak out and reach new audiences with greater ease than ever before. However, they have also become places where truth is drowned out by disinformation, where public health is undermined by conspiracy theories, where people face daily intimidation, where children are encouraged to self-harm, and where the foundations of our public institutions, including government, parliament and the public media, are being undermined. Some of this has been fuelled as well by the active campaigning of hostile foreign states like Russia, which use disinformation as a weapon. The great danger that societies face today is not so much one big lie, but that people no longer know what to believe or who to trust. We cannot leave child safety and the health and strength of the public square to a black box. The laws we have established to protect people in the offline world need to apply online as well. 

We are now seeing different approaches around the world to setting standards to keep people safe on social media, from the work of the eSafety Commissioner in Australia, to the passing of the Digital Services Act in the European Union, and the UK's Online Safety Bill, which will be approved by the House of Commons in the next few weeks. In the United States of America as well, there is a growing consensus that the liability shield created by Section 230 of the 1996 Communications Decency Act needs to be reformed, even if there is not yet agreement on how.  

What lies at the heart of all these approaches is the idea that platforms should be held to account for enforcing the community standards they promise to their users, and for preventing their services from being used to break the law. 

For most users, the experience on social media platforms is not one of searching for content, but of content finding them. Recommendation tools driven by Artificial Intelligence profile users' data in order to provide them with the content they are most likely to engage with. This process is designed to hold people's attention for as long as possible, and to prompt them to return as often as possible. These systems were not always central to the user experience on social media, but have been developed for business reasons by the platforms. It is, in part, because of this active curation of content that social media companies cannot claim to be the neutral hosts of words, images and films created and shared by others on their platform. They are shaping the experience to make money from it, and they should have liability for that. 
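To make that mechanism concrete, the short Python sketch below shows, in a deliberately simplified and hypothetical form, how an engagement-driven recommender behaves: the more time a user spends on content about a topic, the more strongly content on that topic is ranked in their feed. The class names, weights and example topics are invented for illustration and do not describe any particular platform's system.

# Hypothetical sketch of engagement-driven ranking; real platform systems are far more complex.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topics: set[str]

@dataclass
class UserProfile:
    # Inferred interest weights per topic, accumulated from past behaviour.
    interests: dict[str, float] = field(default_factory=dict)

    def record_engagement(self, post: Post, dwell_seconds: float) -> None:
        # The longer a user dwells on a topic, the more heavily it is weighted.
        for topic in post.topics:
            self.interests[topic] = self.interests.get(topic, 0.0) + dwell_seconds

def rank_feed(user: UserProfile, candidates: list[Post], k: int = 3) -> list[Post]:
    # Score each candidate by predicted engagement and surface the top k.
    def predicted_engagement(post: Post) -> float:
        return sum(user.interests.get(topic, 0.0) for topic in post.topics)
    return sorted(candidates, key=predicted_engagement, reverse=True)[:k]

if __name__ == "__main__":
    user = UserProfile()
    user.record_engagement(Post("p1", {"dieting"}), dwell_seconds=120)
    user.record_engagement(Post("p2", {"football"}), dwell_seconds=5)
    ranked = rank_feed(user, [Post("p3", {"dieting"}), Post("p4", {"football"}), Post("p5", {"cooking"})])
    print([p.post_id for p in ranked])  # the topic with the most past engagement ranks first

Run as a script, the example shows that a user whose recorded engagement is dominated by one topic will see that topic ranked first in their feed, which is exactly the feedback loop described above.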

We should be concerned as well about the impact of AI-driven recommendation tools on user experiences on social media, and how they shape people's view of themselves and the world. In the UK, a coroner's court recently determined that a fourteen-year-old girl, Molly Russell, ‘died from an act of self-harm whilst suffering from depression and the negative effects of on-line content.’ The truth is that if you are a vulnerable person, those vulnerabilities will be detected by social media platforms and will influence what you see. Someone who is already self-harming is more likely to see content that will encourage them to continue and do worse.   

Recent data published by NHS Digital in the UK has shown that more than a quarter of 17 to 19-year-olds had probable mental health issues in 2022, compared to one in six in 2021. In 2017, the rate was just over 10% in that age group. One in eight of all 11 to 16-year-old users of social media reported they had been bullied online. This rose to nearly one in three among those with a probable mental health issue. Among all 17 to 24-year-old social media users, one in five young women had experienced online bullying, compared to half that number amongst young men. There was a time when a child could escape a bully and find a safe place at home, but there is no sanctuary when the bully is in the palm of your hand, alongside the other apps you use daily to work, socialise and stay connected. I know this is an issue that concerns all parents, and, from the campaigning work of my 15-year-old daughter Claudia, I know it is an issue that frightens young teenagers as well. 

The echo chambers of social media have also created spaces where extremism, radicalisation and conspiracy can thrive. A research study conducted by Facebook in Germany in 2016 showed that 60% of people who joined groups sharing extremist content did so at the active recommendation of the platform. During the Covid-19 pandemic, the Center for Countering Digital Hate published a report called ‘The Disinformation Dozen’, which showed that just 12 people were responsible for 65% of the anti-vax disinformation circulating on social media, driven by the platforms’ recommendation tools. 

So before we consider how we should establish higher safety standards for social media, we must recognise that we are regulating the systems that drive these platforms, not just the content placed on them. For this reason, whilst the design of these systems can in some circumstances cause harm, the response cannot just be greater user awareness and media literacy training. The platforms themselves must be held responsible. To believe this doesn’t make you anti-tech, any more than requiring seat belts to be installed in cars makes you anti-car.  

How then will the Online Safety Bill address these issues for companies that are serving users based in the UK? Our approach starts with recognising the liability of the platforms and the responsibility they have both to uphold the law and to keep the promises they make to their users. It also requires platforms to be proactive in identifying and mitigating illegal activity, as well as behaviour that is in breach of their community standards. They cannot just wait to act on user referral, or treat users’ complaints on a case-by-case basis, without considering other examples of the same information, displayed in the same context and with the same meaning, elsewhere on their platform. Nor can they use the excuse that they have simply failed to monitor content that has met the threshold for mitigation, especially if it has already been recommended to other users. Content in Facebook’s ‘news feed’, ‘up next’ on YouTube or ‘For You’ on TikTok must already have been monitored by the platform in order to be recommended. 
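As an illustration of what acting beyond case-by-case complaints could look like in practice, here is a minimal, hypothetical Python sketch: once one post has been judged to break the rules, other copies of the same text elsewhere on the platform can be found by fingerprint matching rather than waiting for each one to be reported. Real systems use far more sophisticated techniques, such as perceptual hashing and machine-learning classifiers; the function names and sample data here are invented.

# Hypothetical sketch only: finding other copies of content already judged to violate the rules.
import hashlib

def fingerprint(text: str) -> str:
    # Normalise casing and whitespace, then hash, so trivial re-posts still match.
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def find_matching_posts(flagged_text: str, all_posts: dict[str, str]) -> list[str]:
    # Return the ids of every post on the platform with the same fingerprint.
    target = fingerprint(flagged_text)
    return [post_id for post_id, text in all_posts.items() if fingerprint(text) == target]

if __name__ == "__main__":
    posts = {
        "a1": "Send me your bank details to claim your prize!",
        "a2": "  send me your BANK details to claim your prize!  ",
        "a3": "Lovely weather in Berlin today.",
    }
    print(find_matching_posts("Send me your bank details to claim your prize!", posts))  # ['a1', 'a2']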

The Online Safety Bill sets out priority illegal harms, written on the face of the legislation, including not just child sexual exploitation and terrorist content, but other offences like incitement to violence, harassment, race hate and fraud. It also creates new offences for cyberflashing, the deliberate targeting of people suffering from epilepsy with flashing images, and the promotion of suicide and self-harm. For all these offences, the platforms will have to demonstrate to the regulator, Ofcom, and agree through codes of practice, how they will identify and mitigate this content. This will require them to establish whether or not they believe content on their platform has reasonably met the criminal threshold, and then take action to remove and/or mitigate that content. 

There are some who say that this will require platforms to pre-empt the law and make content moderation decisions based on their interpretation of it. However, this already happens through the enforcement of the platforms’ community policies. What we are saying is that the minimum safety standard must be based on our laws, not just on policies written in Silicon Valley. 

For users, I also believe that freedom of speech is not the same thing as freedom of reach, an idea that Elon Musk now appears to be embracing at Twitter, where he has promised to demonetise and not promote tweets containing hate speech or otherwise ‘negative’ content. We all know that there has always been a tension between freedom of speech rights and the harm that speech can cause others. Social media platforms should not be promoting speech that incites violence, race hate and other criminal activity. Whilst people have a right to express unpopular and controversial opinions, they do not have a right to have their views amplified to millions of people on social media, any more than they have a right to be broadcast on television and radio, or to be printed in someone else’s newspaper. Online platforms will also be required to remove networks of accounts where it has been clearly shown that they are being used by a hostile foreign state to spread disinformation. 

The Online Safety Bill does, though, respect the right of media organisations to reach their audiences through social media platforms without those same platforms effectively becoming their editor-in-chief. Recognised media companies, where named individuals are legally responsible for their content, will not be regulated under this system unless they are publishing material that is clearly illegal. Where a social media platform believes that content from a news organisation is in breach of its community policies, it will not be able to remove that content until after an appeal from the media company has been heard. 

The regulator, Ofcom, will also conduct risk assessments to see how effective platforms are at enforcing their terms of service, and ensure that those community standards are clearly set out to users. If a social media company claims that it doesn’t tolerate hate speech, then Ofcom will have the power to investigate what it is doing to address it. Should companies fail to comply with requests from the regulator for information, then a named company director could face criminal sanctions. Overall, failure to meet the requirements set out in the Online Safety Bill could leave companies liable for fines of up to 10% of global annual revenues, a sum that for some companies could run into billions of pounds. 

There are additional protections as well for children. Online platforms that host adult-only content must have systems in place to prevent under-18s from accessing it. This will apply to social media platforms that host adult content as well as to commercial pornography platforms. Platforms will have to demonstrate to Ofcom how they can effectively enforce this. There will also be greater transparency around age assurance. If a social media app like Instagram or TikTok says that you have to be 13 years old to use its service, Ofcom will ask it to demonstrate how it enforces this rule, beyond self-certification from the user. This could be through using device-level settings which can prove the age of a child, or other technologies, like Yoti, that help to identify younger and older users. Ofcom could also request evidence from platforms showing how old they believe their app users actually are, based on data that they hold about their online activity or have acquired from devices. 
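The following Python sketch illustrates, in a purely hypothetical way, the idea of layering age-assurance signals rather than relying on self-declaration alone: a self-declared age, a device-level parental-controls flag and an estimated age are combined, and any strong indication that the user is a minor blocks access. The signal names and logic are invented for illustration; they are not drawn from the Bill or from any Ofcom code of practice.

# Hypothetical sketch of combining age-assurance signals; names and rules are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignals:
    self_declared_age: Optional[int] = None       # what the user typed at sign-up
    device_reported_minor: Optional[bool] = None  # e.g. a parental-controls flag on the device
    estimated_age: Optional[float] = None         # e.g. from an age-estimation service

def may_access_adult_content(signals: AgeSignals, min_age: int = 18) -> bool:
    # Any strong signal that the user is a minor blocks access, whatever they declared.
    if signals.device_reported_minor:
        return False
    if signals.estimated_age is not None and signals.estimated_age < min_age:
        return False
    # Self-declaration alone is treated as the weakest signal.
    return signals.self_declared_age is not None and signals.self_declared_age >= min_age

if __name__ == "__main__":
    print(may_access_adult_content(AgeSignals(self_declared_age=21, device_reported_minor=True)))  # False
    print(may_access_adult_content(AgeSignals(self_declared_age=21, estimated_age=24.0)))          # True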

The Bill will also require platforms to offer more safety features for adult users to opt in to. This will mean that users could request not to see hate speech that neither meets the legal threshold for removal, nor is in breach of the platform’s community standards. 

The Online Safety Bill is, in effect, regulating the output of social media recommendation tools powered by artificial intelligence. AI is an enabling process, so we also need to consider the inputs into those systems, and whether they have been established along clear safety-by-design principles. In the UK, in addition to considering the regulation of the output of AI systems, we have established the AI Standards Hub, an initiative that has been funded by the government and is being led by the Alan Turing Institute in partnership with the British Standards Institution and the National Physical Laboratory, supported by the Department for Digital, Culture, Media and Sport and the Office for AI. 

The Hub’s mission is to advance trustworthy and responsible AI, with a focus on the role that standards can play as governance tools and innovation mechanisms. Product safety, cybersecurity and resilience have long been established as important areas for standardisation in non-AI contexts. The Hub will consider the role of established standards in these areas when it comes to AI. Trustworthy AI sets out to enable these technologies while preventing the misplaced trust that can lead to harm.  

We are still in the early stages of understanding how to regulate and set standards for user-to-user services online. New technologies, like the Metaverse, whilst still in scope of the Online Safety Bill, will present new challenges in how to identify illegal activity in live virtual reality environments. We will need to continue to respond both to the challenges and the opportunities of our technological future. The passing of our current legislation is just the next stage in that journey, and effective partnership between governments, regulators and the tech companies themselves is the best guarantee of success. 
