As the father of two children, aged 14 and 12, my main concerns are their safety now and the nature of the world we are building for their future. The internet is already central to how most of us stay connected, access information, work and play games. With the development of new extended-reality technologies, the experience of being online might become almost indistinguishable from the real world. So, if our public square is increasingly to be found on the internet, in a space created by one of the major tech companies, what kind of place is it?
In recent years we’ve become all too aware of the growing problem of online disinformation driving dangerous conspiracy theories and fake medical information, leaving people uncertain what to believe. Too many people have become victims of abuse online or have been defrauded by financial scams. Vulnerable young people are being targeted with content that glamorises self-harm and worse.
During our joint committee hearings on the Online Safety Bill, the former England footballer Rio Ferdinand spoke of his experience of being targeted with racial abuse and noted that unlike other forms of bullying, you can’t just shut it out, because it’s in the palm of your hand every time you look at your phone. That’s why, with the publication of our joint committee report today, we’re pressing the start button on making the internet a safer place.
These concerns have been raised many times with the big social media and online service providers. Yet despite all of their soft words, the problems have got worse. As Facebook whistleblower Frances Haugen told us, these companies have a business model based on holding users’ attention for as long as possible, and time and time again, they favour that engagement over safety concerns.
Big tech has squandered its chance to self-regulate, and this role must now be taken on by an independent regulator with legal powers to hold the companies to account. The committee were unanimous in their conclusion that we need to call time on the wild west online. As the regulator, Ofcom should have the power in law to set minimum safety standards for the services it will regulate and to take enforcement action against companies if they don’t comply.
What’s illegal offline should be regulated online, and we have set out recommendations to bring more offences clearly within the scope of the Online Safety Bill. This includes taking action, for example, to tackle terrorist content and child sexual exploitation, online fraud, dangerous disinformation and abuse on the grounds of a person’s race, religion or sexual orientation.
We’re also endorsing the recommendations of the Law Commission to create new offences for cyberflashing, for sending flashing images to people with epilepsy, and for promoting suicide or self-harm. This would mean that social media platforms would have a duty in law under the Online Safety Act to identify and mitigate – including by taking down – such content and activity. If they fail to meet their obligations under the act, they would be held liable and face large fines and, in the worst cases, criminal sanctions.
I believe that our report represents the strength of feeling in parliament about this bill, on all sides. I look forward to debating an updated Online Safety Bill when the government introduces it to parliament in the new year.