Article by Dan Milmo for the Guardian - published 14 December 2021
Britain’s online safety bill needs a sweeping overhaul to prevent children from accessing pornography, vulnerable people from being encouraged to commit self-harm and negligent tech chiefs from failing to protect users, according to a committee of MPs and peers.
A wide-ranging series of proposals to amend the pioneering legislation also includes creating a new criminal offence for cyberflashing, punishing tech platforms for hosting fraudulent adverts, and exempting news organisations from content takedowns.
The recommendations by the joint committee on the draft online safety bill would tackle an industry that has become the “land of the lawless”, according to the committee’s Conservative chair, Damian Collins MP. “A lack of regulation online has left too many people vulnerable to abuse, fraud, violence and in some cases even loss of life,” he said.
The online safety bill covers websites and apps that offer user-generated content, such as Facebook, Instagram, Twitter, TikTok and YouTube, as well as search engines such as Google. It imposes a duty of care on these companies to protect users from harmful content; those that fail face substantial fines levied by Ofcom, the communications regulator charged with overseeing the legislation once it becomes law.
However, the draft legislation has been criticised by campaigners for leaving too many loopholes on a range of issues from preventing children from accessing pornography to tackling anonymous abuse. The committee’s recommendations include introducing new criminal offences for: cyberflashing; encouragement of serious self-harm; deliberately sending flashing images to people with epilepsy with the intention of inducing a fit; and sending false communications – such as deep-fake videos – which intentionally cause “non-trivial” emotional, psychological or physical harm.
The committee warns that cyberflashing has become a serious and prevalent problem online, with more than three-quarters of girls aged 12-18 and four in 10 women reporting having been sent unsolicited images of penises. “Regardless of the intention(s) behind it, cyberflashing can violate, humiliate and frighten victims, and limit women’s participation in online spaces,” said the report.
Although the criminal offence of encouraging serious self-harm would apply to individuals, the report makes clear that platforms must also be held legally to account, and urges the government to make it easier for users and their families to sue platforms that fail to adhere to the act.
The father of Molly Russell, a 14-year-old schoolgirl who killed herself after viewing graphic images of self-harm and suicide online, welcomed the report. “I am glad the era of a self-regulated internet is coming to an end. The platforms must now stop monetising misery and instead be compelled to prioritise safety,” said Ian Russell.
The report also steps up the threat of criminal sanctions for tech executives, who have been warned by Boris Johnson and the culture secretary overseeing the legislation, Nadine Dorries, that they will be in the firing line under an amended bill. The report calls for tech companies to appoint a boardroom-level executive who will be designated the firm’s “safety controller” and will be liable for a new criminal offence: failing to deal with “repeated and systemic failings that result in a significant risk of serious harm to users”. Tech firms have warned that such a move would stymie investment in the UK and could be copied by non-democratic regimes.
The report proposes a series of measures to protect children, including imposing a legal duty on pornography sites to prevent children from accessing them, which is likely to require the age-assurance procedures called for by children’s safety campaigners. The committee also calls for social media and video-sharing platforms, which bar children under 13 from their apps, to reveal how many underage users they have.
The inclusion of advertising in the report will be seen as another victory for campaigners, who have warned of the devastation caused by fraudulent ads online. Under the committee’s recommendation, Ofcom would be charged with acting against platforms that consistently allow the publication of harmful adverts.
The government is expected to respond to the report early next year, followed by the publication of a revised bill, a second reading in parliament by April and then the bill becoming law in late 2022 or early 2023. The committee’s report is expected to be influential, with Dorries having pledged in November to look at the recommendations “very seriously”.
In a statement, Dorries said: “Our groundbreaking bill will require tech firms and social media companies to take long-overdue responsibility to protect their users – especially children – from a full range of illegal and harmful content. Crucially, the new comprehensive legislation will hold big tech to account if they fail to act.”