Why toxic big tech must be held to account

Article written for Mace Magazine - published 18 October 2021

If we want to end the tragedy of the harm that can come to young minds from engaging with toxic content on social media platforms, the corporate culture of big tech companies needs to change. “It is our experience that there is frustratingly limited success when harmful content is requested to be removed by the platforms, particularly in terms of self-harm and suicidal content.” That was the view of Ian Russell in the evidence he gave to the UK parliament’s Joint Select Committee, which I chair, set up to scrutinise the government’s draft Online Safety Bill. His daughter Molly took her own life four years ago, aged just 14.

As the father of a daughter of the same age, I know that the pain Ian will always feel is the worst fear of all parents of teenage children. While we can give them love, support and guidance as best we can in the real world, the virtual one is a personalised black box working day and night to hold their attention. In that space we cannot follow every step they take, and they themselves might not always appreciate that each piece of content they engage with influences both what they see next and the direction of their journey.

Worrying content

You might say that the best way to stop children finding harmful content online is to keep them off the internet. Well, good luck with that. Social media has created an environment where the fear of missing out, the need for positive affirmation through likes and views, and constant lifestyle comparison are now deeply ingrained in the childhood experience. According to the 2021 Ofcom Media Nations report, 44 per cent of eight- to 11-year-olds use social media sites, even though the minimum joining age is 13, and 27 per cent of children in this age group have seen “worrying or nasty content” online.

Among 12- to 15-year-olds, 87 per cent use social media apps and 31 per cent report seeing nasty or worrying content. These figures, high as they are, probably understate the true picture. They also clearly demonstrate that there is no effective age verification system for social media.

Teenage anxiety

If the experience of many young people includes exposure to harmful content, why do they continue to engage with it? It’s a question Instagram has asked itself in research conducted by the platform over the past two years. That research reported back that “32 per cent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse”. Another research presentation told the company: “Teens blame Instagram for increases in the rate of anxiety and depression… This reaction was unprompted and consistent across all groups.” However, a research manager noted: “Teens have told us that they don’t like the amount of time they spend on the app but feel like they have to be present.”

Profit before harm

We know about this research not because the company has shared it with regulators, academics or public health organisations, but because it formed part of the sensational Facebook Files investigation published by the Wall Street Journal. Thanks to whistleblowers concerned about the devil-may-care attitude that Facebook, the owner of Instagram, takes towards the health and wellbeing of its young users, we can now see some of what the company knows. The investigation has exposed how the company, time and again, puts profit before harm. Its own research tells it that a large number of teen Instagram users say the service makes them feel worse about themselves, but the company just wants them to keep coming back; as long as they are engaged with the app, that is all that matters.

Exploiting the vulnerable

However, the failings of social media companies to improve the user experience are not just that people engage with harmful content posted by others, but that the platforms themselves actively promote it to vulnerable people. The Pathways research report, recently published by the 5Rights Foundation, highlights how children on Instagram are served harmful content by the platform, including images related to suicide and self-harm. In one example, an account was served a Home Office campaign aimed at children, encouraging them to identify child abuse, alongside other posts of sexualised images. Social media companies are clearly responsible for the role their engagement-based algorithms play in promoting and recommending harmful content.

This is why legislating on online safety is so important. Research exposing the harm caused by the engagement-based business model of platforms like Facebook needs to be available to independent regulators. Tech execs need to be open to external challenge and scrutiny over the decisions they take. Tech regulation is not just about content moderation but about creating oversight of the systems the companies have developed to hold the attention of their users whatever it takes, even if that involves harm, abuse, extremism and disinformation. The time has come to hold them to account.

