
Increasing incidents of cyberbullying are taking a serious toll on young people’s mental wellbeing. It’s time to hold platforms accountable for their role in fostering digital environments where abuse thrives – and here’s what we can do about it
With social media now a go-to outlet for self-expression and connection for so many people, it’s perhaps unsurprising that the impact of technologically mediated bullying is increasingly widespread. In fact, one in six adolescents has experienced cyberbullying, according to a 2024 WHO/Europe study. The research, which examined forms of peer violence among youths across 44 countries and regions, found that exposure to online abuse and harassment now surpasses in-school bullying – a consequence of the intensified digitalisation of young people’s everyday lives.
Unlike offline bullying, which has typically been seen as confined to specific settings and situations, cyberbullying follows its victims home, invading their personal time and private space. It manifests as a continuous onslaught of alerts and notifications that can reach its victims anywhere.
‘Cyberbullying’ refers to intentionally hurtful behaviour carried out using digital devices. It can include:
– Sending unkind or abusive messages or comments.
– Sharing images or videos with the aim of causing shame or embarrassment to the victim.
– Spreading rumours and false information about an individual.
– Creating hate sites or groups targeting a person.
– Using digital channels to encourage someone to harm themselves, or to avoid seeking the help they need.
All forms of bullying take a severe emotional toll on their victims. However, the often relentless and inescapable nature of online harassment can magnify feelings of anxiety, depression, low self-esteem, stress, loneliness, and isolation.
The current crackdown
As part of ongoing efforts to address rising cases of cyberbullying, policymakers have called for schools to implement bans on the use of smartphones, aiming to restrict students’ exposure to harmful online content. Recent proposals include lockable phone pouch schemes, which would require students to secure their devices in magnetic pockets during the school day.
However, current evidence suggests that these measures have little effect on changing young people’s online habits and behaviours. A study of 1,227 students across 30 secondary schools in England, by researchers at the University of Birmingham, found that restricting smartphone access at school neither decreased overall device and social media use, nor improved students’ mental wellbeing.
Importantly, as researchers at the London School of Economics argued in a 2024 post, framing smartphone bans as a ‘silver bullet’ that will prevent the negative impact of social media on young people “lets the profit-hungry tech sector off the hook”. Instead, policymakers should be demanding action from the platforms enabling the spread of online abuse.
Hate by design
While it’s true that many people can find community, connection, and creative outlets through social media, this does appear to be only one side of the story. Recent events have shone a damning light on how hate and hostility are baked into the business models of many online platforms. Driven by an attention economy that profits most from emotionally provocative content, outrage and animosity have become the internet’s most valuable currency.
Algorithms contribute to this by maximising the visibility of content that triggers strong reactions, rewarding the users who create and share it. Within this system, toxicity becomes a deliberate strategy, weaponised to bait others into reacting to, responding to, and ultimately boosting the reach of harmful content.
With immediacy and reactivity embedded in the design and functionality of online platforms, gone are the days of holding your tongue if you’ve got nothing nice to say. Epitomised by the rise of ‘stitch culture’ on TikTok, users are incentivised to shame and belittle one another on a public stage, turning bullying behaviour into a form of entertainment.
These features have created a digital landscape where abuse and harassment are increasingly normalised. For young people who engage with online platforms day-to-day, these behaviours can come to seem acceptable – and sometimes even socially commendable.
The anonymity enabled by social media also allows individuals to share hurtful content without consequence or repercussion. Meanwhile, for victims of online bullying and harassment, the uncertainty of not knowing who their abuser is can lead to a loss of trust in the people around them, making it more difficult to seek support or pursue accountability.
The problem of platform inaction
Platforms have consistently failed to put an end to online abuse, placing their financial interests above the safety and wellbeing of users. Rather than taking steps to curb harmful behaviour, it could be argued that some social media giants have actively contributed to hostile online environments by cutting moderation teams and removing policies against sharing hateful content. This reflects a longstanding pattern of neglect when it comes to protecting platform users.
In 2023, research from the Center for Countering Digital Hate (CCDH) found that X (formerly Twitter) failed to act on 99% of reported hateful tweets from users promoting neo-Nazi, antisemitic, racist, and anti-LGBT hate, despite the messages clearly violating the platform’s own policies.
Similarly, a 2022 report on #HiddenHate by CCDH (which analysed 8,700+ messages sent to women with a collective 4.8 million followers) revealed that Meta failed to act on 90% of abusive direct messages sent via Instagram.
Calling for platform accountability
While it’s easy to feel powerless against the influence of social media platforms, we can still take meaningful action to challenge the culture of cyberbullying they’ve helped create – whether for your own sake, for someone you know, or simply in the hope of a more harmonious social media landscape for future generations.
1) Write to policymakers
Effective change in tackling abusive online environments begins with legislation and policy. Email your local MP to voice your support for stronger legislation that holds platforms accountable and makes user safety a priority.
2) Report abuse consistently
Although we know that platforms routinely fall short when it comes to addressing online harassment, it is still important to report abuse when you see it. These reports generate valuable data that can be used later to pressure technology companies into implementing better safeguards for users.
3) Support advocacy groups
Join or support campaign groups dedicated to holding platforms accountable, such as the CCDH, as well as charities like the NSPCC and YoungMinds, whose advocacy includes promoting safer online spaces for young people.
4) Share resources with victims
If you know someone who’s been a target of cyberbullying, let them know they’re not alone. Organisations like YoungMinds and the NSPCC offer guidance through their sites, as well as signposting to relevant helplines that provide specialist support for children and adolescents, parents, and carers.
