Social media giants TikTok and Instagram are still pushing “industrial levels” of suicide, self-harm and depression content to vulnerable teenagers, despite new online safety laws intended to protect children from harmful material.
Shocking new research by the Molly Rose Foundation has revealed that algorithms on both platforms continue to bombard teenage accounts with a “tsunami of harmful content”, with almost all recommended videos containing material promoting suicide, self-harm or intense depression. The findings come just weeks after Ofcom’s children’s safety codes came into force under the Online Safety Act.
The charity, founded by Ian Russell after his 14-year-old daughter Molly took her own life following exposure to harmful social media content, found that 96 per cent of videos recommended on TikTok’s For You page and 97 per cent on Instagram Reels contained harmful material when teenage accounts engaged with suicide and depression posts.
Devastating Scale of Harmful Content
The research, conducted between November 2024 and March 2025, used dummy accounts posing as a 15-year-old girl to test how algorithms responded to engagement with harmful content. The results revealed an alarming pattern of amplification that mirrors the tragic circumstances of Molly Russell’s death eight years ago.
More than half (55 per cent) of recommended harmful posts on TikTok actively referenced suicide and self-harm ideation, whilst 16 per cent included references to suicide methods, some of which researchers had never previously encountered. The posts were found to “glorify” suicide and normalise intense feelings of misery and despair.
Andy Burrows, chief executive of the Molly Rose Foundation, said the findings showed harmful algorithms continue to operate at an “industrial scale”. “It is shocking that in the two years since we last conducted this research the scale of harm has still not been properly addressed, and on TikTok the risks have actively got worse,” he stated.
The reach of this content is staggering. One in ten harmful videos on TikTok’s For You page had been liked at least one million times, with the average harmful post receiving 226,000 likes. On Instagram Reels, one in five harmful recommended videos had been liked more than 250,000 times.
Platforms ‘Gaming’ Safety Measures
The research revealed that whilst both platforms had implemented features allowing teenagers to give negative feedback on recommended content, as required by Ofcom under the Online Safety Act, they also enabled users to give positive feedback on the same harmful material, which increased their exposure to it.
The Molly Rose Foundation accused both platforms of “gaming” the Online Safety Act by introducing features ostensibly designed to comply with legislation whilst continuing to amplify dangerous content through their recommendation systems. The charity found “no evidence at all” that TikTok had taken measures to increase safety-by-design features.
Ian Russell, whose daughter’s death prompted nationwide calls for social media reform, expressed devastation at the findings. “It is staggering that eight years after Molly’s death, incredibly harmful suicide, self-harm and depression content like she saw is still pervasive across social media,” he said.
Profiting from Harmful Content
The investigation also uncovered evidence that social media platforms profit from advertising adjacent to harmful posts. Adverts for fashion and fast food brands popular with teenagers, as well as UK universities, were found alongside content promoting suicide and self-harm.
The charity’s report, produced in partnership with Bright Data, highlighted how personalised AI recommender systems amplified harmful content once users had engaged with it, despite platforms making it harder to search for dangerous content using hashtags.
Regulatory Response Falls Short
Ofcom began implementing the Online Safety Act’s children’s safety codes in July 2025, with measures intended to “tame toxic algorithms”. However, the Molly Rose Foundation expressed serious concerns that the regulator had recommended platforms spend just £80,000 to correct algorithmic issues linked to deaths like Molly’s.
An Ofcom spokesperson defended the new measures, stating: “Change is happening. Since this research was carried out, our new measures to protect children online have come into force. These will make a meaningful difference to children – helping to prevent exposure to the most harmful content, including suicide and self-harm material.”
Technology Secretary Peter Kyle acknowledged the ongoing challenge, noting that 45 sites have been placed under investigation since the Online Safety Act came into effect. “Ofcom is also considering how to strengthen existing measures, including by proposing that companies use proactive technology to protect children from self-harm content,” he added.
Platform Responses and Denials
Both TikTok and Meta, which owns Instagram, disputed the research findings. A TikTok spokesperson claimed: “Teen accounts on TikTok have 50+ features and settings designed to help them safely express themselves. With over 99% of violative content proactively removed by TikTok, the findings don’t reflect the real experience of people on our platform.”
Meta similarly rejected the findings, with a spokesperson stating: “We disagree with the assertions of this report and the limited methodology behind it. Tens of millions of teens are now in Instagram Teen Accounts, which offer built-in protections that limit who can contact them, the content they see, and the time they spend on Instagram.”
Calls for Stronger Action
The findings have prompted renewed calls for more robust government intervention. Gregor Poynton, Labour MP for Livingston and Chair of the All-Party Parliamentary Group on Children’s Online Safety, described the report as “damning” and highlighted how social media companies are “still unforgivably pushing the most devastating harmful content to children”.
The Molly Rose Foundation is now urging the government to strengthen the Online Safety Act, arguing that current measures are merely “a sticking plaster” that will not address preventable harm. The charity wants tech firms to be required to identify and mitigate all risks faced by young people on their platforms.
Russell warned that Ofcom’s implementation of the Online Safety Act has been too timid in the face of disturbing levels of preventable harm. “For over a year, this entirely preventable harm has been happening on the Prime Minister’s watch and where Ofcom have been timid it is time for him to be strong and bring forward strengthened, life-saving legislation without delay,” he said.
The Human Cost
The research serves as a stark reminder of the human cost of algorithmic amplification. Molly Russell viewed 2,100 suicide, self-harm and depression posts on Instagram alone in the six months before her death. A coroner concluded that social media “more than minimally contributed” to her death.
Eight years later, despite promises of reform and new legislation, vulnerable teenagers continue to face what the Foundation describes as an “inescapable rabbit hole” of harmful content. The charity’s research found that once users strayed into harmful material, algorithms made it almost impossible to escape the cycle of dangerous recommendations.
As the debate over online safety continues, the Molly Rose Foundation’s findings underscore the urgent need for meaningful action to protect vulnerable young people from the devastating impact of algorithmic amplification of harmful content.