Social media companies fail to detect and remove content about suicide and self-harm

Some of the largest social media platforms are failing to identify and remove dangerous content about suicide and self-harm, according to a new study.

Research by the Molly Rose Foundation found that of more than 12 million content moderation decisions made by six of the largest platforms, over 95% were made by just two sites – Pinterest and TikTok.

The charity said Meta’s Instagram and Facebook were each responsible for just 1% of all suicide and self-harm content detected by the major sites examined, and that X, formerly known as Twitter, was responsible for just one in 700 content decisions.

The study analyzed publicly available records of over 12 million content moderation decisions made by six websites: Facebook, Instagram, Pinterest, Snapchat, TikTok and X. It found that most platforms’ response to suicide and self-harm content was “inconsistent and ineffective”.

The foundation now warns that the Online Safety Act does not go far enough to address the clear systemic deficiencies in social media companies’ content moderation.

The charity’s chairman, Ian Russell, called on the UK government to commit to a new online safety law that further tightens regulation.

Mr Russell and his family founded the Molly Rose Foundation in memory of his daughter Molly, who ended her life in November 2017 at the age of 14 after seeing harmful content on social media.

“Nearly seven years after Molly’s death, it is shocking to see that most major technology companies continue to sit on the sidelines, choosing inaction over saving young lives,” Russell said.

“As the last few weeks have shown, it is abundantly clear that much more ambitious regulation is needed.

“That is why it is time for the new government to finish the job and commit to a stronger online safety law.

Ian Russell advocates for change (Yui Mok/PA)

“Parents across the country will rightly be appalled that platforms like Instagram and Facebook, for all their warm words, continue to expose children to inherently preventable harm.

“There are no ifs or buts: it is clear that decisive action is required.”

In its report, the foundation said it found that social media sites consistently fail to detect harmful content in the highest-risk parts of their services.

For example, the report said that only one in 50 suicide and self-harm posts detected by Instagram was a video, even though the short-form video feature Reels now accounts for half of all time spent on the app.

The study also accuses the sites of failing to enforce their own rules, pointing out, for example, that while TikTok detected nearly three million pieces of suicide and self-harm content, it suspended only two accounts.

The analysis was based on content moderation decisions taken in the EU, which platforms are required to make publicly available.

In response to the study, a Meta spokesperson said: “Content that encourages suicide and self-harm violates our rules.

“We do not believe the statistics in this report reflect our efforts. Last year alone, we removed 50.6 million pieces of such content on Facebook and Instagram globally, and 99% of them were acted upon before they were reported to us.

“However, in the EU we are not currently able to deploy all of the measures that are available in the UK and the rest of the world.”

A Snapchat spokesperson said: “The safety and well-being of our community is our highest priority. Snapchat was designed to be different from other platforms: there is no open news feed with unverified content, and content is moderated before being shared publicly.

“We strictly prohibit content that promotes or encourages self-harm or suicide. If we see such content or it is reported to us, we will promptly remove it and take appropriate action.

“We also share self-harm prevention and support resources when we learn of a member of our community in distress and can notify emergency services if necessary.

“We also continue to work closely with Ofcom to implement the Online Safety Act, including protecting children from such harm.”

TikTok did not provide a statement, but said its rules are clear that showing, promoting or sharing plans for suicide or self-harm is not allowed.

A spokesperson for the Department for Science, Innovation and Technology said: “Social media companies have a clear responsibility to ensure the safety of users on their platforms, and their processes must be effective in this regard.

“Under the Online Safety Act, people who intentionally encourage self-harm currently face up to five years in prison. Once the law is fully implemented, platforms will also have to proactively remove illegal content that encourages serious self-harm and prevent children from seeing material that promotes self-harm or suicide, even if it is below the criminal threshold.

“We want to introduce these new protections as quickly as possible, but companies should not wait for the laws to come into force – they must take effective action now to protect all users.”

Pinterest and X did not respond to a request for comment.
