Study shows: Social media companies fail to remove suicide content

LONDON: Some of the largest social media platforms are failing to identify and remove dangerous content about suicide and self-harm, according to a new study.

Research by the Molly Rose Foundation examined more than 12 million content moderation decisions made by six of the largest platforms and found that over 95% of the harmful content detected and removed came from just two sites – Pinterest and TikTok.

The charity said Meta’s Instagram and Facebook were each responsible for just 1 percent of all suicide and self-harm content detected by the major websites examined, and that X, formerly known as Twitter, was responsible for just one in 700 content decisions.

The study analyzed publicly available records of over 12 million content moderation decisions made by six websites: Facebook, Instagram, Pinterest, Snapchat, TikTok and X. It found that most platforms’ response to suicide and self-harm content was “inconsistent and ineffective.”

The charity’s chairman, Ian Russell, and his family founded the Molly Rose Foundation in memory of his daughter Molly, who ended her life in November 2017 at the age of 14 after seeing harmful content on social media.

“Nearly seven years after Molly’s death, it is shocking to see that most major technology companies continue to sit on the sidelines, choosing inaction over saving young lives,” Russell said. “Without any ifs, ands or buts, it is clear that decisive action is needed.”

In its report, the foundation said it found that social media sites consistently fail to detect harmful content in the most vulnerable areas of their services.

For example, the report said that only one in 50 suicide and self-harm posts detected by Instagram was a video, even though the short-form video feature Reels now accounts for half of all time spent on the app.

The study also accuses the sites of failing to enforce their own rules, noting, for example, that while TikTok detected nearly three million pieces of suicide and self-harm content, it blocked only two accounts.

The investigation was based on content moderation decisions taken in the EU, which must be made publicly available.

In response to the study, a Meta spokesperson said: “Content that encourages suicide and self-harm violates our rules.

“We do not believe the statistics in this report reflect our efforts. Last year alone, we removed 50.6 million pieces of such content on Facebook and Instagram globally, and 99% of them were acted upon before they were reported to us.

“However, in the EU we are currently not able to implement all of the measures that are underway in the UK and the rest of the world.”

A Snapchat spokesperson said: “The safety and well-being of our community is our highest priority. Snapchat was designed to be different from other platforms: there is no open news feed with unverified content, and content is moderated before being shared publicly.”

“We strictly prohibit content that promotes or encourages self-harm or suicide. If we see such content or it is reported to us, we will promptly remove it and take appropriate action.

“We also share self-harm prevention and support resources when we learn of a member of our community in distress and can notify emergency services if necessary.

“We also continue to work closely with Ofcom to implement the Online Safety Act, including protecting children from such harm.”

TikTok did not provide a statement, but said the company’s rules are clear and do not allow the display, promotion or sharing of plans for suicide or self-harm.

Pinterest and X did not respond to a request for comment. – PA Media/dpa
