Facebook, Instagram, Snapchat and X do not remove dangerous content about suicide and self-harm

Some of the largest social media platforms are failing to detect and remove dangerous content related to suicide and self-harm, according to a study.

The Molly Rose Foundation found that, of more than 12 million content moderation decisions made by six of the largest platforms, over 95% of the suicide and self-harm content was detected and removed by just two sites – Pinterest and TikTok.

The other four platforms mentioned in the report were Facebook, Instagram, Snapchat and X, formerly Twitter.

The foundation said most platforms’ response to such content was “inconsistent and ineffective”.

The charity said Meta’s Instagram and Facebook were each responsible for 1% of all suicide and self-harm content detected by the major websites examined, and X was responsible for just 700 content decisions.

The foundation now warns that the Online Safety Act does not go far enough to address the social media companies’ apparent systematic failures in content moderation.

Ian Russell, the charity’s chairman, called on the government to commit to a new online safety law that can further tighten regulation.

Mr. Russell and his family established the Molly Rose Foundation in memory of his daughter Molly, who ended her life in November 2017 at the age of 14 after viewing harmful content on social media.

“Nearly seven years after Molly’s death, it is shocking to see that most major technology companies continue to sit on the sidelines, choosing inaction over saving young lives,” Russell said.

“As the last few weeks have shown, it is abundantly clear that much more ambitious regulation is needed.

“That’s why it’s time for the new government to finish its job and commit to stronger online safety legislation.”

“Parents across the country will rightly be appalled that, despite their warm words, Instagram and Facebook continue to expose their children to avoidable harm.

“There are no ifs or buts: it is clear that decisive action is required.”

In its report, the foundation said it found that social media sites consistently fail to detect harmful content in the most vulnerable areas of their services.

For example, the report said that only one in 50 suicide and self-harm posts detected by Instagram was a video, even though the short-form video feature Reels now accounts for half of all time spent on the app.

The study also accuses the sites of failing to enforce their own rules, pointing out, for example, that while TikTok detected nearly three million pieces of suicide and self-harm content, it blocked only two accounts.

The analysis was based on content moderation decisions made in the EU, which platforms are required to make publicly available.

In response to the study, a Meta spokesperson said: “Content that encourages suicide and self-harm violates our rules.”

“We do not believe the statistics in this report reflect our efforts. Last year alone, we removed 50.6 million pieces of such content on Facebook and Instagram globally, and 99% of it was actioned before it was reported to us.

“However, in the EU we are currently unable to deploy all of the measures that are available in the UK and the rest of the world.”


A Snapchat spokesperson said: “The safety and well-being of our community is our highest priority. Snapchat was designed to be different from other platforms: there is no open news feed with unverified content, and content is moderated before being shared publicly.”

“We strictly prohibit content that promotes or encourages self-harm or suicide. If we see this or it is reported to us, we will promptly remove it and take appropriate action.

“We also share self-harm prevention and support resources when we learn of a member of our community in distress and can notify emergency services if necessary.

“We also continue to work closely with Ofcom to implement the Online Safety Act, including protecting children from such harm.”

TikTok did not provide a statement, but said the company’s rules are clear and do not allow the display, promotion or sharing of plans for suicide or self-harm.

A spokesperson for the Department for Science, Innovation and Technology said: “Social media companies have a clear responsibility to ensure the safety of users of their platforms, and their processes must be effective in this regard.

“Under the Online Safety Act, people who intentionally encourage self-harm currently face up to five years in prison. Once the law is fully implemented, platforms will also have to proactively remove illegal content that encourages serious self-harm and prevent children from seeing material that promotes self-harm or suicide, even if it is below the criminal threshold.

“We want to introduce these new protections as quickly as possible, but companies should not wait for the laws to come into force – they must take effective action now to protect all users.”

Pinterest and X did not respond to a request for comment.
