Far-right unrest prompts reassessment of UK online safety laws

The persistence of far-right social media groups is a constant source of concern for the UK government. Earlier this month, such groups fomented misinformation about fatal stabbings in Southport, England, sparking anti-Islam and anti-immigrant riots across the country.

One tool the government hopes to use to curb this threat is the Online Safety Act. It requires social media platforms to remove posts that contain “illegal material” under UK law, such as threats or hate speech. When the law comes into force next year, companies that fail to comply could face fines of up to £18 million or 10% of their global turnover – whichever is higher.

Why we wrote this

A story about

In a bid to curb far-right unrest, Britain is seeking to crack down on the online activity that sparked violence earlier this month. But the laws the government may introduce have been criticized as both too weak and overreaching.

When a wave of anti-Islamic violence against migrants swept the UK in early August, far-right social media groups stoked the anger of the rioters. Today, the same channels remain active – this time tracking the aftermath of the unrest.

“The regime is cracking down on patriots,” complained one poster on a far-right channel, citing the case of a woman who was sentenced to 15 months in prison for saying in her local community’s Facebook group: “Don’t protect the mosques. Blow up the mosque with adults inside.”

The continued existence of such groups has the British government looking for new ways to combat online extremism. One possible tool is a law passed last year that grants powers to monitor social media: the Online Safety Act.

Although the law does not come into force until next year, politicians facing the real consequences of online hate and disinformation see it as a panacea to curb the threat of future violence. But it has faced fierce criticism from all sides: human rights groups have repeatedly warned that the law threatens users’ privacy and chills free speech, while others, such as London Mayor Sadiq Khan, believe it simply does not go far enough. The government is thus forced to walk a difficult tightrope.

“I am convinced that there is not enough regulation at the moment,” says Isobel Ingham-Barrow, CEO of the Community Policy Forum, an independent think tank specialising in the structural inequalities facing British Muslim communities. “But regulation has to be specific and you have to be careful because it can work both ways: you have to keep freedom of expression in balance.”

Safety for users in the UK

The white paper that later became the Online Safety Bill was written by lawmakers in 2019. It initially explored ways in which government and businesses could regulate content that, while not illegal, could pose a threat to the wellbeing of users – particularly children.
