Far-right unrest prompts reassessment of UK online safety laws
When a wave of anti-Muslim and anti-migrant violence swept the UK in early August, far-right social media groups stoked the rioters’ anger. Today, the same channels are still active – this time tracking the aftermath of the unrest.
“The regime is cracking down on patriots,” complained one poster on a far-right channel, citing the case of a woman who was sentenced to 15 months in prison for saying in her local community’s Facebook group: “Don’t protect the mosques. Blow up the mosque with adults inside.”
The continued existence of such groups is a constant concern for the British government, which is looking for new ways to combat online extremism. One possible tool is a law passed last year that grants regulators new powers over social media: the Online Safety Act.
Why we wrote this
In a bid to curb far-right unrest, Britain is seeking to crack down on the online activity that sparked violence earlier this month. But the law the government plans to deploy has been criticized as both too weak and overreaching.
Although the law does not come into force until next year, politicians confronted with the real consequences of online hate and disinformation already see it as a panacea to curb the threat of future violence. But the law has faced fierce criticism from all sides. Human rights groups have repeatedly warned that it threatens users’ privacy and chills free speech. Others, such as London Mayor Sadiq Khan, believe it simply does not go far enough. The result is that the government must walk a difficult tightrope, with little certainty about how the law will work in practice.
“I am convinced that there is not enough regulation at the moment,” says Isobel Ingham-Barrow, CEO of the Community Policy Forum, an independent think tank specialising in the structural inequalities facing British Muslim communities. “But regulation has to be specific and you have to be careful because it can work both ways: you have to keep freedom of expression in balance.”
Safety for users in the UK
The white paper that eventually became the Online Safety Bill was published in 2019. It initially explored ways in which government and businesses could regulate content that, while not illegal, could pose a threat to the wellbeing of users – particularly children.
These efforts proved timely: When the pandemic struck less than a year later, policymakers saw firsthand how rapidly such harmful content and disinformation could spread, and the enormous reach that social media could give it.
But as the months went by, the bill’s scope expanded to tackle an ever-growing list of potential digital threats. In its final form, the act contains more than 200 clauses. It requires social media platforms to remove posts that contain “illegal material” under UK law, such as threats or hate speech. When the law comes into force next year, companies that fail to comply could face fines of up to £18 million or 10% of their global turnover – whichever is greater.
Some of the reforms have been broadly welcomed – the legislation prohibits the distribution of deepfake and revenge pornography, for example – but others have proved deeply contentious.
One clause requires websites to verify the age of their users to prevent minors from seeing inappropriate content. Organizations such as the Wikimedia Foundation have already said they cannot meet this requirement without violating their own rules on collecting user data.
Another much-debated clause requires platforms to scan users’ messages for content such as child sexual abuse material. Critics see this requirement as an attack on users’ privacy, and many experts believe it is virtually impossible to implement on end-to-end encrypted services such as WhatsApp without breaking that encryption.
Meanwhile, concerns remain that the law does not go far enough, particularly when it comes to combating extremist rhetoric. The bill originally required platforms to remove content deemed “legal but harmful,” such as disinformation that posed a threat to public health or promoted eating disorders. But that provision was eventually scrapped.
Some now say it is time to review whether such restrictions could be revived to counter rumours like those that sparked the far-right riots in August. The riots escalated after posts on X falsely claimed that the teenager who killed three children in Southport, England, was a Muslim migrant. It was later confirmed that the attacker was British-born and had no connection to Islam.
“I think the government should look very quickly at whether the Online Safety Act is fit for purpose,” Mayor Khan said in an interview with The Guardian.
Too little regulation vs. too much
But human rights groups and activists who have opposed the Online Safety Act say such calls to wield the law as a one-size-fits-all solution are themselves cause for concern.
Advocacy groups fear that the law’s already sweeping scope will lead to over-moderation on social media platforms. Ofcom, the UK media regulator responsible for implementing the law, has not yet published its guidelines for assessing “illegal content”. And because the UK has no written constitution enshrining the protection of free speech, the current atmosphere is one of uncertainty.
“If you don’t define exactly what ‘illegal content’ is, companies will play it safe,” says James Baker, campaigns and advocacy manager at the Open Rights Group, which campaigns for privacy and free speech online. If a platform wrongly leaves something up, it will be punished, he notes, but “there is no punishment for wrongfully restricting free speech.”
Even attempting to judge the legality of content against existing laws exposes inconsistencies.
“In cases of racial hatred, UK law protects against abusive, threatening or insulting words or behaviour. (But) in cases of religious hatred, victims are only protected against threatening words or behaviour. There are differences in the thresholds for different types of hatred,” says Ms Ingham-Barrow. “The lack of clarity in the definition of harm – far from making the UK the ‘safest place in the world for Muslims online’, this bill will do little to protect Muslim communities from Islamophobic abuse online.”
Experts stress that they will not be able to assess the law’s impact until next year. “It is foolish to call for a change in the law before we have seen it in practice,” says Mr Baker.
But laws to combat disinformation and hate speech are just one piece of a much larger puzzle.
“A lot of what makes people vulnerable to disinformation has to do with both the online and offline world,” says Heidi Tworek, associate professor of history and public policy at the University of British Columbia in Vancouver. “It can depend on age, gender, race, your political leanings, the things a platform’s algorithm shows you, and the community you find. We need to go beyond just regulation and recognize that disinformation has many online and offline causes.”