Can the Online Safety Act Safeguard Us Without Compromising Our Freedoms?


London (Parliament Politics Magazine) – Six years after it was first conceived, the Online Safety Act has at last passed into law. Its implementation is long overdue, considering the internet’s historical lack of regulation. Over the years, the internet has expanded its reach into every aspect of people’s lives.

Recent global events underscore the repercussions of an unregulated online environment. The Israel-Hamas conflict, for instance, saw a flood of misinformation on social media, blurring the lines between real and fake videos and photos.

Concurrently, cases of online grooming and child sexual abuse have surged by an alarming 80 percent in the last four years. Additionally, four out of ten UK children aged eight to 17 have faced bullying, whether online or offline.

Online Safety Bill Under Scrutiny

Throughout its development, the Online Safety Bill faced intense scrutiny from both online-safety advocates and free-speech campaigners: the former argued that the bill failed to adequately ensure people’s safety, while the latter contended that it did not sufficiently safeguard free expression and privacy.

After numerous revisions, the final version of the act retains the core principles outlined in the original draft. The independent regulator, Ofcom, is empowered to fine social media platforms up to £18 million or 10 percent of their global annual revenue (whichever is higher) if they fail to promptly remove illegal content. This encompasses offenses such as child sexual abuse, terrorism, assisting suicide, and threats to kill. The act also sets out a list of “priority offenses,” specifying the types of illegal content that tech platforms must prioritize for removal.

Under the new provisions, tech companies can now face criminal liability, and social media executives risk up to two years in jail if they persistently neglect to safeguard children from harm.

This covers not only illegal content but also certain types of legal content harmful to children, such as material that promotes eating disorders, self-harm, or cyberbullying. Tech companies are required to formulate their own “terms of service” based on the “codes of practice” established by Ofcom, and to enforce them consistently.

Cyberflashing and Upskirting Have Been Criminalized

It’s noteworthy that the initial bill also included provisions to protect adults from “legal but harmful” content, but these were ultimately discarded. Alongside the act, several other laws have been strengthened, particularly those concerning violence against women and girls. Under amendments to the “revenge pornography” law, individuals who share, or merely threaten to share, intimate images without consent may now face up to two years in prison.

Cyberflashing and upskirting have been criminalized, and three new communications offenses have been created, covering the intentional sending of harmful, false, and threatening communications.

Free-speech advocates are widely dissatisfied with the final form of the Online Safety Act. Barbora Bukovská, Senior Director for Law and Policy at the human rights organization Article 19, notes that although the final version is a “slight improvement” thanks to the removal of the “legal but harmful” provisions, the act remains an “extremely complex and incoherent piece of legislation” that, in her view, will undermine freedom of expression, information, and the right to privacy. She further asserts that it will prove “ineffective” in making the internet safer.


New Requirements for Social Media Platforms

With the new requirement that social media platforms enforce their own terms of service, Bukovská is concerned that companies will become over-vigilant, gaining excessive power of censorship. She believes this approach will incentivize platforms to censor whole categories of lawful speech that they, or their advertisers, deem harmful, inappropriate, or controversial. She also points out that algorithm-based moderation technology is not yet advanced enough to navigate this intricate landscape effectively.

In Bukovská’s view, the Online Safety Act should have shifted its focus from content moderation to addressing the business model of Big Tech, which relies on advertising and monetizing users’ attention. Instead, she suggests that the legislation could have explored measures to increase competition rather than consolidating the market power of industry giants.

Legitimate concerns exist about the suppression of marginalized voices online, as social media companies have faced criticism for their moderation policies in the past. An investigative report by The Intercept revealed that TikTok moderators were instructed to suppress videos from users perceived as “ugly, poor, or disabled.”

Beth Malcolm

Beth Malcolm is a Scotland-based journalist studying French and British Sign Language at Heriot-Watt University. She is originally from the north west of England but lives in Edinburgh to complete her studies.