The rapid proliferation of digital technologies has transformed how people communicate, work, and access entertainment. As more of daily life shifts online, concerns about internet safety, data privacy, and the spread of harmful content have intensified globally.
The United Kingdom has responded with comprehensive legislation designed to regulate online platforms, protect users, especially children, and enhance accountability among digital service providers. This effort culminated in the Digital Safety Act 2025, a landmark framework shaping the future of internet safety in the UK.
Overview of the UK Digital Safety Act 2025
The Digital Safety Act, known formally as the Online Safety Act 2023 (with key implementation phases completed or ongoing in 2025), establishes new legal duties for social media companies, search engines, messaging platforms, and other online services to safeguard users from harmful, illegal, and age-inappropriate content. Administered by Ofcom, the UK's independent communications regulator, the Act aims to make the internet a safer place for all users, prioritizing the protection of children and vulnerable adults.
Key provisions require providers to:
- Implement robust systems to identify, remove, and prevent illegal content such as child sexual abuse material, terrorism-related content, hate speech, and fraud.
- Shield children from harmful but legal content, including material promoting self-harm, suicide, or eating disorders, as well as bullying and pornography.
- Enforce effective age assurance mechanisms to prevent underage access to age-restricted content.
- Offer adult users enhanced controls over the content they see and who interacts with them online.
- Introduce transparency measures regarding the operation of algorithms that recommend content.
- Provide accessible reporting tools for users to flag unsafe or illegal content (a simplified sketch of such a tool appears after this list).
- Maintain accountability even for companies based outside the UK if they have significant UK user presence.
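To make the reporting duty concrete, below is a minimal sketch of what the intake side of a user reporting tool might look like. The Act does not prescribe any particular implementation, and every name here (the `ContentReport` record, the `ReportQueue`, the category list, the ticket numbers) is an illustrative assumption rather than anything the legislation specifies.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import List

class ReportCategory(Enum):
    ILLEGAL_CONTENT = "illegal_content"
    HARASSMENT = "harassment"
    SELF_HARM = "self_harm"
    OTHER = "other"

@dataclass
class ContentReport:
    reporter_id: str
    content_id: str
    category: ReportCategory
    details: str = ""
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ReportQueue:
    """Collects user reports for moderator triage."""

    def __init__(self) -> None:
        self._reports: List[ContentReport] = []

    def submit(self, report: ContentReport) -> int:
        """Accept a report and return a simple ticket number."""
        self._reports.append(report)
        return len(self._reports)

queue = ReportQueue()
ticket = queue.submit(
    ContentReport("user-42", "post-1001", ReportCategory.ILLEGAL_CONTENT, "suspected fraud")
)
print(f"Report received, ticket #{ticket}")
```

A real system would also persist reports durably, route them to trained moderators, and feed confirmed-illegal items back into detection systems; the queue above shows only the intake step.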
Who Does the Digital Safety Act Apply To?
The Act’s scope covers a broad range of online services accessible within the UK. This includes:
- Major social media platforms and messaging services.
- Search engines.
- Video-sharing websites.
- Online forums and dating services.
- Consumer cloud storage and file-sharing platforms.
Importantly, it applies regardless of where service providers are based, targeting companies with a significant UK user base or that deliberately target UK users. This extraterritorial reach aims to hold all influential digital services accountable for safety standards affecting UK residents.
Protecting Children Online: A Core Objective
Children and young people are among the most vulnerable internet users, often encountering content that can harm their mental health or safety. The Digital Safety Act imposes stringent duties on online providers to prevent children from accessing:
- Pornographic or sexually explicit content.
- Legal content encouraging or instructing self-harm, suicide, or eating disorders.
- Bullying, hateful language, and violent materials.
- Dangerous stunts or substance misuse promotion.
Platforms must conduct detailed risk assessments to identify potential harms to children and implement age verification technologies to block access where appropriate. These measures are supported by Ofcom's published guidance, which helps providers assess and mitigate risks effectively. Providers must also transparently communicate age limits and enforcement mechanisms in their terms of service.
As of mid-2025, these child safety protections have entered full force, with platforms required to complete risk assessments and report on compliance to Ofcom. Failure to comply may lead to significant enforcement action.
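As a simplified illustration of where age assurance ultimately lands in code, the sketch below shows only the final gating step, assuming a date of birth has already been verified by an external age-assurance provider. The 18+ threshold and all function names are assumptions for illustration; the Act does not mandate a specific technology or a single age threshold for every kind of content.

```python
from datetime import date
from typing import Optional

MINIMUM_AGE = 18  # assumed threshold for this illustration

def years_between(born: date, today: date) -> int:
    """Whole years elapsed between two dates."""
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

def may_access_restricted_content(verified_dob: date, today: Optional[date] = None) -> bool:
    """Final gating step, run only after an age-assurance provider
    has verified the date of birth; that verification is the hard part."""
    today = today or date.today()
    return years_between(verified_dob, today) >= MINIMUM_AGE

print(may_access_restricted_content(date(2010, 5, 1)))  # False: the user is a minor
```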
Addressing Illegal Content and New Offences
The Digital Safety Act defines a comprehensive list of priority illegal content categories that online platforms are obligated to proactively address. These include child sexual abuse and exploitation, extremist and terrorist material, hate crimes based on race or religion, fraud and cybercrime, as well as intimate image abuse and revenge pornography.
Additionally, the Act covers the promotion and facilitation of drug trafficking or weapons sales, public order offences, and incitement of violence. Online service providers are legally required not only to promptly remove such illegal content once detected but also to implement preventative design measures aimed at reducing the likelihood that illegal activities occur on their platforms in the first place.
Beyond regulating content, the Digital Safety Act introduces new criminal offences targeting individuals who engage in harmful online behaviours. These offences encompass acts such as cyberflashing (the unsolicited sending of sexual images), sending threatening communications, encouraging serious self-harm, and disseminating false information intended to cause significant harm.
Convictions have already been secured under some of these provisions, reflecting the active enforcement and practical impact of the legislation. This dual approach—holding both platforms and individual offenders accountable—is central to the UK’s strategy for creating a safer digital environment for all users.

Algorithmic Accountability and Transparency
Recognizing the significant role algorithms play in shaping user experiences, often amplifying harmful content, the Digital Safety Act requires online platforms to assess the risks posed by their recommendation systems, especially concerning children's exposure. Providers must conduct thorough algorithmic risk assessments to identify how their systems might increase users' exposure to illegal and harmful content. Once risks are identified, platforms are obligated to mitigate them through design changes or effective control measures.
To enhance transparency and public accountability, providers are also required to publish annual reports detailing how their algorithms affect content visibility and user engagement. This requirement marks an important advancement in regulating the previously opaque “black box” systems that govern online content distribution, aiming to ensure greater user safety and responsiveness in the digital environment.
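The Act leaves the engineering of such mitigations to providers. One plausible design, sketched below under assumed names, is to drop items flagged by an upstream content classifier from a child account's recommendation candidates before the usual relevance ranking runs; the `risk_labels` field and the restricted-category list are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    score: float             # relevance score from the base ranker
    risk_labels: set[str]    # labels from an assumed upstream content classifier

# Categories assumed to be restricted for child accounts under a platform's own policy.
CHILD_RESTRICTED = {"self_harm", "eating_disorder", "adult", "violence"}

def rank_for_user(candidates: list[Candidate], is_minor: bool) -> list[Candidate]:
    """For minors, drop restricted candidates before ranking by relevance."""
    if is_minor:
        candidates = [c for c in candidates if not (c.risk_labels & CHILD_RESTRICTED)]
    return sorted(candidates, key=lambda c: c.score, reverse=True)

feed = rank_for_user(
    [Candidate("a", 0.9, {"self_harm"}), Candidate("b", 0.4, set())],
    is_minor=True,
)
print([c.item_id for c in feed])  # ['b']: the flagged item never reaches the feed
```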
Empowering Adult Users on Content Control
Beyond child protections, the Digital Safety Act empowers adult internet users by requiring major platforms—classified as Category 1 services—to provide tools that enable greater control over their digital experience. These tools allow users to filter or block specific content categories such as bullying, hate speech, or material related to self-harm.
Platforms must also offer features to restrict engagement from unverified or anonymous accounts, helping to prevent unwanted interactions or harassment. In addition, users can choose safer browsing modes designed to limit their exposure to harmful but legal content. The availability of these user controls supports personal autonomy online while ensuring platforms maintain their responsibility to create safer digital environments.
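In practice, controls like these often reduce to a per-user preference object applied as a filter over the feed. The sketch below is a minimal illustration under assumed names (`UserControls`, `Post`, the category strings); the Act specifies the outcomes, not this data model.

```python
from dataclasses import dataclass, field

@dataclass
class UserControls:
    blocked_categories: set[str] = field(default_factory=set)
    hide_unverified: bool = False

@dataclass
class Post:
    author_verified: bool
    categories: set[str]
    text: str

def visible(post: Post, controls: UserControls) -> bool:
    """Apply a user's own filtering preferences to a single post."""
    if controls.hide_unverified and not post.author_verified:
        return False
    if post.categories & controls.blocked_categories:
        return False
    return True

controls = UserControls(blocked_categories={"hate_speech"}, hide_unverified=True)
posts = [
    Post(True, {"hate_speech"}, "filtered out"),
    Post(False, set(), "filtered out (unverified author)"),
    Post(True, set(), "shown"),
]
print([p.text for p in posts if visible(p, controls)])  # ['shown']
```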
Enforcement and Penalties
Ofcom, as the designated regulator under the Digital Safety Act, holds extensive enforcement powers to ensure online platforms comply with their safety obligations. Platforms that fail to meet these duties face severe consequences, including fines of up to £18 million or 10% of their global annual turnover, whichever is higher. Beyond financial penalties, Ofcom has the authority to secure court orders requiring the blocking of access to non-compliant services within the UK, effectively cutting off their availability to UK users.
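Since the cap is the greater of the two figures, the effective maximum fine scales with company size, as this small illustrative calculation shows:

```python
def maximum_fine(global_annual_turnover_gbp: float) -> float:
    """Greater of £18 million or 10% of global annual turnover."""
    return max(18_000_000, 0.10 * global_annual_turnover_gbp)

# For a company turning over £5bn, the cap is 10% of turnover, not the flat £18m.
print(f"£{maximum_fine(5_000_000_000):,.0f}")  # £500,000,000
```

For any company with global annual turnover above £180 million, the 10% figure is the binding one.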
Additionally, senior executives can face criminal liability if they are found responsible for breaches of the regulations. Ofcom actively monitors compliance and has already launched enforcement actions against several companies, underscoring the UK Government's and regulator's strong commitment to upholding a safe digital environment.
Does the UK Ban VPNs?
The Digital Safety Act does not ban Virtual Private Networks (VPNs). VPNs serve as privacy tools enabling users to mask their IP addresses and encrypt internet traffic for various legitimate purposes such as enhancing personal privacy, securing data on public Wi-Fi, or accessing regional content.
However, any online service accessible through a VPN remains subject to the Act's rules if it targets UK users or hosts illegal or harmful content. The use or provision of VPNs is not prohibited under the Digital Safety Act.
That said, internet providers and regulators retain powers to block access to illegal content sites and can restrict services violating the law, but these measures do not equate to a broad ban on VPN technology.

What is the Data Act in the UK?
The Data Act (formally the Data (Use and Access) Act 2025) is separate from the Digital Safety Act and focuses on regulating data access, sharing, and governance within the UK economy. Its purpose is to promote fair and secure use of data to drive innovation while safeguarding individual rights.
Complementing existing frameworks such as the UK General Data Protection Regulation (UK GDPR), the Data Act addresses challenges around public and private sector data interoperability, transparency, and security.
Together, the Data Act and Digital Safety Act form components of the UK’s broader digital strategy, aiming to foster a trustworthy, innovative, and user-centric digital environment.
What Are the Rights of Data Privacy in the UK?
UK citizens benefit from robust data privacy protections anchored in two primary legal frameworks. The first is the UK General Data Protection Regulation (UK GDPR), which governs how personal data is processed, ensuring it is handled fairly, transparently, and for lawful purposes. The second is the Data Protection Act 2018, which complements UK GDPR by implementing its principles domestically and adding further national provisions tailored to the UK context.
These frameworks grant individuals key rights over their personal data, including the right to be informed about how their data is collected and used, the right to access personal information held by organizations, and the right to correct inaccurate or incomplete data. Additionally, they provide mechanisms for individuals to request the erasure of their data under specific circumstances, commonly referred to as the “right to be forgotten.”
Data subjects also have the right to restrict or object to certain types of data processing, as well as the right to data portability, enabling them to transfer their data between service providers. Protections extend to automated decision-making processes, ensuring such systems operate fairly and transparently.
While the Digital Safety Act focuses primarily on regulating harmful content and online safety rather than broad data privacy issues, it complements these rights by maintaining fundamental online freedoms and enhancing user safety. Together, these measures form a comprehensive legal shield to protect personal data and privacy rights for individuals across the UK.
The UK’s Digital Safety Act 2025 represents a pioneering effort to create a safer online environment, particularly for children and vulnerable users. Through robust duties placed on digital platforms, comprehensive enforcement mechanisms, algorithmic transparency, and empowering user controls, the Act sets a high standard for internet safety regulation.
While it does not ban VPNs or directly overhaul data privacy laws (which are covered under the Data Act and UK GDPR), the Act forms a key pillar in the UK’s evolving digital regulatory landscape. As technology advances and online risks morph, ongoing vigilance, dialogue, and collaboration between government, industry, and civil society will remain essential in safeguarding the digital future for all.