The UK’s Online Safety Act: What Trust & Safety Teams Can Expect

October 26, 2023

The UK’s much-awaited Online Safety Bill has received Royal Assent and is now law as the Online Safety Act. Intended to “make the UK the safest place to go online,” the legislation has been through a long, iterative process since the government’s online harms proposals were first published in April 2019. Among the most controversial online safety regulations passed in recent years, the Act seeks to ensure online safety for users in the UK, but its application may pose significant problems for tech platforms.

The legal team at ActiveFence has reviewed the Act, and below we share what we believe are its most crucial components.

Who is Ofcom, and what powers do they have?

Ofcom, the United Kingdom’s communications regulator, is the body charged with enforcing the Online Safety Act. As part of its duty to oversee compliance, Ofcom will draft initial guidance and codes of practice on the many areas the Act covers. This guidance must be published within 18 months of the Act’s passage, and Ofcom expects to publish its first codes of practice soon after commencement, possibly within the first two months.

Who will be impacted?

Broadly speaking, the Online Safety Act will apply to three categories of online services with links to the UK:

  1. User-to-user services: These are services where users can post content seen by other users. They may include social media platforms, online chat and messaging services (excluding email, SMS, and MMS), gaming services, file-sharing services (excluding internal business tools), music and video sharing services, blogging platforms, and potentially website hosting services. Internal business services (such as CRM systems) and “limited functionality services” (for example, websites that only allow users to post below-the-line comments or reviews) are exempt.
  2. Search services: Services that include a search engine, allowing users to search multiple websites or databases.
  3. Pornography services: This covers services that publish or display their own pornographic content; any service where users share their own content will instead be treated as a user-to-user service.

The Online Safety Act represents a seismic shift in the regulation of online platforms in the UK, imposing specific duties on platforms that host user-generated content (UGC). Online companies offering multiple services may also need to ascertain which parts of their businesses the law applies to. For instance, a platform offering one-to-one messaging and voice calling may have requirements placed on its messaging feature, while voice calls remain outside the law’s scope.

Last year, another piece of platform regulation was also implemented in the EU in the form of the Digital Services Act (DSA). Whilst there are some parallels to be drawn, the DSA focuses predominantly on transparency of moderation, risk assessments, and compliance processes in relation to illegal content. By contrast, the Online Safety Bill focuses on the measures that platforms have in place to tackle not only illegal content but also content that is harmful to children.

What are the obligations? 

The law states that platforms will have a “duty of care” to keep their users safe, but what this means in terms of specific obligations will depend on the size and capacity of the platform in question and the likelihood of harmful content being shared on it. In fulfilling this “duty of care,” Ofcom will likely expect platforms to take steps including proactive monitoring for online harm (especially for high-risk platforms), tools that allow users to control the type of content they access, and effective notice-and-takedown systems. Additionally, platforms will need to consider how their own algorithms and design may amplify harm.

Some examples of actions that platforms may be expected to take are listed below, followed by a simplified sketch of what a reporting and takedown flow might look like:

  • Remove illegal content quickly or prevent it from appearing in the first place
  • Mitigate the risks of the platform being used to carry out certain criminal offenses (e.g. terrorism, child sexual exploitation, threats to kill, suicide assistance)
  • Prevent children from accessing harmful and age-inappropriate content
  • Protect children at risk of harm due to the features, functionalities, or design of the service
  • Enforce age limits and age-checking measures, and use age verification or age estimation to prevent children from accessing certain types of harmful content (e.g. pornography or content that encourages self-harm, suicide, or eating disorders)
  • Ensure the risks and dangers posed to children on the largest social media platforms are more transparent, including by publishing risk assessments
  • Provide parents and children with clear and accessible ways to report problems online when they arise
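
Several of these duties presuppose a working notice-and-takedown pipeline: users (including children and parents) need an accessible way to report content, and reports about illegal or child-safety harms need to be handled quickly. The sketch below is a minimal, hypothetical illustration in Python of how such a report might be recorded and routed; the category names, queue labels, and review-time targets are assumptions for illustration only, not figures taken from the Act or Ofcom guidance.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ReportCategory(Enum):
    """Illustrative report categories; the Act's own taxonomy is far more granular."""
    ILLEGAL_CONTENT = "illegal_content"
    HARMFUL_TO_CHILDREN = "harmful_to_children"
    TERMS_VIOLATION = "terms_violation"


@dataclass
class UserReport:
    """A single user report, with the fields needed to route and track it."""
    report_id: str
    content_id: str
    category: ReportCategory
    reporter_is_child: bool = False
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Hypothetical review-time targets in hours; the Act does not prescribe exact SLAs.
REVIEW_SLA_HOURS = {
    ReportCategory.ILLEGAL_CONTENT: 24,
    ReportCategory.HARMFUL_TO_CHILDREN: 24,
    ReportCategory.TERMS_VIOLATION: 72,
}


def route_report(report: UserReport) -> dict:
    """Assign a report to a moderation queue and attach a review deadline."""
    high_priority = (
        report.category in (ReportCategory.ILLEGAL_CONTENT,
                            ReportCategory.HARMFUL_TO_CHILDREN)
        or report.reporter_is_child
    )
    return {
        "report_id": report.report_id,
        "queue": "priority" if high_priority else "standard",
        "review_within_hours": REVIEW_SLA_HOURS[report.category],
    }
```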

In addition to its firm protections for children, the Act empowers adults to take control of what they see online. It provides three layers of protection for internet users, which will:

  1. Make sure illegal content is removed
  2. Make sure that social media platforms uphold the promises they make to users when they sign up, through their terms and conditions
  3. Place a responsibility on “Category 1” services (likely to be the largest social media platforms) to provide certain user “empowerment” tools (illustrated in the sketch after this list), which include:
  • Enabling users to block other users who have not verified their identity
  • Offering users the option to filter out harmful content, such as bullying, that they do not want to see online
  • Offering all adult users the option to verify their identity
  • Explaining which control features are offered, and how users can take advantage of them, clearly in the terms of service
  • Giving adults the opportunity to say whether they want to use the available control features on sign-up
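
For Category 1 services, the empowerment tools above amount to per-user safety settings that the platform must offer, explain, and honor. The following is a minimal, hypothetical sketch of how those settings might be modeled; the field names and filter labels are assumptions for illustration, not definitions from the Act.

```python
from dataclasses import dataclass, field


@dataclass
class UserSafetySettings:
    """Hypothetical per-user record of the empowerment tools described above."""
    identity_verified: bool = False          # the user has completed optional verification
    block_unverified_users: bool = False     # hide interactions from non-verified accounts
    content_filters: set[str] = field(default_factory=set)  # e.g. {"bullying", "abuse"}


def apply_signup_choices(block_unverified: bool, filters: set[str]) -> UserSafetySettings:
    """Record the choices an adult user makes when the controls are offered at sign-up."""
    return UserSafetySettings(
        block_unverified_users=block_unverified,
        content_filters=set(filters),
    )


def should_hide(author_verified: bool, content_labels: set[str],
                settings: UserSafetySettings) -> bool:
    """Return True if the viewer's settings say this content should be filtered out."""
    if settings.block_unverified_users and not author_verified:
        return True
    return bool(settings.content_filters & content_labels)
```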

Initially, online services will need to conduct between one and three detailed “risk assessments,” depending on the nature of the service. These assessments are the following:

Illegal content risk assessments

All in-scope services will need to assess the risks of harm to users that could arise as a result of illegal content on the platform, including how quickly and widely illegal content could be disseminated using algorithms. This risk assessment must take into account a number of factors including the platform user base, functionalities of the service, the different ways that the service is used, and the risk of the service being used for the commission or facilitation of a serious criminal offense. The risk assessment must also consider how the design of the service helps to mitigate or reduce any identified risks and promote media literacy.

Children’s access assessments

All in-scope services will need to carry out a specific risk assessment if their service (or part of the service) is likely to be accessed by children. To determine whether that is the case, platforms must first undertake a children’s access assessment, whose purpose is to ascertain whether the service is likely to be accessed by, or appeal to, a significant number of users who are children. Platforms will only be able to conclude that the service is not accessed by children if they can demonstrate that they are successfully using age verification or age estimation technologies to prevent this.
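
In practice, the access assessment reduces to a single question with a strict default. The function below is a hypothetical sketch of that decision logic, assuming the platform has already gathered evidence on its user base and on the effectiveness of any age assurance it runs; the parameter names are illustrative.

```python
def children_likely_to_access(
    age_assurance_in_place: bool,
    age_assurance_effective: bool,
    significant_child_user_base: bool,
    appeals_to_children: bool,
) -> bool:
    """Hypothetical outcome of a children's access assessment.

    A platform can only conclude that children are not likely to access the
    service if it can show that age verification or age estimation is actually
    keeping them out; otherwise, evidence of a significant child user base, or
    of content that appeals to children, settles the question.
    """
    if age_assurance_in_place and age_assurance_effective:
        return False
    return significant_child_user_base or appeals_to_children
```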

Children’s risk assessments

Like the illegal content risk assessment, the children’s risk assessment must take into account a number of factors, including the number of children who use the service (and their different age groups), the level of risk that children face of encountering certain types of harmful (not just illegal) content on the platform, and the risks these categories of content could pose to children of different age groups and characteristics. The risk assessment must also take account of the way the service is used and designed, including how it could facilitate the dissemination of content that is harmful to children. Platforms must also consider how the design of the service helps to mitigate or reduce any identified risks and promotes media literacy.

Once risk assessments and policies are in place, platforms will be legally required to uphold those policies and report on these activities. Platforms will also be required to carry out further risk assessments before making any significant changes to the design or operation of the service. 
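
Because these assessments must be documented, kept up to date, and repeated before any significant change to the service, some teams find it useful to hold them as structured records rather than standalone documents. Below is a minimal, hypothetical sketch of such a record in Python; the field names and the notion of a “significant changes” list are assumptions for illustration, not requirements from the Act.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class RiskAssessmentRecord:
    """Hypothetical structured record of a completed risk assessment."""
    assessment_type: str                 # e.g. "illegal_content" or "childrens_risk"
    completed_on: date
    user_base_summary: str               # who uses the service, including child age groups
    functionalities_reviewed: list[str]  # e.g. ["direct messages", "recommendations"]
    risks_identified: dict[str, str]     # risk -> assessed severity ("low"/"medium"/"high")
    mitigations: dict[str, str]          # risk -> mitigation in place or planned


def reassessment_required(planned_changes: list[str],
                          significant_changes: set[str]) -> bool:
    """A fresh assessment is needed before any planned change that the platform
    has classified as significant to the design or operation of the service."""
    return any(change in significant_changes for change in planned_changes)
```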

Duties

In response to these risks, platforms must create policies and implement measures to counter them. For instance, platforms will need to provide the means for users to easily report illegal content or content that is harmful to children, where applicable. They will also need to provide an easy-to-use and transparent complaints procedure and keep accurate records of the risk assessments they have undertaken in relation to illegal content and the risk to children. Platforms also have a duty to enforce their terms of service and to apply them consistently when taking down UGC, restricting user access to content, and suspending or banning users who do not comply. If a platform poses potential risks to children, it will also need to define the specific actions it will take to mitigate those risks.

In addition to the core duties of care, there are several other requirements that platforms may need to abide by. Requirements include the necessity to report child sexual exploitation content to the National Crime Agency and duties on larger platforms to tackle fraudulent advertising and produce transparency reports.  The bill also introduces a number of balancing measures, which oblige all regulated services to have “particular regard” for freedom of expression when implementing safety measures. Larger platforms also have specific duties to assess the impact of their measures on freedom of expression and privacy rights, to protect news and journalistic content that appears on the platform, and not to act against users other than in accordance with their terms of service.

What forms of content fall within the scope of the bill?

The law sets out two main categories of content that platforms will be required to act on:

1. Illegal Content

The law will require platforms to take proactive measures to protect users from encountering a defined list of priority illegal content, all of it covering offenses that already exist in legislation. These include:

  • Terrorist content
  • CSAM
  • Assisting suicide 
  • Threats to kill
  • Public order offenses, like harassment and stalking
  • Drug dealing
  • Weapons dealing
  • Assisting illegal immigration
  • Human trafficking
  • Causing/inciting prostitution for gain 
  • Possession of extreme porn and image-based sexual abuse content
  • Fraud
  • Offenses relating to criminal property and proceeds of crime
  • Certain financial services-related offenses
  • Certain offenses relating to foreign interference and national security
  • Other inchoate offenses including aiding, abetting, counseling, or procuring the commission of an offense 

In addition, platforms will need to take action against illegal content beyond the listed offenses once they have been notified of its existence.
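
Many Trust & Safety teams will want to map the priority offense areas above onto their internal policy taxonomy so that reports and classifier hits are routed consistently, with listed offenses handled proactively and other illegal content handled once notified. The mapping below is a hypothetical, simplified sketch; the internal labels and queue names are assumptions for illustration, not legal definitions from the Act.

```python
# Hypothetical mapping from internal policy labels to the priority offense areas
# listed above; the labels are illustrative shorthand, not legal definitions.
PRIORITY_OFFENSE_AREAS = {
    "terrorism": "Terrorist content",
    "csam": "Child sexual abuse material",
    "suicide_assistance": "Assisting suicide",
    "threats_to_kill": "Threats to kill",
    "harassment_stalking": "Public order offenses (harassment, stalking)",
    "drug_dealing": "Drug dealing",
    "weapons_dealing": "Weapons dealing",
    "illegal_immigration": "Assisting illegal immigration",
    "human_trafficking": "Human trafficking",
    "sexual_exploitation": "Causing or inciting prostitution for gain",
    "extreme_porn_ibsa": "Extreme pornography and image-based sexual abuse",
    "fraud": "Fraud",
    "proceeds_of_crime": "Criminal property and proceeds of crime",
    "financial_services": "Financial services offenses",
    "foreign_interference": "Foreign interference and national security",
}


def route_illegal_content(policy_label: str) -> str:
    """Send hits on listed priority offenses to a proactive-removal queue; other
    illegal content is handled reactively once the platform has been notified."""
    if policy_label in PRIORITY_OFFENSE_AREAS:
        return "proactive_removal_queue"
    return "notified_illegal_content_queue"
```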

2. Legal but Harmful to Children

Platforms accessible to children will be required to identify risks to children from content that is legal but harmful, and to implement proportionate measures to mitigate those risks and prevent children from accessing such content. The most damaging categories of content for children, which platforms will need to take particular care to prevent, are set out in the Act. These include “primary priority content” (pornography and content that encourages self-harm, suicide, or eating disorders), “priority content that is harmful to children” (abusive content that targets protected characteristics, bullying content, and content that encourages or depicts violence, encourages high-risk “challenges” or “stunts,” or encourages taking harmful substances), and other non-designated content that presents a “material risk of significant harm” to an appreciable number of children in the UK.
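
When translating these tiers into moderation policy, platforms typically need to distinguish content children must be prevented from encountering altogether from content that must be managed proportionately. The snippet below is a hypothetical, simplified grouping of the categories named above; the labels and action names are assumptions for illustration, and the authoritative definitions sit in the Act and Ofcom’s guidance.

```python
# Hypothetical grouping of the content categories described above into the two
# designated tiers; exact definitions live in the Act and Ofcom's guidance.
PRIMARY_PRIORITY_CONTENT = {
    "pornography",
    "encouraging_self_harm",
    "encouraging_suicide",
    "encouraging_eating_disorders",
}

PRIORITY_CONTENT = {
    "abuse_targeting_protected_characteristics",
    "bullying",
    "encouraging_or_depicting_violence",
    "dangerous_challenges_or_stunts",
    "encouraging_harmful_substances",
}


def child_safety_action(label: str, viewer_is_child: bool) -> str:
    """Illustrative policy decision for content shown to a child account.

    Primary priority content must be prevented from reaching children (for
    example through age assurance); priority content must be managed
    proportionately; anything else is assessed for a material risk of
    significant harm.
    """
    if not viewer_is_child:
        return "adult_policy_applies"
    if label in PRIMARY_PRIORITY_CONTENT:
        return "prevent_access_for_children"
    if label in PRIORITY_CONTENT:
        return "restrict_or_downrank_for_children"
    return "assess_material_risk_of_significant_harm"
```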

How will it be enforced?

Ofcom intends to be a proactive regulator now that the Act is law, and has already hired a significant task force to support these efforts. The regulator also expects the number of online services impacted by the law to reach 25,000 or more.

In preparation for this endeavor, Ofcom expects to produce over 40 regulatory documents – including codes of practice and guidance for service providers – which will set the specific expectations and rules for platforms to follow. Monitoring these services at scale will require Ofcom to establish automated data collection and analysis systems, as well as advanced IT capabilities – adding up to an expected cost of £169m by 2025, with £56m already incurred by the end of 2023.

What are the implications of non-compliance? 

Fines for failure to comply with the law can reach the greater of £18 million or 10% of a company’s global annual turnover, which can add up to billions of pounds for a large online platform (10% of a £20 billion annual turnover, for example, would be a £2 billion maximum fine). Moreover, Ofcom will be able to seek court rulings to stop payment platforms and internet service providers from working with harmful sites. Additionally, the law imposes criminal liability on company executives, such as senior managers and corporate officers, who fail to cooperate with the law.

How can companies prepare?

Over the next 18 months (or sooner), Ofcom will issue codes of practice and guidelines for online platforms. At that point, platforms will need to begin implementing new online safety mechanisms, as defined by the law and described above. Platforms and their executives will likely be held liable for a lack of compliance.

While the laid-out timeline seems prolonged, it is critical to note that the process of implementing online safety mechanisms is complex and expensive. Platforms that haven’t already enlisted the help of dedicated technology and tools may find that, by the time the specific requirements are laid out, they are already too late.

Technology platforms should take proactive action to keep users safe ahead of the Act’s duties coming into force. The codes of practice and guidance issued by Ofcom will form key planks of the online safety regime and its practical application once they have been published.

Ranging from managed intelligence services to content detection APIs and a dedicated Trust & Safety platform, ActiveFence’s solutions allow platforms of all kinds to ensure the safety of their users and services. By providing proactive insights into online harms before they impact users, we enable platforms to be legally compliant across geographies and languages. Moreover, by using ActiveOS, Trust & Safety leaders can quickly assess platform risks, establish policies, and ensure that content is quickly and efficiently handled by the right moderation team – limiting platform liability for harmful content. To learn more about how ActiveOS enables teams to remain compliant, click below.
