ActiveFence’s General Counsel, Michal Brand Gold, shares insights on the UK’s Online Safety Bill, explaining potential implications for online platforms.

The UK’s much-awaited Online Safety Bill intends to “make the UK the safest place to go online.” Initially introduced three years ago, the legislation has been read before parliament, proposing bold measures to ensure online safety for all users in the UK and, by extension, around the globe. However, the bill poses many potential problems for technology companies operating in the UK.

The bill will become a defining piece of legislation for the Trust & Safety industry. While it leaves many concerns unresolved, such as limitations on freedom of speech, compliance costs, and potential criminal liability, the bill is the first attempt at untangling the complicated work of Trust & Safety teams.

The legal team at ActiveFence has reviewed the bill, and below we share what we believe are its most crucial components.

Who will be impacted?

The Online Safety Bill will apply to any business, including search engines, that hosts user-generated content such as video, text, and images for others to view. While the bill specifies only which platform categories are excluded, based on its current phrasing we expect the legislation to apply to the following types of platforms:

  • Social media 
  • Chatting 
  • Gaming 
  • Certain file-sharing services (excluding internal business tools)
  • Online messaging (excluding email, SMS, and MMS services)
  • Music and video sharing
  • Blogging and potentially website hosting 

The Online Safety Bill is the first law to impose such specific duties on platforms that host user-generated content. Online services offering multiple products may also need to determine which parts of their businesses fall within the law’s scope. For instance, a platform offering one-to-one messaging and voice calling may face requirements on its messaging features, while its voice calls remain outside the law’s scope.

What are the bill’s requirements? 

Platforms falling under the law’s requirements will have a “duty of care” to keep their users safe. Introducing new terminology to the online space, the duty of care requires platforms to assess the risks to their users, put policies and procedures in place to minimize those risks, and take the necessary actions. Initially, online services will need to conduct between one and three detailed “risk assessments”:

  1. Assess the risks of illegal content on the platform 
  2. Assess the risks of harm caused by the service to children. All in-scope services will first need to assess whether they are accessible to children; if a platform is determined to be accessible to children, the company will then need to assess the risks of harm to children in further detail.
  3. Assess the risks of harm to adults 

These risk assessments are internal evaluations of potential risks and their severity to users. For the children’s and adults’ risk assessments, platforms will also need to consider the prevalence of legal but harmful content. Future legislation will designate certain types of harmful content as priority content, deeming the risks and negative consequences of these types of harm more severe.

In response to these risks, platforms must create policies to counter them. For instance, if a platform poses potential risks to children, it will need to define the specific actions it will take to mitigate those risks. Conversely, online services with verifiable age gates will not need to conduct a risk assessment for child safety concerns. Once risk assessments and policies are in place, platforms will be legally required to uphold those policies and report on these activities.

In addition to the core duties of care, there are several other requirements that platforms may need to abide by, including user reporting capabilities, a content moderation appeals system, and the obligation to report child sexual exploitation content to the National Crime Agency. “Category 1” services, currently defined as user-to-user platforms and likely to include the largest social media platforms, may need to provide users with increased control over harmful content as well as the ability to filter out “non-verified” users.

Who is Ofcom, and what powers do they have?

Ofcom, the United Kingdom’s communications regulator, has been designated as the body that will enforce the bill’s regulations. As part of its duty to oversee the execution of this law, Ofcom will have the power to:

  • Demand information and data from tech companies
  • Enter the premises of technology companies to access data and equipment, request interviews with company employees, and require companies to undergo an external assessment of how they keep users safe
  • Audit the algorithms that control what users see on platforms, such as social media feeds and search results
  • Set expectations on what tools service providers may use, such as proactive technology, to comply with their duties

What forms of content fall within the scope of the bill?

The law will set out three categories of content that platforms will be required to act on:

1. Illegal Content

The law will require platforms to take proactive measures to protect users from encountering 13 types of illegal content, all of which are already offenses under existing legislation:

  1. Terrorist content
  2. CSAM
  3. Assisting suicide 
  4. Threats to kill
  5. Harassment
  6. Drug dealing
  7. Weapons dealing
  8. Assisting illegal immigration
  9. Causing/inciting prostitution for gain 
  10. Revenge porn 
  11. Fraud
  12. Offenses relating to criminal property
  13. Certain financial services-related offenses

In addition, platforms will need to take action against illegal content beyond the listed offenses once notified of its existence.

2. Legal but Harmful to Children

Platforms accessible to children will be required to define risks to children that are legal but harmful and to implement proportionate measures to mitigate those risks and prevent children from accessing harmful content. The most damaging content for children, which platforms will need to take particular care to prevent, will be decided in secondary legislation after the bill’s passage. Platforms must also consider other types of harmful-to-children content beyond those set out in secondary legislation; however, the requirements attached to these will be less stringent.

3. Legal but Harmful to Adults

After the bill’s passage, legal but harmful content that impacts adults, and the duties required to counter it, will be defined in secondary legislation. Category 1 platforms will need to explain to users how the service handles this type of content.

What are the implications of non-compliance? 

Fines for failure to comply with the law will be the greater of £18 million or 10% of a company’s global annual turnover. Additionally, the law will impose criminal liability on company executives who fail to cooperate with regulators by destroying evidence, withholding information, or obstructing regulators from entering company offices; these offenses are punishable by jail time.
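
As a rough illustration of this penalty formula, the short sketch below (in Python, using a purely hypothetical turnover figure) computes the maximum potential fine as the greater of £18 million or 10% of global annual turnover:

    # Illustrative sketch only: the bill caps fines at the greater of
    # £18 million or 10% of a company's global annual turnover.
    def max_fine_gbp(global_annual_turnover_gbp: float) -> float:
        return max(18_000_000, 0.10 * global_annual_turnover_gbp)

    # Hypothetical example: a company with £500m in global annual turnover
    print(f"£{max_fine_gbp(500_000_000):,.0f}")  # -> £50,000,000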

How companies can prepare

Currently, the bill is working its way through the legislative process in the United Kingdom’s parliament. While it has many steps ahead, given its priority, it is expected to pass this year. Once in effect, secondary legislation will determine when elements of the law come into force and define what types of harmful content platforms need to address. At that point, likely in 2023, Ofcom will issue guidelines for conducting risk assessments. Once those guidelines are made public, platforms will have three months to conduct their risk assessments and share them with Ofcom. Platforms and executives will likely be held liable for non-compliance.

While the timeline laid out above may seem long, it is critical to note that three months is not enough time to complete a detailed risk assessment. Additionally, the bill is still likely to pass despite concerns raised by technology platforms and privacy groups about its implications for online discourse and moderation practices.

Technology platforms should take proactive action to keep users safe in preparation for the bill’s enactment. Companies that are not already tracking potential risks should start conducting internal, informal assessments. By acting today, platforms can put proper measures in place to secure their platforms and users, comply with the regulations, and avoid liability.

ActiveFence is following the advancement of the bill and will continue to publish our understanding of its implications for online platforms.