The History of Trust & Safety

September 14, 2023

From the inception of the internet to the introduction of the first online safety technologies and the development of internet policies, these are the pivotal events that continue to shape the practice of Trust & Safety and lay the foundation for its future.

Across platforms, Trust & Safety teams work toward a shared goal of ensuring online safety. In doing so, their efforts contribute to the architecture of social landscapes and the moderation of culture. For newly established teams, the implementation of a comprehensive Trust & Safety strategy requires a holistic understanding of how the field has evolved.

As is the case with other industries, the practice of Trust & Safety is influenced heavily by geopolitical events and technological advances. This influence can be seen in the evolution of Trust & Safety policies, online safety legislation, and platform usage over time. From the origin of content moderation to seminal global events and the emergence of online safety regulations, here are some of the pivotal events in the history of Trust & Safety. 

Virtual Communities Emerge 

During the initial stages of the internet’s rise, the primary emphasis was placed on developing platforms and services rather than ensuring online safety.

  • 1991: The public debut of the World Wide Web
  • 1996: Described as “the 26 words that created the internet,” Section 230 of the US Communications Decency Act is passed, providing immunity from liability for providers of an “interactive computer service” who publish information provided by third-party users
  • 2000: Napster, the audio-sharing platform, is effectively shut down after a judge bars the site from allowing the free exchange of copyrighted music
  • 2004: “The Facebook” launches for university students at select US schools
  • 2005: YouTube is created

Content Moderation Takes Root

As platform usage gains momentum, the growth of user-generated content (UGC) is accompanied by rising rates of harmful content, sparking the need for online safety through content moderation.

  • 2009: Microsoft develops PhotoDNA, a digital hashing technology still used today to flag and remove child sexual abuse material (CSAM). It converts images into unique digital signatures (“hashes”) and automatically checks whether new images match those already classified as harmful (see the simplified sketch after this list).
  • 2010: Google becomes the first platform to release a Transparency Report, and Facebook releases its first Community Standards in English, French, and Spanish. Over the next three years, eight more platforms will release their first Transparency Reports.
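
PhotoDNA’s actual algorithm is proprietary, but the general hash-and-match flow it popularized can be illustrated with a short sketch. The example below is a hypothetical stand-in, not PhotoDNA itself: it uses the open-source `imagehash` library’s perceptual hash, and the hash value, threshold, and function name are illustrative only.

```python
# Simplified sketch of hash-based image matching in the spirit of PhotoDNA.
# PhotoDNA's real algorithm is proprietary; the open-source `imagehash`
# library's perceptual hash is used here purely as a stand-in.
from PIL import Image
import imagehash

# Hypothetical database of hashes for images previously classified as
# harmful (placeholder value for illustration only).
KNOWN_HARMFUL_HASHES = {
    imagehash.hex_to_hash("d1d1a5a5d1d1a5a5"),
}

# Maximum Hamming distance still treated as a match; a small tolerance
# lets the check survive minor edits such as resizing or re-encoding.
MATCH_THRESHOLD = 5

def is_known_harmful(image_path: str) -> bool:
    """Hash a newly uploaded image and compare it against the known set."""
    new_hash = imagehash.phash(Image.open(image_path))
    return any(new_hash - known <= MATCH_THRESHOLD
               for known in KNOWN_HARMFUL_HASHES)
```

The key design point is that matching is done on compact signatures rather than raw images, so a platform can screen uploads against a database of known-harmful hashes without storing or redistributing the images themselves.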

Global Events Shape Content Moderation 

During this time period, key geopolitical events and a shifting legal landscape directly impact the course of online safety. 

  • 2016
    • Reports break of widespread Russian-orchestrated social media campaigns carried out to interfere in the US presidential election.
    • Facebook implements its first fact-checking mechanism.
  • 2017
    • Twitter implements a natural language processing (NLP) tool to analyze data related to an increase in hate speech on the platform.
    • Molly Russell, a 14-year-old from the UK, dies by suicide after exposure to self-harm content, prompting widespread calls for increased content moderation.
  • 2018
    • Activists and academics launch the Santa Clara Principles, a framework for “how to best obtain meaningful transparency and accountability around moderation of user-generated content.” Major companies, including Apple, Meta, Google, Reddit, Twitter, and GitHub, have since endorsed the principles.
    • A content moderator who developed PTSD files a class-action lawsuit against Facebook over working conditions that involved prolonged exposure to disturbing content. Around this time, media coverage begins to highlight the toll of the work on content moderators’ health and well-being.
  • 2019
    • The “Christchurch Call” is created in response to the livestreaming of the mass shootings at mosques in Christchurch, New Zealand. The Call introduces a plan to stop platforms from being used as tools by terrorists.
  • 2020
    • An “infodemic” of misinformation spreads alongside the COVID-19 virus as both take hold around the world.
    • The Trust & Safety Professionals Association (TSPA) is founded to “support the global community of professionals who develop and enforce principles and policies that define acceptable behavior online.”
  • 2021
    • Alt-right rioters storm the US Capitol in an attempted insurrection. Participants leverage social media to plan the attack and mobilize support, and the platforms used are heavily criticized for failing to moderate related content.
    • The UK introduces the Online Safety Bill, a first-of-its-kind piece of legislation about user safety on digital platforms. Two years after its introduction, the bill is now set to become law.
    • Facebook whistleblower Frances Haugen testifies before Congress about the platform’s alleged hiding of its harms, calling for urgent external regulation.

New Regulations Mandate Terms of Online Safety

In recent years, new national online safety regulatory frameworks have emerged to proactively combat illegal online activity and harmful online content, marking a growing shift toward increased platform accountability and transparency.

  • 2022
    • Russia invades Ukraine, using years of disinformation as grounds for the military campaign. Tech companies respond by implementing measures to prevent their platforms from being used as tools of information warfare.
    • EU lawmakers adopt the Digital Services Act (DSA), establishing online safety requirements for all online platforms active in the EU. In August 2023, the DSA becomes enforceable for very large online platforms (VLOPs) and very large online search engines (VLOSEs).
    • Consumers gain access to generative AI, transforming the trajectory of online engagement and safety.
    • The Republic of Ireland, Singapore, and Turkey adopt national online safety laws.
  • 2023
    • The US Supreme Court declines to review Section 230, leaving safe harbor immunity intact for platforms.

Looking Ahead to 2024 

  • In February 2024, the DSA will become enforceable for all other online platforms active in the EU, not just VLOPs and VLOSEs.
  • Proposed by the EU, the Artificial Intelligence Act is the first comprehensive AI law introduced by a major regulator. If passed, it could be adopted before the next EU elections in June 2024.

These foundational events have formed the bedrock of the Trust & Safety community and have set the stage for the shared objective of ensuring online safety. What comes next? While it is impossible to accurately predict the future, looking at the history of Trust & Safety can give us an idea of just how quickly the industry can change.

One thing we know for sure: the Trust & Safety community will face a steady stream of online safety threats spanning abuse areas and languages, demanding swift responses to ever-evolving evasion techniques. For experienced and newly formed teams alike, understanding core principles, such as safety by design, policy development, and enforcement, and navigating a shifting legal landscape are critical to proactively mitigating these threats. Recognizing the industry’s origins can help empower Trust & Safety teams to protect online users and preserve the integrity of their platforms.

For additional insights into the Trust & Safety industry, read The Trust & Safety Industry: A Primer.
