In part four of the Guide to Trust & Safety, we examine the industry through a historical lens, exploring how content moderation came into existence. From the internet’s creation, the first safety technology online, and the development of internet policies, we share the events that shaped, and continue to shape, Trust & Safety today.

The industry we call “Trust and Safety” is new and growing rapidly. To grasp its complexities, it is crucial to understand how it developed. Here, we delve into the history of content moderation. Exploring content and free speech laws from the 1800s to current events, we highlight the events that shaped the industry as we know it today.

Setting the Stage for Free Speech: Limiting Content Distribution

The 19th century saw the US and European countries alternately expanding and limiting freedom of speech across different formats: the press, books, mail, and even telegrams. In 1873, the US Postal Service began seizing “obscene, lewd or lascivious” materials, while France and Germany expanded freedom of the press. With the invention of the internet, these already complex laws became more complicated.

Freedom of Speech Transforms to Freedom of Web Speech

  • August 1991: The World Wide Web becomes publicly available
  • October 1991: CompuServe, one of the first major online service providers, is sued for hosting defamatory content on a forum. The court rules in its favor, holding that internet intermediaries are not liable given their lack of editorial involvement
  • 1996: Dubbed the “26 words that created the internet,” the famous Section 230 is passed as part of the Communications Decency Act. The provision grants providers of an “interactive computer service” immunity from liability for information published by third-party users

Communications Move to the Web, Building Virtual Communities

  • 1997: The first social media platform, Six Degrees, goes live. The platform allows users to send messages and post on the virtual boards of people in their networks, as well as see their mutual connections on the site
  • 1999: eBay is created and quickly introduces policies to counter the high volume of illegal goods being sold
  • 2000: Napster, the audio file-sharing platform, is effectively shut down after a judge orders the site to stop allowing copyrighted music to be exchanged
  • 2001: Yahoo! bans the sale of Nazi memorabilia after a landmark ruling. French courts rule that since Yahoo! can identify and filter the content its users access, it must block French users from this content or face fines
  • 2004: “The Facebook” launches for university students at select US schools
  • 2005: YouTube is created
  • 2005–2008: MySpace is the largest social networking site in the world, surpassing Google as the most visited website in the United States in 2006

Countries Begin to Take Action; Companies Start to Moderate

  • 2007: Turkey bans YouTube after videos insulting Mustafa Kemal Atatürk, the founder and first president of modern Turkey, surface
  • 2009: YouTube and Facebook are blocked in at least 13 countries
  • 2009: Microsoft develops PhotoDNA, a safety technology that computes digital fingerprints of images to detect known child sexual abuse material (CSAM). In 2018, the technology was made available for free and has since been adopted by major tech companies, decreasing content moderators’ exposure to this type of challenging content
  • 2010: Facebook releases its first set of Community Standards, which outlines what is and isn’t allowed, in English, French, and Spanish
  • 2012: Twitter launches its first transparency report, which describes how the company enforces its rules, protects privacy, navigates requests from governments around the world, and more
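PhotoDNA itself is proprietary, but the general mechanism it popularized—compare a fingerprint of each upload against a database of fingerprints of known abusive images—can be sketched. The toy example below uses a cryptographic hash purely for simplicity; PhotoDNA actually computes a robust perceptual hash so that resized or re-encoded copies of an image still match. All names and data here are illustrative, not from any real system.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint of an image's raw bytes.

    A cryptographic hash matches only byte-identical files; real
    systems use perceptual hashes that tolerate re-encoding.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# A platform maintains fingerprints of known-bad images, typically
# supplied by clearinghouse organizations rather than computed in-house.
known_bad = {fingerprint(b"example-known-bad-image-bytes")}

def should_flag(upload: bytes) -> bool:
    """Flag an upload if its fingerprint matches the known-bad list."""
    return fingerprint(upload) in known_bad

print(should_flag(b"example-known-bad-image-bytes"))  # True
print(should_flag(b"a-benign-photo"))                 # False
```

The key design point is that moderators never need to view a matched image again: once fingerprinted, every future copy is caught by a lookup, which is what the timeline above credits with reducing moderator exposure.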

Global Events Shape Content Moderation 

  • 2014: The beheading of American journalist James Foley appears online, spurring the widespread distribution of terror-linked content. In response, YouTube bans all violent content, reversing its policy of allowing violent videos for educational purposes
  • October 2016: News breaks that thousands of social media campaigns orchestrated by Russia interfered in the US presidential election
  • November 2016: Sleeping Giants, a social activist group, is created after Donald Trump’s election with the purpose of pressuring companies to remove ads from alt-right news outlets. Its first action is the launch of a Twitter account promoting a boycott of Breitbart News
  • December 2016: Facebook implements a fact checking mechanism and forms partnerships with fact checking organizations
  • March 2017: Over 250 brands pull their ads from Google after a newspaper investigation discovered that their ads appeared next to extremist videos
  • March 2017: To counter hate speech on their platform, Twitter implements IBM Watson, a tool that analyzes data using natural language processing
  • April 2018: Mark Zuckerberg testifies before Congress for the first time, where he apologizes for allowing Facebook tools to be used for harm
  • April 2018: FOSTA (the Fight Online Sex Trafficking Act) and SESTA (the Stop Enabling Sex Traffickers Act) are signed into law, amending Section 230 to hold platforms liable for sex trafficking content. Ads for prostitution and consensual sex work are included in the ban
  • June 2018: Activists and academics launch the Santa Clara Principles for “how to best obtain meaningful transparency and accountability around moderation of user-generated content.” Since then, major companies including Apple, Meta, Google, Reddit, Twitter, and GitHub have endorsed these principles
  • November 2018: Facebook increases the number of content moderators from 4,500 to 7,500
  • February 2019: The Verge publishes a news story about the harsh working conditions of content moderators, citing low pay, no breaks, and the development of health conditions such as PTSD. Workers are said to turn to marijuana and sex during breaks to cope with the mental weight of viewing horrific content
  • May 2019: The “Christchurch Call” is launched in response to the March 2019 live streaming of the mass shootings at two mosques in Christchurch, New Zealand. The call introduces a plan to stop platforms from being used as tools by terrorists

Where We Stand Today

  • March 2020: As the COVID-19 pandemic intensifies, the UN Secretary-General labels the “infodemic” of misinformation an enemy alongside the virus, stating the need to promote facts and science
  • June 2020: The Trust & Safety Professionals Association is founded to “support the global community of professionals who develop and enforce principles and policies that define acceptable behavior online”
  • July 2020: Facebook expands its content moderation team to 15,000
  • Jan 6, 2021: Donald Trump supporters attack the US Capitol. Social platforms are accused both of spreading the disinformation that fueled the riot and of allowing it to be organized on their services
  • May 2021: The UK introduces the Online Safety Bill, establishing a duty of care that platforms owe their users, with heavy fines for non-compliance
  • July 2021: France passes the Respect for the Principles of the Republic bill, requiring platforms to have a legal representative within the country, remove content as directed by French courts, and publicize their trust and safety measures
  • May 2021: The global number of content moderators increases to over 100,000 people
  • October 2021: Facebook whistleblower Frances Haugen testifies before Congress, alleging that Facebook hides the harms of its platform. Calling for urgent external regulation, she claims that Mark Zuckerberg “has unilateral control over three billion people”
  • October 2021: Facebook’s parent company rebrands itself as Meta
  • January 2022: Neil Young starts a wave of musicians boycotting Spotify to protest its hosting of the Joe Rogan podcast
  • February 24, 2022: Russia attacks Ukraine, using long-standing disinformation as the catalyst for the invasion. Tech companies respond, implementing strong measures to keep their platforms from being tools of information warfare.


The trust and safety community will continue to be challenged. From adapting traditional media practices to online media, to evolving laws, regulations, and policies worldwide, to ever-increasing online threats, trust and safety teams must move from being reactive to proactive. User experience, safety, and trust are where teams must place their efforts and energies.

We believe that the lens of trust and safety’s history provides context for how the industry developed and will continue to develop. Current events, globalism, and developing technologies have all contributed to the way the online realm looks today. As we have learned, what takes place online directly impacts what takes place offline. Trust and safety teams have the power not only to protect online users, but also to protect real people around the world every day.

To learn more about the trust and safety industry, visit our Guide to Trust & Safety.