The Trust & Safety Policy Review

October 28, 2021

In a dynamic digital environment, crafting platform policy can be challenging. ActiveFence has completed an intensive deep dive into the policies of over twenty-six platforms to serve as a guide for Trust & Safety teams as they build their own.


Building a company’s Trust & Safety policies is challenging: the task requires extensive research and the navigation of many complexities, and it can be daunting. To assist Trust & Safety professionals, ActiveFence completed an intensive deep dive into the policies of over twenty-six platforms to serve as a guide for crafting policy. Through an in-depth analysis of each platform’s policies on various threats, ActiveFence has gathered insights into how platforms have created their policies. We’ve developed comparative policy reports on health and electoral disinformation, child safety, illegal and violent activities, and marketplaces. Our Trust & Safety primer also offers an overview of policy building and the role teams play in creating effective platform policies.

In this blog, we consolidate what we’ve learned to help policy specialists understand the ins and outs of the largest social platforms’ policies, providing critical considerations for crafting and developing your own. From the origins of policies and the different approaches platforms take to wording and enforcement, we’ve done the research. Here, we share five takeaways that we believe will help platforms hosting User-Generated Content (UGC) build their own policies.

1. What Is The Purpose Of Platform Policy?

Whether called community guidelines, content policies, or trust & safety policies, these documents set the ground rules for platform use, outlining what can and cannot be done in order to create transparent processes. While UGC platforms develop these policies first and foremost to ensure the safety of their users, they must also ensure compliance with the various national laws and regulations that apply to them.

Section 230 of the Communications Decency Act states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

While this may seem to grant social platforms blanket immunity, many regulations fall outside its scope, leaving platforms liable for certain abuses.

In the European Union, online platforms have one hour to remove or block terrorist content or face considerable fines, which has led many platforms to adopt tougher policies. When it comes to child sexual abuse material (CSAM), many platforms take a stricter and more uniform stance, given the severity and legal ramifications of hosting and enabling the distribution of CSAM.
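
To make the one-hour requirement concrete, here is a minimal sketch of how a moderation queue might check whether a flagged item has exceeded its removal window. The data model, the function name is_overdue, and the idea of measuring from the time an item was flagged are illustrative assumptions, not any platform's or regulator's actual workflow.

```python
# Hypothetical sketch: checking a one-hour removal deadline for flagged terrorist content.
# The fields, names, and queueing logic here are illustrative assumptions only.

from datetime import datetime, timedelta, timezone

# EU rule referenced above: terrorist content must be removed or blocked within one hour.
REMOVAL_DEADLINE = timedelta(hours=1)

def is_overdue(flagged_at: datetime, now: datetime | None = None) -> bool:
    """Return True if a flagged item has passed the one-hour removal window."""
    now = now or datetime.now(timezone.utc)
    return now - flagged_at > REMOVAL_DEADLINE

if __name__ == "__main__":
    # An item flagged 75 minutes ago has already missed the deadline.
    flagged = datetime.now(timezone.utc) - timedelta(minutes=75)
    print(is_overdue(flagged))  # True: should already have been removed or escalated
```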

A company must fully understand the existing laws and regulations that apply to it and its users before determining policies, to ensure that it abides by the law and minimizes the risk of violations. Industry Trust & Safety leaders often ask us for regulatory guidance, and we develop up-to-date guides based on our knowledge of the field. Contact us to request a comprehensive list and explanation of the legislation that applies to social sharing platforms.

2. How Are Platform Threats Defined?

In some cases, the definition of a threat is clear. However, every definition leaves room for interpretation, forcing platforms to make judgment calls themselves. For instance, policymakers can follow clear outlines of what is and is not CSAM. But questions arise around issues such as appearance: if someone is not a minor but appears to be one, does that content fall under the definition of CSAM?

A similar issue arises with disinformation. Outwardly, platforms say that all disinformation is forbidden. When it comes to individual cases, however, the definition of disinformation becomes more complicated, and a platform must make judgment calls itself. Most recently, YouTube removed the grey areas from its vaccine misinformation policy: the company decided to ban not only COVID-19 vaccine misinformation but misinformation about all approved vaccines, and removed high-profile anti-vaccination accounts.

Regarding terrorism, some platforms cooperate with or refer to outside organizations to define which groups are and are not terrorist organizations. As we saw with the return of the Taliban, many social media platforms refer to the US Department of State’s Foreign Terrorist Organizations (FTO) list, which does not include the Taliban, leaving official Taliban accounts intact on platforms such as Twitter. When TechCrunch questioned YouTube’s approach, the company responded with the following:

YouTube complies with all applicable sanctions and trade compliance laws, including relevant U.S. sanctions. As such, if we find an account believed to be owned and operated by the Afghan Taliban, we terminate it. Further, our policies prohibit content that incites violence.

While policymakers can refer to official, clear definitions, they should stay cautious and alert to ongoing developments and events, and update their definitions as needed.
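
As a rough illustration of how an external designation list can feed into enforcement while still leaving room for judgment calls, here is a minimal sketch. The list entries, the Account data model, and the evaluate_account function are hypothetical assumptions for illustration, not any platform's actual implementation.

```python
# Hypothetical sketch: using an external designation list as one input to enforcement.
# The entries, names, and review logic are illustrative assumptions only.

from dataclasses import dataclass

# Example entries modeled on an external source such as the US State Department's FTO
# list; a real policy team would sync this from an authoritative, regularly updated feed.
DESIGNATED_ORGANIZATIONS = {"islamic state", "al-qaeda"}

@dataclass
class Account:
    handle: str
    claimed_affiliation: str  # self-described organization, if any

def evaluate_account(account: Account) -> str:
    """Return an enforcement decision based on an account's claimed affiliation."""
    affiliation = account.claimed_affiliation.strip().lower()
    if not affiliation:
        return "no_action"
    if affiliation in DESIGNATED_ORGANIZATIONS:
        return "terminate"          # clear match against the designation list
    return "escalate_to_review"     # not listed, but warrants a human judgment call

if __name__ == "__main__":
    print(evaluate_account(Account("news_account", "")))
    print(evaluate_account(Account("recruiter01", "Islamic State")))
    print(evaluate_account(Account("spokesperson", "Taliban")))  # not on the FTO list: routed to review
```

The point of the sketch is the last case: when a group falls outside the official definition a platform relies on, the policy itself has to say what happens next, rather than leaving the decision implicit.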

3. When Should Platforms Choose Between Specific And Broad Policy Language?

We’ve learned that platform policies run the full spectrum from explicit to catch-all. While platforms like Facebook, Instagram, and YouTube are extremely specific in their policies, Snapchat, Dailymotion, and Tumblr prefer more general wording.

When taking a closer look at electoral disinformation policies, we found that YouTube far surpasses any other platform in its specificity. The prohibition on “content encouraging others to interfere with, obstruct or interrupt democratic processes” is just one of many items on YouTube’s list of specifically prohibited content.

An example of a more general platform prohibition is Snapchat’s wording on terrorism. The platform bans “terrorist organizations from use of the platform,” and uses equally broad phrasing to prohibit “engagement with any illegal activity.”

It is worth noting that while broad wording allows for wide interpretation, it can also leave room for criticism over freedom-of-speech violations, adding the potential for a public relations crisis.

4. How Do Social Platforms Respond To Abuse Of Policies?

Platforms take many different approaches to responding to policy violations. We’ve found that while some platforms confidently remove material they determine to be fake, others only flag potentially problematic material.

Here we list the top responses to policy violations:

  • Flagging and labeling content as “disputed content” or, in the case of Instagram and COVID-19 disinformation, offering COVID-19 resources at any mention of the virus
  • Removal of content proven to have negative intent
  • Removal of content regardless of intent
  • Reporting content to law enforcement
  • Immediate, temporary, or permanent suspension of an account violating policy
  • Fact-checking in cases of grey content

Responses to violations do not need to be one-size-fits-all and can vary based on the violations. 
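
To illustrate that last point, the sketch below maps violation categories to graduated sets of responses drawn from the list above. The category names, the Response enum, and the specific mappings in the matrix are assumptions chosen for illustration, not any platform's actual enforcement rules.

```python
# Hypothetical sketch: mapping violation categories to graduated responses.
# Category names and the chosen responses are illustrative assumptions only.

from enum import Enum, auto

class Response(Enum):
    LABEL_DISPUTED = auto()           # flag and label, e.g. as "disputed content"
    REMOVE = auto()                   # take the content down
    REPORT_TO_LAW_ENFORCEMENT = auto()
    SUSPEND_ACCOUNT = auto()
    SEND_TO_FACT_CHECK = auto()       # grey content routed to fact-checkers

# One possible enforcement matrix: severe, clearly illegal material gets the strongest
# response regardless of intent, while grey areas get softer interventions.
ENFORCEMENT_MATRIX: dict[str, list[Response]] = {
    "csam": [Response.REMOVE, Response.REPORT_TO_LAW_ENFORCEMENT, Response.SUSPEND_ACCOUNT],
    "terrorist_content": [Response.REMOVE, Response.SUSPEND_ACCOUNT],
    "health_disinformation": [Response.LABEL_DISPUTED, Response.SEND_TO_FACT_CHECK],
    "electoral_disinformation": [Response.LABEL_DISPUTED, Response.SEND_TO_FACT_CHECK],
}

def responses_for(violation: str) -> list[Response]:
    """Look up the configured responses for a violation category."""
    return ENFORCEMENT_MATRIX.get(violation, [Response.SEND_TO_FACT_CHECK])

if __name__ == "__main__":
    for category in ("csam", "health_disinformation", "unknown_category"):
        print(category, "->", [r.name for r in responses_for(category)])
```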

5. How Do Platforms Tackle Threats As They Develop?

As we mentioned above, we recommend that platforms follow clear definitions in their policies while staying alert to trends and events as they develop. In recent years, this has become increasingly necessary, and we’ve seen many instances where social media platforms had to grapple with how to respond to developing threats.

For instance, in response to the disinformation and violence surrounding the United States’ 2020 presidential election, platforms like Facebook and Twitter developed highly specific policies, whereas other platforms, including TikTok, worked with fact-checkers and other organizations to verify claims made during the election.

After the US Capitol riots on January 6, 2021, social media giants took tougher precautions than ever before in advance of the presidential inauguration. Facebook blocked the creation of new events near the White House, the US Capitol, and state capitols through Inauguration Day, and restricted users who had repeatedly violated its policies. The company also blocked ads for military gear and accessories for US users, in response to such ads appearing next to news coverage of the riots.

As we’ve learned, content policies should first and foremost protect the safety of platform users. Policymakers must be aware of all relevant regulations and legislation to ensure that policies are in accordance with the law. Policies should be rigorous and detailed, but they should also be non-exhaustive. The challenge faced by policy builders is to build a system for evaluating content that protects the spirit of the policy’s wording and can respond to new and evolving threats. In addition, companies must understand their place within a dynamic digital environment, both as it is today and as it will be in the future.

 
