Policy Enforcement: A Nuanced Approach

June 1, 2022

Proper enforcement is key to platform policy. To ensure a healthy online community, however, Trust & Safety teams must take a nuanced approach to that enforcement. In this blog, we provide the context needed to do just that: create a healthy and safe space for users.

Trust & Safety teams look to platform policy as their rule book, guiding almost everything in their day-to-day work. But for these rules to work, teams must be able to enforce them effectively. What ties Trust & Safety together are the actions content moderators take against violations, ensuring that rules are enforced and users are kept safe.

Now that we have a better understanding of policy development and the detection tools necessary for content moderation, this blog focuses on the next step: policy enforcement. Starting from the philosophy behind enforcement actions and the considerations that shape them, we show how developing nuanced responses to violations and violators fosters a healthy and safe online community.

The Philosophy of Policy Enforcement

Typically, enforcement actions are viewed as black and white: if an account or piece of content is violative, it is removed. However, many options are available in the gray area that offer more effective ways to handle violations.

According to Eric Goldman, Professor at Santa Clara University School of Law, varied content moderation remedies “help to improve the efficacy of content moderation, promote free expression, promote competition among internet services, and improve Internet services’ community-building functions.” In his article, he provides an in-depth look at the advantages of such an approach.

The way we see it, a diverse approach to enforcement has three main benefits for content moderation: ensuring safety, allowing for freedom of expression, and building a healthy community.

1. Safety

First and foremost, a platform's priority should be the safety of its users. With more enforcement options, platforms can respond more effectively to a broader range of harmful content. If the only two options are immediate takedown or no action at all, whole categories of harmful content may be deemed non-violative, leaving potentially damaging content accessible to all users. More options give platforms less extreme choices, such as limiting content visibility, hiding content, or gating content behind age verification.

A good example is content that is potentially harmful to one community, such as children, but not to another. With only black-and-white options, age-inappropriate content must remain either accessible or inaccessible to everyone. Age verification gates, however, protect children while still letting users of an appropriate age access the content.
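To make the contrast with binary takedowns concrete, here is a minimal sketch in Python of a graduated enforcement model. The action names, content flags, and decision rules are purely hypothetical illustrations, not any particular platform's API or policy.

```python
from enum import Enum, auto
from dataclasses import dataclass

class Action(Enum):
    """A hypothetical set of graduated enforcement actions."""
    NO_ACTION = auto()
    AGE_GATE = auto()          # require age verification before viewing
    LIMIT_VISIBILITY = auto()  # keep the content up, but stop recommending it
    HIDE = auto()              # drop from feeds and search, reachable by direct link
    REMOVE = auto()            # full takedown, reserved for clear violations

@dataclass
class Content:
    is_violative: bool   # clearly breaks policy
    adult_only: bool     # harmful to minors but acceptable for adults
    borderline: bool     # potentially harmful, but not clearly violative

def choose_action(content: Content) -> Action:
    """Pick the least restrictive action that still protects users."""
    if content.is_violative:
        return Action.REMOVE
    if content.adult_only:
        return Action.AGE_GATE          # protects children, keeps access for adults
    if content.borderline:
        return Action.LIMIT_VISIBILITY  # less extreme than removal
    return Action.NO_ACTION
```

The specific rules matter less than the shape of the decision space: once more than two outcomes exist, age-inappropriate content no longer has to be either removed for everyone or left visible to everyone.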

With more options, more protection is possible.

2. Freedom of speech

On the flip side of user safety is the right for users to express themselves freely. According to a recent Pew Research report, “roughly three-quarters of Americans (77%) now think it is very or somewhat likely that social media sites intentionally censor political viewpoints they find objectionable, including 41% who say this is very likely.”

To prevent users from forming this opinion, it’s essential that platforms do what they can to maintain freedom of expression.

For content that is nuanced rather than directly dangerous, platforms can use enforcement mechanisms that preserve freedom of expression. A more comprehensive approach may include labeling content, warning users, or offering alternative information. COVID-19 content, election-related content, and content published by public figures are good examples of where these additional options help.
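As a brief, hypothetical illustration (the label text and topic checks below are ours, not a real platform's system), labeling can be modeled as attaching context to a post rather than taking it down:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    labels: list[str] = field(default_factory=list)
    interstitial: str | None = None  # optional warning shown before the post

def add_context(post: Post, topic: str) -> Post:
    """Attach labels or a warning instead of removing the post (illustrative only)."""
    if topic == "covid-19":
        post.labels.append("See official public-health guidance")
    elif topic == "election":
        post.labels.append("Get the latest voting information")
        post.interstitial = "Independent fact-checkers dispute this claim."
    return post
```

The post stays up and the author keeps their voice, but readers get additional information to help them judge it.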

3. Building a healthy community

Safety and freedom of speech are at the heart of a healthy community. A dynamic set of options lets users feel they have a safe space to express themselves and interact with other users. Alternative enforcement options also make a platform more transparent to its users: instead of binary takedowns, actions such as warnings, notifications, or labels allow users to understand and appeal decisions. Furthermore, leaving controversial content online fosters trust in a platform, while a simple notice can still inform users that the content may be problematic.

This contributes to a healthy community where platforms can interact with their users and vice versa.

Considerations

When determining enforcement actions against violations, there are a few essential things Trust & Safety teams should consider:

  • The context of the violation
  • The user's history
  • Whether the content targets a person or group of people
  • Whether the account belongs to a public leader
  • Whether the content is of importance to the public
  • The severity of the transgression

These considerations provide color to each violation, allowing for fairer, more nuanced, and more varied enforcement decisions.
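As a minimal sketch of how these factors might be combined, here is a hypothetical recommendation function. The field names, thresholds, and action strings are our own illustrative assumptions, not a prescribed enforcement policy.

```python
from dataclasses import dataclass

@dataclass
class ViolationContext:
    """The considerations above, captured as structured inputs."""
    severity: int                  # e.g. 1 (minor) to 5 (egregious)
    prior_violations: int          # user history
    targets_person_or_group: bool
    account_is_public_leader: bool
    public_interest: bool          # content is of importance to the public

def recommend_enforcement(ctx: ViolationContext) -> str:
    """Map the considerations to a recommended action (illustrative only)."""
    if ctx.severity >= 4 or (ctx.targets_person_or_group and ctx.severity >= 3):
        return "remove"
    if ctx.public_interest or ctx.account_is_public_leader:
        # newsworthy or high-profile content may warrant a label over removal
        return "label_and_limit_visibility"
    if ctx.prior_violations >= 2:
        return "warn_and_limit_visibility"
    return "warn"
```

In practice such logic would be far richer and paired with human review, but it shows how context, history, and severity can widen the set of fair outcomes beyond a simple takedown.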

As we've learned, a nuanced enforcement policy is crucial for maintaining a healthy and vibrant online community. In the gray area, dozens of options exist, such as labeling content, warning viewers, limiting visibility, and many more. To learn more, access our Guide to Policy Enforcement for a complete list of enforcement actions, including their suggested uses and examples.
