In part one of the Guide to Trust & Safety, we share the four functions necessary to build a Trust & Safety team. From safety by design to policy enforcement, we walk you through the must-haves every team needs.
Whether you run a small company or a large social network, Trust & Safety practices should be built into a company by design. With the purpose of protecting a company’s users from harm, these practices apply to any online platform with user-generated content (UGC).
From product design to company operations, beginning to build an effective trust and safety program can be daunting. Here, we tackle one of the most important components of Trust & Safety: building an effective team that ensures the safety of your platform and company.
Before covering the core functions of trust and safety, it’s important to note that the hires who make up your team will likely come from diverse backgrounds. While you might be looking for the perfect resume, remember that trust and safety is a young industry, and diversity only adds value to your team. Passion, the ability to execute, and a belief in protecting users may matter more than experience itself. That said, a background in NGOs, law, or security may signal not only passion but relevant experience as well.
With that in mind, we’ll dive right into the four Trust & Safety functions any company needs to get started.
1. Safety by Design
In today’s online climate, companies must prioritize trust and safety from the start. By designating a team role dedicated to the product, teams can ensure that safety by design is built in from day one.
This role ensures that the product includes safety mechanisms, such as user reporting, and meets trust and safety guidelines. Once the product launches, safety issues will inevitably arise, and this function will be needed to manage them.
2. Policy, Policy, Policy
You’ve heard it before and you’ll hear it again: it’s all about policy. Without effective policy, your team won’t be able to take any action against harmful content. Your first hire should be someone who can develop platform policy that defines what harmful content is. This role is largely legal and requires someone with a legal background, ideally in technology law. Understanding harmful behaviors and being able to define the red lines of a violation will be key. While some activities are clear violations, such as child sexual abuse material, issues like what counts as misinformation are trickier to define. This is where someone with a nuanced approach is needed.
Furthermore, skills alone aren’t enough for the job; passion for trust and safety is needed as well. With a personal calling to protect online users, whether safeguarding children or stopping disinformation, employees will do their jobs far more effectively.
3. Policy Enforcement

Once a product and policy are in place, companies will need to act on policy breaches and violations; without enforcement, policy is meaningless. Companies often vary in how they structure this function. Size, product nature, and budget will influence not only what a team needs but what it can realistically implement.
In a larger company, enforcement departments may have separate teams to manage different threats, while others may outsource these needs completely or combine in-house teams with outsourced services. In a small startup, this responsibility often falls to a community manager or customer support specialist who handles user complaints and tickets. This need will likely grow quickly and require a digital first responder or a more robust team.
Depending on the nature and needs of your product, these are roles that you might consider:
- Content moderators, otherwise known as digital first responders, decipher content that is flagged as harmful and decide what action to take. As content moderators are exposed to harmful content daily, resilience is an important trait to look for when hiring.
- Intelligence expertise is needed to understand the TTPs (Tactics, Techniques, and Procedures) of threat actors. Trust & Safety teams should fill this role either by hiring someone with an intelligence background or by outsourcing. When it comes to organized campaigns orchestrated by bad actors, we generally assume larger platforms are the targets; however, we are seeing smaller platforms exploited more and more.
- A law enforcement liaison should be able to escalate immediate, developing threats to law enforcement.
- More mature teams may seek intelligence professionals or specific threat experts to manage high risk incoming threats. Academic backgrounds for these roles may be a bonus.
While some of these roles suit a more mature trust and safety team, each is an important component of protecting your users. This is an area where outsourcing may be a great option for your company.
4. Running the Show
To tie a team together, operational practices must be put in place. Operations handles everything from setting the budget to determining the tools and processes needed to support the team. Day-to-day responsibilities of the operations team may include:
- Setting a budget
- Managing teams of people
- Vendor operations
- Implementing automation tools
- Determining processes for policy enforcement, such as reporting, appeals, and escalations
While we’ve outlined the main roles, keep in mind that even if you can’t build a large team at first, trust and safety is a culture that can be fostered within the company. The ethos of trust and safety should live across the whole company, not just within one team. The key is to mainstream the idea of creating a healthy and safe community for your users.
Check out part two of the Guide to Trust & Safety: Measuring Trust & Safety.