
17 Questions T&S Leaders Should be Able to Answer

April 10, 2023

Trust & Safety teams look and behave differently from company to company. But while size, organization chart, and threats of concern will differ, the questions Trust & Safety leaders should ask to align with the job’s main task – keeping users safe – are similar.

From team management and measuring impact to policy building and ensuring compliance, every component of a Trust & Safety team should be assessed continually to ensure top performance. The following questions help frame the day-to-day work of Trust & Safety leaders, and their answers can help identify which areas need improvement and how to address them.

Team Management

While team assessment is important for every team leader, it is critical for Trust & Safety leaders, whose staff work under constant stress. Given the demanding nature of content moderators’ work and its impact on their health, the following questions deserve particular attention.

1. How can I keep my team healthy and resilient?

Content moderators face a high volume of malicious content daily, resulting in a high rate of burnout and turnover. Using dedicated tools to blur graphic content is one way to decrease exposure and ensure moderator safety.
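As a rough illustration, here is a minimal sketch of pre-blurring flagged images before they reach a moderator’s queue, using the open-source Pillow library. The function name, file paths, and blur radius are illustrative assumptions, not a reference to any specific moderation product.

```python
# Minimal sketch: pre-blur a flagged image so moderators see graphic
# detail only when they choose to. Paths and radius are hypothetical.
from PIL import Image, ImageFilter

def blur_for_review(src_path: str, dst_path: str, radius: int = 25) -> None:
    """Save a heavily blurred copy of a flagged image for the review queue."""
    image = Image.open(src_path)
    blurred = image.filter(ImageFilter.GaussianBlur(radius=radius))
    blurred.save(dst_path)

# Hypothetical usage in an ingestion pipeline:
blur_for_review("flagged_upload.jpg", "flagged_upload_blurred.jpg")
```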

2. What training and career development opportunities can I offer my team?

The Trust & Safety field is fairly new, and only recently have dedicated courses become available. Proper training can help ensure teams understand the critical nature of their work, stay on top of abuse trends, and know how to react in times of crisis.


Blurring harmful content helps ensure team resilience

Measuring Content Moderation

Successful leaders should continually evaluate their team’s effectiveness to drive improvement. Trust & Safety metrics of concern range from hard metrics, like enforcement rates and threat coverage, to soft metrics, like perceptions of your platform’s work, fairness, and sincerity.

3. What is my team’s AHT?

Average Handle Time (AHT) is a core metric of moderator efficiency. It averages the Handle Times (the time from when a moderator opens a piece of content to when an action is taken) of an individual moderator, team, or abuse area over time, providing a measure of how quickly items are handled. The more actions teams automate, the lower their AHT.
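As a minimal sketch, AHT can be computed by averaging the open-to-action interval across moderation events. The event structure below (opened_at and actioned_at timestamps) is an illustrative assumption.

```python
# Minimal sketch: compute Average Handle Time (AHT) from moderation events.
# The event fields (opened_at, actioned_at) are hypothetical.
from datetime import datetime

def average_handle_time(events: list[dict]) -> float:
    """Mean seconds between opening an item and taking an action on it."""
    handle_times = [
        (e["actioned_at"] - e["opened_at"]).total_seconds() for e in events
    ]
    return sum(handle_times) / len(handle_times)

events = [
    {"opened_at": datetime(2023, 4, 10, 9, 0, 0),
     "actioned_at": datetime(2023, 4, 10, 9, 0, 45)},   # 45s handle time
    {"opened_at": datetime(2023, 4, 10, 9, 2, 0),
     "actioned_at": datetime(2023, 4, 10, 9, 3, 30)},   # 90s handle time
]
print(f"AHT: {average_handle_time(events):.1f} seconds")  # AHT: 67.5 seconds
```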

4. What is my recall rate? What about precision?

Your recall rate measures the percentage of your platform’s malicious content that is caught by its moderation systems. A high recall rate means that more harmful content is identified, though it often comes at the cost of more false positives. Precision measures the percentage of items identified as violative that are, in fact, violative.

While most automated detection mechanisms have high recall but lower precision, solutions based on intel-fueled, contextual, adaptive AI maximize both.
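To make the two definitions concrete, here is a minimal sketch computing recall and precision from item IDs. The sets below (system flags versus human-reviewed ground truth) are illustrative assumptions.

```python
# Minimal sketch: recall and precision from flagged vs. ground-truth items.
def recall_and_precision(flagged: set, violative: set) -> tuple:
    true_positives = len(flagged & violative)
    recall = true_positives / len(violative)    # share of bad content caught
    precision = true_positives / len(flagged)   # share of flags that were correct
    return recall, precision

# Hypothetical item IDs: the system flagged 4 items; 3 were truly violative.
r, p = recall_and_precision(flagged={"a", "b", "c", "d"}, violative={"a", "b", "e"})
print(f"recall={r:.2f}, precision={p:.2f}")  # recall=0.67, precision=0.50
```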

5. How do these metrics compare based on threat type and language?

While each of these metrics is important, they should not be evaluated in a vacuum. Instead, measure these metrics over time, and compare them across teams, abuse areas, and languages, focusing on optimizing the specific areas that lag behind the rest.
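As one way to do this, the minimal pandas sketch below slices AHT and decision accuracy by abuse area and language. The column names and sample rows are illustrative assumptions.

```python
# Minimal sketch: compare metrics across abuse areas and languages with pandas.
# Column names and sample values are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "abuse_area":      ["hate_speech", "hate_speech", "csam", "csam"],
    "language":        ["en", "es", "en", "es"],
    "handle_time_sec": [40, 95, 60, 130],
    "correct":         [True, False, True, True],  # agreed with QA review
})

# Per-segment AHT and accuracy expose which slices lag behind the rest.
summary = decisions.groupby(["abuse_area", "language"]).agg(
    aht=("handle_time_sec", "mean"),
    accuracy=("correct", "mean"),
)
print(summary)
```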

6. What tools are available to monitor and help improve my team’s performance?

The first key step to improving performance is measurement. One way to track team performance is by implementing dedicated Content Moderation Software to automatically track team activities and provide analytics, so you can assess performance and improve over time.

Analytics dashboard: monitoring metrics allows consistent improvement

Business Impact

In user-generated content (UGC) platforms, Trust & Safety teams deal with the essence of the business: they define what can and can’t be posted and enforce those rules. Their actions, or lack thereof, directly impact user experience, engagement, retention, and ultimately revenue – meaning their work is critical to the bottom line.

7. What is my approach to handling high-risk, highly influential users?

Powerful, influential users create communities and accrue user engagement, ultimately driving high revenues for both themselves and the platform – they are generally good for business. However, when these users begin participating in violative activities, Trust & Safety teams may find themselves facing a complicated dilemma between revenue and user safety.

T&S leads should closely monitor highly influential users and have plans in place to ensure fair treatment in case these accounts violate policy.

8. How does my policy compare to that of similar businesses?

UGC is a business of engagement, and platforms with overly strict policies risk harming user experience and potentially limiting engagement. To ensure that your platform provides users with the space they need, closely monitor the policies of similar platforms and confirm that your rules of engagement are fair.

9. What is my team’s ROI?

In today’s cost-conscious economy, the focus on ROI has sharpened, and Trust & Safety teams are challenged to do more with less. To ensure positive ROI, teams should focus on efficiency – which includes improving core metrics, implementing contextual AI, and decreasing R&D costs, among other activities.


Alignment with Product & R&D

A core industry principle is safety by design: building technology that proactively minimizes online threats rather than addressing them after the fact. This principle focuses on how a product is built and continuously developed, placing safety at the forefront of each decision made in a product’s lifecycle, and it requires that Trust & Safety teams constantly align with product and R&D.

10. How do my policy updates align with product and R&D?

While Trust & Safety defines what can and can’t be posted on a platform, it is up to R&D and product management to ensure those limitations can be enforced on the back end, reducing the need for manual detection of policy violations. To ensure that policy updates are smooth and efficient, T&S leaders should work toward an open, mutually beneficial relationship with product, where policies are supported on the back end.

Alternatively, implementing SaaS solutions that allow for no-code policy changes can ensure policies are constantly up-to-date, with minimal reliance on external teams.

11. Which AI tools am I using, and how do I select them?

Automated detection is a critical part of content moderation, and AI tools enable it. The tools integrated into your T&S systems should be up-to-date, cover all relevant abuse areas, adapt to your decisions, and take context into account.

Automated workflows to decrease moderator exposure

Using codeless workflows allows instant policy changes

Threat Detection

Policies establish the rules of engagement on a UGC platform, defining what can and can’t be posted and, therefore, what Trust & Safety teams should take action on. But policies cannot outline risks that Trust & Safety teams are not aware of. Proper threat detection ensures that teams are not blindsided by new harms.

12. What processes do I have in place to identify impending risks?

The best way to stop harm is to avoid it in the first place. To do this, teams should proactively assess risks and create policies to stop them before they reach platforms. Establishing a trend detection or intelligence team is one way to do this.

13. Which abuse areas is my team strongest in? Which do I need to strengthen?

UGC platforms face a wide range of abuses, from child abuse and CSAM to disinformation, hate speech, and terrorism promotion – and each of these abuse areas requires specialized knowledge and threat detection activities. Effectively detecting this type of content requires robust, multi-faceted teams. Building a team in-house is one way to do this, but hiring an external threat intelligence team may be a more cost-effective way to ensure full coverage of all risks.

14. Is my team able to assess new risks in new regions and non-English languages?

The wider your platform’s reach, the more potential language-based risks you are exposed to. T&S leaders should assess the languages used on their platform and ensure that they are well-positioned to detect risks in most, if not all, of them.
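As a starting point, the minimal sketch below profiles the language mix of recent posts using the open-source langdetect library. The sample posts are illustrative assumptions, and short texts can be misdetected in practice.

```python
# Minimal sketch: tally the languages appearing in user content with langdetect.
# Sample posts are hypothetical; detection on short texts is approximate.
from collections import Counter
from langdetect import detect

posts = [
    "This platform is great",
    "Esto es un ejemplo de publicación",
    "Ceci est un exemple de publication",
]
distribution = Counter(detect(text) for text in posts)
print(distribution)  # e.g. Counter({'en': 1, 'es': 1, 'fr': 1})
```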

15. How well-positioned am I to detect risks in times of geopolitical events?

Geopolitical events, whether planned (like elections) or unplanned (like wars and natural disasters), quickly create new risks, placing huge strains on global T&S teams. Proactive insight into these events is not always possible, but identifying threats before they manifest on your platform is one way to minimize risk.

Identifying threats before they reach your platform helps proactively detect harm

Ensuring Regulatory Compliance

As countries around the world create new laws establishing liability for harmful content, Trust & Safety teams are challenged to create policies that are at once compliant with a wide set of global laws and still allow users the freedom to express themselves. In assessing legal risks, T&S leaders should ask themselves the following questions:

16. What regulations am I bound to, and am I currently compliant?

Online safety laws are frequently written such that they are legally binding in the region where content is accessed – not where the company is headquartered – meaning that platforms must assess compliance wherever their users are, abiding by the most stringent applicable regulations to avoid heavy fines. For example, the European Union’s Digital Services Act (DSA) applies to all platforms accessible in the EU, creating a baseline of processes that platforms should follow to ensure safety.

17. How do I stay current with new and changing regulations concerning my services?

Avoiding fines requires compliance with multiple laws around the world, and compliance requires constant monitoring and analysis of new regulations and what they mean for your platform. Having a reliable source of information on these changes is critical to successfully navigating a complicated legal landscape.


Trust & Safety leaders face a multitude of challenges as they strive to protect users and maintain the integrity of online platforms. From developing policies to managing teams and complying with internet legislation, their responsibilities are far-reaching and complex. However, by considering the topics and questions outlined above, Trust & Safety leaders can navigate these challenges and build strong, effective teams.

To better understand how leaders can enhance their operations, download our report on Five Ways to Improve Trust & Safety ROI.