Three Measures for Improving Content Moderator Productivity

March 2, 2023

2023 is shaping up to be a year of resilience and tight budgets for the tech world. It will be an especially challenging year for Trust & Safety teams, who have had to scale while operating on limited budgets. With the increase in harmful content online and the growing sophistication of bad actors, now more than ever, Trust & Safety teams must improve efficiency and remain cost-effective.

Watch our webinar recording, “Increasing Content Moderation ROI in 2023,” and hear how platforms can efficiently scale their Trust & Safety operations while maintaining compliance in 2023.

Three Components of Moderator Productivity

Content moderators are tasked with an essential yet challenging job. Faced with vast volumes of complex user-generated content in multiple languages, moderators must correctly identify violations across a wide range of abuse areas, promptly and effectively.

Specifically, moderators’ productivity depends on three components: speed, efficiency, and accuracy.

Speed

Speed is a critical component of productivity, especially for content moderators, who must quickly decide whether content is violative. However, traditional moderation processes require moderators to sift through a high volume of content that is seldom properly prioritized. In one shift, a content moderator working in a non-English language may be required to assess items involving CSAM, incitement to violence, terrorism, and spam, without knowing which items are high or low priority and which may pose a legal risk. A messy moderation queue inevitably leads to a slower-than-desired average handle time (AHT).

Therefore, to improve moderation speed, teams must find ways to optimize the decision-making process and reduce manual effort.
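As a rough illustration of what prioritization can look like, the sketch below sorts a queue by policy severity and legal risk before any item reaches a moderator. The category weights and field names here are hypothetical, not ActiveFence's actual logic:

```python
from dataclasses import dataclass, field

# Illustrative severity weights; a real platform defines these per policy.
SEVERITY = {"csam": 100, "terrorism": 90, "incitement_to_violence": 80, "spam": 10}

@dataclass(order=True)
class QueueItem:
    sort_key: tuple = field(init=False, repr=False)
    item_id: str
    category: str
    legal_risk: bool = False  # e.g., content subject to mandatory reporting

    def __post_init__(self):
        # Higher severity and legal risk float to the top of the queue.
        self.sort_key = (-SEVERITY.get(self.category, 0), not self.legal_risk)

queue = [
    QueueItem("a1", "spam"),
    QueueItem("b2", "csam", legal_risk=True),
    QueueItem("c3", "incitement_to_violence"),
]
for item in sorted(queue):
    print(item.item_id, item.category)  # b2, c3, a1
```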

Efficiency

Efficiency means doing more with less. As the user base of an online platform grows, so does the volume of potentially harmful content that moderators handle. Currently, scaling moderation to meet high content volumes requires increasing the number of moderators, dedicating engineering and R&D time to policy changes, and revamping operations to accommodate complex processes and growing teams, an inefficient approach that comes at a high cost.

To improve team productivity and flatten the moderation cost curve, especially as budgets shrink, a different approach is required: one that uses automated processes to streamline moderation efforts and minimize manual actions.

Accuracy

Teams that need to grow their operations while keeping costs low often do so by implementing automated detection mechanisms such as keyword detection and AI. While these models allow teams to scale detection, they are often ill-equipped to understand complex threats, especially in non-English languages. This limitation leads to two undesired outcomes: false positives, where benign content is flagged as violative and creates more work for moderators, and false negatives, where violative content is missed, resulting in policy breaches and potential legal risk. Both outcomes create the need for a large team of language and abuse specialists, which only increases cost and decreases efficiency.

To improve productivity, more robust automated detection models should be implemented, ideally ones that rely on context, not just content, to detect violative items.

Championing Speed, Efficiency, and Accuracy

To improve productivity and increase the speed, efficiency, and accuracy of detection mechanisms, three areas of content moderation should be considered: detection and filtering models can be enhanced to improve pre-moderation, moderators’ work processes can be improved as they evaluate content, and moderators’ overall working conditions should be addressed.

Pre-moderation

The most valuable opportunity is to optimize moderation before content even reaches a moderator’s desk. By filtering content beforehand, only the most relevant items arrive in the moderation queue, allowing moderators to focus on the items that truly need human review. Filtering can happen in two ways:

1. Contextual AI

Contextual AI analyzes content to provide an accurate, abuse-specific risk score in line with a company’s policies. The contextual model analyzes the metadata surrounding a flagged item, such as its title, description, user name, and thumbnail, to determine its risk level. ActiveFence’s contextual, adaptive AI model is constantly updated with intelligence collected in the field and implements feedback loops from actioned items to continuously improve accuracy. From the latest tactics of bad actors and repeat offenders to changes in slang, languages, and emojis, this intelligence-fueled model enables accurate labeling of content risk.

 

CSAM metadata
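Conceptually, a contextual model scores an item from its surrounding metadata rather than from the content alone. The snippet below is a simplified, hypothetical sketch of that idea; the field names, weights, and keyword heuristic are illustrative stand-ins, not ActiveFence’s model:

```python
from typing import TypedDict

class ItemMetadata(TypedDict):
    title: str
    description: str
    username: str
    reporter_trust: float   # 0..1, how reliable past reports from this user were
    prior_violations: int   # repeat-offender signal

def score_text(text: str) -> float:
    """Placeholder text classifier: flags a few illustrative high-risk keywords."""
    keywords = {"attack", "kill", "gore"}
    return 1.0 if any(k in text.lower() for k in keywords) else 0.1

def contextual_risk_score(meta: ItemMetadata) -> float:
    """Blend contextual signals into a 0..1 abuse-specific risk score.

    A production model would use learned weights and intelligence feeds;
    here each signal is scored by a stand-in heuristic.
    """
    text_risk = score_text(meta["title"] + " " + meta["description"])
    user_risk = min(meta["prior_violations"] / 5, 1.0)
    report_weight = meta["reporter_trust"]

    # Weighted blend of signals, clipped to [0, 1].
    score = 0.5 * text_risk + 0.3 * user_risk + 0.2 * report_weight
    return max(0.0, min(1.0, score))
```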

2. Workflows and Automation

Contextual AI enables teams to improve the accuracy of automated detection, allowing them to scale while reducing costs. The next step is to use that information to automate enforcement actions while minimizing the involvement of a human moderator. Using smart workflows, content can be prioritized based on policy, and some of it can be automatically actioned or approved based on the AI-provided risk score.

ActiveFence’s no-code workflows can be set to automatically ignore content labeled as low risk and automatically remove content marked as high risk, allowing humans to evaluate only the items where doubt exists. This way, items that pose no risk, like a chef’s knife in a cooking video, are automatically approved or ignored, while items that are clearly violative, like images of gory violence, are automatically removed or suspended. Only the middle layer of unclear content is sent to a moderator for review.
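In practice, this kind of automation boils down to threshold rules over the risk score. The sketch below is a minimal, hypothetical version of such a workflow; the thresholds and action names are assumptions, not ActiveFence’s no-code configuration:

```python
def route(item_id: str, risk_score: float,
          low_threshold: float = 0.2, high_threshold: float = 0.9) -> str:
    """Decide what happens to an item based on its contextual risk score."""
    if risk_score <= low_threshold:
        return "auto_approve"        # e.g., a chef's knife in a cooking video
    if risk_score >= high_threshold:
        return "auto_remove"         # e.g., clearly violative gore
    return "send_to_human_review"    # the unclear middle layer

# Usage: only the middle band ever reaches a moderator's queue.
for item_id, score in [("a1", 0.05), ("b2", 0.97), ("c3", 0.55)]:
    print(item_id, route(item_id, score))
```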

On the Moderator’s Desk

With most benign and high-risk content automatically handled through workflows, only a selection of detected content lands on a moderator’s desk. For this subset of content, AHT is further reduced in three ways:

1. Interface and Explainability

Frequently, content moderators work with multiple connected platforms and content detection technologies. Processes that involve multiple tools are necessarily more complex and inefficient. 

The simple act of bringing all information, including user and item data and explanations for why an item was flagged, into one queue streamlines the decision-making process and improves AHT. ActiveFence’s prioritized queues include explainability metrics and one-click actioning directly from the platform, allowing for a speedy decision-making, actioning, and notification process.

Moderation queue
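To make the idea of a single, explainable queue concrete, here is a hypothetical sketch of what one review item might carry so a moderator can decide and act in one click. The field names are illustrative, not ActiveFence’s data model:

```python
from dataclasses import dataclass

@dataclass
class ReviewItem:
    item_id: str
    content_url: str
    risk_score: float
    flagged_policy: str   # which policy the model believes is violated
    explanation: str      # why the item was flagged, shown to the moderator
    user_history: dict    # prior strikes, account age, etc.

def one_click_action(item: ReviewItem, decision: str) -> dict:
    """Bundle decision, enforcement, and user notification into a single call."""
    return {
        "item_id": item.item_id,
        "decision": decision,                 # "remove", "approve", "escalate"
        "notify_user": decision == "remove",  # send a policy notice on removal
    }
```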

2. Third-party Integrations

Having all information in one platform saves moderation teams the time spent mastering different tools and switching between them. Still, the one-tool approach doesn’t work for everyone, and some teams may find it more efficient to keep using some of their existing tools in a more streamlined fashion. For example, some teams may choose to continue using Zendesk or to receive Slack notifications as part of the moderation workflow.

ActiveFence’s open-platform approach allows teams to bring any existing tools in their stack into a truly streamlined moderation flow. Teams can integrate third-party tools, including messaging apps, case management software, and more, to enable quick policy enforcement, user flag management, and user notifications, all from a single interface.
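As one illustration of this kind of integration, the sketch below posts a moderation alert to a Slack channel via a standard incoming webhook. The webhook URL and message wording are placeholders, and the wiring to a moderation platform is assumed:

```python
import json
import urllib.request

def notify_slack(webhook_url: str, item_id: str, decision: str) -> None:
    """Post a short moderation update to a Slack incoming webhook."""
    payload = {"text": f"Item {item_id} was actioned: {decision}"}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # Slack replies with "ok" on success

# Example (placeholder URL):
# notify_slack("https://hooks.slack.com/services/XXX/YYY/ZZZ", "b2", "removed")
```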

3. Analytics

You can’t improve what you can’t measure, and measuring the efficiency of content moderation teams is key to improving it. With analytics, operations managers gain full visibility into efficiency across multiple dimensions and can drive continuous improvement over time. Individual or team performance analytics can highlight areas of struggle, while analysis across threat categories can show where more resources may be needed. Training, reorganization, or additional automation are just a few examples of how analytics-driven insights improve productivity.

Analytics dashboard
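As a simple illustration of the measurement involved, the sketch below computes average handle time per moderator and per threat category from a hypothetical decision log; the log format is an assumption:

```python
from collections import defaultdict

# Hypothetical decision log: (moderator, category, handle_time_seconds)
decisions = [
    ("alice", "spam", 12),
    ("alice", "violence", 95),
    ("bob", "spam", 20),
    ("bob", "violence", 140),
]

def average_handle_time(log, key_index):
    """Average handle time grouped by moderator (index 0) or category (index 1)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for record in log:
        key, seconds = record[key_index], record[2]
        totals[key] += seconds
        counts[key] += 1
    return {k: totals[k] / counts[k] for k in totals}

print(average_handle_time(decisions, 0))  # AHT per moderator
print(average_handle_time(decisions, 1))  # AHT per category, e.g. violence is slower
```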

Moderator Wellness

Mental wellness significantly impacts productivity by reducing stress, improving concentration, and enhancing motivation. Wellness is especially important for moderators, who are exposed to potentially harmful content daily. Exposure to harmful content has been shown to hinder performance, and it also leads to high rates of burnout and, in turn, turnover, resulting in high costs of hiring and training new staff, not to mention the potential legal liability of exposing employees to this content in the first place.

To support the well-being of moderators, Trust & Safety leaders may implement various measures, like ActiveFence’s built-in image-blurring technology and break reminders, or offline solutions like psychological support and wellness training.

Resilience Mode
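A minimal sketch of the image-blurring idea, assuming the Pillow library is available; the function name and blur radius are illustrative, not ActiveFence’s implementation:

```python
from PIL import Image, ImageFilter

def blur_for_review(path: str, radius: int = 20) -> Image.Image:
    """Return a heavily blurred copy of an image so moderators opt in to detail."""
    with Image.open(path) as img:
        return img.filter(ImageFilter.GaussianBlur(radius))

# blurred = blur_for_review("flagged_item.jpg")
# blurred.show()  # the moderator sees the blurred version first, reducing exposure
```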

A Trust & Safety Platform Built for Efficiency

ActiveFence’s Content Moderation Platform is designed for the unique needs of Trust & Safety teams. It was built on knowledge gained over years of working with the world’s largest Trust & Safety teams and on the experience of our in-house team. Always keeping the moderator in mind, we designed the platform to increase their productivity and wellness, reduce AHT, and provide transparency for the operations managers who monitor and optimize the content moderation process.

Contact us today to get a demonstration of the first Content Moderation Platform designed for scale and hear how we can help you improve your own team’s efficiency while reducing moderation costs.
