Content Moderators: The Cost of Burnout

January 19, 2023

Last week, Sama, a major content moderation service provider in Africa, announced that it would shutter its content moderation services. This move follows other dramatic exits from the content moderation space and may be an indicator of what’s to come. As the human costs of content moderation come increasingly into the spotlight, technology companies are forced to examine what makes this job so costly and to find creative solutions to offset the damage.

Moderation is Good for Business

Content moderation is not just good for the user; it is good for business. As companies around the world come to understand that safe environments breed better profitability, there is a general consensus that content moderation is a necessary part of user-generated content (UGC) platforms. To create safer environments, online platforms deploy content moderation tools with varying levels of sophistication. However, regardless of the complexity of a platform’s tool stack, it is clear that humans will always be a crucial part of content moderation.

Human moderators, however, come at a heavy cost: one that begins with personal wellbeing and ultimately impacts the bottom line. Read on to understand the highly complex environment of human content moderation, its associated costs, both personal and financial, and two types of proposed solutions to this unique trust & safety challenge.

The Content Moderation Tool Stack

To fully understand the cost of human moderators, it is important to understand the stack, its components, and why humans are such a critical factor.

Content moderation relies on a mix of tools, ranging from highly complex, AI-driven solutions to low-tech, manual review. Each company that hosts user-generated content establishes a unique model of content moderation. Generally speaking, smaller companies with lower volumes of content will opt for more manual moderation, while larger companies will also utilize automated tools. In most cases, however, human moderators play a critical role in the process.

Frequently used automated tools range from basic detection technologies, such as keyword detection and hash database matching, to more complex tools like AI-based classifiers and natural language processing. However, despite dramatic advancements in automated technology, companies still rely on manual review by human moderators.
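As a concrete illustration of the basic end of that spectrum, here is a minimal Python sketch of keyword and hash-database checks. The keyword list, hash set, and function names are hypothetical, and exact cryptographic hashing stands in for the perceptual hashing that production systems typically use.

```python
# Illustrative only: hypothetical keyword list and hash database; exact MD5
# hashing stands in for the perceptual hashing real systems rely on.
import hashlib

BANNED_KEYWORDS = {"badword1", "badword2"}               # hypothetical policy list
KNOWN_BAD_HASHES = {"5d41402abc4b2a76b9719d911017c592"}  # hypothetical hash database

def keyword_flag(text: str) -> bool:
    """Flag text containing any banned keyword."""
    return any(token in BANNED_KEYWORDS for token in text.lower().split())

def hash_flag(file_bytes: bytes) -> bool:
    """Flag files whose hash matches a known-bad database entry."""
    return hashlib.md5(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# Anything these checks miss (new imagery, nuanced language, evasion tricks)
# falls through to human review.
```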

The need for human review stems from the nuances of content moderation. Harmful content comes in all shapes and sizes and in every format, from video and audio to text and images, and in a wide range of languages, many of which are not covered by automated detection technologies. Moreover, bad actors often disguise harmful content as something innocuous, using creative linguistic nuances, text hidden inside images, l33t speak, and other tricks to hide it from automated detection methods.
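To see why such evasion defeats naive filters, here is a toy example of l33t-speak normalization; the substitution map is a small, hypothetical sample rather than a real de-obfuscation scheme.

```python
# Illustrative only: a naive keyword filter misses l33t-speak variants
# unless the text is normalized first; this map covers just a few characters.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "$": "s"})

def normalize(text: str) -> str:
    """Lowercase the text and undo common character substitutions."""
    return text.lower().translate(LEET_MAP)

print(normalize("fr33 pr1z3 $cam"))  # -> "free prize scam"
```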

Platforms cannot risk false negatives when dealing with dangerous and illegal content and therefore rely on human moderators.

Content moderators work in high-stress, high-burnout situations

The Cost of Human Moderation

Reliance on human moderators creates an undesirable situation: as a platform scales across countries and communities, its content moderation teams balloon in size, creating not only huge financial and procedural inefficiencies but also harm to the moderators themselves.

Health Costs

It is well-documented that prolonged exposure to harmful content causes long-lasting mental health problems, including PTSD, anxiety, and depression. Content moderators view item after item of such content and must make quick decisions with high accuracy, which only compounds the impact.

Harmful in their own right, these conditions also lead to poor business outcomes, including decreased efficiency, high turnover, and PR crises, not to mention liability and legal battles.

Financial Costs

The cost of moderators varies greatly by the country where they are employed and the level of expertise they hold, though they typically cost between $1,000 and $3,500 per month. Assuming moderators can handle between 10K and 50K items per month (depending on complexity), companies often have to hire hundreds to thousands of moderators just to keep up; a rough calculation follows the list below. Beyond these basic costs, other factors should be considered:

  1. Efficiency: Human moderators are not, and will never be, as efficient as machine models. While their generally higher precision offsets some of this cost, a human moderator can still only review a fraction of the items a machine can.
  2. Precision: While human moderators, particularly those with expertise in specific abuse areas and languages, are considered highly precise, moderation remains a somewhat subjective field. When interpreting a piece of political misinformation, for example, what one moderator clearly identifies as fake news, another may believe to be true. This kind of misalignment can result in uneven application of policy.
  3. Highly Complex Processes: As companies grow across geographies and are targeted by bad actors across multiple abuse areas, the number of moderators and expert analysts required grows rapidly. Imagine employing a dedicated child safety expert in every language covered by a major social media platform’s services; add experts in every other abuse area and every language, and you are only scratching the surface of the complexity.
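As promised above, here is a back-of-the-envelope calculation that compounds these figures. The platform volume is a hypothetical assumption, and the other numbers are simply midpoints of the ranges cited earlier, not benchmarks.

```python
# Rough cost estimate using the illustrative figures above; all inputs are assumptions.
monthly_items = 50_000_000          # hypothetical platform volume per month
items_per_moderator = 30_000        # midpoint of the 10K-50K range
cost_per_moderator = 2_250          # USD, midpoint of the $1,000-$3,500 range

moderators_needed = -(-monthly_items // items_per_moderator)  # ceiling division
monthly_cost = moderators_needed * cost_per_moderator

print(moderators_needed)  # 1667 moderators
print(monthly_cost)       # 3,750,750 USD per month
```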

Varied Solutions

The fact is that, despite the challenge, human moderators are here to stay, and platforms must therefore find ways to minimize the risk and financial burden. This unique challenge demands a creative, complex solution. Two schools of thought exist: one aims to decrease risk, and the other aims to decrease reliance on human moderators.

Decreasing Risk

Image modification

Knowing that a key part of the emotional toll of content moderation stems from harmful imagery, one core solution involves altering the images that moderators view. Technology that automatically greyscales, blurs, and reduces image size and clarity is one way of achieving this.
Moderators and their employers can implement tools such as the CleanView web extension to modify images and improve wellness.
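As an illustration, the sketch below softens an image before review using the Pillow imaging library; the thumbnail size and blur radius are arbitrary examples, not recommended settings.

```python
# A minimal sketch of moderator-side image softening, assuming Pillow is installed.
from PIL import Image, ImageFilter, ImageOps

def soften_for_review(path: str, max_size: int = 256, blur_radius: int = 8) -> Image.Image:
    """Greyscale, shrink, and blur an image before a moderator sees it."""
    img = Image.open(path)
    img = ImageOps.grayscale(img)        # remove color to dull graphic detail
    img.thumbnail((max_size, max_size))  # reduce size and clarity in place
    return img.filter(ImageFilter.GaussianBlur(blur_radius))  # soften what remains

# softened = soften_for_review("flagged_item.jpg")  # hypothetical file name
# softened.show()
```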

Resilience training

Another way of reducing risk involves emotionally supporting employees to improve resilience. Support groups and one-to-one sessions led by psychologists should build foundational skills such as coping, learned optimism, self-efficacy, stress resistance, post-traumatic growth, and an internal locus of control. This training should be part of onboarding and repeated periodically.

Read more about building resilience for content moderators.

Decreasing Reliance on Human Moderators

Prioritized moderation queues

One of the difficulties of content moderation is operational. Content to review comes from multiple sources: user flagging, AI detection, and, in some cases, pre-publication review of all content. These items are frequently reviewed in random or chronological order rather than by risk level.
To make sense of it all, organization is required. Effective moderation queues route every item requiring review in an organized, efficient, and prioritized manner. Categorized by priority, abuse type, or flag source, moderation queues can be customized to show moderators exactly what they need to see.
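One way to picture such a queue is a simple priority heap keyed on risk score; the sketch below is a minimal illustration, and the field names and scores are hypothetical.

```python
# A minimal sketch of a prioritized moderation queue; assumes each flagged
# item arrives with a risk score from upstream detection.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedItem:
    priority: float                     # lower value = reviewed sooner
    item_id: str = field(compare=False)
    abuse_type: str = field(compare=False)
    source: str = field(compare=False)  # e.g. "user_flag" or "ai_classifier"

queue: list[QueuedItem] = []

def enqueue(item_id: str, risk_score: float, abuse_type: str, source: str) -> None:
    # Negate the risk score so the highest-risk item is popped first.
    heapq.heappush(queue, QueuedItem(-risk_score, item_id, abuse_type, source))

def next_for_review() -> QueuedItem:
    return heapq.heappop(queue)

enqueue("post_123", risk_score=0.91, abuse_type="child_safety", source="ai_classifier")
enqueue("post_456", risk_score=0.40, abuse_type="spam", source="user_flag")
print(next_for_review().item_id)  # "post_123" is reviewed first
```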

Automated workflows

Automated workflows take moderation queues one step further. With automation, flagged items can be actioned automatically based on pre-determined policies rather than being sent straight to a queue for manual review. Built on the platform’s enforcement policy, workflows can take immediate action when an item is flagged as a specific violation or scored as high risk. For example, if an item clearly violates child safety policy, it can be removed automatically, eliminating the need for a moderator to see it.

Automated workflows to decrease moderator exposure
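A minimal sketch of this kind of routing logic is shown below; the policy table, threshold, and action names are hypothetical and stand in for a platform’s real enforcement policy.

```python
# A minimal sketch of an automated enforcement workflow, assuming upstream
# detection supplies a violation label and a risk score.
from typing import Optional

AUTO_ACTIONS = {
    "child_safety": "remove",   # clear-cut violations are removed immediately
    "terror_content": "remove",
}
HIGH_RISK_THRESHOLD = 0.95      # arbitrary example threshold

def route(item_id: str, violation: Optional[str], risk_score: float) -> str:
    if violation in AUTO_ACTIONS:
        return f"{AUTO_ACTIONS[violation]}:{item_id}"  # actioned with no human exposure
    if risk_score >= HIGH_RISK_THRESHOLD:
        return f"remove:{item_id}"                     # high-confidence automated removal
    return f"queue_for_review:{item_id}"               # everything else goes to a moderator

print(route("post_789", violation="child_safety", risk_score=0.99))  # remove:post_789
print(route("post_790", violation=None, risk_score=0.30))            # queue_for_review:post_790
```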

Contextual AI

Contextual AI refers to an AI model’s ability to capture incoming content, understand the actions taken on it, and adapt its analysis based on learned responses. In content moderation, algorithms can be trained to recognize violations and track the platform’s actions on items, learning from past behavior to improve detection accuracy. This directly affects efficiency: as AI models become more accurate, the need for manual moderation continuously decreases.
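One way to picture this feedback loop is periodic retraining on moderators’ past decisions. The sketch below uses scikit-learn with toy data and illustrates the general idea rather than how any particular platform’s models work.

```python
# A minimal sketch of retraining a classifier on past moderation decisions
# so that automated scoring improves over time; data and model are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each record is (content, final action taken by the platform/moderator).
reviewed = [
    ("free crypto giveaway click now", "removed"),
    ("happy birthday to my best friend", "kept"),
    ("buy followers cheap dm me", "removed"),
    ("what a great game last night", "kept"),
]
texts, labels = zip(*reviewed)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)  # retrained on the latest moderator decisions

# New items with a confident automated prediction can skip manual review.
print(model.predict(["limited crypto giveaway click here"]))  # likely ["removed"]
```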

A Multi-Pronged Approach

Content moderation is good for business, and human moderators are a key component of that effort. While automated tools exist, they still cannot replace humans, who come at a heavy cost, both financial and personal. To improve moderation processes and outcomes, trust & safety teams must consider solutions that improve the wellbeing of moderators on the one hand and reduce reliance on their manual review on the other.

By implementing a varied approach that involves contextual AI-based risk scores, automated workflows, and resilience tools, ActiveFence’s content moderation platform helps reduce reliance on moderators and improve their wellbeing. This combined approach has helped our clients achieve a 40% increase in moderator efficiency.

Join us for a demo and learn how we do it.
