Policy Series: Child Safety

July 22, 2021

In the first edition of the ActiveFence Policy Series, we take a look at the child safety policies of major tech companies and examine their core components. Download the complete report to learn how industry leaders protect their youngest users from online harm.


Background

The process of creating robust, comprehensive community guidelines or trust and safety policies is an ongoing one. Protecting your platform's users requires constant monitoring of ever-changing on-platform behaviors, shifting legal requirements, and competitive analysis of industry best practices. This is particularly true in the child safety space, where legislation and company policies work together to keep the most vulnerable users safe.

The following article and accompanying report are the first part of ActiveFence's Policy Series and provide an analysis of the policies and community guidelines that the biggest online platforms use to ensure child safety. These policies are broadly broken down into four categories: CSAM, child abuse, bullying and harassment, and self-harm.

Given the severity and legal ramifications of hosting and enabling the distribution of CSAM, the policies of companies that operate platforms hosting user-generated content tend to be strict and relatively uniform. However, companies adjust certain aspects of their policies as necessary, depending on the age of their user base and the services they provide.

Policy Challenges

Platform policies must be rigorous in order to keep children safe. Furthermore, as threats evolve over time, they must also be responsive and capable of meeting new challenges. This means that those seeking to create trust and safety policies for their platforms need to have a robust understanding of the digital environment, not only as it is today but also as it will be in the future.

Given the speed at which change occurs in online spaces and the sheer amount of information there is to process, this can be a complicated endeavor. ActiveFence will monitor these community guidelines and policies over time, regularly updating and interpreting changes as they occur. In doing so, this report aims to help platforms make informed decisions regarding their approach to trust and safety and the rules they put in place.

A Platform Policy Guide

Complete with examples and divided by platform category, this guide provides useful insights into how various platforms—and types of platforms—work to keep predators from abusing their services and users.

The complete report features an analysis of four abuse areas: CSAM, child abuse, bullying and harassment, and self-harm. For each risk area, we examine the responses of five different types of digital platforms: social media, instant messaging, video conferencing, video sharing, and file sharing. Each type of platform carries its own unique risk areas, which shape the requirements of its CSAM-related policies.

Social Media

Social media platforms host everything from text, images, and videos to private messages and public and private groups. These multiple risk areas for abuse require such platforms to strictly enforce guidelines that ensure the safety of their users and compliance with varying national legislation.

Instant Messaging

Instant messaging platforms are also particularly vulnerable to exploitation by child predators looking to trade and access CSAM. As a result, platforms that offer instant messaging services, whether for text or images, must actively combat CSAM on their platforms.

Video Conferencing

The COVID-19 pandemic drove exceptional growth in the use of video conferencing platforms. Unfortunately, this popularity also made these platforms more susceptible to abuse, including the dissemination of CSAM. As a result, these platforms have enacted various measures and guidelines to mitigate this dangerous activity.

Video Sharing

The very nature of video-sharing platforms makes CSAM a concern, whether through the sharing of video and still-image child sexual abuse material or through the use of comment sections to sexualize minors depicted in innocent material.

File Sharing

File-sharing and cloud storage services are also used by child predators to store and distribute CSAM. While child predators mainly rely on dark web file-sharing platforms, the limitations of these alternative servers lead them to also exploit mainstream, surface-web platforms for easier access to these illegal files.

Shaping Responsive Policies

The best platform policies are responsive, evolving as new threats arise and change over time. Because policies must be shaped continuously, our team will keep monitoring all relevant changes and developments in the trust and safety ecosystem and provide updates as policies change.

Our comprehensive reports detail how some of the market leaders in the online space are currently addressing the threat of CSAM on their platforms.
