The Four C’s: Maintaining Child Safety Online Throughout the Year

August 16, 2022

Back to school season is here, shifting children's attention, and screen time, back to their usual activities. The return to school changes how children engage with platforms on a daily basis: from social media and gaming to e-commerce, education, and research, technology platforms will be used in different volumes and ways in the coming weeks. With this change in children's online activities come new (and old) risks, demanding a shift in focus for Trust & Safety teams.

As children face higher volumes of increasingly complex threats every day, Trust & Safety teams must react quickly and effectively. To do so, they should be familiar with the various types of risks children face online and have a plan of action to not only detect but also counter those risks.

Risks to Child Safety Online: The Four C’s

While content moderation teams are well equipped to detect content risks, children face more diverse harms online, and a narrow focus on content can leave these sometimes more significant risks overlooked. Trust & Safety teams should evaluate the entire online ecosystem: the content children are exposed to, who they communicate with, and what information they provide to others.

The four main categories of risk to children online, known as The Four C's, each present differently, requiring unique detection and response mechanisms. Read on to learn about each of the Four C's and how teams can respond.


Content Risks

Content risks are among the core risks to children and general audiences online. They mostly center on exposure to harmful content, or content that is unsafe for children: profanity, sexual content or nudity, highly violent, gory, or otherwise disturbing content, and animal cruelty.

Contact Risks

Contact risks refer to communication with threat actors who can cause harm to children. These actors may include child predators, fraudsters, criminals, terrorists, and adults pretending to be children.

Conduct Risks

Conduct risks are the risks of children participating in behaviors that may be physically or emotionally harmful. These include bullying, self-harm, dangerous viral challenges, and the encouragement of eating disorders.

Contract Risks

Contract risks involve children agreeing to terms or contracts they do not fully understand. These may include signing up to receive inappropriate marketing messages, inadvertently making purchases, or granting access to personal data.

How Trust & Safety Can Help

To overcome the wide range of risks facing children online, Trust & Safety teams should equip themselves with a combination of tools, procedures, and processes.

1. Build robust policies focused on children

A comprehensive policy can help deter not only harmful content but also other violative activities. Policies should account for the most commonly detected content risks, but they should also be built specifically with children in mind: who can and cannot communicate with children, what content children may access, and what information they can and cannot provide.

Building policy can be complex: it must be explicit yet non-exhaustive, so systems can respond to new and developing threats. Policies should also adapt to changes in the online threat environment, keeping pace with trends in violative activity.
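To make such policies auditable and quick to update as threats evolve, many teams express them as machine-readable rules rather than prose alone. Below is a minimal sketch of that idea in Python; every field name, rule value, and threshold here is a hypothetical illustration, not any real platform's schema.

```python
# A minimal sketch of child-focused policy rules expressed as data, so they
# can be updated as threats evolve without code changes. All field names and
# values below are hypothetical illustrations.

MINOR_POLICY = {
    "direct_messages": {
        # Only allow adults to message minors over a verified connection
        # (e.g., a family link).
        "adult_to_minor": "verified_connection_only",
        "minor_to_unknown_adult": "blocked",
    },
    "content_access": {
        # Content categories gated away from accounts flagged as minors.
        "age_gated_categories": ["gambling", "graphic_violence", "adult_nudity"],
    },
    "data_collection": {
        # Information a minor's account should never be prompted to share.
        "prohibited_fields": ["home_address", "school_name", "phone_number"],
    },
}

def is_dm_allowed(sender_is_adult: bool, recipient_is_minor: bool,
                  verified_connection: bool) -> bool:
    """Apply the direct-message rule from MINOR_POLICY."""
    if sender_is_adult and recipient_is_minor:
        rule = MINOR_POLICY["direct_messages"]["adult_to_minor"]
        return rule == "verified_connection_only" and verified_connection
    return True
```

Because the rules live in data, policy teams can tighten or extend them, for example adding a new age-gated category, without touching enforcement code.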

Policy building can also draw on industry best practices. Check out ActiveFence's review of the child safety policies of 25 leading technology companies in the ActiveFence Policy Series: Child Safety, Second Edition.

2. Establish Subject Matter Expertise

To counter the complex, varied, and multilingual risks facing children online, subject matter expertise is key. Child predator groups, terrorist networks, and eating disorder communities are each unique, using specific terminology and exploiting different platforms in distinct ways to reach wide audiences undetected. Experts who are familiar with the specific dangers, jargon, and code words, and who know where to look for these threats, can efficiently identify and stop these harms.

For example, subject matter experts will know that the hashtag #Iatepastatonight indicates self-harm or suicide content, while a pizza emoji signals child sexual abuse material (CSAM). These experts understand the chatter of these groups, their hidden codes, and where they can be found.
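One simple way to operationalize this expertise is to encode the lexicon experts maintain as a lookup that routes matching content to human review. The sketch below is illustrative only: its two signals come from the example above, and real lexicons are far larger, multilingual, and constantly curated.

```python
# A minimal sketch of routing content to expert review when it matches known
# code words or emojis. The lexicon here is illustrative; real term lists are
# curated by subject matter experts and change constantly.

CODEWORD_LEXICON = {
    "#iatepastatonight": "self_harm",    # hashtag cited above
    "\U0001F355": "child_exploitation",  # pizza emoji, per the example above
}

def flag_for_expert_review(text: str) -> list[str]:
    """Return the harm categories whose known signals appear in the text."""
    lowered = text.lower()
    return [category for signal, category in CODEWORD_LEXICON.items()
            if signal in lowered]

# Example: both signals present, so both categories are flagged for review.
print(flag_for_expert_review("#IAtePastaTonight \U0001F355"))
```

Lexicon matching alone is shallow; it surfaces candidates for expert review rather than making final enforcement decisions.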

Predators communicating to abuse children online often rely on unique terminology and emojis. For a sampling of these emojis, take a look at our report on the Weaponization of Emojis.

3. Utilize on- and off-platform intelligence

Time is of the essence, especially when protecting highly vulnerable users like children, and the ideal time to stop harm is before it occurs. To get ahead of threats to children, Trust & Safety teams should combine subject matter expertise with foresight.

By gathering proactive intelligence, hidden within both the dark and open web, teams can understand impending risks and deter them proactively, keeping violative activity away from children in the first place.
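In practice, off-platform intelligence often arrives as indicators, such as known actor handles or URLs circulating in harmful communities, that can be screened against on-platform activity. The sketch below assumes a hypothetical indicator feed and invented names; it is not a description of ActiveFence's system.

```python
# A minimal sketch of acting on off-platform intelligence: indicators gathered
# elsewhere (handles, URLs) are checked against new on-platform activity
# before harm reaches children. The indicator set is entirely hypothetical.

OFF_PLATFORM_INDICATORS = {
    "handles": {"groomer_account_123"},            # actors seen recruiting elsewhere
    "urls": {"hxxp://example-bad-forum.invalid"},  # defanged link seen in chatter
}

def screen_new_signup(username: str, bio_links: list[str]) -> bool:
    """Return True if a new account matches known off-platform indicators."""
    if username in OFF_PLATFORM_INDICATORS["handles"]:
        return True
    return any(link in OFF_PLATFORM_INDICATORS["urls"] for link in bio_links)
```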

Learn more about ActiveFence’s Trust & Safety Intelligence solution

4. Apply Safety by Design

Safety by design places safety at the center of a product's development to prevent harm before it happens. By implementing safety as a guiding principle during a product's creation, teams can ensure that children are safe from both known and unknown risks, now and into the future.

Examples of product safeguards that improve child safety online are:

  • Real age verification tools and procedures that require users to prove their age before accessing certain content (a minimal sketch of such a gate follows this list)
  • Reporting features that allow children, and all users, to flag harmful users
  • User consent software that enables users to control the data a platform can access, limiting threat actors' ability to exploit loopholes for harm
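As referenced in the first item above, here is a minimal sketch of a deny-by-default age gate. The `User` type and verification flow are invented for illustration; production age assurance typically relies on a dedicated verification provider rather than a simple attribute check.

```python
# A minimal sketch of an age gate applied before serving age-restricted
# content. Deny by default: users who have not completed verification never
# see restricted content. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class User:
    id: str
    verified_age: int | None  # None until age verification has completed

def can_view(user: User, content_min_age: int) -> bool:
    """Return True only if the user has a verified age meeting the minimum."""
    if user.verified_age is None:
        return False
    return user.verified_age >= content_min_age

# Example: an unverified account is blocked from 18+ content.
print(can_view(User(id="u1", verified_age=None), content_min_age=18))  # False
```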

For a better understanding of safety by design fundamentals and uses, access our guide to safety by design.

5. Scale with Contextual AI

To effectively detect and prevent dangers to children, AI is needed to handle the scale of harm. Reviewing content with contextual AI can reveal patterns the human eye cannot see. For example, a simple "how are you" may look innocent, but when AI combines it with additional information, such as the user's previous violations or the age of the users they interact with, a different picture emerges.
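To make the "how are you" example concrete, the sketch below shows how account context can change the risk of an otherwise innocuous message. The features and hand-set weights are invented for illustration; a production system would use a trained model rather than fixed weights.

```python
# A minimal sketch of "contextual" scoring: the same message yields a
# different risk score depending on account signals. Weights are invented.

def risk_score(message_flagged: bool, sender_prior_violations: int,
               sender_is_adult: bool, recipient_is_minor: bool) -> float:
    score = 0.2 if message_flagged else 0.0
    score += min(sender_prior_violations, 3) * 0.2  # violation history raises risk
    if sender_is_adult and recipient_is_minor:
        score += 0.3                                # risky age pairing
    return min(score, 1.0)

# "How are you" alone scores 0.0, but the same text from a previously
# violating adult messaging a minor crosses a plausible review threshold.
print(risk_score(False, 0, False, False))  # 0.0
print(risk_score(False, 2, True, True))    # 0.7
```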

Furthermore, when it comes to CSAM, contextual AI weighs many factors to flag content accurately. Classifiers and machine learning models analyze text, images, and logos for nudity and age signals, allowing them to flag CSAM, harassment, and hate with high precision. For instance, text models detect harmful text even when it is intentionally misspelled, such as "stoopid," or spaced out, like "u g l y."
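As a rough illustration of obfuscation handling, the sketch below normalizes spaced-out words and fuzzy-matches misspellings against a tiny lexicon. Real models learn this robustness from data; this rule-based version only demonstrates the idea, and the lexicon and threshold are invented.

```python
import re
from difflib import SequenceMatcher

# A tiny, illustrative harassment lexicon; real lists are expert-curated.
HARASSMENT_LEXICON = {"stupid", "ugly"}

def normalize(text: str) -> str:
    """Undo a simple obfuscation: rejoin letters split by spaces."""
    t = text.lower()
    # "u g l y" -> "ugly"
    return re.sub(r"\b(?:[a-z] ){2,}[a-z]\b",
                  lambda m: m.group(0).replace(" ", ""), t)

def looks_harmful(token: str, threshold: float = 0.75) -> bool:
    """Fuzzy-match a token against the lexicon to catch misspellings."""
    return any(SequenceMatcher(None, token, word).ratio() >= threshold
               for word in HARASSMENT_LEXICON)

# Flags both "stoopid" (misspelling) and the rejoined "ugly".
text = "you are stoopid and u g l y"
print([w for w in normalize(text).split() if looks_harmful(w)])
```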

To learn more about AI content moderation, read our blog on AI and human content detection tools.

Conclusion

The online world increasingly places children in harm's way, and Trust & Safety teams seeking to maintain child safety online are tasked with improving and scaling their efforts. By understanding the wide range of threats facing children and implementing the right processes and procedures, teams can proactively protect children, deterring harm before it happens.

ActiveFence provides Trust & Safety teams with the tools and research they need to keep children safe. Access a sample of our intelligence reporting about identifying predators on instant messaging platforms below.
