Detecting Child Predator Abuse of Instant Messaging

Online child predator communities build networks across platforms, using instant messaging services to communicate covertly, share child sexual abuse material (CSAM), and coordinate abuse. This activity occurs in every language, placing children at risk worldwide.

In this report, we show how Trust & Safety teams can guard against this systematic abuse while preserving the privacy that legitimate users expect. The solution is an intelligence-led approach that combines:

  • Threat actor behavioral patterns (gathered from child-exploitation subject-matter experts);
  • Threat actor off-platform network activity;
  • Threat actor on-platform activity (matching of known CSAM keywords and their obfuscations, rules flagging suspicious biography entries, and other metadata signals).
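
As a minimal sketch of the keyword-obfuscation signal above, the snippet below normalizes common evasion tricks (character substitutions such as leetspeak, separator insertion, Unicode decorations) before matching against a watchlist. The watchlist terms, substitution table, and function names here are illustrative placeholders, not an actual production rule set.

```python
import re
import unicodedata

# Hypothetical substitution map for common leetspeak-style swaps.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                               "5": "s", "7": "t", "@": "a", "$": "s"})

# Placeholder watchlist; real deployments would use a vetted term list.
WATCHLIST = {"examplekeyword", "flaggedterm"}

def normalize(text: str) -> str:
    """Strip accents, fold case, undo substitutions, drop separators."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = text.lower().translate(SUBSTITUTIONS)
    # Collapse separators inserted between letters, e.g. "f.l-a_g".
    return re.sub(r"[\s.\-_*]+", "", text)

def hits(message: str) -> set[str]:
    """Return watchlist terms found in the normalized message."""
    norm = normalize(message)
    return {kw for kw in WATCHLIST if kw in norm}

print(hits("Ex4mple-K3yw0rd here"))  # matches despite obfuscation
```

In practice such rules are one signal among many; they feed into the combined scoring described above rather than triggering enforcement on their own.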

This intelligence enables teams to take targeted action against the flagged accounts and groups.