Non-Graphic Online Child Safety Violations: A Platform Blindspot

November 29, 2022
Child Safety

While user-generated content (UGC) platforms are keenly aware of the dangers of graphic child sexual abuse material (CSAM), they often overlook its non-graphic counterpart. These abuses, which frequently fly under the radar, exist on UGC platforms in far larger quantities than graphic CSAM, and because they are complex and nuanced, they are much harder to detect.

What Are Non-Graphic Child Sexual Violations?

Non-graphic child sexual violations involve non-graphic CSAM, a catch-all term that covers several different types of content, including audio, text, and artistic renderings.

Audio: Audio CSAM is a popular genre among certain pedophile communities. Audio clips may be recordings of an erotic story involving or centered on minors, an erotic scene narrated by a child, a retelling of a scene depicting the sexual exploitation of a minor, or simply non-specific, sexually suggestive sounds made by children.

Text: Some pedophile communities disseminate text-based CSAM, typically written erotic stories involving minors. To evade detection techniques and technologies, predator communities have developed their own unique languages and may use specific terminology or code words in text-based CSAM.

Art: In some pedophile communities, cartoons, paintings, and other forms of exploitative art depicting minors are shared to express fantasies or obtain sexual gratification. While visually based, this type of material is not considered graphic, since it consists of depictions rather than records of actual acts.

Types of non-graphic CSAM

The Purpose of Non-Graphic CSAM

For pedophile communities online, there are obvious benefits to sharing non-graphic CSAM. Detecting this material is more complicated than detecting graphic imagery, so predators use the formats above to communicate and disseminate material with less risk of exposure. However, non-graphic CSAM serves other purposes as well.

Sex Trafficking: While pedophile communities online are largely used to share CSAM – graphic or otherwise – these digital spaces are also exploited by sex traffickers, who may use platforms to share select information about trafficking victims, their locations, and the services offered. Like predators, these traffickers may use highly specific language or codes to communicate under the radar and evade detection. They may also use these predator communities as a gateway to encrypted and cloud-based services, where they can share more in-depth information about victims, arrange meetings, and accept payment.

Community Building: Pedophiles, like other social media users, aim to build and sustain vibrant communities on mainstream platforms. To identify other like-minded individuals, they may use esoteric language and codewords or symbols associated with a particular group. Members of these communities take advantage of their proximity to a platform’s general population to try to legitimize pedophilia, draw comparisons with marginalized communities, or share pseudoscientific articles that support romantic or intimate relations between adults and minors.

Grooming: Grooming, a high-harm violation that does not involve sharing graphic content, takes place in large volume on mainstream social media platforms. Predators use social media to establish first contact with victims and then try to move the conversation to less-moderated platforms. The Child Crime Prevention & Safety Center estimates that there are 500,000 predators active online every day, putting millions of children at risk. With half a million of these individuals communicating in a variety of languages, each with its own set of unique dialects, nuances, and codes, moderation teams and their technologies face an immense challenge in ensuring digital safety.

Why Is It So Hard To Detect?

To effectively detect and remove graphic CSAM and ban the users that share it, UGC platforms typically employ three main methods. First, they funnel all uploaded image-based content through image recognition algorithms and hash-matching databases that can automatically detect and remove graphic CSAM. Second, they offer user flagging features, enabling users to report potentially violative content. Third, they route suspicious or flagged content to human moderators, who then decide whether it warrants removal. With the right technology in place, those decisions can be used to train AI algorithms to better screen future uploads for possible violations, making for a more effective moderation process.
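
To make the division of labor concrete, here is a minimal Python sketch of that three-step pipeline. It is illustrative only: the names (Upload, moderate, KNOWN_VIOLATION_HASHES) are hypothetical, and a plain SHA-256 lookup stands in for the perceptual-hash matching a production system would actually use.

```python
import hashlib
from dataclasses import dataclass

# Known-violation hash database. In production this would be a vetted,
# industry-maintained set of perceptual hashes, not raw SHA-256 digests;
# SHA-256 is used here only to keep the sketch simple.
KNOWN_VIOLATION_HASHES: set = set()

@dataclass
class Upload:
    upload_id: str
    content: bytes
    user_flags: int = 0  # number of user reports against this item

@dataclass
class ModerationResult:
    upload_id: str
    action: str          # "auto_removed", "queued_for_review", or "allowed"
    reason: str = ""

def moderate(upload: Upload, flag_threshold: int = 1) -> ModerationResult:
    # 1. Automatic hash matching against the known-violation database.
    digest = hashlib.sha256(upload.content).hexdigest()
    if digest in KNOWN_VIOLATION_HASHES:
        return ModerationResult(upload.upload_id, "auto_removed", "hash_match")

    # 2. User flagging: anything reported by users goes to human review.
    if upload.user_flags >= flag_threshold:
        return ModerationResult(upload.upload_id, "queued_for_review", "user_flagged")

    # 3. Nothing matched or was flagged; the item stays up (for now).
    return ModerationResult(upload.upload_id, "allowed")

def record_review_decision(upload: Upload, violative: bool, training_labels: list) -> None:
    # Human decisions become labeled examples that can later train or
    # fine-tune detection models, closing the loop described above.
    training_labels.append((upload.content, violative))
    if violative:
        KNOWN_VIOLATION_HASHES.add(hashlib.sha256(upload.content).hexdigest())
```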

Despite this well-oiled moderation machine, pedophiles still manage to exploit platforms to share CSAM, groom minors, and build predator communities. Detection of non-graphic CSAM in particular is challenging, since the image recognition algorithms and hash databases that would normally flag content automatically miss these violations entirely. For audio-based abuses especially, technologies capable of detecting violations are only now emerging. The bigger issue, though, is that Trust & Safety teams may not even be aware of the types of abuses occurring on their platforms, leaving them unable to detect, thwart, and remove malicious content and accounts. Only with specific intelligence about this trend, its pervasiveness, and its tactics can platforms adequately protect users against it and include its detection and removal in their broader content policy. No technology or strategy will be effective if it is geared only toward a specific set of violations; the ones that fall outside those parameters will be missed, and platforms will continue to be exploited.

What Can Be Done?

Cross-platform research: Threat actors committing child safety violations often operate on several platforms to increase their reach. By tracking child safety violations on other platforms, Trust & Safety teams can anticipate risks and block these users before they migrate to their own platform. Additionally, by tracking predators at their source, Trust & Safety teams can identify leads relevant to their own platforms, gain insight into behavioral patterns, and apply these learnings to find threat actors already exploiting their services.

Lead investigation: The great majority of items and users removed from UGC platforms on the grounds of child safety violations are removed automatically, without human intervention. Because of this, these items and the users that upload them are rarely investigated, and the circumvention techniques behind them go unexamined. Only by monitoring the increasingly sophisticated and ever-changing circumvention techniques and terminologies in use can moderation teams understand and prevent them.

Product flexibility: To remove content at scale, Trust & Safety teams must use highly advanced tools and products. From the outset, platforms should be built with the principle of safety by design, which puts user safety at the forefront. As digital spaces evolve and new technologies are introduced, Trust & Safety teams should keep their products adaptable. New features should be incorporated to reduce the lifetime of violations, shorten removal times, make detection more efficient, and help teams work more effectively.
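
If reducing the lifetime of violations is the goal, it helps to measure it explicitly. The snippet below is a hypothetical sketch of such a metric, assuming an export of moderation records with detection and removal timestamps; the field names and function are illustrative, not an existing tool.

```python
from datetime import datetime
from statistics import median

def violation_lifetimes(events: list) -> dict:
    """Compute time-to-removal stats from detection/removal timestamp pairs."""
    lifetimes = [
        e["removed_at"] - e["detected_at"]
        for e in events
        if e.get("removed_at") is not None
    ]
    if not lifetimes:
        return {"median": None, "worst": None, "removed": 0}
    return {
        "median": median(lifetimes),  # typical time a violation stayed live
        "worst": max(lifetimes),      # longest time a violation stayed live
        "removed": len(lifetimes),
    }

# Example: two items, removed after 12 minutes and 3 hours respectively.
sample = [
    {"detected_at": datetime(2022, 11, 1, 9, 0), "removed_at": datetime(2022, 11, 1, 9, 12)},
    {"detected_at": datetime(2022, 11, 1, 10, 0), "removed_at": datetime(2022, 11, 1, 13, 0)},
]
print(violation_lifetimes(sample))  # median = 1:36:00, worst = 3:00:00
```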

User Accountability: Banning users and removing their content are not, on their own, enough to stop the violative activity of threat actors. These users will either return to the platform with a new account or migrate to another platform; in either case, they may learn from the experience and employ new circumvention techniques to evade moderation. Abuses should be documented and made available in a knowledge-sharing system with counterparts at other platforms to prevent threat actors from operating across multiple platforms. This type of cross-platform cooperation may be a game-changer for the industry, and will undoubtedly be a positive step toward protecting younger users online. In addition, to maintain reliable deterrence, Trust & Safety teams need to cooperate with local law enforcement, sharing evidence that can assist in catching offenders. By blocking threat actors from operating cross-platform and assisting law enforcement, Trust & Safety teams can work effectively to eradicate online child safety violations.
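
As a rough illustration of what such a knowledge-sharing system might exchange, the hypothetical record below captures an abuse signal in a platform-neutral form: hashed account identifiers rather than raw personal data, plus the violation type and the circumvention tactics observed. The schema and field names are assumptions, not an existing industry standard.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SharedAbuseSignal:
    """Hypothetical cross-platform record describing a removed threat actor."""
    reporting_platform: str
    violation_type: str        # e.g. "grooming", "non_graphic_csam"
    account_handle_hash: str   # hashed, so no raw identifiers leave the platform
    circumvention_notes: str   # codewords, tactics, or evasion patterns observed
    reported_at: str           # ISO 8601 timestamp

def build_signal(platform: str, violation: str, handle: str, notes: str) -> SharedAbuseSignal:
    return SharedAbuseSignal(
        reporting_platform=platform,
        violation_type=violation,
        account_handle_hash=hashlib.sha256(handle.encode()).hexdigest(),
        circumvention_notes=notes,
        reported_at=datetime.now(timezone.utc).isoformat(),
    )

# Serialize for exchange with counterpart Trust & Safety teams.
signal = build_signal("example-platform", "grooming", "@removed_account",
                      "moved victims to an encrypted app using coded invitations")
print(json.dumps(asdict(signal), indent=2))
```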

Want to learn more about the threats facing your platform? Find out how new trends in misinformation, hate speech, terrorism, child abuse, and human exploitation are shaping the Trust & Safety industry this year, and what your platform can do to ensure online safety.
