The landscape of online exploitation targeting minors is undergoing a rapid transformation, as sextortion becomes a critical threat.
Rising at an alarming rate, the act of sextortion is evolving in never-before-seen ways, with shifts in attacker profiles and motivations, and, subsequently, victim profiles as well. A crime that frequently goes unreported, the changes in its presentation have given rise to a new set of multifaceted challenges, including difficulties in cross-platform detection, and numerous enforcement, jurisdictional, and technological hurdles. This article takes a data-driven approach to assessing the challenges of the evolving sextortion landscape and presents ActiveFence’s solution to mitigating this unique online harm. Focused on providing online platforms with cutting-edge tools and intelligence to identify threat actors before they cause harm, our approach offers a comprehensive strategy for protecting vulnerable users.
Thirteen-year-old Anthony has begun chatting online with a new friend, Sam. Anthony is exploring his sexual orientation, and the conversation between the two teens quickly becomes sexually charged as they exchange intimate photos and personal details. Two weeks later, they plan to meet, and Sam asks for money to fund his trip. When Anthony says he can’t afford it, Sam threatens to “out him” to his family. Scared of the consequences, Anthony manages to get hold of the money, but soon enough, more requests for money and threats to share his personal information and images come through.
The textbook act of sextortion described above played out thousands of times last year, destroying the lives of teenagers and leading at least a dozen to take their own lives. In this story, Anthony, the victim, was exploited when “Sam,” a catfisher, used various fake profiles and accounts to trick Anthony into believing he was talking with a teenage boy and sharing intimate images, only to later use that material for personal gain.
The motivations for sextortion are varied: while some conduct this activity for financial gain, others are driven by a desire for power, control, or sexual gratification. Regardless of motivation, the process for sextortion is generally the same:
Step 1: The catfisher befriends their victim, and gains their trust.
Step 2: Using that trust, the catfisher obtains intimate photos, videos, or other private information about their victim.
Step 3: Having acquired the material, the perpetrator threatens its release, leveraging the fear of public humiliation to manipulate the victim.
Step 4: Caught in this snare, the victim often feels isolated, as they are faced with the harrowing decision of complying with the demands or risking public exposure.
This digital-age crime underscores the dangerous intersections of technology, privacy, and human vulnerability.
While sextortion has been known and tracked since at least the 1990s, it is an evolving practice that has changed dramatically in recent years. The following are three significant changes we’ve seen in the past year:
An overall increase in volume – The practice of sextortion is on the rise, dramatically so. According to the FBI, there were over 7,000 reports of sextortion by over 3,000 individual victims in 2022: a staggering 463% increase over 2021. This growth has had frightening outcomes, with over a dozen sextortion-related suicides in 2022.
Our own monitoring of the deep web supports this trend and points to even more worrying numbers ahead: in 2021, a forum for sextortion victims featured only 2,150 testimonies of abuse. By 2022 that number had tripled to 6,500, and as of September 2023, it has already risen to 9,500: a 340% increase over 2021, with the year not yet over. Using these numbers, we project over 13 thousand testimonies by year’s end: a more than 6X increase in just two years.
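The projection above can be sketched as a simple extrapolation. Note that assuming a strictly constant monthly rate (our simplifying assumption, not necessarily the method used for the figures above) yields roughly 12,700; the "over 13 thousand" figure presumably assumes the accelerating trend continues:

```python
# A minimal sketch of the year-end projection, assuming a constant
# monthly rate of new testimonies (a simplifying assumption).
def project_year_end(count_so_far: int, months_elapsed: int) -> int:
    """Linearly extrapolate a partial-year count to a full year."""
    return round(count_so_far / months_elapsed * 12)

projected_2023 = project_year_end(9500, 9)   # 9,500 testimonies by end of September
growth_vs_2021 = projected_2023 / 2150       # against the 2021 baseline of 2,150
print(projected_2023, round(growth_vs_2021, 1))  # ~12,667, roughly a 6x increase
```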
Shifting attacker profiles – Another meaningful change in the online sextortion world is the growing involvement of a different type of attacker. Once aimed solely at obtaining explicit content and sexual gratification, sextortion has become an industry involving criminals who seek financial gain. This shift shows up in the numbers: our comprehensive analysis of 300 sextortion cases targeting minors in the US over the past decade reveals that while none of the 2015 cases were financially motivated, the share rose to 10% in 2018 and 55% in 2022-2023: a dramatic shift toward financially motivated crime.
The financial prospect of this crime has brought in new actors: international organized crime. Traditionally, the share of sextortion schemes involving perpetrators from Africa and Southeast Asia was marginal, according to our aforementioned analysis. However, this pattern has evolved significantly: in 2022-2023, a staggering 60% of financially motivated sextortion schemes originated from outside the US. Our analysis of 27,000 posts by victims of sextortion over the past decade reveals that in 2018, mentions of the United States dominated with 893 instances, indicating that a substantial portion of catfishers operated from within the US. Over time, however, there was an observable surge in mentions of African countries, predominantly Mali and Burkina Faso. By 2023, the dynamic had notably shifted: the US had dropped to 12th position with only 13 mentions, while Southeast Asian and African countries, particularly the Philippines and Mali, appeared to be the new predominant locations from which catfishers operated. This underscores the growing influence of these regions in the realm of online extortion.
New victim profiles – As attackers and motivations have shifted, so have the traditional sextortion victims. Back in 2015, data from the National Center for Missing and Exploited Children indicated that 76% of sextortion victims were female. That has changed: in 2023, the FBI noted an uptick in the targeting of 13-17 year-old boys. In fact, following a recent surge of online extortion complaints, the Australian Federal Police noted that young males constituted 90% of those affected by what they term financial sextortion. This evolving focus is especially concerning in today’s digital landscape, where an estimated 84% of children between the ages of 13 and 18 are active on social media, according to the Internet Crimes Against Children Task Force Program. Alarmingly, a substantial number of even younger children are also online, offering a broader and more vulnerable field for perpetrators to exploit.
Addressing sextortion in the online sphere requires a multifaceted strategy that considers both its human and technological dimensions. However, certain market-specific barriers make crafting a comprehensive solution particularly challenging. These include the following:
Fear of reporting – Unlike victims of other types of fraud or extortion, victims of sextortion have a strong incentive to refrain from reporting their cases to law enforcement, as they fear the humiliation of having their intimate content spread on the internet. This is especially true of minors, which is one reason they are the primary targets of sexual extortion schemes.
Cross-platform detection – Sextortion schemes often span multiple platforms, making it difficult for the trust & safety team of any individual platform to link the isolated activity it sees to the broader malicious scheme. Imagine a social media platform witnessing an engagement between two teenagers, a messaging platform facilitating the exchange of pleasantries, a file-sharing platform enabling the sharing of pictures, and a financial services app processing the exchange of money. None of these activities is worrying in and of itself; only when they are seen together do they raise concerns.
Enforcement – Sextortion of minors lies at the nexus of fraud, extortion, and the production and distribution of child sexual abuse material. This creates an enforcement challenge for tech companies and law enforcement agencies, resulting in many cases that fall through the cracks.
Jurisdiction – As international organized crime becomes increasingly involved in sextortion schemes, law enforcement agencies are limited in their ability to act by jurisdictional problems. In many cases, the extortionist, tech platforms, and victim are located in different geographical regions. Even when law enforcement agencies cooperate across borders, tech platforms may refuse to disclose their users’ private information in the absence of clear proof that the activity on their platform was directly related to a larger sextortion scheme.
Technological improvements and content manipulation – Technological advancements, specifically new AI technologies, have enabled perpetrators to reach a wider audience by manipulating content. It is now easier than ever for extortionists to create convincing fake materials like deepfake profiles, and carry out more sophisticated sextortion schemes, further complicating the detection and enforcement efforts by law enforcement and tech platforms.
Increased use of the internet to find romantic partners – With the growing popularity of online dating apps and social media platforms, minors are increasingly using the internet to seek romantic relationships and connections. This trend has made them more susceptible to encountering malicious actors who exploit their trust, leading to an uptick in sextortion cases where perpetrators exploit the intimate content shared during such interactions.
As described above, sextortion is a complex global problem – and its solution must also be complex, involving a combination of proactive intelligence and algorithmic detection.
In an intelligence-based approach, platforms seeking to quickly and effectively identify sextortion perpetrators, or catfishers, would use a database of indicators to continuously search for malicious accounts. These indicators may include data points such as catfisher messages, usernames, profile and cover images, bio details, phone numbers, email addresses, IP addresses, and types of content used in sextortion schemes. When these data points show up in the platform’s content, the platform can swiftly take action – often before a crime has been committed.
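The core of such indicator matching can be sketched as a set lookup over per-account signals. This is a minimal illustration, not ActiveFence's implementation; the field names, indicator feed, and sample data are all hypothetical:

```python
# A minimal sketch of indicator-based detection, assuming the platform can
# flatten account records into (kind, value) signals. All data is illustrative.
KNOWN_INDICATORS = {
    ("email", "catfish@example.com"),   # hypothetical entries from an indicator feed
    ("ip", "203.0.113.7"),
    ("image_hash", "a1b2c3"),
}

def account_signals(account: dict):
    """Yield (kind, value) signals extracted from an account record."""
    for kind in ("username", "email", "ip"):
        if account.get(kind):
            yield (kind, account[kind])
    for h in account.get("image_hashes", []):
        yield ("image_hash", h)

def match_indicators(account: dict) -> list:
    """Return every known sextortion indicator this account matches."""
    return [s for s in account_signals(account) if s in KNOWN_INDICATORS]

suspect = {"username": "sam_2009", "email": "catfish@example.com",
           "ip": "203.0.113.7", "image_hashes": ["zzz"]}
print(match_indicators(suspect))  # matches on email and ip -> flag for action
```

In practice the indicator set would be refreshed continuously from intelligence sources, and image matching would use perceptual hashing rather than exact values.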
The challenge with this type of approach is access to data. Since these activities span multiple platforms, and discussions about them take place on deep web forums, platforms frequently don’t have access to this information. ActiveFence’s solution to combat sextortion is unique in that it uses a comprehensive method to acquire these crucial data points. Our multi-pronged approach uses:
A second component in the approach to sextortion detection involves collaborating with online platforms to develop profiling and behavioral analysis algorithms. These algorithms aim to identify potential catfishers based on a set of behavioral patterns and account characteristics. For example, a catfisher may often masquerade as a teenage girl targeting teenage boys, engage with them in a certain way or frequency, or move from one platform to another. Characteristics of these profiles and engagements may include being newly created, having no shared friends with the victim, or sending a high volume of messages that are romantic in nature to multiple individuals.
By continuously analyzing the markers of known catfishers, we can use these characteristics to create powerful algorithms to detect catfishers, adjusting the algorithms based on new insights and characteristics, as they surface. Once these algorithms identify a potential match, these accounts are flagged for further investigation, making it possible to preempt sextortion attempts before they escalate.
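The behavioral markers described above lend themselves to a rule-based scoring pass as a first filter. The sketch below is purely illustrative: the features, weights, and threshold are placeholders of our own, not ActiveFence's algorithms, which in practice would be tuned against labeled cases:

```python
# Illustrative rule-based risk scoring over the behavioral markers described
# in the text; all weights and thresholds here are placeholder assumptions.
def catfisher_risk_score(account: dict) -> float:
    score = 0.0
    if account.get("account_age_days", 9999) < 7:
        score += 0.3                       # newly created profile
    if account.get("shared_friends", 1) == 0:
        score += 0.2                       # no mutual connections with the contact
    if account.get("romantic_msgs_per_day", 0) > 20:
        score += 0.3                       # high volume of romantic messages
    if account.get("distinct_recipients", 1) > 10:
        score += 0.2                       # messaging many individuals at once
    return score

FLAG_THRESHOLD = 0.6  # placeholder; a real system tunes this on labeled data

suspect = {"account_age_days": 2, "shared_friends": 0,
           "romantic_msgs_per_day": 50, "distinct_recipients": 25}
print(catfisher_risk_score(suspect) >= FLAG_THRESHOLD)  # True -> flag for review
```

Flagged accounts would then go to human investigators, and the rules would be adjusted as new catfisher characteristics surface.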
In tackling the evolving menace of sextortion, a holistic and forward-thinking solution is more vital than ever. ActiveFence’s approach to sextortion detection uses actionable intelligence to equip online platforms to combat the challenges of victim underreporting, enforcement, and cross-platform detection. With sextortion increasingly becoming a cross-border issue, our ability to surface critical data points sets a new standard in proactively identifying catfishers and combating this form of exploitation. By addressing these systemic issues in a coherent and comprehensive manner, ActiveFence paves the way for more secure digital spaces and greater user trust.
Learn more about ActiveFence’s intelligence-driven solution to detecting and stopping sextortion and other online harms.