How to Detect Coercive Cyberbegging

By Maya Lahav
March 1, 2023

ActiveFence’s Human Exploitation Lead Researcher Maya Lahav examines a rising trend in online behavior involving victims of sickness, poverty, or war who are coerced into recording and soliciting donations. This harmful trend exploits some of the most vulnerable in the global community, monetizing the suffering of those who cannot legally consent.


Year after year, people increasingly opt to donate online to charitable causes. Crowdfunding and social media platforms with built-in fundraising features have helped facilitate this shift in philanthropic giving. Alongside this positive trend, a coercive pattern has developed in which victims of sickness, poverty, or war are recorded and used to solicit donations despite lacking the capacity to give consent.

A Question of Consent

Consent is a fundamental stress test that must be used to evaluate the nature of online behaviors. 

For example, while adult pornography is generally legal around the world and often permissible on online platforms, the same type of recording created without the featured person's knowledge is treated as something wholly separate. Such material is classified as non-consensual intimate imagery (NCII): not only is it not permitted on platforms, it is also illegal.

In the context of requests for donations, a person may agree to be featured in material soliciting funds. When that choice is taken away from them, however, whether because they are too young to consent, too sick, or in distress, the content is classified as human exploitation. This exploitative content is often, but not exclusively, produced by threat actors seeking to monetize suffering and generate profits online.

A Digital Presentation of an Old Threat

Threat actors are leveraging the plight of vulnerable individuals, families, and even communities. They use photographs and video recordings of at-risk people to solicit donations from which they profit. To increase revenues, threat actors generate emotive content that exploits the suffering of sick or malnourished children and at-risk adults. This content is disseminated online and across social media, accompanied by requests for money.

The subjects of this material often cannot offer consent and have no control over the funds that are donated. In many cases, these at-risk individuals will not receive the funds at all, or will receive only a small share of the charitable donations solicited by the activity. This is despite the threat actors frequently posing as regulated charitable organizations or private charitable fundraisers.

This coercive cyberbegging (sometimes called e-panhandling) impacts many platforms, including social media, website hosting, crowdfunding, and payment processing services. It presents a distinct set of online behaviors, awareness of which is essential for moderators seeking to detect harmful on-platform chatter and its related activity.

Abusing the Most Vulnerable

Geopolitical events catalyze coercive cyberbegging activity, with accounts showcasing the extreme economic need of those living in refugee camps and the devastating impact of natural disasters such as floods or earthquakes.

Accounts on live stream platforms, or those with livestream features, showcase children and vulnerable adults with severe illnesses or disabilities, as well as those living in dire conditions. They share footage of at-risk persons coerced into begging for hours, or exploitatively show them in distress to convince viewers to donate. The accounts claim that the funds collected will help alleviate severe financial needs or life-threatening medical conditions. Other threat actor accounts amplify the initial recording by re-posting the content or directing followers to watch the material in evergreen posts.

Fraudulent Presentations

A significant portion of coercive cyberbegging both exploits at-risk people and is fraudulent. It is, therefore, key to distinguish between accounts fundraising with good intentions and those operating under false pretenses. Threat actors commonly claim that NGOs and other registered charitable organizations operate their accounts. An important check to counter coercive begging is therefore to confirm that (a minimal verification sketch follows this list):

  • the NGO’s license is active;
  • the charity has external, off-platform validation, such as a functional website or legal registration as a non-profit organization.
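The checks above can be operationalized as a simple screening step. Below is a minimal, illustrative sketch in Python; the FundraiserProfile fields and the verification logic are assumptions made for illustration, since real verification would query official charity registries and the platform's own account data.

```python
# Minimal sketch of the license / off-platform validation checks above.
# The profile fields and logic are illustrative assumptions; production
# verification would query official charity registries and platform data.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FundraiserProfile:
    claimed_ngo_name: str
    license_status: Optional[str]          # e.g. "active", "revoked", or None if unverified
    website_url: Optional[str]             # off-platform presence, if any
    nonprofit_registration_id: Optional[str]

def verification_flags(profile: FundraiserProfile) -> List[str]:
    """Return red flags for an account claiming charitable status."""
    flags = []
    if profile.license_status != "active":
        flags.append("NGO license is not verifiably active")
    if not profile.website_url and not profile.nonprofit_registration_id:
        flags.append("no external, off-platform validation (website or legal registration)")
    return flags

# Example: an account claiming charitable status with nothing verifiable behind it
suspect = FundraiserProfile("Helping Hands Relief", None, None, None)
print(verification_flags(suspect))   # both red flags raised
```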

Trust & Safety platforms should monitor circumvention techniques, which may signal coordinated network activity. Cross-platform activity with similarly named accounts and parallel content also points to coordinated fraudulent operations, even in cases where the content is shared from individual accounts. Primary accounts can be a gateway to multiple off-platform payment systems, including links to bank account information, fundraising websites, and digital payment platforms. By tracking this cross-platform activity, trust & safety teams can effectively detect this harmful content, and ensure that their platforms are not misused for harm.

Countering Exploitation

Understanding that this exploitative activity is present on major tech platforms is the first step in countering it. 

As Trust & Safety teams look for identifiable patterns of intentionally deceptive behavior, some activity used to amplify the content’s reach indicates a direct nexus to cyberbegging. Cataloging these can be used to detect future emerging examples of this damaging activity.

Signifiers include appeals for donations to broad fundraising causes, such as helping "poor children in Africa," where requests for donations are linked to sweeping pleas to "help children stay alive." Relevant hashtags may also include (see the keyword-matching sketch after this list):

  • Keywords related to children in refugee camps; these can be general, such as "#childrefugee," or reference specific refugee camps by name;
  • Keywords related to broad economic need in the least developed countries (LDCs): #poorchildren;
  • Natural disasters: #flood, #famine, #drought;
  • Terminal illness hashtags: #terminal, #cancerchild, and #donate.
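The list above lends itself to a straightforward keyword-matching pass. The sketch below assumes an illustrative watchlist of hashtags grouped by theme; in practice the lists would be curated and localized, and a match should route a post to human review rather than trigger automatic enforcement.

```python
# Illustrative hashtag watchlist matching for the signifiers listed above.
# The watchlist contents are examples only; a hit is a signal for review.
import re
from typing import Dict, Set

WATCHLIST: Dict[str, Set[str]] = {
    "refugee": {"#childrefugee", "#refugeecamp"},
    "economic_need": {"#poorchildren"},
    "natural_disaster": {"#flood", "#famine", "#drought"},
    "terminal_illness": {"#terminal", "#cancerchild", "#donate"},
}

def flag_hashtags(post_text: str) -> Dict[str, Set[str]]:
    """Return the watchlist categories matched by hashtags in a post."""
    hashtags = {tag.lower() for tag in re.findall(r"#\w+", post_text)}
    hits = {}
    for category, tags in WATCHLIST.items():
        matched = hashtags & tags
        if matched:
            hits[category] = matched
    return hits

post = "Please help children stay alive #donate #cancerchild #flood"
print(flag_hashtags(post))
# e.g. {'natural_disaster': {'#flood'}, 'terminal_illness': {'#cancerchild', '#donate'}}
```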

Conclusion

Coercive cyberbegging has become increasingly prevalent, given the reach of online platforms and threat actors' ability to evade detection.

At its core is the exploitation of some of the most vulnerable in the global community, monetizing the suffering of those who cannot legally consent. Trust & Safety teams should be aware of the intrinsically fraudulent and exploitative practices that pose a risk to their platforms and communities. Conducting deep threat intelligence to track and analyze the activity of these threat actor communities is essential for platforms to strengthen detection and moderation and enhance mitigation capabilities.

Want to learn more about the threats facing your platform? Find out how new trends in misinformation, hate speech, terrorism, child abuse, and human exploitation are shaping the Trust & Safety industry this year, and what your platform can do to ensure online safety.
