ActiveFence's Human Exploitation Lead Researcher Maya Lahav examines a rising trend in online behavior in which victims of sickness, poverty, or war are coerced into being recorded and used to solicit donations. This harmful trend exploits some of the most vulnerable members of the global community, monetizing the suffering of those who cannot legally consent.
Year after year, more people opt to donate online to charitable causes. Crowdfunding and social media platforms with built-in fundraising features have helped facilitate this shift in philanthropic giving. Alongside this positive trend, however, a coercive pattern has developed in which victims of sickness, poverty, or war are recorded and used to solicit donations without the capacity to give consent.
Consent is a fundamental stress test for evaluating the nature of online behaviors.
For example, while adult pornography is generally legal around the world and often permissible on online platforms, recordings of the same type created without the featured person's knowledge are treated as wholly separate. Classified as non-consensual intimate imagery (NCII), they are not only prohibited on platforms but also illegal.
In the context of requests for donations, a person may agree to be featured in content used to solicit funds. When that choice is taken away from them, however, because they are too young to consent, too sick, or in distress, the content is classified as human exploitation. This exploitative content is often, though not exclusively, used by threat actors seeking to monetize suffering and generate profits online.
Threat actors are leveraging the plight of vulnerable individuals, families, and even whole communities. They use photographs and video recordings of at-risk people to solicit donations from which they profit. To increase revenues, threat actors generate emotive content that exploits the suffering of sick or malnourished children and vulnerable adults. This content is disseminated across websites and social media, accompanied by requests for money.
The subjects of this material often cannot offer consent and have no control over the funds donated. In many cases, these at-risk individuals will receive none, or only a small share, of the charitable donations the activity solicits. This is despite threat actors frequently posing as regulated charitable organizations or private charitable fundraisers.
This coercive cyberbegging (sometimes called e-panhandling) affects many platforms, including social media, website hosting, crowdfunding, and payment processing services. It presents a distinct set of online behaviors, awareness of which is essential for moderators seeking to detect harmful on-platform chatter and related activity.
Geopolitical events catalyze coercive cyberbegging activity, with accounts demonstrating the extreme economic need of those living in refugee camps and the devastating impact of natural disasters such as floods or earthquakes.
Accounts on livestream platforms, or those with livestream features, showcase children and vulnerable adults with severe illnesses or disabilities, as well as those living in dire conditions. They share footage of at-risk persons coerced into begging for hours, or exploitatively show them in distress to convince viewers to donate. The claim is that the funds collected will help alleviate severe financial need or life-threatening medical conditions. Other threat actor accounts amplify the initial recording by re-posting the content or directing followers to watch the material in evergreen posts.
A significant portion of coercive cyberbegging both exploits at-risk people and is fraudulent. It is therefore key to distinguish between accounts fundraising with good intentions and those operating under false pretenses. Threat actors routinely claim that NGOs and other registered charitable organizations operate their accounts, so an important check in countering coercive begging is to verify that a named charitable organization genuinely operates the account.
Trust & Safety platforms should monitor circumvention techniques, which may signal coordinated network activity. Cross-platform activity with similarly named accounts and parallel content also points to coordinated fraudulent operations, even in cases where the content is shared from individual accounts. Primary accounts can be a gateway to multiple off-platform payment systems, including links to bank account information, fundraising websites, and digital payment platforms. By tracking this cross-platform activity, trust & safety teams can effectively detect this harmful content, and ensure that their platforms are not misused for harm.
Understanding that this exploitative activity is present on major tech platforms is the first step in countering it.
As Trust & Safety teams look for identifiable patterns of intentionally deceptive behavior, some activity used to amplify the contentโs reach indicates a direct nexus to cyberbegging. Cataloging these can be used to detect future emerging examples of this damaging activity.
Signifiers include appeals for donations to broad fundraising causes, such as helping "poor children in Africa," where requests for donations are linked to broad pleas to "help children stay alive." Relevant hashtags often accompany these requests.
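To make the cataloging idea concrete, the sketch below screens post text against a small set of broad-appeal signifiers. The phrase patterns are hypothetical stand-ins; a real catalog would be curated and continually updated by a Trust & Safety team.

```python
# Hypothetical signifier catalog -- these patterns are illustrative
# stand-ins, not a real moderation list.
import re

BROAD_APPEAL_SIGNIFIERS = [
    r"poor children in \w+",
    r"help (the )?children stay alive",
    r"donate (now|today) to save",
]
_PATTERNS = [re.compile(p, re.IGNORECASE) for p in BROAD_APPEAL_SIGNIFIERS]

def matches_broad_appeal(text: str) -> bool:
    """True if the text matches any cataloged signifier.
    A match routes the post to human review, not automatic removal."""
    return any(p.search(text) for p in _PATTERNS)

print(matches_broad_appeal("Please donate now to save poor children in Africa"))  # True
print(matches_broad_appeal("Our registered charity's annual report"))             # False
```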
Coercive cyberbegging has become increasingly prevalent, given its potential reach and threat actors' ability to evade detection.
At its core is the exploitation of some of the most vulnerable in the global community: it monetizes the suffering of those who cannot legally consent. Trust & Safety teams should be aware of these intrinsically fraudulent and exploitative practices, which pose a risk to their platforms and communities. Conducting deep threat intelligence to track and analyze the activity of these communities is essential for platforms to strengthen detection, moderation, and mitigation capabilities.
Want to learn more about the threats facing your platform? Find out how new trends in misinformation, hate speech, terrorism, child abuse, and human exploitation are shaping the Trust & Safety industry this year, and what your platform can do to ensure online safety.