Real-time visibility, safety, and security for your GenAI-powered agents and applications
Proactively test GenAI models, agents, and applications before attackers or users do
Deploy generative AI applications and agents in a safe, secure, and scalable way with guardrails.
Proactively identify vulnerabilities through red teaming to produce safe, secure, and reliable models.
To learn more about how GenAI is being exploited by deceptive behaviour
As GenAI becomes more accessible and powerful, scammers are leveraging it to impersonate people, brands, and institutions with alarming ease. The result? Convincing scams that blur the line between real and fake, eroding trust, enabling fraud, and threatening safety at scale.
Here’s how impersonation abuse is unfolding across platforms, and what AI deployers should be watching for.
Impersonation scams now span formats, languages, and platforms. With GenAI tools, threat actors can:
These tactics are cheap, scalable, and increasingly hard to detect, especially as attackers mix AI-generated assets with real material to enable harm like human exploitation and the creation of Non-Consensual Intimate Imagery (NCII).
Impersonators exploit the identities of well-known figures or companies to make fraudulent platforms or offerings appear legitimate. They use logos, professional terminology, and familiar imagery to lend credibility to their fake content. Here are some common uses of impersonation:
Fake videos of public figures promoting crypto schemes remain a staple. In 2024, a wave of deepfakes featuring Elon Musk promoted bogus investment platforms using cloned voice and imagery. Victims were lured by familiar branding and persuasive scripts, despite the campaigns being entirely fabricated.Â
The allure of financial advice from a wealthy tech CEO proved irresistible to many. And Musk isn’t the only one whose image has been used in deepfake financial scams—other famous victims include Mark Zuckerberg and Dr. Phil.Â
Romance scams involving fake celebrity profiles have led victims to lose thousands of dollars. Scammers use GenAI to pose as celebrities or distant connections, building trust over time. Once the target is emotionally invested, they’re pressured to send money under false pretenses.
While these scams may seem less convincing than financial ones, many still fall for them. Fraudsters use grooming tactics and target vulnerable individuals who are less likely to recognize the deceit.
Using voice cloning and social engineering, fraudsters impersonate loved ones in distress, particularly targeting older adults. Victims are rushed into sending money before they can verify the situation.
In 2022, relative impostor scams were the second most common racket in the US, with over 2.4 million reports, according to the Federal Trade Commission (FTC), with over 5,100 incidents occurring over the phone and resulting in over $8.8 million in losses.
While less common than financial scams, misinformation campaigns involving impersonation are a significant issue, especially during election years and periods of political unrest. In these campaigns, political figures, parties, news agencies, and popular figures are impersonated to spread misleading or false narratives disguised as legitimate information.Â
Celebrities, news anchors, and politicians are common targets for impersonation due to the abundance of their real appearance and voice online. These impersonations serve various purposes, from satire to scams to deliberate disinformation.
Account hijacking is a crucial step in the multi-step process of Impersonation. Cybercriminals often target social media accounts with large followings, gaining access to real social media accounts through phishing or malware. Once in, they rebrand the account with a new identity, boosting credibility while spreading impersonation content to an existing audience.
Threat actors create fake organizations or personalities using AI-generated visuals, bios, and media. These accounts mimic legitimate sources, news outlets, NGOs, or companies, to spread false narratives or scam users.
In a real-life example, ActiveFence researchers identified a deceptive identity disguised as a legitimate news source on a prominent social platform. The account mainly published videos with AI-generated voiceovers of BBC articles over stock footage. Some videos featured a simulated presenter, while others used AI-generated studio backgrounds, presenters, and voices, creating a convincing illusion of authenticity.
Beyond the obvious risks like financial loss, personal information theft, and reputational damage to public figures and businesses, impersonation scams inflict deeper societal harms.Â
Impersonation harms individuals, communities, and institutions, and it’s becoming easier to scale with GenAI.
Impersonation threats won’t be solved by content filtering alone. Here’s how leading AI teams are adapting:
1. Proactively Red Team for Impersonation Abuse: Regular adversarial testing or safety evaluations of AI systems, across text, audio, image, and video, is becoming standard practice for AI Security. By simulating real-world abuse scenarios, teams can identify how easily their applications could be misused for impersonation and fine-tune risk detection accordingly.
2. Move Beyond Static Filters With Adaptive Guardrails:Â Legacy moderation tools often miss the subtle, evolving nature of impersonation tactics. Enterprises are increasingly implementing real-time, configurable, context-aware guardrails that respond dynamically to how identity-related abuse manifests in different formats and languages.
3. Operationalize Abuse Visibility Across the Stack:Â Flagging content isn’t enough. AI security teams need deep observability, tools that provide session-level insight into when, where, and how impersonation abuse is happening. This visibility is key to responding to incidents, adjusting risk policies, and closing feedback loops.
4. Align Defenses With Emerging Threats:Â Impersonation schemes evolve rapidly, often shaped by attacker collaboration and novel GenAI misuse. Staying ahead requires visibility into emerging abuse patterns, reinforced by real-world intelligence and continuous testing. Traditional fraud detection methods still play a role, especially when integrated into a broader, abuse-aware lifecycle.
5. Strengthen Verification and User Trust Infrastructure: Strong identity and content verification is foundational. AI deployers are doubling down on safeguards like multi-factor authentication, automated content provenance checks, and real-time monitoring to prevent impersonation at the account and interaction level.
The rise of deepfakes and synthetic media has made it easier than ever for scammers to impersonate, deceive, and cause harm. What used to take time and effort can now be done at scale using GenAI. But the challenge isn’t just about detecting fake content. The real issue is fraud and abuse. Addressing it means going beyond surface-level detection and building layered defenses across the AI lifecycle.
As abuse tactics evolve, platforms need to adapt quickly. Red teaming, observability, and smart guardrails are key to keeping users safe and trust intact.
Secure your AI today.
AI misuse isn’t hypothetical - it’s happening now. This blog introduces ActiveFence’s latest guide for operationalizing AI safety and security with six real-world strategies to move from principle to protection.
Threat actors are exploiting GenAI in the wild. Learn why true AI security must extend beyond infrastructure to detect and prevent real-world misuse.
Discover principles followed by the most effective red teaming frameworks.