How Scammers Are Abusing GenAI to Impersonate and Manipulate

By
June 26, 2025
A hooded figure stands in shadow behind a row of hyper-realistic face masks resembling well-known public figures. Each mask is lit in red and blue neon tones, suggesting themes of identity theft and AI-generated impersonation. The background is dark with subtle digital circuitry and binary code, evoking a cyber threat atmosphere.

To learn more about how GenAI is being exploited by deceptive behaviour

Read This report.

As GenAI becomes more accessible and powerful, scammers are leveraging it to impersonate people, brands, and institutions with alarming ease. The result? Convincing scams that blur the line between real and fake, eroding trust, enabling fraud, and threatening safety at scale.

Here’s how impersonation abuse is unfolding across platforms, and what AI deployers should be watching for.

What Is Impersonation in the Age of GenAI?

Impersonation scams now span formats, languages, and platforms. With GenAI tools, threat actors can:

  • Clone voices to make phone calls sound like trusted figures
  • Generate deepfake videos that convincingly mimic public personalities
  • Create fake personas that pose as news sources, officials, or loved ones
  • Produce misleading content in multiple languages to broaden reach

These tactics are cheap, scalable, and increasingly hard to detect, especially as attackers mix AI-generated assets with real material to enable harm like human exploitation and the creation of Non-Consensual Intimate Imagery (NCII).

Common Impersonation Scenarios

Impersonators exploit the identities of well-known figures or companies to make fraudulent platforms or offerings appear legitimate. They use logos, professional terminology, and familiar imagery to lend credibility to their fake content. Here are some common uses of impersonation:

Financial Scams

Fake videos of public figures promoting crypto schemes remain a staple. In 2024, a wave of deepfakes featuring Elon Musk promoted bogus investment platforms using cloned voice and imagery. Victims were lured by familiar branding and persuasive scripts, despite the campaigns being entirely fabricated. 

The allure of financial advice from a wealthy tech CEO proved irresistible to many. And Musk isn’t the only one whose image has been used in deepfake financial scams—other famous victims include Mark Zuckerberg and Dr. Phil. 

Romance and Catfishing 

Romance scams involving fake celebrity profiles have led victims to lose thousands of dollars. Scammers use GenAI to pose as celebrities or distant connections, building trust over time. Once the target is emotionally invested, they’re pressured to send money under false pretenses.

While these scams may seem less convincing than financial ones, many still fall for them. Fraudsters use grooming tactics and target vulnerable individuals who are less likely to recognize the deceit.

Relative/Friend Impersonation Scams

Using voice cloning and social engineering, fraudsters impersonate loved ones in distress, particularly targeting older adults. Victims are rushed into sending money before they can verify the situation.

In 2022, relative impostor scams were the second most common racket in the US, with over 2.4 million reports, according to the Federal Trade Commission (FTC), with over 5,100 incidents occurring over the phone and resulting in over $8.8 million in losses.

Misinformation Campaigns

While less common than financial scams, misinformation campaigns involving impersonation are a significant issue, especially during election years and periods of political unrest. In these campaigns,  political figures, parties, news agencies, and popular figures are impersonated to spread misleading or false narratives disguised as legitimate information. 

Celebrities, news anchors, and politicians are common targets for impersonation due to the abundance of their real appearance and voice online. These impersonations serve various purposes, from satire to scams to deliberate disinformation.

How Scammers Operate

Hijacked Accounts

Account hijacking is a crucial step in the multi-step process of Impersonation. Cybercriminals often target social media accounts with large followings, gaining access to real social media accounts through phishing or malware. Once in, they rebrand the account with a new identity, boosting credibility while spreading impersonation content to an existing audience.

Synthetic Personas

Threat actors create fake organizations or personalities using AI-generated visuals, bios, and media. These accounts mimic legitimate sources, news outlets, NGOs, or companies, to spread false narratives or scam users.

In a real-life example, ActiveFence researchers identified a deceptive identity disguised as a legitimate news source on a prominent social platform. The account mainly published videos with AI-generated voiceovers of BBC articles over stock footage. Some videos featured a simulated presenter, while others used AI-generated studio backgrounds, presenters, and voices, creating a convincing illusion of authenticity.

Risks Posed by Impersonation Scams

Beyond the obvious risks like financial loss, personal information theft, and reputational damage to public figures and businesses, impersonation scams inflict deeper societal harms. 

  • Erode trust in digital communication and media
  • Enable fraud, theft, and privacy violations
  • Amplifie misinformation during crises or elections
  • Undermine safety through NCII and targeted harassment

Impersonation harms individuals, communities, and institutions, and it’s becoming easier to scale with GenAI.

How AI Deployers Can Address GenAI-Powered Impersonation

Impersonation threats won’t be solved by content filtering alone. Here’s how leading AI teams are adapting:

1. Proactively Red Team for Impersonation Abuse: Regular adversarial testing or safety evaluations of  AI systems, across text, audio, image, and video, is becoming standard practice for AI Security. By simulating real-world abuse scenarios, teams can identify how easily their applications could be misused for impersonation and fine-tune risk detection accordingly.

2. Move Beyond Static Filters With Adaptive Guardrails: Legacy moderation tools often miss the subtle, evolving nature of impersonation tactics. Enterprises are increasingly implementing real-time, configurable, context-aware guardrails that respond dynamically to how identity-related abuse manifests in different formats and languages.

3. Operationalize Abuse Visibility Across the Stack: Flagging content isn’t enough. AI security teams need deep observability, tools that provide session-level insight into when, where, and how impersonation abuse is happening. This visibility is key to responding to incidents, adjusting risk policies, and closing feedback loops.

4. Align Defenses With Emerging Threats: Impersonation schemes evolve rapidly, often shaped by attacker collaboration and novel GenAI misuse. Staying ahead requires visibility into emerging abuse patterns, reinforced by real-world intelligence and continuous testing. Traditional fraud detection methods still play a role, especially when integrated into a broader, abuse-aware lifecycle.

5. Strengthen Verification and User Trust Infrastructure: Strong identity and content verification is foundational. AI deployers are doubling down on safeguards like multi-factor authentication, automated content provenance checks, and real-time monitoring to prevent impersonation at the account and interaction level.

 

Final Thought

The rise of deepfakes and synthetic media has made it easier than ever for scammers to impersonate, deceive, and cause harm. What used to take time and effort can now be done at scale using GenAI. But the challenge isn’t just about detecting fake content. The real issue is fraud and abuse. Addressing it means going beyond surface-level detection and building layered defenses across the AI lifecycle.

As abuse tactics evolve, platforms need to adapt quickly. Red teaming, observability, and smart guardrails are key to keeping users safe and trust intact.

Table of Contents

Secure your AI today.

Get started with a demo.