How ISIS is Adopting AI Vol. 2: Inside QEF’s Media Strategy

September 30, 2025

Back in April, we published our first look at how ISIS’s media arm, QEF, was beginning to engage with GenAI tools, mostly in a passive, observational way. This update shows a clear shift from curiosity to active, strategic use of AI across languages and audiences, revealing a maturing narrative infrastructure that is technologically savvy, globally segmented, and ideologically flexible.

Executive Summary

Between April and September 2025, ISIS’s media arm, the Qimam Electronic Foundation (QEF), published a series of articles about artificial intelligence (AI). These publications reveal a clear evolution: from surface-level commentary to a more strategic use of AI as part of ISIS’s propaganda and operational playbook.

Key developments include:

  • Narrative bifurcation: Different AI narratives tailored to Arabic, English, and Bengali audiences across forums, public websites, and private messaging channels.
  • Technology adoption: Promotion of privacy-first tools and AI-enabled platforms.
  • Direct tool endorsement: QEF explicitly recommended a GenAI product, LUMO AI.
  • New audiences: Bengali-language content aimed at expanding reach in South Asia.

Together, these moves show that ISIS is not just experimenting with AI; it is integrating AI into its long-term media and recruitment infrastructure.

Infographic showcasing future AI developments

1. Arabic & English Messaging: Normalizing AI

In August 2025, QEF published two contrasting articles:

  • Arabic: A step-by-step guide to enabling Chrome’s AI-powered security features, framing AI as a personal safety tool.
  • English: An infographic forecasting AI’s future, from AGI to quantum computing, portraying AI as an aspirational technology.

This dual approach highlights how ISIS tailors narratives: practical, security-focused guidance for Arabic speakers and a futuristic, innovation-driven vision for English readers.

2. Direct AI Endorsement: LUMO AI

In September, QEF crossed a new threshold by explicitly endorsing LUMO AI, a privacy-focused assistant. The endorsement stressed its:

  • Privacy guarantees — Zero-access encryption and “no-log” claims lower the risk that communications or usage data will be captured by third parties.
  • Perceived trustworthiness — Open-source code or auditability gives the appearance of transparency, making the tool easier to recommend within closed communities.
  • Anonymity-friendly features — Designs that minimize metadata or allow private accounts appeal to users trying to avoid attribution.
  • Ease of adoption — If the UI, language support, and deployment model are accessible, non-technical supporters can adopt the tool quickly.

This marks the first time a specific AI tool has been promoted within ISIS’s media ecosystem. It signals a trust-building effort around AI platforms: vetting and recommending tools that reduce exposure to surveillance and increase operational security.

Infographic explaining the benefits of Lumo in Bengali and in English (different narrative in each language)

3. Expansion into Bengali: Building New Digital Fluency

In May and August, QEF released AI-related content in Bengali for the first time. This was a significant step: Bangladesh is the third-largest Muslim-majority country in the world and already has a known ISIS support base. By producing content in Bengali, QEF is deliberately reaching a large, under-monitored audience in a region with a history of extremist branches, one that had not been directly targeted in prior AI-related publications. The articles included:

  • Introductory guides to NLP, machine learning, and healthcare applications of AI.
  • Digital “safety” tutorials (e.g., disabling ChatGPT history, anonymization, and 2FA best practices).

This move signals a deliberate effort to embed AI literacy among South Asian supporters, particularly in Bangladesh.

Strategic Takeaways

QEF’s AI publications reveal a coordinated strategy:

  1. Consistency: These are not isolated experiments but part of a long-term plan.
  2. Targeted narratives: English = futuristic, Arabic = pragmatic, Bengali = educational.
  3. AI normalization: By publishing non-hostile, tutorial-style content, ISIS seeks to make AI tools seem routine and permissible.
  4. Operational integration: AI now sits alongside crypto donations, encrypted messaging, and underground forums in their toolkit.
  5. Signals for defenders: Mentions of ChatGPT and LUMO AI offer clear entry points for monitoring extremist adaptation.

Why This Matters

ISIS’s evolving use of AI underscores how extremist groups are embedding advanced technologies into their propaganda, recruitment, and security practices. What looks like benign “how-to” content is, in reality, a way to normalize AI adoption in extremist ecosystems.

For platform enforcers, trust & safety teams, and regulators, this is a red flag: AI is no longer just a subject of jihadist discussion; it is becoming a normalized and strategically framed asset.

At ActiveFence, our intelligence teams monitor extremist groups’ adoption of emerging technologies, from AI to encrypted messaging to crypto-funding. We help enterprises, governments, and platforms stay ahead of threat actors before they weaponize new tools.

👉 Talk to our experts to learn how our intelligence services can help safeguard your organization against evolving risks.

