AI Through the Lens of ISIS: A Terrorist Organization’s Guide to AI Tools

May 13, 2025
Two masked militants using a laptop under neon lighting with an ISIS flag in the background.


ISIS’s media arm is now educating its followers on advanced AI tools—highlighting both their strategic potential and embedded dangers. As the group increasingly relies on GenAI for propaganda, recruitment, and cyber operations, it also warns operatives to protect their anonymity in a landscape rife with PII leakage. Explore insights from a newly released guide that reveals the dual-use potential of AI through the lens of a terrorist organization.


Introduction

In recent years, the Islamic State (ISIS) has positioned itself not only as a militant non-state actor, but as an entity deeply attuned to technological developments. Artificial Intelligence (AI), particularly Generative AI (GenAI), is its latest area of interest. As part of global efforts to mitigate online harm, it is essential to understand how such organizations are analyzing and potentially exploiting these technologies for purposes including radicalization, recruitment, and propaganda.

One of ISIS’s most prominent media arms, the Qimam Electronic Foundation (QEF), is known for its detailed publications on information security and emerging technologies. On April 15, 2025, QEF released a bilingual guide (English and Arabic) titled “A Guide to AI Tools and Their Dangers.” This document blends instructional content with ideological framing, offering a unique look into how ISIS evaluates and discusses AI’s potential.


Technical Overview: A Taxonomy of AI Capabilities

The QEF pamphlet offers a scant but pointed categorization of AI technologies, listing specific tools under five domains, each loosely tied to ISIS’s operational needs. While the guide lacks in-depth analysis, it reflects a functional interest in how existing platforms might serve their strategic aims:

  • Natural Language Processing (NLP)
    Tools are noted for their ability to generate text, simulate conversations, and analyze sentiment, capabilities relevant to crafting propaganda and manipulating discourse.
  • Machine Learning
    Platforms are cited for their use in predictive modeling and personalized content delivery, hinting at applications in planning or audience targeting.
  • Computer Vision
    Referenced primarily for surveillance and counter-surveillance, these tools are positioned as aids in operational awareness and field activity.
  • Robotic Process Automation (RPA)
    Technologies in this category are associated with automating workflows, likely considered for enhancing internal efficiency or probing external systems.
  • AI in Healthcare
    The guide points to medical AI systems not for utility, but for the vulnerabilities they expose, particularly the risk of accessing or disrupting sensitive, centralized health data.


How ISIS Frames AI Risks: Securing Anonymity

One of the most revealing aspects of the QEF guide lies in its focus on the inherent dangers of AI technologies. The document emphasizes that such risks “must be carefully managed,” signaling a strategic approach to both navigating and exploiting AI’s vulnerabilities.

The pamphlet demonstrates a deep concern with information security, particularly the privacy risks associated with AI-enabled data collection. It explicitly warns that the use of AI often involves the accumulation and analysis of massive amounts of personal data, highlighting the heightened risk of data breaches and state surveillance. For ISIS operatives, who rely heavily on anonymity and operational secrecy, such vulnerabilities are seen as direct threats to their safety and mission effectiveness.

In response, QEF outlines mitigation strategies, including personal data anonymization, digital obfuscation, and recommendations on minimizing digital footprints – an effort to ensure that ISIS operatives remain undetected in increasingly AI-driven surveillance ecosystems.

In addition to privacy, the guide also flags security vulnerabilities. Rather than exploring these issues from a defensive standpoint, the framing suggests dual intent: not only to protect their own communications and networks but to understand and potentially exploit these weaknesses in external systems.


GenAI in Propaganda: Expanding the Media Arsenal

ISIS media outlets have already begun integrating AI into their content production. Groups like Halummu (which produces English-language content), Al-Azaim (targeting Central Asian audiences), and Al-Murhafat and Al-‘Adiyat (Arabic content producers) are leveraging AI for visual and linguistic propaganda. These tools enable them to produce high volumes of polished material aligned with ISIS’s ideological messaging.

One example is an AI-generated image depicting how to manufacture explosives from everyday kitchen items, accompanied by a detailed article.


An AI-generated image encouraging DIY bomb-making


Conclusion: Strategic Implications for Security and Policy

ISIS’s Guide to AI Tools is more than just an instructional manual – it functions as a strategic blueprint. QEF’s systematic breakdown of mainstream AI technologies, paired with its militant framing, reveals a troubling degree of technological literacy and intent. By transforming widely available tools into tactical assets, ISIS is positioning itself to exploit the accelerating AI landscape with increasing sophistication.

This publication underscores the urgent need for security researchers, policymakers, and technology providers to anticipate how AI can be repurposed by threat actors. Understanding these dynamics is critical not only for disruption but for prevention.

By investing in research-driven frameworks and collaborative enforcement strategies, we can limit the reach of these networks, protect vulnerable populations, and reclaim the digital ecosystem from extremist exploitation.


Two AI-generated images published by Al-Azaim Foundation, used to disseminate ISIS ideology concerning the West, explicitly feature threats of terrorist attacks in the United States.

