AI is the new battlefield. Stay ahead.
ISIS’s media arm is now educating its followers on advanced AI tools—highlighting both their strategic potential and embedded dangers. As the group increasingly relies on GenAI for propaganda, recruitment, and cyber operations, it also warns operatives to protect their anonymity in a landscape rife with PII leakage. Explore insights from a newly released guide that reveals the dual-use potential of AI through the lens of a terrorist organization.
In recent years, the Islamic State (ISIS) has positioned itself not only as a militant non-state actor, but as an entity deeply attuned to technological developments. Artificial Intelligence (AI), particularly Generative AI (GenAI), is its latest area of interest. As part of global efforts to mitigate online harm, it is essential to understand how such organizations are analyzing and potentially exploiting these technologies for purposes including radicalization, recruitment, and propaganda.
One of ISIS’s most prominent media arms, the Qimam Electronic Foundation (QEF), is known for its detailed publications on information security and emerging technologies. On April 15, 2025, QEF released a bilingual guide (English and Arabic) titled “A Guide to AI Tools and Their Dangers.” This document blends instructional content with ideological framing, offering a unique look into how ISIS evaluates and discusses AI’s potential.
The QEF pamphlet offers a scant but pointed categorization of AI technologies, listing specific tools under five domains, each loosely tied to ISIS’s operational needs. While the guide lacks in-depth analysis, it reflects a functional interest in how existing platforms might serve their strategic aims:
One of the most revealing aspects of the QEF guide is its focus on the inherent dangers of AI technologies. The document emphasizes that such risks “must be carefully managed,” signaling a strategic approach to both navigating and exploiting AI’s vulnerabilities.
The pamphlet demonstrates a deep concern with information security, particularly the privacy risks associated with AI-enabled data collection. It explicitly warns that the use of AI often involves the accumulation and analysis of massive amounts of personal data, highlighting the heightened risk of data breaches and state surveillance. For ISIS operatives, who rely heavily on anonymity and operational secrecy, such vulnerabilities are seen as direct threats to their safety and mission effectiveness.
In response, QEF outlines mitigation strategies, including personal data anonymization, digital obfuscation, and recommendations on minimizing digital footprints – an effort to ensure that ISIS operatives remain undetected in increasingly AI-driven surveillance ecosystems.
In addition to privacy, the guide also flags security vulnerabilities. Rather than treating these purely from a defensive standpoint, the guide’s framing suggests dual intent: not only to protect ISIS’s own communications and networks, but to understand and potentially exploit the same weaknesses in external systems.
ISIS media outlets have already begun integrating AI into their content production. Groups like Halummu (which produces English-language content), Al-Azaim (targeting Central Asian audiences), and Al-Murhafat and Al-‘Adiyat (Arabic content producers) are leveraging AI for visual and linguistic propaganda. These tools enable them to produce high volumes of polished material that aligns with ISIS’s ideological messaging.
One example includes an AI-generated image depicting how to manufacture explosives using everyday kitchen items, accompanied by a detailed article.
An AI-generated image encouraging DIY bomb-making
ISIS’s Guide to AI Tools is more than just an instructional manual – it functions as a strategic blueprint. QEF’s systematic breakdown of mainstream AI technologies, paired with its militant framing, reveals a troubling degree of technological literacy and intent. By transforming widely available tools into tactical assets, ISIS is positioning itself to exploit the accelerating AI landscape with increasing sophistication.
This publication underscores the urgent need for security researchers, policymakers, and technology providers to anticipate how AI can be repurposed by threat actors. Understanding these dynamics is critical not only for disruption but for prevention.
By investing in research-driven frameworks and collaborative enforcement strategies, we can limit the reach of these networks, protect vulnerable populations, and reclaim the digital ecosystem from extremist exploitation.
Know their tools. Defend yours.