Back in April, we published a first look at how ISIS’s media arm, QEF, was beginning to endorse GenAI tools, mostly in a passive, observational way. This update shows a clear shift from curiosity to active, strategic use of AI across languages and audiences, revealing a maturing narrative infrastructure that is technologically savvy, globally segmented, and ideologically flexible.
Between April and September 2025, ISIS’s media arm, the Qimam Electronic Foundation (QEF), published a series of articles about artificial intelligence (AI). These publications reveal a clear evolution: from surface-level commentary to a more strategic use of AI as part of ISIS’s propaganda and operational playbook.
Key developments include:
Together, these moves show that ISIS is not just experimenting with AI; it is integrating it into its long-term media and recruitment infrastructure.
Infographic showcasing future AI developments
In August 2025, QEF published two contrasting articles:
This dual approach highlights how ISIS tailors narratives: practical and security-focused for Arabic speakers, while painting a futuristic, innovation-driven vision for English readers.
In September, QEF crossed a new threshold by explicitly endorsing LUMO AI, a privacy-focused assistant. The endorsement stressed its:
This marks the first time a specific AI tool has been promoted by ISIS’s media ecosystem. It signals a trust-building effort around AI platforms: QEF is now vetting and recommending tools that reduce exposure to surveillance and increase operational security.
Infographic explaining the benefits of Lumo in Bengali and in English (different narrative in each language)
In May and August, QEF released AI-related content in Bengali for the first time. This was a significant step: Bangladesh is the third-largest Muslim-majority country in the world, has a known ISIS support base, and a history of extremist branches. By producing content in Bengali, QEF is deliberately reaching a large, under-monitored audience that had not been directly targeted in prior AI-related publications. The article included:
This move signals a deliberate effort to embed AI literacy among South Asian supporters, particularly in Bangladesh, a country with an existing ISIS footprint.
QEF’s AI publications reveal a coordinated strategy:
ISIS’s evolving use of AI underscores how extremist groups are embedding advanced technologies into their propaganda, recruitment, and security practices. What looks like benign “how-to” content is, in reality, a way to normalize AI adoption in extremist ecosystems.
For platform enforcers, trust and safety teams, and regulators, this is a red flag: AI is no longer just a subject of jihadist discussion; it is becoming a normalized and strategically framed asset.
At ActiveFence, our intelligence teams monitor extremist groups’ adoption of emerging technologies, from AI to encrypted messaging to crypto-funding. We help enterprises, governments, and platforms stay ahead of threat actors before they weaponize new tools.
👉 Talk to our experts to learn how our intelligence services can help safeguard your organization against evolving risks.
Know their tools. Defend yours.