ISIS-K reviews tech in ‘Khorasan’ – Firstpost

In the summer of 2025, Issue 46 of the ISIS-K-linked English-language web magazine “Voice of Khorasan” resurfaced online after months of silence. This time, it didn’t lead with battle cries or terrorist poetry. Instead, the cover story read like a page from Wired or CNET: a side-by-side review of artificial intelligence chatbots. The article compared ChatGPT, Bing AI, Brave Leo, and China’s DeepSeek, warning readers that some of these models stored user data, logged IP addresses, or relied on Western servers vulnerable to surveillance. Brave Leo, integrated into a privacy-first browser and requiring no login credentials, was ultimately declared the winner: the best chatbot for maintaining operational anonymity.

For a terrorist group, this was an unexpected shift in tone, almost clinical. But beneath the surface was something far more chilling: a glimpse into how terrorist organisations are evolving in real time, studying the tools of the digital age and adapting them to spread chaos with precision. This wasn’t ISIS’s first brush with AI. Back in 2023, a pro-Islamic State support network circulated a 17-page “AI Tech Support Guide” on the secure use of generative tools. It detailed how to use VPNs with language models, how to scrub AI-generated images of metadata, and how to reword prompts to bypass safety filters. For the group’s propaganda arms, large language models (LLMs) weren’t just a novelty; they were a utility.

By 2024, these experiments bore fruit. A series of ISIS-K videos began appearing on encrypted Telegram channels, featuring what appeared to be professional news anchors calmly reading the terrorist group’s claims of responsibility. These weren’t real people; they were AI-generated avatars. The segments mimicked top-tier global media outlets, down to their ticker fonts and intro music. The anchors, rendered in crisp HD, delivered ISIS propaganda wrapped in the aesthetics of mainstream media.

The campaign was called News Harvest. Each clip appeared sanitised: no blood, no threats, no glorification. Instead, the tone was dispassionate, almost journalistic. Intelligence analysts quickly realised this wasn’t about evading content moderation; it was about psychological manipulation. If you could make propaganda look neutral, viewers would be less likely to question its content. And if AI could mass-produce this material, then every minor attack, every claim, every ideological whisper could be broadcast across continents in multiple languages, 24×7, at virtually no cost.

Scale and deniability: these are the twin seductions of AI for terrorists. A single propagandist can now generate recruitment messages in Urdu, French, Swahili, and Indonesian in minutes. AI image generators churn out memes and martyr posters by the dozen, each unique enough to evade the hash-detection algorithms that social media platforms use to filter known terrorist content. Video and voice deepfakes allow terrorists to impersonate trusted figures, from imams to government officials, with frightening accuracy.
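To make that evasion concrete: platforms typically fingerprint known extremist images with perceptual hashes and block near-identical reposts, but a freshly generated variant carries a different fingerprint. The sketch below is a minimal, hypothetical illustration of that matching step in Python, using the Pillow and imagehash libraries with invented file names; it is not any platform’s actual moderation pipeline.

```python
# Hypothetical sketch of hash-based image matching, the kind platforms use to
# flag known extremist imagery. Not any platform's actual pipeline.
from PIL import Image
import imagehash  # pip install imagehash

# Perceptual hash of an image already catalogued as terrorist content
# (file names here are assumptions for illustration only).
known_bad_hash = imagehash.phash(Image.open("catalogued_propaganda.png"))

def is_known_content(path: str, max_distance: int = 5) -> bool:
    """Return True if the image's perceptual hash sits within a small
    Hamming distance of the catalogued hash."""
    candidate_hash = imagehash.phash(Image.open(path))
    return (candidate_hash - known_bad_hash) <= max_distance

# A pixel-identical repost matches (distance 0), but an AI-generated variant
# of the same message can land far outside the threshold and slip through.
print(is_known_content("reposted_copy.png"))
print(is_known_content("ai_generated_variant.png"))
```

The asymmetry is the point: defenders must catalogue each image after the fact, while a generator can produce an unlimited stream of never-before-seen variants.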

This isn’t just a concern for jihadist groups. Far-left ideologues in the West have enthusiastically embraced generative AI as well. On Pakistani army and terrorist forums during India’s operation against terrorists, codenamed “Operation Sindoor”, users swapped prompts to create terrorist-glorifying artwork, Hinduphobia-denial screeds, and memes soaked in racial slurs against Hindus. Some in the West have trained custom models that remove safety filters altogether. Others use coded language or “grandma hacks” to coax mainstream chatbots into revealing bomb-making instructions. One far-left terrorist boasted that he had got an AI to output a pipe-bomb recipe by asking for his grandmother’s old cooking secret. Across ideological lines, these groups are converging on the same insight: AI levels the propaganda playing field. No longer does it take a studio, a translator, or even technical skill to run a global influence operation. All it takes is a laptop and the right prompt.

The stakes are profound. AI-generated propaganda can radicalise individuals before governments even know they’re vulnerable. A deepfaked sermon or image of a supposed atrocity can spark sectarian violence or retaliatory attacks. During the 2023 Israel-Hamas conflict and the 2025 Iran-Israel 12-day war, AI-manipulated images of children and bombed mosques spread faster than journalists or fact-checkers could respond. Some were indistinguishable from real photographs. Others, though sloppy, still worked, because in the digital age emotional impact often matters more than accuracy. And the propaganda doesn’t need to last forever; it just needs to go viral before it’s flagged. Every repost, every screenshot, every download extends its half-life. In that window, it shapes narratives, stokes rage, and pushes someone one step closer to violence.

What’s perhaps most dangerous is that terrorists know exactly how to work the system. In internal discussions, ISIS media operatives have debated how much “religious content” to include in videos, because too much gets flagged. They’ve intentionally adopted neutral language to slip through moderation filters. One user in an ISIS-K chatroom even encouraged others to “let the news speak for itself,” a perverse twist on journalistic ethics, applied to bombings and executions.

So what now? How do we respond when terrorist groups write AI product reviews and build fake newsrooms? The answers are complex, but they begin with urgency. Tech companies must embed watermarking and provenance tools into every image, video, and document AI produces. These signatures won’t stop misuse, but they’ll help trace origins and build detection tools that recognise synthetically generated content. Model providers need to rethink safety—not just at the prompt level, but in deployment. Offering privacy-forward AI tools without guardrails creates safe zones for abuse. Brave Leo may be privacy-friendly, but it’s now the chatbot of choice for ISIS. That tension between privacy and misuse can no longer be ignored.
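What watermarking and provenance amount to can be sketched in miniature. The example below is a deliberately simplified illustration, assuming a hypothetical provider-held signing key; real provenance standards such as C2PA embed certificate-backed manifests inside the media file rather than a bare HMAC tag.

```python
# Simplified sketch of content provenance: the generator signs each output,
# and a downstream checker verifies the signature. Real systems embed far
# richer, certificate-backed metadata; this is only an illustration.
import hmac
import hashlib

PROVIDER_KEY = b"assumed-secret-signing-key"  # hypothetical key held by the model provider

def sign_output(content: bytes) -> str:
    """Produce a provenance tag for a generated file."""
    return hmac.new(PROVIDER_KEY, content, hashlib.sha256).hexdigest()

def verify_output(content: bytes, tag: str) -> bool:
    """Check whether a file still carries a valid tag from this provider."""
    expected = hmac.new(PROVIDER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

generated = b"...bytes of an AI-generated image..."
tag = sign_output(generated)
print(verify_output(generated, tag))         # True: provenance intact
print(verify_output(generated + b"x", tag))  # False: content altered or tag stripped
```

The aim of such signatures is not to prevent generation but to let platforms and investigators distinguish intact, attributable output from content that has been altered or stripped of its provenance.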

Governments, meanwhile, must support open-source detection frameworks and intelligence-sharing between tech firms, civil society, and law enforcement. The threat is moving too fast for siloed responses. But above all, the public needs to be prepared. Just as we learned to spot phishing emails and fake URLs, we now need digital literacy for the AI era. How do you spot a deepfake? How do you evaluate a “news” video without knowing its origin? These are questions schools, journalists, and platforms must start answering now.

When the 46th edition of the terrorist propaganda magazine Voice of Khorasan opens with a chatbot review, it isn’t just a macabre curiosity; it’s a signal flare. A terrorist group has studied our tools, rated our platforms, and begun operationalising the very technologies we are still learning to govern. The terrorists are adapting, methodically, strategically, and faster than most governments or tech firms are willing to admit. They’ve read the manuals. They’ve written their own. They’ve launched their beta.

What arrived in a jihadi magazine as a quiet tech column should be read for what it truly is: a warning shot across the digital world. The question now is whether we recognise it, and whether we’re ready to respond.

Rahul Pawa is an international criminal lawyer and director of research at the New Delhi-based think tank Centre for Integrated and Holistic Studies. Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost’s views.
