How the Israel–Iran Conflict Is Accelerating AI-Powered Information Warfare

A growing number of companies are now selling what researchers call “misinformation-as-a-service.”

    In recent days, a surge of suspected AI-generated misinformation and manipulated media has circulated online involving the UAE, Iran, and Israel. Fabricated videos, suspected synthetic satellite imagery, and digitally generated battlefield scenes appear to have accumulated hundreds of millions of views across social media platforms.

    Researchers say the latest wave of AI-generated media may represent an important shift in modern information warfare. Generative AI tools have made it much easier to create convincing propaganda, allowing state actors, political groups, and opportunistic online networks to generate misleading narratives in minutes.

    A recent report from the Centre for Emerging Technology and Security (CETaS) at the Alan Turing Institute warns that AI-driven information threats are becoming a bigger part of crisis communication ecosystems. The report, Adding Fuel to the Fire: AI Information Threats and Crisis Events, co-authored by Sam Stockwell, Ardi Janjeva, and Broderick McDonald, says AI is amplifying the scale, speed, and impact of misinformation during geopolitical crises.

    For citizens following the conflict online, the result is a volatile information environment where authentic documentation, political messaging, and AI-generated fabrications coexist in the same feed.

    A Misinformation Flashpoint

    Conflicts have always produced propaganda. But the Israel–Iran confrontation has created particularly favourable conditions for AI-driven misinformation.

    According to McDonald, an academic researcher at Oxford and The Alan Turing Institute, one of the key reasons the current conflict has become fertile ground for synthetic propaganda lies in the nature of the war itself. “This is an asymmetrical war. Instead of relying solely on battlefield victories, weaker states often aim to undermine the morale and political resolve of their opponents,” he said.

    Digital propaganda fits neatly into that strategy. “Advances in generative and agentic AI allow for the potential for large-scale production and distribution of increasingly sophisticated propaganda.”

    In other words, the same logic that makes cheap drones useful in modern warfare also applies to synthetic media. “Combining these online and offline methods into an effective form of hybrid warfare has helped in previous conflicts—and we should expect such efforts to escalate during the current war too.”

    Global Powers and Proxy Networks

    The information battlefield becomes even more complex with the involvement of external geopolitical actors and their proxy networks. Non-state actors are also experimenting with the technology: McDonald noted that various groups have reportedly experimented with AI-enabled disinformation tools, although the scale and sophistication of these efforts vary.

    Opposition figures may have also begun employing synthetic propaganda techniques.

    The Power of Information Voids

    Another major driver behind the spread of AI-generated misinformation is what researchers call an “information void.”

    Wars create chaotic conditions in which reliable information is scarce and difficult to verify. “Information voids during security crises such as wars and terrorist attacks provide a fertile environment for AI-enabled disinformation to take root and grow,” McDonald said.

    The CETaS report similarly notes that crisis events create conditions in which misinformation spreads faster than verification can keep up. As breaking news unfolds, audiences actively search for updates while journalists and authorities struggle to confirm facts.

    Professional journalism relies on verification processes that take time. “While reliable information from journalists may take hours or even days to verify and report, AI-generated misinformation often begins to shape narratives almost immediately,” McDonald explained.

    Even after credible information becomes available, sensational content continues to attract more attention online. Such misinformation can have serious consequences beyond the internet: synthetic media can inflame tensions, distort public perceptions of events, and undermine democratic institutions during crisis moments.

    Research shows that in many cases, AI-enabled misinformation during crises can lead to real-world violence and dangerous societal outcomes. For instance, fabricated videos depicting political leaders calling for violence could incite sectarian conflict. Elsewhere, synthetic imagery suggesting battlefield victories or leadership collapse may influence public morale or even financial markets.

    “AI-generated images and videos showing key leaders incapacitated, or crowds cheering an invading force may lower morale and provoke defections—to say nothing of moving markets,” McDonald said.

    The Economics of Synthetic News

    Political strategy is only one of the drivers of the misinformation ecosystem. Increasingly, financial incentives are playing a central role. 

    “Geopolitical crises create massive spikes in information demand, and that attention is monetizable,” said Kurt Muehmel, Head of AI Strategy at Dataiku. Generative AI has dramatically reduced the cost of producing content capable of capturing that attention. “A single operator can now spin up dozens of sites and generate hundreds of articles in minutes,” Muehmel said.

    The result is what he describes as an industrialized content pipeline. 

    “AI generates the content. Social platforms distribute it. Programmatic advertising monetizes it,” he explained. “Each component optimizes its own metric—output volume, engagement, ad placement—and none are optimized for accuracy.” In this environment, misinformation can emerge as an unintended byproduct of incentive structures that prioritize engagement and advertising revenue over verification.
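
    Read as a toy model, that incentive chain is easy to make concrete. The sketch below is a purely illustrative Python simulation, not a description of any real platform's systems: the items, scores, and the CPM figure are all assumed stand-ins. Its only point is that when no stage's objective consults accuracy, fabricated content can outcompete verified reporting.

```python
# Toy model of the incentive pipeline described above. Everything here is
# an illustrative assumption: the items, the scores, and the CPM figure.
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    accuracy: float          # 0.0 = fabricated, 1.0 = verified (never consulted below)
    emotional_charge: float  # crude proxy for "ragebait" intensity

def platform_score(item: Item) -> float:
    # Platforms rank by predicted engagement; charged content scores higher.
    # Note that accuracy appears nowhere in this objective.
    return item.emotional_charge

def ad_revenue(impressions: int, cpm: float = 2.0) -> float:
    # Ad networks pay per thousand impressions, regardless of accuracy.
    return impressions / 1000 * cpm

feed = [
    Item("Verified battlefield report", accuracy=0.9, emotional_charge=0.3),
    Item("Fabricated leader-collapse video", accuracy=0.0, emotional_charge=0.95),
]

# The fabricated item wins both distribution and revenue, because no stage
# of the pipeline is penalized for inaccuracy.
for item in sorted(feed, key=platform_score, reverse=True):
    impressions = int(item.emotional_charge * 1_000_000)
    print(f"{item.headline}: ~${ad_revenue(impressions):,.2f}")
```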

    Greed, Not Just Geopolitics

    Many misinformation efforts are driven less by ideology than by profit. “These include content farms that produce false and inflammatory AI content at an industrial scale in order to profit from the high engagement that ‘ragebait’ attracts,” McDonald said.

    These operations are often run in low-wage regions where even modest advertising revenue can be lucrative. “In most instances, these entrepreneurs have little interest in the domestic politics of the country they are producing content for,” he explained. “They are driven primarily by greed rather than grievance.”

    A growing number of companies are also selling what researchers call “misinformation-as-a-service.” Unscrupulous startups have begun offering AI-powered bot networks to political campaigns, lobbyists, and foreign governments seeking to buy domestic political influence. These automated systems can generate and distribute massive volumes of posts.

    Depending on their configuration, these automated bots can post and interact hundreds of times per day, often spreading misinformation as they do.
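
    Posting rate is one crude behavioral signal researchers look at. The sketch below is a minimal, assumed heuristic rather than any platform's actual detection logic: it flags accounts whose average activity reaches the hundreds-of-posts-per-day range described above, while real systems combine many such signals.

```python
# Minimal posting-rate heuristic. The 200-posts/day threshold is an
# illustrative assumption; production bot detection combines many
# behavioral signals, not rate alone.
from datetime import datetime, timedelta

def daily_post_rate(timestamps: list[datetime]) -> float:
    """Average posts per day over the span covered by the timestamps."""
    if not timestamps:
        return 0.0
    span_days = (max(timestamps) - min(timestamps)) / timedelta(days=1)
    return len(timestamps) / max(span_days, 1.0)

def looks_automated(timestamps: list[datetime], threshold: float = 200.0) -> bool:
    # Accounts averaging hundreds of posts per day warrant closer review.
    return daily_post_rate(timestamps) >= threshold

# Example with synthetic data: one post every five minutes for a day.
start = datetime(2025, 6, 15)
posts = [start + timedelta(minutes=5 * i) for i in range(288)]
print(looks_automated(posts))  # True: 288 posts in a single day
```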

    From a technology governance perspective, the deeper issue lies in the incentives built into digital platforms. “The key distinction is that this is increasingly profit-motivated content generation where accuracy is irrelevant to the business model,” Muehmel said.

    Different parts of the digital ecosystem measure success using metrics that ignore truthfulness. “Ad networks measure impressions and clicks, not whether adjacent content is fabricated. Social platforms measure engagement, not whether it correlates with accuracy. AI providers measure usage, not downstream impact,” he said.

    As a result, misinformation can flourish even when no single actor intends to promote it. “Each actor optimizes a metric that ignores the problem,” Muehmel explained.

    How to Spot Fake News

    While deepfake videos often dominate public discussion, the threat landscape is far broader. Many other forms of AI-enabled information threats are shaping hybrid warfare, including automated bot networks that create the illusion of public consensus, data-poisoning campaigns that corrupt AI training datasets, and AI systems that themselves produce misleading information.

    “AI models can produce dangerous hallucinations during fast-moving crises,” McDonald noted. In some cases, users have even asked AI chatbots to verify viral videos—only to receive incorrect answers.

    The CETaS report warns that these interactions highlight how AI systems themselves can become part of the misinformation ecosystem, inadvertently amplifying misleading narratives during crises.

    Major social media platforms have begun deploying detection tools and labeling systems to identify manipulated media, although researchers say enforcement remains inconsistent across regions and platforms.

    For individuals following the conflict online, distinguishing authentic footage from synthetic media is becoming increasingly difficult. However, a few principles can help reduce the risk of falling prey to AI-generated misinformation.

    • Practice patience. In fast-moving crises, the first viral images or videos are often the least reliable. Wait for confirmation from reputable news organizations or verification groups.
    • Be wary of emotionally charged content. Posts designed to “ragebait” are often optimized for engagement rather than accuracy.
    • Cross-check information. Verify dramatic claims or footage across multiple credible sources (see the sketch after this list). If it appears on only one account or platform, treat it with skepticism.
    • Strengthen media literacy. Educating citizens to critically evaluate online content can reduce the spread of AI-generated misinformation.
    • Support local journalism. Robust, trustworthy reporting provides a crucial counterweight to false or manipulated information during crises.
    • Develop better detection tools. New technologies are needed to identify and flag synthetic media such as AI-generated images, videos, and audio.
    • Improve coordinated responses. Streamlining protocols for emergency services and developing systems like behavioral signal sharing can help authorities respond more effectively to misinformation campaigns.
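
    To make the cross-checking point concrete, here is a minimal sketch of one technique verification groups use: comparing a suspect image against a small set of already-verified or already-debunked images with perceptual hashing, since recycled or lightly edited footage tends to sit within a small Hamming distance of its source. It uses the real Pillow and imagehash libraries, but the file paths, labels, and distance threshold are illustrative assumptions, and a near-duplicate match is only a starting point for human review.

```python
# Perceptual-hash cross-check against known images. Paths, labels, and
# the distance threshold below are illustrative assumptions.
from PIL import Image        # pip install Pillow
import imagehash             # pip install imagehash

def closest_known_match(suspect_path: str, known: dict[str, str],
                        max_distance: int = 8) -> str | None:
    """Return the label of the nearest known image, or None if nothing is close.

    `known` maps image paths to labels such as "debunked: recycled 2020 footage".
    """
    suspect = imagehash.phash(Image.open(suspect_path))
    best_label, best_dist = None, max_distance + 1
    for path, label in known.items():
        dist = suspect - imagehash.phash(Image.open(path))  # Hamming distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical usage:
# label = closest_known_match("viral_strike_frame.jpg",
#                             {"archive/2020_explosion.jpg": "debunked: recycled 2020 footage"})
# if label:
#     print("Near-duplicate of a known image:", label)
```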

    The current conflict may offer an early glimpse of how future geopolitical crises could unfold in the age of generative AI. Wars will still involve missiles, drones, and military alliances. But they will also unfold across timelines, comment sections, and recommendation algorithms.
