The recent military exchanges between Israel and Iran have unleashed an unprecedented wave of online disinformation, marking a new frontier in digital conflict. Analysts describe this as the “first time we’ve seen generative AI be used at scale during a conflict.”
The sheer volume of misleading content appearing online since Israel’s initial strikes is described as “astonishing” by online verification groups. The deluge ranges from recycled videos of unrelated events and video game clips repurposed as genuine footage to AI-generated videos and images, some highly convincing and others crude.
Amplifying Narratives with Fake Content
Analysis reveals distinct patterns in the disinformation spread by accounts seemingly aligned with either side.
Pro-Iranian accounts have heavily focused on exaggerating the effectiveness of Tehran’s military response and capabilities. This often involves sharing fake videos, many clearly created using AI, boasting of Iran’s strength or depicting the aftermath of purported strikes on Israeli targets. Some of the most widely viewed fake videos reviewed by BBC Verify have collectively racked up over 100 million views across various social media platforms.
Notable examples include AI-generated images claiming to show mass missile barrages raining down on cities like Tel Aviv, with a single image garnering 27 million views. Other fake clips have purported to show successful missile strikes on buildings, often depicted at night, conditions that verification experts note make footage particularly difficult to authenticate.
A recurring theme in these AI fakes targets Israel’s state-of-the-art F-35 fighter jets. A barrage of fake clips claimed to show F-35s being shot down. One analyst group estimated that if these claims were true, Iran would have destroyed roughly 15% of Israel’s F-35 fleet. However, no authentic footage of a downed F-35 has been verified. Obvious signs of AI manipulation were visible in some widely shared images, such as oddly scaled figures near a supposedly downed jet or a lack of realistic impact damage to the surrounding environment. One viral video presented as an F-35 shootdown was later identified as footage from a flight simulator video game and was subsequently removed by TikTok.
Focus on F-35s and Potential Influence Operations
The specific targeting of the F-35 in disinformation campaigns has been linked by some analysts to broader influence operations, including those previously associated with Russia. These operations may have shifted focus from the conflict in Ukraine to sowing doubt about the effectiveness of Western, particularly American, weaponry like the F-35, for which Russia has no direct counterpart.
Meanwhile, pro-Israeli accounts have also participated in spreading misleading content, primarily by recirculating old videos of protests or gatherings in Iran. These clips are falsely presented as current evidence of widespread dissent against the Iranian government and popular support for Israel’s military actions. One widely shared example was an AI-generated video falsely depicting Iranians chanting pro-Israel slogans in Tehran. More recently, amid speculation about potential US strikes on Iranian nuclear sites, some accounts have shared AI-generated images of B-2 bombers (aircraft capable of targeting subterranean facilities) flying over Tehran.
The Spreaders and Their Motivations
A significant portion of this disinformation is spread by individuals described as “engagement farmers” or “super-spreaders,” who seek to profit from the conflict by sharing sensational content designed to attract attention and accumulate views and followers. Some major social media platforms offer payouts based on view counts, providing a clear financial incentive.
Some previously obscure accounts have seen explosive growth, becoming major vectors for disinformation. For instance, one pro-Iranian account with no clear ties to official Tehran, “Daily Iran Military,” saw its follower count on X double from 700,000 to 1.4 million in under a week. Many of these accounts have blue ticks, post prolifically, and use seemingly official names, leading some users to mistakenly believe they are authentic sources.
Adding to the challenge, even official sources have inadvertently or intentionally shared fake content. Iranian state media shared fake footage and an AI image of a downed F-35, while a post by the Israel Defense Forces (IDF) using old, unrelated footage received a community note on X.
Verification Challenges and Platform Response
The proliferation of sophisticated fakes, particularly AI-generated content, poses significant verification challenges. Worryingly, X’s AI chatbot, Grok, has reportedly misidentified some clearly AI-generated videos as real, citing reputable news outlets in its incorrect affirmations, even when visual tells, such as objects moving on their own, were present. X did not respond to inquiries about the chatbot’s behavior.
Social media platforms are taking some steps, though their effectiveness varies. TikTok stated it proactively enforces guidelines against misleading content and works with fact-checkers, and it removed some identified fake videos after being contacted. Meta, which owns Instagram, did not comment on its approach.
Beyond intentional spreaders, ordinary social media users contribute to the spread of disinformation. Researchers suggest that conflicts present people with a stark binary choice of sides, making them more likely to reshare content that aligns with their political identity. Moreover, sensational and emotionally charged content inherently spreads faster online.
The fusion of escalating conflict with widespread, easily accessible AI tools has created a potent environment for information warfare, making it harder than ever for users to discern truth from fiction online.