AI-generated videos and images related to the conflict between Israel, the United States and Iran are spreading rapidly across social media, with online creators using generative AI tools to produce misleading content that attracts millions of views and generates revenue.
According to BBC Verify, fabricated videos and satellite imagery linked to the conflict have collectively accumulated hundreds of millions of views online, highlighting the growing challenge of AI-driven misinformation during major geopolitical events.
Experts say the rapid development of generative AI tools has dramatically lowered the barrier to creating convincing fake content. Timothy Graham, a digital media researcher at the Queensland University of Technology, noted that content which previously demanded professional production can now be generated in minutes using AI systems.
The conflict escalated in June 2025, when Israel and then the United States launched strikes on Iran, prompting retaliatory drone and missile attacks by Iran targeting Israel, US military assets and locations across the Gulf region. As the situation developed, many social media users turned to online platforms for real-time updates, creating fertile ground for viral misinformation.
Investigators identified several widely shared AI-generated videos falsely depicting missile strikes on Tel Aviv and other locations. In one case, a fabricated video purporting to show the Burj Khalifa in Dubai on fire was viewed tens of millions of times during a period of heightened concern about possible regional attacks.
The investigation also uncovered AI-generated satellite images, including a fabricated image claiming to show severe damage to the US Navy’s Fifth Fleet headquarters in Bahrain. Analysis revealed that the fake image was based on a genuine satellite photograph taken in February 2025, with identical vehicle placements appearing in both images.
Tools such as Google’s Veo video generator, OpenAI’s Sora, the Chinese AI platform Seedance, and Grok integrated into X are among the technologies increasingly used to produce realistic synthetic media.
Social media platform X recently announced it will temporarily suspend creators from its monetisation programme if they share AI-generated conflict videos without proper labelling. The platform’s creator programme pays users whose posts generate high levels of engagement, including views, shares and comments.
Researchers say the monetisation model may be contributing to the spread of misinformation. According to estimates cited by BBC Verify, creators could earn between $8 and $12 per million verified impressions. At those rates, a single video viewed 50 million times would bring in roughly $400 to $600, incentivising the production of viral content regardless of its accuracy.
Experts warn that the rise of AI-generated misinformation is undermining trust in legitimate reporting and making it harder to authenticate genuine evidence from conflict zones.
They also emphasise that while social media companies are working to improve detection systems, the combination of easy-to-use AI tools, automated content pipelines and engagement-based monetisation makes synthetic misinformation increasingly difficult to contain.
