As Nepal heads into a snap election on March 5, artificial intelligence is emerging as a powerful new weapon in political misinformation.
A recently viral video showed former Kathmandu mayor Balendra Shah, ex-electricity chief Kulman Ghising, and Rastriya Swatantra Party leader Rabi Lamichhane walking together, sparking speculation of a new political alliance. The encounter never happened. The entire clip was computer-generated.
The deepfake exposed how easily synthetic media can reshape political narratives in a country already gripped by instability. The vote comes just six months after youth-led protests against corruption toppled then-prime minister K.P. Sharma Oli, leaving public trust in institutions fragile.
Fake audio is spreading just as fast. Swarnim Wagle, vice president of the Rastriya Swatantra Party, has filed a complaint after a fabricated recording surfaced that purported to capture him in conversation with India’s prime minister. The clip circulated widely on social platforms and in family chat groups.
In Nepal, voice messages travel rapidly through closed networks where verification is difficult. Tools to detect manipulated audio remain unreliable, leaving ordinary citizens with little ability to distinguish truth from fabrication.
Government attempts to curb misinformation have backfired before. During protests in September, authorities blocked Facebook, X, and YouTube for failing to register with regulators. The ban intensified the unrest, which left at least 77 people dead and thousands injured, according to Reuters. Misinformation simply migrated to harder-to-monitor channels.
The Election Commission has policy foundations in place. Its 2021 guidelines on social media use in elections and a new draft code of conduct ban false information and fake accounts meant to influence voters. But experts warn these frameworks are outdated for the era of generative AI.
Key questions remain unresolved: What qualifies as synthetic media? Should campaigns disclose AI-generated content? Who bears responsibility when deepfakes go viral?
Media groups and researchers are calling for a coordinated response. Poynter recommends treating every viral political video like a crime scene, tracing its origin and publishing verification methods. Political parties should commit to avoiding deepfakes and label any AI-assisted content. Platforms should deploy rapid-response teams familiar with Nepal’s languages and politics. Citizens are urged to pause, verify, and resist sharing sensational claims.
Nepal Police’s Cyber Bureau is preparing for more sophisticated attacks. “AI is getting bigger and being used in more ways,” said SP Ray. “Soon we may see crimes and cyberattacks that AI makes easier.” While he believes the technology’s early stage offers a narrow window to prepare, he acknowledged that foreign-origin attacks will be harder to trace.
With trust already eroded, Nepal’s election is becoming a live test of whether democratic systems can withstand AI-driven deception.
