NEW DELHI – In a surprising turn of events during India’s 2024 general election, AI-generated videos featuring prominent political figures have sparked both amusement and concern. One video showcases an ecstatic Narendra Modi, the Indian Prime Minister, donning a trendy jacket and trousers while grooving to a Bollywood song. Modi reshared the video on X, stating, “Such creativity in peak poll season is truly a delight.”
Another AI video offers a stark contrast: it depicts Modi’s rival, Mamata Banerjee, dancing in a saree-like outfit, set to audio of her speech criticizing those who deserted her party to join Modi’s camp. The video has prompted an investigation by state police, who warned that such content could “affect law and order.”
These incidents highlight the escalating use and potential misuse of AI technology in the electoral process, raising alarms among regulators and security officials in the world’s most populous democracy.
Artificial intelligence has made it remarkably easy to create videos with near-perfect shadows and hand movements, sometimes fooling even the digitally savvy. This poses significant risks in India, where much of the 1.4 billion-strong population is not tech-savvy and manipulated content can easily inflame sectarian tensions, especially during elections.
A World Economic Forum survey in January underscored this danger, ranking misinformation as a greater risk to India than infectious diseases or illicit economic activities in the next two years.
“India is already at great risk of misinformation – with AI in the picture, it can spread at the speed of 100X,” said Sagar Vishnoi, a New Delhi-based consultant advising political parties on AI use in the election. Vishnoi emphasized the vulnerability of elderly people, often not tech-savvy, to fall for fake narratives fueled by AI videos, potentially triggering hatred against communities, castes, or religions.
The 2024 national election, spanning six weeks and concluding on June 1, is the first to see extensive deployment of AI. Initially, politicians used AI to create personalized campaign videos and audio. However, major cases of misuse emerged in April, including deepfakes of Bollywood actors criticizing Modi and fabricated clips involving Modi’s top aides, resulting in the arrest of nine individuals.
Key Incidents of AI Misuse in the 2024 Election
| Incident | Details |
| --- | --- |
| Modi Dancing Video | AI-generated video of Modi dancing, reshared by Modi himself |
| Banerjee Criticism Dance Video | AI video of Banerjee dancing, set to her speech criticizing deserters |
| Bollywood Actors Deepfake | AI deepfakes of actors criticizing Modi |
| Fake Clips of Modi’s Aides | Fabricated videos that led to the arrest of nine people |
| Joker-Inspired Banerjee Video | AI clip of Banerjee blowing up a hospital, mimicking a scene from “The Dark Knight” |
India’s Election Commission recently warned political parties against using AI to spread misinformation, highlighting laws that impose jail terms of up to three years for offenses such as forgery, promoting rumors, and inciting enmity. A senior national security official expressed concerns about fake news leading to unrest, emphasizing the difficulty in countering the rapid evolution of AI technology.
“We don’t have adequate monitoring capacity… the ever-evolving AI environment is difficult to keep track of,” the official stated. Similarly, a senior election official noted the challenge of monitoring social media content, let alone controlling it. Both officials requested anonymity as they were not authorized to speak to the media.
AI and deepfakes are increasingly used in elections globally, including in the U.S., Pakistan, and Indonesia. The spread of AI-generated videos in India underscores the difficulties faced by authorities in managing this new frontier.
An Indian IT ministry panel has the authority to block content it deems harmful to public order, either at its own discretion or in response to complaints. During the current election, hundreds of officials from the poll watchdog and police are tasked with detecting and removing problematic content.
While Modi reacted lightheartedly to his AI dancing video, stating, “I also enjoyed seeing myself dance,” the Kolkata police in West Bengal took a stricter approach. They launched an investigation against X user SoldierSaffron7 for sharing the Banerjee video. Kolkata cybercrime officer Dulal Saha Roy issued a notice on X, demanding the user delete the video or face “strict penal action.”
However, the user remained defiant, telling Reuters via X direct messaging, “I am not deleting that, no matter what happens. They can’t trace (me).” The user declined to share their number or real name due to fears of police action.
Election officials noted that authorities can only request social media platforms to remove content. If platforms decide the posts don’t violate their policies, officials are left scrambling. The user behind the Joker-inspired Banerjee video received an email notice from X, reviewed by Reuters, stating the platform’s strong belief in “defending and respecting the voice of our users.”
The user remained unconcerned, stating, “They can’t do anything to me. I didn’t take that (notice) seriously.”
- AI-generated videos featuring Narendra Modi and Mamata Banerjee have stirred controversy during India’s 2024 election.
- State police are investigating potentially harmful AI content, raising concerns about its impact on law and order.
- Regulators and security officials struggle to counter the spread of AI misinformation.
- The 2024 election marks the first extensive use of AI in Indian politics, with both benign and malicious examples.
- Social media platforms play a crucial role in managing AI content, but their policies can limit regulatory effectiveness.
The use of AI in the 2024 Indian general election has unveiled both the creative and destructive potential of the technology. As AI-generated content becomes more sophisticated, the challenge for regulators and security officials intensifies. With the election nearing its conclusion, the impact of AI on India’s political landscape remains a critical issue, highlighting the need for robust monitoring and regulatory frameworks to safeguard democratic processes.