
Shorter Deadline For Content Removal
India has introduced new rules requiring social media companies to remove unlawful material within three hours of being notified, sharply tightening the previous 36-hour deadline. The amended guidelines take effect on 20 February and apply to major platforms including Meta, YouTube, and X, as well as to AI-generated content. The government did not give a reason for the shorter window.
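To make the operational effect of the change concrete, the sketch below shows how a platform's compliance queue might compute the new deadline from a notification timestamp. The three-hour window comes from the amended rules, but the TakedownNotice structure and its field names are hypothetical illustrations, not anything the rules specify.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# The three-hour window is set by the amended rules; everything else
# here is an assumed, illustrative data structure.
REMOVAL_WINDOW = timedelta(hours=3)

@dataclass
class TakedownNotice:
    url: str
    notified_at: datetime  # when the platform received the notice

    @property
    def removal_deadline(self) -> datetime:
        # Under the previous rules this would have been 36 hours.
        return self.notified_at + REMOVAL_WINDOW

notice = TakedownNotice(
    url="https://example.com/post/123",
    notified_at=datetime.now(timezone.utc),
)
print(f"Content must be removed by {notice.removal_deadline.isoformat()}")
```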
The change comes after several years in which authorities have used the Information Technology rules to order platforms to remove material linked to national security and public order. Transparency reports show that more than 28,000 URLs were blocked in 2024 following government requests.
Concerns About Oversight And Censorship
Critics say the move could be part of a wider tightening of control over online content and warn about the risk of censorship in a country with more than a billion internet users. Experts have said the existing rules already give authorities broad power over social media content, and the shorter deadline raises further questions about how platforms will handle requests at scale.
New Rules For AI-Generated Material
For the first time, the rules define AI-generated material, covering audio and video that has been created or altered to appear real, such as deepfakes. The definition excludes routine editing, accessibility features, and genuine educational or design work.
Platforms that allow users to create or share such material must clearly label it. Where possible, they must also add permanent markers to help trace its origin. Companies will not be allowed to remove these labels once they are added. The rules also require the use of automated tools to detect and prevent illegal AI content, including deceptive or non-consensual material, false documents, child sexual abuse material, explosives-related content, and impersonation.
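The rules do not prescribe a technical standard for these labels or markers. Their general shape, a visible label plus a tamper-evident provenance record, can nevertheless be sketched as below. The field names and the hash-based scheme are assumptions for illustration; real deployments would more likely build on an established provenance standard such as C2PA content credentials.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MediaItem:
    content: bytes
    metadata: dict = field(default_factory=dict)

def label_ai_generated(item: MediaItem, generator: str) -> MediaItem:
    """Attach a visible AI label and a tamper-evident provenance marker.

    Illustrative sketch only: the rules require labelling and durable
    markers but do not specify this (or any) particular scheme.
    """
    item.metadata["provenance"] = {
        "ai_generated": True,  # the user-visible label
        "generator": generator,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
        # Binding the marker to a content hash makes silent edits or
        # label stripping detectable on re-verification.
        "content_sha256": hashlib.sha256(item.content).hexdigest(),
    }
    return item

def verify_marker(item: MediaItem) -> bool:
    """Check the marker is present and still matches the content."""
    prov = item.metadata.get("provenance")
    if prov is None:
        return False  # label was removed, which the rules prohibit
    return prov["content_sha256"] == hashlib.sha256(item.content).hexdigest()
```

A hash-bound marker of this kind records origin at labelling time; the detection obligations in the rules would sit upstream of it, in the moderation pipeline itself.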
Responses From Digital Rights Groups And Experts
The Internet Freedom Foundation said the three-hour deadline would turn platforms into what it called rapid censors, arguing that the compressed timeline would eliminate meaningful human review and push services toward automated over-removal.
Anushka Jain, a research associate at the Digital Futures Lab, said the labelling requirement could improve transparency but warned that the shorter deadline could drive companies toward full automation. Platforms already struggle to meet the 36-hour deadline, she said, because the process involves human oversight, and full automation raises the risk of content being taken down incorrectly.
Delhi-based technology analyst Prasanto K Roy described the new system as one of the most extreme takedown regimes in a democracy. He said meeting the deadline would be difficult without heavy automation and limited human review, and added that the short timeframe leaves little room to assess whether a request is legally appropriate. On AI labelling, he said the intention is positive but noted that reliable and tamper-resistant labelling tools are still developing.
Regulatory Context
India's Information Technology framework requires platforms to remove harmful material once it is identified, and the latest amendments shorten the window for compliance. Unlike the UK's Online Safety regime, which is enforced by the regulator Ofcom, India's rules are enforced directly by government authorities through their own processes.
Featured image credits: Wikimedia Commons
