India Slashes Social Media Takedown Window to Three Hours
New IT rules bring AI-generated content under tighter scrutiny and mandate rapid removal of deepfakes and unlawful posts
India’s Ministry of Electronics and Information Technology (MeitY) has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, bringing AI-generated and synthetic content explicitly under the regulatory framework for the first time and tightening compliance obligations for social media platforms.
Under the revised rules, which take effect from 20 February, platforms such as Meta’s Facebook and Instagram, YouTube and X must remove unlawful content, including deepfakes and other AI-generated material, within three hours of receiving a government or court notification, sharply reducing the previous 36-hour deadline.
The rules also mandate that AI-generated and synthetically created media be clearly labeled and, where technically feasible, carry embedded identifiers or metadata. Users, in turn, must declare when they post synthetic content.
Other compliance timelines have been shortened: the window for addressing user grievances has been cut to seven days, and removal of non-consensual intimate imagery must be completed within two hours.
The amendments drop earlier proposals for prescriptive label-size requirements and exempt routine good-faith edits that do not alter substance, while continuing to treat synthetically generated information as “information” subject to enforcement obligations under the IT rules.
Government officials have framed the changes as necessary to curb the rapid spread of harmful AI-enabled deepfakes and other unlawful content, though industry groups have cautioned that the compressed timelines could pose operational challenges.