India Slashes Social Media Takedown Window to Three Hours
New IT rules bring AI-generated content under tighter scrutiny and mandate rapid removal of deepfakes and unlawful posts
India’s Ministry of Electronics and Information Technology (MeitY) has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, bringing AI-generated and synthetic content explicitly under the regulatory framework for the first time and tightening compliance obligations for social media platforms.
Under the revised rules, which take effect from 20 February, platforms such as Meta's Facebook and Instagram, YouTube, and X must remove unlawful content, including deepfakes and other AI-generated material, within three hours of receiving a government or court notification, sharply reducing the previous 36-hour deadline.
The rules also mandate that AI-generated and synthetically created media be clearly labeled and, where technically feasible, carry embedded identifiers or metadata, and they require users to declare when they post synthetic content.
Other compliance timelines have been shortened: the window for addressing user grievances has been cut to seven days, and removal of non-consensual intimate imagery must be completed within two hours.
The amendments drop earlier proposals for prescriptive label-size requirements and exempt routine good-faith edits that do not alter substance, while continuing to treat synthetically generated information as “information” subject to enforcement obligations under the IT rules.
Government officials have framed the changes as necessary to curb the rapid spread of harmful AI-enabled deepfakes and other unlawful content, though industry groups have cautioned that the compressed timelines could pose operational challenges.