The government has officially notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, tightening regulations for social media platforms. The updated IT Rules 2021 amendment mandates “prominent” labels on AI-generated content and drastically reduces the timeline for removing unlawful content from 36 hours to just three hours. The changes, which will come into effect on February 20, 2026, aim to curb deepfakes, misinformation, and non-consensual content while strengthening accountability of intermediaries.
Prominent AI Labels Now Required Under IT Rules 2021 Amendment
- Under the new IT Rules 2021 amendment, platforms must ensure that AI-generated or synthetically generated information (SGI) carries a clearly and prominently visible label.
- Earlier, the proposal required labels to occupy at least 10% of content space, but that specific threshold has been removed after consultations with tech companies.
- However, platforms cannot allow the removal or suppression of AI labels once applied.
- This rule aims to increase transparency and help users identify manipulated or artificial content more easily.
Three-Hour Takedown Timeline: A Major Compliance Shift
- One of the most significant changes is the reduction in the content takedown timeline.
- Platforms must now remove unlawful content within three hours, instead of the earlier 36-hour window.
- In cases of non-consensual intimate imagery, the deadline has been reduced further to just two hours.
- The stricter timeline is expected to increase pressure on social media intermediaries, as failure to comply may result in the loss of safe harbour protection under the IT Act.
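The compressed timelines above amount to simple deadline arithmetic. The sketch below is purely illustrative: the function name, the category keys, and the reporting timestamp are assumptions for demonstration, not terms defined in the rules.

```python
from datetime import datetime, timedelta

# Takedown windows as described in the amended rules:
# general unlawful content: 3 hours; non-consensual intimate imagery: 2 hours.
TAKEDOWN_WINDOWS = {
    "unlawful": timedelta(hours=3),
    "ncii": timedelta(hours=2),  # non-consensual intimate imagery
}

def takedown_deadline(reported_at: datetime, category: str) -> datetime:
    """Return the latest time by which the content must be removed."""
    return reported_at + TAKEDOWN_WINDOWS[category]

reported = datetime(2026, 2, 20, 9, 0)
print(takedown_deadline(reported, "unlawful"))  # 2026-02-20 12:00:00
print(takedown_deadline(reported, "ncii"))      # 2026-02-20 11:00:00
```

A content flagged at 9:00 AM must therefore be gone by noon at the latest, or by 11:00 AM if it is non-consensual intimate imagery.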
What Is Safe Harbour and Why Does It Matter?
- Safe harbour is a legal immunity that protects social media platforms from liability for user-generated content, provided they follow due diligence norms.
- If platforms fail to act within the new three-hour timeline, they risk losing this immunity.
- The government argues that compressed timelines are necessary to prevent virality of harmful content.
- However, legal experts warn that such tight deadlines may increase the risk of over-censorship and operational challenges.
Definition of Synthetically Generated Information (SGI)
- The amended rules clarify the definition of synthetically generated information (SGI).
- Carve-outs have been included for assistive or quality-enhancing uses of AI.
- Routine editing of audio, video, or audiovisual content done in good faith will not fall under SGI.
- However, when platforms become aware that their services are being used to create unlawful SGI, they must take expeditious action, including removal, disabling access, or suspension of user accounts.
Technical Measures and User Declarations
- The new rules require intermediaries to implement reasonable technical measures to prevent the sharing of unlawful SGI.
- Users must declare when content is AI-generated.
- Platforms are also required to verify such declarations and ensure the AI label is prominently displayed.
- Additionally, they must prevent SGI that misrepresents real-world events or a person’s identity, directly targeting the growing issue of deepfakes.
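The declaration-and-label obligation above can be pictured as a simple gate at upload time. This is a hypothetical sketch, not any platform's actual moderation logic; the class and function names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    declared_ai: bool   # user's declaration that the content is AI-generated
    has_label: bool     # whether a prominent AI label is attached

def enforce_label_policy(upload: Upload) -> str:
    """Illustrative check: content declared as AI-generated must carry a
    prominently displayed label before it is accepted."""
    if upload.declared_ai and not upload.has_label:
        return "reject: declared AI content must carry a prominent label"
    return "accept"

print(enforce_label_policy(Upload(declared_ai=True, has_label=False)))
print(enforce_label_policy(Upload(declared_ai=True, has_label=True)))   # accept
```

In practice a platform would also need label-verification and tamper-prevention machinery, since the rules bar removal or suppression of a label once applied; that stateful logic is omitted here.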
Context: Growing Concerns Around Deepfakes and AI Misuse
- The amendments come amid rising concerns about deepfake content and misuse of AI tools globally.
- Incidents involving AI-generated explicit imagery and misinformation have intensified regulatory scrutiny.
- Governments worldwide are exploring frameworks to balance innovation with user safety.
- India’s new IT Rules 2021 amendment places stronger obligations on platforms to act quickly and responsibly.
Question
Q1. Under the amended IT Rules 2021, unlawful content must be removed within:
A. 24 hours
B. 36 hours
C. 12 hours
D. 3 hours