AI Labelling under the IT Rules Amendment, 2026
 

Sun 15 Feb, 2026

The Government of India has strengthened digital governance by amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, introducing mandatory labelling of Artificial Intelligence (AI)-generated content and faster takedown timelines. The amendment, notified by the Ministry of Electronics and Information Technology (MeitY) and effective from 20 February 2026, aims to curb deepfakes, misinformation, and harmful synthetic media.

Background and Need

  • Rapid advances in generative AI tools have enabled the creation of highly realistic images, videos, and audio clips that may mislead users. Such content can be used for political manipulation, financial fraud, identity theft, or social unrest.
  • Governments worldwide are exploring regulatory mechanisms to ensure transparency and accountability in digital content. India’s amendment reflects this global trend toward responsible AI governance.

Mandatory Labelling of AI-Generated Content

  • The amended rules require social media platforms to prominently label “synthetically generated information” (SGI), such as AI-generated images, videos, and audio. Platforms with more than 5 million users must obtain a user declaration confirming whether content is AI-generated and conduct technical verification before publication.
  • The purpose of this provision is to ensure that users can clearly distinguish authentic information from artificially created content. The government emphasised that unlabelled synthetic media can harm privacy, spread false narratives, and threaten national security.
  • However, the final rules include certain practical exemptions. Automatically retouched smartphone photographs and film special effects are not treated as SGI requiring labels. At the same time, the rules strictly prohibit unlawful AI content such as forged documents, child exploitation material, bomb-making instructions, or deepfakes impersonating real individuals.
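The declaration-plus-verification flow described above can be sketched as a simple decision rule. This is purely illustrative: the function name, the idea of a detector confidence score, and the 0.8 threshold are hypothetical assumptions, not taken from the rules themselves.

```python
def needs_sgi_label(user_declared_ai: bool, detector_score: float,
                    threshold: float = 0.8) -> bool:
    """Decide whether an upload must carry an SGI label.

    Hypothetical policy: label the content when the uploader declares it
    AI-generated, or when an automated detector's confidence score for
    synthetic content meets a platform-chosen threshold.
    """
    return user_declared_ai or detector_score >= threshold

print(needs_sgi_label(True, 0.1))    # True: user self-declared
print(needs_sgi_label(False, 0.95))  # True: detector flagged it
print(needs_sgi_label(False, 0.2))   # False: may publish unlabelled
```

In practice a platform would combine the declaration with the "technical verification" the rules require; the sketch simply shows that either signal alone can trigger a label.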

Detection and Technical Measures

  • Large platforms must deploy “reasonable and appropriate technical measures” to detect unlawful synthetic content and ensure proper labelling and provenance tracking. This includes AI detection systems and metadata-based authentication tools.
  • One important international initiative in this domain is the Coalition for Content Provenance and Authenticity (C2PA), which provides technical standards for embedding invisible digital signatures into AI-generated media. Although the Indian rules do not mandate any specific technology, they encourage similar provenance-tracking mechanisms to ensure authenticity.
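The provenance-tracking idea can be illustrated with a minimal sketch: bind a piece of media to a signing key so that any later alteration is detectable. Note that real C2PA manifests use X.509 certificate chains and embedded assertions, not a bare HMAC; the key and function names below are hypothetical.

```python
import hashlib
import hmac

# Hypothetical platform signing key; real systems use certificate-backed
# asymmetric keys, not a shared secret.
SECRET_KEY = b"platform-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag binding the content to the signer."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content is unchanged since it was signed."""
    return hmac.compare_digest(sign_content(content), tag)

media = b"original AI-generated frame data"
tag = sign_content(media)
print(verify_content(media, tag))          # True: provenance intact
print(verify_content(media + b"x", tag))   # False: content was altered
```

The same principle underlies metadata-based authentication: a verifier recomputes the signature over the received media and rejects anything whose tag no longer matches.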

Stricter Takedown Timelines

The amendment significantly reduces the response time for removing harmful content:

  • Government or court takedown orders must be acted upon within 2–3 hours.
  • Response time for sensitive user complaints has been reduced from 72 hours to 36 hours.
  • Other grievance response timelines have been reduced from two weeks to one week.

The government argued that earlier timelines allowed harmful misinformation to spread widely before removal, making quicker enforcement necessary.

Enhanced User Awareness and Compliance

  • Platforms must now notify users of their terms and conditions every three months instead of annually. They must also explicitly warn that posting illegal AI-generated content may lead to account suspension, removal of content, or disclosure of user identity to law-enforcement agencies.
  • This change strengthens accountability and reinforces responsible digital behaviour among users.

Key Terms

  1. AI-Generated Content: Media created using artificial intelligence tools rather than human production.
  2. Synthetically Generated Information (SGI): Audiovisual or digital content produced or altered using AI techniques.
  3. Deepfake: AI-manipulated media that realistically imitates a real person’s voice, face, or actions.
  4. Provenance Tracking: Digital method of verifying the origin and authenticity of online content.
  5. C2PA (Coalition for Content Provenance and Authenticity): Global initiative developing technical standards for verifying digital media authenticity.
  6. Intermediary: Online platforms such as social media companies that host or transmit user content.
  7. Takedown Notice: Legal order requiring platforms to remove unlawful online content.

Conclusion

  • The AI labelling mandate marks a major step in India’s evolving digital regulatory framework.
  • By combining transparency requirements, faster content removal, and stronger platform accountability, the amendment seeks to balance technological innovation with public safety and democratic stability.
  • For competitive exam aspirants, the development is important from the perspective of cyber governance, data ethics, digital regulation, and emerging technology policy.
