The Indian government has taken a decisive step toward regulating artificial intelligence and synthetic media. On 22 October 2025, the Ministry of Electronics & Information Technology (MeitY) released draft amendments to the IT Rules 2021, mandating visible labelling of AI-generated visual and audio content. The proposal targets deepfakes, misinformation, and impersonation concerns that have surged alongside generative AI’s rapid growth. For technology companies, media platforms, and corporate counsel, this marks the beginning of AI content governance in India.
In-Depth Analysis
1. Key Provisions at a Glance
- Mandatory AI-content labelling: Any AI-generated visual must carry a visible watermark or text label occupying at least 10% of the image surface.
- Audio labelling requirement: Synthetic or AI-generated voice clips must include a clear declaration in the first 10% of playback duration.
- User disclosure obligation: Content uploaders must declare whether their submission is AI-generated.
- Platform accountability: Intermediaries must implement technical systems to verify declarations and maintain metadata traceability for AI content.
- Enforcement: Non-compliance can attract penalties under the IT Act 2000 and related intermediary liability provisions.
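The two quantitative thresholds above (a visible label covering at least 10% of the image surface, and an audio declaration completed within the first 10% of playback) lend themselves to simple automated compliance checks. The sketch below is illustrative only; the function names and parameters are our own assumptions, not anything prescribed in the draft rules:

```python
def label_meets_area_rule(label_w: int, label_h: int,
                          image_w: int, image_h: int) -> bool:
    """Hypothetical check of the draft's visible-label rule:
    the label must occupy at least 10% of the image surface."""
    return (label_w * label_h) >= 0.10 * (image_w * image_h)


def audio_declaration_in_window(declaration_end_s: float,
                                total_duration_s: float) -> bool:
    """Hypothetical check of the draft's audio rule:
    the declaration must finish within the first 10% of playback."""
    return declaration_end_s <= 0.10 * total_duration_s
```

In practice a platform would run checks like these at upload time, before the content is published, and reject or re-label assets that fail.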
2. Rationale Behind the Proposal
This draft regulation is India’s proactive response to the global deepfake crisis.
- Misinformation & impersonation: Fake political and celebrity videos have already swayed public opinion and infringed individual privacy rights.
- Tech accountability: The draft rules seek to shift responsibility onto intermediaries, aligning with the EU AI Act and China’s content-provenance rules.
- Trust restoration: The aim is to make digital content more traceable and trustworthy by ensuring end-users can distinguish AI-generated material from human-created work.
3. Practical Impact Across Sectors
A. Technology & AI Platforms
- Need to re-engineer upload workflows to collect AI-origin data and embed watermarks or labels.
- Build or integrate AI-detection APIs to confirm “AI-generated” claims.
- Prepare for audits and documentation retention, including user declarations and metadata logs.
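To illustrate the kind of declaration and metadata log a platform might retain for audits, here is a minimal, hypothetical record structure; the field names and design are our own assumptions, not requirements taken from the draft:

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AIContentRecord:
    """Illustrative audit-trail entry linking uploaded content
    to the uploader's AI-origin declaration and labelling status."""
    content_sha256: str        # hash ties the record to the exact asset
    declared_ai_generated: bool
    label_applied: bool
    uploader_id: str
    recorded_at: str           # UTC timestamp, ISO 8601


def make_record(content: bytes, declared_ai: bool,
                label_applied: bool, uploader_id: str) -> AIContentRecord:
    return AIContentRecord(
        content_sha256=hashlib.sha256(content).hexdigest(),
        declared_ai_generated=declared_ai,
        label_applied=label_applied,
        uploader_id=uploader_id,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )
```

Hashing the asset rather than storing it keeps the audit log compact while still allowing a regulator to verify that a given file matches its declaration.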
B. Media & Entertainment
- Broadcasters, OTTs, and marketing agencies must evaluate creative workflows, especially where generative imagery or synthetic voices are used.
- Contracts with content producers should include mandatory disclosure clauses.
- Litigation exposure increases for unlabelled AI content published online.
C. Corporate Counsel & Enterprises
- Legal teams must map all AI-usage touchpoints: advertising, HR training, automated videos, and internal communications.
- Introduce governance policies around AI output, ensure employee training, and maintain evidence of labelling compliance.
- Evaluate liability allocation with vendors or creative agencies.
D. Regulators & Policy Makers
- Balancing innovation with public safety remains a challenge. The 10% labelling rule, though a global first, raises questions about standardisation, machine readability, and accessibility.
4. Potential Legal and Compliance Challenges
- Definition ambiguity: The draft does not yet define “AI-generated” precisely; human-assisted content may fall into grey areas.
- Technology feasibility: Labelling may not work for small or audio-only assets.
- Cross-border enforcement: How India’s law will apply to international platforms remains under consultation.
- Timeline: The consultation window closes 6 November 2025, after which final notification is expected by early 2026.
5. Strategic Recommendations
- AI Usage Audit — Conduct an internal review of all AI-related processes and tools within your organisation.
- Governance Framework — Draft or update policies to include AI output labelling, provenance tracking, and periodic audit requirements.
- Contractual Safeguards — Amend vendor and influencer agreements to reflect labelling responsibilities and indemnities.
- Training & Awareness — Educate employees and content teams on identification, labelling, and approval workflows.
Impact Summary
- Corporates: Must integrate AI labelling into compliance matrices; potential new disclosure obligations in annual governance reports.
- Tech Startups: Need legal vetting of AI workflows; investor due diligence will examine regulatory preparedness.
- Media Platforms: Implementation of watermark and metadata systems is now a top compliance priority.
- Legal Advisors: Should prepare advisory notes and internal training on AI output regulation for clients in media, fintech, and e-commerce.
This marks India’s first move toward regulating not just how AI learns, but how its outputs appear in public. The focus has shifted from data inputs to AI outputs, a crucial step in restoring trust in digital ecosystems.


