Industry Leaders Comment on IT Rules 2026 of India

Alisha Parul

From Opinions Desk

The Indian government has tightened the rules governing AI-generated content online. The salient features are as follows –

Legal recognition of artificial content: The rules provide India's first formal definition and regulation of artificially/synthetically generated information (SGI) – They target deepfakes and AI-generated impersonations, while distinguishing such content from routine editing, academic and training materials, and similar uses.

Mandatory Labelling: All SGI must be prominently marked as artificial and embedded with metadata or unique identifiers to trace its origin – Platforms are barred from allowing the removal or suppression of AI labels or metadata.

Obligations for Significant Social Media Intermediaries (SSMIs): SSMIs must verify user declarations of synthetically generated content and ensure clear, prominent disclosure of SGI before publication.

Failure to comply with these provisions may result in platforms losing their Safe Harbour protection.

“Safe Harbour” protection under Section 79 of the IT Act shields intermediaries, including SSMIs, from liability for user-posted and other third-party content.

Prohibited Content: Intermediaries must block synthetic content involving Child Sexual Abuse Material (CSAM), non-consensual intimate imagery (NCII), false documents, or deceptive impersonation.

Faster Compliance Timelines: Takedown or disabling of access on receipt of lawful orders must be undertaken within 3 hours, as against 36 hours earlier – The grievance redressal timeline is shortened from 15 days to 7 days.

We have received comments from industry leaders on these IT Rules –

“From a policy perspective, India’s move is especially consequential because of sheer scale. Home to over 1.4 billion people and roughly 900 million internet users, the country represents one of the largest single digital publics in the world. When a market of this magnitude mandates AI labelling, persistent metadata, and traceability for synthetic content, global platforms cannot treat compliance as peripheral; it reshapes product design and governance practices worldwide. This shifts norm-setting power away from being concentrated solely in Silicon Valley or Brussels and toward a more genuinely multipolar model of digital regulation.

In that sense, the amendment carries clear decolonising potential. It signals that countries in the Global South, representing the majority of the world’s online population, are no longer passive recipients of technological standards but active rule-makers. By embedding safeguards against impersonation, large-scale misinformation, and criminal misuse, while exempting research and routine editing, India is articulating a regulatory philosophy grounded in its own realities: dozens of major languages, national elections involving hundreds of millions of voters, and a media ecosystem where misleading audio-visual content can reach millions within hours.

If effectively implemented, AI labelling in a user base this large could force global platforms to internalise transparency by design rather than treat it as a narrow compliance exercise for Western regulators. That is where the real structural impact lies, not only in curbing deepfakes at home, but in redistributing regulatory influence in the emerging global AI order.”

–Alisha Butala, Research Consultant, Future Shift Labs 

“India’s updated IT rules mandating AI content labelling and a three-hour takedown window are a significant legal advance toward accountable digital governance in the age of synthetic media. Clear labelling will enhance transparency and empower users to distinguish AI-generated content, reducing deception and harm. The tight takedown deadline also recognises the speed of online harm. 

However, fulfilling such timelines without robust procedural safeguards, independent oversight, and clear definitions may strain due process and risk over-removal or arbitrary censorship, especially if platforms shoulder disproportionate liability without rights of appeal. Balanced implementation, legal clarity, and stakeholder dialogue are essential to protect free expression while curbing misuse.”

–Parul Madan, Programme Lead, Training and Capacity, Yashoda AI

The above comments show that industry is taking AI-generated material seriously and that the government is moving in step with industry. The emphasis on curbing illegal activities carried out using AI is likewise being treated seriously by the government.

If implemented judiciously, these rules can benefit both the industry and society at large.
