Technology

Tech industry groups urge MeitY to refine AI content rules to boost innovation, ensure global alignment

The technology industry has called on the Ministry of Electronics and Information Technology (MeitY) to adopt a flexible and globally harmonised approach to the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, which seek to label artificial intelligence-generated content in the public domain.

The draft rules proposed by MeitY require platforms and users to label AI-generated visuals with a visible marker covering at least 10% of the display area or, for audio, to add a disclaimer covering the first 10% of the content's duration.

The submissions to MeitY reflect broad industry concern that while addressing the threats posed by AI-generated content is crucial, overly rigid regulation could stifle technological progress and complicate compliance for global-facing businesses.

Nasscom, which represents India’s tech sector, urged the ministry to clarify the definitions of “synthetically generated information” and “deepfake synthetic content”, arguing that the rules should focus on harmful and malicious content rather than sweeping in all algorithmically altered media. The association raised concerns about the technical feasibility of some labelling proposals and called for distinct obligations based on whether technology is consumed by businesses or individuals. It warned that uniform rules for platforms with vastly different business models and capabilities could impose unworkable burdens, potentially hampering startups and small firms disproportionately.

BSA, which represents major global software firms, echoed several of these points in its own submission. It said tackling the challenges posed by synthetically generated information, including deepfakes and malign disinformation campaigns, is urgent, but cautioned MeitY against imposing inflexible standards that it said might undermine innovation. It recommended that India avoid requiring visible watermarks or labels on AI-generated content, warning that such marks are easily removed and could make Indian digital outputs less attractive globally. Instead, it suggested machine-readable markers and advocated alignment with international standards such as the Coalition for Content Provenance and Authenticity (C2PA) specification, making compliance simpler for multinational platforms without sacrificing transparency or user safety.

Both industry groups pushed for policy frameworks that encourage responsible innovation and allow rapid adaptation to technical advances. They highlighted the risk of India falling out of step with global digital standards if it moves too fast without international alignment. Their submissions said India can be both a standard-bearer for ethical technology and a global digital powerhouse, provided its laws remain pragmatic, clear, and future-ready.

MeitY’s proposed changes come as governments around the world scramble to address the ethical, social, and security issues created by generative AI.
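To make the visible-marker requirement concrete, the following is a minimal illustrative sketch, assuming the open-source Pillow imaging library, of how a label covering 10% of an image's display area could be stamped onto an output after generation; the label text, colour and placement here are assumptions for illustration, not anything the draft prescribes.

# A minimal sketch (assumption: the Pillow imaging library is available) of
# stamping a visible label over 10% of an image's display area after
# generation. The label text, colour and placement are illustrative choices,
# not anything prescribed by the draft rules.
from PIL import Image, ImageDraw

def add_visible_label(src: str, dst: str, text: str = "AI-generated content") -> None:
    img = Image.open(src).convert("RGB")
    # A full-width band whose height is 10% of the image height covers
    # exactly 10% of the total display area.
    band = max(1, int(img.height * 0.10))
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, img.height - band, img.width, img.height], fill="black")
    draw.text((10, img.height - band + 5), text, fill="white")
    img.save(dst)

# Example usage with hypothetical file names.
add_visible_label("generated.png", "generated_labelled.png")

As BSA's submission points out, a mark applied this way is also easy to strip, for instance by simply cropping the band away.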
The rule changes will impact all significant social media intermediaries (SSMIs), that is, those with 5 million or more registered users in India. Google-owned YouTube; Meta’s Facebook, Instagram, Threads and WhatsApp; X (formerly Twitter); Snap; LinkedIn; and ShareChat will have to obtain user declarations on whether uploaded content is synthetic, deploy automated tools to verify these declarations, and ensure synthetic content is clearly marked with appropriate labels, or they will be considered non-compliant.

Anybody enabling the creation or modification of synthetic content must prominently label such material. All firms would have to embed the disclaimer in their content, whether they are social media intermediaries or simply software providers. This opens up a long list of popular AI-based software, apps and services, including OpenAI's ChatGPT, Google's Gemini, Microsoft's Copilot and Meta's AI assistant, to scrutiny.
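At upload time, the obligations described above amount to a declare-verify-label loop. The sketch below is a rough, hypothetical outline of that flow in Python; the names are invented for illustration, and the automated check is deliberately left as a stub because, as industry executives note, real detectors are probabilistic and imperfect.

# A schematic sketch (hypothetical names, not a prescribed implementation)
# of the upload-time checks an SSMI might run under the draft rules:
# collect the uploader's declaration, run an automated check, and label
# the content if either signal indicates it is synthetic.
from dataclasses import dataclass

@dataclass
class Upload:
    media_bytes: bytes
    declared_synthetic: bool  # the user's declaration at upload time

def looks_synthetic(media: bytes) -> bool:
    """Placeholder for an automated detector (metadata check, watermark
    scan or classifier); real detectors are probabilistic and imperfect."""
    return False  # assumption: stubbed out for illustration

def process_upload(upload: Upload) -> dict:
    auto_flagged = looks_synthetic(upload.media_bytes)
    flagged = upload.declared_synthetic or auto_flagged
    return {
        "declaration": upload.declared_synthetic,
        "auto_flagged": auto_flagged,
        "label": "synthetically generated information" if flagged else None,
    }

Under the draft, non-compliance at any of these steps is what could expose a platform to liability.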
One industry source said a major point of contention is how these amendments may affect the “safe harbour” provisions, long seen as a crucial shield for online intermediaries. Under current law, platforms enjoy conditional immunity from liability for third-party content, provided they meet due diligence requirements and act promptly upon receiving takedown notices.

The new draft does not abolish this framework but explicitly clarifies that due diligence obligations now include verification and labelling of AI-generated material. Failure to comply, whether in flagging unlabelled content or in authenticating user declarations about AI use, could strip platforms of their conditional immunity, triggering what an industry executive described as “secondary liability” for harmful content. While the safe harbour itself remains intact, “these changes further clarify the due diligence obligations for platforms, making the burden heavier”, the executive added.

For providers of AI tools or content hosting platforms, the mandate presents significant technical challenges. “From a technology provider perspective, implementing a reliable 10% watermark is an incredibly heavy lift. Image generation models are by nature probabilistic and non-deterministic; prompt instructions like ‘make the watermark cover 10%’ often fail,” the executive explained. Such requirements add latency and cost to AI outputs, making the compliance burden disproportionately high for platforms compared with the ease with which users may circumvent the rules by cropping, editing, or screenshotting content.

The executive outlined three strategies for meeting the draft’s verification demands: detecting visible labels and metadata, using hidden watermarking and embedded metadata, and deploying classifiers to infer whether media is AI-generated. Each of these is “imperfect and prone to error,” the executive said, and can result in “overreaching or under-enforcement”, wrongly flagging edited photographs or missing more sophisticated forgeries.

Creative professionals and advertising agencies are also expressing concerns, pointing to the practical limitations of dedicating 10% of an audio ad to disclaimers or of reliably watermarking images at scale. The technology sector notes that many global AI tools do not embed detectable signals, and cross-platform sharing often strips metadata, further complicating enforcement.

The public consultation on the draft rules closes on November 13, with no clarity yet on timelines for implementation or compliance grace periods. Industry participants anticipate a protracted period of negotiation over technical feasibility.

The rules have also prompted questions over the procedural safeguards available to users whose legitimate content could be wrongly taken down. As per current practice, affected users can pursue in-app appeals, escalate to a grievance officer, and eventually approach statutory grievance appellate committees or the courts.

The industry executive warned that the draft rules fundamentally expand “due diligence obligations” and could result in more frequent and uncertain content moderation risks, especially as the “reasonable person test” for authenticity is difficult to operationalise against the evolving landscape of digital manipulation and user behaviour.
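By way of illustration, here is a rough sketch of the first and simplest of the three verification strategies the executive described: scanning a file's embedded metadata for provenance signals. It assumes the Pillow library, and the marker strings it searches for are illustrative guesses rather than any official list.

# A heuristic sketch (assumptions, not a standard-mandated check) of the
# metadata-based detection strategy: scan whatever metadata Pillow exposes
# for strings commonly associated with AI provenance markers.
from PIL import Image

# Marker substrings are illustrative guesses, not an exhaustive or official list.
PROVENANCE_HINTS = ("c2pa", "trainedalgorithmicmedia", "compositesynthetic")

def metadata_suggests_ai(path: str) -> bool:
    img = Image.open(path)
    blobs = []
    # EXIF "Software" tag (0x0131), if present.
    software = img.getexif().get(0x0131)
    if software:
        blobs.append(str(software))
    # Any other textual or binary chunks Pillow surfaces (XMP, PNG text, etc.).
    for value in img.info.values():
        if isinstance(value, bytes):
            blobs.append(value.decode("utf-8", errors="ignore"))
        elif isinstance(value, str):
            blobs.append(value)
    combined = " ".join(blobs).lower()
    return any(hint in combined for hint in PROVENANCE_HINTS)

print(metadata_suggests_ai("upload.jpg"))

Its weakness is exactly the one flagged above: a screenshot or re-encoded copy of the same image typically carries none of this metadata and would pass through undetected.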
