Northeast News - Northeast India news 24×7

India’s new rules to tackle deepfakes look good, but are hard to put into action

The proposed rules to address the menace of deepfakes are technically unfeasible, socially naïve, and legally clumsy. A better solution would be to invest in building an informed, critical citizenry.

By 360info.org
November 26, 2025

In a country obsessed with celebrities, it took an actress, Rashmika Mandanna, becoming the victim of a deepfake for policymakers to sit up and take note.

The incident served as a jarring wake-up call, demonstrating with terrifying clarity how easily generative artificial intelligence (AI) can be weaponised against someone. It was a stark indication that the era of synthetic media is not a future worry; it is here.

The Indian Government has responded to the crisis with a few changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules). The proposed amendments, released on October 22, 2025, allow the Ministry of Electronics and Information Technology (MeitY) to at least attempt to tackle the threat of AI-enabled misinformation and its potential consequences.

The proposed amendments contain a new set of due diligence obligations: all AI-generated content must be clearly labelled, and online platforms must be able to trace its origin.
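To make the label-and-trace obligation concrete, here is a hypothetical sketch of what a platform-side record combining both requirements might look like. The schema and every field name are our own illustration; the draft rules do not prescribe any format.

```python
import json

# A hypothetical provenance record a platform might keep for each upload
# under the draft's labelling and traceability obligations. All field
# names are illustrative assumptions, not anything the rules specify.
record = {
    "content_id": "example-0001",    # platform-assigned identifier
    "declared_synthetic": True,      # the user's self-declaration
    "label_applied": True,           # visible AI label added by the platform
    "generator": "unknown",          # tool of origin, if it can be traced
    "uploaded_by": "user-42",        # account that published the content
}

print(json.dumps(record, indent=2))
```

The point of the sketch is only that both duties reduce to bookkeeping the platform must do at upload time; the hard part, as discussed below, is verifying the declaration at all.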

The intent is laudable, and there’s no argument against the need to protect the masses from AI-generated deepfakes and associated dangers such as invasion of privacy, defamation, and violation of dignity.

Yet, in the rush to regulate, we risk creating a well-intentioned policy that could prove challenging to implement and may even fail to address the root cause(s).

This is a classic case of a policy-versus-implementation mismatch – a set of rules that look great on paper but are hard to implement in practice.

A technical mismatch?

The first gap emerges between regulatory expectations and technical realities. As per the draft Rule 4(1A), Significant Social Media Intermediaries (SSMIs) must “deploy reasonable and proportionate technical measures to verify such declarations”. Here, the mandate requires users of such social media platforms to declare that their content is not synthetically generated. The Explanation to this rule states that it is the responsibility of the social media intermediaries, or the platforms, to verify the correctness of user declarations and that “no synthetically generated information is published without such declaration or label.”

The question is, how would social media platforms implement this? There is currently no single, reliable technology that can definitively detect all forms of AI-generated content; detection tools are constantly one step behind generative tools. As soon as a new detection method is developed, a new content generation model emerges to evade it.

By mandating “verification”, the government is placing a vague and highly burdensome obligation on platforms, one that may prove unachievable in practice. For instance, how would an Indian startup comply with this requirement? What are the legal standards for “reasonable and proportionate” technical measures to detect fakes?

These rules inadvertently give plenty of leeway to a few big tech companies, which can pour billions into experimental research and development, while stifling the very innovation in India’s own AI ecosystem that the government has been championing.

Misdiagnosis of consumption

The second — and perhaps more profound — mismatch lies in the policy’s understanding of the human side of misinformation.

The government’s core assumption is that the problem is technical – the user cannot tell whether a video is fake. Therefore, the solution as per Rule 3(3) is also of a technical nature – any synthetically generated information must be clearly “labelled”.

This misdiagnosis could prove dangerous.

A deepfake does not go viral because it looks authentic, but because it confirms pre-existing biases. For instance, Rashmika Mandanna’s deepfake going viral implicates a broader issue: the objectification of women. Its power is not in the pixels generated, but in how it resonates with people emotionally.

In a low-trust, high-context social setting, the proposed amendment’s labelling obligation — the label signifying AI usage must cover at least 10 percent of the total surface area — may prove a rather weak shield. A user-forwarded malicious video that validates people’s worldview is likely to elicit more trust than a label stating that the content is AI-generated.
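The arithmetic behind that threshold is simple to sketch. The 10 percent figure comes from the proposed amendment as described above; treating the label as a full-width banner is purely our own illustrative layout, not anything the rules prescribe.

```python
def min_label_area(width_px: int, height_px: int, percent: int = 10):
    """Minimum label size under a rule requiring the label to cover
    at least `percent` of the frame's surface area.

    The 10 percent default reflects the draft amendment; the
    full-width-banner layout is an illustrative assumption.
    """
    total = width_px * height_px
    required = total * percent // 100     # pixels the label must cover
    banner_height = required // width_px  # height of a full-width strip
    return required, banner_height

# For a 1080x1920 vertical short-form video:
print(min_label_area(1080, 1920))  # (207360, 192): a 192-pixel-tall banner
```

A strip nearly 200 pixels tall on a phone screen is visually substantial, which is precisely the author's point: the obstacle is not the label's size but whether viewers give it any weight.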

The proposed amendments thus appear to be a technical patch for a deep and adaptive human problem. A more viable approach would be to invest massively in building critical thinking among citizens, rather than the State outsourcing a core responsibility such as public education to the private sector.

Efforts to tackle misinformation would be well served by teaching citizens how to consume the information available to them. Without this education, any labelling obligation will fail at the last mile of implementation — the human brain.

Regulatory patchwork

The third big mismatch is the legal framework itself. The proposed rules are an attempt to force the square peg of generative AI into the round hole that is the IT Rules, 2021.

The IT Rules were designed to regulate social media intermediaries and platforms that host third-party content, but AI ecosystems are far more complex.

That ecosystem includes AI model developers, app developers, and even application programming interface (API) providers. The proposed Rules create enormous legal ambiguity by stretching the old definition of intermediary to cover all actors within it.

The approach itself appears reactive and fragmented, and fails to provide clarity to the very industry it is aimed at.

Such a regulatory patchwork stands in contrast to the stable, predictable framework that our innovators need.

Beyond the practical challenges of implementation, as the Internet Freedom Foundation has already pointed out, the timeline for public consultation, from October 25 to November 6, 2025, was also insufficient.

Further, the proposed amendments raise a fundamental question regarding the separation of powers between the legislature and the executive. The grand bargain of India’s internet law regime — Section 79 of the Information Technology Act, 2000 (IT Act) — rests on the safe harbour principle: intermediaries such as social media platforms are protected against liability for user content only if they remain neutral.

Section 79 explicitly states that the safe harbour immunity is granted only when the intermediary does not “initiate the transmission, select the receiver of the transmission, and modify the content” of the transmission.

The proposed amendment obligates the platforms to do exactly that. As per the proposed amendment Rule 3(3), the platforms must “modify” the content by adding visible labels and metadata. They will now be obligated to select the content by verifying it before it can even be published.

However, a solution to this glaring contradiction is also provided in the proposed amendment, in the form of a proviso to Rule 3(1)(b). The proviso, in effect, provides that if the intermediaries carry out the modifications asked of them, those changes will not be treated as modifications under the IT Act, meaning the intermediaries will remain protected by the safe harbour of Section 79.

This is, at best, a legal manoeuvre. The government, through subordinate rules, is attempting to redefine the core substantive meaning of a parent law passed by the Parliament. This blurs the line between legislative rewriting and executive implementation.

Such a fundamental shift in intermediary liability from being a neutral host to an active editor and verifier appears to be a significant policy change.

A question of separation of powers thus emerges — should a significant change that alters the basic architecture of our internet laws be made via an executive notification, or should it be subject to full deliberation by the Parliament? To ensure its long-term legal and constitutional stability, the latter is the right choice.

A viable solution

The government is indeed concerned about deepfakes, and rightly so. But the new proposed rules, in their current form, may not represent the solution. They are technically unfeasible, socially naïve, and legally clumsy.

Implementing them may not stop or prevent the deepfakes; instead, it might create enormous burdens of compliance for businesses without solving the root cause of the problem.

Instead, the viable solution would be twofold. First, any regulation must be technologically feasible and formulated in consultation with the relevant stakeholders. Second, it must be paired with a long-term national mission for digital literacy.

We cannot solve the problem of deepfakes with rules alone. We can only hope to build a resilient, critical society that is prepared to face them.

Saraswathy Vaidyanathan is an Assistant Professor, School of Law, BML Munjal University, Haryana. 

Originally published under Creative Commons by 360info™.

