Dealing with deepfakes (MeitY’s Version)

As deepfakes took centre stage in 2023, MeitY decided to tackle the emerging issue head-on. What ensued, however, was a series of conflicting statements from MeitY on how to do so. We wrote to the Ministry cautioning against surface-level, rushed interventions undertaken without deep research and consultation.

12 January, 2024
4 min read

tl;dr

On January 09, 2024, we wrote to the Ministry of Electronics and Information Technology (“MeitY”) raising our apprehensions about the Ministry’s conflicting statements on, and approach to, tackling the issue of deepfakes. Amidst a series of communications and notices sent to intermediaries reiterating the need to take proactive steps to curb ‘online harms’ on the internet, the Ministry has initiated no public consultation or deep research.

Deepfakes dilemma

After a deepfake video of actor Rashmika Mandanna was widely circulated on the internet, it caught the attention of the Union government as well, with the Prime Minister and the President expressing caution and noting the potential threats deepfakes pose to society. MeitY was also quick to respond to the incident and conducted several meetings with intermediaries, primarily social media platforms. The Minister of Electronics and Information Technology, Shri Ashwini Vaishnaw, after conversing with AI companies and experts, Nasscom, and academicians, spoke publicly about the need to draft legislation on deepfakes and to take clear action on it within 10 days, i.e. by December 03, 2023. The Minister of State for Electronics and IT, Shri Rajeev Chandrashekhar, also mentioned that the government would conduct an open consultative process before finalising the legislation.

Soon after, around December 14, 2023, Shri Rajeev Chandrashekhar shared that the Union government was considering releasing an advisory, and not a dedicated regulation, for deepfakes. This apparent change of plans was once again announced to the media rather than through any formal mechanism, without any accompanying justification or reasoning. Weeks after these public statements, no legislation, regulation, guideline, or appointment has been made, nor has any public consultation been held. Most recently, in the first week of January 2024, it was reported that MeitY is considering amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules, 2021”), to “explicitly define deepfakes and make it obligatory for all intermediaries to make ‘reasonable efforts’ to not host them”.

This back and forth from the Ministry is indicative of rushed, surface-level interventions that deal insufficiently with the complex problems posed by synthetic media. In the absence of any consultation with a diverse range of stakeholders, these multiple, and sometimes conflicting, statements have created confusion among companies, users, and other stakeholders.

We’re on the highway to (IT Rules) hell

MeitY urged all intermediaries, through the formal advisory it sent on December 26, 2023, to follow the due diligence obligations listed under the notified IT Amendment Rules, 2023. The advisory appears aimed at ensuring strict compliance with identifying and promptly removing misinformation, false or misleading content, and material impersonating others, specifically targeting growing concerns around deepfakes. Intermediaries were also reminded of the legal consequences of non-compliance with the IT Rules, 2021, i.e. loss of the exemption from liability provided under Section 79(1) of the Information Technology Act, 2000 (“IT Act”).

The amendments notified to the IT Rules in 2022 expanded the obligation on intermediaries under Rule 3(1)(b) to “make reasonable efforts to cause the users” not to post certain kinds of content. The sub-clauses (i) to (ix) under Rule 3(1)(b), which list the grounds on which content must be removed by intermediaries, were significantly amended. In particular, Rule 3(1)(b)(v), i.e. content that “knowingly and intentionally communicates any misinformation or information which is patently false and untrue or misleading in nature”, was criticised for its ambiguous phrasing and vague definitions, as it risks making private entities the arbiters of permissible speech, contrary to the directions of the Hon’ble Supreme Court in Shreya Singhal vs Union of India. Moreover, the term “false” implies the existence of an objectively identifiable truth and establishes a ‘true-false’ binary for all online content. Content in the complex digital space, however, cannot always be neatly bucketed into a straightforward binary of ‘true’ and ‘false’ (see here, here, and here). These new provisions, purportedly aimed at providing additional recourse to aggrieved users, do not “provide a force of law” as they are ultra vires the safe harbour framework under Section 79 of the IT Act, 2000.

Given these conflicting statements, rushed policy interventions, and troubling suggestions, we wrote to MeitY highlighting the concerns stated above. We also noted that definitional inadequacies in the IT Rules and the challenges in accurately identifying synthetic media, combined with threats to platforms’ immunity and stringent punishment for non-compliance or inaction, may threaten media freedom and legitimacy. In the absence of long-term interventions that address underlying social problems, imposing a penalty or taking away safe harbour for inaction or delayed action on the part of platforms will not address the root cause of the spread of such content. Instead, it may result in over-policing of content by platforms, which may be forced to make subjective or arbitrary content moderation decisions, without a deeper understanding of the issue at hand, simply to avoid losing their safe harbour.

Dealing with deepfakes (User Rights’ Version)

While the rise of synthetic media is an evolving issue of significance, it is one of many emerging issues in the digital space that need deep consideration, thorough deliberation, and broad-based consultation. We cautioned the Ministry against going ahead with hurried policy or regulatory action based on a few closed-door meetings with technology platforms, and advised a broader multi-stakeholder consultation with journalists, fact-checkers, policymakers, civil society organisations, and others who are dealing with the potential malicious use of synthetic media. For a deeper and uniform understanding of harms, including but not limited to deepfakes, we once again asked the Ministry to clearly state its conception of ‘user harms’ in the Indian context, including the various harms arising from the use of synthetic media. We highlighted that although the risks posed by deepfakes are emergent problems, efforts must be made to improve information literacy and to understand the effects of changing information flows.

Important documents

  1. Press release of the advisory dated December 26, 2023 sent by MeitY to intermediaries (link)
  2. IFF’s letter to MeitY on the issue of deepfakes (link)
  3. Link to IFF’s public brief on IT Amendment Rules 2023 (link)
