#SocialMediaComplianceWatch: analysis of Social Media Compliance Reports of November, 2021

Google (including YouTube), Facebook, Instagram, ShareChat, Snap, Twitter and WhatsApp have released their reports in compliance with Rule 4(1)(d) of the IT Rules 2021 for the month of November, 2021.

09 February, 2022
10 min read

tl;dr

Google (including YouTube), Facebook, Instagram, ShareChat, Snap, Twitter and WhatsApp released their reports in compliance with Rule 4(1)(d) of the IT Rules 2021 for the month of November 2021. The latest of these reports was made available in January 2022. The reports exhibit similar shortcomings, which reflect a lack of effort on the part of the SSMIs and the government to further transparency and accountability in platform governance. The SSMIs have continued to not report on government requests, have used misleading metrics, and have not disclosed how they use algorithms for proactive monitoring. You can read our analysis of the previous reports here.

Background

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘IT Rules’) require, under Rule 4(1)(d), significant social media intermediaries (‘SSMIs’) to publish monthly compliance reports. In these reports, they are required to:

  1. Mention the details of complaints received and actions taken thereon, and
  2. Provide the number of specific communication links or parts of information that the social media platform has removed or disabled access to by proactive monitoring.

To understand the impact of the IT Rules on users and the functioning of intermediaries, we analyse the compliance reports published by Google (including YouTube), Facebook, Instagram, ShareChat, Snap, Twitter and WhatsApp to capture some of the most important information for you below.

What is new in the reports?

In all the compliance reports we have examined since May, we found that each SSMI has, barring a few minor changes, consistently followed the same format for reporting every month. This shows a lack of effort on their part to further transparency by raising their standards of reporting from month to month.

For November, Twitter has listed new initiatives, projects, and updates to improve the ‘collective health’ of its platform. These measures include (i) redesigning labels for potentially misleading tweets and rolling out the feature to more users, (ii) launching a feature for in-app reporting for Child Sexual Exploitation, and (iii) expanding the scope of its Private Information Policy to disallow sharing of private media without consent.

Revelations made in the data on proactive monitoring:

In our previous post we discussed how, despite the disclosures made by Frances Haugen, Facebook and Instagram have not changed the metrics adopted for reporting proactive takedowns. Facebook and Instagram, in their reports, adopt the metrics of (i) ‘content actioned’, which measures the number of pieces of content (such as posts, photos, videos or comments) that they take action on for going against their standards and guidelines; and (ii) ‘proactive rate’, which refers to the percentage of ‘content actioned’ that they detected proactively before any user reported it. This metric is misleading because the proactive rate is calculated only over content on which action was taken, and excludes all content on Facebook (which may otherwise be an area of concern) on which no action was taken.
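
To see why this framing can overstate enforcement, here is a minimal sketch with entirely hypothetical numbers (not drawn from any report): the proactive rate is computed only over content that was actioned, so it can remain high even if a large volume of violating content is never detected or acted upon.

```python
# Hypothetical illustration of the 'proactive rate' metric reported by Facebook
# and Instagram. All numbers below are invented purely for this example.

content_actioned = 10_000        # pieces of content the platform took action on
detected_proactively = 9_000     # of those, flagged by the platform before any user report

proactive_rate = detected_proactively / content_actioned * 100
print(f"Reported proactive rate: {proactive_rate:.1f}%")  # 90.0%

# The metric says nothing about violating content that was never actioned at all.
# Suppose (hypothetically) another 40,000 violating posts were neither detected
# nor acted upon; the share of all violating content caught proactively is far lower.
violating_not_actioned = 40_000
total_violating = content_actioned + violating_not_actioned
actual_coverage = detected_proactively / total_violating * 100
print(f"Share of all violating content caught proactively: {actual_coverage:.1f}%")  # 18.0%
```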

As per the metrics provided, the proactive rate for actioning content for bullying and harassment has reduced significantly to 40.7%, after it suddenly rose to 84.8% in October from the lower rates in the previous months, where it stood at 48.7%, 50.9%, 42.3% and 36.7%, respectively. This figure, however, continues to be the lowest among the issue areas: for the 10 other issues, the rate is more than 91%.

Relatedly, the proactive rate for actioning content for reasons of hate speech is 91.1%, a slight rise after its significant decline to 86.1% in October from 96.5%, 97.7%, 97.2%, and 96.4% in the previous months.

Data on proactive monitoring for the other SSMIs is as follows:

  • Google’s proactive removal actions, through automated detection, steadily decreased to 3,75,468 in November 2021, as opposed to 3,84,509 in October, 4,50,246 in September and 6,51,933 in August.
  • Twitter suspended 35,583 accounts on grounds of Child Sexual Exploitation, non-consensual nudity, and similar content; and 3,820 for promotion of terrorism. These numbers stood at 29,938 and 4,072, respectively, in October.
  • WhatsApp banned 17,59,000 accounts in November 2021, as opposed to 20,69,000 in October, 22,09,000 in September, 20,70,000 in August, 30,27,000 between June 16 - July 31, 2021, and 20,11,000 between May 15 - June 15, 2021. WhatsApp bans are enforced as a result of its abuse detection process, which consists of three stages: detection at registration, during messaging, and in response to negative feedback, which it receives in the form of user reports and blocks (see the illustrative sketch after this list).
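
WhatsApp's white paper describes these stages only at a high level. The sketch below is a purely illustrative mock-up of how a staged abuse-detection flow of this shape could be wired together; the specific checks, thresholds and fields are our own hypothetical stand-ins, not WhatsApp's actual signals or logic.

```python
# Purely illustrative sketch of a three-stage abuse-detection flow of the kind
# WhatsApp's white paper describes (registration, messaging, negative feedback).
# Every check and threshold below is hypothetical.

from dataclasses import dataclass

@dataclass
class Account:
    phone_number: str
    messages_sent_last_hour: int = 0
    user_reports: int = 0
    blocks_by_users: int = 0

def check_at_registration(account: Account) -> bool:
    # Stage 1: flag suspicious sign-ups (e.g. bulk registrations) -- hypothetical rule.
    return account.phone_number.startswith("+000")

def check_during_messaging(account: Account) -> bool:
    # Stage 2: flag automated or bulk messaging behaviour -- hypothetical threshold.
    return account.messages_sent_last_hour > 1000

def check_negative_feedback(account: Account) -> bool:
    # Stage 3: act on negative feedback received as user reports and blocks.
    return account.user_reports + account.blocks_by_users >= 5

def should_ban(account: Account) -> bool:
    # An account is banned if any stage flags it.
    return (
        check_at_registration(account)
        or check_during_messaging(account)
        or check_negative_feedback(account)
    )

if __name__ == "__main__":
    suspect = Account(phone_number="+91XXXXXXXXXX", messages_sent_last_hour=1500)
    print(should_ban(suspect))  # True under the hypothetical messaging-rate rule
```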

What does the data on the complaints received reveal?

(Note: some of the reported data seems to be incorrect, as the number of grievances addressed is more than the number of grievances received.)

In the case of Google, 96.8% of the complaints received and 99.7% of the content removal action related to copyright and trademark infringement. This is around 3% lower than in the last report, where 99.5% of such content was taken down. Other reasons for content removal included circumvention (131), court order (56), and graphic sexual content (5). We have written about the trend of ‘complaint bombing’ of content for copyright infringement on YouTube to suppress dissent and criticism here. Content removals based on court orders rose to 56, continuing the increase seen in October (49); in the other months since this parameter was added to the report in May, the figure stood at 10, 12, 4, 6, and nil.

For Twitter, the largest number of complaints received related to abuse/harassment (24), followed by hateful conduct (13), defamation (11) and misinformation (7). The largest number of URLs actioned related to abuse/harassment (354), followed by misinformation (129) and impersonation (117). The number of complaints received for hateful conduct reduced again to 13 after it increased to 25 in October from 6, 0, 12 and 0 in the previous months.

What continues to be astounding is that zero complaints were received by Twitter for content promoting suicide and for child sexual exploitation content. It is important to note here that India is a country with 24.45 million Twitter users. The small number of monthly complaints received on such prominent issues may indicate one of two possibilities: (i) the grievance redressal system is turning out to be ineffective, or (ii) Twitter is able to take down most of such content proactively. Unless there are disclosures claiming the contrary, or audits are conducted into the platform governance operations of these SSMIs, there is no easy way to know which possibility weighs more.

For WhatsApp, 602 reports were received in total, out of which WhatsApp took action in 36 cases, all of which related to ban appeals, i.e. appeals against the banning of accounts.

Reports by ShareChat, LinkedIn and Snap Inc.

ShareChat, unlike the other SSMIs, provides in its report for November 2021 the number of requests received from law enforcement authorities and investigating agencies (11 requests), and user data was provided in response to all 11. Content was also taken down in 2 of these cases for violation of ShareChat’s community guidelines.

64,72,386 user complaints (as against 81,96,895 last month) were received, which have been segregated into 20 reasons in the report. The category for abusive language and others has been removed. ShareChat takes two kinds of removal actions: takedowns and bans. As per the report, takedowns are either proactive (on the basis of community guidelines and other policy standard violations) or based on user complaints. Proactive takedowns/deletions for November 2021 included copyright takedowns (62,711), takedowns of user-generated sexual content (2,06,538), of other user-generated content (70,37,688), chatroom deletions (1,126) and comment deletions (1,41,436).

ShareChat imposes three kinds of bans: (i) a UGC ban, where the user is unable to post any content on the platform for the specified ban duration; (ii) an edit profile ban, where the user is unable to edit any profile attributes for the specified time period; and (iii) a comment ban, where the user is banned from commenting on any post on the platform for the specified ban duration. The duration of these bans can be 1 day, 7 days, 30 days or 260 days. In case of repeated breaches of the guidelines, user accounts are removed for 360 days. As a result, 53,832 accounts were permanently terminated in November 2021.

Snap Inc. received 57,174 content reports through Snap's in-app reporting mechanism in November 2021. Content was enforced against in 13,083 cases, and 8,238 unique accounts were enforced against. Most reports related to sexually explicit content (23,780), followed by impersonation (16,848), spam (4,976), violence/harm (4,468), harassment and bullying (3,547), regulated goods (2,984) and hate speech (571).

LinkedIn’s transparency reports contain global summaries of its community report and government request report. With respect to India, LinkedIn received 30 requests for member data from the government in 2021.

Existing issues with the compliance reports

The following issues undermine the objective of transparency sought to be achieved by these reports since their inception, and continue to persist in the reports for November 2021:

  1. Algorithms for proactive takedown: SSMIs are opaque about the processes and algorithms they follow for the proactive takedown of content. Only WhatsApp has explained how it proactively takes down content, by releasing a white paper which discusses its abuse detection process in detail. The lack of transparency about human intervention in monitoring the kind of content that is taken down continues to be a concern. The IT Rules provide that SSMIs shall implement mechanisms for appropriate human oversight of measures deployed for proactive monitoring, which includes a periodic review of any automated tools. None of the SSMIs have reported on such periodic reviews.
  2. Lack of uniformity: There is a lack of uniformity in reporting by the major SSMIs. Each SSMI has adopted its own format, and provided different policy areas or issues on which it takes down content. The lack of uniformity is evidenced from the following: (i) the IT Rules were issued following a call for attention for “misuse of social media platforms and spreading of fake news”, but there seems to be no data disclosure on content takedown for fake news by any SSMI other than Twitter, (ii) Google and WhatsApp have not segregated the proactive action taken into different kinds of issues, but have provided the total number of proactive actions taken by them, (iii) Twitter has identified only 2 broad issues for proactive takedowns as opposed to 10 issues identified by Facebook and Instagram. Lack of uniformity makes it difficult for the government to understand the different kinds of concerns (as well as their extent) associated with the use of social media by Indian users.  
  3. No disclosure of government removal requests: Even though compliance with the IT Rules does not mandate disclosure of how many content removal requests were made by the government, in order to truly advance transparency in the digital life of Indians, it is imperative that SSMIs disclose, in their compliance reports, issue-wise government requests for content removal on their platforms.

How can the SSMIs improve their reporting in India?

The SSMIs can take the following steps while submitting their compliance reports under Rule 4(1)(d) of the IT Rules:

  1. Change the reporting formats: The SSMIs should endeavour to be truly transparent in their compliance reports. They have been following a cut-copy-paste format from month to month, showing little to no effort to overcome the shortcomings and opacity in their reports. SSMIs must adhere to international best practices (discussed below) and make incremental attempts to tailor their compliance reports to further transparency in platform governance and content moderation.
  2. Santa Clara Principles must be adhered to: The SSMIs must incorporate the Santa Clara Principles on Transparency and Accountability in Content Moderation in their letter and spirit. The operational principles of version 2.0 focus on numbers, notice and appeal. The first principle, on ‘numbers’, suggests how data can be segregated by the category of rule violated, provides for special reporting requirements for decisions made with the involvement of state actors, explains how to report on the flags received, and sets out parameters for increasing transparency around the use of automated decision-making.

Government actions furthering transparency

  1. The Minister of Electronics and Information Technology, while responding to a question on the banning of social media handles for creating fake and inciting content on Twitter, YouTube, Facebook and other platforms, provided yearly data on the number of URLs blocked under Section 69A of the IT Act. According to this data, the ministry blocked 6,096 accounts/websites/URLs in 2021.
  2. As per reports, officials from the Ministry of Information & Broadcasting recently engaged with executives of Google, Twitter, Facebook and ShareChat to discuss their failure to proactively remove fake news from their platforms.
  3. Following this meeting, in response to a question by Congress MP Anand Sharma, Minister for Electronics and Information Technology Ashwini Vaishnaw said that the government is open to introducing “stricter” guidelines for social media intermediaries if the Lok Sabha and Rajya Sabha are able to build a consensus on it.

What more can the Government do?

Transparency by SSMIs in the compliance reports will enable the government to collect and analyse India-specific data which would further enable well-rounded regulation. For this, complete disclosure by the social media intermediaries is imperative. The Rules must be suitably amended to achieve transparency in a systematic manner. This can be achieved by prescribing a specific format and standard for reporting. The Santa Clara Principles can be used as a starting point in this regard.

Stay tuned! We will be back next month with an analysis of the next set of reports.

Important Documents

  1. WhatsApp’s IT Rules compliance report for November 2021. (link)
  2. Google’s IT Rules compliance report for November 2021. (link)
  3. Facebook’s and Instagram’s IT Rules compliance report for November 2021. (link)
  4. Twitter’s compliance report for November 2021. (link)
  5. ShareChat’s IT Rules compliance report for November 2021. (link)
  6. LinkedIn’s Government Requests Report for 2021. (link)
  7. Snap’s compliance Report for November 2021. (link)
  8. Our analysis of previous compliance reports. (link)
