#SocialMediaComplianceWatch: Analysis of Social Media Compliance Reports of October 2021

Social media platforms have released their reports for October 2021 in compliance with Rule 4(1)(d) of the IT Rules 2021. The reports continue to suffer from the same shortcomings, which exhibits a lack of effort on the part of the SSMIs and the government to further transparency and accountability in platform governance.

21 January 2022
9 min read

tl;dr

Google (including YouTube), Facebook, Instagram, ShareChat, Snap, Twitter and WhatsApp have released their reports in compliance with Rule 4(1)(d) of the IT Rules 2021 for the month of October 2021. The latest of these reports was made available in December 2021. The reports continue to suffer from the same shortcomings, which exhibits a lack of effort on the part of the SSMIs and the government to further transparency and accountability in platform governance. The SSMIs have continued to not report on government requests, have used misleading metrics, and have not disclosed how they use algorithms for proactive monitoring. You can read our analysis of the previous reports here.

Background

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘IT Rules’) require, under Rule 4(1)(d), significant social media intermediaries (‘SSMIs’) to publish monthly compliance reports. In these reports, they are required to:

  1. Mention the details of complaints received and actions taken thereon, and
  2. Provide the number of specific communication links or parts of information that the social media platform has removed or disabled access to by proactive monitoring.

To understand the impact of the IT Rules on users and the functioning of intermediaries, we examine the compliance reports published by Google (including YouTube), Facebook, Instagram, ShareChat, Snap, Twitter and WhatsApp to capture some of the most important information for you below.

What is new this time?

We have been analysing the compliance reports published by Google (including YouTube), Facebook, Instagram, WhatsApp and Twitter since May. We have found that these SSMIs follow a cut-copy-paste format in all their monthly compliance reports, barring minor changes from time to time. For October, Twitter has reported two new initiatives to improve the ‘collective health’ of its platform.

These measures include (i) a ‘remove the follower’ feature which allows users to remove a follower without blocking them, and (ii) publishing the first part of its research on whether Twitter’s recommendation algorithms amplify political content. According to the report, this part examines tweets from elected officials in seven countries (Canada, France, Germany, Japan, Spain, the United Kingdom, and the United States) and tweets from news outlets.

What does the data on proactive monitoring reveal?

Despite the disclosures made by Frances Haugen, Facebook and Instagram have not changed the metrics adopted for reporting proactive takedowns. According to their reports, Facebook and Instagram adopt the metrics of (i) ‘content actioned’, which measures the number of pieces of content (such as posts, photos, videos or comments) that they take action on for going against their standards and guidelines; and (ii) ‘proactive rate’, which refers to the percentage of ‘content actioned’ that they detected proactively before any user reported it. The latter metric is misleading because it only covers content on which action was taken, and excludes all content on Facebook (which may otherwise be an area of concern) on which no action was taken. We have written more about this in our previous post.
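To make the limitation concrete, here is a minimal sketch (ours, with hypothetical numbers; it does not reproduce Meta's methodology) of how a proactive rate is computed, and why it is silent on violating content that was never actioned:

```python
# Illustrative only: the counts below are hypothetical, not from
# Facebook's or Instagram's reports.

def proactive_rate(proactively_actioned: int, reported_then_actioned: int) -> float:
    """Percentage of actioned content detected before any user report."""
    total_actioned = proactively_actioned + reported_then_actioned
    return 100 * proactively_actioned / total_actioned

# 900 pieces actioned proactively, 100 actioned after user reports:
print(f"{proactive_rate(900, 100):.1f}%")  # 90.0%

# The rate is identical whether 10 or 10,00,000 violating pieces remain
# un-actioned on the platform: un-actioned content never enters the
# denominator, which is the crux of the criticism above.
```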

Facebook’s opacity continues to be a concern for transparency advocates. It was further evidenced in the responses of Mr Shivnath Thukral (Head of Public Policy at Facebook, now Meta) during a hearing before the Delhi Peace and Harmony Committee, where he refused to detail the measures Facebook took in response to communally sensitive posts that were amplified on its platform during the 2020 Delhi riots.

Be that as it may, as per the metrics provided, the proactive rate for actioning content for bullying and harassment has risen dramatically to 84.8%, from 48.7%, 50.9%, 42.3% and 36.7% in the previous months. This figure, however, continues to be the lowest among the issue categories: for the 10 other issues, the rate is more than 91%. This either means that the largest number of user complaints were received under this category, or that Facebook is consistently failing to curb the menace of bullying and harassment.

Relatedly, the proactive rate for actioning content for hate speech has dropped significantly to 86.1%, from 96.5%, 97.7%, 97.2% and 96.4% in the previous months.

Data on proactive monitoring for the other SSMIs is as follows:

  • Google’s proactive removal actions, through automated detection, have steadily decreased: 3,84,509 in October 2021, as opposed to 4,50,246 in September and 6,51,933 in August - a drop of more than 40% since August! (See the arithmetic check after this list.)
  • Twitter suspended 29,938 accounts on grounds of Child Sexual Exploitation, non-consensual nudity, and similar content; and 4,072 for promotion of terrorism. These numbers stood at 25,500 and 4,790, respectively, in September.
  • WhatsApp banned 20,69,000 accounts in October 2021 as opposed to 22,09,000 in September, 20,70,000 in August, 30,27,000 between June 16 - July 31, 2021, and 20,11,000 between May 15 - June 15, 2021. WhatsApp bans are enforced as a result of their abuse detection process which consists of three stages - detection at registration, during messaging, and in response to negative feedback, which it receives in the form of user reports and blocks.
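The 40% figure in the first bullet can be verified with quick arithmetic against the numbers Google reported. A minimal check (ours, for illustration; the language choice is our own):

```python
# Back-of-the-envelope check of the decline in Google's proactive
# removals, using the figures reported above (digits grouped with
# underscores for readability).
aug, sep, oct_ = 651_933, 450_246, 384_509

def pct_decrease(earlier: int, later: int) -> float:
    """Percentage decrease from an earlier count to a later one."""
    return 100 * (earlier - later) / earlier

print(f"Aug -> Oct: {pct_decrease(aug, oct_):.1f}% decrease")  # ~41.0%
print(f"Sep -> Oct: {pct_decrease(sep, oct_):.1f}% decrease")  # ~14.6%
```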

What does the data on the complaints received reveal?

[Table: data on complaints received and actioned by the SSMIs.]
*The data seems to be incorrect, as the number of grievances addressed is more than the number of grievances received.

In the case of Google, 96% of the complaints received, and 99.5% of the content removal actions taken, related to copyright and trademark infringement. This is marginally lower than in the last report, where the figure stood at 99.9%. Other reasons for content removal included graphic sexual content (7), court orders (49), and counterfeiting (53). We have written about the trend of ‘complaint bombing’ of content for copyright infringement on YouTube to suppress dissent and criticism here. There is also an increase in the relatively small number of content removals on the basis of court orders, which stood at 10, 12, 4, 6, and nil in the months since this parameter was added to the report in May.

For Twitter, the largest number of complaints received related to abuse/harassment (75), followed by hateful conduct (25), misinformation (16) and defamation (15); the largest number of URLs actioned related to defamation (195). The number of complaints received for hateful conduct has increased to 25, from 6, 0, 12 and 0 in the previous months.

What continues to be astounding is that Twitter received zero complaints for content promoting suicide and for child sexual exploitation content. It is important to note here that India has 24.45 million Twitter users (the figure stood at 22.1 million in October 2021). The small number of monthly complaints received on such prominent issues may indicate one of two possibilities: (i) the grievance redressal system is proving ineffective, or (ii) Twitter is able to take down most of such content proactively. Unless there are disclosures claiming the contrary, or audits are conducted into the platform governance operations of these SSMIs, there is no easy way to know which possibility weighs more.

For WhatsApp, 500 reports were received in total, out of which WhatsApp took action in 18 cases, all of which related to ban appeals, i.e. appeals against the banning of accounts.

Bonus: Reports by ShareChat, LinkedIn and Snap Inc.

ShareChat, in its report for October 2021 and unlike the other SSMIs, provides the number of requests from law enforcement authorities and investigating agencies (12 requests in October 2021), in response to which user data was provided in 11 cases. Content was also taken down in 5 of these cases for violation of ShareChat’s community guidelines.

ShareChat received 81,96,895 user complaints (as against 95,39,000 last month), which the report segregates into 22 reasons. ShareChat takes two kinds of removal actions: takedowns and bans. As per the report, takedowns are either proactive (on the basis of community guidelines and other policy standard violations) or based on user complaints. Proactive takedowns/deletions for October 2021 included copyright takedowns (88,663), takedowns of user-generated sexual content (2,50,602) and other user-generated content (85,78,254), chatroom deletions (1,889) and comment deletions (1,63,411).

ShareChat imposes three kinds of bans: (i) a UGC ban, where the user is unable to post any content on the platform for the specified ban duration; (ii) an edit profile ban, where the user is unable to edit any profile attributes for the specified time period; and (iii) a comment ban, where the user is banned from commenting on any post on the platform for the specified ban duration. The duration of these bans can be 1 day, 7 days, 30 days or 260 days. In case of repeated breaches of the guidelines, user accounts are permanently removed for 360 days. As a result, 67,396 accounts were permanently terminated in October 2021.
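For illustration, the ban scheme described above could be modelled along the following lines; this is a minimal sketch under our own naming assumptions, not ShareChat's implementation:

```python
# Sketch of ShareChat's ban scheme as described in its report.
# Type names and structure are our assumptions, not ShareChat's code.
from dataclasses import dataclass
from enum import Enum

class BanType(Enum):
    UGC = "cannot post content for the ban duration"
    EDIT_PROFILE = "cannot edit profile attributes for the ban duration"
    COMMENT = "cannot comment on any post for the ban duration"

ALLOWED_DURATIONS_DAYS = (1, 7, 30, 260)  # durations listed in the report

@dataclass
class Ban:
    ban_type: BanType
    duration_days: int

    def __post_init__(self) -> None:
        if self.duration_days not in ALLOWED_DURATIONS_DAYS:
            raise ValueError(f"unsupported ban duration: {self.duration_days}")

# e.g. a 7-day comment ban
ban = Ban(BanType.COMMENT, 7)
```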

Snap Inc. received 54,445 content reports through Snap’s in-app reporting mechanism in October 2021. It took enforcement action against content in 11,385 cases, and against 7,211 unique accounts. Most reports related to sexually explicit content (21,603), followed by impersonation (16,951), violence/harm (4,622), spam (4,292), harassment and bullying (3,570), regulated goods (2,889) and hate speech (518).

LinkedIn’s transparency reports contain global summaries of its community report and its Government Requests Report. With respect to India, LinkedIn received 40 requests for member data from the government in 2020.

Existing issues with the compliance reports

The following issues undermine the objective of transparency sought to be achieved by these reports since their inception, and continue to persist in the reports for October 2021:

  1. Algorithms for proactive takedown: SSMIs are opaque about the processes/algorithms they follow for the proactive takedown of content. Facebook and Instagram state that they use “machine learning technology that automatically identifies content” that might violate their standards, Google uses an “automated detection process”, and Twitter claims to use “a combination of technology and other purpose-built internal proprietary tools”. WhatsApp has explained how it proactively takes down content by releasing a white paper that discusses its abuse detection process in detail. However, the lack of transparency about human intervention in monitoring the kind of content that is taken down continues to be a concern.
  2. Lack of uniformity: There is a lack of uniformity in reporting by the major SSMIs. Each SSMI has adopted its own format, and provided different policy areas or issues on which it takes down content. The lack of uniformity is evidenced by the following: (i) the IT Rules were floated following a call for attention regarding the “misuse of social media platforms and spreading of fake news”, yet there is no data disclosure on content takedowns for fake news by any SSMI other than Twitter; (ii) Google and WhatsApp have not segregated the proactive actions taken into different kinds of issues, but have only provided the total number of proactive actions taken by them; and (iii) Twitter has identified only 2 broad issues for proactive takedowns, as opposed to the 10 issues identified by Facebook and Instagram. This lack of uniformity makes it difficult for the government to understand the different kinds of concerns (as well as their extent) associated with the use of social media by Indian users.
  3. No disclosure of government removal requests: Even though the IT Rules do not mandate disclosure of the number of content removal requests made by the government, it is imperative, in order to truly advance transparency in the digital life of Indians, that SSMIs disclose in their compliance reports issue-wise government requests for the removal of content on their platforms.

How can the SSMIs improve their reporting in India?

The SSMIs can take the following steps while submitting their compliance reports under Rule 4(1)(d) of the IT Rules:

  1. Change the reporting formats: The SSMIs should endeavour to be truly transparent in their compliance reports. They have been following a cut-copy-paste format from month to month, showing little to no effort to overcome the shortcomings and opacity in their reports. SSMIs must adhere to international best practices (discussed below) and make incremental attempts to tailor their compliance reports to further transparency in platform governance and content moderation.
  2. Adhere to the Santa Clara Principles: The SSMIs must incorporate the Santa Clara Principles on Transparency and Accountability in Content Moderation in letter and spirit. The operational principles of version 2.0 focus on numbers, notice and appeal. The first part, on ‘numbers’, suggests how data can be segregated by the category of rule violated, provides for special reporting requirements for decisions made with the involvement of state actors, explains how to report on the flags received, and sets out parameters for increasing transparency around the use of automated decision-making.

What can the Government do?

Transparency by SSMIs in the compliance reports will enable the government to collect and analyse India-specific data which would further enable well-rounded regulation. The Rules must be suitably amended to achieve transparency in a systematic manner. This can be achieved by prescribing a specific format and standard for reporting. The Santa Clara Principles can be used as a starting point in this regard.

Stay tuned! We will be back next month with an analysis of the next set of reports.

Important Documents

  1. WhatsApp’s IT Rules compliance report for October 2021. (link)
  2. Google’s IT Rules compliance report for October 2021. (link)
  3. Facebook’s and Instagram’s IT Rules compliance report for October 2021. (link)
  4. Twitter’s compliance report for October 2021. (link)
  5. ShareChat’s IT Rules compliance report for October 2021. (link)
  6. LinkedIn’s Government Requests Report for October 2021. (link)
  7. Snap’s compliance report for October 2021. (link)
  8. Our analysis of previous compliance reports. (link)

