#SocialMediaComplianceWatch: analysis of Social Media Compliance Reports for the month of January 2022

Google (including YouTube), Facebook, Instagram (both are now under Meta), ShareChat, Snap, Twitter and WhatsApp have released their reports in compliance with Rule 4(1)(d) of the IT Rules 2021 for the month of January, 2022. We have analysed them.

24 March, 2022
10 min read

tl;dr

Google (including YouTube), Facebook, Instagram (both are now under Meta), ShareChat, Snap, Twitter and WhatsApp have released their reports in compliance with Rule 4(1)(d) of the IT Rules 2021 for the month of January 2022. The latest of these, WhatsApp’s report, was published on March 1, 2022. The reports contain similar shortcomings, which exhibit a lack of effort on the part of the social media intermediaries and the government to further transparency and accountability in platform governance. The intermediaries have, yet again, not reported on government requests, have used misleading metrics, and have not disclosed how they use algorithms for proactive monitoring. You can read our analysis of the previous reports here.

Background

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘IT Rules’) require, under Rule 4(1)(d), significant social media intermediaries to publish monthly compliance reports. In these reports, they are required to:

  1. Mention the details of complaints received and actions taken thereon, and
  2. Provide the number of specific communication links or parts of information that the social media platform has removed or disabled access to by proactive monitoring.

In order to understand the impact of the IT Rules on users and the functioning of intermediaries, we examine and analyse the compliance reports published by Google (including YouTube), Facebook, Instagram, ShareChat, Snap, Twitter and WhatsApp, and capture some of the most important information for you below.

What is new this time?

We have been analysing the compliance reports published by Google (including YouTube), Facebook, Instagram, WhatsApp and Twitter since May 2021. We have found that these social media intermediaries have, barring a few minor changes, consistently followed the same format for reporting every month. Failure to raise the standards of reporting, despite the growing call for transparency by both state and non-state actors around the world, shows a lack of effort and/or interest on their part to further transparency in India.

For January 2022, Twitter has introduced a few new initiatives, projects, and updates to improve the ‘collective health’ of its platform. These measures include: (i) releasing a Digital Safety Playbook (in English and Hindi) to help users have a safer experience on Twitter and (ii) a feature that allows recording of Twitter Spaces.

We also analysed the reports published by the Digital Publisher Content Grievances Council (DPCGC) for the months of September to December 2021 this time. DPCGC is an independent self-regulatory body for online curated content providers (OCCPs). It reported only 1 grievance/appeal related to the Code of Ethics under Part III of the IT Rules, in the month of September 2021, which was not upheld by the DPCGC. It may be noted that the DPCGC’s grievance redressal mechanism falls under Part III of the IT Rules, whose operation was stayed by an order of the Bombay High Court on August 14, 2021.

Revelations made in the data on proactive monitoring

Previously, in our post, we highlighted that Facebook and Instagram have not changed the metrics adopted for reporting proactive takedowns, despite the disclosures made by Frances Haugen. In their reports, Facebook and Instagram adopt the metrics of (i) ‘content actioned’, which measures the number of pieces of content (such as posts, photos, videos or comments) that they take action on for going against their standards and guidelines; and (ii) ‘proactive rate’, which refers to the percentage of ‘content actioned’ that they detected proactively, before any user reported the same. This metric is misleading because the proactive rate only gives a percentage of the content on which action was taken, and excludes all content on Facebook (which may otherwise be an area of concern) on which no action was taken. We have written more about this in our previous post.
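To see concretely how this framing can flatter a platform, here is a minimal sketch in Python, using entirely hypothetical numbers. Because the proactive rate is computed only over content that was actioned, violating content that is never actioned simply drops out of the calculation.

```python
# Minimal sketch of the 'proactive rate' metric, with hypothetical numbers.
# The rate is a share of *actioned* content only; violating content that is
# never actioned does not appear in the denominator at all.

def proactive_rate(actioned_proactively: int, actioned_total: int) -> float:
    """Percentage of actioned content detected before any user report."""
    return 100 * actioned_proactively / actioned_total

# Hypothetical month: 10,00,000 violating posts exist on the platform, but
# only 1,00,000 are ever actioned; 84,500 of those were caught proactively.
actioned_total = 100_000
actioned_proactively = 84_500
violating_estimate = 1_000_000  # unknown in practice, and never reported

print(f"Proactive rate: {proactive_rate(actioned_proactively, actioned_total):.1f}%")
# Prints 84.5%, even though 90% of the (hypothetical) violating content
# was never touched at all.
```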

As per the metrics provided, the proactive rate for actioning content for bullying and harassment witnessed a significant increase, to 84.5% in January 2022 from 57.4% in December 2021. This figure, however, continues to be low compared to the 11 other issues where the rate is more than 97%.

Relatedly, the proactive rate for actioning content for reasons of hate speech reduced to 81.6% in January 2022. This is a decline from the previous months, when the rate stood at 87.7%, 91.9%, 95.9%, 96.5%, 97.7%, 97.2% and 96.4%.

Data on proactive monitoring for the other social media intermediaries is as follows:

  • Google’s proactive removal actions, through automated detection, slightly decreased to 4,01,374 in January 2022 from 4,05,911 in December 2021. The figures stood at 3,75,468 in November 2021, 3,84,509 in October, 4,50,246 in September and 6,51,933 in August of 2021.
  • Twitter suspended 37,466 accounts on grounds of Child Sexual Exploitation, non-consensual nudity, and similar content; and 4,709 for promotion of terrorism. These numbers stood at 37,310 and 2,880, respectively, in December 2021.
  • WhatsApp banned 18,58,000 accounts in January 2022, a decrease from 20,79,000 in December 2021. The figures stood at 20,69,000 in October 2021, 22,09,000 in September, 20,70,000 in August, 30,27,000 between June 16 and July 31, 2021, and 20,11,000 between May 15 and June 15, 2021. That is more than 13 million accounts! These bans are enforced as a result of WhatsApp’s abuse detection process, which consists of three stages: detection at registration, during messaging, and in response to negative feedback, which it receives in the form of user reports and blocks (a schematic sketch of this pipeline follows the list).
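As a rough illustration of the three-stage structure WhatsApp describes in its white paper, here is a minimal sketch in Python. It is emphatically not WhatsApp’s implementation, which is not public: the function names, signals and threshold are all hypothetical stand-ins.

```python
# Schematic sketch of a three-stage abuse detection pipeline, as described in
# WhatsApp's white paper: at registration, during messaging, and in response
# to negative feedback (user reports and blocks). All names, heuristics and
# thresholds here are hypothetical; the real classifiers are not public.

from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    user_reports: int = 0     # negative feedback: reports filed against the account
    blocks_received: int = 0  # negative feedback: users who blocked the account

def flagged_at_registration(account: Account) -> bool:
    """Stage 1: checks at sign-up, e.g. bulk registrations (hypothetical)."""
    return False  # placeholder heuristic

def flagged_while_messaging(account: Account) -> bool:
    """Stage 2: behavioural checks, e.g. high-volume unsolicited messaging (hypothetical)."""
    return False  # placeholder heuristic

def flagged_by_negative_feedback(account: Account) -> bool:
    """Stage 3: user reports and blocks; this threshold is made up."""
    return account.user_reports + account.blocks_received > 10

def should_ban(account: Account) -> bool:
    """An account is banned if any of the three stages flags it."""
    return (flagged_at_registration(account)
            or flagged_while_messaging(account)
            or flagged_by_negative_feedback(account))
```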

What does the data on the complaints received reveal?

In the case of Google, 97.6% of the complaints received, and 99.9% of the content removal actions taken, related to copyright and trademark infringement. This is around the same as in the last report, where almost 100% of such content was taken down. We have written about the trend of ‘complaint bombing’ of content for copyright infringement on YouTube to suppress dissent and criticism here. Other reasons for content removal included court orders (31), graphic sexual content (4), impersonation (1) and circumvention (1). Content removals based on court orders slightly decreased (31 in January 2022 against 37 in December 2021) after an increase in November 2021 (56). The figure stood at 49 in October 2021, and at 10, 12, 4, 6 and nil in the preceding months, going back to the addition of this parameter to the report in May 2021.

For Twitter, the largest number of complaints received related to abuse/harassment (38), followed by sensitive adult content (27), impersonation (25), hateful conduct (20), defamation (11) and I.P. related infringement (11). The largest number of URLs actioned related to abuse and harassment (179), I.P. related infringement (129) and sensitive adult content (83). The number of complaints received for hateful conduct increased to 20 in January 2022, from 12 in December 2021 and 13 in November 2021; the figure had earlier risen to 25 in October 2021, from 6, 0, 12 and 0 in the preceding months.

What continues to be astounding is that Twitter received zero complaints for content promoting suicide and for child sexual exploitation content. It is important to note here that India is a country with 23.60 million Twitter users. The small number of monthly complaints received on such prominent issues may indicate two possibilities: (i) the grievance redressal system is turning out to be ineffective, or (ii) Twitter is able to take down most of such content proactively. Unless there are disclosures claiming the contrary, or audits are conducted into the platform governance operations of these social media intermediaries, there is no easy way to know which possibility weighs more.

For WhatsApp, 495 reports were received in total, out of which WhatsApp took action in 24 cases, all of which related to ban appeals, i.e. appeals against the banning of accounts. This number offers an interesting contrast with the number of accounts that WhatsApp banned proactively on the basis of its own abuse detection process: 18,58,000.

Reports by ShareChat, LinkedIn and Snap Inc.

ShareChat, in its report for January 2022, unlike the other social media intermediaries, provides the number of requests from law enforcement authorities and investigating agencies. It received 19 such requests in January 2022, and provided user data in response to 17 of them. Content was also taken down in 5 of these cases for violation of ShareChat’s community guidelines.

ShareChat received 61,88,644 user complaints (as against 56,18,870 last month), which have been segregated into 22 reasons in the report. ShareChat takes two kinds of removal actions: takedowns and bans. As per the report, takedowns are either proactive (on the basis of community guidelines and other policy standard violations) or based on user complaints. Proactive takedowns/deletions for January 2022 included chatroom deletions (1,363), copyright takedowns (78,034), comment deletions (97,878), user generated sexual content (2,32,792), and other user generated content (38,46,272).

ShareChat imposes three kinds of bans: (i) a user generated content ban, where the user is unable to post any content on the platform for the specified ban duration; (ii) an edit profile ban, where the user is unable to edit any profile attributes for the specified time-period; and (iii) a comment ban, where the user is banned from commenting on any post on the platform for the specified ban duration. The duration of these bans can be 1 day, 7 days, 30 days or 260 days. In cases of repeated breach of the guidelines, user accounts are removed through a 360-day ban, which amounts to permanent termination. As a result, 13,863 accounts were permanently terminated in December 2021. (A small code summary of this taxonomy follows below.)
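For readers who find the taxonomy easier to scan as code, here is a small illustrative summary in Python of the ban types and durations named in the report; the representation, including all the names, is ours and not ShareChat’s.

```python
# Illustration of the ban taxonomy described in ShareChat's report. The
# categories and durations come from the report; the representation is ours.

from enum import Enum

class BanType(Enum):
    USER_GENERATED_CONTENT = "cannot post any content for the ban duration"
    EDIT_PROFILE = "cannot edit profile attributes for the ban duration"
    COMMENT = "cannot comment on any post for the ban duration"

BAN_DURATIONS_DAYS = (1, 7, 30, 260)  # ban durations listed in the report
PERMANENT_REMOVAL_DAYS = 360          # repeated breaches: permanent termination
```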

Snap Inc. received 63,764 content and account reports through Snap’s in-app reporting mechanism in January 2022. Of these, content was enforced against in 13,468 cases, and 8,325 unique accounts were enforced against. Most reports continued to relate to sexually explicit content (26,837), followed by impersonation (19,587), spam (5,481), threat/violence/harm (4,810), harassment and bullying (3,878), regulated goods (2,526) and hate speech (645).

LinkedIn’s transparency reports contain global summaries of its community report and government request report. With respect to India, LinkedIn received 30 requests for member data from the government in 2021.

Existing issues with the compliance reports

The following issues undermine the objective of transparency sought to be achieved by these reports since their inception, and continue to persist in the reports for January 2022:

  1. Algorithms for proactive takedown: Social media intermediaries are opaque about the processes/algorithms they follow for proactive takedown of content. Only WhatsApp has explained how it proactively takes down content, by releasing a white paper which discusses its abuse detection process in detail. The lack of transparency about human intervention in monitoring the kind of content that is taken down continues to be a concern. The IT Rules provide that social media intermediaries shall implement mechanisms for appropriate human oversight of the measures deployed for proactive monitoring, including a periodic review of any automated tools. None of the social media intermediaries has reported on such periodic review.
  2. Lack of uniformity: There is a lack of uniformity in reporting by the major social media intermediaries. Each intermediary has adopted its own format, and provided different policy areas or issues on which it takes down content. The lack of uniformity is evidenced by the following: (i) the IT Rules were issued following a call for attention to the “misuse of social media platforms and spreading of fake news”, but there seems to be no data disclosure on content takedown for fake news by any social media intermediary other than Twitter; (ii) Google and WhatsApp have not segregated the proactive action taken into different kinds of issues, but have only provided the total number of proactive actions taken by them; (iii) Twitter has identified only 2 broad issues for proactive takedowns, as opposed to the 13 and 12 issues identified by Facebook and Instagram, respectively (the categories of child endangerment, and violence and incitement, were added in October 2021). Lack of uniformity makes it difficult for the government to understand the different kinds of concerns (as well as their extent) associated with the use of social media by Indian users.
  3. No disclosure of government removal requests: Even though the IT Rules do not mandate disclosure of the number of content removal requests made by the government, in order to truly advance transparency in the digital lives of Indians, it is imperative that social media intermediaries disclose, in their compliance reports, issue-wise government requests for content removal on their platforms.

How can the Social Media Intermediaries improve their reporting in India?

The Intermediaries can take the following steps while submitting their compliance reports under Rule 4(1)(d) of the IT Rules:

  1. Change the reporting formats: The social media intermediaries should endeavour to be truly transparent in their compliance reports. They have been following a cut-copy-paste format from month to month, showing little to no effort to overcome the shortcomings and opacity in their reports. They must adhere to international best practices (discussed below) and make incremental attempts to tailor their compliance reports to further transparency in platform governance and content moderation.
  2. Santa Clara Principles must be adhered to: The social media intermediaries must incorporate the Santa Clara Principles on Transparency and Accountability in Content Moderation in letter and spirit. The operational principles of version 2.0 focus on numbers, notice and appeal. The first part, on ‘numbers’, suggests how data can be segregated by the category of rule violated, provides for special reporting requirements for decisions made with the involvement of state actors, describes how to report on the flags received, and sets out parameters for increasing transparency around the use of automated decision-making.

What more can the Government do?

Transparency by social media intermediaries in the compliance reports will enable the government to collect and analyse India-specific data which would further enable well-rounded regulation. For this, complete disclosure by the social media intermediaries is imperative. The Rules must be suitably amended to achieve transparency in a systematic manner. This can be achieved by prescribing a specific format and standard for reporting. The Santa Clara Principles can be used as a starting point in this regard.

We will be back next month with a fresh analysis of the reports. Stay tuned.

Important Documents

  1. WhatsApp’s IT Rules compliance report for Jan 2022 published on Mar 1, 2022. (link)
  2. Google’s IT Rules compliance report for Jan 2022. (link)
  3. Facebook’s and Instagram’s IT Rules compliance report for Jan 2022, published on Feb 28, 2022. (link)
  4. Twitter’s compliance report for Jan 2022 published in Feb 2022. (link)
  5. ShareChat’s IT Rules compliance report for Jan 2022. (link)
  6. LinkedIn’s Government Requests Report for 2021. (link)
  7. Snap’s compliance report for Jan 2022. (link)
  8. Our analysis of previous compliance reports. (link)
