Social media giants have released the second round of 'Compliance Reports'. We’ve analysed the data.

tl;dr

Google, Facebook, Instagram, WhatsApp and Twitter have released their second round of reports in compliance with Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the 2021 IT Rules). The reports highlight some interesting facts and figures, including the launch of a first-of-its-kind self-regulation model called the Digital Trust and Safety Partnership (DTSP) by a coalition consisting of Discord, Facebook, Google, Microsoft, Pinterest, Reddit, Shopify, Twitter and Vimeo, and a massive number of automated takedowns.

Background

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the 2021 IT Rules) were notified on February 26, 2021 and came into effect with respect to significant social media intermediaries (SSMIs) on May 25, 2021. The 2021 IT Rules have triggered debates about their applicability and legality ever since they were notified. We have written extensively about the Rules, which are likely to be tabled in the winter session of the seventeenth Lok Sabha. While they remain in force, social media platforms are obligated to comply with their provisions. One such provision, Rule 4(1)(d), requires social media platforms to publish monthly compliance reports.

In these reports, they are required to:

  1. mention the details of complaints received and the actions taken thereon, and
  2. specify the number of specific communication links or parts of information that the social media platform has removed or disabled access to through proactive monitoring.

Some of the big social media companies have recently released their second compliance reports. To understand the impact of the 2021 IT Rules on users and the functioning of intermediaries, we examine these reports to capture some of the most important information for you. You can read our analysis of the first reports here.

Facebook and Instagram take down content in crores

Facebook has submitted a combined compliance report for Facebook and Instagram which highlights complaints received from users in India, and the proactive measures taken by Facebook for the detection and removal of content, for the period from June 16 to July 31, 2021. This report excludes all grievances relating to intellectual property.

Complaints from users in India

Facebook reported receiving 1504 complaints, relating to fake profiles (143), content showing a user in nudity/partial nudity or in a sexual act (80), hacked accounts (474), lost access to a page or group (110), bullying or harassment (146), access to personal data (46), inappropriate or abusive content (128), issues with how Facebook is processing data (62), content where users do not want to be displayed (39), and others (276). Facebook claims to have responded to 100% of the reports and to have provided users with tools to resolve their issues in 1326 cases.

Instagram, similarly, received 265 complaints, relating to fake profiles (61), content showing a user in nudity/partial nudity or in a sexual act (42), hacked accounts (77), bullying or harassment (44), access to personal data (17), inappropriate or abusive content (10), and content where users do not want to be displayed (14). Facebook claims to have responded to 100% of the reports and to have provided users with tools to resolve their issues in 181 cases.
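The category-wise figures quoted above can be cross-checked against the stated totals with a short script. The figures are simply those quoted in the report; the shortened category labels are ours:

```python
# Complaint figures quoted in Facebook's combined compliance report
# (June 16 to July 31, 2021); category labels shortened for brevity.
facebook = {
    "fake profiles": 143, "nudity/sexual act": 80, "hacked accounts": 474,
    "lost page/group access": 110, "bullying/harassment": 146,
    "personal data access": 46, "inappropriate/abusive": 128,
    "data processing issues": 62, "unwanted display": 39, "others": 276,
}
instagram = {
    "fake profiles": 61, "nudity/sexual act": 42, "hacked accounts": 77,
    "bullying/harassment": 44, "personal data access": 17,
    "inappropriate/abusive": 10, "unwanted display": 14,
}

# The category counts add up exactly to the reported totals.
assert sum(facebook.values()) == 1504
assert sum(instagram.values()) == 265

# Share of complaints where users were given self-help tools, as reported.
print(f"Facebook: {1326 / 1504:.0%} resolved with tools")   # 88%
print(f"Instagram: {181 / 265:.0%} resolved with tools")    # 68%
```

Both breakdowns tally exactly, which suggests no complaint was counted under more than one category.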

Proactive Monitoring

Facebook actioned content (which includes removing content or covering it with a warning) relating to 10 issues, viz., adult nudity and sexual activity (26 lakh), bullying and harassment (123.4K), organised hate (94.5K), terrorist propaganda (121.2K), hate speech (324.3K), drugs (30.7K), firearms (4.6K), suicide and self-injury (945.6K), spam (2.56 crore) and violent and graphic content (35 lakh). These figures include proactive action. In each of these categories (except for hate speech, bullying and harassment, and firearms), more than 99% of the content was removed as a result of proactive action by Facebook, which uses machine learning technology to detect such content.

Instagram, similarly, took action against content relating to all the issues mentioned above except for spam, viz., adult nudity and sexual activity (676.1K), bullying and harassment (195.1K), organised hate (5.5K), terrorist propaganda (9.1K), hate speech (56.2K), drugs (11.5K), firearms (0.2K), suicide and self-injury (811.0K), and violent and graphic content (11 lakh). In each of these categories (except for bullying and harassment) more than 85% of the content was removed as a result of proactive action.

All complaints in the above categories that involved legal process (court orders, court decisions, statutory declarations, cease and desist letters, etc.) were escalated for legal review.

Twitter has now appointed a Resident Grievance Officer

As an improvement over last time, Twitter has now appointed a Resident Grievance Officer, as required under the 2021 IT Rules. Twitter also claims to have launched two new initiatives/updates: an expansion of its policy against hateful content to cover both targeted and non-targeted content, and the launch of a first-of-its-kind self-regulation model called the Digital Trust and Safety Partnership (DTSP), along with a coalition consisting of Discord, Facebook, Google, Microsoft, Pinterest, Reddit, Shopify and Vimeo.

The data corresponding to legal requests made for content removal has still not been updated to include any period after December 2020.

Complaints received and actions taken

Twitter’s report covers the period from June 26 to July 25, 2021. It includes data received by the Grievance Officer in the form of complaints from individual users and court orders. It does not contain complaints relating to account verification, account access, or requests for assistance or information regarding a Twitter account or its enforcement policies, which Twitter claims formed the majority of complaints.

The most grievances were received for abuse/harassment (36), followed by misinformation/synthetic media manipulation (28), IP-related infringement (13), defamation (13), and hateful conduct (12). No complaints were received with respect to child sexual exploitation, promotion of suicide/self-harm, or illegal activities. Most URLs against which action was taken corresponded to abuse/harassment, IP-related infringement, and privacy infringement. Additionally, 67 requests appealing account suspensions were received, of which only 24 suspensions were overturned.

Proactive Monitoring

26,250 accounts are reported to have been suspended for child sexual exploitation, non-consensual nudity, and similar content. A further 5,387 accounts were suspended for promotion of terrorism.

WhatsApp bans over 30 lakh accounts

WhatsApp’s report, covering the period from June 16 to July 31, 2021, captures information on the grievances received from its users in India, and the proactive action taken by WhatsApp, through its prevention and detection methods, against accounts violating the laws of India or WhatsApp’s terms of service.

Complaints from users in India

WhatsApp received a total of 594 reports on various issues, viz., account support (137), ban appeal (316), other support (45), product support (64), and safety (32). Action was taken on 74 of these reports, where action denotes either banning an account or restoring a previously banned account as a result of a complaint.
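As with Facebook's figures, WhatsApp's category counts can be tallied against the stated total:

```python
# WhatsApp complaint categories for June 16 to July 31, 2021, as reported.
whatsapp_reports = {
    "account support": 137,
    "ban appeal": 316,
    "other support": 45,
    "product support": 64,
    "safety": 32,
}
assert sum(whatsapp_reports.values()) == 594

# "Actioned" here means an account was banned, or a previously banned
# account was restored, as a result of the complaint.
actioned = 74
print(f"Actioned: {actioned / 594:.1%} of reports")  # 12.5%
```

Notably, more than half of all reports (316 of 594) were appeals against bans, yet action followed in only about one in eight reports overall.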

Proactive monitoring

WhatsApp states that abuse detection operates at three stages of an account’s lifecycle: at registration, during messaging, and in response to negative feedback, which WhatsApp receives in the form of user reports and blocks. ‘Edge cases’ are analysed by a team of analysts, which helps improve the effectiveness of the system over time. Indian accounts are identified by a ‘+91’ phone number. Using this abuse detection approach, WhatsApp banned 30,27,000 accounts in the relevant period.
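WhatsApp does not publish implementation details, but the three checkpoints it describes can be pictured as a simple pipeline. Everything below (the function names, signals, and thresholds) is a purely illustrative sketch of ours, not WhatsApp's actual logic:

```python
from dataclasses import dataclass

# Illustrative sketch of a three-stage abuse-detection lifecycle:
# registration, messaging behaviour, and negative feedback.
# All signals and thresholds here are hypothetical stand-ins.

@dataclass
class Account:
    phone: str
    messages_sent: int = 0
    user_reports: int = 0
    blocks: int = 0

def is_indian(acct: Account) -> bool:
    # The report states Indian accounts are identified by a +91 number.
    return acct.phone.startswith("+91")

def registration_check(acct: Account) -> bool:
    # Stage 1: at registration (toy check: a plausibly formatted number).
    return acct.phone.startswith("+") and acct.phone[1:].isdigit()

def messaging_check(acct: Account, bulk_threshold: int = 1000) -> bool:
    # Stage 2: during messaging (toy signal: bulk/automated sending).
    return acct.messages_sent < bulk_threshold

def feedback_check(acct: Account, report_threshold: int = 10) -> bool:
    # Stage 3: negative feedback (user reports and blocks).
    return acct.user_reports + acct.blocks < report_threshold

def should_ban(acct: Account) -> bool:
    # An account failing any stage is flagged for a ban; in the real
    # system, 'edge cases' would instead go to human analysts.
    return not (registration_check(acct)
                and messaging_check(acct)
                and feedback_check(acct))
```

The point of staging the checks this way is that abusive accounts can be caught before they message anyone at all, which is consistent with WhatsApp describing prevention at registration as the first line of defence.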

Google takes over 95,000 removal actions

Google’s report provides information for the period from July 1 to July 31, 2021 on the number of complaints received, the removal actions taken on those complaints, and the removal actions taken as a result of automated detection mechanisms. While the report carries both the Google and YouTube logos, it does not provide separate information for each.

The data corresponding to government requests made for content removal has still not been updated to include any period after December 2020.

Complaints received and actions taken

Google received a total of 36,934 complaints from users on various issues, viz., copyright (35,678), trademark (471), defamation (259), counterfeit (67), circumvention (20), impersonation (11), pursuant to court order (4), graphic sexual content (2), and other legal issues (422).

A single complaint may specify multiple items and URLs, relating to the same or different pieces of content. Accordingly, Google took 95,680 content removal actions arising out of the 36,934 complaints. Of these, 94,862 removals related to copyright issues.
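The dominance of copyright in Google's numbers is easy to quantify from the figures above:

```python
# Google's complaint categories for July 1 to July 31, 2021, as reported.
google_complaints = {
    "copyright": 35678, "trademark": 471, "defamation": 259,
    "counterfeit": 67, "circumvention": 20, "impersonation": 11,
    "court order": 4, "graphic sexual content": 2, "other legal": 422,
}
assert sum(google_complaints.values()) == 36934

total_removals = 95680
copyright_removals = 94862

copyright_complaint_share = google_complaints["copyright"] / 36934
copyright_removal_share = copyright_removals / total_removals
print(f"Copyright share of complaints: {copyright_complaint_share:.1%}")  # 96.6%
print(f"Copyright share of removals:  {copyright_removal_share:.1%}")     # 99.1%
```

In other words, Google's complaint-driven takedowns in this period were almost entirely intellectual-property matters rather than the speech-related categories the 2021 IT Rules debate tends to focus on.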

Proactive Monitoring

Google claims to use automated detection processes for some of its products to prevent the dissemination of harmful content such as child sexual abuse material and violent extremist content. 576,982 removal actions have been taken as a result of automated detection as per the report.

What can SSMIs include in the report to truly advance transparency?

The SSMIs should keep in mind the Santa Clara Principles on Transparency and Accountability in Content Moderation. These principles outline minimum levels of transparency and accountability, and state:

  1. Companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines.
  2. Companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension.
  3. Companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.

Principles 1 and 2 have been incorporated in the 2021 IT Rules under Rules 4(8)(a), 4(1)(d) and 18(3). However, principle 3 has not been incorporated, as the grievance redressal mechanism specified under the 2021 IT Rules provides only a limited timeline for responding, which makes it difficult for SSMIs to provide an effective appeal mechanism. Further, principle 1 has been elaborated upon to specify how the numbers should be broken down, a breakdown which SSMIs must follow in their compliance reports.

The 2021 IT Rules do not mandate disclosure of how many content removals were ordered by the government. However, in order to truly advance transparency in the digital lives of Indians, it is imperative that SSMIs disclose government requests for content removal on their platforms. Facebook, Google and Twitter have attempted to capture this data, but only up to December 2020.

All SSMIs must publicly disclose the government orders for takedown/information to truly champion transparency for the Indian users of the internet.

Important Documents

  1. Google’s IT Rules compliance report published in July 2021 (link)
  2. Facebook’s and Instagram’s IT Rules compliance report dated August 31, 2021 (link)
  3. WhatsApp’s compliance report dated August 31, 2021 (link)
  4. Twitter’s transparency report published in August 2021 (link)