Google (including YouTube), Facebook, Instagram, WhatsApp and Twitter have released their reports for the month of August in compliance with Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The reports highlight some interesting facts and figures and, once again, a massive number of automated takedowns. You can read our analysis of the previous reports here and here, where we discussed the issue of non-publication of data on governmental removal orders.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules) require, under Rule 4(1)(d), significant social media intermediaries to publish monthly compliance reports. In these reports, they are required to:
- Mention the details of complaints received and actions taken thereon, and
- Provide the number of specific communication links or parts of information that the social media platform has removed or disabled access to as a result of proactive monitoring.
Google (including YouTube), Facebook, Instagram, WhatsApp and Twitter recently released their compliance reports for the month of August. To understand the impact of the IT Rules on users and the functioning of intermediaries, we examine these reports and capture some of the most important information for you below.
How is proactive monitoring being done?
The reports lack true transparency, as the significant social media intermediaries (SSMIs) have been opaque about the processes/algorithms they follow for proactive takedown of content. Facebook and Instagram state that they use “machine learning technology that automatically identifies content” that might violate their standards, Google uses an “automated detection process”, and Twitter claims to use “a combination of technology and other purpose-built internal proprietary tools”. WhatsApp has released a white paper discussing its abuse detection process in detail and disclosing how it uses machine learning. While WhatsApp has made an attempt to explain how it proactively takes down content, the lack of human oversight over the kind of content that is taken down remains problematic.
What does the data on proactive monitoring reveal?
As per their reports, Facebook and Instagram adopt the metrics of (i) ‘content actioned’, which measures the number of pieces of content (such as posts, photos, videos or comments) that they take action on for going against their standards and guidelines, and (ii) ‘proactive rate’, which refers to the percentage of ‘content actioned’ that they detected proactively before any user reported it. The proactive rate for actioning of content for bullying and harassment stands at 50.9%, an improvement from the previous month’s 42.3%. This figure is particularly low compared to 8 other issues (including hate speech and violent content), where the rate is more than 95%. This means that, in this category, a far larger share of actioned content was first flagged by users, and that Facebook is consistently failing to curb the menace of bullying and harassment.
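For clarity, the ‘proactive rate’ metric boils down to a simple percentage. A minimal sketch of the calculation is below; the figures used are hypothetical illustrations, not numbers taken from any platform’s report:

```python
def proactive_rate(actioned_proactively: int, actioned_total: int) -> float:
    """Percentage of actioned content detected before any user reported it."""
    if actioned_total == 0:
        return 0.0
    return 100 * actioned_proactively / actioned_total

# Hypothetical example: 509 of 1,000 actioned pieces of content
# were flagged by automated systems before any user report.
print(round(proactive_rate(509, 1000), 1))  # 50.9
```

A low proactive rate therefore does not mean less content was actioned overall; it means a larger share of it reached the platform’s attention only through user reports.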
Be that as it may, as per the documents leaked by Frances Haugen, Facebook has misled the public with “transparency” reports boasting proactive removal of over 90% of identified hate speech, when internal records showed that “as little as 3-5% of hate” speech was actually removed. In one of the documents, Facebook admitted, “we’re deleting less than 5% of all the hate speech posted to Facebook. This is actually an optimistic estimate”, and that “the mechanics of our platform are not neutral.”
Data on proactive monitoring for the other SSMIs is as follows:
- Google took 6,51,933 removal actions as a result of automated detection in August, continuing an upward trend from 5,76,892 in July and 5,26,866 in June.
- Twitter suspended 26,726 accounts for Child Sexual Exploitation, non-consensual nudity, and similar context; and 4,648 accounts for promotion of terrorism. In the previous reporting period, these numbers stood at 26,250 and 5,387, respectively.
- WhatsApp banned 20,70,000 accounts in August, as opposed to 30,27,000 between June 16 and July 31, 2021, and 20,11,000 between May 15 and June 15, 2021.
How are user complaints/grievances received and handled by SSMIs?
As per the reports, Facebook, Google, Instagram, Twitter and WhatsApp all receive grievances via email or post addressed to their grievance officers. Facebook (and Instagram) and Google also state that complaints can be made through contact forms on their help centres and through the grievance officer (India) webform, respectively. Twitter users can also report directly from the tweet or the account in question when they are logged in. WhatsApp can be contacted through the ‘contact us’ option in the app.
What does the data on the complaints received reveal?
In the case of Google, 97.9% of the complaints received, and 99.9% of the content removal actions taken, related to copyright and trademark infringement. This mirrors the last report, where around 99% of such content was taken down for the same reasons. Other reasons for content removal included court orders (4), circumvention (3), counterfeit (1), graphic sexual content (1), impersonation (1) and other legal reasons (1). A recent trend of ‘complaint bombing’ content for copyright infringement on YouTube, which is being misused to suppress dissent and criticism, is being seen in India. We have written more about it here.
For Twitter, the maximum number of complaints received related to abuse/harassment, and the maximum number of URLs actioned related to impersonation. The second- and third-highest numbers of complaints were received for impersonation and IP-related infringement. What is striking is that in a country with 22.1 million Twitter users, zero complaints were received for hateful conduct, illegal activities, promoting suicide or self-harm, and terrorism/violent extremism, and only 1 complaint was received for child sexual exploitation, against which no action was taken. This could be attributed to a grievance redressal system that is proving ineffective.
For WhatsApp, 420 reports were received in total, out of which action was taken in 41 cases, all of which related to ban appeals, i.e., appeals against the banning of accounts.
What are the major policy areas/issues under which content is taken down?
There is a lack of consistency in how much detail SSMIs provide in their compliance reports. Each SSMI has adopted its own format and identified different policy areas or issues on which it takes down content.
When it comes to taking proactive action, Facebook has segregated the action taken into 10 policy areas, 6 of which are shown in graph 1. The other 4 issues are drugs, firearms, spam, and violent and graphic content. Facebook and Instagram share the same content policies, and thus Instagram has identified the same issues except for spam. Twitter has identified only 2 issues, viz., Child Sexual Exploitation, non-consensual nudity, and similar context; and promotion of terrorism. Google and WhatsApp, on the other hand, have not segregated the proactive actions taken into different kinds of issues, and have only provided the total number of proactive actions taken by them.
When it comes to grievances received and action taken thereon, Facebook and Instagram have identified 10 and 7 categories of complaints received, as shown in graph 3 and graph 4, respectively. In an improvement over its first report, Google lists 9 categories of complaints received. Twitter mentions 12, and WhatsApp 5. Google’s report is the only one with a separate category for court orders.
How can the SSMIs improve their reporting in India?
The Santa Clara Principles on Transparency and Accountability in Content Moderation set out three main principles, relating to the reporting of numbers, the issuance of notice, and appeal mechanisms.
The first principle requires that the numbers of posts removed and accounts permanently or temporarily suspended due to violations of content guidelines be segregated by category of rule violated, by format of content at issue (e.g., text, audio, image, video, live stream), by source of flag (e.g., governments, trusted flaggers, users, different types of automated detection), and by locations of flaggers and impacted users (where apparent). According to the second principle, companies should provide notice to each user whose content is taken down or whose account is suspended, stating the reason for the removal or suspension. SSMIs should also provide an explanation of how automated detection is used across each category of content.
While the SSMIs have attempted to adhere to these principles, there is a long way to go. Google, WhatsApp and Twitter, especially, have failed to adhere to the first principle, as they have provided little to no segregation of data for proactive takedowns. The IT Rules were floated following a call for attention to the “misuse of social media platforms and spreading of fake news”, yet no SSMI other than Twitter discloses data on content taken down for fake news. Further, even though the IT Rules do not mandate disclosure of how many content removals were ordered by the government, in order to truly advance transparency in the digital lives of Indians, it is imperative that SSMIs disclose, in their compliance reports, issue-wise government requests for content removal on their platforms.
Stay tuned! We will be back next month with an analysis of the next set of reports.
- WhatsApp’s IT Rules compliance report (link)
- Google’s IT Rules compliance report (link)
- Facebook’s and Instagram’s IT Rules compliance report (link)
- Twitter’s transparency report (link)