Summary: A Global Witness and IFF report documenting YouTube and Koo’s ineffective response to flagged hate speech

A joint report by Global Witness and the Internet Freedom Foundation investigates how YouTube and the Indian microblogging platform Koo respond to misogynistic hate speech that violates their platform policies, reported in both India and the United States.

23 February, 2024


The report, titled 'Letting Hate Flourish: YouTube and Koo's lax response to the reporting of hate speech against women in India and the US', was launched on February 01, 2024. The joint investigation, which spanned several months, found that YouTube and Koo failed to act effectively on misogynistic hate speech reported on their platforms in India and the United States of America (USA). Such a lax response paints a worrying picture for the online safety of women and marginalised groups, and the prevalence of such a toxic information ecosystem in a critical election year is a cause for concern.


Platforms' failure to curb the widespread presence and rampant proliferation of online hate speech is well known and documented. When questioned about their ineffectiveness in tackling hate speech, companies often point to the reporting tools available to users for flagging harmful content. Through this investigation, we wished to assess how well these reporting mechanisms work and how promptly platforms review and act against content that violates their policies.

For the investigation, we started by identifying gendered hateful content on YouTube and the Indian microblogging site Koo. We familiarised ourselves with the companies' respective policies and ensured that the identified content violated their hate speech policies. We chose to focus on hate speech against women based on sex/gender, since evidence suggests that the impact of online harassment falls more heavily on women than on men (notably for journalists and politicians), and that online attacks against women are more often based on gender.

We identified real examples of gendered hate speech in English and in Hindi that were live on the platforms but violated the companies' hate speech policies. (The identification phase included the use of a "slur list" created as part of the Uli project.) We then reported the content and details of the violation using each platform's reporting tool, totalling 79 videos on YouTube and 23 posts on Koo. We conducted the research in India and the USA because both are large global democracies holding national elections in 2024, and because online hate speech and disinformation have already led to the targeting and murder of religious minorities, journalists, and election officials in both countries.

We would like to extend our gratitude and appreciation to external researchers Fatima Tahir and Alex Buckey for assisting us in the identification and reporting process of the investigation. 


Prompt action on the part of platforms is essential for their hate speech policies to be effective. However, a full month after we reported the hate speech, the results showed an alarming lack of response from YouTube and inadequate action from Koo. YouTube did not remove any of the 79 reported videos: the status of only one video changed, to require viewers to be over 18, and every video remained live on the platform. YouTube claims it "reviews reported videos 24 hours a day, 7 days a week", but after a month there was little evidence that any review had taken place. All the videos are still live on the platform.

In contrast to YouTube, Koo was quicker to respond to reported content, completing its review within 24 hours for the majority of posts. It notified users in the app itself, acknowledging the review and removal of six of the 23 reported posts and conveying that it had reviewed and taken no action on 15 others. It failed to provide any response for the remaining two posts. In response to a request for comment from WIRED, Koo co-founder Mayank Bidawatka said, "Out of the 23 Koos, we have deleted 10 which violated our guidelines and taken action against the remaining. Actions taken ranged from reduced visibility of the post [or] deletion, to account level actions such as blacklisting for an account exhibiting repeated problematic behaviour."

While YouTube and Koo operate at very different scales and host different forms of content, Koo's relative responsiveness suggests a more functional reporting mechanism. Broadly, however, both platforms failed to deal with the reported hateful content against women, some of which also included Islamophobic, racist, and casteist hate.

Heightened tensions in election year

The lax responses of YouTube and Koo to the reporting of misogynistic hate speech in this investigation show that both platforms are failing to review and act on dangerous content. Instead of generating revenue from hate, YouTube, Koo, and other platforms should uphold their policies to protect the safety of their users, and spaces for civic discourse and democratic engagement more broadly. Their current practices are demonstrably inadequate and leave the door open for hate and disinformation. In a major election year, in which India, the USA, and over sixty other countries will hold national elections, social media corporations must learn from previous mistakes: properly resource content moderation, stop monetising hate, disincentivise polarising content, and uphold election integrity.

Important documents

  1. Report on ‘Letting hate speech flourish’ (link)
  2. Press release by Global Witness on the report (link)
  3. IFF’s press release on the report (link)
  4. Wired exclusive coverage on the report (link)

This post was updated on February 26, 2024.
