Instagram Wants to Throttle Politics

Meta is rolling out a feature that limits the recommendation of political content on Instagram and Threads ahead of an important election cycle, with little to no transparency or informed consent.

04 April, 2024

tl;dr

In a slew of blog posts and updates, Meta has announced that they will limit the recommendation of political content on user feeds by their recommendation algorithm, while providing users an option to opt out of the restriction. The setting, which is on by default, limits recommendations of political content from accounts not followed by the user on Instagram and Threads. In this post, we discuss the issues with this ‘feature’ and the adverse effects it may have on an important election cycle. We contend that digital platforms like Meta have an important role to play in democratising access to information for a large number of people, and as such must ensure transparency and obtain informed consent from users who are affected by the prioritisation of advertiser interests over user interests.

Background: What is the feature?

Amidst ongoing controversy surrounding the handling of political content on its platforms, the company formerly known as Facebook has been steadily reducing, or planning to reduce, the recommendation of political content across all of its platforms. In what it has termed a continuation of its existing policy, a setting for political content is being added to the content preferences menu in the Instagram app, which will apply to a user’s linked Threads account as well.

The setting limits the recommendation of political content in users’ feeds automatically, without any explicit in-app notification about the change. By failing to obtain informed consent from users, the feature will reduce the visibility of political content during a massive election cycle that will see 49% of the world’s population vote.

What is the justification?

Meta, in a blog post released on February 09, 2024, has noted that “People have told [them] they want to see less political content” – an ask which they have been implementing on Facebook over “the last few years”. According to them, this approach is now being extended to Instagram and Threads.   

Meta, in their Q4 2020 shareholders’ conference, also noted that “politics has kind of had a way of creeping into everything … a lot of the feedback that [they] see from communities that people don't want that in their experience”. In an attempt to cultivate “healthier” communities, Meta has resorted to reducing the amount of political content a user can interact with on their platforms. This may have significant implications for users’ freedom of expression on their services, and it has the side effect of disincentivising political accounts on the platform by impairing their ability to grow and sustain themselves.

Adam Mosseri, Head of Instagram and Threads, has maintained that there is “no need to get into politics” and has also stated that the goal of the recent update is to “preserve the ability for people to choose to interact with political content, while respecting each person’s appetite for it”. While Meta’s stated intention is to give users who want to see political content in their feeds a choice, it is odd that it has chosen to exercise that intention by reducing the recommendation of political content platform-wide, by default. Meta’s argument that the change preserves users’ ability to keep seeing political content falls flat if users do not know about the change and are therefore unable to exercise their choice.

While the official explanation remains their desire to cultivate a less conflict-ridden service for users, it is worth considering whether the reasoning behind introducing this feature is to attract more advertisers to the platform. Elections present unique challenges for advertisers, as users may see brands through a partisan lens and may expect them to take sides. Digital platforms depend on advertisers for a significant chunk of their revenue, and as such would benefit monetarily from a platform-wide restriction on engagement with political content in an effort to make the platform ‘advertiser-friendly’. Ongoing conflicts such as the Russo-Ukrainian war have also fuelled advertiser discontent, resulting in brands taking proactive action to prevent their advertisements from appearing alongside news or political content.

The curious case of the roll-out

According to Meta, the setting will be rolled out “slowly”, with users in India reporting that they had not seen the update as of April 03, 2024, and with no information on which users or countries will receive the update, or when.

While giving users the option to retain or exclude political content from their recommendations may seem reasonable in theory, the nature of the feature’s release has been befuddling. There has been little-to-no user awareness: a blog post released in February and a statement on Threads have been the only sources of communication about the controversial setting.

The lack of transparency surrounding the release is worrying, as users cannot make an informed decision to proactively opt out of the feature. Empowering users with information about the update would have allowed them to read about the setting, understand its implications, and then make an educated decision. Even so, making the feature opt-out by default presents a set of problems that Meta still needs to address. Accounts disseminating critical updates on events that Meta may deem political could see a steep, sudden decline in their reach (i.e. how many accounts are shown the post) and would thus have less of an incentive to continue posting. Accounts that rely on their Instagram audience for financial support, as many freelance journalists do, would also be disincentivised from continuing to post during an important election season, further fuelling an information divide between more influential and affluent political parties on the one hand, and organisations with fewer resources on the other.

Platform Discretion

Social media platforms operate unilaterally: while they are subject to local laws, they exercise considerable discretion in their day-to-day functioning. Recommendation algorithms are one way in which platforms exercise this discretion, limiting or amplifying the visibility of content for a user. Platforms are free to curate their own services and are bound by their own terms and conditions, which a user agrees to prior to joining the platform. While they have the liberty to exercise editorial control, the introduction of a feature that may stifle political speech right before an important election cycle, without adequately informing users and seeking their consent, raises alarm.

Platforms need to be held accountable to their users for changes that have the ability to interfere with democratic processes.

Choosing to limit the visibility of posts that Meta unilaterally defines as “political” is concerning, as creators may find themselves with no control or say over having their content classified as political. There is little clarity on the methodology relied upon for categorising content as political, as well as on the availability of mechanisms to appeal such decisions.

Defining Political Content

Meta has not yet clarified what content it considers ‘political’ on its platforms, or how it makes that determination. For advertisers, by contrast, Meta’s transparency center provides clear guidelines on how content related to elections or politics is policed. A separate authorisation process exists for advertisers seeking to run ads about political content, with requirements varying by country. These requirements apply to ads made by or on behalf of a political candidate in the context of their candidacy, ads about “social issues”, and ads regulated as “political advertising” under local laws.

While Meta has admitted that AI systems rank political content based on engagement metrics (at least on Facebook), it has signalled a shift away from this approach towards AI systems that rely on personalised signals, such as survey responses and user feedback.
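To make the concern concrete, the sketch below illustrates, in purely hypothetical Python, how a default-on “limit political content” flag could gate a recommender: posts the platform labels as political are silently dropped from recommendations made by unfollowed accounts unless the user has opted out. This is not Meta’s code or algorithm; every class, field, and scoring rule here is an assumption for illustration only.

```python
# Hypothetical sketch of a default-on "limit political content" setting.
# NOT Meta's implementation; all names, fields and logic are illustrative.

from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    engagement_score: float   # e.g. a likes/shares-based ranking signal
    is_political: bool        # label assigned unilaterally by the platform


@dataclass
class User:
    follows: set = field(default_factory=set)
    # Default-on restriction: the user must actively opt out to change it.
    limit_political_from_unfollowed: bool = True


def recommend(user: User, candidates: list[Post], k: int = 10) -> list[Post]:
    """Rank candidate posts, dropping political posts from unfollowed
    accounts unless the user has opted out of the restriction."""
    eligible = []
    for post in candidates:
        followed = post.author in user.follows
        if post.is_political and not followed and user.limit_political_from_unfollowed:
            # Silently excluded: the user is never notified this happened.
            continue
        eligible.append(post)
    # Engagement-based ranking, the approach Meta says it is moving away
    # from in favour of personalised signals such as survey responses.
    return sorted(eligible, key=lambda p: p.engagement_score, reverse=True)[:k]


if __name__ == "__main__":
    user = User(follows={"friend_account"})
    feed = recommend(user, [
        Post("news_outlet", engagement_score=0.9, is_political=True),
        Post("friend_account", engagement_score=0.4, is_political=True),
        Post("meme_page", engagement_score=0.7, is_political=False),
    ])
    # With the default setting, the high-engagement political post from the
    # unfollowed "news_outlet" never reaches the feed.
    print([p.author for p in feed])
```

Because the flag defaults to on, the filtering happens silently for every user who never visits the settings menu, which is precisely the informed-consent gap discussed above.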

Negating Negativity

Meta personnel have reiterated their belief in eliminating “negativity” from their platforms and providing users with “healthier communities” (here, here and here). While some may view this as well-intentioned, one may wonder whether the move stems from a need to cultivate a platform on which advertisers can find a safe haven, free from being viewed through a partisan lens. Catering to advertisers also has the unintended side effect of creating information asymmetry amongst users, given that social media platforms’ utility as information dissemination tools cannot be discounted.

Companies utilising advertising-based revenue models tend to follow similar practices. YouTube has come under fire numerous times for demonetising speech it did not consider advertiser-friendly, with little-to-no grievance redressal mechanisms for creators, resulting in what users termed an “Adpocalypse”. Creators found their content demonetised in favour of more traditional, advertiser-safe, longer content that YouTube could safely suggest via its recommendation feeds.

Meta seems to be going down the same road. They have also come under fire for ‘shadowbanning’, a practice wherein content, often political in nature, is silently suppressed, significantly decreasing its visibility on their platforms. According to Pew Research, half of American adults get their news from social media platforms like Facebook, YouTube and Instagram. With faith in legacy news media outlets waning, social media platforms need to invest more time and resources into researching growing instances of false information and hate speech on their platforms, especially before elections. Several Indian digital news startups have also found success due to the rapid growth in Internet usage and digital advertising spend.

De-platforming content that is instrumental for individuals or political organisations who lack the influence and resources to reach voters on the ground is especially problematic, as bigger political parties will be able to circumvent the limitation on political content by deploying additional outreach strategies with their expansive resources. The restriction therefore further skews the already uneven playing field in many electoral democracies.

This post was drafted by Digital Literacy Intern Anjney Mital and edited and reviewed by IFF staffers.

Important documents:

  1. Meta’s update about the feature on Instagram (link)
  2. Meta’s transparency center blog on its approach to political content (link)
  3. Meta Investors Conference Transcript (link)
  4. Global Witness & IFF’s joint investigation on YouTube & Koo’s responses to hate speech (link)
