Lessons from the International Debate on Facial Recognition

We trace the international debate on the use of facial recognition technology and recommend imposing a three-year moratorium in India.

25 February, 2020
5 min read

Tl;dr

In our previous posts in this series on Facial Recognition Technology (FRT), we mapped FRT projects in different states in India and highlighted the accuracy and privacy concerns that arise from the use of this technology. In this blogpost, we trace the debate on FRT usage and conclude by proposing a moratorium on these projects.

Spike in Facial Recognition Technology Usage

FRT has become increasingly ubiquitous in people's daily lives, from China's surveillance state to the iPhones (or Redmi phones) with a face unlock feature in our pockets. The previous posts have detailed how India is steadily warming up to the use of this technology for a plethora of functions. However, it is essential to clearly articulate how the data collected through these projects will be used and safeguarded. This post will look at the solutions that some jurisdictions have tried to introduce and make the case that a moratorium is best suited for India.

Tracing the Debate

Countries such as China and Russia have seen legal challenges to FRT fail, with courts in both instances holding that there was no breach of privacy. Hence, we must turn to countries which have taken a more proactive stance on dealing with this technology by envisaging diverse forms of regulation, such as:

  1. Complete bans,
  2. Regulations applicable to public authorities or private players, and
  3. Moratoriums.

Bans  

The US city of Somerville, MA has banned city officials, departments, and people contracted by the city from employing this technology. The reasons it has given are as follows:

  1. Women, young people and people of colour have a higher risk of ‘false positive’ identifications;
  2. Databases to which FRT is applied are plagued by racial biases, which in turn generate copycat biases in face surveillance data; and
  3. The public use of face surveillance can chill the exercise of constitutionally protected free speech.

Other places in the US have taken a similar stance and imposed complete bans on the use of FRT by public officials (San Francisco, Cambridge, Oakland) or in body-worn cameras used by law enforcement officers (in Oregon). However, a well-executed ban requires thought and deliberation. For instance, San Francisco's initial ban had to be amended to allow employees to use FaceID on their government-issued iPhones, a use that did not seem to have been adequately considered beforehand.

Clare Garvie of the Georgetown Law Center for Privacy and Technology has stressed the need to first define what is meant by 'facial recognition'. Does it cover only the matching of facial images, or does it extend to anonymised sentiment analysis and other current uses? A robust regulation would also need to take into account the global nature of the technology industry: face-recognition algorithms used in one country may be made by a company from another. Does a ban cover only the purchase of the technology for use within a country's borders, or does it also extend to the import of FRT from abroad? She concludes that there is neither the political will nor the global capacity to entirely prevent the use or development of FRT around the world.

Regulations

Concurrently, some countries have embraced the use of FRT and are devising methods to regulate it. Last year, when the South Wales Police's use of FRT was challenged in the High Court of Justice, the Court did not find it violative of the UK's data protection standards or other relevant laws. Moreover, it held that the use of this technology balanced the interests of the community (safety) and the individual (privacy), and fell within the ambit of existing laws. On the basis of this case, the UK's Information Commissioner has recommended cogently laying down how these laws govern the use of this technology.

The US state of Illinois' Biometric Information Privacy Act (BIPA) lays down strict standards applicable to private entities. It requires written consent before collecting someone's biometric information, which includes face geometry. Biometric data is particularly sensitive because it is unique to every individual: passwords and social security numbers can be changed in the event of a breach, but a face cannot. Under BIPA, individuals can sue anyone violating their privacy rights even absent a financial injury resulting from the violation. Facebook has been held liable under BIPA for illegally collecting and storing facial recognition data for its 'Tag Suggestions' feature.

However, exceptions exist. A private entity has to disclose the collected biometric data if required under state, federal, or municipal law, or pursuant to a warrant or subpoena. The State of Washington, US - home of Big Tech companies Amazon and Microsoft - has also proposed a bill that would regulate private entities using FRT, with similar exceptions, i.e., disclosures to government authorities where permitted under existing laws. For those looking to implement similar laws, the principal question is whether there are enough existing safeguards in place to successfully protect people's data. As we have previously noted, India cannot answer this question in the affirmative since the Personal Data Protection Bill (PDP Bill) has still not been enacted.

Moratoriums

Civil liberties organisations advocate halting the use of FRT until further debate. Realistically speaking, it is prudent to recognise the widespread adoption of FRT and begin contemplating how to address its worst tendencies. In its recently published white paper on the use of Artificial Intelligence, the European Union concluded by pressing the need for a sustained conversation on how to adequately regulate the use of this technology, and it invites comments from various stakeholders so that standards may be laid down.

Similarly, the US Congress has been under pressure from activists and concerned citizens to take action on FRT regulation. Consequently, two US Senators have recently put forth legislation which proposes a moratorium while a Commission - comprising law enforcement and immigration enforcement officials, privacy and technology experts, and the communities most negatively impacted by the use of FRT - prepares a report with its recommendations to Congress. Congress must then pass a law to regulate FRT within a specified timeframe.

So what should India do?

Based on the observations in our previous post, FRT systems in India need a great deal of fine-tuning before they can be rolled out properly. Further, given India's experience with Aadhaar, it would be advisable to hit pause and contemplate the serious challenges that come with such an invasive technology. Retrospectively regulating an already introduced measure would be extremely difficult and would mean setting ourselves up for failure. Taking the lead in flagging this issue, we propose a three-year moratorium on the ongoing FRT projects in India. We also hope to have more clarity in this regard once the Personal Data Protection Bill is passed.

In the meanwhile, a thoughtful, proactive dialogue is required among government, industry, civil society, human rights groups, academia and similar stakeholders, one that can articulate a policy-making/legislative pathway on the issue. The conversation must be nuanced enough to differentiate between the different uses of the technology, which span governance, monitoring, law enforcement, commerce, R&D, academia, healthcare, and many more unforeseeable domains. However, as we have previously observed in this series, the Indian Government and some members of industry are essentially running the risk of placing the proverbial cart before the horse. The need for a conversation is urgent, and we will use these global learnings to inform our own position moving forward.

(This post has been authored by Ashi Mehta, a legal intern at IFF, and reviewed by IFF staffers, Sidharth and Devdutta.)

Important Documents

  1. On Artificial Intelligence - A European approach to excellence and trust (link)
  2. Ethical Use of Facial Recognition Act (link)
  3. Introduction to Facial Recognition Projects in India (link)
  4. Problems with Facial Recognition Technology Operating in a Legal Vacuum (link)

“Oops! I did it again” -Indian Govt., circa 2020

Want to prevent the Government from deploying yet another tool to keep tabs on you? Help IFF by becoming a member today!  


