From investigation to conviction: How does the Police use FRT?

In this post, we look at how the Police uses facial recognition technology in light of its accuracy rates, and we analyse the admissibility of FRT-based evidence in courts.

02 July, 2021
5 min read

tl;dr

Facial recognition technology (FRT) is not completely accurate; nevertheless, its use by the Police has increased exponentially over the last five years. To explain this phenomenon, we take a look at how FRT currently works, i.e., how it generates a list of possible matches with accuracy (confidence) scores rather than a single, definitive result. We then examine whether evidence derived from this inaccurate technology can be admissible in courts.

How does the Police use facial recognition?

FRT can be used for two main purposes: verification and identification. Verification matches the live photograph of a person against a pre-existing photograph in the authority’s database (1:1), and is typically used to authenticate the identity of an individual seeking access to benefits or government schemes. Identification attempts to match the face of an individual, extracted from a photograph or video, against the authority’s entire database in order to ascertain that individual’s identity (1:many), and is usually deployed for security and surveillance. For this post, we are concerned only with FRT used by the Police for identification purposes.
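
To make the 1:1 and 1:many distinction concrete, here is a minimal, hypothetical sketch in Python. It assumes that faces have already been converted into numerical embedding vectors by some face-recognition model (the model itself is out of scope), and every function name and threshold value here is illustrative, not any vendor’s actual API.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two face embeddings, ranging from -1 to 1."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(live_photo: np.ndarray, enrolled_photo: np.ndarray,
               threshold: float = 0.8) -> bool:
        """1:1 verification: does the live photo match the photo on file?"""
        return cosine_similarity(live_photo, enrolled_photo) >= threshold

    def identify(probe: np.ndarray,
                 database: dict[str, np.ndarray]) -> list[tuple[str, float]]:
        """1:many identification: score the probe face against every entry
        in the database and rank by similarity, highest first."""
        scores = [(name, cosine_similarity(probe, embedding))
                  for name, embedding in database.items()]
        return sorted(scores, key=lambda pair: pair[1], reverse=True)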

FRT systems generate a probability match score, or confidence score, between the suspect who is to be identified and the database of identified criminals that the Police already has access to (e.g., the Crime and Criminal Tracking Network & Systems). Multiple possible matches are generated and ranked by their likelihood of being the correct match, each with a corresponding confidence score. The final identification, however, is made by a human analyst who selects one match from the list generated by the technology. This procedure is ripe for misidentification: the software only puts forward several possible matches, while the analyst conducting the search makes the final identification. It also opens the door for the analyst’s own biases to creep into the final result; an analyst prejudiced against a certain race, religion or community may have their decision-making affected accordingly.
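
Continuing the hypothetical sketch above, the paragraph’s crucial point can be expressed in one function: the software only produces a ranked shortlist, and nothing in the code ever declares a single definitive match. The top_k and min_confidence values are, again, illustrative assumptions.

    def candidate_list(probe: np.ndarray, database: dict[str, np.ndarray],
                       top_k: int = 10, min_confidence: float = 0.5):
        """Return up to top_k candidates with their confidence scores.
        Note what is absent: no single "match" is ever returned; that
        final, error-prone selection is made by a human analyst."""
        ranked = identify(probe, database)  # from the sketch above
        return [(name, score) for name, score in ranked[:top_k]
                if score >= min_confidence]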

What does the rate of accuracy signify?

The accuracy rates generated by FRT depend on a number of factors, such as camera quality, lighting, distance, database size, the algorithm used, and the suspect’s race and gender. Advanced systems can achieve accuracy rates of 90%, which still leaves an error rate of 10%. A misidentification, or false positive, occurs when a person is identified as someone they are not, and is especially problematic as it can lead to the Police pursuing and charging an innocent person for a crime they did not commit. Comparing this to other biometric data which may be used as evidence, such as fingerprints, it is important to remember that,

“(i)f the sample is contaminated or does not have enough of the biometric data, either insufficient DNA or a partial latent fingerprint, the result is inconclusive. Facial recognition is different; even if the matches include the correct suspect, the analyst conducting the search and selecting the match to forward to investigators may choose the wrong individual. The correct match may not even be in the list of results identified by the software, but the analyst reviewing the results may find a match anyway, thereby implicating an innocent person (a false positive).”
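
A back-of-the-envelope calculation illustrates why even a seemingly small error rate is dangerous in a 1:many search: the error is incurred once per enrolled photograph, not once per search. Both numbers below are assumptions for illustration, not figures from any deployed system.

    # Illustrative arithmetic only; both figures are assumed, not measured.
    database_size = 100_000      # enrolled photographs searched against
    false_match_rate = 0.001     # 0.1% chance one comparison wrongly matches

    # A 1:many search compares the probe against every enrolled photograph,
    # so the expected number of innocent "hits" scales with database size.
    expected_false_matches = database_size * false_match_rate
    print(expected_false_matches)  # 100.0 innocent candidates per search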

Here, it is also important to note that FRT is being used in India without any standards in place to regulate the technology or certify its quality. Thus, there is a very real possibility that a sub-par FRT system is adopted by the Police, leading to misidentification and, ultimately, a wrongful conviction.

Inaccuracy, however, is not a uniquely Indian problem that can be solved with “better” technology. According to a report by Georgetown Law’s Center on Privacy and Technology, the US Federal Bureau of Investigation’s (FBI) own statistics suggest that one out of every seven searches of its facial recognition database fails to turn up a correct match, meaning that in those searches the software produces a list of up to 50 “potential” matches who are all innocent. While we do not have complete information about the FRT systems in India and their respective accuracy rates, owing to a lack of transparency on the part of government authorities, it is safe to assume that their accuracy rates will be significantly lower than the FBI’s 86%, given that the FBI has access to some of the most advanced technology in the world. Indeed, according to a submission made before the High Court of Delhi, the accuracy rate of the FRT used by the Delhi Police is 2%.

So how is FRT evidence used in Court?

Firstly, since FRT is not completely accurate, its output may not be presented in Court as evidence but only used to provide investigatory leads. In such a situation, the defendant may not even be informed that the Police used FRT to build the case against them. This matters because the FRT results may also contain exculpatory evidence which works against the State itself. This point was put forward by the ACLU in its amicus curiae brief in Willie Allen Lynch v. State of Florida, wherein the ACLU stated that the analyst’s use of FRT created information that would tend to exculpate Mr. Lynch and/or impeach the State’s witnesses, including but not limited to:

  1. other possible matches that the FRT generated, indicating the possibility of an alternate perpetrator;
  2. the analyst’s choice to send only the defendant’s photo to investigators, indicating improper suggestiveness;
  3. the range of possible matches and their probability scores;
  4. an executable version of the software and the source code (which is prone to error and bias); and
  5. the analyst’s lack of training in forensic face analysis.

According to the brief, all of this evidence would have indicated uncertainty in the identification procedure.

Another way in which FRT evidence may be used in Court is when it is supported by eyewitness testimony. However, it may be noted here that the use of eyewitness evidence for human identification is itself problematic, as “memory for a face is affected by the introduction of subsequent misleading information about that face, contradicting the view that faces are special in their lack of susceptibility to interference”. According to a study conducted in the US on proven cases of wrongful convictions, 52% were based on eyewitness testimony.

This is because eyewitness evidence works best when the witness is already acquainted with the suspect, as the probability of misidentification is then extremely low. If the witness does not know the suspect beforehand, it becomes especially easy for them to be hoodwinked by FRT, since the technology generates matches which look similar to the suspect, thereby confusing the witness; people with similar features thus face a higher chance of being misidentified. Additionally, if the eyewitness knows that the individual was identified through FRT, that knowledge may sway them into positively identifying the suspect as well. While such eyewitness testimony is sometimes provided by a bystander, it is usually the arresting officers themselves who testify in Court about the FRT evidence, as was the case for Willie Lynch, which may introduce further bias.

The final scenario for admissibility of FRT evidence in Court is where the evidence is admitted solely on its own, without any other supporting testimony. We currently have no clear sense of whether such evidence would be admissible in the absence of eyewitness testimony. Here, parallels can be drawn with how biometric evidence is treated in Indian courts, where such evidence has to be supported by expert testimony. Assuming that the person who created the algorithm would be the expert here, such testimony is yet to be provided in Court in matters related to FRT. In all likelihood, admissibility will be decided on a case-to-case basis, just like any other expert testimony. Additionally, in such a situation, the defendant should also be provided access to the software’s source code in order to meaningfully challenge the evidence presented against them.

Important Documents

  1. Is the illegal use of facial recognition technology by the Delhi Police akin to mass surveillance? You decide. dated July 3, 2020 (link)

