Going beyond hashtags: how to ensure AI technology truly benefits everyone

IFF has submitted its comments on NITI Aayog's draft Working Document: Enforcement Mechanisms for Responsible #AIforAll, highlighting the following issues: the need for concrete principles, the role of the oversight body, robust regulation for the private sector, and risk-assessment-based restrictions.

14 December, 2020
8 min read

Tl;dr

NITI Aayog's draft Working Document: Enforcement Mechanisms for Responsible #AIforAll attempts to build on the previous Working Document: Towards Responsible #AIforAll by providing enforcement mechanisms for the principles laid down in that earlier document. IFF has submitted its comments on the draft document, highlighting the following issues: the need for concrete overarching principles, the extent of the role of the oversight body, robust regulation for the private sector, and risk-assessment-based restrictions for certain uses of Artificial Intelligence (AI).

Issues

As pointed out in Part 1 of the working document, the use of AI presents manifold risks:

  1. AI systems could pick up spurious correlations in the underlying data, leading to good accuracy on test datasets but significant errors in deployment.
  2. 'Deep Learning' systems remain opaque 'black boxes': they can show a high degree of accuracy even though technicians cannot explain how they arrive at their outputs, leading to a lack of trust, accountability, and, ultimately, usage.
  3. Large-scale deployment of AI leads to a large number of high-frequency decisions, amplifying the impact of unfair bias. This may cause a lack of trust and disruption of social order.
  4. Technological errors may lead to large-scale exclusion of citizens from services guaranteed by the state.
  5. A lack of consequences reduces incentive for responsible action, while difficulties in the allocation of liability arise in grievance redressal.
  6. AI systems may use personal data without the explicit consent of concerned persons. Advanced technology may also discern potentially sensitive information from the outputs of the system.
  7. AI systems are susceptible to attacks, such as manipulation of the data used to train the AI, manipulation of the system to respond incorrectly to specific inputs, etc.
  8. The rapid rise of AI has led to automation of a number of routine jobs, which, without adequate re-skilling and support from the state, may cause social unrest.
  9. Psychological profiling enabled by AI, and the ease of spreading propaganda through online platforms, have the potential to cause social disharmony and disrupt democratic processes.

Part 1 of the document lays down 7 core principles to mitigate these harms and to establish a common benchmark for the beneficial use of AI across different sectors. These principles are:

  • Principle of Safety and Reliability
  • Principle of Equality
  • Principle of Inclusivity and Non-discrimination
  • Principle of Privacy and Security
  • Principle of Transparency
  • Principle of Accountability
  • Principle of Protection and Reinforcement of Positive Human Values

What does the draft document say?

The draft document clarifies the nature and roles of the oversight body for AI technology. It proposes that the oversight body be a highly participatory advisory body, and take on the following roles:

  • Manage and update AI principles laid down
  • Research into the various issues related to AI
  • Provide clarity on design structures, standards, guidelines, etc.
  • Promote development and access to AI data and technology tools
  • Help create awareness about responsible AI among various stakeholders
  • Coordinate between different sectoral regulators
  • Represent India in international dialogues on AI

The draft document also specifies the need for an AI Ethics Committee in public sector bodies to handle the procurement, development, and operations phases of AI systems and to ensure adherence to the Responsible AI principles. Self-regulation is recommended for the private sector, with accountability for ethics assigned to existing leadership.

Ensuring that AI truly works for All

Our recommendations are based on 4 key issues:

  1. Need for concrete overarching principles: Part 1 of the working document lists certain principles as principles for responsible AI. However, it remains unclear what adopting these principles would mean for the regulatory framework. We therefore believe there is a need to spell out both the detailed meaning of the principles, which will serve as the foundation on which enforcement mechanisms are built, and the implications of adopting them for regulatory frameworks.
  2. Greater role for oversight body: The draft document proposes the Council for Ethics and Technology as an oversight mechanism, which will be a “highly participatory advisory body”. We appreciate the formation of such an oversight body. However, given the absence of clear laws and legal requirements governing AI, the Council for Ethics and Technology should have more than mere advisory functions and should perform a regulatory assurance role, in conjunction with any forthcoming data authority under the Personal Data Protection Bill, 2019. This should apply specifically to any AI-based system that impacts a person's legal rights.
  3. Robust regulation for the private sector: The draft document proposes voluntary self-regulation as a starting point for regulating AI in the private sector in India. While self-regulatory efforts are commendable, they should not become a substitute for laws needed to closely monitor AI. To foster a healthy AI ecosystem, the soft law of self-regulation needs to be complemented with strict provisions governing high-risk applications of AI.
  4. Risk assessment based restrictions for different sectors: All AI is not the same, since the term encompasses multiple technologies. Thus, on the basis of a sector-specific risk-assessment study, proportionate restrictions on the use of AI should be in place until an overarching regulatory framework has been developed.

Risk Based Assessment - AI in Facial Recognition Technology

Procedure

In our submission, we provided a template risk-based assessment for illustrative purposes, based on the following procedure (a minimal code sketch of the procedure follows the list):

  • Step 1: The Council must categorize use cases of AI into the 4 types of algorithmic systems, based on the AI Now Institute's categorization of algorithmic systems in its 2018 ‘Algorithmic Accountability Policy Toolkit’.
  • Step 2: The Council must categorize the data collected by the AI as anonymous data, personal data, or sensitive personal data.
  • Step 3: Based on the type of data, a risk assessment may be done, with sensitive personal data carrying the highest risk and anonymous data the lowest (on a scale of 1-3, with 3 being the highest).
  • Step 4: Based on the level of risk combined with the type of algorithmic system, the regulatory framework may be designed by the Council.
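
To make the procedure concrete, here is a minimal illustrative sketch in Python of how Steps 1-4 could be wired together. It is a sketch under stated assumptions, not a definitive implementation: the four system-type labels are hypothetical placeholders (the actual category names are in the AI Now Institute's 2018 toolkit), and the Step 4 mapping from risk score to regulatory response is our own invention for illustration.

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical placeholders for the AI Now Institute's 4 categories of
    # algorithmic systems; the actual names are in the 2018 'Algorithmic
    # Accountability Policy Toolkit'.
    class SystemType(Enum):
        TYPE_1 = 1
        TYPE_2 = 2
        TYPE_3 = 3
        TYPE_4 = 4

    # Steps 2 and 3: data categories mapped to risk scores on the 1-3 scale,
    # with sensitive personal data highest and anonymous data lowest.
    DATA_RISK = {"anonymous": 1, "personal": 2, "sensitive_personal": 3}

    @dataclass
    class UseCase:
        name: str
        system_type: SystemType   # Step 1: categorisation of the use case
        data_category: str        # Step 2: type of data the AI collects

    def assess(use_case: UseCase) -> int:
        """Step 3: score the use case by the data it collects."""
        return DATA_RISK[use_case.data_category]

    def recommend(use_case: UseCase) -> str:
        """Step 4 (illustrative only): map the risk score to a proportionate
        regulatory response. A real framework designed by the Council would
        also differentiate responses by system type; we key off the score
        alone for brevity."""
        risk = assess(use_case)
        if risk == 3:
            return "strict statutory regulation; ban where harms are irreversible"
        if risk == 2:
            return "co-regulation: binding rules plus independent audits"
        return "self-regulation with transparency reporting"

    # Example: police FRT matches faces, i.e. biometric (sensitive personal)
    # data, so it lands at the highest rung of the scale.
    frt = UseCase("police FRT", SystemType.TYPE_1, "sensitive_personal")
    print(recommend(frt))

Keyed off a table of this kind, the highest-risk uses, such as FRT in criminal justice, would attract the strictest restrictions, which is the outcome our submission argues for below.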

The dangers of AI in Facial Recognition Technology

Here, we would like to address the use of AI in facial recognition technology (FRT). FRT that collects sensitive personal data for criminal justice purposes should be banned from being developed and deployed in India. Under IFF’s Project Panoptic, we have been mapping FRT systems across the country that are being developed and deployed without any regulatory framework and without any public awareness or transparency about how they will be used.

By our estimation, there are currently 19 different FRT projects in various stages of development and deployment being used by police and security/intelligence agencies at the Central and State level. This is being done in the absence of a personal data protection law or regime, as well as any specific regulation for FRT. The harms of such use are manifold:

  • One sort of harm results from the implementation of a faulty FRT system, wherein the technology is inaccurate in identifying and matching faces from a photo/video to an existing database. Such inaccuracy could lead to a false positive result from the FRT (a worked example of how false positives scale is sketched after this list). This may lead to discrimination and the strengthening of existing biases. In the present context, a false positive by an FRT system being used by the police could lead to the wrongful arrest and detention of an innocent person.
  • Another type of harm results from the implementation of an accurate FRT system wherein the technology achieves 100% accuracy in identifying and matching faces from a photo/video to an existing database. While there have been claims of a fully accurate FRT system, none of these claims has been corroborated by independent review and audit. The National Institute of Standards and Technology (NIST) has extensively tested FRT systems for ‘1:1’ verification and ‘1:many’ identification, and how the accuracy of these systems varies across demographic groups. These independent studies have concluded that, currently, no FRT system has 100% accuracy.
  • Probe images for FRT systems are often collected by the police through CCTV cameras installed in public spaces. Individuals in a CCTV-surveilled area may be aware that they are under surveillance, but the assumption is that this surveillance is temporary. Use of CCTV in conjunction with FRT would mean their images are stored for a longer period of time, if not permanently. This data would also be used to extract particular data points, such as facial features and other biometrics (sensitive personal data), which the individual has not consented to sharing when entering a CCTV-surveilled zone, and these data points can be used to track the person's future movements. Therefore, the integration of FRT with a network of CCTV cameras would make real-time mass surveillance extremely easy.
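
The false positive concern in the first bullet above is not just about headline accuracy: in a ‘1:many’ search over a large crowd, even a seemingly small error rate yields far more wrong matches than right ones, because almost everyone scanned is innocent. The short calculation below illustrates this; every number in it is an assumption chosen for illustration, not a measurement of any deployed system.

    # Illustrative base-rate arithmetic for a 1:many FRT search.
    # All figures are assumptions, not measurements of any real system.
    population_scanned = 1_000_000   # faces checked against a watchlist
    true_matches = 100               # people in the crowd actually on the list
    false_positive_rate = 0.001      # 0.1% of innocent people wrongly matched
    true_positive_rate = 0.99        # 99% of listed people correctly matched

    innocent = population_scanned - true_matches
    false_alarms = innocent * false_positive_rate      # ~1,000 wrong matches
    correct_hits = true_matches * true_positive_rate   # ~99 right matches

    precision = correct_hits / (correct_hits + false_alarms)
    print(f"false alarms: {false_alarms:.0f}, correct hits: {correct_hits:.0f}")
    print(f"share of flagged people actually on the list: {precision:.1%}")  # ~9%

Under these assumed numbers, roughly nine out of every ten people the system flags would be innocent, which is why a false positive by a police FRT system is not an edge case but the statistically expected outcome.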

Goes against standards set by the Supreme Court

We would like to emphasise that the use of FRT for criminal justice purposes fails to satisfy the standards laid down by the Hon’ble Supreme Court in Justice K.S. Puttaswamy vs Union of India ((2017) 10 SCC 1). The landmark decision lays down certain thresholds to which the State must conform in order to justify intrusions into people’s right to privacy, protected under Article 21 of the Constitution of India. These thresholds are:

  1. Legality: The intrusion must take place within a defined regime of law, i.e., there must be an anchoring legislation with a clear set of provisions. As we know, there is no anchoring legislation in place to regulate the use of FRT by the police. Additionally, we do not have a data protection regime to oversee the collection, processing, and storage of data collected by these systems.
  2. Necessity: The restriction on people’s privacy (in this case, data collection and sharing) must be needed in a democratic society to fulfil a legitimate state aim. Use of FRT by the police has been justified on grounds of public and national security, with proponents saying it will enable automatic identification and verification against criminal databases. This characterisation rests on the faulty assumption that facial recognition technology is accurate, when ongoing research in the field has shown that completely accurate facial recognition technology has not yet been developed. Use of such inaccurate technology, especially for criminal prosecution, could thus result in false positives.
  3. Proportionality: The Government must show, among other things, that the measure being undertaken has a rational nexus with the objective. FRT contemplates the collection of sensitive, intimate personal information from all individuals present at a scene of crime through images and videos, in the absence of any reasonable suspicion. Collecting such information from everyone present casts a presumption of criminality on a broad set of people, which is disproportionate to the objective it aims to achieve.
  4. Procedural safeguards: There must be an appropriate, independent institutional mechanism, with in-built procedural safeguards aligned with standards of procedure established by law that are just, fair, and reasonable, to prevent abuse. In the absence of any checks and balances, function creep becomes an immediate problem, wherein FRT comes to be used for functions beyond its stated purpose. Use of FRT without safeguards could result in illegal state-sponsored mass surveillance, which would have a chilling effect on fundamental rights such as the right to freedom of expression, freedom of movement, and freedom of association, which are guaranteed in the Constitution.

Use of this technology has raised concerns not only in India but also abroad, with various civil society organisations such as the Electronic Frontier Foundation, the Algorithmic Justice League, and Amnesty International calling for a ban on its use. Calls for a total ban have been gaining momentum due to the fear that use of facial recognition by police and security/intelligence agencies will not only violate the rights to privacy and freedom of speech and expression but also lead to human rights violations by deepening systemic bias against already marginalised communities. The impact on marginalised communities gains special importance for us locally due to the wide inequality and diversity present in our society. Thus, we recommend that the use of FRT be banned.

Important Documents

  1. NITI Aayog's draft Working Document: Enforcement Mechanisms for Responsible #AIforAll (link)
  2. NITI Aayog's Working Document: Towards Responsible #AIforAll (link)
  3. IFF's comments on the draft document (link)
  4. IFF's Project Panoptic FRT Tracker (link)


