Read our comments on the DoT's Paper to develop an Indian Artificial Intelligence Stack

Our main recommendation to the committee was to set up an independent supervisory data protection authority to regulate the stack.

03 October, 2020
7 min read

tl;dr

Read our comments to the AI Standardisation Committee constituted by the Department of Telecommunications (DoT) on their paper for the Development of an Indian Artificial Intelligence Stack released on September 2, 2020. Our main recommendation to the committee was to set up an independent supervisory data protection authority to regulate the stack.

The Indian Artificial Intelligence Stack

On September 2, 2020, the AI Standardisation Committee constituted by the DoT released the paper for the Development of an Indian Artificial Intelligence Stack. The paper states that the Government of India has recognised that an AI-driven economy can transform the lives of millions, i.e., that AI is the main driver for the desired socio-economic transformation of India. The paper proposes a stack that seeks to remove the impediments to AI deployment by putting in place a comprehensive framework, enabling the development of a suitable AI stack with a mix of layers and interfaces that complement each other and integrate with one another. The paper proposes to divide the AI stack into six layers with appropriate horizontal and vertical integration. These six layers are:

  1. Infrastructure Layer
  2. Storage Layer
  3. Compute Layer
  4. Application Layer
  5. Data / Information Layer
  6. Security & Governance Layer

Our comments

After analysing the paper at length, we concluded that while it highlights certain problems generally associated with AI, it fails to provide nuance on these concerns, specifically on how they will be resolved. Additionally, we realised that the paper fails to address one of the most pressing concerns related to AI, i.e., mass surveillance. We therefore divided our comments into two parts:

1. Absence of Nuance

We noticed that while the paper does point out several concerns with AI, it fails to satisfactorily address them. These concerns mainly relate to:

Privacy/Security of data collected

The data being collected by the AI Stack covers a wide range of categories, including sensitive personal data. Paragraph 5.29 of the paper states that “The data processed and stored in many cases include geolocation information, product-identifying data, and personal information related to use or owner identity, such as biometric data, health information, or smart-home metrics. For some applications personal information are also captured through audio or video, or include communication capabilities, such as those used in children’s devices.” The collection of such sensitive personal data raises the obvious question of how privacy and data protection issues will be resolved.

Paragraph 5.3 of the paper states that “In the absence of a clear data protection law in the country, EU's General Data Protection Regulation (GDPR) or any of the laws can be applied. This will serve as interim measure until Indian laws are formalised.” The Personal Data Protection Bill is currently languishing in Parliament, and it is not known when it will be enacted. Moreover, while the paper does provide for alternatives, it creates confusion and vagueness as to their applicability: the phrase “or any of the laws” is unclear and does not indicate which specific laws the paper is pointing to.

Thus, the paper puts forward the GDPR as the data protection law that will apply to the AI Stack, but it fails to address how the stack will comply with the GDPR's extensive provisions. One such requirement, under recital 117 of the GDPR, is the establishment of a supervisory authority, whose responsibility under recital 122 is to ensure that the processing of personal data carried out by public authorities, or by private bodies acting in the public interest, complies with the principles laid down in the GDPR. However, India does not have any such supervisory authority, nor does the paper provide for setting one up. In its absence, enforcement of the GDPR for the AI Stack is not possible.

In addition, certain features of the AI Stack mentioned in the paper are clearly violative of the GDPR. These include paragraph 2.6, which mentions the use of social media data to generate credit scores, thereby violating the conditions of consent laid down in Article 7 of the GDPR, and paragraph 5.20, which arbitrarily divides the collected data into three categories (hot data, warm data and cold data), thereby violating the purpose limitation principle stated in Article 5 of the GDPR.

Algorithmic Bias

Another concern that the paper raises but fails to provide adequate nuance on is how the AI Stack will guard against algorithmic bias. The paper, in paragraph 4.10, rightly points out that algorithmic bias occurs because “The self learning nature of AI means, the distorted data the AI discovers in search engines, perhaps based upon “unconscious and institutional biases”, and other prejudices, is codified into a matrix that will make decisions for years to come.” As a solution, the paper suggests “a need for evolving ethical standards, trustworthiness, and consent framework to get data validation from users.” However, these provisions do not find any mention in the proposed stack itself. The paper fails to address how the data/information exchange layer and compute layer will tackle algorithmic bias, and how these layers will ensure that ethical standards are followed both in the collection of data and in the building of the algorithm itself. Standards of data collection are also important to ensure that the data collected is sufficiently representative of the population it purports to represent.
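
To make the mechanism the paper describes concrete, below is a minimal, purely illustrative Python sketch. The loan-approval scenario, feature names, and numbers are our own assumptions, not drawn from the paper; it simply shows how a model trained on historically biased decisions learns that bias as an ordinary weight and then reproduces it in every future automated decision.

```python
# Illustrative only: synthetic data showing how historical bias is
# "codified into a matrix" of learned weights. The loan scenario and
# all feature names are hypothetical, not taken from the DoT paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)   # protected attribute: 0 or 1
score = rng.normal(0, 1, n)     # a genuinely relevant creditworthiness signal

# Historical decisions were prejudiced: group 0 applicants were approved
# more often at the same score, so the labels themselves encode the bias.
approved = (score + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# A model trained on these labels learns the prejudice as an ordinary weight.
model = LogisticRegression().fit(np.column_stack([group, score]), approved)
print("learned weights [group, score]:", model.coef_[0])

# On fresh applicants with identical score distributions, approval rates
# still differ by group: the old bias is now automated.
test_score = rng.normal(0, 1, n)
for g in (0, 1):
    X_test = np.column_stack([np.full(n, g), test_score])
    print(f"group {g} approval rate:", model.predict(X_test).mean())
```

The point is not the specific model: nothing in an ordinary training pipeline distinguishes prejudice from signal, which is precisely why the stack's layers need explicit standards for data collection and validation.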

For instance, paragraph 2.5(a) argues that access to healthcare can be increased in rural areas with the help of AI: “This can be achieved through implementation of AI driven diagnostics, personalised treatment, early identification of potential pandemics, and imaging diagnostics, among others.” What also needs to be taken into account here is the massive problem of gender bias, not only in medical research but also in access to medical help. The paper needs to address and expand on such issues to ensure that existing biases in both medical research and medical access do not creep into the solutions developed under the AI Stack. One possible safeguard comes from the AI Now Institute in New York, whose study “Algorithmic Impact Assessments: A Practical Framework For Public Agency Accountability” introduces a model framework for governmental entities to create algorithmic impact assessments (AIAs), which evaluate the potential detrimental effects of an algorithm in the same manner as environmental, privacy, data, or human rights impact statements. Such a safeguard, however, is lacking in the proposed AI Stack.

2. Need for the AI Stack to address state surveillance concerns

One of the major gaps in the paper is its failure to address the concerns surrounding the use of AI for state-sponsored mass surveillance. According to a Carnegie Endowment for International Peace working paper titled “The Global Expansion of AI Surveillance”, authored by Steven Feldstein, AI surveillance technology is spreading at a faster rate, and to a wider range of countries, than experts have commonly understood: at least 75 out of 176 countries globally are actively using AI technologies for surveillance purposes. Liberal democracies (such as India) are major users of AI surveillance. The paper's index shows that 51 percent of advanced democracies deploy AI surveillance systems, compared with 37 percent of closed autocratic states, 41 percent of electoral autocratic/competitive autocratic states, and 41 percent of electoral democracies/illiberal democracies. Governments in full democracies are deploying a range of surveillance technology, from safe city platforms to facial recognition cameras.

As has been reported since December 2019, the use of facial recognition and drones for surveillance has been rampant in the country, especially in New Delhi. The National Crime Records Bureau's National Automated Facial Recognition System, a central-level project aiming to create a national database of photographs that will use facial recognition to identify suspects, is in the Request for Proposals (RFP) stage. In addition, multiple other central and state level projects, already in place or in development, aim to use AI for security or surveillance purposes. The use of AI for such projects, in the absence of a data protection regime or a concrete surveillance law, raises concerns about misuse, because AI makes the entire surveillance process highly streamlined and cost-effective. According to Edward Snowden, this allows for the surveillance of teams of people and even populations of people: entire movements, across borders, across languages, across cultures; in other words, state-sponsored mass surveillance.

In this context, it is pertinent for the paper to also address how governmental use of the AI Stack for surveillance will be regulated.

Our recommendations

Keeping in mind the above concerns, we made the following recommendations:

  1. An independent supervisory data protection authority consisting of relevant stakeholders should be set up to regulate the AI Stack. We note that such an authority is presently under contemplation in the draft Personal Data Protection Bill, and its establishment should be a condition precedent to the launch and operation of the AI Stack.
  2. The Data Protection Authority should lay down a privacy framework to ensure that the right to privacy of citizens is not violated. The privacy framework should also address how the AI Stack will deal with concerns relating to state-sponsored mass surveillance. The authority can also be a useful body for the harmonisation of standards, given the plethora of personal data policies presently under consideration by the Union and State Governments.
  3. The Data Protection Authority should also set out how it will guard against algorithmic bias. For this, the authority should develop an ethical framework for data collection and storage. Some such provisions already exist in the civil society model law, the Indian Privacy Code, 2020, which has also been introduced in Parliament as a private member's bill; its specific provision on the need for safeguards when automated decision-making leads to legal impacts may be considered.

These comments were drafted by IFF staff with the help of software engineer Gargi Sharma.

Important Documents

  1. Comments to AI Standardisation Committee, Department Of Telecommunications on the paper for Development of an Indian Artificial Intelligence Stack dated October 2, 2020 (link)
  2. Paper for Development of an Indian Artificial Intelligence Stack dated September 2, 2020 (link)
