Analysing the NDHM’s Health Data Management Policy: Part 2

Along with the Centre for Health Equity, Law, and Policy, we have drafted a working paper analysing the NDHM's Health Data Management Policy. Here, we look at issues of consent and confidentiality, data privacy and security, exclusion, and private sector access to health data.

15 July, 2021
11 min read


Along with the Centre for Health Equity, Law, and Policy, we have drafted a working paper analysing the National Digital Health Mission’s Health Data Management Policy. In this second post on our paper, we look at some of the key issues that plague the Health Data Management Policy, such as concerns about consent and confidentiality, data privacy and security, exclusion, and private sector access to health data.


Last month, we released our paper ‘Analysing the NDHM Health Data Management Policy’, jointly written with the Centre for Health Equity, Law, and Policy. In our previous blogpost, we provided some context to the National Digital Health Mission’s (NDHM’s) Health Data Management Policy (HDMP), summarised our previous submissions on the policy, assessed India’s existing infrastructure against the prerequisites for implementing a digital health records system, and looked at the governance framework laid down by the policy.

As we have pointed out earlier, it is welcome that the policy lays down privacy-by-design as its guiding principle. However, at a more granular level, significant issues emerge with respect to the implementation of this ideal. In this blogpost, we look at some of these issues.

Consent and confidentiality

The HDMP lays out a consent framework to govern the collection, storage, processing, and sharing of individual health data, with the objective that users “should at all times have control and decision-making power over the manner in which personal data associated with them is collected and processed further.” It also lays out the process for obtaining consent for the collection and processing of data, including in the case of children, mentally ill or incapacitated individuals, and medical emergencies. Lastly, the policy accords certain rights to users in relation to their medical data, including the rights to confirmation and access, correction and erasure, restricting or objecting to disclosure, and data portability.

For the purpose of collecting and processing personal data, data processors are obligated to furnish a privacy notice and obtain the express consent of users. The consent must be:

  1. Freely and clearly given;
  2. Informed as to the scope of the consent;
  3. Specific as to the purposes of collecting or processing the data;
  4. Capable of being withdrawn at any time.

In the case of sensitive personal data, the data processor must also inform the user of any harms that may be involved in the processing of such data. Finally, consent may be obtained on paper or electronically, either directly from the user or through an electronic consent manager.

The NDHM-HDMP rightly centres user autonomy as its guiding principle. However, certain concerns remain. First, the mandatory requirement of taking informed consent is limited to the collection and processing of personal data, and the same requirement is not explicitly extended to the creation of a Universal Health ID (UHID). Reportedly (see here and here), the central government has been automatically generating UHID numbers for all individuals who present their Aadhaar number to get a COVID-19 vaccine, without their consent or knowledge. This is at odds with the guiding principle of the HDMP’s consent framework.

Secondly, consent is required to be specific only as to the purpose of collecting and processing personal data. In effect, health data companies can secure one-time consent from users for collecting and processing personal data for one or more broad purposes. This is evident from the fact that companies are required to collect fresh consent only in the event of a change in their privacy policy or in relation to a previously unidentified purpose. Such a policy precludes the user from giving or refusing consent at a granular level.

For example, the user will not be able to withhold consent to digitise specific information, or to refuse consent to share specific digitised information, such as abortion, substance use/dependence, HIV/STI status, suicide attempts, and other mental health information. To address this, broad consent at the outset must be accompanied by specific consent taken at each instance of data processing and sharing. The specific consent must identify the entity with whom information is to be shared, the specific purpose for sharing, and the information necessary to facilitate that purpose. To facilitate controlled sharing of personal data, data processors must be required to put in place systems and processes for ‘masking’ data.
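A layered consent model of this kind can be sketched in code. The structures, field names, and purposes below are illustrative assumptions for the sketch, not constructs prescribed by the HDMP: broad consent covers stated purposes, while each act of sharing requires a specific consent naming the recipient, the purpose, and the exact fields to be released; everything else is masked.

```python
from dataclasses import dataclass

@dataclass
class BroadConsent:
    """One-time consent to collect and process data for stated broad purposes."""
    user_id: str
    purposes: set

@dataclass
class SpecificConsent:
    """Per-instance consent naming the recipient, purpose, and exact fields shared."""
    user_id: str
    recipient: str
    purpose: str
    fields: set

def share_record(record: dict, broad: BroadConsent, specific: SpecificConsent) -> dict:
    """Release only the fields covered by the specific consent; mask the rest."""
    if specific.purpose not in broad.purposes:
        raise PermissionError("purpose not covered by broad consent")
    # 'Masking': fields outside the specific consent are withheld entirely.
    return {k: v for k, v in record.items() if k in specific.fields}

# Hypothetical record: HIV status is never released without a specific consent for it.
record = {"name": "A", "hiv_status": "negative", "blood_group": "O+"}
broad = BroadConsent("u1", {"treatment"})
specific = SpecificConsent("u1", "hospital-x", "treatment", {"name", "blood_group"})
shared = share_record(record, broad, specific)  # hiv_status is withheld
```

The point of the sketch is that refusal operates at the level of individual fields and individual sharing events, not only at sign-up.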

Thirdly, the NDHM Personal Data Processing Model Consent Form leaves out crucial information. The form must explicitly mention that the collection of data is voluntary, and that refusal will neither result in the denial of services or care that one is entitled to, nor in the imposition of any additional cost. The exact duration of retention of the data must also be specified.

Fourthly, the consent framework can be further bolstered to empower users. For example, the broad consent form should be accompanied by an information sheet explaining, in easy-to-understand language, the rights to confirmation and access, correction and erasure, restricting or objecting to disclosure, and data portability, as well as the process for grievance redress. Further, consent should not be limited to a pre-printed form with blank spaces for limited handwritten entries and signatures, and should ideally be accompanied by online or in-person counselling.

Finally, low digital literacy levels may impede the ability of users to exercise consent in an informed and meaningful manner. For example, a 2017-18 NSO survey found that only 18.4% of persons aged 15 and above were able to operate a computer, while only 22.9% were able to use the internet. Unfortunately, existing schemes for digital literacy have witnessed slow progress. For example, under the Pradhan Mantri Gramin Digital Saksharta Abhiyan (PMGDISHA), as of March 2021, only around 4.54 crore candidates were enrolled for digital literacy training and 2.71 crore candidates were certified.

A 2019 independent impact assessment of PMGDISHA indicated that:

  • Only 50.53% of the respondents felt that their training had led to increased confidence in the use of digital technology and a subsequent increase in earning capacity;
  • Only 37.63% of the respondents stated that they used digital technologies for daily office/school work;
  • Only 30% of the respondents said that they were using the internet to access government services;
  • Only 24.75% of the respondents affirmed that they were able to teach digital skills to their family members after attending the programme.

Additionally, many digital literacy programmes in India focus on the usage of computers, even though most of the country accesses the internet through mobile devices. In light of this, we recommend that the NDHM facilitate and scale up digital literacy programmes to address widespread digital illiteracy, which may impede the ability of data principals to choose and consent in an informed manner.

Data privacy and security

The HDMP states that its guiding principle is ‘Security and privacy by design’, and lays down a framework for realising this goal:

  1. Health data companies are bound by the principles of accountability, transparency, consent-driven sharing, purpose limitation, and collection, usage, and storage limitation, along with the adoption of reasonable security practices. Data minimisation must also be implemented.
  2. Companies must publish a ‘privacy by design’ policy. Additionally, they must conduct a data protection impact assessment, maintain reliable records, and submit to data audits.

While adopting ‘privacy by design’ as a principle is a step in the right direction, the overarching concern of large-scale processing of health data in the absence of data protection legislation remains. Without statutory backing, effective data protection would be difficult to enforce, especially since the HDMP does not contain adequate penalties for non-compliance to act as a deterrent. Additionally, concerns about surveillance remain unanswered.

Moreover, the policy itself does not imply a strong data protection regime. For example, the HDMP allows data to be processed until the purpose for which it was collected is no longer necessary, leaving companies the discretion to decide when that point is reached. Additionally, the policy allows the blocking or restriction of personal data where the legitimate interests of either the user or the health data collector are impaired, which may allow the latter to store and/or process the user’s health data beyond the consented time period and for longer than is necessary. Lastly, the HDMP permits restrictions on portability where porting the data would reveal trade secrets. Compare this to the EU’s General Data Protection Regulation: the right to data portability guaranteed under Article 20 of the GDPR does not impose such restrictions.

The HDMP also does not envisage a strong accountability mechanism to enforce privacy. For example, in the case of a security breach, only notification to the NDHM has been mandated; notifying the affected user has not been made compulsory. Meanwhile, the carte blanche given for the processing and usage of anonymised personal data as ‘non-personal’ data ignores several attendant security hazards. By contrast, the National Centre for Disease Informatics and Research imposes strict conditions to prevent unauthorised access to data, including maintaining a list of authorised individuals with access to the data repository.

The policy also provides the NDHM with the discretionary power to specify acceptable purposes for collecting or processing health data, which may further contribute to excessive data collection. Such issues are further aggravated by the lack of adequate transparency and accountability measures that allow users the power to directly hold health data processors to account. Again, this can be addressed through strict access control requirements. As has been argued elsewhere, conflating privacy with security may lead to significant problems.

To combat these issues, robust access control mechanisms must be implemented:

  • Both the regulator and the data processor should log all instances of access requests and approvals;
  • Secure authorisation tokens should be generated and authenticated for each approval;
  • Access logs should be regularly cross-verified between regulators and processors;
  • Only aggregated data should be stored at the cloud level, with individual electronic health records remaining at the facility level.
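The first three of these controls can be sketched together. The function names, the shared signing key, and the two parallel logs below are assumptions made for the illustration; the HDMP itself prescribes no specific mechanism:

```python
import hmac
import hashlib
import secrets

SECRET = secrets.token_bytes(32)  # signing key, assumed held by the regulator

regulator_log = []   # append-only log kept by the regulator
processor_log = []   # independent log kept by the data processor

def approve_access(requester: str, record_id: str) -> str:
    """Issue a signed token for one access request and log it on both sides."""
    nonce = secrets.token_hex(8)
    payload = f"{requester}:{record_id}:{nonce}"
    token = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    regulator_log.append((payload, token))
    processor_log.append((payload, token))
    return f"{payload}:{token}"

def verify_token(token_str: str) -> bool:
    """Authenticate a presented token before serving the record."""
    payload, _, token = token_str.rpartition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

def logs_consistent() -> bool:
    """Cross-verification: the two logs must agree entry for entry."""
    return regulator_log == processor_log
```

Because every approval is logged independently by both parties and every token is cryptographically checked, an access that appears in one log but not the other, or a token that fails verification, is immediately detectable in an audit.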


Exclusion

Prima facie, the HDMP does address issues of exclusion: the policy explicitly lays down a regime of non-exclusion, with respect to both the possession of a Health ID and Aadhaar-based verification. It mandates that participation in the National Digital Health Ecosystem shall take place on a voluntary basis, and that no individual can be denied access to any health service for lack of a Health ID. Additionally, Aadhaar will not be mandatory for registering for a Health ID, while each data principal shall have the right to opt out of the programme at any time and ask for the de-linking and deletion of their data.

However, whether deliberate or not, the potential for both exclusion and coercion-based inclusion still exists. India has already faced several issues with Aadhaar-based authentication, and the efficacy of Aadhaar-based registration for schemes has been studied in various contexts. One study of Aadhaar-based verification in PDS shops found that the system is “rife with technical issues such as incomplete seeding of cardholder information, biometric failure and administrative gaps such as inadequate failure reporting and back-up systems”.

Indeed, the CEO of the UIDAI noted that in 2018, authentication failure for government services was as high as 12%. For a crucial sector such as health, such errors may have significant ramifications for public health outcomes. Such concerns have been voiced before: for example, with regard to the Mother and Child Tracking System launched in 2009, activists pointed out the threat of the extraction of data becoming a precondition for the delivery of health services. The mandatory use of Aadhaar for creating Health IDs is also contrary to the Aadhaar Act as read down in the Supreme Court’s Puttaswamy judgement, under which Aadhaar can be made mandatory only for government benefits and schemes.

Furthermore, it is unclear whether the proposed framework will be able to accommodate user consent in practice. Experts have stated that registration for the Health ID may be similar to Aadhaar-based registration, in that it would be “‘voluntary’ on paper, but made mandatory by certain institutions, both government-owned and private”. Multiple media reports (see here and here) have mentioned cases in which the Health ID has been made mandatory. More recently, as mentioned earlier, multiple media reports state that citizens who enrolled in the COVID-19 vaccination programme have had Health IDs created without their consent, on the basis of the data they entered and linked to their Aadhaar, despite several government clarifications that Aadhaar is not mandatory for receiving a vaccine.

Concerns about ‘coercion-based inclusion’ also persist. Multiple reports have highlighted the effectively coercive nature of Aadhaar, with citizens coerced into registering through the Aadhaar framework for the provision of services. This can also take the form of financial compulsion, especially in the context of healthcare, as seen at the All India Institute of Medical Sciences (AIIMS): patients who provide their Aadhaar ID have the registration charge of Rs. 100 waived, and are subsequently issued a Health ID. Thus, the Health ID framework must be revisited. Even if the framework is retained, Aadhaar-based verification should be removed so that issues related to privacy are at least partially addressed.

Private sector access to healthcare data

The HDMP allows health data companies to share health data with entities for the purposes of research, including insurance and pharmaceutical companies. Research access should not extend to personal health data generated in the course of individual patient care, and the authority to grant such access should not be vested in individual data processors. This is all the more important because much “research” could in fact be data mining aimed at improving marketing strategies, sales, and product development: purposes far removed from those for which individuals trusted providers with their sensitive health data.

Apart from the aforementioned risks to privacy and data security, the use of aggregated health data by private commercial entities has a range of legal and ethical implications, including the potential for market abuse, unfair competition, and the lack of a level playing field. Digital health records may also be used by private entities to further their own commercial or private interests at the cost of individual and public interest. For example:

  1. An insurance company may use digital health records to profile and score individuals and offer individualised insurance contracts (as opposed to risk pooling) and premiums, which could lead to denial of coverage for high-risk individuals and volatility in premium amounts for others depending on their health data. Profiling and individualisation may raise social concerns, particularly where risk is correlated with low income and low wealth. This would undermine fairness and result in exclusion and discrimination against the individuals or groups that need insurance coverage the most.
  2. Pharmaceutical companies may use digital health records for targeted marketing of their products to doctors and patients, which may violate regulations on direct drug promotion. In jurisdictions where the digitisation of health records has been operational for years, pharmaceutical companies have used electronic health records as marketing tools directed at physicians at the point of care, with a significant impact on patient choice, safety, and expenditure on drugs. Some pharmaceutical companies and diagnostic centres may even form relationships with EHR vendors, or invest in EHR software, to push their products. Data dredging is another common practice in the pharmaceutical industry: conducting multiple analyses until one arrives at the hoped-for result, which is then reported without truthfully conveying the analytical course undertaken.

Thus, the NDHM-HDMP should clearly specify that the purposes for sharing health data are limited to medical and public health research, and should expressly prohibit the sharing of such data for insurance and other commercial purposes. As a reminder, the Digital Information Security in Healthcare Bill, 2018, which was placed in the public domain some time ago, specifically barred the monetisation of health data and the sharing of identifiable or anonymised health data with insurance companies (save for settling insurance claims), pharmaceutical companies, employers, and human resource consultants. The technical processes and anonymisation protocols to be used should also be formulated and approved prior to the implementation of the HDMP.
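As one illustration of the kind of check an approved anonymisation protocol might run before release, the sketch below tests a dataset for k-anonymity over a set of quasi-identifiers. The fields, generalised values, and threshold are invented for the example and are not drawn from the policy:

```python
from collections import Counter

def is_k_anonymous(rows: list, quasi_identifiers: list, k: int) -> bool:
    """Every combination of quasi-identifier values must appear in at least
    k rows, so that no record is unique on those fields."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

# Hypothetical release with generalised age bands and truncated pincodes.
released = [
    {"age_band": "30-40", "pincode": "1100**", "diagnosis": "diabetes"},
    {"age_band": "30-40", "pincode": "1100**", "diagnosis": "asthma"},
    {"age_band": "40-50", "pincode": "5600**", "diagnosis": "diabetes"},
]
# The third row is unique on (age_band, pincode), so 2-anonymity fails.
is_k_anonymous(released, ["age_band", "pincode"], k=2)  # → False
```

k-anonymity is only a baseline, and stronger notions exist, but the example shows why such protocols need to be specified and vetted in advance rather than left to individual data processors.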

Important Documents

  1. The National Digital Health Mission’s Health Data Management Policy (link)
  2. IFF and C-HELP Working Paper: ‘Analysing the NDHM Health Data Management Policy’ (link)
  3. Previous blogpost dated June 17, 2021 titled “Analysing the NDHM’s Health Data Management Policy: Part 1” (link)

We would like to once again thank the Centre for Health Equity, Law, and Policy for all their help and support on this paper. In particular, we would like to thank Shefali Malhotra, Research Consultant, and Shivangi Rai, Deputy Coordinator, at the Centre.
