IFF's Comments on MeitY's Draft National Robotics Strategy

IFF submits its comments on MeitY’s recent Draft National Robotics Strategy, highlighting concerns and making rights-affirming recommendations on the use of robotics in healthcare, and the need for a human rights impact assessment before a nation-wide roll-out.

01 November, 2023
8 min read

tl;dr

The Ministry of Electronics and Information Technology ("MeitY") released its draft National Robotics Strategy ("Strategy") in early October 2023 and invited comments until October 31, 2023. The Strategy highlights four "priority sectors" for adopting robotics in India: manufacturing, agriculture, healthcare, and national security. It focuses on building innovation and capacity in these sectors through robotic automation; in healthcare specifically, the proposed use cases include patient monitoring, surgery, telemedicine, and palliative care. Implementing robotics in domains that may directly affect human dignity and privacy raises several alarms. In our submissions, we analyse the use cases suggested in Chapter 6.2 ('Healthcare') of the Strategy by dividing them into a) infrastructural and b) patient-facing concerns, and suggest rights-affirming amendments to the draft.

Infrastructural concerns

In 2019, the World Health Organization ("WHO") issued its recommendations on digital interventions for health system strengthening, which list indicators for assessing the impacts of artificial intelligence and automation on health systems. One such indicator is 'feasibility': factors such as resources, infrastructure, and training requirements determine the feasibility of implementing a digital intervention such as robotics. The Strategy points to a few incubators and dedicated research centres for robotics instituted across India to accelerate indigenous manufacturing and innovation (ARTPARK, CAMRAS, IHFC, DRDO). However, the Strategy also accepts that India currently lacks the infrastructure to efficiently integrate robotics into the four identified sectors. It rightly addresses India's shortage of skilled human resources, low manufacturing capacity, high costs, technological limitations, absence of multidisciplinary collaboration, lack of awareness, and limited governance mechanisms. In addition, we recommended that the Strategy also examine India's health infrastructure on the following factors:

  1. Substitutability: WHO recommends that digital health technologies ("DHT") such as robotics complement and enhance health system functions, and not replace or substitute fundamental components such as the health workforce, financing, leadership and governance, and access to medicines. Further, new technology must not jeopardise the provision of high-quality non-digital services in places where DHT cannot be deployed. This means, for instance, that a diagnostic or surgical robot should not be treated as a substitute for a healthcare professional, or presented to the public as one, but can be a tool used by the professional. The availability of robots performing similar functions should not mean that patients cannot opt for the services of a professional while retaining service quality, and where robotics cannot be integrated, the quality of the same service should not suffer. The Strategy suggests deploying robots for minimally invasive surgery, but studies from around the world comparing robotic surgery to conventional surgery fail to show any superiority of the former. An in-depth needs assessment for robotic surgery may be difficult to conduct in India, as health systems do not expend time or energy monitoring post-surgery care. Therefore, as a baseline rule, we suggested that robotic surgery should not be deployed in place of a doctor.
  2. Added administrative burden: Automation is typically introduced in environments where human resources are burdened or spread thin, which is true of the Indian health system. However, global experience has shown that the introduction of robots in healthcare often fails to save labour and instead burdens health workers with additional responsibilities. In Japan, where robots are a commonly deployed DHT, caregiving robots were found to require care themselves: they had to be moved around, maintained, cleaned, booted up, operated, repeatedly explained to residents, constantly monitored during use, and stored away afterwards. A growing body of evidence from other countries suggests that robots tend to create more work for health workers. The learning curve is steeper in countries like India, where a workforce with limited digital exposure will require more extensive and continuing capacity building to deploy robots.
  3. Access: In moving towards universal health care, India must prioritise making health services accessible to all. Access includes physical, social, and financial access. A global survey report cautions that AI and robotics must be deliberately steered towards making healthcare more accessible and affordable, as such technologies can easily become the province of the well-off. The Strategy should attempt to democratise the availability and use of robotics with the same vigour as it democratises their creation and innovation.
  4. Capacity building: Though the Strategy concedes that the Indian health workforce is not adequately trained to adopt robotics, it should make constructive recommendations on training. Continuing medical education for health workers is essential to an evolving health system; solutions like bridge courses, training modules, and specialisations in AI and automation can therefore go a long way. Workers interacting with robots may not understand, to any appreciable degree, how they work, at least during the initial roll-outs. The opacity of automation can also make it difficult for health workers to ascertain how a system arrived at a decision or how an error might occur, and harder still to relay this to the patient. Therefore, the Strategy should place capacity building as its highest priority.
  5. Risk mitigation and management: Studies suggest a definite possibility of increased risk of infection from robotic instruments. A 2017 Japanese assessment reported higher levels of protein contamination and residue on robotic instruments as compared to other instruments, and found that it is virtually impossible to completely remove protein from surgical instruments, which could expose patients to unknown organisms and prion-based diseases. This raises alarms for patient safety. Further, robots are machines prone to breakdowns and malfunctions. An assessment of the FDA's adverse event reports on robotic surgery found that out of 10,624 reports, 1,535 (14.4%) involved significant negative patient impact, including injuries (1,391 cases) and deaths (144 cases), while 8,061 (75.9%) involved device malfunction. It is pertinent for the Strategy to identify risk mitigation measures, build an accountability framework for harm caused by automated decision-making, and equip health workers to prevent or minimise such occurrences.

Patient-facing concerns

Internationally recognised shortcomings of using robotics in healthcare include algorithmic bias, the opacity and unintelligibility of AI systems, the undermining of patient-clinician relationships, the potential dehumanisation of health care, and the erosion of physician skill. Factors relevant to the Indian context, which should be reflected in the Strategy, are given below:

  1. Accountability: A 2023 study shows that malpractice claims involving robot-assisted surgical procedures in the United States have increased by more than 250% over the past seven years, with the most common claims being negligent surgery and misdiagnosis/failure to diagnose, and 30% of total claims relating to informed consent. If robots are deployed in the health system unsupervised, liability becomes difficult to establish. The Strategy, in line with the Indian Council of Medical Research ("ICMR") ethical guidelines on the use of AI in health, should require a human-in-the-loop so that patients have a legal claim and redress mechanism available in case of harm caused by automated decision-making.
  2. (Un)informed consent: The AMA study cited above notes that when an AI device is deployed, the user (doctor, nurse, health worker) may not be able to accurately present information to the patient due to a variety of factors: fear of or mistrust in DHTs, overconfidence, lack of knowledge, or confusion. The principle of taking informed consent before medical interventions requires the user to be sufficiently knowledgeable to explain to patients how the robot or AI device will work. Automated decision-making can be opaque and difficult to understand, and doctors may not be able to explain how an algorithm arrived at its output. As seeking informed consent is a medical grundnorm, the Strategy must address its significance and re-emphasise the need for personnel training.
  3. Patient mistrust: A 2016 survey of 12,000 people across 12 European, Middle-Eastern, and African countries found that only 47% of respondents would be willing to have a robot "perform a minor, non-invasive surgery instead of a doctor," with the number dropping to 37% for minimally invasive surgeries. On further questioning, only 9% and 6% of respondents were willing to let a robot 'stitch and bandage a minor cut or wound' and 'set a broken bone and put it into a cast' respectively. These findings indicate that a sizeable proportion of the public is uneasy about or mistrustful of robotics in healthcare. In India, given the high incidence of digital illiteracy and general mistrust in DHT, these numbers are likely to be even lower. We recommended rolling out robotic interventions in a staggered and progressive manner, while arming health workers with information and education that they can clearly and transparently relay to their patients, so as to gradually build trust. Requiring doctors to be in the loop during the initial phases of robotic surgeries can be effective and reassuring to patients.
  4. Surveillance and privacy: The use of robots for monitoring and communicating with patients can imply constant audio-visual surveillance, which may lead to data collection, whether by design or by accident. Especially in palliative care, e-surveillance and monitoring robots could result in unwanted supervision occurring without the consent or knowledge of older persons. As the Strategy recommends patient monitoring and voice recognition as opportune use cases, it should also address the surveillance and privacy concerns associated with them. At the outset, excessive (and incessant) data collection violates internationally accepted privacy standards. We recommended against robotic surveillance of patients generally, or at least until India's data protection laws are implemented and strengthened.
  5. Dehumanisation of palliative and elderly care: The Strategy highlights an urgent need for individualised support and long-term care for older persons, as India faces advanced population ageing in the coming decade. To policymakers across the world, the merits of deploying robots in end-of-life or palliative care include assistance and support to overworked care staff and minimised instances of abuse, violence, or maltreatment of older persons. Interactions with robots, such as social companion robots, could also benefit the physical and emotional well-being of the elderly. However, findings from the field differ. A report of the UN Secretary-General on the role of new technologies in the realisation of economic, social and cultural rights suggests that overreliance on technology can dehumanise palliative or elderly care. DHTs such as robots may undermine the autonomy and independence of older persons and create new forms of segregation and neglect, especially among older persons abandoned in their private homes or deprived of human interaction. Further, as noted above, caregiving robots themselves require care and add to the administrative burden of caregivers. The Strategy must ensure that robotics deployed to assist older persons do not perpetuate dependency and indignity, or act as substitutes for human care.
  6. Legal-ethical compliance: The ICMR ethical guidelines on AI in healthcare obligate all AI interventions to comply with the ethical principles of responsible AI, trustworthiness, data privacy, optimisation of data quality, and accessibility. They further mandate that health workers retain strict control over medical decision-making, safety, and risk minimisation even when AI is employed responsibly. In the abovementioned report, the UN Secretary-General urges governments to adopt legislative and regulatory frameworks that adequately prevent and mitigate the various adverse human rights impacts linked with the use of automation and artificial intelligence in the public and private sectors. The Strategy must therefore establish legal-ethical safeguards for human rights, including transparency and accountability measures.

General comment

Countries must undertake Political, Economic, Social, Technological, Legal, and Environmental (PESTEL) analyses or human rights impact assessments before implementing new digital technologies. The UN High Commissioner for Human Rights has stated the need to "address the human rights challenges raised by digital technology", and the UN Guiding Principles on Business and Human Rights obligate businesses to identify, assess, and address their negative human rights impacts by conducting human rights due diligence. The Strategy must envision a thorough, nationwide human rights impact assessment in the four identified sectors, and proceed with the integration of robotics only on the basis of those findings.

Important documents

  1. IFF’s submissions on the Draft National Robotics Strategy (link)
  2. Draft National Robotics Strategy (link)

