Artificial Intelligence (AI) is a global technological wave, and there is no disputing that it has entered the Indian market. India has not gone as far as granting citizenship rights to a robot (case in point: Sophia, in Saudi Arabia), but personalised chatbots have flooded the market, AI has forayed into medicine, and it is also being used to protect cyberspace.
With greater exploration of the AI space, the world is moving towards a goal of near-complete automation of services. At the same time, most AI-advanced countries, such as Canada, have insisted on an element of end-to-end human involvement to ensure the accountability and security of AI systems. AI is wholly based on data generated and gathered from various sources; hence, a biased data set could lead to a biased decision by the system, or to an incorrect response from a chatbot.
Pratik Jain, co-founder of Morph.ai, a Gurgaon-based AI startup, says that if a chatbot deployed by a business does not respond correctly, a human fallback is provided to correct the error, based on the data generated and provided by the business.
On 1 February 2018, Finance Minister Arun Jaitley stated that the government think-tank Niti Aayog would lead the national programme on AI. In keeping with this objective, the government is set to support startups and centres of excellence in AI training and research. It is thus established that AI is here to stay for the long run, whether in the form of smart assistants like Alexa, Natural Language Processing (NLP) systems that parse sentence structure and emotion, or machine learning platforms.
However, despite all these established entry modes into the global market, AI is yet to be given a guidepost, be regulated, or even be legally understood. Take the example of Sophia, awarded citizenship under the laws governing citizens of Saudi Arabia: will she be permitted to drive from June 2018? Will she be allowed to purchase property? And if she were to commit a crime matching her statement, apparently made in error, that she wanted to destroy humankind, what punishment would be awarded?
The point is that AI is growing manifold, and we still do not know all the advantages or pitfalls associated with it. That is why it is of utmost importance to have a two-layered protection model: one, technological regulators; and two, laws to control AI actions and to fix accountability for errors.
Let us take the example of AI in the form of personalised chatbots. Chatbots are chat-based interfaces that pop up on websites for customers to interact with. A chatbot can either follow a scripted text or, through machine learning (ML) and increased interaction, deviate from the standard questions to provide a more human-like conversation. In the course of communicating with the chatbot, if a person were to divulge sensitive personal information for any reason whatsoever, what happens to this data?
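The distinction between a scripted chatbot and an ML one, and the human fallback described above, can be made concrete with a minimal sketch. The following Python fragment is purely illustrative; the script entries, the card-number pattern, and all function names are hypothetical and not drawn from any real product:

```python
import re

# Hypothetical scripted chatbot: it answers only from a fixed script,
# hands unknown queries to a human fallback, and naively flags messages
# that appear to contain sensitive personal information.

SCRIPT = {
    "hours": "We are open 9 am to 6 pm, Monday to Saturday.",
    "price": "Plans start at Rs 499 per month.",
}

# Very rough pattern for a 16-digit payment card number; real systems
# would need far more robust detection of sensitive personal data.
CARD_PATTERN = re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")


def contains_sensitive_info(message: str) -> bool:
    """Flag messages that appear to contain sensitive personal data."""
    return bool(CARD_PATTERN.search(message))


def scripted_reply(message: str) -> str:
    """Return a scripted answer, or hand off to a human fallback."""
    if contains_sensitive_info(message):
        return "Please do not share sensitive details in this chat."
    for keyword, answer in SCRIPT.items():
        if keyword in message.lower():
            return answer
    return "Transferring you to a human agent."  # human fallback
```

An ML chatbot, by contrast, would generate replies outside this fixed script, which is precisely why it may collect information the deploying business never anticipated.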
Disclosure of sensitive personal information in the digital space would fall within the purview of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011. Rule 5(3) of the 2011 Rules states that,
‘While collecting information directly from the person concerned, the body
corporate or any person on its behalf shall … ensure that the person concerned is having the knowledge of —
(a) the fact that the information is being collected;
(b) the purpose for which the information is being collected;
(c) the intended recipients of the information; and
(d) the name and address of —
(i) the agency that is collecting the information; and
(ii) the agency that will retain the information.’
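The disclosures Rule 5(3) requires can be thought of as a checklist that a collection system should record before accepting any data. The sketch below models that checklist in Python; the field names are purely illustrative, as the Rules prescribe no particular format:

```python
from dataclasses import dataclass


@dataclass
class ConsentNotice:
    """Records the disclosures Rule 5(3) requires before collection.

    Field names are illustrative only, not prescribed by the Rules.
    """
    fact_of_collection_disclosed: bool  # 5(3)(a): collection is disclosed
    purpose: str                        # 5(3)(b): purpose of collection
    intended_recipients: list           # 5(3)(c): who will receive it
    collecting_agency: str              # 5(3)(d)(i): name and address
    retaining_agency: str               # 5(3)(d)(ii): name and address

    def is_complete(self) -> bool:
        """True only if every required disclosure has been made."""
        return (self.fact_of_collection_disclosed
                and bool(self.purpose)
                and bool(self.intended_recipients)
                and bool(self.collecting_agency)
                and bool(self.retaining_agency))
```

A scripted chatbot can present such a notice up front; an ML chatbot that strays from its script may collect data for which no such notice was ever completed, which is where the accountability question below arises.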
So, in the case of an ML chatbot that does not work to a scripted text and has collected sensitive personal information, who is responsible if Rule 5(3) is breached? The most obvious answer would be the business unit or company, because the 2011 Rules refer to “the body corporate or any person who on behalf of the body corporate” collects information. However, could the business avoid liability by claiming that it was not aware that the chatbot, through its machine learning ability, had collected sensitive personal information?
We do not have any clear provisions for advanced chatbots that do not work on a scripted text, and in the absence of a clear provision in the law, accountability may take a hit. Further questions follow: what happens if an AI robot is given citizenship in India? Who is responsible for its actions? And in the case of an autonomous car accident, who is responsible for damage to property, or for harm caused to, or the death of, a person?
Overall, our laws will eventually need to be amended, or new laws for AI technologies and processes adopted, to fill the existing lacunae in the growing AI space. Before taking up this arduous task, however, it would be simpler to frame basic national-level guidelines that any AI activity must meet, whether indigenous, foreign, or a modification of an open-source AI. These guidelines would serve as the foundation for any amendments to existing laws or for brand new AI laws.
The present debate about AI pits human redundancy against the evolution of technology. Either way, the reality is that AI has entered the market and, pros and cons aside, the need of the hour is to anticipate the problems and have solutions ready to deal with them in advance.