
Demystifying AI: Managing risks in AI and achieving its true potential

This article attempts to look at three approaches to effectively manage risks involved with the use of AI.

November 2, 2020 6:16:30 pm
Data quality and privacy issues are two major causes of risk associated with an AI program (21% of respondents in a survey conducted by Deloitte highlighted these as the biggest barriers to AI implementation). (Representational photo: Getty Images/Thinkstock)

By Ravi Mehta, Partner; Sudhi. H, Associate Director; Akshay Kumar and Prashant Kumar, both Senior Consultants; Deloitte India

There is a very powerful scene in the 2004 Hollywood movie “I, Robot” in which a highly intelligent robot attempts to save two lives at the site of a car accident. The robot, going by logic and the odds of survival, saves a cop and leaves a 12-year-old girl to drown. The protagonist of the movie, incidentally the same cop whom the robot saved, believes that he could have saved himself and that a human (not a robot) would have attended to the child and saved her, something the robot ruled out in its logic.

As this fictional story highlights, there are risks associated with the use of AI, as can be expected of any powerful and emerging technology. Hence, a fine balancing act is needed to minimize the possibility of these risks while leveraging the beneficial capabilities of AI. Data quality and privacy issues are two major causes of risk associated with an AI program (21% of respondents in a survey conducted by Deloitte highlighted these as the biggest barriers to AI implementation). This article looks at three approaches to effectively manage the risks involved in the use of AI: (1) AI at the centre, (2) Human at the centre, and (3) AI and human, together.

AI at the centre: In this approach, AI is given full authority to recommend or implement a specific course of action without any intervention or override from human users. This approach is generally useful in use cases or applications where the implication of a mistake is quite low. Examples include an AI algorithm that recommends books based on the user’s reading patterns and preferences, or one that recommends restaurants to a customer based on his/her location, food preferences and customer reviews. In both these cases, a mistake in what the AI delivers does not result in a significant negative impact on the user, and hence there is little or no need for the output from the AI to be verified or validated by humans.


Human at the centre: In this approach, a human plays the key role and has full authority to take decisions and decide the future course of action based on inputs and recommendations from an AI engine. This approach is generally taken when a contextual and judgemental view of the situation is needed and the implication of a mistake is quite high. For example, in AI-assisted medical diagnosis, a qualified medical professional validates the inputs provided by the AI engine and reaches a conclusion based on the patient’s medical history and present physical and mental condition. This approach avoids reaching a wrong conclusion when the AI engine lacks adequate contextual information.


AI and human, together: This approach requires humans and AI to work together, with humans changing the input parameters during the process to enable the AI engine to achieve the optimal result. Human and AI work in close collaboration, and the risk of a wrong outcome is mitigated by the presence and control of the human. An example of this approach is a GPS navigation system, where the driver can accept, override or refine the suggested route while the engine recalculates.


While there are three principal approaches to managing risk in an AI program, the key decision is to match the right risk-mitigation strategy to the right implementation use case. To make this decision effectively, we propose a framework that evaluates each use case or scenario across two levers: the probability of the risk occurring versus the severity of its impact. This helps plot the use cases or scenarios across four quadrants and decide on the best approach to identify and minimize risks. For example, use cases where both levers score high need strong human involvement, so the ‘Human at the centre’ approach can be considered there; similarly, the ‘AI at the centre’ approach is a viable option where both values are low.
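The quadrant logic can be sketched as a simple decision rule. The Python snippet below is purely illustrative: the function name, the 0-to-1 scales, the 0.5 cut-off and the choice made for the two mixed quadrants are assumptions for this article, not part of any formal Deloitte framework.

```python
# Illustrative sketch of the two-lever quadrant framework described above.
# Thresholds, scales and the handling of mixed quadrants are assumptions
# made for illustration only.

def suggest_approach(risk_probability: float, risk_severity: float,
                     threshold: float = 0.5) -> str:
    """Map a use case to a risk-management approach.

    risk_probability: likelihood of the risk occurring, scaled to 0..1
    risk_severity:    severity of the impact if it occurs, scaled to 0..1
    threshold:        assumed cut-off separating 'low' from 'high'
    """
    high_probability = risk_probability >= threshold
    high_severity = risk_severity >= threshold

    if high_probability and high_severity:
        return "Human at the centre"        # high-stakes: human decides
    if not high_probability and not high_severity:
        return "AI at the centre"           # low-stakes: full automation
    # Mixed quadrants: assumed default of close human-AI collaboration
    return "AI and human, together"


if __name__ == "__main__":
    # Example scores are assumed, not measured.
    print(suggest_approach(0.2, 0.1))   # book recommendations -> AI at the centre
    print(suggest_approach(0.6, 0.9))   # medical diagnosis    -> Human at the centre
    print(suggest_approach(0.7, 0.3))   # GPS navigation       -> AI and human, together
```

In practice, the scores for each use case would come from a structured risk assessment rather than fixed numbers, and the mixed quadrants may warrant a case-by-case judgement rather than a single default.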


Most organizations know that AI implementation is a challenging journey. However, many quickly find that maintaining the right balance between the risks and benefits of AI solutions is equally challenging. With the right approach to identify, minimize and mitigate risks, organizations can implement AI in the right way and realize its optimal benefits (and hopefully make the right choices in challenging and rapidly evolving situations, similar to the one the protagonist of ‘I, Robot’ found himself in).


