IIT-Hyderabad researchers find out how Artificial Intelligence teaches itself

The IIT Hyderabad team approached the problem using causal inference, treating ANN architectures as what is known in the field as a Structural Causal Model.


Researchers at the Indian Institute of Technology (IIT) Hyderabad have developed a method for understanding the inner workings of Artificial Intelligence (AI) models in terms of causal attributes.

‘Artificial Neural Networks’ (ANN) are AI models and programs that mimic the workings of the human brain so that machines can learn to make decisions in a more human-like manner. Modern ANNs, often also called Deep Learning (DL) models, are even more complex. With DL, machines can train themselves to process and learn from data and almost match human performance. However, how they arrive at their decisions is unknown, making them less useful in settings where the reasons behind a decision must be known.


Vineeth N. Balasubramanian, associate professor in the Department of Computer Science and Engineering, IIT Hyderabad, and his students Aditya Chattopadhyay, Piyushi Manupriya and Anirban Sarkar have found answers to this question in their recent research.


“The DL models, because of their complexity and multiple layers, become virtual black boxes that cannot be deciphered easily. Thus, when a problem arises in the running of the DL algorithm, troubleshooting becomes difficult, if not impossible,” said Balasubramanian.

“If treated as black boxes, there is no way of knowing whether the model actually learned a concept or whether its high accuracy was merely fortuitous,” he remarked, adding that “due to the lack of transparency in DL models, end-users can lose their trust in the system. There is, thus, a need for methods that can access the underbelly of AI programs and unravel their structure and functions.”


The IIT Hyderabad team approached the problem using causal inference, treating ANN architectures as what is known in the field as a Structural Causal Model. Balasubramanian informed, “We have proposed a new method to compute the Average Causal Effect of an input neuron on an output neuron. It is important to understand which input parameter is ‘causally’ responsible for a given output; for example, in the field of medicine, how does one know which patient attribute was causally responsible for a heart attack? Our (IIT Hyderabad researchers) method provides a tool to analyse such causal effects.”
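To make the idea concrete, the sketch below shows one way such an Average Causal Effect could be estimated for a toy network. It is an illustrative assumption, not the researchers' implementation: the intervention do(x_i = α) is approximated by clamping one input feature to a value α while the other features keep their observed values, and the effect is read off the averaged output. The toy network f, the synthetic data X and the grid of α values are all made up for the example.

```python
# A minimal sketch (not the IIT Hyderabad team's code) of estimating the
# Average Causal Effect (ACE) of one input neuron on an output neuron.
# The intervention do(x_i = alpha) is approximated empirically: fix
# feature i to alpha, leave the other features as observed, and average
# the network's output over the data.

import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" network: a fixed two-layer MLP standing in for any ANN.
W1 = rng.normal(size=(3, 8))
b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 1))
b2 = rng.normal(size=1)

def f(X):
    """Forward pass of the toy network: tanh hidden layer, linear output."""
    return np.tanh(X @ W1 + b1) @ W2 + b2

# Observational data for the inputs (e.g., patient attributes).
X = rng.normal(size=(1000, 3))

def interventional_expectation(i, alpha):
    """Estimate E[y | do(x_i = alpha)]: clamp feature i to alpha for
    every sample, keep the remaining features, and average the output."""
    X_do = X.copy()
    X_do[:, i] = alpha
    return f(X_do).mean()

def average_causal_effect(i, alphas):
    """ACE of input i at each alpha, measured against the mean
    interventional expectation over the alpha grid (the baseline)."""
    exps = np.array([interventional_expectation(i, a) for a in alphas])
    return exps - exps.mean()

alphas = np.linspace(-2, 2, 9)
for i in range(3):
    print(f"input {i}: ACE across alphas = {average_causal_effect(i, alphas).round(3)}")
```

The baseline used here, the mean interventional expectation over the α grid, is just one simple choice; the essential point is that attribution is defined through interventions on the inputs rather than through mere correlations.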

Transparency and comprehension of the workings of DL models are gaining importance as discussions around the ethics of Artificial Intelligence grow, remarked Balasubramanian. This is especially relevant given that the European Union’s General Data Protection Regulation (GDPR) requires that an explanation be provided when a machine learning model is used to make decisions about its citizens, in any domain, be it banking, security or health.