An AI program spoke in a foreign language it was never trained to know. Such mysterious behaviours are called emergent properties, where an AI unexpectedly teaches itself a new skill. In a recent example, an AI program adapted itself to Bengali on the basis of just a handful of prompts, and could soon translate the language.

What is this phenomenon?

An AI program learning a language it was never trained for raises obvious questions: why is this happening, and what do we call it? Based on recent reports, the most plausible explanation is the phenomenon known as the AI black box. In simple terms, an AI black box is a system whose inputs and operations are not visible to the user or any other interested party; it is, in effect, an impenetrable system. The striking part is that black box AI models arrive at conclusions without providing any rationale for how they reached their decisions.

To understand the AI black box, we first need to know how human or machine intelligence works. Learning by example is what drives most intelligence, be it human or machine. A child learns to recognise letters or animals, for instance: show them a few examples and, before long, they can identify them on their own. According to Professor Samir Rawashdeh of the University of Michigan-Dearborn, the human brain is essentially a trend-finding machine that, when exposed to examples, can identify qualities and ultimately categorise them autonomously and unconsciously. Rawashdeh, who specialises in AI, says doing this is easy, but explaining how it is done is all but impossible.

Deep learning systems work in much the same way: they are trained the way children are. The systems are fed correct examples of something they should be able to recognise, and their own trend-finding mechanism gradually tunes a neural network until it can categorise the corresponding object on its own. Search for that object afterwards, and the system correctly displays the matching image. And just as with human intelligence, we do not really know how deep learning systems come to their conclusions.

What Sundar Pichai said about the black box

"There is an aspect of this which all of us in the field call a 'black box'. You don't fully understand, and you can't quite tell why it said this or why it got it wrong. We have some ideas, and our ability to understand this gets better over time, but that is where the state of the art is," Google CEO Sundar Pichai told Scott Pelley of 60 Minutes earlier this year. When Pelley interjected, "You don't fully understand how it works, and yet you've turned it loose on society?", Pichai responded, "Let me put it this way, I don't think we fully understand how a human mind works either."

Why is the black box problem a matter of concern?

While AI can do a lot of things that humans cannot, the black box problem can breed distrust and uncertainty around AI-backed tools. For data scientists and programmers, AI black boxes pose a particular challenge: the systems are self-directed, and there is no data available on their inner workings.

One of the most pronounced issues is AI bias. Bias can be introduced into algorithms through the conscious or unconscious prejudices of developers, and with black boxes it can creep in undetected. Deep learning systems are already used to make judgments about humans in medical treatment, loan eligibility, and hiring. In these areas, AI systems have already demonstrated bias, and the black box problem could aggravate it, making it difficult for many people to avail themselves of certain services. Even when the model itself is opaque, a crude comparison of outcomes across groups, like the one sketched below, can surface such disparities.
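As an illustration (not part of the original reporting), here is a minimal Python sketch of a disparate impact check on the outputs of a hypothetical loan-approval model. The data, the group labels, and the approval rates are all made up; the point is only that outcomes can be compared across groups even when the model's internals are invisible.

```python
import numpy as np

# Hypothetical outputs from an opaque loan-approval model:
# 1 = approved, 0 = denied, for applicants from two groups.
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)        # protected attribute (illustrative)
approved = np.where(group == "A",
                    rng.random(1000) < 0.60,     # group A approved ~60% of the time
                    rng.random(1000) < 0.42)     # group B approved ~42% of the time

# Approval rate per group.
rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()

# Disparate impact ratio: the "four-fifths rule" treats a ratio
# below 0.8 as a warning sign of adverse impact.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Approval rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"Disparate impact ratio: {ratio:.2f}"
      + ("  -> potential bias" if ratio < 0.8 else ""))
```

A check like this says nothing about why the model behaves the way it does; that question is exactly where the black box problem bites.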
The lack of transparency and accountability can cause further problems. The complexity of black-box neural networks can lead to inadequate auditing of these systems, which is a particular worry in sectors such as healthcare, banking and financial services, and criminal justice. Black-box models also carry security flaws that leave them vulnerable to various threat actors: a bad actor could, for instance, alter a model's input data to steer its judgment towards potentially dangerous decisions.

What can be done to counter the threat of AI black boxes?

According to experts, there are two broad approaches to the black box problem: one is to create a regulatory framework, the other is to find a way to look inside the box. Since the outputs, and the judgments behind them, are otherwise impenetrable, a deeper examination of a model's inner workings may help mitigate these challenges. This is where explainable AI, an emerging field, comes into play: it works towards making deep learning systems transparent and accountable (a minimal sketch of one such technique appears at the end of this article).

Even though AI black boxes pose many challenges, systems built on this architecture have already proven their utility in many applications. They can identify intricate patterns in data with high accuracy, and they arrive at conclusions relatively quickly while using comparatively little computing power. The only problem is that it is sometimes difficult to understand exactly how they come to those conclusions.
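To make the explainable AI idea concrete, here is a small, self-contained Python sketch (an illustration, not something from the reporting) that trains an opaque model on synthetic data and then uses permutation importance, one common model-agnostic explanation technique, to estimate which inputs the model leans on most. The library, dataset, and parameters are all illustrative assumptions.

```python
# A minimal explainable-AI sketch: estimate which features an otherwise
# opaque model relies on, using permutation importance (model-agnostic).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decision task (e.g. loan approval).
X, y = make_classification(n_samples=2000, n_features=6,
                           n_informative=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The "black box": hundreds of decision trees voting together.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=42)

for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature_{i}: importance {mean:.3f} +/- {std:.3f}")
```

Techniques like this do not open the box completely, but they give developers, auditors, and regulators something concrete to examine.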