Artificial Intelligence (AI) tools are used on computers and phones around the world to seek information, create photos and videos, and interpret large amounts of data in ways that were not possible just two years ago. AI has the potential to bring about fundamental changes in the way people live and work.
This year’s Nobel Prize in Physics recognises two scientists whose work laid the foundations of the AI revolution that is currently unfolding. John Hopfield, a 91-year-old American, and Geoffrey Hinton, a 76-year-old British-born Canadian, were on Tuesday awarded the Nobel Prize for their “foundational discoveries and inventions that enable machine learning with artificial neural networks”.
The two scientists, working separately, did most of their ground-breaking research in the 1980s, but the impact of their work is beginning to be felt only now.
The big success of Hopfield and Hinton has been in developing computer algorithms that mimic the functioning of the human brain in performing common tasks. Computers were invented to carry out repetitive calculation-based tasks that were too time-consuming for humans. But very soon, scientists began wondering whether machines could also be made to do things that humans seemed to be far better at — remembering, recognising, creating, learning, and making intelligent guesses.
AI has become common parlance now, but the term dates back to the mid-1950s, when scientists began speaking of computers as “intelligent” machines. As computers grew more powerful over the years, they accomplished increasingly complex tasks with great efficiency, and seemingly gained in intelligence. However, these were still calculation-based tasks: all that was essentially happening was that computers were calculating faster, and doing many more tasks simultaneously, than before.
Efforts to get a computer to imitate the functioning of the human brain did not make much headway until Hopfield’s revolutionary work in the 1980s. A theoretical physicist with interests in molecular biology and neuroscience, Hopfield built an artificial neural network, resembling the network of nerve cells in the human brain, that allowed computer systems to ‘remember’ and ‘learn’.
“Earlier, in 1949, the Canadian psychologist Donald Hebb had discovered that the process of learning in human beings involved permanent and irreversible changes in the synapses, or connections, between nerve cells where the communication related to the learning was occurring. Hopfield built an artificial neural network that could accomplish something similar, and this was a big breakthrough,” said Vipin Srivastava, a former professor of Physics at the University of Hyderabad, who has himself made fundamental contributions to the field.
Hopfield’s network processed information using the entire network structure, not its individual constituents. This was unlike traditional computing, in which information is stored and processed in discrete bits. So, when a Hopfield network is given new information, such as an image or a song, it captures the entire pattern in one go, remembering the connections or relationships between its constituent parts, such as the pixels of an image.
This allows the network to recall, identify, or regenerate that image or song even when an incomplete, or similar-looking, version is passed as input. Hopfield’s work was a leap towards pattern recognition in computers, the capability behind the face recognition and image enhancement tools that are common now.
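To make the idea concrete, here is a minimal sketch of a Hopfield-style network in Python. The tiny eight-unit pattern, the function names, and the update schedule are our illustrative assumptions, not details from Hopfield’s papers; real networks are far larger, but storage and recall follow the same logic.

```python
import numpy as np

# A minimal Hopfield-style network: patterns are stored as +1/-1 vectors,
# and recall means letting the network settle back to a stored pattern.

def train(patterns):
    """Hebbian-style learning: the weight between units i and j sums the
    products of their states across all stored patterns."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0)      # no self-connections
    return w

def recall(w, state, steps=10):
    """Iteratively update the network from a noisy or incomplete input."""
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1   # break ties consistently
    return state

# Store one 8-unit pattern, then recover it from a corrupted copy.
pattern = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
w = train(pattern)
noisy = pattern[0].copy()
noisy[0] = -noisy[0]            # flip one 'pixel'
print(recall(w, noisy))         # settles back to the stored pattern
```

The important design point is that the ‘memory’ lives in the weight matrix as a whole rather than in any single cell, which is why a partial or corrupted input can still settle back to the complete stored pattern.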
Hinton took forward the work of Hopfield and developed artificial neural networks that could perform much more complex tasks. While Hopfield networks could recognise simple patterns of shape or sound, Hinton’s advanced models could understand voices and pictures. These neural networks could be strengthened, and their accuracy at pattern recognition enhanced, through repeated inputs of data, a process called training. Hinton developed a method called backpropagation that enabled artificial neural networks to learn from their mistakes and improve.
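As a rough illustration of what backpropagation does (a toy sketch, not Hinton’s exact 1986 formulation; the network size, learning rate, and the XOR task are our assumptions), the snippet below pushes the output error backwards through a small network and nudges the weights to reduce it:

```python
import numpy as np

# Toy backpropagation: a network with one hidden layer learns XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass: compute the network's current guess.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: measure the output error, then propagate it
    # back to the hidden layer through the same connections.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent step: adjust every weight by its share of the error.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # typically converges towards [[0], [1], [1], [0]]
```

Each pass through the loop repeats the cycle the article describes: make a guess, measure the mistake, and propagate that mistake backwards so every weight learns how it contributed to the error.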
The process of continuous learning and improvement by training on large datasets led to the development of deep neural networks, which contain many layers of interconnected units. Hinton demonstrated that deeper networks could learn more complex features and patterns from large datasets. Deep learning is at the heart of modern speech and image recognition, translation, voice assistants, and self-driving cars.
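‘Deep’ here simply means that many such layers are stacked, each one transforming the output of the layer before it. The sketch below (the layer sizes are made up for illustration) shows this composition, which is what lets later layers capture more abstract features:

```python
import numpy as np

# Sketch of a 'deep' forward pass: each successive layer transforms the
# previous layer's output, so later layers can represent more complex
# combinations of the features found by earlier layers.
rng = np.random.default_rng(1)
layer_sizes = [16, 32, 32, 32, 8]   # illustrative sizes, not from the article
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights:
        x = np.tanh(x @ w)          # non-linearity lets layers compose features
    return x

print(forward(rng.normal(size=16)).shape)   # (8,)
```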
The power of Hinton’s deep networks was most spectacularly demonstrated at the 2012 ImageNet Visual Recognition Challenge, a competition organised to test new technologies in image recognition. AlexNet, a pattern recognition algorithm using deep neural networks developed by Hinton and his students, showed dramatic improvements in recognising images.
“It was a seminal moment in the development of artificial intelligence. Until then, the actual utility of these neural networks was not very well recognised and demonstrated. Now, of course, machine learning is being used in a wide variety of fields,” said Shravan Hanasoge of the Department of Astronomy and Astrophysics at the Tata Institute of Fundamental Research in Mumbai. Hanasoge himself uses machine learning extensively in his study of stars.
“We deal with huge amounts of data that are full of possibilities. Machine learning helps us to focus on those datasets which have greater possibilities for new or interesting information,” he said.
In 2018, Hinton was awarded the Turing Award, the most prestigious prize in computer science. In fact, Hinton’s entire work has been in computer science, unlike Hopfield, who has made contributions to physics, neuroscience, and biology. Srivastava, the former professor at the University of Hyderabad, said the Physics Nobel was relevant mainly because Hopfield’s 1982 work borrowed from some earlier breakthroughs in physics.
“Hopfield’s network was inspired by a physical system called ‘spin glass’, alloys with some very special properties. The workings of spin glass and its mathematics were mapped onto artificial neural networks,” Srivastava said.
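For readers who want the mathematics, the correspondence can be sketched as follows (notation ours, not from the article): the energy function of a Hopfield network has the same form as a spin-glass Hamiltonian, with the network’s connection weights playing the role of the couplings between atomic spins.

```latex
% Sketch of the mapping (our notation). Unit states s_i = +1 or -1 and
% symmetric weights w_{ij} define the Hopfield network's energy:
E = -\frac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j
% Compare the spin-glass Hamiltonian, with spins S_i and couplings J_{ij}:
H = -\sum_{i < j} J_{ij}\, S_i S_j
% Every update of the network lowers E, so stored memories sit at local
% minima of this energy landscape, like low-energy states of the alloy.
```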
This is not the first time that the Nobel Committee has picked a computing-related breakthrough for the Nobel Prize in Physics. In 2007, the Physics Nobel was awarded for the discovery of giant magnetoresistance, the physics behind data storage devices like hard drives.