How did Hinton go from being an enthusiastic proponent – and pioneer – of the technology, to becoming a critic? We take a look.
Who is Geoffrey Hinton?
Hinton, 75, is a UK-born researcher and academic. He earned a BA in Experimental Psychology from the University of Cambridge in 1970 and a PhD in Artificial Intelligence from the University of Edinburgh in Scotland in 1978. He has also served as a faculty member in the Computer Science department at Carnegie Mellon University in Pennsylvania, USA.
In the 1980s, most AI research in the United States was funded by the US military, and Hinton said he was opposed to contributing to research that might be used to deploy AI on the battlefield. This prompted his move to Canada.
He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto, where he is an emeritus distinguished professor and has written numerous academic papers on machine learning. Since 2013, he has been working half-time at Google as a Vice President and Engineering Fellow.
What is Hinton’s contribution to the development of AI?
In a Coursera course that Hinton taught, he explained that, normally, a computer program is written by hand for each specific task a machine must complete (such as showing the user a photo or a particular piece of text). In machine learning, by contrast, large numbers of examples are collected and fed to the machine in order to teach it to identify the correct output for a given input. “A machine learning algorithm then takes these examples and produces a program that does the job,” he wrote.
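The contrast can be made concrete with a toy sketch. The code below is purely illustrative (it is not Hinton’s code, and the “bright image” task is invented for this example): one function encodes a hand-written rule, while a simple perceptron learns an equivalent rule from labelled examples alone.

```python
# Hand-written rule vs. learned rule: a toy illustration.
# Invented task: decide whether a 2-pixel "image" is bright (both pixels on).

# 1) The traditional approach: a programmer writes the rule by hand.
def is_bright_handwritten(pixels):
    return pixels[0] == 1 and pixels[1] == 1

# 2) The machine-learning approach: collect labelled examples and let a
#    simple algorithm (here, a perceptron) produce the rule itself.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                          # a few passes over the examples
    for (x1, x2), label in examples:
        prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - prediction           # nudge weights toward the answer
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

def is_bright_learned(pixels):
    return weights[0] * pixels[0] + weights[1] * pixels[1] + bias > 0

print(is_bright_learned((1, 1)), is_bright_learned((0, 1)))  # True False
```

The point of the sketch is that nobody wrote the decision rule in the second case: the algorithm arrived at it by adjusting weights until its predictions matched the examples.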
For example, a machine can be fed thousands of images and then trained to identify what different animals or plants look like. The NYT interview notes that in 1972, as a graduate student at the University of Edinburgh, Hinton “embraced an idea called a neural network… a mathematical system that learns skills by analyzing data.” He said the aim was to solve practical problems through novel learning algorithms – inspired by how the human brain works with its networks of neurons or nerve cells.
The Association for Computing Machinery, which awarded Hinton the 2018 Turing Award for his contributions to computer science, explained that the term ‘neural networks’ refers to “systems composed of layers of relatively simple computing elements called ‘neurons’ that are simulated in a computer.” These “neurons” influence one another, and only loosely resemble the neurons in the human brain.
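A minimal sketch of that layered structure might look like the following. This is an assumption-laden toy, not code from any system mentioned here: the weights are random placeholders, whereas in a real network a learning algorithm would adjust them from data.

```python
import numpy as np

# Simple simulated "neurons" arranged in layers, each layer's outputs
# feeding (influencing) the next - the structure the ACM describes.
rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each neuron sums its weighted inputs and applies a nonlinearity.
    return np.tanh(inputs @ weights + biases)

x = rng.normal(size=4)                           # a 4-number input pattern
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # input  -> hidden layer
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)    # hidden -> output layer

hidden = layer(x, w1, b1)        # 8 hidden neurons
output = layer(hidden, w2, b2)   # 2 output neurons
print(output)
```

Stacking many such layers, with far more neurons per layer, is what gives modern networks their capacity to recognise speech and images.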
A breakthrough came in 2012, when Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, “built a neural network to analyse thousands of photos and teach itself to identify common objects”, noted the NYT report. Sutskever went on to co-found OpenAI, the company behind ChatGPT, the powerful AI chatbot capable of outperforming humans in certain exams and writing essays, and became its chief scientist. Google later spent $44 million to acquire DNNresearch, a company founded by the trio, and incorporated its technology into image search on its social network Google+.
Hinton said in a 2021 IIT-Bombay commencement address that neural networks are the best way to do speech recognition, to classify objects in images, and to do machine translation. “Neural networks with about a trillion parameters are so good at predicting the next word in a sentence that they can be used to generate quite complicated stories or to answer a wide variety of questions… These big networks are still about 100 times smaller than the human brain, but they already raise very interesting questions about the nature of human intelligence,” he said.
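The “predict the next word” task itself is easy to demonstrate in miniature. The sketch below uses a simple bigram counting model rather than a neural network (and a made-up ten-word corpus), so it shows only the task Hinton describes, not how trillion-parameter networks solve it.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word tends to follow each word
# in a tiny invented corpus, then predict the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Most frequent continuation seen in the training text.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (seen twice after 'the')
```

Repeatedly predicting the next word and appending it is, at heart, how such models generate the “quite complicated stories” Hinton mentions; the difference lies in the scale and sophistication of the predictor.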
Hinton’s profile by the UK’s Royal Society notes that the development of artificial neural networks “may well be the start of autonomous intelligent brain-like machines”.
Why has Hinton criticised the development of AI?
In his interview with the NYT, Hinton expressed concern on three major counts. First, given that tools like ChatGPT scour the internet for information and create a final output for the user, he believes the internet may soon be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
Second, over time, AI-powered tools may lead to machines taking over human jobs in ways we have not fully understood, affecting more than just entry-level positions. “It takes away the drudge work,” he said, adding, “It might take away more than that.”
He also told the BBC, “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have. We’re biological systems and these are digital systems.” This, he said, means a vast difference in capacity: machines can process large amounts of data almost instantaneously. In the future, such data could be used by “bad actors” as they wish.
And Hinton is not alone in voicing these fears. In early April, more than 1,000 technology leaders and researchers, including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk, signed an open letter calling for a six-month pause on the further development of AI systems, saying they pose “profound risks to society and humanity.”
They also raised concerns over misinformation, and said companies must develop a set of shared safety protocols for advanced AI design and development that can be overseen by independent outside experts. During such a pause, the letter said, a proper legal framework to establish the liability of companies, along with watermarking systems “to help distinguish real from synthetic” content, should be created, among other safeguards.