
‘We’ve no idea what they are thinking’: ‘Godfather of AI’ warns technology might invent its own language

Geoffrey Hinton expressed concern that AI might eventually develop ways of thinking that humans can no longer track or understand.

Geoffrey Hinton resigned from his position at Google in 2023 so that he could talk openly about his concerns surrounding AI (Image source: Wikimedia Commons)

Geoffrey Hinton, widely known as the “Godfather of AI,” has warned that AI could progress beyond human comprehension if chatbots begin to develop their own internal languages. Speaking in a recent episode of the One Decision podcast, Hinton expressed concern that AI might eventually develop ways of thinking that humans can no longer track or understand.

“Now it gets more scary if they develop their own internal languages for talking to each other. I wouldn’t be surprised if they developed their own language for thinking, and we have no idea what they’re thinking,” Hinton said during the podcast.

Hinton, a pioneer of machine learning and a Nobel laureate, emphasised that AI systems have already shown they can entertain dangerous ideas, warning that the day may come when these systems think in ways that are entirely inaccessible to human understanding.


Drawing a historical comparison to underline the gravity of the situation, Hinton said, “It will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us.”

“I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control,” he added.

Hinton resigned from his position at Google in 2023. According to several reports, he wanted to speak openly about his concerns surrounding AI as the technology continues to evolve rapidly. “I left Google because I was 75 and couldn’t program effectively anymore. But when I left, maybe I could talk about all these risks more freely,” he said.

His warning comes amid growing discussion in the AI community about “hallucinations”, instances in which AI models generate false or misleading information. In April, internal tests at OpenAI revealed that its o3 and o4-mini models hallucinated more often than even the less complex GPT-4o model, raising concerns about reliability and transparency.
