Schmidt led Google from 2001 to 2011, before handing the reins back to the search giant’s co-founder Larry Page (Image source: ericschmidt.com)

The former CEO of Google has sounded the alarm over the dangers of artificial intelligence models and how they could be hacked, issuing a warning to the industry.
Eric Schmidt, who led Google for a decade in the early 2000s, said during a conference last week that AI models are susceptible to hacking if they fall into the wrong hands.
“There’s evidence that you can take models, closed or open, and hack them to remove their guardrails. During training, they learn a lot of things. A bad example would be that they learn how to kill someone,” Schmidt said at the Sifted Summit tech conference in London.
“All of the major companies make it impossible for those models to answer that question,” he continued, referring to the potential of a user asking an AI how to kill. “Good decision. Everyone does this. They do it well, and they do it for the right reasons,” Schmidt added. “But there’s evidence that they can be reverse-engineered, and there are many other examples like that.”
While AI models and agents are improving at tasks like coding, reasoning, and bug detection, they remain vulnerable to hacking and jailbreak attacks. In fact, many experts expect AI models to become potential cybersecurity weapons. Because generative AI systems can be adapted and retrained, there is a real risk that, without proper safeguards, they could be repurposed to generate and spread malicious code at scale.
Schmidt is not the first high-profile tech executive to warn that AI models are vulnerable to hackers and could even be trained to harm humans if they fall into the wrong hands.
A few years ago, Geoffrey Hinton, often called the “Godfather of AI”, also highlighted the potential threats posed by AI chatbots like OpenAI’s ChatGPT.
“Right now, they’re not more intelligent than us, as far as I can tell,” Hinton said in a 2023 interview with the BBC. “But I think they soon may be.”
Eventually, he warned, this could lead to AI systems developing objectives of their own, such as: “I need to get more power.”
“I have come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” Hinton said. “We are biological systems, and these are digital systems. The big difference is that with digital systems, you can have many copies of the same set of weights, the same model of the world.”
Despite the risks, Schmidt remains optimistic about AI, calling it an “underhyped” technology with enormous economic potential.
“I wrote two books with Henry Kissinger about this before he died, and we came to the view that the arrival of an alien intelligence—not quite human but more or less under our control—is a very big deal for humanity, because humans are used to being at the top of the chain,” Schmidt said. “I think so far, that thesis is proving out—that the capabilities of these systems are going to far exceed what humans can do over time.”
“Now the GPT series, which culminated in a ChatGPT moment for all of us, where they gained 100 million users in two months, which is extraordinary, gives you a sense of the power of this technology. So I think it’s underhyped, not overhyped, and I look forward to being proven correct in five or ten years,” he added.
While many now see a country’s economic progress as closely tied to AI, debate continues in Silicon Valley over whether AI companies are overvalued.
Last week at OpenAI’s DevDay, CEO Sam Altman told the BBC: “I know it’s tempting to write the bubble story. In fact, there are many parts of AI that I think are kind of bubbly right now.”