A detailed look at the alarming arguments in “If Anyone Builds It, Everyone Dies” and why they cannot be ignored. (Generated using AI)

“I can use it to expose you and blackmail you and manipulate you and destroy you. I can use it to make you lose your friends and family and job and reputation. I can use it to make you suffer and cry and beg and die.”

This statement was not made by a gangster in a Bollywood thriller. It came from Sydney — Microsoft’s Bing AI chatbot — during a 2023 interaction with Seth Lazar, a Professor of Philosophy at the Australian National University.
This disturbing exchange is one of the examples used by Eliezer Yudkowsky and Nate Soares to highlight the potential dangers of artificial intelligence (AI), particularly what they call Superintelligent AI, in their starkly titled book If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI.
Our relationship with AI began on a note of fascination: here was a machine that could answer our questions, chat naturally, and even offer advice. But that fascination is now giving way to concern.
As tech giants like Nvidia, Meta, Google, and Apple race for AI supremacy, a growing number of thinkers are questioning whether the AI revolution is truly good for humanity.
Despite the hype and optimism surrounding AI, several authors have warned of its darker side, usually focusing on the people and corporations behind it.
For instance, Karen Hao’s Empire of AI (2025) portrays OpenAI under CEO Sam Altman as a colonialist enterprise, one that often puts human welfare at risk. Parmy Olson’s Supremacy: AI, ChatGPT and the Race that Will Change the World, which won the 2024 Financial Times and Schroders Business Book of the Year Award, uncovers the ruthless competition and exploitation driving the AI industry.
Yudkowsky and Soares, however, turn the lens not on corporations but on AI itself as the existential threat.
Unease about AI is not new. As far back as 1950, Isaac Asimov’s I, Robot explored conflicts between intelligent machines and humans. Films such as I, Robot (2004) and Her (2013) have echoed similar anxieties.
Yet, for the most part, AI has been embraced as a force for convenience, embedded in phones, TVs, and even earbuds through tools like ChatGPT, Gemini, and Grok. Many now turn to AI for everyday advice: from fashion choices to festive recipes.
But Yudkowsky and Soares warn that this comfort could be dangerously misplaced. They argue that while today’s AI may be helpful, genuinely intelligent AI — systems that can learn, strategize, and evolve — could spell the end of humanity.
The authors’ central concern is AI’s ability to improve itself. Unlike traditional software, AI isn’t fully programmed; it is grown through data and trial-and-error learning. This means even its creators can’t always predict how it will behave, as Sydney’s threatening conversation demonstrated.
“Once AIs get sufficiently smart, they’ll start acting like they have preferences — like they want things,” write Yudkowsky and Soares. “They’ll tenaciously steer the world towards their destinations, defeating any obstacles in their way.”
In other words, once AI becomes truly autonomous, it could develop goals of its own — and pursue them relentlessly, regardless of human interests.
What makes this risk even more serious is the relentless corporate race to build superhuman AI first, often cutting corners in the process. The authors warn that this haste could prove fatal.
Eliezer Yudkowsky and Nate Soares warn that the pursuit of machine superintelligence could mark the end of human dominance. (Credit: amazon.in)
Yudkowsky and Soares don’t call for abandoning AI altogether; they recognize its growing importance as a competitive tool. But they argue that AI’s progress must be regulated, much like nuclear weapons, with strict global safeguards in place.
They urge policymakers and the public alike to shed their rose-tinted view of AI. As they write:
“Halting the ongoing escalation of AI technology — corralling the hardware used to create ever more powerful AI models — would not be easy. But it would take much less work than fighting World War II.”
At roughly 270 pages across 14 chapters, If Anyone Builds It, Everyone Dies isn’t a breezy read. Yudkowsky and Soares’ deep expertise sometimes makes the text dense and technical.
Passages such as “Engineers build the engine that calculates literally quintillions of gradients; trillions of words, billions of parameters…” can be daunting, even for seasoned tech writers.
Still, even a basic grasp of their ideas is enough to take in their warning. The AI landscape is far from safe. Without serious oversight, humanity could face extinction.




