Author and journalist Karen Hao (right) was in conversation with Amitabh Sinha, Deputy Editor, The Indian Express. (Photo: Renuka Puri)
Journalist and AI insider Karen Hao on what is most troubling about Artificial General Intelligence, why data centres are unsustainable, how safety and accountability mean different things in the AI world and why her book, Empire of AI, is not kind to OpenAI CEO Sam Altman. The session was moderated by Amitabh Sinha, Deputy Editor, The Indian Express
I want to start by distinguishing what I mean by AI. The word ‘AI’ is like the word ‘transportation’. It can refer to anything from a bicycle to a rocket, and in the same way, AI can refer to anything from extremely small task-specific models, like an image recognition system, all the way to these extremely large, expansive, general-purpose systems like ChatGPT or Gemini. What I critique in the book is not all of AI, not every category, because that would be like critiquing physics. It is specifically these larger-scale general-purpose systems that are being championed by companies such as OpenAI, Google and Anthropic. To me, the most troubling aspect is that in their quest to build these models, they have convinced the world that this is the only type of AI technology and that it requires an extraordinary amount of resources to develop. They completely choke off our imagination about the future and the resources to build other types of futures.
AI, as a field, started in the 1950s. Back then, there was this debate that divided the field into two main branches. That debate was basically how do we define what human intelligence is? There was one camp of AI researchers that thought that we should create knowledge-based expert systems and the other group thought we should be building Machine Learning systems. Over time, the Machine Learning branch won. And this other branch was left to die. The AI that we see today is maybe 0.1 per cent of the AI that is possible.
Author and journalist Karen Hao
On the AI race narrative: The two countries that collaborate the most on AI research are the US and China. It is scientists co-authoring papers together at the biggest conferences, trading ideas on how to improve AI systems
I actually have a lot of problems with the goal of AGI in general. AGI is basically just a rebranding of the original goal of the field of AI, which was to recreate human intelligence. But, over the years, the term AI started to cheapen. It started to refer to the products and services that companies were offering. I talk in the book about these cultish ideologies that have developed in San Francisco and in the Bay Area that believe that AGI is almost like a biblical destination, where once you arrive at AGI, Earth will suddenly become either a paradise or an inferno. There’s no scientific basis for these ideas. There are these weird cults that have developed around these ideas, and they are extremely well-funded because there are billionaires who are members of these cults and who create non-profits that produce white papers that cite one another and then somehow get their ideas all the way to the highest levels of government.
The reason why OpenAI ended up going with the large language models instead of the other approach is because they did try both. They had robotics research ongoing in the lab, and it was really frustrating because when they were working with a robot hand, it would break all the time.
The main investor at that time was Bill Gates. And he was wholly unimpressed with robotics research. What he wanted was something like a scientific assistant, some kind of AI system that he could talk to and that could give him new ideas in scientific research. So the team was like, well, large language models don’t really do that, but we can make it look like they do. So they invested a lot into a demo that appeared to do this; Bill Gates was wowed, and Microsoft signed on to be a huge backer of OpenAI. Pretty much after that, all of the other research that was not large language models died and OpenAI became a large language model company.
I don’t think that would have really changed much. Because the challenge that I see with these companies, not just OpenAI, is that they’re completely anti-democratic. They make these decisions that affect billions of people around the world, but there’s no mechanism by which those billions of people can actually participate and provide feedback.
To me, openness is much more key. If OpenAI had actually released these models in a way where people can see what data they’re using to train these models, where they’re putting the data centres, how much energy and water they’re using to power and cool these data centres, and then they allowed for public contestation of these decisions and there was some kind of collective way to govern the development of this, that would fundamentally change the trajectory that we’re having.
My book is not kind to Altman. So Sam Altman did not participate in the book, and I’ve never interviewed him. I’ve asked him many times to sit for an interview with me, he’s never said no, he’s always just found an excuse.
A piece of feedback that everyone has is that sometimes you think that he’s really sincere and he’s really motivated by these bigger picture questions of how to just make the world a better place. And then other times you think that he seems to just be motivated to get more power for himself. And you’re never quite sure because everything that he does, you could see one way or another, but no one can quite pin the man down. That’s what I felt when I was watching his interviews as well. He is rhetorically savvy, always saying things that feel extremely ambiguous. He rarely ever uses concrete descriptors or numbers for anything that he says. People just don’t really know what he stands for or where he’s going. What is the ultimate objective?
There is a reason why in Mumbai two coal plants, owned by Adani and Tata, have applied for an extension: the new energy loads that data centres will bring to the city. This is directly tied to the expansion of the computational infrastructure that is needed to support the Silicon Valley vision, the vision of these empires of AI. That is going to be replicated in other cities, not just in Mumbai, because now there’s a huge amount of investment coming from Google and Microsoft. The other element is that there have been other reports where the construction itself is becoming a source of pollution, because all of the residual waste coming out of it is polluting the water and the soil. The companies do a really great job of making it appear like these systems are just ethereal and lightweight, since you’re using them on your mobile phone. It seems so easy and seamless and supported by the cloud, when really it’s supported by such expansive systems. It is literally the largest infrastructure build-out in human history. There have been comparisons to the Apollo programme that brought the first man to the moon. There’s not even a comparison. Trillions of dollars are being spent on these data centres. The largest one now being built is in Louisiana. It is one-fifth the size of Manhattan and uses the power capacity of half of New York City. This is completely unsustainable.
Author and journalist Karen Hao
On people’s resistance: We are seeing different communities mobilising against data centres, parents against the mental health harms to their children, artists protesting copyright issues, because AI touches so many things
The relationship between these companies and the Trump administration is very similar to the relationship of the East India Company and the British Crown in that they are two different entities and they actually have different goals. But in this particular moment, they’re actually very aligned. You see Trump opening as many doors for these companies as possible to get more data centres built in more places. He is literally talking about annexing territory from other countries to get more rare earth minerals for building their data centres, to get more land, to get more oil for powering these data centres. They are trying to reverse all the progress that we’ve made from empire to democracy. They’re trying to reverse it from democracy back to empire so that they can be the ones in control of all of the resources in the world, of all the data, all of the information flows, all of the mechanisms by which to manipulate people into having specific ideologies that are aligned with theirs.
Absolutely. We are seeing, in the US and around the world, an increasing failure of democratic systems to actually reflect the will of the people. In the US, the corporate capture of the government is extraordinary. AI has become the motivating force behind a new political movement building in the US, where people are starting to realise that AI is the embodiment of a deeper problem within the democratic system, where there’s such profound inequality and the people who influence the government are not in any way representative of the majority. They are asking what is fundamentally broken about our democracy and how do we fix not just the AI problem but the democratic problem itself.
I don’t think that the ship has sailed. There are reports that show that social media adoption has declined. And the fastest decline is happening among the younger generation. That signals to me that everything is cyclical. There’s no such thing as we can’t go back. Younger folks are switching from smartphones to dumb phones. They’re trying to figure out how to disconnect and reconnect with the physical environment.
At the same time, almost every single person in the world has some experience with AI. I actually see this as an asset, because that also means that every single person realises that they have a stake in the conversation. In the US, we are seeing different communities mobilising against data centres, against things like the mental health harms to their children, artists protesting copyright issues. Because AI touches so many things now, there are actually a million different points of pressure that can be applied to the system to make it fall down. That’s what gives me hope. That’s the kind of work that I’ve been thinking about. How do we connect all of these different movements together, because they’re actually all pushing against the same thing, even if their entry points are slightly different. To me, the guardrails will come from the people. They will build grassroots movements that are going to help guide this ship around.

On Sam Altman: Sometimes you think that he’s really sincere and motivated by how to make the world a better place and then other times you think that he seems motivated to get more power for himself. No one can quite pin him down
The fiercest movements of resistance have not come from the US and Europe. I write in my book about this huge resistance that happened in Chile, which is the same in other global majority countries. There is a narrative of we’re going to be left behind. There are many people that want technology development but not the kind of technology development that’s totally going to ruin their lives. There is a clarity that I see in these communities. Some of the poorest communities understand this. I also report on Kenya. In one of the poorest neighbourhoods, they want to contribute to technology development but not at the cost of the destruction of their entire communities because of the way that this industry exploits their labour. I was at the Bangalore Literature Festival in December. I met a lot of Indians who do actually feel exactly what I’m saying, and there’s a very robust and growing civil society movement in India.
About the China question, one of the things that the race narrative completely overlooks is the fact that the two countries that collaborate the most on AI research are the US and China. This is, of course, not government-to-government collaboration, this is people-to-people collaboration. It’s scientists co-authoring papers together at the biggest conferences, trading ideas on how to improve AI systems. At the moment the best open source models are coming out of China and the best closed source models are coming out of the US.
Sohini Ghosh: How do you see the contribution of safety researchers?
Safety means different things depending on who you’re talking to. In my book, I talk about two different communities that both appear to be under the safety umbrella but are actually very distinct communities. One group call themselves the AI safety community and they are two sides of the same coin as the empires themselves. They are the ones that engage in this ideology that AGI is possible and we should be preparing for it or we might end up in an inferno if it’s not carefully controlled. Then there’s the other group that sometimes people think are related to the safety community, like (computer scientist) Timnit Gebru. But I would call them the accountability community, which is more rooted in the harms that these companies are engaging in. One of the hardest things in reporting on these communities is that they look the same on the surface. It’s only when you talk to them that you realise which way they actually fall.
Absolutely, yes. These companies have positioned nuclear as the saviour for all the environmental problems, claiming that small nuclear reactors are going to fix everything, that they will bring cheap, clean energy. They’ve been doing this around the world. Right now, they just don’t have enough capacity to power the amount of data centres that they plan to build. It’s going to take at least a decade for small nuclear reactors to become viable. The reality is they’re actually being powered by gas and coal.
Karen Hao, a US-based technology journalist, is the author of Empire of AI (2025), an important book that provides a critical perspective on the manner in which the development of Artificial Intelligence (AI) has been usurped by the world’s richest companies. While offering a detailed and unflattering account of the goings-on inside OpenAI in the run-up to the release of ChatGPT in 2022, Hao makes the point that the big AI companies have turned the scientific quest for AI into an opportunity to control resources like land, energy and data, making them look like new-age empires.