Nvidia on Tuesday (November 25) said its chip technology is “a generation ahead” of the industry, responding to concerns that its dominance of AI infrastructure may be threatened by Google’s foray into AI chips.
“We’re delighted by Google’s success — they’ve made great advances in AI and we continue to supply to Google,” Nvidia said in a post on X. “NVIDIA is a generation ahead of the industry — it’s the only platform that runs every AI model and does it everywhere computing is done.”
The post came after Nvidia’s shares fell 3 per cent in Tuesday’s trade, following a report by San Francisco-based tech portal The Information that Meta, a big Nvidia customer, could strike a deal with Google to use its tensor processing units (TPUs) in Meta’s new data centres.
Two developments have raised concerns among market observers:
First, whether the reported Meta deal, long a theoretical possibility but now closer to reality, may be a sign that Google is shaking off Nvidia’s dominance and could finally pull ahead. Nvidia secured a $5 trillion valuation in late October, while Google and its parent Alphabet are set to breach the $4 trillion mark this month.
Second, whether the recent slide in Nvidia stock, which had propelled the US tech-stock rally to new highs, could reverse those gains, even as investors conduct a reality check on the meteoric surge in the company’s valuation in recent months.
Nvidia: early mover, serendipitous rise
Nvidia’s rise has been serendipitous: the chips giant went from a niche graphics-chip designer to an AI titan because its gaming-focused chips turned out to be better at crunching data in the early stages of LLM (large language model) training.
The euphoria over the potential of artificial intelligence has so far driven up demand for Nvidia’s chips and propelled its stock to record highs. Its GPUs, such as Hopper and the new Blackwell chips, are more flexible and powerful than Google’s TPU, which belongs to an entirely different chip category, the ASIC (application-specific integrated circuit), designed for a narrow, specialised function.
“NVIDIA offers greater performance, versatility, and fungibility than ASICs,” Nvidia said in its post. Nvidia currently holds more than 90 per cent of the market for artificial intelligence chips with its graphics processors, AI analysts say, but Google’s in-house chips have become a viable alternative to the Blackwell chips, which are expensive to run but very powerful and capable of parallel processing.
Unlike Nvidia, Google has so far not sold its TPU chips to other companies, but used them in-house, while allowing some of its clients to rent them through the Google Cloud platform. The first big signs of a shakeup came earlier this month, when Google released Gemini 3, an upgraded AI model trained on the company’s own TPUs, not Nvidia GPUs. The Meta deal report added further fuel to market rumours.
Google: Decade-long TPU push
On whether Google could finally pull ahead, a security specialist working for one of the ‘magnificent seven’ American tech companies told The Indian Express, “It might be too early for a direct comparison between (Google’s) TPU and Nvidia (GPUs): cost vs performance, etc. Though more suppliers of accelerated compute are welcome. Nvidia has a 70 per cent margin: that was never going to last.”
The expert also flagged the challenge posed by TSMC, the Taiwan-based foundry specialist that runs the world’s most integrated fabs, which can manufacture the most advanced chips. “So it may be more a matter not of design, but where you are in the queue and whether you can get enough chips made. Google will also have to get into the same queue to get it made… Another significant point to consider is that TSMC is being cautious. They are not building fabs like crazy. They know if the bust happens, they’ll be left with no orders and lots of idle capacity. That could be a limiting factor (for Google and other new entrants),” the expert said. It is not yet known whether the TPU stack is more energy-efficient than GPUs or offers any other operational benefits.
For Google, TPUs are not an entirely new venture that the markets have not already factored in. Alphabet has been developing these tensor chips over the last decade and selling access to them as part of its cloud offerings for at least five years.
However, Nvidia may yet have the edge over Google simply because it provides an entire software ecosystem in addition to the chip hardware. Alongside its GPUs, Nvidia offers an Application Programming Interface (API), a set of defined instructions that enables different applications to communicate with each other, called CUDA. CUDA enables developers to write parallel programs that run on GPUs, and is currently deployed in supercomputing sites around the world. Nvidia also has a foothold in the mobile computing market with its Tegra processors for smartphones and tablets, as well as products for vehicle navigation and entertainment systems.
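To give a flavour of the kind of parallel programming CUDA enables, here is a minimal, textbook-style vector-addition sketch (illustrative only, not code referenced in this article; the kernel name and sizes are made up). Each GPU thread computes one element of the result, which is the parallelism that makes GPUs suited to AI workloads:

```cuda
#include <cstdio>

// Illustrative kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // one million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory: buffers accessible from both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();            // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Such a program would be compiled with Nvidia’s `nvcc` compiler and requires an Nvidia GPU with the CUDA toolkit installed, which illustrates the lock-in: code written this way runs only on Nvidia hardware.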
Understanding the chip ecosystem
If TSMC is the most important backend player in the semiconductor chips business, Nvidia (with Intel, AMD, Samsung and Qualcomm) is at the front end.
Traditionally, the CPU, or central processing unit, has been the most important component in a computer or server, and Intel and AMD have dominated the market. GPUs are relatively new additions to the computer hardware market and were initially sold as cards that plugged into a personal computer’s motherboard to add computing power to an AMD or Intel CPU.
Nvidia’s main pitch over the years has been that graphics chips handle the surge in computation required by high-end graphics for gaming or animation far better than standard processors. AI applications, which rely on tremendous computing power, have progressively become GPU-heavy in their backend hardware.
Thus, most advanced systems used for training generative AI tools now deploy as many as half a dozen GPUs to every one CPU used, completely changing the equation in which GPUs were seen as add-ons to CPUs. Nvidia now dominates the global market for GPUs.
Google now faces an uphill challenge in breaking into this market with its specialised chip, given manufacturing constraints and the remarkable stickiness of Nvidia’s ecosystem approach. Even if hyperscalers and others want to use Google’s chips, can those already invested in Nvidia’s ecosystem easily switch over? That is the big question.