If the year 2024 started with the promise of generative AI, 2025 could be more about testing the use cases of some of these AI tools – and about finding answers to the quintessential question of monetising AI.
Going forward, there could be three broad templates for the most compelling use cases yet.
One is a clear focus on AI agents – artificial intelligence tools that can handle multi-step chores such as onboarding clients, approving expenses and not just routing but actually responding to customer-service requests, all with minimal human intervention. While OpenAI Chief Executive Officer Sam Altman termed AI agents “the next giant breakthrough”, San Francisco-based Salesforce Inc. has already started inking deals with some 200 companies to install its workplace AI agent, Agentforce. “We’re really at the edge of a revolutionary transformation… This is really the rise of digital labor,” Salesforce CEO Marc Benioff said on the software company’s most recent earnings call. While Salesforce launched Agentforce in September, San Francisco-based Anthropic launched its own agentic product a month later, followed by a Microsoft launch in November. OpenAI is set to unveil an agent at a research preview in January. Agentic AI is clearly projected to become more widespread in 2025.
Second, if monetising AI is a big question, San Francisco-based Perplexity AI has some answers. In November, Perplexity – a conversational ‘answer engine’ that uses large language models (LLMs) to answer queries using sources from the web and citing links within the text response – launched a user-focused shopping hub in a bid to beef up its platform as it attempts to take on Google’s dominance in the search engine space.
The idea is simple but functional. For instance, a user can take out her phone, point it at a product she wants, and simply ask the Perplexity AI bot: where can I buy this? The answer is generated right there, alongside options to buy the product and the different prices at which it is offered across shopping websites.
Or, if a user were to ask Perplexity to help build her a library, or to buy all the items needed to host a party, the AI bot would list all the product options right there, alongside reviews hoovered up from different parts of the web. The user can then go further and complete the purchase, all within the Perplexity app.
Backed by Amazon founder Jeff Bezos and chipmaker Nvidia, Perplexity plans to double down on its shopping feature by showing users product cards – relevant items surfaced in response to shopping-related questions, with each card presenting product details in a visual format. Bezos’s backing only helps the start-up as it works to monetise the AI-curated shopping experience.
Third is a doubling down on improving the AI interface by the segment leaders – a handful of companies such as OpenAI, Google, Meta, xAI and Anthropic that have converged at the top of the current generation of LLMs. In early December, OpenAI announced the full availability of its Canvas tool, a day after it launched its AI video generator, Sora. Canvas, introduced by OpenAI in October as an editing tool for writing and coding, is a notebook-style interface that sits beside the user’s ChatGPT conversation and lets users edit responses and “collaborate” with ChatGPT. New to Canvas is the ability to get feedback and edits in the form of comments, from which users can make changes based on ChatGPT’s suggestions.
Google too has a pitch in this segment, with the promise to launch Gemini 2.0, its most capable model that it says is built for the ‘new agentic era’. With new advances in multimodality — like native image and audio output — and native tool use, Google said the new launch “will enable us to build new AI agents that bring us closer to our vision of a universal assistant”.
All this comes at a time when most of the segment leaders feel that progress on AI is going to get harder, with the low-hanging fruit gone and the climb getting steeper in 2025. Foundational models are likely to get better at reasoning, completing a sequence of actions more reliably in a more agentic way, Google CEO Sundar Pichai said in a conversation at a New York Times event earlier this month. That could mean a tapering off of the incremental returns AI models achieved through 2024 by scaling up capacity and throwing in more compute power with extra GPUs. This has prompted growing concern in Silicon Valley that AI’s rapid progression is losing steam.
One of the properties of machine learning is that the larger the model and the more data it can be fed, the smarter it gets over time – a phenomenon described by the Scaling Law for neural language models. Since the AI race came into the open with the launch of the ChatGPT AI-powered chatbot on November 30, 2022, the evidence has been clear: as developers scale up the size of the models and the amount of training data, performance improves almost automatically. That theory is now coming under challenge.
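The scaling-law relationship the article describes is often summarised as a power law: test loss falls as model size grows, but each doubling of scale buys a smaller absolute gain. A minimal illustrative sketch follows – the functional form is the commonly cited Kaplan-style power law, but the constants here are made up for illustration and are not fitted to any real model family:

```python
# Illustrative sketch of a neural scaling law: loss falls as a power
# law in model size N. The constants below are hypothetical, chosen
# only to show the shape of the curve, not taken from any real model.

def scaling_law_loss(n_params: float, alpha: float = 0.076,
                     n_c: float = 8.8e13) -> float:
    """Approximate test loss of the form L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

# Each 10x jump in parameters still lowers loss, but by less each
# time -- the "diminishing returns from scale" the article refers to.
for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> loss ~ {scaling_law_loss(n):.3f}")
```

Run as-is, the loop shows the loss shrinking with every tenfold increase in parameters, while the gap between successive lines narrows – the same levelling-off pattern the article attributes to recent model generations.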
The first indication that things are turning is the lack of progression between newly updated models: the delta between successive models is diminishing perceptibly, and performance improvements are progressively levelling off. To that extent, the generational leap that was expected to bring us closer to AGI, or artificial general intelligence, may now have to be scaled back.
One hitch in all of this could be data – a key component of that scaling equation. There is only so much data in the world that can be fed into a system, and experts have long speculated that companies would eventually hit what is called the data wall. No amount of improvement in computational capability, chip production or data-centre build-outs can get past it.
So AI companies have been turning to so-called synthetic data – data created by AI that is then fed back into an AI system. But that could create problems of its own, given that AI is an industry where the principle of garbage in, garbage out applies forcefully. And while the pre-training phase is nearing saturation, the next stage of AI evolution – post-training, or inference – might not require as much compute power, as the industry gets down to optimising smaller amounts of data and using it to generate high-quality, very specific output. The new reasoning models, which are able to think before they answer, are the newest leg of the AI race. The year 2025 could show whether AI acceleration has tapped out and whether the search for use cases becomes more tangible.
According to James Brundage, EY Global and Americas Technology Sector Leader: “Looking forward, 2025 will be the year technology companies need to translate the promise of AI into both top-line and bottom-line results for customers and investors alike. It will be a pivotal year for the tech industry to drive AI value while also effectively communicating the ROI and new business model impacts to stakeholders. This will allow the tech industry and their customers to continue in the evolution of GenAI.”