OpenAI’s ChatGPT was made available to the public in late 2022, around the time Apple launched the iPhone 14 Pro Max. Now, almost a year and a half later, everyone is waiting to see whether Apple plans to add ChatGPT-like AI capabilities to the next iPhone, due for launch later this year. Its harshest critics believe Apple has fallen behind other big technology companies in the race to adopt generative AI and bring those cutting-edge features to its popular products, raising doubts over whether Apple can compete with Microsoft, Meta, Google, and Amazon now that the entire industry has shifted its focus to AI. The biggest question, though, remains: does Apple have an AI strategy, and if so, when will it be revealed to the world?
Since the launch of ChatGPT and its subsequent surge in popularity among the masses, nearly every tech company has positioned itself as an AI-first company. Apple, however, was one of the first major tech companies to enter the AI space: Siri, its voice-based assistant, debuted on the iPhone back in 2011. Siri opened up a new way of interacting with the phone, through your voice, and it improved over time, becoming useful for tasks like navigating Apple TV. The industry noticed the shift, and soon Google and Amazon followed with assistants of their own. Those assistants, Alexa and Google Assistant, went a step further and expanded beyond voice.
Siri, however, couldn’t keep pace as Alexa and Google Assistant got better at machine learning and became more conversational. Siri seemed stuck in the past: it understood only a limited range of requests and was often slow to respond. Then the rise of chatbots such as ChatGPT made Siri, and indeed Google Assistant and Alexa, look entirely outdated. That’s because AI chatbots are powered by what are known as large language models, or LLMs, systems trained to recognise and generate text from enormous datasets scraped off the web. In contrast, Siri is limited to basic tasks like answering “What’s the weather like in New Delhi?” or changing the music on a HomePod. And while Google and Amazon have moved on and incorporated generative AI in a big way to make their assistants more capable, Siri functions much the way it did a few years ago, as if time hasn’t moved on.
Apple has an AI strategy, but it’s different from Microsoft’s and Google’s
Google’s Pixel 8 lets you use artificial intelligence to add or remove elements from your images. (Image credit: Anuj Bhatia/Indian Express)
Microsoft’s partnership with OpenAI, the creator of ChatGPT, into which it has poured billions of dollars, and the release of Gemini, Google’s latest generative artificial intelligence system, show how big tech is capitalizing on generative AI. That leaves Apple looking vulnerable, because the company has remained relatively quiet. But Apple has often been late to adopt new technology, and that may be the case with generative artificial intelligence as well. While Apple may not be talking about AI publicly the way other companies are, it has been doing impressive work behind the scenes, and it has been signalling what it’s up to in the AI space as its researchers release papers, models, and programming libraries.
But the point many miss is that while Microsoft, OpenAI, and Google are keen to commercialise cutting-edge technology and sell it to other companies, Apple is likely looking only at incorporating more artificial intelligence into its own products, such as the iPhone and iPad. Whether Apple does that on its own or partners with another company remains to be seen.
Cupertino likes to have full control over its entire stack, from the operating system to the processors to the development tools running on every Apple device. History suggests Apple might take the first step on its own rather than with the help of a partner. However, Apple has reportedly held talks with Google about licensing its Gemini model to power AI features coming to the iPhone with iOS 18. Apple has also reportedly held similar talks with ChatGPT maker OpenAI. That suggests Apple’s own AI efforts are either not ready yet, or that the company wants to lean on Google only for cloud-based AI features.
It would be an interesting development if Apple forges a partnership with Google. But the move could raise alarm bells — particularly since Google is already under scrutiny for paying Apple billions to be its default search provider. One question that pops up is whether Apple would use Gemini until its technology is ready, or if it needs Google for the longer term.
Apple’s chatbot may live on the device, not in the cloud
Qualcomm showcased generative AI on phones and laptops during its Snapdragon Summit in Hawaii last year. (Image credit: Anuj Bhatia/Indian Express)
Chatbots such as OpenAI’s ChatGPT and Google’s Gemini, and the LLMs behind them, typically run in vast data centers with far more computing power than an iPhone or a Mac. So when you write a prompt, a chatbot like ChatGPT sends your query to a remote server, which processes the request and sends the answer back to you. This type of system has two big disadvantages. First, it slows down the chatbot’s reply, since your data has to travel to the server and be analysed there. Second, the company that runs the chatbot may claim it deletes your data rather than storing it, but there’s a good chance it keeps the data to improve the chatbot. Although AI companies say they don’t seek out personal data to train their models, privacy remains a pertinent issue in how AI models and LLMs work.
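To make that round trip concrete, here is a minimal Python sketch of what a cloud-backed chatbot client does. The endpoint URL and request/response fields are hypothetical placeholders, not any real provider’s API, but the two drawbacks described above, the added latency and the prompt leaving the device, are both visible in the flow.

```python
import time
import requests  # third-party HTTP library: pip install requests

# Hypothetical endpoint and payload shape; placeholders, not any real provider's API.
API_URL = "https://chat.example.com/v1/generate"

def ask_cloud_chatbot(prompt: str) -> str:
    """Send the prompt to a remote server and wait for the generated reply."""
    started = time.time()
    # The prompt leaves the device here: whatever the user typed now sits on the
    # provider's servers, subject to that provider's retention policy.
    response = requests.post(API_URL, json={"prompt": prompt}, timeout=30)
    response.raise_for_status()
    reply = response.json()["reply"]
    # Network transfer plus server-side inference is the latency the user feels.
    print(f"Round trip took {time.time() - started:.2f} seconds")
    return reply

if __name__ == "__main__":
    print(ask_cloud_chatbot("What's the weather like in New Delhi?"))
```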
In all likelihood, Apple would prefer to stay away from running large language models (LLMs) in the cloud because of those challenges with latency and user privacy. Instead, it could enter the AI space with a strategy of running these complex AI tools locally on the device, rather than in the cloud.
In late 2023, Apple released a paper titled “LLM in a flash,” which describes a technique for running LLMs on smartphones and laptops with limited memory by keeping model weights in flash storage and loading them on demand. Before “LLM in a flash,” Apple had released other papers that showed how the architecture of LLMs could be adjusted to reduce “inference computation up to three times… with minimal performance trade-offs.”
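A full implementation of that flash-offloading idea is beyond a short snippet, but a minimal sketch of on-device inference, using the open-source llama-cpp-python bindings and a small quantized model, shows the kind of local setup such research is meant to make practical: the prompt is processed entirely on the machine, with nothing sent to a server. The model file path below is a placeholder, and none of this is Apple’s tooling.

```python
# Minimal on-device inference sketch using the open-source llama-cpp-python
# bindings (pip install llama-cpp-python). This is NOT Apple's tooling and does
# not implement the flash-offloading technique from "LLM in a flash"; it simply
# shows local generation of the sort such techniques aim to make practical on
# memory-constrained devices.
from llama_cpp import Llama

# Placeholder path to a small quantized model file stored on the device.
llm = Llama(model_path="models/small-chat-model.q4_k_m.gguf", n_ctx=2048)

# Tokenization and generation happen locally; the prompt never leaves the machine.
result = llm("Summarise today's calendar in one sentence.", max_tokens=64)
print(result["choices"][0]["text"].strip())
```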
Apple has also released several open-source generative models in the past few months. Ferret, released in October, is a multimodal LLM that comes in two sizes, with 7 billion and 13 billion parameters. The model is built on top of Vicuna, an open-source LLM, and LLaVA, a vision-language model (VLM). Typically, multimodal models analyse an entire input image; Ferret can instead ground its responses in a specific area of the image, which makes it good at handling small objects and details within images.
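As a rough illustration of what region-grounded querying looks like as an interface, here is a toy Python sketch. The class and method below are hypothetical stand-ins, not Ferret’s actual API, and the “model” simply echoes the request to show the shape of the input and output.

```python
from dataclasses import dataclass

# Hypothetical stand-ins, not Ferret's real interface: a region-grounded VLM
# takes an image, a region of interest, and a question about that region.

@dataclass
class Region:
    x: int       # top-left corner of the box, in pixels
    y: int
    width: int
    height: int

class RegionGroundedVLM:
    """Toy stand-in for a model that answers questions about a specific image region."""

    def describe(self, image_path: str, region: Region, question: str) -> str:
        # A real model would attend to the region's pixels and generate text;
        # this stub only echoes the request to show the interface's shape.
        return (f"[answer about the {region.width}x{region.height} px region at "
                f"({region.x}, {region.y}) in {image_path}: '{question}']")

model = RegionGroundedVLM()
print(model.describe("street.jpg", Region(420, 310, 64, 48), "What does the small sign say?"))
```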
Apple also released MLLM-Guided Image Editing (MGIE), a model that can modify images based on natural language commands. Separately, its researchers have developed a family of multimodal models called MM1 (AI systems that can interpret and generate different types of data, such as text and images, at the same time), which boasts “superior abilities” and can offer advanced reasoning and in-context learning when responding to text and images.
According to a Bloomberg report last summer, Apple has been working on its own version of ChatGPT, dubbed AppleGPT, and building its own framework for large language models, called Ajax. Bloomberg also reported that Apple is working on a rival to Microsoft’s GitHub Copilot for software developers, which would use AI to predict complete blocks of code. All signs indicate that Apple is laying the groundwork to bring Gen AI capabilities to the iPhone, iPad, and Mac. However, its early efforts may focus on running LLMs right on your iPhone or Mac, rather than on building a competitor to OpenAI’s GPT-4 or its successors.
Apple CEO Tim Cook hints at the much-needed AI strategy
Smartphone makers like Samsung and Motorola are going big on generative artificial intelligence on smartphones. (image credit: Anuj Bhatia/Indian Express)
Apple is a notoriously secretive company and doesn’t spill the beans till the last moment. For a change, though, Apple CEO Tim Cook told investors in February that he sees “incredible breakthrough potential for generative AI” and that more could be expected later this year. That suggests Apple is close to revealing its AI strategy. With its Worldwide Developers Conference scheduled for June, the company may have more to share about its work on Gen AI. There, Apple is likely to overhaul the operating systems that power the iPhone and Mac and give them a Gen AI makeover. It is also likely to supercharge Siri, its voice assistant, by turning it into a ChatGPT-like chatbot powered by a large language model.
Cupertino has already started to weave “AI” into its marketing messages, as seen with the launch of the new M3-powered MacBook Air last week. For the first time, Apple is advertising the computer as AI-ready, repeatedly noting that the MacBook Air is designed to run generative AI apps like Microsoft Copilot, CapCut, Pixelmator Pro, Adobe Firefly, and more. Although it doesn’t say whether Apple’s own apps will get generative AI features, the message is loud and clear: a shift is happening.
Behind the scenes, Apple is gearing up for a big change, with its devices and apps being readied for system-wide generative AI. With Apple reportedly shelving its electric car project and moving some of those engineers to work on AI, and acquiring companies such as the Canadian startup DarwinAI, Cupertino clearly wants its products to stay relevant and boast new AI superpowers. Perhaps the biggest question for Apple is how it develops these AI technologies with a more ethical approach, while keeping its users’ privacy in mind.