ZDNET’s key takeaways
- Google’s Gemini will power Apple’s Siri on the backend.
- The goal is to drive a more advanced and personalized Siri.
- However, Siri still needs to be more reliable and less prone to errors.
With pressure on Apple to finally get Siri right, the company is turning to its arch-rival in business for help.
In a joint statement released Monday, Apple and Google announced a multi-year partnership in which Google’s Gemini and cloud technology will power Apple Intelligence features, most notably a more advanced (and more personalized) Siri, which is expected to launch this spring.
In the statement, Apple referred to Google’s AI as the most capable platform for Apple Foundation Models, the framework that lets developers access and run large language models (LLMs) directly on device.
“The next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology,” the companies said. The idea is to use Gemini’s advanced AI capabilities on the backend, while relying on Apple’s own local models and Private Cloud Compute service to ensure that your conversations remain secure and protected on your device.
How it works
Siri would rely on Google’s advanced LLMs for more natural and fluid conversations. LLMs are trained on vast amounts of data to learn how to process language and sound more human-like in their responses. Late to the AI game, Apple has struggled to develop sufficiently advanced AI and LLMs of its own, forcing it to rely on tools from other companies, most notably OpenAI’s ChatGPT.
With Gemini on the backend, Siri should be able to act more like an advanced chatbot. Among the specific features in store, App Intents will enable Siri to work with Apple’s own apps and those from third parties, while “personal context knowledge” will allow Siri to perform tasks based on its awareness of the data and preferences on your device.
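For developers, App Intents is Apple’s existing Swift framework for exposing app actions to Siri and the system. As a rough sketch of what that integration looks like (the intent name and behavior here are hypothetical illustrations, not details from the announcement), a minimal App Intent is declared like this:

```swift
import AppIntents

// Hypothetical example: a minimal intent Siri could invoke by voice.
// The name and behavior are illustrative, not from Apple's announcement.
struct OpenFavoritesIntent: AppIntent {
    // The phrase the system shows (and Siri can match) for this action.
    static var title: LocalizedStringResource = "Open Favorites"
    static var description = IntentDescription("Shows the user's favorite items.")

    // Called when Siri or Shortcuts runs the intent.
    func perform() async throws -> some IntentResult {
        // App-specific logic would go here.
        return .result()
    }
}
```

Because intents like this already describe an app’s actions in a structured way, a more capable Siri backend can, in principle, chain them together to carry out multi-step requests.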
Another skill, called on-screen awareness, will enable Siri to “see” and work with what’s on the screen based on your request. One more trick is “World Knowledge Answers,” in which Siri would operate like a regular search engine as it scours the web to answer your question or request.
When to expect the new Siri
Reports about the new and improved assistant, known as LLM Siri, began to appear in late 2024. At the time, Apple watcher Mark Gurman said that this upcoming version was already being tested internally on iPhones, iPads, and Macs as a standalone app.
The goal was to launch the new Siri sometime in the spring of 2026. So far, that timeframe appears to be on track. The latest reports suggest that the new Siri will debut sometime in March with iOS 26.4.
The new features and skills on Siri’s to-do list all sound interesting and potentially useful. But the key question is whether the new Gemini integration will help Apple’s error-prone and beleaguered assistant escape its problematic past.
Far too often, Siri falls short of expectations: it fails to respond to requests, misunderstands what a user says, or provides incorrect answers. These problems are especially annoying when you’re in the car trying to get driving directions and Siri keeps giving you the wrong information.
Those of us who’ve used Siri for years just want a chatbot that works. Apple has promised that before and failed to deliver. With Gemini on the backend, we’ll see if Siri finally gets that much-needed improvement.