ZDNET’s key takeaways
- Canonical lets you pick and choose how you’ll use AI.
- In Ubuntu, AI is built into key features and optional AI tools.
- While Microsoft keeps control for itself, Canonical puts you in charge.
In a new blog post, Jon Seager, Canonical’s VP of engineering for Ubuntu, explained how the company is baking AI into its Linux desktop and server experience in Ubuntu Linux 26.04 and beyond. Unlike Windows, where Microsoft is slapping its Copilot label on everything, Canonical cooks AI into its Linux distro on open terms: open models where possible, local inference by default, and no rebranding of the distro into an AI product.
Seager explained that Canonical is “ramping up its use of AI tools in a focused and principled manner.” That approach means a clear preference for open‑weight models whose license terms align with Ubuntu’s long‑standing open‑source values, coupled with open‑source harnesses and tooling. The Canonical developer teams are encouraged to adopt the tools that make sense for them, as long as they choose a single tool consistently at the team level.
Also: Built for a hostile internet: Canonical VP of Engineering on Ubuntu 26.04 LTS
He stressed that Ubuntu is not being repositioned as an AI product, but that “thoughtful AI integration” will make the operating system more capable and efficient for people who already rely on it. Internally, Canonical plans to educate its engineers on where AI genuinely adds value, and to avoid crude metrics like “how much AI did you use,” focusing instead on quality, control, and reviewability of AI‑assisted work.
Implicit vs. explicit AI
A core part of Ubuntu’s framework is the distinction between “implicit” and “explicit” AI features. Implicit AI features will run largely in the background, enhancing Linux’s existing capabilities. This is the kind of improvement you’ll experience as “the system just works better” rather than as a new AI product. For example, Ubuntu 26.04 boasts first‑class speech‑to‑text and text‑to‑speech, better screen reading, and other accessibility improvements powered by local models.
Also: The new rules for AI-assisted code in the Linux kernel: What devs need to know
Explicit AI features, by contrast, will arrive as new, opt‑in capabilities that clearly present themselves as AI‑driven. These features could include generative text tools in productivity workflows, agentic helpers for tasks such as file or project management, and dedicated interfaces for interacting directly with models. Seager describes this approach as phased: first, quietly improving what Ubuntu already does, then layering on “AI‑native” workflows for users who actively want them.
Don’t want these AI-enabled programs? Fine. You don’t have to use them. Good luck trying that with Windows 11.
Ubuntu is all about running AI locally. Canonical wants most Ubuntu AI features to default to on‑device inference. This approach makes these features usable offline, potentially more private, and less dependent on proprietary cloud backends. It will also make them much cheaper to use.
Also: Linux explores new way of authenticating developers and their code – how it works
This approach dovetails with Canonical’s existing work on tuned kernels, hardware enablement for GPUs and accelerators, and partnerships with silicon vendors. Seager described this as the foundation for efficient local inference on ordinary Ubuntu installations.
Accessibility is one of the first concrete targets for this AI push. Seager highlights system‑wide speech‑to‑text and text‑to‑speech, plus richer screen reader capabilities, not as flashy “AI add‑ons” but as core OS functions. Looking ahead, he wrote, “What today seems like it’s only possible with access to a frontier AI factory will become significantly more accessible in the coming months and years.”
Beyond individual features, Canonical is pushing forward an Ubuntu that can act as a safer home for AI agents and agentic workflows. Seager says users are increasingly accustomed to working with agents and that he “loves the idea” of making the accumulated power of Linux more accessible via agent‑driven interfaces. The goal is a “context‑aware OS” in which agents can reason about the user’s environment and tasks while being constrained by Ubuntu’s existing security model.
Also: ‘Like handing out the blueprint to a bank vault’: Why AI led one company to abandon open source
Here, Snap, Ubuntu’s default application container approach, becomes Canonical’s way of securing AI agents. With Snap, agents will run sandboxed, blocked from accessing restricted data and resources. Canonical is exploring ways to integrate such workflows “in a way that feels tasteful, aligned with our user base and respectful of our privacy and security values,” explicitly acknowledging community anxiety about heavy‑handed AI.
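Snap already expresses this kind of scoping declaratively. As a rough illustration, consider the snippet below. The app name and plug choices are made up for this example, but `confinement: strict` and interfaces such as `home` and `network` are standard snapcraft concepts. A sandboxed agent's packaging might declare something like:

```yaml
# Hypothetical snapcraft.yaml fragment for a sandboxed AI agent.
# "example-agent" and its plug list are illustrative, not a real snap.
name: example-agent
base: core24
confinement: strict        # deny-by-default sandboxing
apps:
  example-agent:
    command: bin/agent
    plugs:
      - home               # user files only via this mediated interface
      - network            # explicit, auditable network access
```

Under strict confinement, anything not granted through a plug is simply unavailable to the agent, which is what makes the permissions both tightly scoped and auditable.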
With Microsoft making AI a marquee branding term, Seager is at pains to differentiate Ubuntu’s approach. He rejects the idea of measuring Canonical staff by the volume of AI output, and he says the company is not planning to “force” AI on users or turn Ubuntu into an AI‑first product. At the same time, he is frank about AI’s impact on engineering work, noting that while Canonical does not intend to replace people with AI, an engineer skilled with AI tools certainly could outperform one who is not.
One thing users should not expect is a universal “AI kill switch.” Seager argues that, “honestly,” such a switch would be complex to implement, given that some AI functionality will blur into background system improvements rather than discrete apps. Instead, the emphasis is on keeping AI features constrained, auditable, and aligned with open‑source expectations, while allowing Ubuntu to evolve in a world where AI is rapidly becoming part of the baseline of modern computing.
Windows AI vs. Ubuntu AI
Canonical is explicitly biasing Ubuntu toward open‑weight models, open‑source harnesses, and model licenses that align with long‑standing free‑software values, rather than just grabbing whatever performs best on benchmarks. Mind you, as Seager observed, “access to model weights is meaningful, but it is not equivalent to the sort of transparency the open source community has become accustomed to.” He added that Canonical will choose models based on license terms, not just performance.
Microsoft’s mainstream AI push, by contrast, is anchored in proprietary cloud services, such as Copilot for Microsoft 365 and Azure OpenAI. Yes, Microsoft will let you use many models, but only if Microsoft acts as the gatekeeper. You can only use AI in Windows on Microsoft’s terms, including its pricing, policies, and telemetry.
Canonical’s plan for Ubuntu is to make local inference the default. Ideally, all AI‑enhanced OS features should run on devices offline, with clearly defined interfaces that are used only when an external service is genuinely needed. That approach plays to Linux’s strengths, such as hardware tuning and GPU/accelerator enablement, while keeping your data and workflows on your machines.
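Canonical hasn't published an API for this, but the pattern Seager describes, local by default, with a clearly defined interface that is used only when an external service is genuinely needed, might look like the following sketch. Every name here (`LocalModel`, `RemoteService`, `infer`) is a hypothetical stand‑in, not a real Ubuntu or Canonical interface:

```python
# Illustrative sketch of "local inference by default". All classes and
# functions here are hypothetical stand-ins, not a real Canonical API.

class LocalModel:
    """Stand-in for an on-device, open-weight model."""

    def generate(self, prompt: str) -> str:
        # A real implementation would run local inference on a tuned
        # kernel with GPU/NPU acceleration; we just echo for illustration.
        return f"[local] {prompt}"


class RemoteService:
    """Stand-in for a clearly defined external-service boundary."""

    def generate(self, prompt: str) -> str:
        raise RuntimeError("network use must be an explicit opt-in")


def infer(prompt: str, allow_remote: bool = False) -> str:
    """Prefer on-device inference; only cross the network boundary when
    the user has opted in AND the local path is unavailable."""
    try:
        return LocalModel().generate(prompt)
    except Exception:
        if not allow_remote:
            raise
        return RemoteService().generate(prompt)


print(infer("summarise this document"))
```

The point of the sketch is the shape of the design: the remote path is a separate, named component that the rest of the system can only reach through one explicit, opt‑in parameter, which is what makes the data flow easy to see and to limit.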
Microsoft’s strategy has been “cloud first”: Copilot in Windows and Microsoft 365 is fundamentally tied to cloud‑hosted models and data processing, even when some client‑side NPUs get involved. That connection makes it easier to roll out features at scale. However, the approach also centralizes data and compute, increases vendor dependence, and makes it harder for users to understand or limit where their data flows.
As Seager pointed out, Ubuntu splits AI into “implicit,” quietly improving existing capabilities like speech‑to‑text, screen reading, and other accessibility tools, and “explicit,” new, clearly labeled AI workflows or agents that users can choose to adopt. This split is all about AI making Ubuntu “meaningfully more capable” without turning it into an AI‑branded product or forcing AI on users who want a stable Linux desktop.
Microsoft’s stance, on the other hand, is all about pushing AI into the default user experience. For instance, Copilot appears directly in the Windows shell and Microsoft 365 apps. In addition, Microsoft is exploring always‑on agents inside 365. There, agentic AI will act as an operational layer for office workflows. That shift is great if you’ve already bought into Microsoft. And, obviously, lots of people are OK with that stance — more fool they from where I sit.
Also: The best free AI for coding – only 3 make the cut now
However, being tied to Microsoft means you must interact with AI by default, not by a considered opt‑in. Are you good with that? Will you still be OK with it as AI costs surge higher?
Canonical’s AI story leans heavily on using Ubuntu’s existing security primitives, especially Snap confinement, to give AI agents tightly scoped permissions, clear auditability, and different “grades” of access from read‑only analysis through to controlled write access. The idea is a “context‑aware OS,” in which agents can be powerful, but they run inside transparent, open‑source sandboxes that users and auditors can inspect.
Also: I tried a command-line-only distro that can seriously improve your Linux skills
Microsoft’s agentic direction is more focused on integrating agents directly into business workflows, such as Microsoft 365 agents that can act across mail, documents, and line‑of‑business systems. That integration is great for automation, but harder for users to understand. Governance lives in policy consoles and connectors that IT admins configure, not in a user‑visible, open security model that can be independently examined and forked.
Canonical positions Ubuntu as a low‑friction platform for local AI experimentation and open‑source workflows. With Ubuntu, it’s easy for developers to swap out models, frameworks, and tools. This approach makes it easier for teams to prototype with local models, vector databases, and agent frameworks, and, crucially, to avoid vendor lock-in during the experimentation phase.
Microsoft’s strength is massive distribution and integrated tooling. But that same integration makes it more likely that early experiments become long‑term dependencies on Microsoft’s stack, with data, workflows, and governance all tied to the same vendor.
If you care about open models, local control, and the ability to see and shape how AI is wired into your system, Ubuntu is your friend. Microsoft’s model is compelling for tightly coupled, cloud-first enterprise workflows, but it trades openness and portability for lock-in, deep integration, and convenience. I know which model I’ll be using.