
Apple, Google, and Microsoft join Anthropic's Project Glasswing to defend world's most critical software


ZDNET’s key takeaways

  • AI found thousands of hidden bugs in critical systems.
  • Tech rivals unite to secure shared infrastructure risks.
  • Cyberattack timelines shrink from months to minutes.

Today, a group of the world’s biggest tech companies is announcing what is essentially an AI-driven cybersecurity Manhattan Project. 

As the Cyberwarfare Advisor for the International Association of Counterterrorism & Security Professionals and part of the FBI’s InfraGard Artificial Intelligence Threat and Mitigation Cross-Sector Council, I’ve spent decades profiling global threats, from lecturing at the National Defense University to leading nationwide cyberattack simulations. But the arrival of a new frontier AI from Anthropic represents a paradigm shift that even the most prepared infrastructure specialists are scrambling to navigate.

There is a lot to unpack from this announcement, but before I go into the published details, I’m going to try to read between the lines. That’s because the mere existence of this announcement means there’s a lot that remains unsaid.

That all of these companies are working together is itself indicative of the scale of the threat, and of the scale of the project necessary to respond to it.

Also: AI agents of chaos? New research shows how bots talking to bots can go sideways fast

What I’m going to describe is both terrifying news and, at the same time, somewhat encouraging news. It’s worrisome because clearly our entire cybersecurity infrastructure is at great risk due to advances in weapons-grade AI. Otherwise, these fierce competitors wouldn’t be working together as announced today.

It’s somewhat encouraging because these intense competitors have chosen to work together to reduce that infrastructure vulnerability. This is wild news, folks.

Introducing Project Glasswing

Project Glasswing is described in the announcement as: “An initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks in an effort to secure the world’s most critical software.”

The name “glasswing” may mean nothing, or provide some insight into the project’s overall intent. The glasswing butterfly, native to Central and South America, is so-named because of its transparent wings that allow it to camouflage itself in its surroundings. The butterfly is also unusually resilient, able to carry up to 40 times its own weight.

Also: Why enterprise AI agents could become the ultimate insider threat

At its core, this “coalition of the willing” is planning to deploy two defensive weapons: a new, unreleased AI model called Claude Mythos Preview and a pile of cash ($4 million in direct donations and $100 million in Claude usage credits).

At first glance, this announcement looks like a highly coordinated PR strategy, some security theater. Another skeptical interpretation might be that these companies are creating a security cartel to lock out startups and other players.

But I don’t think that’s the case. Based on statements from key players and the security vulnerabilities mentioned, I think this is something far more serious than a giant corporate PR photo op to make everyone look responsible with AI.

Having spent time as an executive at Symantec and a team lead at Apple, I’ve seen firsthand how fiercely these companies guard their intellectual property. To see them hand over $100 million in credits and open up unreleased models to one another tells me the threat level has moved from competitive to existential.

Also: Stop saying AI hallucinates – it doesn’t. And the mischaracterization is dangerous

The fact is, you don’t see these specific companies cooperating like this unless the alternative is mutually assured destruction of their shared infrastructure.

And no, I don’t think that’s hyperbole.

Here’s how Elia Zaitsev, CTO at cybersecurity company CrowdStrike, described the situation: “The window between a vulnerability being discovered and being exploited by an adversary has collapsed. What once took months now happens in minutes with AI.”

If the name CrowdStrike sounds familiar, it might be because back in 2024, the company pushed an update that accidentally bypassed safeguards and crashed millions of Windows systems all across the planet. If any one company knows what a bad day feels like, it’s CrowdStrike.

According to the announcement, “We formed Project Glasswing because the capabilities we’ve observed in Mythos Preview could reshape cybersecurity.”

It’s clearly worse than we thought

Anthropic described the Mythos Preview model as a “general-purpose, unreleased frontier model” with strong agentic coding and reasoning skills. The company said, “Anthropic didn’t train it specifically for cybersecurity.”

The company also said it doesn’t plan to make Mythos Preview generally available, probably because it could be weaponized by adversarial actors.

Also: AI agents are fast, loose, and out of control, MIT study finds

According to Anthropic, “Over the past few weeks, Mythos Preview has identified thousands of zero-day vulnerabilities, many of them critical. The vulnerabilities it finds are often subtle or difficult to detect.”

Thousands. Many of those vulnerabilities sit in core, mission-critical software that has been actively deployed for the past 10 or 20 years. One was a 27-year-old bug just found in OpenBSD. For the record, OpenBSD is known for its security, and yet here was a mission-critical vulnerability nobody (at least none of the good guys) knew about.

Another example is “a 16-year-old vulnerability in a widely used video software.” Here’s the scary gotcha: the bug sits in a line of code that automated testing tools, previously considered the gold standard for security checks, analyzed five million times over the years without once catching the problem.

Think about this statement from Anthony Grieco, SVP and chief security and trust officer at Cisco, the global networking and infrastructure company that powers much of the internet and enterprise connectivity.

Grieco said, “AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back.”

Also: How Claude Code’s new auto mode prevents AI coding disasters – without slowing you down

No going back. He said, “The old ways of hardening systems are no longer sufficient. Providers of technology must aggressively adopt new approaches now.” That, he says, is why Cisco joined Project Glasswing: “This work is too important and too urgent to do alone.”

That’s a breathtaking statement, especially considering who it’s coming from.

It’s all about infrastructure

Our modern civilization is built upon a networked technology infrastructure. From giant power-generating stations all the way down to our smart rings, just about everything runs on computers and networking.

But this digital infrastructure foundation isn’t all from one company or product. In fact, a huge proportion is based on open-source software, often written by lone unaffiliated developers. Even commercial billion-dollar products use software libraries built by individual coders.

Also: How I used GPT-5.2-Codex to find a mystery bug and hosting nightmare – fast

Historically, programmers and teams have hand-tested their code and then written test suites to put their code through its paces. I do this with my open-source security product. Before I deploy an update, I test it extensively. Afterward, I often share it with a subset of users for a beta test period. Generally speaking, my product has been quite solid.

But last fall, I decided to feed the full source code to Claude Code and OpenAI’s Codex. I asked each of them for a security evaluation. Both identified vulnerabilities that my testing process missed. In fact, while both found some of the same vulnerabilities, each AI found a few that the other AI did not.

I quickly fixed the bugs the AIs identified. But what really interested me was the type of bugs identified. These weren’t bugs in the actual code itself. I didn’t make any of the classic coding errors that usually lead to vulnerabilities.

What the AIs identified were behavioral quirks that would only manifest when combined with other software and configurations, code I didn’t write. Because the AIs could look beyond the code they were asked to investigate and consider the entire infrastructure environment in which it was running, they were able to identify situational problems that could have turned into exploits.
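To make that concrete, here is a minimal, entirely hypothetical sketch (not the actual bug from my product) of the kind of code that passes its own tests yet becomes risky only in a particular deployment environment:

```python
# Hypothetical illustration: code that is "correct" in isolation but
# whose safety depends on the environment it runs in.
import os
import tempfile


def write_report(data: str) -> str:
    """Write a report, honoring an optional directory override."""
    # Looks harmless: fall back to the system temp directory.
    out_dir = os.environ.get("REPORT_DIR", tempfile.gettempdir())
    path = os.path.join(out_dir, "report.txt")
    # Unit tests of this function alone will always pass. But if some
    # OTHER component on the host lets untrusted input influence
    # REPORT_DIR, an attacker controls where this file lands -- a
    # situational vulnerability no test of this code in isolation
    # would ever surface.
    with open(path, "w", encoding="utf-8") as f:
        f.write(data)
    return path
```

The function itself contains no classic coding error; the exposure only emerges from how the surrounding system sets its environment, which is exactly the class of problem a whole-environment review can catch and a per-module test suite cannot.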

Also: I teamed up two AI tools to solve a major bug – but they couldn’t do it without me

This issue, on a much greater scale, is what Project Glasswing intends to tackle. The Project Glasswing announcement said: “No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play.”

There are hundreds of thousands of these open-source components running on billions of devices and within millions of software programs. All it takes is one vulnerability in one piece of code, and critical infrastructure could fail.

According to Igor Tsyganskiy, EVP of cybersecurity and Microsoft Research at Microsoft, “As we enter a phase where cybersecurity is no longer bound by purely human capacity, the opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented.”

A corollary is that bad actors can use AI aggressively and destructively, performing attacks at machine speed and finding vulnerabilities at a rate we’ve never encountered before.

National security concerns

This initiative can’t be viewed in isolation. To understand its relevance, we must also consider the current geopolitical situation. IT security teams have been dealing with cyberthreats for years. Whether it’s criminals out for money, hacktivists intent on disruption, or nation-states conducting a mix of data exfiltration, monetary extortion, identity theft, and infrastructure disruption, cyber threats are nothing new.

I spent years investigating a key White House email controversy for my book, Where Have All The Emails Gone?, and even then, the vulnerability of our highest offices to basic infrastructure failures was staggering. But those were human-scale errors. What Project Glasswing is fighting is a machine-speed collapse of the entire defensive perimeter.

Also: I built two apps with just my voice and a mouse – are IDEs already obsolete?

There are two very new factors in play right now. The first has been the growth of AI capabilities. While Mythos Preview is intended as a defensive tool, do not doubt that adversaries are building their own frontier models as weapons of mass digital disruption.

The second factor is the war in Iran. Back in 2012, I wrote a cyberwarfare profile of Iran, exploring its internal capabilities to wage cyberwarfare. Back then, I noted that Iran prioritizes higher education in science and math. While the Iranian government censored the internet, almost a quarter of Iranian citizens were online. Today, almost 80% are online.

My conclusion in 2012 is even more valid today. I said, “The point of all this is to showcase that Iran has substantial connectivity, resources, and educated citizenry, more than enough to fuel forays into cybercrime, cyberterrorism, and cyberwarfare itself.”

Combine that with access to frontier-level AI technology, and it’s fair to expect an intense level of cyberattacks at a rate and ferocity never seen before, leveraging exploits previously hidden in the complexity of the overall infrastructure.

Also: I used Gmail’s AI tool to do hours of work for me in 10 minutes – with 3 prompts

It’s also important to acknowledge Anthropic’s recent friction with the US government.

The Project Glasswing announcement obliquely reflects this situation: “Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities.”

This is the only time in the announcement that Mythos was described as supporting “offensive” capabilities. I invite the reader to draw their own conclusions about that detail. My take is that Mythos could be used destructively if that kind of action ever became necessary. That offensive capability may also be why Anthropic is limiting the release to a defined set of participants rather than making it available to the world at large.

The announcement also said: “Securing critical infrastructure is a top national security priority for democratic countries. The emergence of these cyber capabilities is another reason why the US and its allies must maintain a decisive lead in AI technology.”

Also: Anthropic’s new warning: If you train AI to cheat, it’ll hack and sabotage too

Earlier this year, the US government designated Anthropic as a supply chain risk. A side effect of this designation was that defense contractors were instructed to stop using Anthropic products in anything that could be tangentially considered related to government defense work.

That designation would have affected the government contracts of a number of Project Glasswing participants had they chosen to continue using Claude. However, on March 26, US District Court Judge Rita Lin blocked that restriction, temporarily allowing defense contractors to continue to use Claude AI products.

I see two possible between-the-lines reads here:

  • This announcement is timed to fall after the supply chain risk designation was blocked, and before it resumes.
  • The capabilities of Mythos Preview and the results seen in the early stages of its use are so profound that these arch competitors would have decided to use it anyway, regardless of the contractual restrictions.

This is how the Project Glasswing release explained the situation: “The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now.”

Follow the money

If you’re going to pay real attention to the infrastructure risk posed by thousands of hidden vulnerabilities, you have to take into account the individual open-source developers operating independently.

There is an enormous ecosystem built on all those individuals, each modifying and checking their own code into centralized repositories. While the nature of open source means anyone (and any company) can read the code, checking in modifications is limited to the developers with commit access to the project.

Also: Switching to Claude? How to take your ChatGPT memories with you

It is certainly possible for others to fork a project (create their own copy that is also distributed). But doing so would not immediately solve the software dependency risk. That’s because automated systems across the internet are built to incorporate known packages into their distributions, and forking a project would require all those systems to change the source of their code updates.

So, when Mythos Preview finds a vulnerability, how does it reach the proper developer for repair? Project Glasswing is taking two approaches. The first is to donate a Claude Max subscription for Claude Opus and Sonnet to any verifiable open-source developer who asks. That’s not access to Mythos Preview, but even Claude Opus 4.6 can help identify bugs. Maintainers interested in access can apply through the Claude for Open Source program.

When I asked about it, Anthropic told me, “We’ve donated $2.5M to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation to enable the maintainers of open-source software to respond to this changing landscape.”

OpenSSF is the Open Source Security Foundation. Their mission is to “Make it easier to sustainably secure the development, maintenance, release, and consumption of open-source software. This includes fostering collaboration within and beyond the OpenSSF, establishing best practices, and developing innovative solutions.”

Alpha-Omega, part of the Linux Foundation, serves: “As a helping hand and funding catalyst that supports the maintainers, communities, and ecosystems where security investment can have the greatest impact.”

The Apache Software Foundation also supports a great many projects that provide critical infrastructure across the internet.

While the funding goes to these organizations, their role in high-vulnerability projects will be to facilitate outreach to individual developers and possibly to fund the time required to implement fixes.

The challenge will be that many of the key developers for mission-critical components have other obligations and time commitments. On the other hand, if any group can wrangle these very independent developers, it’s the various open-source foundations that have been developer-wrangling ever since they got started.

Final thoughts

Jim Zemlin, CEO of the Linux Foundation, said, “In the past, security expertise has been a luxury reserved for organizations with large security teams. Open source maintainers, whose software underpins much of the world’s critical infrastructure, have historically been left to figure it all out on their own.”

Here’s something to consider. He said, “Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software.”

He also addressed the funding and time concerns. He said, “By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation. This is how AI-augmented security can become a trusted sidekick in every maintainer’s workflow, not just for those who can afford expensive security teams.”

My take is that it’s intriguing to see these arch-competitors apparently working together to solve cybersecurity issues. I’m also curious how much of this effort will prove to be acting for the cameras, and how much will genuinely strengthen our fundamental digital infrastructure.

I balance that concern with one that’s more visceral. This announcement, and the awareness of what a Mythos-style AI can do, tells us that we are at a far greater risk than even we cyberwarfare specialists had predicted. Given the volatile state of the world today, Project Glasswing could be the last best hope, or it could turn out to be just another PR effort that actually does nothing to prevent severe infrastructure disruption.

Do you see Project Glasswing as a genuine defensive effort, or more of a coordinated industry power move to control access to advanced AI security tools? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
