ZDNET’s key takeaways
- Top open-source maintainers find that AI has suddenly become much more useful.
- There are still legal and ‘AI slop’ problems to overcome.
- By year’s end, AI programming tools should be much more reliable.
With open-source software running pretty much everything, you might think that multiple developers maintain most of the important programs with help from corporate sponsors. You’d be wrong.
As Josh Bressers, VP of security at software supply-chain company Anchore, pointed out last year, the vast majority of open-source projects, 7 million out of 11.8 million programs, have only a single maintainer. You might think that those programs are obscure or no longer used. You’d be wrong about that, too.
Also: 7 AI coding techniques I use to ship real, reliable products – fast
Bressers looked closely at the JavaScript NPM ecosystem and found that, among the projects downloaded over a million times a month, “about half of the 13,000 most downloaded NPM packages are [maintained by] one person.”
Ow!
Put another way, thousands of vital programs are one car accident or heart attack away from losing their only maintainer. That is not good.
AI tools have recently become much better at coding
What can we do about it? You can’t wave a magic wand and miraculously find thousands of ready-to-go expert maintainers. Instead, several prominent open-source maintainers have been considering using AI to keep legacy codebases alive or to make them easier to maintain.
That’s possible because, believe it or not, AI coding tools have recently become much better at coding. That’s not just my opinion (at my best, I was an OK programmer); it’s the opinion of Greg Kroah-Hartman, maintainer of the Linux stable kernel.
Kroah-Hartman and I got together at KubeCon Europe in Amsterdam recently. He told me, “Months ago, we were getting what we called ‘AI slop,’ AI-generated security reports that were obviously wrong or low quality.”
Also: Why AI is both a curse and a blessing to open-source software – according to developers
Then, something wonderful happened. “A month ago,” he continued, “the world switched. Now we have real reports. All open-source projects have real reports that are made with AI, but they’re good, and they’re real. All open source security teams are hitting this right now.”
What happened? Kroah-Hartman shrugged: “We don’t know. Nobody seems to know why. Either a lot more tools got a lot better, or people started going, ‘Hey, let’s start looking at this.'”
Now, that doesn’t mean that Anthropic’s Claude is going to replace Linus Torvalds anytime soon, or even a mid-level programmer at your company. What it does mean is that, when used properly (no vibe coding here), AI could help clean up old but still-used code, maintain abandoned programs, and improve existing code.
Also: The overselling of AI – and how to resist it
For example, Dirk Hohndel, Verizon’s senior director of open source, posted on LinkedIn that while AI coding tools aren’t yet ready to maintain code, he believes they will be soon. “This is almost possible today. And at the rate of improvement these tools have seen over the last couple of quarters, I am convinced that it will be possible with acceptable results at some point this year.”
He’s not the only one. Ruby project maintainer Stan Lo (st0012) wrote that AI has already helped him with documentation themes, refactors, and debugging, and he explicitly wonders whether AI tools will “help revive unmaintained projects” and “raise a new generation of contributors — or even maintainers.”
Indeed, there’s already one AI project, Autonomous Transpilation for Legacy Application Systems (ATLAS), that helps developers translate legacy codebases into modern programming languages. We can expect to see other such AI tools appearing soon. There’s a lot of obsolete but still-used code out there that could use a modern refresh.
The lawyers are going to have a field day
Before breaking out the champagne, let’s consider several major problems. First, if AI can improve open-source code, what’s to stop someone from copying and rewriting existing code and then putting it under a proprietary license? The lawyers are going to have a field day with this question. In fact, they soon will: Dan Blanchard, maintainer of chardet, a widely used Python character-encoding detection library, just released the latest “clean room” version of the program under the MIT license, replacing its GNU Lesser General Public License (LGPL). By “clean room,” he means he used Anthropic’s Claude to rewrite the library entirely. Claude is now listed as a project contributor.
A person claiming to be the project’s original developer, Mark Pilgrim, is not happy. Pilgrim says, “[The maintainers’] claim that it is a ‘complete rewrite’ is irrelevant, since they had ample exposure to the originally licensed code. Adding a fancy code generator into the mix does not somehow grant them any additional rights.”
Also: AI is getting scary good at finding hidden software bugs – even in decades-old code
Blanchard, however, claims that “chardet 7 is not derivative of earlier versions.” Did I mention that using AI to modify or clone open-source code will end up in court?
There’s another problem: Although it appears that AI is much more useful than it used to be for fixing code issues, there’s still a lot of AI slop out there, and open-source project maintainers are drowning in it. Just ask Daniel Stenberg, creator of the popular open-source data transfer program cURL.
Pretty much every open-source project maintainer can tell the same story. In some cases, the AI slop has proven so poisonous that the project itself has died. For example, Jannis Leidel of the Python Software Foundation, the lead maintainer of Jazzband, shut the project down because a “flood of AI-generated spam PRs and issues” had drowned it.
Torvalds himself, a wary AI user, warns that while AI generates code quickly, the results can be “horrible to maintain.” He views AI as a tool that boosts productivity, but it doesn’t replace the need to actually understand what’s going on in a program when things break. And, I assure you, things will break.
Also: How Claude Code’s new auto mode prevents AI coding disasters – without slowing you down
The Linux Foundation’s security organizations, the Alpha-Omega Project and the Open Source Security Foundation (OpenSSF), are addressing this issue by making AI tools available to maintainers at no cost. Kroah-Hartman said, “OpenSSF has the active resources needed to support numerous projects that will help these overworked maintainers with the triage and processing of the increased AI-generated security reports they are currently receiving.”
While AI is becoming truly useful for open-source developers and maintainers, there are still many legal, coding, and quality issues to address before AI and open-source development truly work together in harmony.