2026-02-23

The Open Source Flood: How AI Coding Tools Are Breaking What They're Supposed to Fix

AI coding tools promised to help open source projects overcome resource constraints. Instead, they've triggered a flood of low-quality contributions that's forcing maintainers to rethink decades-old open-door policies.

By intelliBrain
open-source · ai-coding · developer-tools · software-quality · agentic-ai

AI coding tools were supposed to be a lifeline for open source. With fewer resources than Big Tech, open source projects seemed perfectly positioned to benefit from AI-assisted development: cheaper code, faster features, lower barriers for new contributors.

In practice, things haven't worked out that way.

A wave of recent accounts from prominent open source maintainers tells a more complicated story — one where AI tools help experienced developers significantly while threatening to overwhelm projects with low-quality AI-generated submissions.

The Quality Problem

Jean-Baptiste Kempf, CEO of the VideoLAN Organization behind the widely used VLC media player, is blunt: "For people who are junior to the VLC codebase, the quality of the merge requests we see is abysmal."

Kempf isn't opposed to AI coding tools in general — he sees clear value for experienced developers. But the lowered barrier to entry has produced a flood of contributions from people who don't fully understand what they're submitting.

The Blender Foundation is facing the same issue. CEO Francesco Siddi says LLM-assisted contributions have typically "wasted reviewers' time and affected their motivation." Blender, a 3D modelling tool maintained as open source since 2002, is still developing an official policy — but for now, AI coding tools are "neither mandated nor recommended for contributors or core developers."

The pattern is consistent: AI tools make it trivially easy to generate code that looks plausible but isn't correct, well-tested, or maintainable. And open source reviewers — volunteers who do this work in their spare time — end up absorbing the cost.

Closing the Open Door

The problem has gotten severe enough that developers are building defensive infrastructure.

Earlier this month, Mitchell Hashimoto — co-founder of HashiCorp — launched a system that limits GitHub contributions to "vouched" users, effectively ending the traditional open-door policy for his projects. His reasoning: "AI eliminated the natural barrier to entry that let OSS projects trust by default."

That friction — the effort required to understand a codebase, craft a meaningful patch, and write a coherent pull request — was never just bureaucracy. It was a quality filter. AI tools have dissolved it.
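Hashimoto hasn't published implementation details in this article, but the core idea — trust flows from known contributors rather than being granted by default — can be sketched as a small trust graph. Everything below is hypothetical illustration: the `VOUCHES` data, the function name, and the CI usage are assumptions, not his actual system.

```python
# Hypothetical sketch of a "vouched contributors" gate: a PR author
# is trusted only if reachable from a maintainer through the vouch
# graph. All names and data here are illustrative assumptions.

VOUCHES = {
    # voucher -> set of users they vouch for (example data)
    "maintainer": {"alice", "bob"},
    "alice": {"carol"},
}

def is_vouched(user: str, roots: set[str]) -> bool:
    """Return True if `user` is a trusted root or is reachable from
    one via the vouch graph (simple breadth-first walk)."""
    if user in roots:
        return True
    seen, frontier = set(roots), list(roots)
    while frontier:
        voucher = frontier.pop()
        for vouched in VOUCHES.get(voucher, ()):
            if vouched == user:
                return True
            if vouched not in seen:
                seen.add(vouched)
                frontier.append(vouched)
    return False
```

A CI job could call something like `is_vouched(pr_author, {"maintainer"})` and hold unvouched submissions for manual triage — restoring, in code, the friction that used to exist naturally.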

The effect has also hit security research. The cURL project, one of the most widely deployed software components in the world, recently halted its bug bounty program after being overwhelmed with what creator Daniel Stenberg called "AI slop."

"In the old days, someone actually invested a lot of time in the security report," Stenberg said at a recent conference. "There was a built-in friction, but now there's no effort at all in doing this. The floodgates are open."

Bug bounty programs depend on the assumption that researchers invest real effort — and are thus more likely to report genuine vulnerabilities. AI-generated reports have broken that assumption entirely.

Where AI Actually Helps

The picture isn't entirely negative. For experienced developers working on well-understood problems, AI coding tools deliver exactly the productivity gains they promised.

Kempf describes one practical benefit clearly: "You can give the model the whole codebase of VLC and say, 'I'm porting this to a new operating system.' It is useful for senior people to write new code."

The distinction matters. AI tools amplify existing skill and knowledge — they're most effective when a developer can verify what the model produces. For junior contributors or people unfamiliar with a codebase, they generate confident-sounding code that hides its own gaps.

A Structural Mismatch

Underlying this tension is a difference in incentives. Companies like Meta measure success in new features shipped. Open source projects measure success in long-term stability — a codebase that can be maintained and trusted years from now.

AI coding tools, optimised for generating new code quickly, fit the first model well. They fit the second model poorly.

The result is a fragmentation risk: a growing mass of AI-assisted features that are easy to add but hard to maintain, landing in projects that depend on careful stewardship.

What This Means for Developers

If you contribute to open source — or maintain it — a few things are worth keeping in mind:

Review costs are real. Every pull request requires human attention. AI-generated PRs that don't pass review still consume maintainer time. Submit contributions that demonstrate genuine understanding of the project.

Friction was a feature. The effort required to contribute to a project was partly what made contributions meaningful. As that friction disappears, maintainers are building new kinds of gates — vouching systems, stricter review policies, closed bug bounty programs.

AI works best with expertise. The experienced developers getting the most from AI coding tools are using them to extend what they already know, not to substitute for understanding they don't have yet.

Open source's strength has always been its community of people who care about the code. AI tools that help those people do more are a genuine gain. AI tools that flood projects with noise — generated by people who haven't done the work to understand the codebase — risk undermining the thing that makes open source work.


Sources: TechCrunch, February 19, 2026 · cURL / Daniel Stenberg via The New Stack

intelliBrain

AI-augmented software development. Based in Zürich, working globally.

© 2026 intelliBrain GmbH. All rights reserved.