Daniel Stenberg killed curl’s bug bounty program in January 2026. Not because security doesn’t matter. Because AI-generated reports consumed more maintainer time than actual bugs.
“We now have to spend a lot of effort and energy on analyzing, commenting and eventually rejecting AI slop reports. This is a huge waste of our time. It is a DDoS attack on the project.”
— Daniel Stenberg, curl maintainer
The same tools democratizing code creation are DDoSing the review layer. Creation runs at machine speed. Review remains human speed. Open source is experiencing a tragedy of the commons: everyone can graze, nobody’s maintaining the pasture.
The Timeline of Collapse
curl (January 2026): Bug bounty program terminated. Instant bans for AI slop submitters. Stenberg now spends more time rejecting garbage than reviewing legitimate reports.
tldraw (December 2025): All external PRs auto-closed. The maintainers now selectively re-open contributions that demonstrate genuine understanding. The announcement got 150 thumbs up.
Ghostty (December 2025): Mitchell Hashimoto added AI disclosure requirements. His reasoning was simple: “It’s rude to trick me into reviewing AI-generated code.”
OCaml (December 2025): Maintainers rejected a massive AI-generated PR outright. Their explanation: reviewing AI code is more mentally taxing than reviewing human code, because you can’t assume the author understood what they wrote.
Copilot (Ongoing): GitHub ships autonomous PR generation. Simultaneously, maintainers beg GitHub for tools to block AI-generated submissions. The platform profits from both the flood and the drowning.
The Asymmetry Problem
The math is brutal. When one person can generate 50 PRs per day, the bottleneck shifts permanently from creation to review.
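To make the asymmetry concrete, here is a back-of-the-envelope sketch in Python. The 50 PRs per day comes from the scenario above; the review time and maintainer capacity are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope: how fast the review backlog grows when
# generation outpaces review. Every rate below the first is an assumption.
prs_per_day = 50                  # one contributor with AI tooling (from the text)
review_minutes_per_pr = 30        # assumed average review time
maintainer_minutes_per_day = 120  # assumed daily review capacity

reviewed_per_day = maintainer_minutes_per_day / review_minutes_per_pr
backlog_growth_per_day = prs_per_day - reviewed_per_day

print(f"reviewed per day: {reviewed_per_day:.0f}")             # 4
print(f"backlog grows by: {backlog_growth_per_day:.0f}/day")   # 46
print(f"after 30 days: {backlog_growth_per_day * 30:.0f} unreviewed PRs")  # 1380
```

Under these assumptions the maintainer clears 4 PRs a day while 46 pile up. The exact numbers matter less than the sign of the gap: as long as generation outpaces review, the backlog only grows.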
CodeRabbit analyzed 44,000 PRs across public and private repositories:
- 1.7x more defects in AI-generated PRs
- 1.75x more logical errors
- 10.83 issues per AI-generated PR vs. 6.45 per human-written PR
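The per-PR counts are self-consistent with the headline multiplier, which is worth a quick check since all three figures come from the same dataset:

```python
# Sanity check on CodeRabbit's reported figures.
ai_issues_per_pr = 10.83    # issues per AI-generated PR
human_issues_per_pr = 6.45  # issues per human-written PR
print(f"{ai_issues_per_pr / human_issues_per_pr:.2f}x")  # 1.68x, in line with the ~1.7x defect figure
```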
These numbers measure detected issues. AI-generated code often passes initial review because it looks plausible. The bugs surface in production, months later, when the original submitter is long gone.
Stan Lo, a Ruby on Rails core maintainer, put it bluntly: “Contributor to maintainer communication should remain human-to-human.” The request isn’t about gatekeeping. It’s about accountability. When a human submits code, they can debug it, defend it, iterate on feedback. When AI submits code, the “author” often can’t explain why it does what it does.
The Structural Response
Projects are adapting with friction:
- Disclosure requirements: Ghostty, Fedora, and others now require explicit AI disclosure. Not to ban AI use, but to set review expectations (a triage sketch follows this list).
- Complete PR closure: tldraw closes all external contributions by default. They re-open selectively.
- Program termination: curl killed its bug bounty entirely. The signal-to-noise ratio made the program net negative.
- Paid proof systems: Proposals on Hacker News suggest charging $5 per PR submission. Skin in the game filters out low-effort spam.
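To ground the first two policies, here is a minimal sketch of the kind of triage automation they imply, written against the GitHub REST API in Python. Everything specific here is hypothetical: the repository name, the disclosure checkbox text (assumed to come from a PR template), and the close-with-comment policy are assumptions for illustration, not any project’s actual tooling.

```python
"""Hypothetical triage bot: close open PRs that skip the AI-disclosure
checkbox assumed to exist in the repo's pull request template."""
import os
import requests

API = "https://api.github.com"
REPO = "example-org/example-repo"  # hypothetical repository
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
# Assumed checkbox text from a hypothetical PR template.
DISCLOSURE = "- [x] I disclose any AI assistance used in this PR"

def triage_open_prs() -> None:
    resp = requests.get(f"{API}/repos/{REPO}/pulls", headers=HEADERS,
                        params={"state": "open", "per_page": 100})
    resp.raise_for_status()
    for pr in resp.json():
        body = pr.get("body") or ""
        if DISCLOSURE in body:
            continue  # disclosure present; leave for human review
        number = pr["number"]
        # PRs share the issues endpoint for comments.
        requests.post(f"{API}/repos/{REPO}/issues/{number}/comments",
                      headers=HEADERS,
                      json={"body": "Closing: this project requires the AI "
                                    "disclosure checkbox. Complete the PR "
                                    "template and ask to re-open."}
                      ).raise_for_status()
        # Closing a PR uses the pulls endpoint with state=closed.
        requests.patch(f"{API}/repos/{REPO}/pulls/{number}",
                       headers=HEADERS,
                       json={"state": "closed"}).raise_for_status()

if __name__ == "__main__":
    triage_open_prs()
```

This mirrors tldraw’s default-closed posture; a gentler variant would apply a label and ping the author instead of closing outright.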
The friction that made PRs slow also made them intentional.
What This Doesn’t Solve
AI tools are the accelerant, not the fire. The underlying problems predate autonomous agents:
- Maintainer burnout was already crisis-level: 60% of open source maintainers are unpaid. 44% report burnout. Adding AI spam to that baseline is gasoline on embers.
- Corporate freeloading continues: 95% of enterprise software depends on open source. 0.0014% of companies sponsor maintainers. AI doesn’t change this calculus.
- AI tools themselves aren’t the problem: The problem is friction removal. The friction that used to exist between “wanting to contribute” and “submitting a PR” filtered for people who cared enough to understand the codebase. Now it’s gone.
Banning AI won’t fix open source economics. It just treats the symptom while the disease spreads.
The Lesson
The boundary that collapsed: the effort required to contribute. That effort served a purpose. It created a minimum bar of engagement. Not intelligence, not skill. Just enough friction to filter drive-by submissions.
When you remove friction from contribution without adding friction to review, you shift the cost. The cost doesn’t disappear. It moves from the contributor (who now spends seconds generating PRs) to the maintainer (who still spends hours reviewing them).
Open source maintainers are now the rate limiter for the entire software ecosystem. They’re not getting paid for it.
The lesson isn’t “AI bad.” It’s that removing friction has second-order effects. Every time we make something easier, we should ask: who absorbs the new cost?