Your PM just sent you a link to BMAD Method. “This looks like what we need for AI development.” They’re excited. Analyst agent, PM agent, Architect agent, Scrum Master agent, Developer agent, QA agent. Nineteen specialized agents with defined roles and handoffs.

It feels right. It maps to how software has always been built.

That’s exactly the problem.

The Familiar Trap

BMAD’s architecture mirrors the org chart. SpecKit’s phases map to waterfall stages: Specify → Plan → Tasks → Implement. These frameworks are comfortable for people who understand traditional SDLC.

But AI doesn’t work like a team of humans.

When you coordinate 19 agents, you’re managing context handoffs, role boundaries, and workflow enforcement. You’re recreating the friction of human coordination - the very thing AI should eliminate.

Right now, using AI tools like Claude Code is like using a power drill. But by next year? We’re moving to CNC Machines. We’ll provide coordinates to a giant grinding machine that executes the work with precision.

— Steve Yegge, Vibe Coding

Yegge is right about one thing: structured inputs matter. But the mechanism matters less than he suggests. The coordinates for a CNC machine can be simple. Nineteen agents isn’t structure - it’s overhead.

AI Collapses the SDLC

The SDLC is collapsing. Traditional development phases existed because iteration was expensive. Requirements → design → implementation → testing made sense when each cycle cost weeks and required handoffs between specialized roles.

AI makes iteration cheap. Try something. It fails. Adjust. Try again. Seconds, not weeks. Better yet: AI can verify its own work through tests, type checkers, and visual inspection. The feedback loop that used to require QA handoffs now happens inline.

Phase gates that once provided quality control now create friction. The PM agent hands off to the Architect agent, which hands off to the Developer agent - each handoff losing context, adding latency, creating opportunities for misalignment.

The tooling that supported those phases is collapsing too. Storybook matters less when AI can generate and visually verify components in context. Jira matters less when a memory system like Beads tracks what happened and a single prompt can generate any report the business needs.

The best AI workflows aren’t phased. They’re continuous.

The Context Window Trap

Here’s the technical reality: more scaffolding means worse results.

Context pollution is measurable

Research consistently shows that output quality degrades as context grows. Bigger context windows don't mean models use that context effectively.

SpecKit users report that “agents don’t follow all the instructions” despite comprehensive specification workflows. The system prompts for BMAD’s 19 agents compete for attention in the same context window.

Every additional layer of scaffolding:

  • Consumes tokens that could hold actual project context
  • Dilutes signal with framework instructions
  • Creates conflicts between agent personas and user intent

The CLAUDE.md for Boris Cherny’s team? About 2,000 tokens. BMAD’s multi-file agent configurations? Orders of magnitude larger.

KISS/DRY for Workflows

The same principles that make code maintainable apply to AI workflows.

Keep it simple: Minimal effective structure beats comprehensive frameworks. Start with Plan Mode and a focused CLAUDE.md. Add complexity only when you hit concrete limits - not theoretical ones.

Don’t repeat yourself: If your scaffolding recreates what the tool already provides natively, you’re adding friction. Claude Code’s Plan Mode does what Cline’s plan/act split does. Cursor 2.0’s parallel agents do what BMAD’s orchestration does.

I’ve written about this pattern before - tools absorb what frameworks provided externally. But this post is about something different: the frameworks were often wrong to begin with. They imposed mental models from an era when iteration was expensive.

What Actually Works

Boris Cherny created Claude Code. His setup is “surprisingly vanilla”:

  • Plan Mode first: Most sessions start in Plan Mode. Go back and forth until the plan is solid, then execute.
  • Shared CLAUDE.md: Short, focused, updated when Claude gets something wrong. Not a comprehensive specification - a “do not repeat” ledger.
  • Verification loops: Give Claude a way to verify its work. Tests, browser, type checker. This is the actual force multiplier (see the sketch after this list).
  • Parallel sessions: Five Claudes running simultaneously. More throughput than one heavily-orchestrated agent.

No 19-agent orchestration. No multi-phase specification workflows. No external coordination layer.

My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don’t customize it much.

— Boris Cherny, Claude Code creator

The Uncomfortable Truth

Complex scaffolding appeals to people who need to explain AI development in familiar terms. Board decks, process diagrams, role matrices. “We have an Analyst agent that feeds into a PM agent…”

This is cargo cult SDLC. It looks like rigorous process. It feels like control. But it optimizes for explainability, not effectiveness.

The developers actually shipping with AI? They’re using simpler tools, iterating faster, and trusting the model to handle what used to require handoffs.

Start vanilla

Try the boring approach first. Plan Mode, focused CLAUDE.md, verification loops. Add complexity only when you hit real limits - not when someone sends you a framework link.

Your PM’s instinct isn’t wrong - structure matters. But the structure that works is minimal, native, and continuous. Not 19 agents recreating an org chart that AI was supposed to replace.