I discovered something while building SpecPilot: the line between product management and engineering is vanishing. Not blurring - vanishing entirely. What’s emerging is a hybrid role that combines product thinking with technical execution, amplified by AI agents that handle the tedious parts. It’s called product engineering, and it’s becoming the default mode for small teams shipping software in 2025.
This isn’t a trend you can ignore. By early 2025, 25% of Y Combinator’s Winter cohort had codebases that were 95% AI-generated. By summer, 88% of YC startups were AI-native companies with nearly 50% building AI agents. Tools like GitHub Spec Kit, Claude Code, and agentic workflows are fundamentally changing who builds products and how they do it.
The Old World: PM/Eng Split
Traditional software teams operate with a clear division: product managers define the “what” and “why,” engineers handle the “how.” PMs write PRDs, prioritize features, talk to customers, and manage roadmaps. Engineers translate those requirements into code, deal with technical complexity, and ship features.
This worked at scale but created friction:
- Context loss: Ideas get filtered through the PM layer, losing technical nuance
- Handoff delays: Every feature goes through a PM → Eng → QA → PM approval cycle
- Resource overhead: Small teams can’t afford dedicated PMs, so someone wears both hats poorly
- Siloed engineers: Developers are isolated from users, interfacing with PMs instead of understanding customer problems directly
For startups, this split is expensive. You need at least two people (PM + engineer) to ship anything, and the communication overhead slows everything down.
What Changed
Three things converged to break this model:
- AI coding agents can implement specs autonomously: Tools like Claude Code, GitHub Copilot, and Cursor can take detailed specifications and generate production-ready code, handle tests, and manage git workflows
- Spec-driven development makes AI output predictable: Instead of “vibe coding” with ad-hoc prompts, teams write formal specifications that become the source of truth for AI agents
- Integration platforms close the loop: GitHub Spec Kit, Linear, and tight CI/CD integrations create automated workflows from spec → implementation → deployment
The result: one person can now maintain product vision AND technical execution by writing specs and orchestrating AI agents. The PM interface is being removed.
The Product Engineer Emerges
A product engineer isn’t just an engineer who thinks about product, or a PM who can code. It’s a fundamentally different role:
What they do differently:
- Write specs in natural language that AI agents can execute
- Talk directly to users rather than through a PM proxy
- Understand business goals, user needs, AND technical constraints
- Orchestrate multiple AI agents instead of writing every line of code
- Make product decisions in real-time during implementation
What they don’t do:
- Implement every feature manually
- Wait for PM approval on technical decisions
- Translate business requirements into engineering speak
- Choose between product thinking OR technical execution
The key insight: specifications are now executable. When you write a spec and AI can turn it into working code automatically, the gap between “what to build” and “how to build it” disappears.
> "Specifications, not prompts or code, are becoming the fundamental unit of programming, and writing specs is the new superpower." — Sean Grove, OpenAI
It’s not just product managers. The traditional software architect role is also being absorbed into product engineering. Most developers no longer start architecture planning from scratch - they begin with AI-generated blueprints, completing in hours what used to take weeks. Architects are evolving from gatekeepers who review and enforce standards into curators, as AI handles pattern generation.
The triple threat: Product engineers are now PMs, engineers, AND architects.
Or quadruple… or quintuple: Add design (AI-assisted UI/UX) and DevOps (infrastructure as code with AI tooling) to the mix. The consolidation pattern extends as far as the tools enable. For small teams, one person can now handle what previously required an entire department.
Spec-Driven Workflow in Practice
Let me show you what this looks like with GitHub Spec Kit, an open-source toolkit released in September 2025 that works with any AI coding agent.
The Five-Phase Process
1. Constitution (/speckit.constitution)
Establish project principles and constraints once per project. This defines your technical boundaries, coding standards, and architectural decisions that apply to all features.
You define:
- Preferred tech stack (frameworks, languages, databases)
- Architectural patterns and constraints
- Security and compliance requirements
- Performance targets and non-functional requirements
The agent uses this as context for all planning decisions, ensuring consistency across features.
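As a sketch, a constitution file might contain sections like the following. The headings and values here are illustrative assumptions for a hypothetical web project, not Spec Kit's actual generated template:

```markdown
# Project Constitution

## Tech Stack
- TypeScript on Node.js 20; PostgreSQL for persistence

## Architectural Constraints
- REST API only; all data access goes through a repository layer
- No new runtime dependencies without an explicit note in the plan

## Security & Compliance
- Follow OWASP best practices; never store plaintext credentials

## Performance Targets
- p95 API latency under 200 ms for authenticated requests
```

Because every planning pass reads this file, a constraint you write once (say, the "no new runtime dependencies" rule) is enforced on every feature without restating it in each spec.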
2. Specify (/speckit.specify)
Describe your feature in natural language. The agent generates a complete specification focused on the “what” and “why,” not implementation details.
You write:
/speckit.specify Add user authentication with email/password login.
Users should be able to register, log in, reset passwords, and manage sessions.
Security is critical - follow OWASP best practices.
The agent generates: a full specification document with functional requirements, success criteria, user scenarios, edge cases, and acceptance criteria - all technology-agnostic and testable.
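What that generated document looks like varies by agent and project; an illustrative excerpt (hypothetical, not verbatim tool output) might read:

```markdown
## Functional Requirements
- FR-001: Users can register with a unique email address and a password
- FR-002: Users can request a password reset link sent to their email

## Acceptance Criteria
- Registering with an already-used email is rejected with a clear error
- A password reset link expires after a bounded time window

## Edge Cases
- Concurrent registration attempts with the same email
```

Note that nothing here names a framework or a database - that separation is what lets the plan phase make technology choices against your constitution instead of baking them into the spec.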
3. Plan (/speckit.plan)
The agent creates a detailed technical implementation plan, respecting your constitution constraints.
The agent generates: technology stack selection, architecture decisions, data models, API contracts, file structure, and integration patterns - all based on your spec and project constraints.
4. Tasks (/speckit.tasks)
Automatically breaks the plan into granular, dependency-ordered tasks organized by user story.
Output looks like:
- [ ] [T001] [P] Set up database schema (src/db/schema.sql)
- [ ] [T002] [P] Create user model (src/models/user.ts)
- [ ] [T003] [US1] Implement registration endpoint (src/api/auth/register.ts)
- [ ] [T004] [US1] Add password hashing utility (src/utils/hash.ts)
- [ ] [T005] [US1] Build login flow (src/api/auth/login.ts)
...
Tasks marked [P] can run in parallel. [US1] indicates which user story the task belongs to. Dependencies are tracked automatically.
5. Implement (/speckit.implement)
Execute the tasks. The agent writes code, creates tests, and manages git workflows. You review at checkpoints and provide feedback.
The Control Points
This isn’t fully autonomous. You maintain judgment at key gates:
- Review the generated spec before planning
- Approve the technical plan before task breakdown
- Review PRs before merging
- Provide feedback that agents incorporate in future iterations
The workflow preserves human decision-making while eliminating grunt work. This human-in-the-loop pattern is critical - I wrote about mastering this core loop with Claude Code because rushing past it leads to context overload and inconsistent results. The same principle applies here: plan, review, approve, iterate.
Why Small Teams Win
This shift disproportionately benefits small teams:
- Resource efficiency: One product engineer replaces separate PM and engineering roles
- Faster decision cycles: No PM/Eng handoff delays or approval bottlenecks
- Direct user feedback: Product engineers talk directly to customers, closing the feedback loop
- AI democratization: Small teams can now ship at the pace of large teams
- Natural scaling: Specs scale better than institutional knowledge
The pattern I’m seeing: solo founders and 2-3 person teams shipping production software that would have required 10+ people five years ago.
Real data backs this up:
- Google reduced migration time by 50% using AI-powered specs
- Airbnb migrated 3,500 test files in 6 weeks (vs. 1.5 years manually)
- YC 2025 startups are shipping mostly AI-generated code with tiny teams (25% had 95% AI-generated codebases by winter, 88% were AI-native by summer)
The Integration Ecosystem
Product engineering doesn’t work in isolation. It requires tight integration between specs, issues, code, and deployment:
Spec management:
- GitHub Spec Kit for structured workflows
- Markdown specs in version control
- Constitution files that define project constraints
Issue tracking:
- GitHub Projects for native integration
- Linear for modern, fast workflows
- Bidirectional sync between specs and issues
AI agents:
- Claude Code for terminal-based development
- GitHub Copilot for IDE integration
- Custom agents for specialized tasks
CI/CD feedback loops:
- Automated test runs inform agent improvements
- Deployment results feed back into planning
- Agents learn from failures
The key: everything is connected. Specs link to issues, issues link to commits, commits trigger tests, test results inform specs. It’s a closed loop.
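As one sketch of the test-run leg of that loop, a CI job can publish test results as an artifact that an agent (or you) feeds into the next planning pass. Everything below is illustrative - the workflow name, reporter flags, and artifact name are my assumptions, not part of Spec Kit:

```yaml
# Hypothetical CI job: run the test suite and upload results so
# failures can be fed back into the next spec/plan iteration.
name: ci-feedback
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Reporter flags depend on your test runner; shown for illustration.
      - run: npm test -- --reporter junit --output test-results.xml
        continue-on-error: true
      - uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: test-results.xml
```

The `continue-on-error` flag matters here: a red run still produces the results artifact, which is exactly the signal the feedback loop needs.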
What This Doesn’t Solve
Before this sounds too utopian, let’s be clear about limits:
Strategic product vision still requires humans. AI can’t tell you what market to enter, what problem to solve, or why customers care. That’s judgment born from experience and empathy.
Customer research can’t be automated. You still need to talk to users, understand their context, and extract insights that specs can’t capture.
Coordination across teams remains hard. If you’re working with multiple teams or external stakeholders, the human communication layer doesn’t disappear.
Ethical considerations need human oversight. Deciding how to use AI, what data to collect, and what features to build raises moral questions that agents can’t answer.
Learning the new skills takes time. Writing good specs is different from writing good code. It requires practice and a different mental model.
Lessons Learned
Having worked in this space building SpecPilot and using these tools daily, here are the practical takeaways:
- Start with specs, not prompts: Ad-hoc prompts lead to inconsistent results. Formal specs create predictable outcomes
- Embrace the hybrid role: Don’t try to be “just an engineer” or “just a PM” - the product engineer role is legitimately different
- Review everything: AI-generated code needs human review. The value is in speed, not replacing judgment
- Build feedback loops: Agents improve when they learn from CI/CD results and your corrections
- Keep humans in control: Approval gates and quality checkpoints prevent blind automation
- Focus on small teams: This approach scales down better than it scales up - perfect for startups
The shift from traditional PM/Eng split to product engineering is real, measurable, and accelerating. Tools like GitHub Spec Kit make it accessible. AI agents make it practical. Small teams make it inevitable.
Try It Yourself
If you want to experiment with product engineering workflows:
- Install GitHub Spec Kit: `uvx --from git+https://github.com/github/spec-kit.git specify init my-project --ai claude`
- Navigate to your project: `cd my-project`
- Set up your constitution: Use `/speckit.constitution` to define project principles and technical constraints (once per project)
- Write your first spec: Start with a small feature, then follow the workflow: `/speckit.specify` → `/speckit.plan` → `/speckit.tasks` → `/speckit.implement`
- Review the output: See what agents generate, provide feedback, iterate
The learning curve is real but short. You’ll know within a week if this workflow fits your team.
I’m building SpecPilot to take this further - autonomous agents that handle the entire spec-to-deployment cycle with intelligent feedback loops. If you’re interested in where this is heading, that’s the space I’m exploring.
The product engineer role isn’t a trend. It’s the new default for small teams in the agentic age. The tools exist. The workflow is proven. The question is: are you ready to embrace it?


