Another day, another thought leader explaining why AI coding will destroy software engineering. Posted from their iPhone, which runs on code they didn’t write, using apps built by teams they’ve never met, connected to infrastructure maintained by people who definitely used Stack Overflow.

I’ve written about the hiring mismatch and product engineering replacing the PM/Eng split. This is the third piece in that series: why the discourse about AI and coding is fundamentally backwards.

The Core Misunderstanding

The hot takes share a common assumption: that “coding” is the job. That typing syntax into an editor is what makes someone a software engineer. That learning to code means memorizing language constructs and writing everything from scratch.

It never was.

The job was always: understand the problem, design a solution, ship something that works, serve users, iterate based on feedback. Coding was one tool in that process. An important tool, but never the whole thing.

A 50%-good solution that people actually have solves more problems and survives longer than a 99% solution that nobody has because it’s in your lab where you’re endlessly polishing the damn thing. Shipping is a feature. A really important feature.

— Joel Spolsky, Stack Overflow co-founder

When we talk about “learning to code,” we really mean learning to translate intent into working software. The mechanism of that translation has changed before (punch cards → assembly → high-level languages → frameworks → AI). The skill of knowing what to build and why has not.

The Stack Overflow Fallacy

“But juniors need to learn the fundamentals! They can’t just let AI write their code!”

Here’s the thing: juniors never wrote code from scratch. Stack Overflow’s own research found that 52.4% of code copies came from non-accepted answers. Higher reputation correlated with less copying. Juniors copied; seniors didn’t need to.

Before Stack Overflow, it was reference books and asking the senior dev down the hall. Before that, it was USENET and mailing lists. The pattern is old: find code that works, adapt it, learn from it.

AI is faster copy-paste with context awareness. That’s it. The fundamental learning loop (try something, see if it works, understand why) still applies. What changed is the speed.

Vibe Coding vs. Vibe Engineering

Simon Willison makes a useful distinction: “vibe coding” is prompting an LLM and shipping whatever comes out without reading it. “Vibe engineering” is experienced developers using AI tools to accelerate their work while maintaining ownership and review. If an LLM wrote every line but you reviewed, tested, and understood it all, that’s not vibe coding; that’s using AI as a typing assistant.

What Changed (And Didn’t)

Changed: Speed of translation from intent to code. What took hours of documentation hunting now takes minutes of prompting.

Unchanged: Need for judgment about what to build. Debugging skills when things break. Ownership of what ships. Architectural thinking. Security awareness. The 90/90 rule (the first 90% of the code takes 90% of the time; the remaining 10% takes the other 90%).

Addy Osmani (Google Chrome team) frames it well: treat AI-generated code as a first draft, not final output. The code works, but that doesn’t mean it’s good. Review, refactor, understand.
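A hypothetical illustration of the “first draft” point (the function is invented, not from Osmani): an AI draft that passes the happy-path test yet hides a bug that only review catches. Here it’s Python’s classic mutable-default-argument trap.

```python
# Hypothetical AI first draft: "works" on the happy path, but the
# mutable default list is created once and shared across calls.
def append_tag_draft(tag, tags=[]):
    tags.append(tag)
    return tags

# After review and refactor: a fresh list per call, no shared state.
def append_tag(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

# The draft looks fine once...
assert append_tag_draft("a") == ["a"]
# ...but state leaks into the next call. The reviewed version doesn't.
assert append_tag_draft("b") == ["a", "b"]  # surprise: "a" is still there
assert append_tag("b") == ["b"]
```

Both versions pass a single-call test, which is exactly why “it runs” is not the same as “it’s good.”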

Even Andrej Karpathy, who coined “vibe coding,” recently walked it back. On his new project, nanochat, he tried using Claude and Codex agents “but they just didn’t work well enough.” He hand-coded everything instead. The person who named the phenomenon knows its limits.

The Real Skill Was Always Shipping (and Scaling)

Kent Beck visited Facebook and discovered they shipped without traditional tests. His reaction? “Devs took responsibility for their code very seriously, and nothing at Facebook was ‘someone else’s problem.’” They succeeded not because of process, but because of ownership.

By early 2025, 25% of Y Combinator’s Winter cohort had codebases that were 95% AI-generated. By summer, 88% of YC startups were AI-native. These founders aren’t debating vibe coding on LinkedIn. They’re shipping products.

The gap between “can write code” and “can ship products” was always wide. AI didn’t change that. If anything, it widened it: now anyone can build a demo, but shipping still requires everything demos don’t have.

And then there’s scale. The demo that works for 10 users breaks at 10,000. The architecture that felt clean becomes a bottleneck. The deployment that worked on your laptop fails in production. Debugging distributed systems. Handling failure modes. Managing state across services. These problems don’t care how the code was written. They care whether you understand systems.
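A toy sketch of that pattern (the names and data are invented): a lookup that is invisible at demo size becomes an O(n) scan on every request at scale, while an index makes it O(1) without changing the interface. The code was “written” either way; understanding the system is what tells you which version you shipped.

```python
# Invented example: 10,000 users stand in for "production scale".
users = [{"id": i, "name": f"user{i}"} for i in range(10_000)]

# Demo-friendly version: scans every user on every request.
# Fine for 10 users; painful when 10,000 users hit it per page load.
def find_user_demo(user_id):
    for u in users:  # full scan, every time: O(n) per lookup
        if u["id"] == user_id:
            return u
    return None

# Scalable version: build an index once, then O(1) per lookup.
users_by_id = {u["id"]: u for u in users}

def find_user(user_id):
    return users_by_id.get(user_id)

# Same answers, very different behavior under load.
assert find_user_demo(9_999) == find_user(9_999)
```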

What The Hot Takes Get Wrong

  • “Juniors won’t learn fundamentals” - They never learned by hand anyway. The learning happens through practice, review, and iteration, not through the specific source of initial code. What matters is deliberate practice and mentorship, not the path taken.

  • “Vibe coding is dangerous” - Irresponsible coding was always dangerous. AI makes bad engineers faster, not worse. The engineers shipping “AI slop” are the ones who shipped bad code before. The tool changed; the judgment gap didn’t.

  • “You need to code the real way first” - Amy J. Ko’s research on transfer of learning suggests we’re good at what we explicitly practice, with little transfer between dissimilar skills. Learning algorithms doesn’t automatically make you better at shipping products. Deliberate practice at shipping does.

The Judgment Layer Doesn’t Automate

AI can generate code, but it can’t decide what code to generate. Strategic product vision, customer empathy, architectural decisions with long-term implications, ethical considerations: these remain human. As execution accelerates, the value of judgment increases.

What Actually Deserves Concern

The discourse focuses on the wrong risks. Here’s what actually matters:

  • 45% of AI-generated code has security vulnerabilities (Veracode 2025). AI optimizes for “does it work,” not “is it secure.”

  • Experienced developers ship 2.5x more AI code than juniors. Not because they type faster, but because they know when to trust it and when to verify. The judgment gap is real.

  • Companies are filtering out AI-fluent seniors while expecting juniors + AI to fill the gap. I wrote about this in the hiring post. It won’t work. AI is a multiplier, not a replacement for experience.

The real problem isn’t that juniors are learning wrong. It’s that companies are optimizing for the wrong thing: hiring cheap + hoping AI levels them up. That’s backwards. Experience still matters. Judgment still matters. Ownership still matters.

The Bottom Line

The LinkedIn hot takes are backwards. They’re defending a past that never existed (juniors writing pristine code by hand) against a future that’s already here (AI as a development accelerator).

The job was always shipping. The skill was always judgment. The tool has changed. The fundamentals haven’t.

Stop gatekeeping a past that never was. The engineers who thrive will be the ones who ship, not the ones who debate which generation learned the “real” way.

The best engineers I know would fail most coding interviews. The worst engineers I know would pass them. That’s the problem.