Andrej Karpathy built the neural networks running inside your coding assistant. He taught deep learning at Stanford. He ran AI at Tesla. Last week, he admitted on X:
> I’ve never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and in between.
>
> — Andrej Karpathy
If the guy who literally built this technology feels lost, what does that tell you?
Alien Technology
Karpathy’s framing is precise: AI coding tools are “alien technology with no manual.”
Traditional engineering is deterministic. You write code, it does exactly what you wrote. Debug by reading. Fix by editing. The system behaves according to rules you can learn.
AI-assisted development is different. You’re now managing systems that are, in Karpathy’s words, “fundamentally stochastic, fallible, unintelligible and changing.”
- Stochastic: Same prompt, different results. Temperature isn’t just a setting.
- Fallible: They lie. They hallucinate. They claim success when they’ve failed.
- Unintelligible: You can’t read the weights. You can’t step through the reasoning.
- Changing: The model you mastered last month isn’t the model you’re using today.
This isn’t a learning curve. It’s a different kind of system entirely.
15 Primitives in 18 Months
Karpathy listed what developers now need to understand:
“agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations”
That’s 15+ concepts that didn’t exist in mainstream development 18 months ago. Each one evolving weekly. No stable documentation. No established best practices. No textbook.
MCP (Model Context Protocol) shipped in late 2024. Hooks and skills arrived in Claude Code months later. Beads launched in October 2025. By the time you master one primitive, three new ones have emerged.
The mental model required isn’t “learn the API.” It’s “build intuition for orchestrating unreliable autonomous systems while the ground shifts beneath you.”
Expertise Doesn’t Transfer
Here’s the uncomfortable part: being an AI researcher doesn’t help.
Karpathy understands transformers at the mathematical level. He knows why attention works, how gradients flow, what the loss landscape looks like. None of that knowledge tells him how to structure a prompt for Claude Code, when to use Ralph Wiggum loops, or how to configure Beads for a complex project.
Building neural networks and orchestrating them are different skills.
This is why senior engineers struggle. Decades of experience with deterministic systems actively misleads you. Your instincts are wrong. The debugging techniques don’t apply. The mental models are counterproductive.
Igor Babuschkin, co-founder of xAI (the company behind Grok), replied to Karpathy: “Opus 4.5 is pretty good.”
Karpathy’s response:
> It’s very good. People who aren’t keeping up even over the last 30 days already have a deprecated world view on this topic.
>
> — Andrej Karpathy
Thirty days. That’s the half-life of your understanding now.
Learning in Public
Steve Yegge, who’s been writing about software for decades, now calls himself an “AI Babysitter” on LinkedIn. He built Beads by vibe coding the entire project - design, implementation, testing, documentation - in six days.
Yegge charts six waves of programming: traditional (2022), completions (2023), chat (2024), coding agents (2025 H1), agent clusters (2025 H2), agent fleets (2026). We’re somewhere in the middle of this transition, and no one has a map.
The people building these tools are learning how to use them at the same time as everyone else.
This isn’t false modesty. It’s structural. The systems are genuinely new. The interaction patterns are emergent. The best practices haven’t been discovered yet because the primitives keep changing.
What Actually Helps
A few patterns seem stable enough to invest in:
Treat agents as unreliable collaborators. They will lie to you. Build verification into your workflow. Visual verification - making agents prove their work with screenshots - catches lies early.
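To make that concrete, here’s a minimal sketch in Python using Playwright’s screenshot API. The URL and output path are placeholders for whatever your agent claims to have shipped; this is one possible shape of the verification step, not a prescribed workflow.

```python
# Minimal sketch: make an agent's claim of "the page works" verifiable.
# Assumes Playwright is installed (pip install playwright && playwright install).
# The URL and paths are placeholders, not part of any real project.
from playwright.sync_api import sync_playwright

def capture_proof(url: str, out_path: str) -> None:
    """Capture a screenshot so a human (or a second agent) can check the claim."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        page.screenshot(path=out_path, full_page=True)
        browser.close()

if __name__ == "__main__":
    # The agent said the feature is live. Don't take its word for it.
    capture_proof("http://localhost:3000", "proof/feature.png")
```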
External memory matters. Agents have amnesia. Every session starts fresh. Systems like Beads give them persistent context across sessions. Without external memory, you’re relying on your own recall to bridge context windows.
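Beads is far more sophisticated, but the core idea fits in a few lines. A hypothetical sketch, assuming a simple JSON notes file (this is an illustration of the pattern, not Beads’ actual design):

```python
# Sketch of external memory: notes survive the session, the context window doesn't.
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # hypothetical location

def remember(note: str) -> None:
    """Append a note that future sessions will see."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    notes.append(note)
    MEMORY_FILE.write_text(json.dumps(notes, indent=2))

def build_prompt(task: str) -> str:
    """Prepend persisted notes so a fresh session starts with context."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    context = "\n".join(f"- {n}" for n in notes)
    return f"Prior context:\n{context}\n\nTask: {task}"
```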
Automation compounds. Ralph Wiggum loops - re-running an agent against the same prompt until the job is done - let agents iterate until success. Headless browser testing closes the verification loop. The ROI on automation infrastructure is higher than ever because agent labor is cheap.
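The canonical Ralph Wiggum loop is a shell one-liner. Here’s the same shape as a Python sketch; the `claude -p` headless invocation and pytest as the success signal are assumptions about your setup, so swap in your own agent CLI and your own objective check:

```python
# Sketch of a Ralph Wiggum loop: re-run the agent on the same prompt until
# an objective success check passes. Assumes a headless agent CLI ("claude -p")
# and a test suite as the success signal; both are placeholders for your setup.
import subprocess

PROMPT = "Read PLAN.md and make the failing tests pass. Commit when green."
MAX_ITERATIONS = 20  # agent labor is cheap, but don't loop literally forever

def tests_pass() -> bool:
    """Objective success check: the agent doesn't get to grade itself."""
    return subprocess.run(["pytest", "-q"]).returncode == 0

for i in range(MAX_ITERATIONS):
    subprocess.run(["claude", "-p", PROMPT])  # one agent iteration
    if tests_pass():
        print(f"Done after {i + 1} iterations")
        break
else:
    print("Loop exhausted; needs a human")
```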
Stay close to the tools. The people making progress are using these systems daily, not reading about them quarterly. Thirty days of not practicing creates a deprecated worldview. There’s no substitute for hours in the terminal. (More on this below.)
The Leadership Problem
This matters most for technical leaders.
If you’re a CTO, VP of Engineering, or tech lead who’s delegated all coding, your mental model is already deprecated. Not slightly outdated. Fundamentally wrong. The gap between “I read about AI coding” and “I use AI coding daily” is wider than any previous technology shift.
Gene Kim’s DORA research found that trust in AI increases linearly with time spent using it. Nearly everyone who dismisses these tools made that judgment after an hour or two. The skill curve is real, and most people haven’t climbed it. The irony: the people with the most influence over technical decisions are often the ones with the least hands-on experience with the tools reshaping their field.
I’ve written about failing coding interviews despite 20+ years of experience because I couldn’t code without AI autocomplete. That’s not a weakness. That’s adaptation. But it revealed something uncomfortable: my mental model of “how coding works” had shifted so dramatically that I couldn’t even perform the old way anymore.
Technical leaders face the same cliff. You can’t evaluate AI-augmented engineers if you’ve never been one. You can’t architect systems for agentic workflows if you’ve never orchestrated agents. You can’t make technology decisions about tools you’ve only read about in newsletters.
> This is just like what happened to the Swiss mechanical watch industry. The craftsmen were doing the same thing our staff engineers are doing today. “No cheap.” That’s word for word what they say.
>
> — Steve Yegge
The Swiss watchmakers’ objection wasn’t technical. It was identity. Their value was in the craft itself, not the outcome. Senior engineers and technical leaders face the same trap. The skills that built their careers feel threatened by tools that make those skills less central.
But here’s the thing: the craft isn’t dead. It’s transformed. The product engineer who can write specs, orchestrate agents, verify outputs, and ship products is more valuable than ever. But you can’t become that person by reading about it. You become it by doing it, daily, for months.
There’s no shortcut. No summary. No delegation path. You have to use the alien tool yourself, even without a manual.
The Honest Position
No one has mastered this.
Not Karpathy. Not Yegge. Not the researchers at Anthropic or OpenAI. The technology is too new, too fluid, too alien.
If you feel behind, you’re in good company. The experts feel behind too. The difference is they’re not waiting for a manual that doesn’t exist. They’re learning in public, building in public, failing in public.
The alien tool has no manual. We’re writing it together, in real time, one failure at a time.


