I failed a coding interview last month. Not a junior role. Not a specialized position. A senior engineering role at a company building AI products.

I have over 20 years of professional software engineering experience. I’ve built and shipped hundreds of products in my career, at varying scales, including at the very largest enterprises. Mobile apps, web platforms, multiplayer games, backend services. I’ve built teams, mentored engineers, and designed systems that scale. I can debug production incidents, optimize database queries, and architect distributed systems.

But I couldn’t pass their coding interview.

The problem? I haven’t coded by hand in months. I haven’t written code without AI autocomplete in three years. I don’t memorize syntax I can look up in 2 seconds. And the interview format assumed I would.

This isn’t about me being a bad engineer. It’s about hiring processes optimized for a developer archetype that no longer exists.

The Absurdity of Modern Hiring

Here are real experiences from my recent job search:

Failed a coding interview at a company that champions agentic workflows: I interviewed at an organization that publicly markets itself as AI-forward and promotes agentic development practices. During the coding session, I used Copilot as I do in my actual work. The rejection email praised my background but noted that “tooling assistance such as Copilot made it harder to assess your individual coding approach.”

Let that sink in: a company selling agentic solutions rejected a candidate for using agentic workflows during the assessment. They wanted to see how I code without the tools I use every day, while simultaneously marketing themselves as champions of AI-driven development. The cognitive dissonance is staggering.

Failed a system design interview not because my architecture was wrong, but because I couldn’t figure out how to use HackerRank’s whiteboard tool. I can design systems on Miro, Figma, or actual whiteboards. But their janky drag-and-drop interface defeated me. I spent 10 minutes fighting the tool instead of demonstrating my knowledge.

Got ghosted after final rounds. Multiple times. No feedback, no rejection email, just silence. Companies that preach “candidate experience” can’t be bothered to send a two-sentence email.

CV doesn’t get noticed despite years of experience. The same CV that used to get immediate recruiter attention now disappears into applicant tracking systems. Why? Because I don’t list the alphabet soup of “React,” “Python,” and “Kubernetes” as discrete bullet points. I list actual outcomes: “3x’d BAU output for the org,” “Saved $200k per year on SSO,” “Built a $40M company from scratch.” But ATS filters optimize for keyword matching, not business impact. The fundamentals that let me build across any stack? Invisible to the robots screening resumes.
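
To see why that happens, here’s a toy sketch of keyword screening. Real ATS internals are proprietary, so the keyword list, CV lines, and function below are all made up for illustration; the point is that a matcher like this scores the outcome-focused CV at zero and never shows it to a human:

```python
import re

# Toy sketch of keyword-based CV screening. Real ATS products are
# proprietary; this keyword list and these CV lines are invented.
REQUIRED_KEYWORDS = {"react", "python", "kubernetes"}

def ats_score(cv_text: str) -> int:
    """Count how many required keywords appear anywhere in the CV."""
    words = set(re.findall(r"[a-z0-9']+", cv_text.lower()))
    return len(REQUIRED_KEYWORDS & words)

outcome_cv = "3x'd BAU output for the org. Saved $200k per year on SSO."
keyword_cv = "Skills: React, Python, Kubernetes, Docker, AWS."

print(ats_score(outcome_cv))  # 0 -- filtered out before a human reads it
print(ats_score(keyword_cv))  # 3 -- passes the screen
```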

“You wear too many hats” became disqualifying feedback. I’ve built full-stack apps, designed systems, led teams, and shipped products. To some orgs, this reads as “unfocused” rather than “versatile.” They want specialists, not generalists. But modern product development requires generalists.

The leadership paradox: Yes, I can lead. But I can code too. What’s so wrong with that? Some companies reject me for being “not hands-on enough.” Others want “60/40 management/coding” splits (an absurd artificial ratio). Some say I’m “too hands-on” for a leadership role. And the unspoken one: “He might not be able to take direction from someone 10 years younger and less experienced.” Here’s the truth for companies: you have nothing to fear from experienced engineers. You only have something to fear from less experienced engineers who THINK they’re good. That’s the peak of Mount Stupid on the Dunning-Kruger curve.

“We’re not ready for AI” was literal feedback from one company. I talked about using Claude Code and AI-augmented workflows in my process. They said it made them “uncomfortable” and they’d prefer someone with “traditional development experience.” They’re building a SaaS product in 2025 and they’re scared of AI.

The AI Discomfort

Many companies are genuinely scared of AI-augmented developers. They don’t know how to evaluate AI fluency, so they default to traditional metrics. This filters out exactly the engineers who will thrive in the next 5 years.

What Changed (And Why Orgs Can’t Keep Up)

Fifteen years ago, I had thick reference books on my desk. Code Complete, Design Patterns, language-specific bibles. I memorized syntax, wrote boilerplate from scratch, and kept Stack Overflow tabs open constantly.

Today, I code like this:

  • Type intent, let AI suggest implementation (see the sketch after this list)
  • Generate boilerplate instantly with prompts
  • Ask Claude Code for solutions with full project context
  • Debug by describing symptoms and letting AI trace root causes
  • Build across any and all stacks as needed: my fundamentals are strong, so the alphabet soup no longer matters
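
To make the first bullet concrete, here’s the shape of that loop. Everything below is illustrative rather than code from a real project: I type the intent as a comment, and the assistant drafts the implementation, which I then review like any other diff:

```python
from datetime import datetime

# Intent I type: "dedupe user records by email, keeping the most
# recently updated one." The function below is the kind of draft an
# AI assistant produces from that prompt; names and shapes are made up.
def dedupe_users(users: list[dict]) -> list[dict]:
    """Keep one record per email: the most recently updated."""
    latest: dict[str, dict] = {}
    for user in users:
        email = user["email"].lower()
        updated = datetime.fromisoformat(user["updated_at"])
        kept = latest.get(email)
        if kept is None or updated > datetime.fromisoformat(kept["updated_at"]):
            latest[email] = user
    return list(latest.values())
```

My job in that loop isn’t typing. It’s specifying the intent precisely, then reviewing the draft for the edge cases the prompt didn’t mention: case-insensitive emails, tie-breaking, malformed timestamps.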

The actual skills I use daily: systems thinking, architecture, product sense, debugging, agentic workflows.

The skills coding interviews test: memorizing syntax, whiteboard algorithms, LeetCode puzzles, working in isolation without tools.

There’s almost no overlap. The interview process optimizes for skills that mattered in 2010, not 2025.

If you’re writing software for a startup, it’s not enough to be better than most programmers. You have to be better than the programmers already there. And to be better than them, you have to be doing something they’re not.

— Paul Graham, Beating the Averages

Here’s the uncomfortable truth: many orgs filter out AI-augmented engineers because the incumbents are protecting their turf. If you can ship 10x faster with AI, wear multiple hats, and adapt to any stack, you make the existing team look slow. So they design interviews that test for skills they have (whiteboard coding, syntax memorization) instead of skills that matter (shipping products, systems thinking, AI fluency).

The irony: the engineers using AI to ship 10x faster are exactly the ones hiring processes filter out.

Why Companies Can’t Adapt

The organizations struggling to hire me aren’t stupid. They’re risk-averse:

They talk a good game but can’t measure it: Industry reports say companies value “tool fluency, judgment, and system design over raw coding.” But at the interview, they hand you a whiteboard and ask you to implement a binary search tree from memory.

They can’t measure “AI fluency”: Traditional interviews have known rubrics. AI workflows don’t. So they stick with what’s measurable, even if it’s irrelevant.

They’re scared of over-reliance: “What if the AI goes away?” is a real concern. Fair point. But it’s like asking in 2010, “What if Stack Overflow goes away?” Tools change, but the skill of knowing how to use tools stays.

They think “vibe coding” means sloppy code: The term has given AI-augmented development a bad name. But anyone worth their salt is still writing production-ready code, considering the same edge cases and trade-offs they always did. They’re just not stumbling through Stack Overflow anymore. They’re more efficient. The only engineers shipping “AI slop” are the ones who were already bad engineers. And here’s what companies miss: these AI-fluent engineers won’t just be more efficient at their own work; they’ll automate your entire SDLC and 5x the rest of your team too.

They’re hiring juniors thinking AI will level them up: Companies are rejecting experienced, AI-fluent engineers while hiring cheaper junior candidates, assuming AI will make them productive. It won’t work the way you expect. AI is a multiplier, not a replacement for fundamentals. Junior engineers without strong architecture, security, and systems thinking will just ship bad code faster. Meanwhile, the senior engineers who could actually leverage AI to transform your team are getting filtered out by your broken interview process.

They want specialists, not generalists: Job descriptions say “React expert” or “Python specialist.” But modern product development needs people who can ship features end-to-end: design, frontend, backend, database, deployment. Generalists who can do this are rare and valuable, but they don’t fit neat hiring categories.

They’re optimizing for culture fit over capability: “We’re not ready for AI” translates to “we want someone who works like we do, not someone who challenges us to work differently.” This is how companies stagnate.

They rely on broken proxies: ATS systems filter for keyword matches. Coding interviews test memorization. Neither correlates with actual job performance, but both are easier than evaluating real work.

The result: great engineers get filtered out, mediocre engineers who interview well get hired, and everyone wonders why productivity is low.

What This Doesn’t Solve

Before this sounds like sour grapes: I’m not saying memorization doesn’t matter or that AI makes bad engineers good. Fundamentals are essential. But there’s a difference between understanding architecture and memorizing syntax. AI is a force multiplier for people who already know what to build.

Not all interviews are broken. The best ones I’ve had: pair programming on real problems, reviewing GitHub projects, building features with actual tools. These evaluate how you work, not how you perform under artificial constraints.

I’m not entitled to a job. But failing because I can’t code without AI autocomplete in 2025? Absurd.

What I’ve Learned

For engineers job hunting: Build in public. My blog posts get more interviews than my CV. When rejected for using Copilot, I responded: “For an org that champions agentic workflows, testing without code completion felt off the mark.” Be polite, but call out contradictions. And find companies that get it; they’re out there.

For companies hiring: Test how candidates use AI, not how they code without it. Review their shipped work, not their whiteboard skills. Stop filtering for alphabet soup keywords when you should be looking for strong fundamentals and shipped products. And for the love of everything, stop ghosting candidates.

The best engineers I know would fail most coding interviews. The worst engineers I know would pass them. That’s the problem.

Meanwhile, talented engineers keep failing absurd interviews while shipping incredible products on the side. Want proof? Check out some of the stuff I’m working on.