A viral thread recently proclaimed that Anthropic “released ALL the Claude Code secrets” in their new prompting docs. The tips included gems like “use GitHub” and “load context at the start of sessions.”

These aren’t secrets. They’re just… using the tool.

The thread exemplifies a broader problem: the prompt engineering industry has convinced people that AI requires arcane incantations to work properly. It doesn’t. Not anymore.

The Rise of Prompt Engineering

Prompt engineering made sense in 2022. GPT-3 and early ChatGPT were powerful but fragile. They’d go off the rails without careful guidance. “Let’s think step by step” genuinely improved reasoning. Specific formatting instructions prevented chaos.

A cottage industry emerged. Prompt libraries. Courses. Job titles. People charged thousands to teach the “secrets” of getting AI to do what you wanted.

Master the art of crafting prompts that unlock AI’s true potential!

— Every prompt engineering course, 2023

The problem: these techniques were workarounds for model limitations, not fundamental skills.

Why It’s Dying

Modern models - Claude Opus 4.5, GPT-5.2, Gemini 3 - are trained specifically to understand natural language intent. That’s the entire point. They’ve been post-trained on enormous volumes of human feedback, much of it precisely about closing the gap between what people said and what they meant.

The result: you can just… talk to them.

The irony

Anthropic’s “prompting best practices” doc includes XML tags like <do_not_act_before_instructions> and <investigate_before_answering>. If you need to explicitly tell an AI “please understand what I’m asking before doing stuff,” maybe the model should just do that by default. And increasingly, models do.

The viral thread’s tips essentially boil down to:

  • “Use GitHub” - Yes, give the tool access to relevant context
  • “Load context” - Yes, share information the AI needs
  • “Avoid the word ‘think’” - In Claude Code, ‘think’ can trigger extended thinking, which burns more tokens. This is a billing optimization, not a prompting technique
  • “Use vision” - Yes, use the features that exist

This isn’t prompt engineering. It’s basic communication.

What Actually Matters Now

The skills that help with AI aren’t prompt-specific. They’re the same skills that help you work with humans:

  • Be clear about what you want - Not because AI is dumb, but because ambiguity creates ambiguous results
  • Provide relevant context - The AI can’t read your mind or your codebase unless you share it
  • Check the output - AI makes mistakes, just like human collaborators
  • Iterate when needed - “That’s not quite right, I meant X” works fine

That’s it. No magic. No secrets. No XML tags.

The XML Tag Question

The Anthropic docs suggest wrapping instructions in XML tags like <use_parallel_tool_calls> or <investigate_before_answering>. But here’s the thing: saying “use parallel tool calls when possible” works just as well.

XML tags are a structuring convention, not magic incantations. They help organize complex system prompts with multiple sections. For a user typing instructions? The model understands either way.

When XML actually helps

XML tags are useful when you have long prompts with distinct sections (instructions, examples, context) that need clear boundaries. For simple requests? Just say what you want.
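To make the distinction concrete, here’s a minimal sketch using the Anthropic Python SDK. The model ID, tag names, and prompt contents are all placeholders I’ve invented for illustration; the point is only that the tags mark section boundaries in a long system prompt, while the short request at the bottom needs none of that.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A long system prompt with distinct sections. The XML tags are just
# boundary markers so instructions, context, and examples don't bleed
# into each other. The tag names themselves are arbitrary.
SYSTEM_PROMPT = """
<instructions>
Review pull requests for security issues. Flag anything involving
user input, authentication, or file system access.
</instructions>

<context>
The codebase is a Flask app. Auth is handled by custom middleware
in auth/middleware.py.
</context>

<examples>
Input: open(request.args['path'])
Output: FLAG - unsanitized user input passed to the file system.
</examples>
"""

structured = client.messages.create(
    model="claude-opus-4-5",  # placeholder; any recent model works the same way
    max_tokens=1024,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "Review the attached diff."}],
)

# A simple request needs no structure at all. Plain language carries
# the same intent, and the model understands either way.
simple = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "What does a KeyError mean in Python?"}],
)
```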

The “Prompt Engineer” Career Path

Remember when “Social Media Manager” was a hot new job title? Then companies realized everyone could post on social media, and it became just part of marketing roles.

Prompt engineering is following the same trajectory. The skill is being commoditized as three things happen at once:

  1. Models get better at understanding intent
  2. AI tools build good prompts into their interfaces
  3. Everyone gains baseline competency through daily use

The people who’ll thrive aren’t “prompt engineers.” They’re domain experts who happen to use AI effectively - just like the best social media isn’t done by “social media experts” but by people with something interesting to say.

What’s Still Useful

I’m not saying prompting doesn’t matter at all. A few things genuinely help:

  • System prompts for applications - If you’re building an AI product, careful system prompts set tone, constraints, and behavior
  • Structured output formats - When you need JSON or specific schemas, being explicit helps
  • Few-shot examples - For unusual tasks, examples can guide the model
  • Context window management - Knowing what to include and exclude matters for long tasks

But these are engineering decisions, not “prompt engineering” as the industry sells it. They’re about building products, not crafting magic phrases.
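For concreteness, here’s roughly what those decisions look like in code - a sketch using the Anthropic Python SDK, where the triage task, schema, and few-shot example are all invented for illustration. It combines three items from the list above: a system prompt that sets constraints, an explicit output format, and a few-shot example showing the exact shape you want back.

```python
import json

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Engineering decision 1: a system prompt that sets tone and constraints.
SYSTEM = (
    "You are a support-ticket triage assistant. "
    "Classify each ticket and respond with JSON only, no prose."
)

# Engineering decisions 2 and 3: an explicit output schema, demonstrated
# through a few-shot example rather than described abstractly.
FEW_SHOT = [
    {"role": "user", "content": "Ticket: 'App crashes when I upload a photo.'"},
    {"role": "assistant", "content": '{"category": "bug", "priority": "high"}'},
]

def triage(ticket_text: str) -> dict:
    """Classify one ticket and return the parsed JSON dict."""
    response = client.messages.create(
        model="claude-opus-4-5",  # placeholder model id
        max_tokens=200,
        system=SYSTEM,
        messages=FEW_SHOT + [{"role": "user", "content": f"Ticket: '{ticket_text}'"}],
    )
    return json.loads(response.content[0].text)

print(triage("I was double-charged for my subscription."))
# Expected shape: {"category": "billing", "priority": "medium"} or similar
```

None of this is a magic phrase. It’s ordinary product engineering: deciding what the application needs, then stating it plainly.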

The Real Skill

The thread that sparked this post had 400K views. People are hungry for AI “secrets” because the technology feels magical and they want an edge.

Here’s the actual edge: use AI regularly, notice what works, adjust your approach. The same way you’d get better at working with any tool or colleague.

There’s no secret. The best prompt is clear communication - the same skill that makes you effective with human teammates.

The prompt engineering industry sold complexity because complexity is what you can charge for. But the models got smarter, and now the emperor has no clothes.

Just talk to the AI. It understands you better than the prompt engineers want you to believe.