A month ago, Boris Cherny shared how he personally uses Claude Code. Today, he followed up with 10 tips from across the team. The contrast matters: same tool, different people, different workflows.
— Boris ChernyThe way the team uses Claude is different than how I use it. Remember: there is no one right way to use Claude Code - everyone’s setup is different. You should experiment to see what works for you.
Here’s what the team is doing, with my own takes where I’ve landed differently.
1. Parallelization is the Top Unlock
The team’s #1 tip: spin up 3-5 git worktrees at once, each running its own Claude session. Some name them with shell aliases (za, zb, zc) to hop between with one keystroke. Others have a dedicated “analysis” worktree that only reads logs and runs BigQuery.
Boris personally uses multiple git checkouts rather than worktrees. So do I. The mechanism matters less than the pattern: separate working directories mean separate Claude contexts. No cross-contamination between tasks.
I run multiple checkouts in separate terminal tabs rather than worktrees. Same isolation benefits, but I find the directory structure easier to reason about. Pick whatever you’ll actually use consistently.
2. Re-plan When Stuck
Everyone on the team uses Plan Mode, but the discipline varies. One engineer has Claude write the plan, then spins up a second Claude to review it “as a staff engineer” before execution begins.
The sharper insight: when something goes sideways mid-task, switch back to Plan Mode and re-plan. Don’t keep pushing. And use plan mode for verification steps, not just implementation.
I covered Plan Mode’s importance before. The “second Claude as reviewer” pattern is new and clever: treat planning as adversarial, not collaborative. I use my codex:review plugin for this - it spins up a separate model to critique both plans and implementations before I commit.
3. Claude Writes Its Own Rules
After every correction, end your prompt with: “Update your CLAUDE.md so you don’t make that mistake again.”
— Boris ChernyClaude is eerily good at writing rules for itself.
The team ruthlessly edits CLAUDE.md over time, iterating until Claude’s mistake rate measurably drops. One engineer maintains a notes directory per task, updated after every PR, with CLAUDE.md pointing at it.
This is CLAUDE.md as compounding engineering, taken to its logical extreme: the AI trains itself on its own failures.
4. Skills as Institutional Knowledge
If you do something more than once a day, turn it into a skill. The team’s examples:
- /techdebt: Find and kill duplicated code at the end of every session
- Context dump: Slash command that syncs 7 days of Slack, GDrive, Asana, and GitHub into one context
- Analytics agents: Write dbt models, review code, test changes in dev
Skills checked into git become institutional knowledge. New team members inherit battle-tested workflows immediately.
The team uses MCP for Slack integration. I prefer CLI and scripts for most integrations - including SQL queries. Less magic, easier to debug, works the same locally and in CI. MCP shines for things that genuinely need bidirectional communication; for read-heavy workflows, a well-crafted CLI skill is often simpler.
5. Claude Fixes Its Own Bugs
The team enables the Slack MCP, pastes a bug thread into Claude, and says “fix.” Zero context switching. Or just: “Go fix the failing CI tests.” Don’t micromanage how.
They also point Claude at docker logs to troubleshoot distributed systems. It’s surprisingly capable at pattern-matching across log noise.
I pipe logs to both console and file, then tell Claude where the file lives. It can grep through historical logs without me copy-pasting walls of text into the prompt.
The philosophy: trust Claude to figure out the path. Give it the problem, not the solution.
6. Prompting as Provocation
Three patterns the team uses to level up Claude’s output:
- “Grill me on these changes and don’t make a PR until I pass your test.” Make Claude your reviewer, not just your implementer
- “Prove to me this works.” Have Claude diff behavior between main and your feature branch
- After mediocre fixes: “Knowing everything you know now, scrap this and implement the elegant solution.” Force a second attempt with full context
The common thread: challenge Claude. Treat it as a peer who needs to justify their work, not an assistant who does what you say.
7. Terminal Setup Matters
The team loves Ghostty for its synchronized rendering, 24-bit color, and proper unicode support. I’m a Warp user myself, though I’m curious to try Ghostty now.
More useful than terminal choice:
- /statusline to show context usage and git branch at all times
- Color-coded terminal tabs, one per worktree/task
- Voice dictation (fn x2 on macOS) - you speak 3x faster than you type, and prompts get more detailed
I was skeptical about voice dictation until I tried it for longer prompts. The quality difference is noticeable: you naturally include context and nuance that you’d skip when typing. Try it for plan mode prompts especially.
8. Subagents for Context Hygiene
Three subagent patterns from the team:
- “Use subagents”: Append this to any request where you want Claude to throw more compute at the problem
- Offload to preserve context: Ship individual tasks to subagents to keep your main agent’s context window clean and focused
- Permission routing: Route permission requests to Opus 4.5 via a hook - let it scan for attacks and auto-approve safe ones
The last one is interesting: using a more capable model as a security gate, not as the primary worker.
9. Claude Replaces SQL
The team has a BigQuery skill checked into the codebase. Everyone uses it for analytics queries directly in Claude Code. Boris says he hasn’t written a line of SQL in 6+ months.
This works for any database with a CLI, MCP, or API. I have similar skills for Postgres and SQLite that shell out to psql and sqlite3, plus a SQL Server skill that runs C# 10 directly via shebang. Claude writes the query, the skill runs it, Claude interprets results.
The unlock isn’t “Claude knows SQL.” It’s that you can stay in one context while Claude handles the translation layer.
10. Learning with Claude
Tips for using Claude Code to learn unfamiliar code:
- Enable “Explanatory” or “Learning” output style in /config to have Claude explain the why behind changes
- Generate HTML presentations explaining unfamiliar code - Claude makes surprisingly good slides
- Ask for ASCII diagrams of protocols and codebases
- Build a spaced-repetition learning skill: you explain your understanding, Claude asks follow-ups to fill gaps, stores the result
I hadn’t thought of Claude as a learning tool before. The presentation and diagram tricks are worth trying on the next unfamiliar codebase.
Anthropic’s official playground plugin takes this further. It generates interactive HTML explorers where you configure visually and copy out prompts. The concept-map mode is built for learning: concept maps, knowledge gaps, scope mapping. Worth a deeper look.
The Meta-Pattern
Ten tips, ten different workflows. The consistency isn’t in the specific techniques - it’s in the underlying philosophy:
- Parallelization beats optimization: run more sessions, not smarter sessions
- Plan mode is for recovery, not just setup: re-plan when stuck
- Claude improves itself: let it write its own rules
- Skills compound: codify what you repeat
- Challenge, don’t instruct: treat Claude as a peer to convince
The creator’s setup was surprisingly vanilla. The team’s setups are surprisingly diverse. Both work because they’re built on the same foundation: discipline, verification, and letting Claude do what it’s good at.
Boris’s thread: x.com/bcherny/status/2017742741636321619


