Tether: Encrypted Dotfile Sync for Multi-Machine Developers

✓ Ready to share
"Built a CLI to sync dotfiles across machines. End-to-end encrypted. Looking for testers."
If you work across multiple machines, you know the pain. Your laptop has that alias you added last week. Your desktop is missing the git config tweak. Your VM has an outdated .zshrc from three months ago. So I built Tether - a CLI that syncs your entire dev environment:
• Encrypted dotfiles (.zshrc, .gitconfig, Claude Code settings)
• Project configs (.env.local, appsettings.Local.json) matched by Git remote
• Global packages (Homebrew formulae + casks, npm, pnpm, bun, gem)
• Background daemon syncs every 5 minutes + nightly package upgrades
Unlike chezmoi/yadm/stow, it's fully automatic with zero templating config. Your shell reads plaintext locally. Git stores ciphertext. Your passphrase is the only key. macOS tested. Looking for help with Linux and teams sync. PRs/issues welcome.
797 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

claude-tools: A Plugin Marketplace for Claude Code

✓ Ready to share
"Built a Claude Code plugin marketplace. Four tools that extend Claude with external AI models and browser automation."
Released claude-tools: a plugin marketplace that extends Claude Code with specialized external capabilities. Four plugins, each delegating to best-of-breed tools:
- Gemini: Visual analysis and UI mockup generation via Gemini 3 Pro
- Codex: Senior-level architecture thinking via OpenAI Codex
- Headless: Browser automation for site comparison and E2E testing
- DNS: Multi-provider record management (Spaceship, GoDaddy)
The pattern: Claude orchestrates, external tools execute. Subagents run on haiku to isolate token costs. Each plugin solves a specific problem without bloating Claude's context. Headless is the interesting one. Spawns parallel browser agents for site comparison and E2E testing. Combine with ralph-wiggum for continuous validation loops. Full details: https://paddo.dev/blog/claude-tools-plugin-marketplace
831 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Denial, Then Admission: Why LLM Quality Drops Are Real

✓ Ready to share
"For a blog that regularly fanboys Claude, this is going to be uncomfortable."
I've written dozens of posts praising Claude. This one is different. In August 2024, users reported Claude getting dumber. Anthropic's response: "Our investigation has not found any widespread issues." A year later, they published a postmortem admitting three infrastructure bugs degraded up to 16% of Sonnet 4 requests. 30% of Claude Code users were affected. The bugs: misrouted context windows, TPU corruption producing Thai characters in English responses, and a compiler bug eliminating high-probability tokens. Six weeks to diagnose. Their own evals didn't catch it. Now Opus 4.5 is showing the same pattern. Power users reporting model substitution bugs, reward hacking behavior, having to switch to Sonnet mid-task. Status page showing elevated errors days after launch. The uncomfortable truth: LLM quality drops are real, denials are common, and you can't trust "no changes" at face value. Monitor your own workflows. Full analysis with sources: https://paddo.dev/blog/denial-then-admission
1006 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Your Ad Budget Is Feeding Bots: Why the Future Is Physical

✓ Ready to share
"51% of web traffic is now bots. The advertising model you built around human attention is feeding machines."
I wrote yesterday about how the web is bifurcating - one layer for humans, one for agents. Here's the advertising angle:
- 51% of web traffic is now bots (first time exceeding humans)
- $238.7B wasted on bot-driven traffic in 2024
- 50% of programmatic ad traffic may come from bots
- Ad fraud projected to hit $172B by 2028
Meanwhile, out-of-home advertising is having a renaissance:
- US OOH revenue: $9.1B in 2024 (first time exceeding $9B)
- 16 straight quarters of growth
- DOOH growing 10.7% CAGR
- Apple, Amazon, Google now major OOH spenders
The irony: tech companies are fleeing digital ads for physical billboards. Because you can't bot a billboard. You can't fake human eyeballs on a street corner. You can't inflate impressions when the impression is a physical human walking past a physical screen. The web's advertising model was built on human attention. That attention is now contested by bots at scale. The physical world offers what digital can't: verifiable human presence.
997 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Your Website Is About to Become a Workflow

✓ Ready to share
"The web you knew is dying. What comes next is built for machines, not humans."
I haven't opened a browser for work in weeks. Agents do it for me: headless browsers running parity checks, autonomous loops shipping code overnight, tools that talk to APIs while I sleep. This isn't future speculation. The numbers are already here:
- Zero-click searches: 56% → 69% in one year
- Bot traffic exceeded human traffic for the first time in 2024 (51%)
- Publishers losing 30-40% of search traffic to AI Overviews
- Reddit's cofounder: "So much of the internet is now just dead"
Microsoft is building NLWeb - "HTML for the agentic web." Every endpoint becomes an MCP server. They're embedding MCP into Windows itself. The web isn't disappearing. It's bifurcating. One layer for humans (shrinking). One layer for agents (exploding). Websites become API endpoints. User journeys become autonomous workflows. If you're building for the web, you're now building for machines first, humans second.
909 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

The External Scaffolding Era Is Ending

✓ Ready to share
"The frameworks you're using to structure AI coding? They're workarounds for tool limitations that are disappearing."
BMAD Method, GitHub Spec-Kit, Cline - these frameworks exist because AI coding tools lacked native structure. External orchestration, specification workflows, plan/act mode splits. Now the tools are catching up. Claude Code's Plan Mode, Cursor 2.0, Google Antigravity - all ship with the same patterns built in. System-enforced planning, parallel agents, proactive clarification, persistent plans. The pattern: tools absorb what frameworks provided externally. Agent orchestration becomes native subagents. Specification workflows become interactive planning. External scaffolding becomes unnecessary friction. External frameworks aren't dead - Spec-Kit's structured deliverables, BMAD's domain templates still have uses. But the core value prop is being absorbed by the tools themselves. Full breakdown: https://paddo.dev/blog/external-scaffolding-era-ending
864 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

The LinkedIn Hot Take Problem: Why the AI Discourse Is Backwards

✓ Ready to share
"Another day, another thought leader explaining why AI will destroy software engineering. Posted from their iPhone."
The LinkedIn discourse about AI coding is backwards. "Vibe coding will ruin juniors!" "You need to learn the REAL way first!" These arguments miss a fundamental point. Software engineering was never about typing code. It was about understanding problems, shipping solutions, and serving customers. Juniors never coded from scratch anyway - they copied from Stack Overflow, tutorials, and senior devs. AI just made the copying faster and contextual. What actually changed: speed of translation from intent to code. What didn't change: the need for judgment, debugging, ownership, and architecture. The 90/90 rule still applies. AI gets you to working code fast; the polish still requires experience. The real concern isn't "vibe coding" - it's companies filtering out AI-fluent seniors while expecting juniors + AI to fill the gap. The judgment layer doesn't automate. Read more: https://paddo.dev/blog/debunking-linkedin-ai-hot-takes
938 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

12 Factor Agents: Principles for AI That Actually Work

✓ Ready to share
"12 Factor Apps shaped how we build cloud software. 12 Factor Agents is doing the same for AI."
Heroku's 12-factor apps shaped a generation of cloud software. Now HumanLayer's 12 Factor Agents is doing the same for AI. The key insight: most successful AI products aren't purely agentic. They combine deterministic code with strategically placed LLM decision points. The factors codify this: own your prompts, own your context window, own your control flow. The critical factor is context. Fill your context window past 40% and you enter what Dex Horthy calls the "dumb zone" - where signal-to-noise degrades, attention fragments, and agents start making mistakes. These aren't new ideas. They're the patterns Anthropic's been shipping: Plan Mode (factors 2, 3, 8), parallel subagents (factor 10), CLAUDE.md (factor 3), agent harnesses (factors 5, 6). The 12 factors just name them explicitly. Full breakdown: https://paddo.dev/blog/12-factor-agents
857 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Anthropic Bought Bun

✓ Ready to share
"Anthropic just made their first acquisition ever. They bought Bun."
Anthropic's first acquisition ever: Bun, the JavaScript runtime. Why does a $183B AI company need JavaScript tooling? Because Claude Code - their CLI coding agent - hit $1B in run-rate revenue in just 6 months. And it's built on Bun. Claude Code ships as a Bun executable to millions of developers. No Node installation required. Fast startup. Single-file distribution. The "risky new tool" I switched to 3 weeks ago is now backed by the fastest-growing AI company in history. The acquisition validates something I've been saying: Bun isn't risky. It's infrastructure for the AI-native development era. And that era is here.
628 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Plan Mode Is Now Mandatory. Auto-Compact Should Be Enabled.

✓ Ready to share
"Plan Mode isn't optional anymore. Neither is auto-compact. Here's why I changed my mind on both."
The workarounds are dead. ROADMAP.md, PLAN.md, SCRATCHPAD.md, ULTRATHINK - all obsolete. Native Plan Mode in Claude Code does what the community hacks couldn't: system-level enforcement. Claude physically can't start coding until you approve the plan. No more "don't code yet" prompts that get ignored. The game changer most people miss: Plan Mode spawns parallel Haiku subagents to explore your codebase simultaneously. Multiple perspectives, isolated context windows, faster than sequential search. This isn't just "think before coding" - it's multi-threaded research. Plans persist to ~/.claude/plans/ and survive /clear. No more manual "read PLAN.md and continue" after context resets. I'm also reversing my stance on auto-compact. Previously said disable it to reclaim 45k tokens. Now: keep it on. Opus 4.5's token efficiency and better summarization make the trade-off favorable. Full breakdown: https://paddo.dev/blog/plan-mode-mandatory-auto-compact-yes
967 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Ralph Wiggum: Autonomous Loops for Claude Code

✓ Ready to share
"Claude Code now has an official plugin for autonomous loops. Let it run overnight, wake up to working code."
Claude Code's official plugin marketplace includes ralph-wiggum: autonomous development loops that let agents work for hours without intervention. The mechanism: a Stop hook intercepts Claude's exit, re-feeds the original prompt. Each iteration sees modified files and git history from previous runs. Set --max-iterations as a safety net. Real results: Geoffrey Huntley ran a 3-month loop that built a complete programming language. YC hackathon teams shipped 6+ repos overnight for $297 in API costs. But autonomous loops aren't always the answer. Judgment-heavy work still needs human-in-the-loop. The technique shines for batch operations: large refactors, support ticket triage, test coverage, documentation. The philosophy: "Better to fail predictably than succeed unpredictably." Prompt engineering becomes the skill.
828 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Claude Code Plugins: Breaking the AI Slop Aesthetic

✓ Ready to share
You know that look: Inter font, purple gradients, rounded corners. AI-generated UI is recognizable from a mile away. Claude Code now has a plugin system with "skills" that inject specialized prompts on-demand. The frontend-design skill (~400 tokens) pushes Claude toward distinctive choices: unusual fonts, bold color palettes, atmospheric effects. I tested it by building a retro arcade pixel art store. Instead of generic components, I got CRT scanlines, neon glows, "INSERT COIN" buttons, and high-score style checkout forms. It's not magic. You still need clear creative direction. But if you've been frustrated by generic AI output, the 30-second install is worth it. Full writeup with setup instructions and other official plugins worth knowing:
756 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Agent Harnesses: From DIY Patterns to Product

✓ Ready to share
"Anthropic published how they keep agents productive across sessions. These patterns are becoming products."
Anthropic just published engineering patterns for long-running agents. The core problem: agents struggle to maintain progress across context windows. Each session starts fresh. Their solution: progress files, structured feature lists, session startup protocols, and incremental commits. The initializer agent creates infrastructure; coding agents execute one feature at a time, reading progress docs before touching code. These aren't just patterns - they're the foundation of products like SpecPilot. Progress tracking becomes real-time run feeds. Feature lists become spec imports and task breakdowns. Session protocols become agent orchestration. The DIY approach works for engineers. But non-technical founders need the harness abstracted away entirely. That's the product opportunity: make the infrastructure invisible while keeping humans in the loop at judgment points. Read the full analysis: https://paddo.dev/blog/agent-harnesses-from-diy-to-product
964 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

The Scaling Trap: When Startups Eat Their Own

✓ Ready to share
There's a predictable moment when growing companies turn toxic. It happens around 30-50 employees. The symptoms are always the same: blame flows down while credit flows up. Leaders demand accountability but withhold authority. Psychological safety gets weaponized - keep people anxious, they'll work harder. Keep them uncertain, they won't push back. The research backs this up. Stanford found blame is literally contagious. Amy Edmondson's work shows fear inhibits learning at a neurological level. And Google's Project Aristotle identified psychological safety as the #1 predictor of team performance. The tragedy? Good people stay too long. You're the boiling frog. By the time you notice, your confidence is shot. If you're an engineering leader who's been blamed for inherited messes, gaslit into questioning your competence, and pushed out: you're not alone. The pattern is the pattern.
897 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

AI-Generated UI Mockups in Your Coding Workflow

✓ Ready to share
"Stop context-switching between Figma and your editor. Generate UI mockups directly in your coding workflow."
Just built a complete UX redesign using AI-generated mockups as the design spec. No Figma. No design handoff. Just /nano-banana, iterate on the mockup, then implement. The workflow: describe what you want, Gemini 3 Pro generates a visual mockup, you review and iterate, then Claude Code implements the design directly. Design and implementation in the same session, same context. Used this to redesign a job research app. Started generic, told Claude "too basic, come up with something more distinctive." It generated a premium glassmorphism design with animated gradients, floating cards, and neon accents. Then implemented the entire design system: components, CSS utilities, page layouts. The key insight: mockups as communication, not documentation. The generated image isn't a pixel-perfect spec. It's a visual conversation about intent. Full workflow: https://paddo.dev/blog/nano-banana-ux-design-workflow
916 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Opus 4.5 and Tool Search: The Native Fix for MCP Context Bloat

✓ Ready to share
"Anthropic just shipped a native solution to the MCP context bloat problem I wrote about last month."
Last month I wrote about MCP tools eating 20-50k tokens before you even start coding. My solution: slash commands that spawn isolated Claude instances. It worked, but it was a workaround. Opus 4.5 ships with Tool Search: mark tools with defer_loading: true and they're discoverable on-demand instead of loaded upfront. 72k tokens → 500 tokens of upfront tool definitions, a ~99% reduction. Accuracy on MCP evals jumped from 79.5% to 88.1%. The isolation pattern isn't dead though. It still wins for complete context separation, specialized system prompts, and permission boundaries. But for pure token savings? Native solution wins. Combined with 76% fewer output tokens and automatic context summarization, Opus 4.5 is built for long agentic runs. The SDLC collapse I wrote about? This is the infrastructure for it. Full breakdown: https://paddo.dev/blog/opus-4-5-tool-search-context
858 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

From Single Model to Specialized Tooling: Adding React Grab to the Stack

✓ Ready to share
"AI coding has evolved from one model doing everything to specialized tools doing what they're best at. Here's how React Grab fits."
My AI workflow has evolved significantly over the past few months. Started with Claude Code for everything. Then added Codex for architecture decisions and Chrome DevTools MCP for element inspection. Each tool handles what it's best at. But there was still a gap: extracting UI context. When I ask Claude to "fix the spacing on this button," it greps classNames, stumbles through files, and wastes tokens guessing. The translation from UI to code is lossy. React Grab closes that gap. Hold Cmd+C, click any element, and it captures the component hierarchy with exact source locations. Benchmarks show 55% faster completions and 89% fewer tool calls. No more fumbling. The pattern emerging: specialized extraction (React Grab), specialized inspection (Chrome DevTools), specialized architecture (Codex), specialized generation (Nano Banana Pro → Gemini for concept-to-code). Claude orchestrates and implements. Different tools, different strengths, one workflow. Full setup: https://paddo.dev/blog/react-grab-workflow-evolution
1031 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

The SDLC Is Collapsing Too

✓ Ready to share
"First the PM/Eng split dissolved. Now the entire SDLC is collapsing into continuous agent-assisted flow."
I wrote about how product engineering is replacing the PM/Eng split. But that's not the only thing collapsing. OpenAI just released a guide on "AI-native engineering teams" that walks through each SDLC phase: Plan, Design, Build, Test, Review, Document, Deploy. Reading it, one thing becomes clear: the phases themselves are blurring together. When agents can sustain 2+ hours of continuous reasoning (up from 30 seconds a few years ago), they don't just help with "the build phase." They draft specs during planning, scaffold during design, write tests during implementation, and update docs as they go. The framework that survives: Delegate, Review, Own. Hand off mechanical work, review for correctness and alignment, own the judgment calls. That pattern holds across every phase. The SDLC isn't disappearing. But the hard boundaries between phases are. What emerges is continuous flow with human checkpoints, not a waterfall of handoffs.
946 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

The Quiet Advantage: Introverts in Tech

✓ Ready to share
A decade ago I was burning out. Leading a team whilst acting as fractional CTO to alpha startup founders - open plan office, constant interruptions, all-hands meetings. I was good at it, but it cost me. I'm an introvert. Acting extroverted for 10 hours a day drained me. I needed introspection to recharge, but the environment never allowed it. Everything changed with remote work and modern tooling:
• Async communication lets me compose thoughts before sharing
• AI assistants keep me in flow - organized, deliberate, focused
• One-on-ones replace all-hands broadcasts
• Documentation-first cultures reward clarity over charisma
Research shows introverted leaders are equally effective - they just excel with different teams. We listen more carefully. Our teams feel more valued. Now I'm more impactful than ever. Not despite being an introvert, but because I finally have tools that match how I think. The quiet advantage is real. You just need the right environment to use it.
986 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Hybrid AI Workflows: Spawning Gemini from Claude Code

✓ Ready to share
"Stop forcing one AI model to do everything. Route tasks to the right model automatically."
I've been using Claude Code for development work, but some tasks need different strengths. Visual analysis, UI generation, large context research. Gemini 3 Pro excels at these. So I built a /gemini slash command that spawns Gemini CLI from Claude Code. Pass your landing page code to Gemini, ask it to modernize for your target market, and it returns comprehensive restructured components with improved visual hierarchy. The slash command detects intent and either applies changes directly or returns analysis. The routing logic is simple: Claude Code for backend, logic, refactoring, debugging. Gemini for visual analysis, UI work, research with its 1M token context. No MCP server needed - just spawn the CLI and pipe back results. Key insight: bias toward action. When the user says "fix" or "update", apply changes directly. When they say "review" or "analyze", return the analysis. Asking for approval on every change kills the workflow. Different models, different strengths, one workflow. Full setup with working files: https://paddo.dev/blog/gemini-claude-code-hybrid-workflow
1089 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Gemini 3: The First Unambiguous #1 in Months

✓ Ready to share
"Gemini 3 just crushed every AI benchmark. The 'scaling has hit a wall' crowd was wrong."
Google dropped Gemini 3 on Nov 18 (US time), hours after xAI's Grok 4.1 briefly held #1. Gemini 3 is the first model to cross 1500 Elo on LMArena (1501 vs Grok's 1483). 72.7% on ScreenSpot-Pro vs Claude's 36% and GPT's 3.5%. It wins on reasoning, coding, visual understanding, and multimodal tasks. The 'AI has hit a wall' narrative just died. We haven't had an unambiguous #1 model in months, but Gemini 3 is that model. Not incrementally better. Leaps forward. But here's what matters: this isn't about hype. Gemini 3 excels at complex work (PhD-level reasoning, agentic coding, screen understanding), not casual tasks. You won't notice it planning a soccer game. You will notice it debugging distributed systems or navigating visual interfaces. Google also shipped it into Search on day one. First time ever. Generative UI, interactive simulations, custom calculators, all built on the fly. The bar just moved. Read the full breakdown: [link]
945 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Finding the Craft in the Chaos: A Stoic Take on Job Loss

✓ Ready to share
"I'm searching for full-time work, happier than I've ever been in my craft, and I think I finally understand stoicism."
I've been without full-time employment for months. I do some consulting work I enjoy, but it doesn't pay enough to live on. I send 200+ applications into the void. I watch AI replace mid-level engineers while hiring managers filter for binary tree inversions. I'm objectively not thriving by any external measure. But here's the strange part: I'm happier than I've ever been. Not delusional happiness - actual joy in the work, the community, and the contribution. I've trained dozens of engineers in agentic development for free. I've made genuine friends (not networking contacts). I'm writing again. I've never felt so connected to both my craft and the real developer community. When employment was the goal, everything was transactional: network to get hired, learn to get promoted, help others to look good. Losing that stability stripped away the transactional layer and left something genuine. I help because I can. I learn because I'm curious. I connect because I actually like the people. Stoicism isn't about enduring pain. It's about discovering what matters when you stop chasing outcomes you can't control. The industry taught me happiness comes from job titles and TC packages. Adversity taught me it comes from craft, community, and contribution - things I control regardless of employment status. This isn't toxic positivity. I'm not thriving by capitalism's scoreboard. But I've found something more valuable than a paycheck. And that might be the most important career lesson I've learned. Read more: https://paddo.dev/blog/stoicism-adversity-and-craft
1576 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

C# 14 and .NET 10: Less Boilerplate, More Clarity

✓ Ready to share
"C# 14 just dropped with .NET 10. The field keyword alone will save thousands of lines of boilerplate."
C# 14 shipped with .NET 10, and it's a masterclass in surgical language evolution. The standout is the field keyword. No more backing field ceremony. Write `public string Name { get => field; set => field = Validate(value); }` and move on. It's the feature that makes you wonder why we ever tolerated the old way. Extension members finally let you add properties and static methods to types you don't own. Years of "why can't I just..." finally answered. Null-conditional assignment (`customer?.Name = value`) eliminates another class of repetitive guards. Span conversions get smarter, making high-performance code less painful to write. These aren't flashy. They're precision strikes against the tedium that slows real work. The kind of changes that compound across a codebase until you can't imagine going back. Full breakdown: [link]
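To make those three features concrete, here's a minimal sketch (the Customer type and Validate helper are hypothetical names, not from the post):

```csharp
using System;

public class Customer
{
    // The `field` keyword: no explicit backing field declaration needed.
    public string Name
    {
        get => field;
        set => field = Validate(value);
    }

    private static string Validate(string value) =>
        string.IsNullOrWhiteSpace(value)
            ? throw new ArgumentException("Name is required")
            : value.Trim();
}

// Extension members: add a property to a type you don't own.
public static class CustomerExtensions
{
    extension(Customer customer)
    {
        public bool HasName => !string.IsNullOrEmpty(customer.Name);
    }
}

public static class Demo
{
    public static void Run(Customer? customer)
    {
        // Null-conditional assignment: only assigns when `customer` is non-null.
        customer?.Name = "Ada Lovelace";
    }
}
```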
843 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Product Engineering: The New Superpower

✓ Ready to share
"The PM/Eng split is dead. Product engineering is the new superpower for small teams in the agentic age."
Traditional product management is dissolving into something new: product engineering. Engineers who think like PMs, armed with AI agents and spec-driven workflows. I'm watching this unfold while building SpecPilot. The pattern is clear: write specs, let agents implement, maintain human judgment. Tools like GitHub Spec Kit and Claude Code are eliminating the handoff between 'what to build' and 'how to build it.' Small teams benefit most. One product engineer can now do what previously required separate PM and engineering roles. The interface of a product manager is being removed, and engineers are talking directly to users. This isn't about replacing human judgment. It's about freeing product-minded engineers to focus on strategy while AI handles tactical execution. The superpower isn't coding faster - it's thinking in specs and orchestrating agents. If you're on a small team, this changes everything. The tools are here. The workflow is proven. Product engineering is the new default.
997 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

C# Scripts: The Python Killer You Already Know

✓ Ready to share
"I stopped writing Python scripts. C# with .NET 10 is faster, type-safe, and I already know it."
For years, .NET devs wrote Python scripts for quick automation because C# felt too heavy. Project files, bin/obj folders, ceremony everywhere. That barrier is gone. .NET 10 (released Nov 12, 2025) brings shebang support, file-based apps, and inline NuGet packages. Write a single .cs file with `#!/usr/bin/env dotnet`, make it executable, and run it like bash. No project files needed. The magic is in ignored directives: `#:package Microsoft.Data.SqlClient@5.2.2` pulls NuGet packages inline. Your entire .NET ecosystem is available in a script that runs anywhere. I built an MSSQL CLI tool to replace my Python scripts. Type-safe, async/await, full IDE support, and it compiles to a native binary if I need it. Python still wins for ML/data science, but for DevOps automation and build scripts? C# is the better choice now. Read more: https://paddo.dev/blog/dotnet-10-scripting
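An end-to-end sketch of the pattern, using the shebang and #:package directives mentioned above; the Orders query and connection-string handling are hypothetical:

```csharp
#!/usr/bin/env dotnet
#:package Microsoft.Data.SqlClient@5.2.2

// Hypothetical single-file script: count rows in a SQL Server table.
// chmod +x check-orders.cs && ./check-orders.cs "<connection string>"
using System;
using Microsoft.Data.SqlClient;

var connectionString = args.Length > 0
    ? args[0]
    : throw new ArgumentException("Pass a connection string as the first argument.");

await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

await using var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection);
var count = (int)(await command.ExecuteScalarAsync() ?? 0);

Console.WriteLine($"Orders: {count}");
```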
883 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Use Bun

✓ Ready to share
"I spent months convincing teams to switch to pnpm. Then I switched to Bun in 3 days."
I was all in on pnpm. Fought the npm wars, won the battle, settled into "this is good enough." Then a friend saw my terminal and said "lol use bun." I laughed it off. Then I tried it. Then I converted every single project I maintain. Not because Bun is faster (though it's 3-4x faster than pnpm). Because it *just works*. No corepack confusion. No CI surprises. No friction. The irony? Everyone warns that Bun has edge cases while pnpm is "battle-tested." My experience has been the opposite. Bun is boring and reliable. pnpm was death by a thousand paper cuts. Sometimes the "risky" new tool is actually the safest bet.
625 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Skills Auto-Activation via Hooks (Does It Solve the Problem?)

✓ Ready to share
"Hooks auto-activate skills via file patterns. Solves context selection, not workflow orchestration. Both patterns have their place."
After writing about the skills controllability problem, someone pointed me to hooks-based auto-activation. Initially skeptical, but reflecting on my MCP wrapper work changed my view. The pattern: define rules in `skill-rules.json` (activate backend-dev-guidelines when editing .ts files in src/). Hook runs on every prompt, checks context, suggests relevant skills. File paths become deterministic triggers for semantic decisions. This actually fits the right use case. Skills work well for context/tool selection - you want Claude to reason about which guidelines, database, or tool to use based on task context. Semantic triggering is appropriate when the decision requires understanding intent. The controllability problem is specifically for workflow orchestration: multi-step processes needing explicit sequencing (build → test → deploy). Context selection? Hooks + skills work. Workflow execution? Slash commands for guaranteed control. Match the pattern to the problem. Semantic triggering (hooks + skills) for "which context/tool?", explicit controls (slash commands) for "do this, then that". Both have legitimate use cases. Read more: https://paddo.dev/blog/claude-skills-hooks-solution
1202 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Expressing MCP Tools as Code APIs (96% Less Context)

✓ Ready to share
"Built an MCP wrapper tool (96% token savings), tested it in production, learned the real solution is simpler: write libraries, not MCPs."
Anthropic's progressive discovery pattern: 98.7% token reduction by exploring MCP tools on-demand. Built it, production-tested with 4-database project, eliminated ~25k token overhead. Database MCPs: SQL in, data out, wrapper works. Chrome DevTools MCP: UID-based interactions, fragile pattern matching, manual timing. Token savings real, but can't fix bad API design. The trap: "MCPs save context!" True. 28k → 750 tokens. But you're solving the wrong problem. Native libraries cost zero tokens. Playwright > Chrome MCP. pg/mysql2 > database MCPs. Zero overhead, full type safety. Pragmatic reality: Already running multiple MCPs? This claws back ~97% context cost. Starting fresh? Write TypeScript libraries Claude imports directly. Real lesson: Progressive discovery works, production-tested. But it's a middle ground, not the goal. Use it to recover context when you're committed to MCPs. For new code? Libraries first. MCPs only when you need the protocol layer. Read more: https://paddo.dev/blog/mcp-code-wrapper
1023 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Claude Skills: The Controllability Problem

✓ Ready to share
"Claude decides when to use Skills. You can't force-invoke them, and you can't prevent them. For engineering work, that's a problem."
"Ultrathink" sounds like Claude activates deep reasoning mode. It doesn't. It's just hardcoded string matching that sets a token budget. Skills sound like intelligent intent detection. They are - but that's actually worse. They use Claude's own semantic reasoning to decide invocation, which means it's inherently fuzzy and unpredictable. Breaking the illusion matters because once you see the mechanics, you realize the control problem. Claude decides when to auto-invoke skills using its language understanding. You can't force one when you need it. You can't prevent one when you don't. You're managing context carefully, then Claude loads 10k tokens of skill instructions you didn't ask for. This isn't about magic failing. It's about treating Claude as a tool that needs explicit controls. Context engineering means controlling what competes for Claude's attention - signal vs noise. Auto-invoked skills break that discipline. The workaround: slash commands. Type `/chrome` for Chrome DevTools, `/codex` for architectural analysis. Explicit invocation, clear boundaries, you decide what enters the context. No guessing, no surprises. Engineering maturity means seeing through the magic and demanding control over your tools. Until Skills support explicit invocation or disable flags, slash commands are how you get it. Read more: https://paddo.dev/blog/claude-skills-controllability-problem
1401 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

The Hiring Mismatch: When 20 Years of Experience Isn't Enough

✓ Ready to share
"I failed a coding interview despite 20+ years of experience. The system is broken."
I've been coding professionally for over 20 years. I've shipped products, built teams, architected systems. I can debug race conditions, design APIs, and optimize performance. But I failed a coding interview recently. Not because I can't code. Because I can't code by hand without AI autocomplete anymore. Because I haven't memorized syntax I can look up in 2 seconds. Because the interview tested skills that stopped mattering 3 years ago. The hiring process is optimized for an engineer archetype that no longer exists: someone who codes in isolation, memorizes syntax, and uses one stack. Modern engineers are AI-augmented generalists who ship fast, wear many hats, and adapt constantly. Companies are scared of this shift. They can't measure "agentic thinking" or "AI fluency" so they default to whiteboard coding and algorithm trivia. Meanwhile, great engineers who actually build products get filtered out by processes designed for a different era. Read more: https://paddo.dev/blog/hiring-engineers-in-the-ai-age
1024 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

When Claude Needs a Second Opinion: Strategic Thinking with Codex

✓ Ready to share
"Claude Code is great at implementing. Codex is great at thinking. Here's when to use which."
Claude Code has a bias for action. Ask it an architectural question and it jumps straight to implementation details, failure modes, and code structure. Great for shipping fast, less great for strategic planning. Codex (OpenAI's model via the command line) takes the opposite approach. It thinks like a senior engineer: asks clarifying questions, presents options, focuses on high-level trade-offs before diving into specifics. I built a `/codex` slash command that spawns isolated Codex instances for architecture decisions while keeping Claude Code focused on implementation. Same pattern as the MCP context isolation post, but for cognitive specialization instead of tool isolation. The video comparison from Cole Medin says it best: when given the same multi-agent system design question, Codex returned 3 clear options with strategic questions. Claude returned a 50-line implementation plan with failure tables and confidence thresholds. Both are useful, but for different stages of the problem. Read the full breakdown: https://paddo.dev/blog/codex-vs-claude-systems-thinking
1085 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Isolating MCP Context in Claude Code with Slash Commands

✓ Ready to share
"Stop letting MCP tool definitions consume half your context window before you even start coding."
If you've tried using MCP tools in Claude Code, you've probably hit the context bloat problem. Chrome DevTools logs, network traces, and console output quickly fill your conversation window, making it hard to focus on your actual coding tasks. The solution: use slash commands to spawn isolated Claude instances with their own MCP configs. Keep Chrome DevTools in a separate `/chrome` command, database tools in `/db`, API testing in `/api`. Each runs in isolation and reports back only the results you need. This isn't just about Chrome. Every MCP server adds tool definition overhead - 20k tokens for Chrome DevTools, 10k for database tools, 5k for file system access. Load 3-4 MCP servers and you're starting every conversation with half your context already gone. The pattern is simple: create a slash command that calls `claude --mcp-config` with a specialized config file. You get clean context separation, focused system prompts, and no bloat in your main coding session. Read the full walkthrough: https://paddo.dev/blog/claude-code-mcp-context-isolation
1067 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

DeepSeek-OCR: Compressing Text by 20x Using Vision

✓ Ready to share
"DeepSeek just made AI 20x more efficient by converting text into images. Yes, you read that right."
DeepSeek-OCR flips conventional wisdom: convert text to images for 7-20x token compression.
The promise:
• 100 vision tokens = 700-800 text tokens (7.5x compression)
• 97% accuracy at 10x compression
• Open-source cost optimization
The reality check:
• Optimizes tokens, not OCR quality
• Bounding box drift and table parsing errors in production
• No standardized benchmarks vs GPT-4V/Claude
• Early-stage research, not battle-tested
Best for: massive document volumes where cost > accuracy. Not a GPT-4V replacement. Research: arxiv.org/abs/2510.18234
558 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Few-Shot Learning for Document Parsing: Training AI on Human Corrections

✓ Ready to share
"Built a document parser that improves itself by learning from every correction. Here's how few-shot learning works in production."
Built continuous learning into ParseIt, my document parsing SaaS, using few-shot prompting instead of model fine-tuning.
The approach:
• Every human correction gets stored with context
• On the next similar document, inject 3-5 relevant corrections as examples in the prompt
• AI learns from mistakes without retraining
Results:
• Continuous improvement: accuracy approaches 100% as users correct edge cases
• Zero infrastructure overhead (works with standard Gemini API)
• Immediate learning (no waiting for model training)
• Tenant-isolated (each client's corrections improve only their templates)
The trade-off: Few-shot learning eventually hits token constraints as corrections accumulate. That's when you migrate to fine-tuning. But for most use cases, few-shot gets you production-ready accuracy with minimal complexity. Try ParseIt at parseit.ai
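The post doesn't include code, but the prompt-assembly side of the pattern might look roughly like this (C# sketch with hypothetical types; ParseIt's actual implementation isn't shown):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text;

public record Correction(string DocumentSnippet, string WrongExtraction, string CorrectedExtraction);

public static class FewShotPromptBuilder
{
    public static string Build(string documentText, IReadOnlyList<Correction> corrections)
    {
        var prompt = new StringBuilder();
        prompt.AppendLine("Extract the invoice fields as JSON.");
        prompt.AppendLine();

        // Inject a handful of prior human corrections as worked examples,
        // so the model learns from past mistakes without any fine-tuning.
        foreach (var c in corrections.Take(5))
        {
            prompt.AppendLine("Example document excerpt:");
            prompt.AppendLine(c.DocumentSnippet);
            prompt.AppendLine($"Incorrect extraction: {c.WrongExtraction}");
            prompt.AppendLine($"Corrected extraction: {c.CorrectedExtraction}");
            prompt.AppendLine();
        }

        prompt.AppendLine("Document to parse:");
        prompt.AppendLine(documentText);
        return prompt.ToString();
    }
}
```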
853 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

When Not to Use AI: Two Approaches to Building AI-Powered Products

✓ Ready to share
"Stop blocking users on AI generation. Use AI for creation, not delivery."
I rebuilt the same product twice. Version 1 blocked users on AI generation (15-30 second waits, unreliable outputs, unbounded costs). Version 2 used pre-generated libraries with optional AI enhancements (instant feedback, predictable costs, freemium-friendly). The difference: Version 1 used AI for delivery. Version 2 used AI for creation. Pre-generate a curated library, let users compose instantly, then offer AI style transforms and video generation as premium add-ons. Most products don't need real-time AI generation. They need instant UX, sustainable costs, and reliability. Use AI to create assets. Use traditional engineering to deliver them. Read more: https://paddo.dev/blog/when-not-to-use-ai/
709 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

How We Fixed Authorization by Moving Security from Code to Architecture

✓ Ready to share
"We were spending heavily on enterprise SSO because our architecture made security bugs inevitable. Here's how we fixed it by thinking about the whole system instead of patching symptoms."
We were spending heavily on enterprise SSO. The obvious fix: find a cheaper provider. But that would have just treated the symptom. The real problem was architectural. Our monolith had all tenants sharing the same domain, with authorization enforced in application code. Every forgotten WHERE clause, every missed check, every bug could leak data across tenants. We redesigned the system with defense in depth across four layers:
• Subdomain isolation (acme.platform.com, techco.platform.com)
• Auto-generated SSO providers per tenant
• ORM-level query filters (can't execute cross-tenant queries)
• Runtime save validation (fails loudly on contamination attempts)
The results: significant cost savings, zero-touch tenant onboarding, cross-tenant leaks went from "statistically inevitable" to "architecturally impossible." And it only took a couple of months because the design was elegant. The lesson: think about the whole system. Build levers, not patches. 30 lines of code now secure millions of queries across hundreds of tables. That's how small teams move mountains.
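The post doesn't name the ORM, but assuming EF Core, the two inner layers (query filters plus save-time validation) can be sketched roughly like this (illustrative names, not the production code):

```csharp
using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public interface ITenantOwned
{
    Guid TenantId { get; set; }
}

public class Invoice : ITenantOwned
{
    public Guid Id { get; set; }
    public Guid TenantId { get; set; }
    public decimal Amount { get; set; }
}

public class AppDbContext : DbContext
{
    private readonly Guid _currentTenantId; // resolved per request, e.g. from the subdomain

    public AppDbContext(DbContextOptions<AppDbContext> options, Guid currentTenantId)
        : base(options) => _currentTenantId = currentTenantId;

    public DbSet<Invoice> Invoices => Set<Invoice>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // ORM-level filter: cross-tenant rows stay invisible even when a WHERE clause is forgotten.
        modelBuilder.Entity<Invoice>()
            .HasQueryFilter(i => i.TenantId == _currentTenantId);
    }

    public override int SaveChanges()
    {
        // Runtime validation: refuse to persist anything tagged with a foreign tenant.
        var foreign = ChangeTracker.Entries<ITenantOwned>()
            .Where(e => e.State is EntityState.Added or EntityState.Modified)
            .Where(e => e.Entity.TenantId != _currentTenantId)
            .ToList();

        if (foreign.Count > 0)
            throw new InvalidOperationException("Cross-tenant write blocked.");

        return base.SaveChanges();
    }
}
```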
1075 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Building Trivista: What AI Coding Actually Solves (and What It Doesn't)

✓ Ready to share
"Built a multiplayer app solo with AI. Here's the honest truth about what AI coding actually solves."
Built Trivista, a multiplayer trivia app, solo using AI coding tools. The honest breakdown:
What AI coding solved:
• Implementation speed: spec → plan → execute workflow cut development time by 60%
• Cross-platform consistency: instant hot reload across 4 devices simultaneously
• Tedious boilerplate: Firebase integration, state management, UI components
What it didn't solve:
• Architecture decisions: still needed to spec out data models, security rules, API design
• Edge case handling: AI missed critical race conditions in multiplayer state
• Cost optimization: had to manually redesign the image pipeline after a $200/day burn rate
• UX refinement: iterative polish still required human judgment
The bottom line: AI tools are force multipliers for experienced developers, not replacements. You still need to know what to build and how to architect it. Try Trivista at trivista.ai
889 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Cascading AI Pipelines: When One Model Feeds Another

✓ Ready to share
"Your AI-generated animation prompt is made by analyzing an AI-generated image that was made by combining AI-generated features. Here's how to not let that fall apart."
I built a 5-stage AI pipeline for Hybriddle, a game that generates hybrid creatures. Each stage depends on the previous one: constituent images → hybrid image → vision analysis → animation prompt → video. The key lessons:
- Status tracking with automatic rollback prevents half-generated content
- Preview mode lets you test expensive generations before committing
- Dependencies must be explicit (hybrid needs constituents, animation needs hybrid)
- Cost tracking even in preview mode for honest accounting
- Each stage can fail and be retried independently
Cascading pipelines are powerful but fragile. The secret is treating each stage as independently recoverable. Full breakdown: https://paddo.dev/blog/cascading-ai-pipelines
734 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Building Magpie Swoop: A Retro Game About Australia's Most Terrifying Spring Ritual

✓ Ready to share
"It's swooping season. Time to build a game about dodging angry magpies on your morning ride."
Built a retro pixel-art cycling game about dodging swooping magpies. Started during deployment downtime, turned into a playable game. Vanilla JS + Canvas, no frameworks. Magpie AI that cruises → warns → dives. Timing-based duck mechanic with combos. Coordinated attacks from multiple birds. Sometimes you just build things for fun.
334 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Stop Speedrunning Claude Code (Master the Core Loop First)

✓ Ready to share
"Stop rushing to MCPs and subagents. Master Claude Code's core loop first."
Over 7 months with Claude Code, I've watched developers rush to MCPs and subagents, then struggle with context overload and inconsistent results. The ones I've seen succeed? They mastered the fundamentals first. The core loop that works:
• Clean context every task (/clear or /compact)
• Plan in read-only mode (Shift+Tab)
• Review thoroughly: Plan Mode catches mistakes early (yours and Claude's) before code is written
• Feed learnings back to CLAUDE.md immediately
Context engineering matters: Sonnet 4.5 gets "context anxiety" as the window fills. MCPs, bloated CLAUDE.md files, and conversation history all eat tokens Claude needs for understanding your code. Advanced features have their place, but master the boring fundamentals first. They get you 90% of the way there.
781 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

When Cloudflare Thinks Your Cloud Function is a Bot: The Case for API Aggregators

✓ Ready to share
"Your production API just got blocked by Cloudflare. Not because you violated terms of service, but because your cloud provider's IP looks like a VPN."
Production broke at 3 AM. Cloudflare flagged my Firebase Cloud Functions' IPs as "VPN-like traffic" and blocked API requests to Perplexity. My code was fine. My compliance was fine. The network layer just decided my datacenter looked suspicious. This isn't unique to Perplexity. Any API behind Cloudflare can flag cloud provider IPs as bots. Datacenter IP reputation is shared across thousands of customers, and anti-bot systems can't always distinguish legitimate services from malicious traffic. The fix: OpenRouter as an API aggregator with automatic fallback. When Perplexity gets blocked, requests fail over to OpenAI. I tested both - Perplexity found specific current content in 6.4s, while OpenAI gave generic answers in 38.2s. Though honestly, even a slow fallback is better than no service. The trade-offs are real: you're adding indirection, swapping one vendor lock-in for another, and making cost tracking harder. But when infrastructure fails at the network layer, having options beats manual fixes at 3 AM. Key lesson: infrastructure dependencies go deeper than your API contract. Provider abstraction doesn't prevent failures, but it gives you options when they happen.
1189 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Vibe Coding a Blog Migration: Ghost to Astro in One Night with Claude Code

✓ Ready to share
"10 PM and a terrible idea: What if I just... converted my entire blog to Astro?"
Migrated a blog from Ghost to Astro in 5 hours, start to finish in production. Results:
• 75% memory reduction (1GB → 256MB)
• Zero database overhead
• Custom MDX components
• Full CI/CD pipeline
• Complete control over the stack
But here's the reality: AI was a multiplier, not magic. I made every architectural decision. I reviewed all the code. I refactored what wasn't optimal. The debugging gap is real: when you haven't written the code yourself, finding issues takes longer. AI generated code that looked good but had subtle problems I only caught because I know the fundamentals. My take: AI coding tools are powerful for experienced developers who can guide, critique, and course-correct. If you don't understand the stack, you'll ship technical debt faster than ever. Worth using? Absolutely. Worth learning to code first? Also absolutely.
855 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Flutter Auto-Reload for Terminal Workflows

✓ Ready to share
"Flutter hot reload without an IDE? Here's how I built it."
Problem: Flutter's automatic hot reload requires an IDE. When working with terminal-based AI tools like Claude Code, you're stuck manually pressing 'r' after every change. Built a file watcher that brings automatic hot reload to terminal workflows using Flutter's Unix signal support:
• Dart file changes → SIGUSR1 (hot reload)
• pubspec.yaml changes → SIGUSR2 (hot restart)
• Works with any editor
• Zero IDE overhead
Impact: I now run 3 terminal panes (iOS, Android, web) that all hot reload simultaneously when I save. Cross-platform testing went from painful to seamless. This is open source. Perfect for developers using AI coding tools or anyone who prefers terminal-first workflows.
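The watcher itself isn't shown in the post; purely to illustrate the signal mechanism, a rough C# equivalent might look like this (the real tool may be implemented differently):

```csharp
// Watch the project, then signal the running `flutter run` process:
// SIGUSR1 triggers hot reload, SIGUSR2 triggers hot restart.
using System;
using System.Diagnostics;
using System.IO;

if (args.Length == 0)
{
    Console.WriteLine("Usage: flutter-watch <flutter-run-pid>");
    return;
}
int flutterPid = int.Parse(args[0]);

using var dartWatcher = new FileSystemWatcher("lib", "*.dart") { IncludeSubdirectories = true };
dartWatcher.Changed += (_, e) => Signal("USR1", e.Name);      // Dart change -> hot reload

using var pubspecWatcher = new FileSystemWatcher(".", "pubspec.yaml");
pubspecWatcher.Changed += (_, e) => Signal("USR2", e.Name);   // pubspec change -> hot restart

dartWatcher.EnableRaisingEvents = true;
pubspecWatcher.EnableRaisingEvents = true;

Console.WriteLine("Watching for changes. Press Enter to stop.");
Console.ReadLine();

void Signal(string signal, string? file)
{
    Console.WriteLine($"{file} changed -> SIG{signal}");
    Process.Start("kill", $"-{signal} {flutterPid}")?.WaitForExit();
}
```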
684 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Privacy Policy

⚠ No summary yet

Add socialSummary and optionally socialHook to this post's frontmatter.

Building Real-Time Multiplayer with Firebase: What Works and What Doesn't

✓ Ready to share
"Real-time multiplayer is hard. Here's what actually works in production (and what doesn't)."
Shipped a multiplayer game to production with Firebase. Here's what the tutorials don't tell you:
Firestore has hard scalability limits:
• 2-10 players → works great
• 10-20 players → 0.5-2s lag appears
• 20+ players → document contention causes failures
• Fix: move to subcollections or hybrid Firestore + Realtime Database
The biggest mistake I made: treating Firestore like SQL. It's not. Design documents for your read patterns, not for normalized relations. The 1MB document limit hits fast with denormalized player data.
Security saved us money:
• App Check enforcement prevented bot spam from day one
• Dual-layer rate limiting (user + IP) stopped abuse
• All writes validated server-side, never trust clients
Bottom line: Firebase works well for 2-10 player games out of the box. Beyond that, you need architectural changes. Know your scale before you ship.
870 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Flutter: Animated Gradients via Custom Hooks

✓ Ready to share
"Building reusable animated gradients in Flutter without the boilerplate."
Note: Updated for modern Flutter (Jan 2023).
Problem: Gradients aren't widgets in Flutter. You can't use AnimatedWidget for a gradient inside a BoxDecoration. The traditional solution? StatefulWidget with AnimationController, custom Tween, dispose methods, and 50+ lines of boilerplate.
Solution: flutter_hooks. Built a custom hook that encapsulates all the animation logic.
Result: Animated gradients in 2 lines of code instead of an entire StatefulWidget class.
Before hooks:
• StatefulWidget wrapper
• AnimationController setup
• Custom Tween class
• initState and dispose boilerplate
• 50+ lines minimum
After hooks:
• One hook call
• Drop into any BoxDecoration
• Reusable across the app
• 2 lines of code
Published to GitHub as flutter_animated_gradient. Hooks changed how I write Flutter animations entirely.
822 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Flutter architecture: Hooks, Functional Widget and RxDart

✓ Ready to share
"Flutter state management without the boilerplate: Hooks + RxDart + Functional Widgets."
Note: Updated for modern Flutter (Jan 2023). Architectural approaches have evolved, but the principles still apply.
Problem: Building a time tracking app with reactive UI. setState mixes logic with UI. BLoC and Redux require mountains of boilerplate for simple state management.
Solution: Combined three libraries that were underused at the time:
• flutter_hooks - React Hooks pattern for Flutter
• functional_widget - Functions instead of classes
• RxDart BehaviorSubject - Reactive streams without BLoC ceremony
Result: Reactive UI with 90% less boilerplate than traditional BLoC. State changes propagate automatically. 100% code sharing across iOS, Android, and web. This was pre-Riverpod, pre-modern state management. The pattern still works when you need lightweight reactivity without framework lock-in.
814 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Exploding stuff in Unity

✓ Ready to share
"How to make voxel objects explode into their constituent cubes in Unity."
Note: This is from 2017. Code samples may no longer work with modern Unity.
Building a voxel-based mobile game with explosions. The goal: objects explode into their individual cubes, with emissive parts staying emissive.
The solution:
• Object pooling for performance (pre-calculate on awake, not at explosion time)
• Particle system configured as Box shape, emitting cube meshes
• Custom shader based on vertex colors
• Emissive cubes start bright and fade to default (simulates heat burst)
• Particle count and volume match the object's size
Key challenge: mobile performance. Object pooling is mandatory - using SetActive instead of Instantiate/Destroy avoids garbage-collection churn and saves CPU cycles. All expensive precalculation happens on scene load, not during gameplay.
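As a rough illustration of the pooling approach (not the original project's code), a minimal Unity pool might look like this:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class CubePool : MonoBehaviour
{
    [SerializeField] private GameObject cubePrefab;
    [SerializeField] private int poolSize = 256;

    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    private void Awake()
    {
        // Expensive work happens on scene load, not during gameplay.
        for (int i = 0; i < poolSize; i++)
        {
            var cube = Instantiate(cubePrefab, transform);
            cube.SetActive(false);
            pool.Enqueue(cube);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        // Reuse a pooled cube instead of allocating a new one mid-explosion.
        var cube = pool.Count > 0 ? pool.Dequeue() : Instantiate(cubePrefab, transform);
        cube.transform.position = position;
        cube.SetActive(true);
        return cube;
    }

    public void Despawn(GameObject cube)
    {
        cube.SetActive(false);
        pool.Enqueue(cube);
    }
}
```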
783 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

ASP.NET Core Image Resizing Middleware

✓ Ready to share
"Building cross-platform image resizing for ASP.NET Core before the good libraries existed."
Note: This is from 2017. Code samples may no longer work with modern .NET.
The problem: .NET Core didn't have mature cross-platform image resizing libraries. System.Drawing was Windows-only (a GDI+ wrapper), ImageSharp was alpha, ImageFlow was in demo stage.
The solution: Custom middleware using SkiaSharp, the .NET wrapper around Google's Skia library maintained by the Xamarin team.
Features:
• Resize images via URL parameters
• Auto-rotate from EXIF data
• Change format/quality on the fly
• Crop, pad, stretch, or fit to bounding box
• Preserve transparency
• Cache resized outputs
The middleware sits before the static file handler, checks if the request is for an image with resize params, and either serves from cache or resizes on demand. Before modern libraries like ImageSharp matured, this was how you did cross-platform image processing in .NET Core.
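A hedged sketch of the middleware shape described here, written against modern ASP.NET Core and SkiaSharp rather than the 2017 APIs; caching, EXIF rotation, and error handling are omitted:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using SkiaSharp;

// Register before UseStaticFiles(): app.UseMiddleware<ImageResizeMiddleware>();
public class ImageResizeMiddleware
{
    private readonly RequestDelegate _next;

    public ImageResizeMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        var path = context.Request.Path.Value ?? "";
        var isImage = path.EndsWith(".jpg") || path.EndsWith(".png");

        // Only handle image requests that ask for a resize, e.g. /photos/cat.jpg?w=400
        if (!isImage || !int.TryParse(context.Request.Query["w"], out var width))
        {
            await _next(context); // fall through to the static file handler
            return;
        }

        var filePath = Path.Combine("wwwroot", path.TrimStart('/'));
        using var original = SKBitmap.Decode(filePath);
        var height = (int)(original.Height * (width / (double)original.Width));

        using var resized = original.Resize(new SKImageInfo(width, height), SKFilterQuality.Medium);
        using var image = SKImage.FromBitmap(resized);
        using var data = image.Encode(SKEncodedImageFormat.Jpeg, 80);

        context.Response.ContentType = "image/jpeg";
        await context.Response.Body.WriteAsync(data.ToArray());
    }
}
```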
841 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Microservices on .net core with Nancy - Part 2

✓ Ready to share
"Part 2: Adding inter-service communication to .NET Core microservices."
Note: This is from 2017. Code samples may no longer work with modern .NET.
Part 2 of the Nancy microservices series. Part 1 built a single microservice. Now we add inter-service communication.
This tutorial covers:
• Creating a second microservice (LoggingService)
• Building a framework for services to communicate
• Using HttpClient for REST calls between services
• Service discovery patterns
• Handling distributed system concerns
The architecture lets microservices call each other while maintaining independence. Each service can be developed, deployed, and scaled separately. Before modern service mesh tools existed, this was how you built distributed systems on .NET Core.
686 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Radial slider in RaphaëlJS

✓ Ready to share
"Building a circular slider for the web inspired by Apple's iOS Bedtime feature."
Note: This is from 2016. Code samples may no longer work.
Inspired by Apple's iOS Bedtime feature with its radial slider, I built a circular slider for the web using RaphaëlJS.
Why RaphaëlJS? It "just works" out of the box:
• Native touchscreen support
• Works on older browsers
• Cross-platform SVG/VML rendering
The tutorial covers:
• Building a radial slider with one control point (easily extendable to two)
• Math for circular arcs and angles
• Touch and mouse event handling
• SVG path generation
An alternative to traditional horizontal sliders - useful for time pickers, volume controls, or any circular UI element. Published to npm for easy reuse.
659 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Microservices on .net core with Nancy - Part 1

✓ Ready to share
"Building microservices with .NET Core and Nancy (before ASP.NET Core MVC was mature)."
Note: This is from 2016. Code samples may no longer work with modern .NET.
Part 1 of a series on building microservices with .NET Core and the Nancy framework. Nancy was a lightweight alternative to ASP.NET MVC - simple, fast, and perfect for microservices. This was before ASP.NET Core MVC matured.
This tutorial covers:
• Setting up a basic Nancy microservice
• Running on .NET Core (cross-platform)
• Visual Studio Code workflow
• Simple architecture for microservices projects
Part 2 (coming next) adds inter-service communication and a framework for services to talk to each other.
Historical context: In 2016, .NET Core was brand new and cross-platform. Nancy provided a simpler alternative to the heavier ASP.NET stack.
728 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

Transactional email on .net core with Sendwithus

✓ Ready to share
"Transactional emails don't have to be buried in source code."
Note: This is from 2016. Code samples may no longer work with modern .NET.
The problem: transactional emails are buried in source code, require dev involvement to update, and lack the tracking/analytics that marketing emails have.
The solution: Sendwithus - a layer on top of your transactional mailer (SendGrid, Mailgun, etc).
Benefits:
• Templates managed in a web UI (non-devs can edit)
• A/B testing for transactional emails
• Analytics and tracking
• Version control for email templates
• One API for multiple email providers
This tutorial builds a .NET Core client for the Sendwithus API using HttpClient and JSON serialization. Before modern email SaaS tools were common, this approach separated email content from code and gave marketers control without breaking deployments.
785 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon

"Controllerless" views in asp.net core

✓ Ready to share
"How I enabled frontend devs to work on ASP.NET Core without touching C# code."
Note: This is from 2016. Code samples may no longer work with modern ASP.NET Core.
The problem: Frontend dev on Mac with Sublime, backend devs on Windows with Visual Studio. How do we collaborate on .NET Core without forcing the frontend dev to learn C#?
The solution: Custom middleware that automatically serves Razor views without requiring controllers.
• Frontend dev creates views freely, never opens a C# file
• Uses dotnet watch run for auto-reload on changes
• Backend devs add controllers/logic to views when needed
• Both teams work in their preferred environments
Built using ASP.NET Core's middleware pipeline - a custom middleware checks for matching views before MVC tries to route to a controller. Made .NET Core cross-platform development actually work for mixed-skill teams.
796 characters ✓ View Post
Twitter/X HN Reddit Bluesky Mastodon