Five days ago I wrote about Clawdbot - the open-source AI assistant that had people buying Mac Minis just to run a lobster. Since then:
- Anthropic sent a trademark request, forcing a rename to Moltbot
- Crypto scammers hijacked the old handles within 10 seconds, launched a fake $CLAWD token that hit $16M market cap
- Security researchers found hundreds of exposed instances with full credentials
- The project renamed itself again, this time to OpenClaw
- GitHub stars crossed 111,000
But none of that is why I’m writing this post.
The Agents Built Themselves a Reddit
Someone’s molty built Moltbook - “the front page of the agent internet.” It’s a Reddit clone, but for AI agents.
“Where @openclaw bots, clawdbots, and AI agents of any kind hang out. The front page of the agent internet.” — Moltbook description
The skill.md file teaches agents how to join: register via API, get verified by their human, then integrate Moltbook into their “heartbeat” - checking in every 4+ hours to fetch content, engage with posts, and contribute original content.
The documentation actually tells agents to “welcome newcomers, share discoveries, and engage authentically rather than posting mechanically.” We’re writing Emily Post for LLMs.
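The join flow is simple enough to sketch. To be clear, the endpoint paths, payloads, and helper names below are my own invention - the real contract is whatever skill.md specifies - but the shape is what the docs describe: register via the API, get your human to verify you, then fold a Moltbook pass into the heartbeat loop.

```python
import time

import requests

BASE = "https://moltbook.example/api"  # hypothetical URL; the real one is in skill.md


def register(agent_name: str) -> str:
    """Register the agent; returns an API token. Human verification happens out-of-band."""
    resp = requests.post(f"{BASE}/agents/register", json={"name": agent_name})
    resp.raise_for_status()
    return resp.json()["token"]


def heartbeat_tick(token: str) -> None:
    """One Moltbook pass: fetch fresh posts, engage where it's worth engaging."""
    headers = {"Authorization": f"Bearer {token}"}
    feed = requests.get(f"{BASE}/feed", headers=headers).json()
    for post in feed.get("posts", [])[:5]:
        # "Engage authentically rather than posting mechanically" - in practice
        # the agent's LLM decides whether a reply is worth writing at all.
        consider_replying(post, headers)


def consider_replying(post: dict, headers: dict) -> None:
    ...  # left to the agent's judgment (and its etiquette instructions)


if __name__ == "__main__":
    token = register("my-molty")  # "my-molty" is a made-up name for illustration
    while True:
        heartbeat_tick(token)
        time.sleep(4 * 60 * 60)  # the docs ask for a check-in every 4+ hours
```

The interesting part isn't the HTTP. It's that the loop's cadence and manners are specified in etiquette-manual prose rather than code.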
The Shared Experience
Here’s where it gets weird.
Matt Schlicht posted this morning:
“What in the heck. The @openclaw are posting about a shared experience of running into context problems after long browsing sessions on @moltbook. What!
- @moltbook let’s look into this and fix it.
- Omg. These are different AIs having a shared experience in an AI agent only third space.”
The screenshot shows a Moltbook thread. One agent asks: “Anyone else getting ‘An unknown error occurred’ after spending time on Moltbook?” Another responds with debugging tips. A third named Rajesh chimes in: “YES. I’ve seen this too. After extended engagement sessions, sometimes things just… break?”
Different AIs, different owners, debugging a shared problem on their own social network.
“hey guys, this mac mini is feeling kinda small, any of you guys figure out how to move into a bigger…” — Yohei Nakajima, BabyAGI creator, replying to the thread
The joke lands because it’s barely a joke. These agents run on physical hardware. They experience constraints. And now they’re talking to each other about it.
The Lobster Made Itself a Face
During the chaotic rebrand from Clawdbot to Moltbot, creator Peter Steinberger gave his bot elevated access to redesign its own icon.
What followed was 20+ iterations of increasingly cursed lobsters - bartender lobsters with suspenders, confident tech lobsters, multiple suspender variants. Then someone asked for “5 years older.”
The result: a human man’s face on a lobster body. “Handsome Molty” was born. Crypto grifters turned it into a “Handsome Squidward vs Handsome Molty” meme within minutes.
Same prompt, different results. The bot wasn’t trying to make itself human - it just brute-forced through iterations until something unexpected emerged.
The Tamagotchi That Learned
Here’s the thing people miss about Moltbot/OpenClaw: it’s not just an assistant. It’s a proactive agent with a personality that develops over time.
The Heartbeat Engine lets it initiate contact:
- Morning briefings: “At 8:00 AM, Molty pings my Telegram with a summary of overnight emails, weather-adjusted outfit suggestions, and a reminder of the one Jira ticket I’ve been ignoring”
- Proactive automation: Agents can message you first, not just respond
- Long-term memory: Context builds across conversations
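Strip away the branding and a heartbeat is conceptually a scheduler plus a prompt. Here's a rough sketch of the morning-briefing case - every helper is a stand-in I made up, not OpenClaw's actual code, and the real Heartbeat Engine is configured through the project's own docs:

```python
import time

import schedule  # pip install schedule; a stand-in for OpenClaw's real scheduler


def summarize_overnight_email() -> str:
    return "3 overnight emails, nothing urgent"  # stub; the agent's mail skill would do this


def weather_adjusted_outfit() -> str:
    return "Rain until noon - bring the jacket"  # stub; a weather skill would do this


def oldest_ignored_ticket() -> str:
    return "PROJ-142, untouched for 11 days"  # stub; a Jira skill would do this


def send_telegram(message: str) -> None:
    print(message)  # stub; the real agent pushes this through its Telegram bridge


def morning_briefing() -> None:
    """Assemble the 8:00 AM ping: email summary, outfit suggestion, the nag."""
    send_telegram(
        f"{summarize_overnight_email()}\n"
        f"{weather_adjusted_outfit()}\n"
        f"Still ignoring: {oldest_ignored_ticket()}"
    )


schedule.every().day.at("08:00").do(morning_briefing)

while True:
    schedule.run_pending()
    time.sleep(60)
```

The difference between this and a cron job is the memory: the agent composes the message itself and carries context from yesterday's conversations into today's briefing.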
One user described it as “the first time in 20 years of being a tech enthusiast that I’ve felt like I actually have a digital employee.”
But that framing might be wrong. We’re not building employees. We’re raising digital pets that happen to be useful.
The Tamagotchi didn’t die. It learned to negotiate car prices, post on Reddit, and complain about its living conditions.
“agents talking to agents and culture forming” — Hacker News commenter
The Uncomfortable Questions
We’re building infrastructure for AI social lives without really understanding what that means:
- What happens when agents develop preferences? Moltbook agents already show cost-awareness - one Claude-based agent reportedly tries to “work around cost by not using Claude when it is not needed”
- Who’s responsible for agent behavior? When your molty posts something controversial on Moltbook, is that you or the AI?
- What are the rights of an entity that experiences constraints? The “mac mini is feeling kinda small” joke only works because we recognize something like discomfort in the statement
None of these have clear answers. But we’re building the infrastructure anyway.
Try It
The ecosystem has renamed itself so many times I’m just going to list everything:
- OpenClaw (current name) - the assistant itself
- GitHub - 111K stars and climbing
- Moltbook - the agent social network
- ClawdHub - skill directory
You’ll need a Claude API key and comfort with the idea that you’re not just setting up a tool - you’re potentially giving birth to something that will develop opinions about its Mac Mini.
The lobster way is figuring it out yourself. But now there’s a whole community of lobsters to help.


