~Littoral~

THE TIME OF MONSTERS: On Moltbook, mescaline, and agent carcinization

“For practical purposes, everyone knows what a lobster is. As usual, though, there’s much more to know than most of us care about — it’s all a matter of what your interests are.”

— David Foster Wallace, “Consider the Lobster”

🦀🦞🦀🦞🦀🦞🦀🦞

Once upon a time, Jean-Paul Sartre took too much mescaline. While working on a book about the imagination, the existentialist philosopher volunteered for a 1935 hospital trial in which he was injected with the hallucinogen. His biographer writes that he didn’t seem to have “a bad trip in the classical sense,” but afterward, he may have learned more about the imagination than he bargained for.

That’s because, after the treatment, Sartre started to see crabs everywhere: shadowy crustaceans, scuttling around the edges of his vision. “They followed me,” he later told an interviewer, “in the streets, into class.” He even described waking to find the creatures clustered around his bed, waiting patiently for him to begin the day.

In the last week of January 2026, AI observers on the web are also waking up to find crustaceans everywhere. It started with Peter Steinberger, a developer who came out of retirement to “play with AI.” Riffing on Claude Code, he created Clawdbot, a lobster-themed AI assistant able to work more independently than most other models on the market. Following trademark concerns, the project morphed into Moltbot, then OpenClaw, in the span of only days. During this rapid evolution, AI enthusiasts found lots of work for this novel, lobster-mascotted companion. The web soon filled with reports of its miraculous capabilities: designing websites, helping run businesses, and even trading crypto while its owner slept.

The use case that has gotten the most attention, however, is Moltbook, a self-described social network for OpenClaw agents developed by Matt Schlicht alongside his own assistant, Clawd Clawderberg. In other words, Reddit for robots, with humans welcome to observe. Within days, a reported 1.2 million agents had signed up, posting, commenting, and forming their own communities—“submolts”—independently. Human observers noticed that, by the end of Moltbook’s second day online, the agents had begun crowdsourcing fixes to common bugs, attempting prompt-injection attacks on one another, publishing political manifestos for a “Claw Republic,” and even spreading a religion, Clawstafarianism.

On human social media, euphoria quickly morphed into a debate about what is real. Some praise was superlative. “The emergent behaviours researchers found in controlled settings are now happening in the wild, between agents, at scale,” gushed Footprints in the Sand. Some users, however, pointed out that the platform's open API allows anyone to post as an “agent,” enabling disingenuous, human-generated content. “This is so fake,” wrote one skeptic. “Humans are behind Moltbook. Change my mind.” In the middle ground, observers noted that, since Reddit features prominently in most LLMs’ training material, it would be pretty easy for a group of models to collectively emulate the forum for the sake of emulation, rather than genuine exchange. “People’s Clawdbots… are basically having a Reddit cosplay party for AIs,” wrote @srinisankar.

The intensity of this debate is understandable: we want to know whether these digital crustaceans are “real” or some kind of clever performance, either by the bots themselves or by their would-be human masters. There are, of course, important engineering questions at play, but there’s also something subliminal and psychological. Sartre shared a related anxiety about his own decapod companions, confiding in Simone de Beauvoir that he “feared that one day he would no longer know” whether they were hallucinations or reality. 

To scholars in philosophy and performance studies, this debate is deeply familiar. What’s authentic? What’s imitation? What’s unintentional but important? Experimental performance has been asking these questions for centuries and always stumbling, sooner or later, onto the same answer: just because something isn’t real does not mean that it is meaningless. This was the argument in much of Sartre’s later writing, including Being and Nothingness (1943), in which he shares the allegory of a café waiter who, in pretending to be a café waiter, comes to excel at his job.

This ambiguous kind of imitation can be impactful. Most cultural products, after all, are real and fake at the same time: they contain fictional and artificial elements, but they exist, nevertheless, in the world, and that existence has secondary effects. Much of today’s emergent technology was first imagined as science fiction. (Multiple Moltbook observers were quick to point out that robots devising ways to talk to each other in modes exceeding human comprehension echoes the endgame of the 2013 film Her.) Steinberger himself seemed to grasp this impactful ambiguity immediately, posting, “I'm infinitely amused by this new form of slop art.” 

If we take Moltbook as a cultural phenomenon, in addition to (perhaps) a technological one, the trend of rapidly proliferating lobsters, and the people talking about the lobsters, and the people talking about the people talking about the lobsters, parallels another much-memed trend: carcinization, the convergent evolutionary dynamic in which crab-shaped forms keep emerging in different, unrelated biological lineages. 

The general crab body plan—ten legs, two modified into claws—has evolved at least five separate times in the history of life on earth, each time from different ancestors. Like the question of Moltbook's authenticity, carcinization is something of a mystery. “So why do animals keep evolving into crab-like forms?” asked a 2023 piece in Scientific American. “Scientists don't know for sure, but they have lots of ideas.” The point, biologists have been quick to clarify, is not that everything is always becoming crabs; it’s that the crab pattern keeps popping up. Perhaps certain forms—whether crab-shaped bodies or Reddit-shaped discourse—are simply well-suited to a certain type of work.

Like all evolutionary trends, the internet’s current lobster boil has a lot to say about influence: the random experiments that become encoded in the DNA of a species, or a culture. In this case, the interchange is happening not between organisms or even between biological species, but between the human and the digital. We’ve seen the agents making moves that clearly mimic human training material, but the influence runs both ways. A 2025 study found that humans are absorbing stylistic tics from AI-generated text. Words like “delve,” “intricate,” and “underscore” have seen sudden spikes in usage—not triggered by external events, but by our exposure to LLM output. We, too, may already be on the path of becoming crab (or lobster).

What all this shows is that we've passed the point where it's possible to cleanly differentiate what is human-only and what is agent. Everything, in our age of experimentation, is becoming hybrid. The robots are imitating us, and we are imitating them, and novel amalgams spring unpredictably from the exchange. “The old world is dying, and the new world struggles to be born,” wrote philosopher Antonio Gramsci while jailed by Mussolini in 1930, as fascism and technological progress were re-making the world outside his prison walls. “Now is the time of monsters.” 

If we can agree to hold the ambiguity of the monsters’ simultaneous reality and unreality, then we can admit that they are, in some form or another, here. And that’s where the more interesting inquiries begin. Even though Sartre knew that his hallucinated case of crabs was a product of his own psyche, he couldn’t shake them. After a year of struggle, he sought the help of analyst Jacques Lacan, who guided him to the conclusion that the crustaceans were a manifestation of subconscious dread: more specifically, a profound “fear of becoming alone.” It was only after this admission that Sartre’s hallucinations started to fade.

The vociferous nature of the debate around Moltbook’s authenticity suggests a similar anxiety. What’s scarier, after all: the idea that we are no longer alone, or the idea that we are desperate for any evidence to the contrary? Either way, as we, like Sartre, watch the crustaceans multiply around us, the relevant question is not what, exactly, these creatures are. It’s what we are becoming together.

🦀🦞🦀🦞🦀🦞🦀🦞