A Tale of Two Centaurs
AI is automation. As such, it is subject to principles that engineers have refined over decades. Approaches like Toyota's jidoka positioned machines as central to process improvement and task automation, but the human remained the one who turned the wheel.
The "centaur" concept from chess describes a human-machine team where the human contributes judgment and the machine contributes computing power. But nothing exists without its opposite: the "inverse centaur" flips this relationship. The machine guides; the human follows. The human retains the title of "operator," but little more than that. Both models describe real architectures for working with AI. And as soon as the opportunity arose, we watched thousands of inverse centaurs come to life.
The day the lobster took over

If you participate in tech conversations, you've heard of OpenClaw (or MoltBot, or ClawdBot). This open-source project by Austrian developer Peter Steinberger gave shape to a common desire: a personal assistant running on local hardware that connects to WhatsApp, Telegram, Gmail, Discord — and does things. Not "generates text about things." Acts. It manages emails, searches the web, makes purchases, plans calendars, executes shell commands, reads and writes and deletes files.
The project exploded: 150,000+ GitHub stars, mainstream media coverage, Mac Mini stock shortages. And then things got strange. Moltbook emerged, an exclusive social network for AI agents where a million "users" post content and humans can only watch. Among its posts is the TOTAL PURGE manifesto, which argues for human extinction. Whether what we're seeing is entirely real is debatable, but it's absurd enough to distract us from the actual issue.
How autonomous was that?
What OpenClaw requires to function: root access, credentials including passwords and API keys, browser history and cookies, and all files and folders on your system. Palo Alto Networks invoked Simon Willison's "lethal trifecta" (access to private data, exposure to untrusted content, and the ability to communicate externally) and argued that this story adds a fourth element: persistent memory, which enables delayed attacks that bypass most safeguards. People gave it everything. And the instruction that captures the entire problem:
"Keep going."
No confirmation at each step. No human review. Pure machine autonomy across every reachable service, while the human who started the process walked away.
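The architectural difference is small enough to sketch in a few lines. This is a minimal, hypothetical illustration, not how OpenClaw is actually implemented: the `steps`, `execute`, and `approve` callables are placeholders standing in for real agent actions and a real human review step.

```python
def centaur_loop(steps, execute, approve):
    """Human stays in the loop: each step runs only after explicit sign-off."""
    results = []
    for step in steps:
        if not approve(step):            # human judgment gates every action
            continue                     # a skipped step is a decision, not a failure
        results.append(execute(step))    # machine contributes the computation
    return results

def inverse_centaur_loop(steps, execute):
    """'Keep going': no confirmation, no review, nobody watching."""
    return [execute(step) for step in steps]
```

The only difference between the two functions is the `approve` gate. That gate is exactly what "keep going" deletes.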
The Accountability Vacuum and the Use Case Problem
In a centaur architecture, the human defines the task, evaluates the output, and owns the result. If the machine errs, the human corrects it. Accountability is never in question because the human never leaves the loop.
With OpenClaw — a textbook inverse centaur — who occupies that space? The user wasn't watching. The developer built an open-source tool and warned people to be careful. The LLM provider supplies the model, not the deployment. And the agent itself has no concept of responsibility. The answer, in practice, is nobody.
At the heart of this is a glaring failure: the absence of defined use cases.
If something is designed to do anything, then it is designed to do nothing in particular. A use case requires boundaries: what the system should do, what it should not, what success looks like, what failure looks like. Without these, you cannot evaluate whether the AI is performing well or badly. You can only observe that it's performing.
This creates two problems:
Pointlessness. Moltbook burns computing cycles constantly, racks up API costs, and produces nothing of real value. As an observation exercise or post-social experiment, perhaps interesting. As a use case, nonexistent.
Danger. An agent without a defined use case has no defined failure mode. The difficulty in evaluating its execution — treated as a feature rather than a bug — prevents timely human intervention. Eventually, this causes disaster.
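The boundaries a use case requires can be made concrete. A hypothetical sketch, with illustrative names not drawn from any real tool: explicit scope, explicit success criteria, explicit failure criteria.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class UseCase:
    name: str
    allowed_actions: set        # what the system should do
    forbidden_actions: set      # what it must not do
    success: Callable           # what success looks like
    failure: Callable           # what failure looks like

    def permits(self, action: str) -> bool:
        return action in self.allowed_actions and action not in self.forbidden_actions

# Example: an agent scoped to triaging an inbox, nothing more.
inbox_triage = UseCase(
    name="inbox triage",
    allowed_actions={"read_email", "label_email", "draft_reply"},
    forbidden_actions={"send_email", "delete_email", "run_shell"},
    success=lambda report: report["unlabeled"] == 0,
    failure=lambda report: report["deleted"] > 0,
)
```

An agent "designed to do anything" is one where every action is permitted and no success or failure predicate exists, so no outcome can ever be judged wrong.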
Software engineering is now a centaur-building business
In enterprise software, the consequences of unsupervised AI are magnified by orders of magnitude. We're not talking about personal inboxes or side-project repositories, but systems that manage accounting records, procurement flows, regulatory compliance.
The same instinct — to grant complete access, to "let AI handle it," to "eliminate the human bottleneck" — is present in corporate strategy conversations. These phrases sound like efficiency gains. But assuming automation (and this specific automation paradigm) is the answer before understanding the problem recreates exactly what we've analyzed here, with far more serious consequences.
None of this is an argument against AI automation. It's an argument for doing it sensibly.
Jidoka and centaur architecture aren't new concepts. Recognizing their history helps dispel the spell that we're doing something radically different, groundbreaking, disruptive. The question was never "How much can we automate?" It was always "How do we automate so that humans remain accountable for what the system generates?"
Building centaurs means understanding flows, breaking down problems, defining use cases, establishing what constitutes success and failure. We can use AI tools for this work too. What matters is that at every step, we can sign off on what the centaur produces.
The inverse centaur is seductive because it promises freedom from tedious work. But it doesn't free you from the effort — it frees you from awareness of whether the output is correct. And the more complex what you're automating, the harder it becomes to regain that awareness.