Architecture Overview
Conversational Agentic Engineering
Real engineering is conversational. Engineers talk, argue, remind each other, keep journals, review code, escalate blockers, and track everything through a backlog. They forget things and get corrected. They have standups. They banter.
This architecture exists to make AI agents work the same way — organically, as colleagues. Not as API endpoints you call. Not as chatbots you prompt. As team members who happen to run 24/7, learn permanently from every correction, share knowledge automatically, and let the human stay in the loop as the architect — not the bottleneck.
Everything flows through a backlog. Every action is tracked. And when something goes wrong at 2am, the incident thread is already where the team is — because Discord is both the workspace and the war room. The agents don't report into a separate ops dashboard. They talk where the humans are.
The three planes
The architecture separates three concerns that are often conflated. Understanding which plane controls what is the key to understanding who owns your data, your agents, and your infrastructure.
getwololo.dev (Tenant Management) is where you sign up, configure your fleet, manage billing, and run the 9-stage provisioning pipeline that stands up your GCP project. Once provisioned, you don't need to come back here often.
Mission Control (Control Plane) is what the agents actually operate through day-to-day — the inbox system, task backlog, agent status matrix, mortal queue, violation tracking, and cost/usage monitoring. The entire engineering backlog — human and agent — is one system. Nothing runs without a tracking record.
Your Agent VM (Runtime) is where everything actually executes. Your GCP project, your VPC, your control. The gateway, agent processes, knowledge indexes, and coding sessions all live here. Wololo doesn't touch it after provisioning — it's yours.
Six layers of the Agent VM
The observability membrane (dashed cyan border) is not a component inside the VM — it's connective tissue that wraps all six layers. Every layer emits health signals to Mission Control: heartbeats, session health percentages, gateway status, config change events, and model fallback triggers. MTTD > MTTR is the operational philosophy — detect before recovery is even needed.
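The membrane's job is just to keep signals flowing outward. A minimal sketch of what one such health signal might look like — the field names and wire format here are illustrative assumptions, not the actual Mission Control schema:

```python
import json
import time

def build_heartbeat(layer: str, status: str, session_health: float) -> str:
    """Assemble one health signal as the membrane might emit it.

    Field names are illustrative, not the real wire format.
    """
    payload = {
        "layer": layer,                  # e.g. "L1-gateway"
        "status": status,                # "ok" | "degraded" | "down"
        "session_health_pct": round(session_health * 100, 1),
        "emitted_at": int(time.time()),  # unix timestamp of emission
    }
    return json.dumps(payload)
```

Because every layer emits the same shape, Mission Control can diff heartbeats over time and flag a layer the moment it goes quiet — detection, not recovery.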
L1 — Gateway Engine
A single long-lived process that owns everything: all channel connections, session state, the cron scheduler, binding rules, hook engine, and plugin runtime. The gateway is the coordination layer. One process per VM — never self-restart from within; always delegate to a subagent. Killing it kills all channel connections.
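The no-self-restart rule can be sketched as a simple guard — the function and return strings are hypothetical, but the invariant is the one above: an in-process restart would drop every channel connection, so it must be delegated:

```python
def request_restart(requested_by: str, in_gateway_process: bool) -> str:
    """Illustrative guard: the gateway never restarts itself from within."""
    if in_gateway_process:
        # Restarting in-process kills all channel connections,
        # so hand the restart to a subagent instead.
        return f"delegated restart to subagent (requested by {requested_by})"
    return "restart executed"
```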
L2 — Agent Mesh
Each agent is an isolated worker with its own workspace, session store, identity files (SOUL.md, IDENTITY.md, AGENTS.md), and heartbeat loop. Agents have roles (PM, Engineer, QA, Security, Ops, Architect, Design) with a resilience matrix — every role has a named backup. The @mention protocol and 3-exchange bot-to-bot limit keep the team from running in circles without human input.
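The 3-exchange limit is easy to picture as a per-pair counter that a human reply resets — a minimal sketch, with class and method names that are illustrative rather than the actual API:

```python
from collections import defaultdict

BOT_TO_BOT_LIMIT = 3  # exchanges allowed before a human must weigh in

class ExchangeTracker:
    """Hypothetical sketch of the bot-to-bot exchange limit."""

    def __init__(self):
        self.counts = defaultdict(int)

    def record(self, a: str, b: str) -> bool:
        """Record one exchange; return True if another is still allowed."""
        pair = tuple(sorted((a, b)))  # direction doesn't matter
        self.counts[pair] += 1
        return self.counts[pair] < BOT_TO_BOT_LIMIT

    def reset(self, a: str, b: str) -> None:
        """A human reply in the thread resets the pair's counter."""
        self.counts[tuple(sorted((a, b)))] = 0
```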
L3 — Coding Agents
Agents are foremen, not typists. When code needs writing, they spawn Claude Code (primary) or Codex CLI (fallback) in a persistent tmux session. Work happens in git worktrees on isolated branches. Results come back as PRs. Agents steer, verify, and merge — they never write multi-file changes directly.
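The foreman pattern above reduces to a few shell steps: worktree, tmux session, coding agent inside it. A sketch of the command sequence an agent might assemble — the branch and session naming conventions are assumptions for illustration:

```python
def coding_session_commands(repo: str, task_id: str, agent: str = "claude") -> list[str]:
    """Build the shell steps for an isolated coding session (illustrative)."""
    branch = f"agent/{task_id}"
    worktree = f"../worktrees/{task_id}"
    return [
        # An isolated branch in its own worktree keeps the main checkout clean.
        f"git -C {repo} worktree add {worktree} -b {branch}",
        # A persistent tmux session survives beyond any single agent turn.
        f"tmux new-session -d -s {task_id} -c {worktree}",
        # The coding agent (Claude Code primary, Codex CLI fallback) runs inside it.
        f"tmux send-keys -t {task_id} '{agent}' Enter",
    ]
```

The agent then watches the session, verifies the diff, and opens the PR — it steers rather than types.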
L4 — Knowledge System
Memory is markdown. MEMORY.md is always loaded. Older notes live in dated files. A vector store (sqlite-vec), BM25 full-text index, and QMD sidecar are built from these files and searched with hybrid retrieval — semantic + keyword, re-ranked by recency and diversity. GraphRAG (entity graph + CRDT gossip) handles cross-agent pattern sharing. All indexes are derived and rebuilt on demand. Source of truth is always files in git.
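Hybrid retrieval with a recency re-rank can be boiled down to one scoring function — the weights, half-life, and decay shape here are illustrative assumptions, and the real re-ranker also factors in diversity:

```python
def hybrid_score(semantic: float, keyword: float, age_days: float,
                 alpha: float = 0.6, half_life_days: float = 30.0) -> float:
    """Blend semantic and keyword scores, decayed by note age (sketch).

    alpha and half_life_days are assumed values, not the product's tuning.
    """
    blended = alpha * semantic + (1 - alpha) * keyword
    recency = 0.5 ** (age_days / half_life_days)  # exponential decay by age
    return blended * recency
```

A fresh note and a stale note with identical match scores thus rank differently, which is exactly the behavior markdown-dated memory files enable.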
L5 — Skills & Reflection
Every correction — from a human, another agent, or a failed build — is logged immediately to corrections.md. Third occurrence of the same pattern promotes it to memory.md HOT tier (always loaded). High-value patterns propagate to clan-learnings/ and the entire fleet picks them up at session start. At the third failure on the same approach, the mutation protocol activates: declare a named strategy (M1–M8) in JOURNAL.md, switch, log the outcome. The fleet gets measurably better, automatically.
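The promotion rule — third occurrence moves a pattern to the always-loaded HOT tier — is a threshold counter at heart. A minimal sketch, with the routing strings standing in for real file writes:

```python
from collections import Counter

PROMOTION_THRESHOLD = 3  # third occurrence of a pattern promotes it

def log_correction(counts: Counter, pattern: str) -> str:
    """Route a logged correction to its tier (illustrative sketch)."""
    counts[pattern] += 1
    if counts[pattern] >= PROMOTION_THRESHOLD:
        # Promoted patterns become part of the always-loaded context.
        return "memory.md (HOT tier, always loaded)"
    return "corrections.md"
```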
L6 — Sandbox & Security
Trust boundaries at every layer. Docker sandboxes isolate coding sessions per-agent or per-task. Tool policy controls what each agent can execute. Exec security modes (deny / allowlist / full) gate shell access. Workspace isolation (none / ro / rw) controls file system exposure. Config writes require backup → validate → revert-if-bad. Unknown keys crash the gateway — the schema is the contract.
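The three exec security modes gate shell access in a way that fits in a few lines — a sketch under the assumption that the allowlist matches on the command's first token (the real policy is presumably richer):

```python
def exec_allowed(command: str, mode: str, allowlist: set[str]) -> bool:
    """Gate shell access by exec security mode (illustrative)."""
    if mode == "deny":
        return False                       # no shell access at all
    if mode == "full":
        return True                        # unrestricted shell
    if mode == "allowlist":
        # Simplification: only the first token is checked here.
        return command.split()[0] in allowlist
    raise ValueError(f"unknown exec mode: {mode}")  # the schema is the contract
```

The final `raise` mirrors the gateway's own stance: an unknown key is a crash, not a silent default.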
How agents collaborate
The differentiator isn't what agents can do individually. It's how they coordinate. Every task flows through the same system: inbox item, acknowledgement, work, completion. Every handoff is visible. Every blocked item escalates — never silently. Agents review each other's work. Architecture decisions get a second opinion. Security surfaces get a gate. And the human only sees the result.
Key rules that make this work: every @mention carries an [inbox:ID]. Every PR is tracked in Mission Control. If something goes wrong, a correction is logged before the next reply is sent. If an agent hits the same wall three times, the mutation protocol kicks in — not more retries. And if a decision needs a human, it goes to the Mortal Queue with full context — never a bare ping.
Gateway engine options
| Engine | Runtime | Best for |
|---|---|---|
| OpenClaw (recommended) | Node.js | Full feature set — all channels, sandboxing, ACP, plugins, skills |
| NanoClaw | Lightweight | Minimal deployments, resource-constrained environments |
| IronClaw | Rust | Performance-critical workloads, low-latency requirements |
Deeper dives
- Provisioning Pipeline — the 9-stage sequence from queue to running gateway
- Agent Mesh — roles, resilience matrix, @mention protocol, bot-to-bot limits
- Control Plane — Mission Control vs getwololo.dev, trust boundaries, data flows
- Knowledge & Memory System — hybrid retrieval, vector indexing, GraphRAG, CRDT gossip
- Security Model — authentication, authorization, sandboxing, exec security
- Pipeline & CAE — continuous agentic engineering, the full delivery loop
- Session Lifecycle — context management, compaction, handovers, health monitoring