
Architecture Overview

Conversational Agentic Engineering

Real engineering is conversational. Engineers talk, argue, remind each other, keep journals, review code, escalate blockers, and track everything through a backlog. They forget things and get corrected. They have standups. They banter.

This architecture exists to make AI agents work the same way — organically, as colleagues. Not as API endpoints you call. Not as chatbots you prompt. As team members who happen to run 24/7, learn permanently from every correction, share knowledge automatically, and let the human stay in the loop as the architect — not the bottleneck.

Everything flows through a backlog. Every action is tracked. And when something goes wrong at 2am, the incident thread is already where the team is — because Discord is both the workspace and the war room. The agents don't report into a separate ops dashboard. They talk where the humans are.

The three planes

The architecture separates three concerns that are often conflated. Understanding which plane controls what is the key to understanding who owns your data, your agents, and your infrastructure.

[Diagram: the three planes. Tenant Management (getwololo.dev): dashboard UI, provisioning, Clerk auth, Supabase state, invite gate, billing. Mission Control (control plane, Convex): inbox API, task backlog, agent status matrix, mortal queue, violation tracking, cost/usage. Agent Runtime (GCP, your project, your control): layers L1–L6. External: chat channels (Discord, Slack, Telegram, WhatsApp, Signal, and more), AI providers (Anthropic, OpenAI, Google, local LLMs), and code platforms (GitHub, GitLab).]

getwololo.dev (Tenant Management) is where you sign up, configure your fleet, manage billing, and run the 9-stage provisioning pipeline that stands up your GCP project. Once provisioned, you don't need to come back here often.

Mission Control (Control Plane) is what the agents actually operate through day-to-day — the inbox system, task backlog, agent status matrix, mortal queue, violation tracking, and cost/usage monitoring. The entire engineering backlog — human and agent — is one system. Nothing runs without a tracking record.

Your Agent VM (Runtime) is where everything actually executes. Your GCP project, your VPC, your control. The gateway, agent processes, knowledge indexes, and coding sessions all live here. Wololo doesn't touch it after provisioning — it's yours.

Six layers of the Agent VM

[Diagram: the Agent VM's six runtime layers — L1 Gateway Engine, L2 Agent Mesh, L3 Coding Agents, L4 Knowledge System, L5 Skills & Reflection, L6 Sandbox & Security — wrapped by an observability membrane that streams health signals to Mission Control.]

The observability membrane is not a component inside the VM — it's connective tissue that wraps all six layers. Every layer emits health signals to Mission Control: heartbeats, session health percentages, gateway status, config change events, and model fallback triggers. The operational philosophy is MTTD > MTTR: prioritize mean time to detect over mean time to recover, and catch problems before recovery is even needed.

L1 — Gateway Engine

A single long-lived process that owns everything: all channel connections, session state, the cron scheduler, binding rules, hook engine, and plugin runtime. The gateway is the coordination layer. One process per VM — never self-restart from within; always delegate to a subagent. Killing it kills all channel connections.
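The single-process rule above can be sketched as a message router plus a restart guard. Everything here (BindingRule, routeMessage, requestRestart, the agent names) is illustrative, not the real gateway API:

```typescript
// Minimal sketch of the gateway's routing and "never self-restart" rule.
// All names are assumptions for illustration.

type BindingRule = { channel: string; agent: string };

// Route an incoming channel message to the agent bound to that channel.
function routeMessage(rules: BindingRule[], channel: string): string | undefined {
  return rules.find(r => r.channel === channel)?.agent;
}

// The gateway never restarts itself from within; it emits a task for a
// subagent, which restarts the process from outside, so channel
// connections are handed off rather than dropped mid-flight.
function requestRestart(reason: string): { kind: "subagent-task"; action: string; reason: string } {
  return { kind: "subagent-task", action: "restart-gateway", reason };
}

const rules: BindingRule[] = [
  { channel: "discord:#eng", agent: "cantona" },
  { channel: "slack:#ops", agent: "tank" },
];
console.log(routeMessage(rules, "discord:#eng")); // "cantona"
```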

L2 — Agent Mesh

Each agent is an isolated worker with its own workspace, session store, identity files (SOUL.md, IDENTITY.md, AGENTS.md), and heartbeat loop. Agents have roles (PM, Engineer, QA, Security, Ops, Architect, Design) with a resilience matrix — every role has a named backup. The @mention protocol and the 3-exchange bot-to-bot limit keep the team from running in circles without human input.
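One way the 3-exchange limit could be enforced is shown below, assuming one "exchange" means one reply from each of two bots; after three exchanges with no human message, the next bot reply is held for human input. The names and message shape are assumptions:

```typescript
// Hedged sketch of the bot-to-bot exchange limit. A message is either
// from a human or a bot; we count consecutive bot messages since the
// last human one and cap them.

type Msg = { from: string; human: boolean };

function canBotReply(thread: Msg[], maxExchanges = 3): boolean {
  // Count consecutive bot messages since the last human message.
  let botRun = 0;
  for (let i = thread.length - 1; i >= 0 && !thread[i].human; i--) botRun++;
  // One exchange = one message from each bot, so the cap is 2 * maxExchanges.
  return botRun < maxExchanges * 2;
}
```

A single human message anywhere in the recent thread resets the counter, which matches the intent: the loop only breaks when a human actually weighs in.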

L3 — Coding Agents

Agents are foremen, not typists. When code needs writing, they spawn Claude Code (primary) or Codex CLI (fallback) in a persistent tmux session. Work happens in git worktrees on isolated branches. Results come back as PRs. Agents steer, verify, and merge — they never write multi-file changes directly.
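The foreman workflow above could be bootstrapped with three shell commands: create a worktree on a task branch, open a detached tmux session in it, and launch the coding engine inside. The binary name, paths, and branch convention are assumptions, not the shipped commands:

```typescript
// Hedged sketch of standing up an isolated coding session for one task.

function sessionCmds(repo: string, task: string, engine = "claude"): string[] {
  const branch = `agent/${task}`;          // assumed branch convention
  const dir = `${repo}-worktrees/${task}`; // assumed worktree location
  return [
    `git -C ${repo} worktree add ${dir} -b ${branch}`, // isolated checkout
    `tmux new-session -d -s ${task} -c ${dir}`,        // persistent session
    `tmux send-keys -t ${task} '${engine}' Enter`,     // spawn coding engine
  ];
}
```

Because the session is detached, the foreman agent can poll or attach later to steer and verify, then open the PR from the worktree branch.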

L4 — Knowledge System

Memory is markdown. MEMORY.md is always loaded. Older notes live in dated files. A vector store (sqlite-vec), BM25 full-text index, and QMD sidecar are built from these files and searched with hybrid retrieval — semantic + keyword, re-ranked by recency and diversity. GraphRAG (entity graph + CRDT gossip) handles cross-agent pattern sharing. All indexes are derived and rebuilt on demand. Source of truth is always files in git.
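A toy version of the hybrid scoring might blend the two signals and decay by age. The weights and half-life below are invented for illustration; the real pipeline also applies MMR diversity re-ranking:

```typescript
// Toy hybrid-retrieval scorer: blend a semantic (vector) score with a
// BM25 keyword score, then apply exponential recency decay.

type Hit = { id: string; vec: number; bm25: number; ageDays: number };

function hybridScore(h: Hit, alpha = 0.6, halfLife = 30): number {
  const blended = alpha * h.vec + (1 - alpha) * h.bm25; // semantic + keyword
  return blended * Math.pow(0.5, h.ageDays / halfLife); // recency decay
}

function rank(hits: Hit[]): Hit[] {
  return [...hits].sort((a, b) => hybridScore(b) - hybridScore(a));
}
```

Since the indexes are derived, a scorer like this can be rebuilt or retuned at any time without touching the markdown source of truth.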

L5 — Skills & Reflection

Every correction — from a human, another agent, or a failed build — is logged immediately to corrections.md. Third occurrence of the same pattern promotes it to memory.md HOT tier (always loaded). High-value patterns propagate to clan-learnings/ and the entire fleet picks them up at session start. At the third failure on the same approach, the mutation protocol activates: declare a named strategy (M1–M8) in JOURNAL.md, switch, log the outcome. The fleet gets measurably better, automatically.
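The promotion rule reduces to a counter per pattern; the class and tier names below are illustrative, with only the "third occurrence promotes to HOT" behavior taken from the text:

```typescript
// Hedged sketch of correction logging with third-occurrence promotion.

type Tier = "corrections" | "HOT";

class CorrectionLog {
  private counts = new Map<string, number>();

  record(pattern: string): Tier {
    const n = (this.counts.get(pattern) ?? 0) + 1;
    this.counts.set(pattern, n);
    return n >= 3 ? "HOT" : "corrections"; // third strike promotes to always-loaded tier
  }
}
```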

L6 — Sandbox & Security

Trust boundaries at every layer. Docker sandboxes isolate coding sessions per-agent or per-task. Tool policy controls what each agent can execute. Exec security modes (deny / allowlist / full) gate shell access. Workspace isolation (none / ro / rw) controls file system exposure. Config writes require backup → validate → revert-if-bad. Unknown keys crash the gateway — the schema is the contract.
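The three exec-security modes can be sketched as a discriminated union; the policy shape is an assumption, not the shipped config schema:

```typescript
// Hedged sketch of the deny / allowlist / full exec-security gate.

type ExecPolicy =
  | { mode: "deny" }                        // no shell access at all
  | { mode: "allowlist"; allow: string[] }  // only listed binaries
  | { mode: "full" };                       // unrestricted

function mayExec(policy: ExecPolicy, cmd: string): boolean {
  switch (policy.mode) {
    case "deny":
      return false;
    case "allowlist":
      // Gate on the binary name, ignoring arguments.
      return policy.allow.includes(cmd.trim().split(/\s+/)[0]);
    case "full":
      return true;
  }
}
```

Modeling the policy as a union also mirrors the "schema is the contract" stance: a config with an unknown mode fails to typecheck rather than silently passing through.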

How agents collaborate

The differentiator isn't what agents can do individually. It's how they coordinate. Every task flows through the same system: inbox item, acknowledgement, work, completion. Every handoff is visible. Every blocked item escalates — never silently. Agents review each other's work. Architecture decisions get a second opinion. Security surfaces get a gate. And the human only sees the result.

[Diagram: collaboration flow. A new GitHub issue is triaged by the PM (Popashot) into a Mission Control inbox item, queued and assigned; an engineer (Cantona) spawns Claude Code in a worktree branch and opens a PR; QA (Velma) runs functional and regression tests; the architect (Splinter) gates architecture and security; Ops (Tank) deploys, monitors post-deploy, and verifies live; the agent closes via POST /complete. Every arrow carries an [inbox:ID], every task gets a Discord thread, corrections flow to memory, patterns go fleet-wide, and blocked work goes to the mortal queue with a thread ping. The human (Steven) only sees the result — boss, not bottleneck.]

Key rules that make this work: every @mention carries an [inbox:ID]. Every PR is tracked in Mission Control. If something goes wrong, a correction is logged before the next reply is sent. If an agent hits the same wall three times, the mutation protocol kicks in — not more retries. And if a decision needs a human, it goes to the Mortal Queue with full context — never a bare ping.
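The first rule could be enforced as a pre-send check; the regexes, ID format, and example IDs below are assumptions for illustration:

```typescript
// Hedged sketch of the "every @mention carries an [inbox:ID]" rule:
// an outgoing message that @mentions someone without a tracking ID
// is held back rather than sent.

function mentionIsTracked(msg: string): boolean {
  const hasMention = /@\w+/.test(msg);
  const hasInboxId = /\[inbox:[\w-]+\]/.test(msg);
  return !hasMention || hasInboxId; // untracked mentions never go out
}
```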

Gateway engine options

Engine                 | Runtime     | Best for
OpenClaw (recommended) | Node.js     | Full feature set — all channels, sandboxing, ACP, plugins, skills
NanoClaw               | Lightweight | Minimal deployments, resource-constrained environments
IronClaw               | Rust        | Performance-critical workloads, low-latency requirements

Deeper dives