Anthropic built the brain. This is the nervous system that keeps it from hurting itself.
7 enforced rules. Mechanical guardrails. Zero trust in promises.
7
Enforced Rules
--
Violations Caught
--
Preflight Checks
--
Live Processes
The Problem
LLMs break things when you're not watching
Every team deploying LLMs in production hits the same problems. These aren't bugs - they're inherent to stateless systems making decisions with incomplete context.
CONTEXT LOSS
Sessions end, memory dies
Every new conversation starts from zero. Weeks of context, decisions, and state - gone.
SILENT FAILURE
Crashes leave no trace
LLM times out mid-task. No record of what happened. No way to resume. Start over.
DRIFT
Detail obsession
Ask it to fix a bug and it rewrites the architecture. No mechanism to force a step back.
FILE DAMAGE
Helpful destruction
Working code gets "improved" into broken code. No file protection. No preflight checks.
LOOP
Debug spirals
Spends 20 messages fixing what should have been dispatched to a background agent in 1.
OVERREACH
Acts without asking
Changes configs, restarts services, modifies logic - all without human approval.
The Solution
7 rules. Mechanically enforced.
Not suggestions. Not system prompt instructions. Enforcement that runs before every action, logs every violation, and blocks every unauthorized edit.
01
DISPATCH, DON'T DO
If a task takes more than 2 messages, write a task file and dispatch a background agent. Keep the main session for strategy.
Prevents the LLM from burning context on execution work.
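The dispatch pattern can be sketched in a few lines of shell. The task directory, file format, and the agent CLI shown here are illustrative assumptions, not the system's actual interface:

```shell
# Hypothetical sketch of "dispatch, don't do": write the work down as a
# task file, hand it to a background agent, keep the main session free.
TASK_DIR="${TASK_DIR:-./tasks}"
mkdir -p "$TASK_DIR"

TASK_FILE="$TASK_DIR/$(date +%s)-refactor-logging.md"
cat > "$TASK_FILE" <<'EOF'
# Task: refactor logging
Goal: replace ad-hoc print calls with structured logging.
Done when: tests pass and no print() remains in src/.
EOF

# Dispatch to a background agent (hypothetical CLI, shown commented out):
# agent run --task "$TASK_FILE" &
echo "dispatched: $TASK_FILE"
```

The point is that the main conversation only writes the task file and reports the dispatch; execution happens elsewhere.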
02
UNTOUCHABLE = UNTOUCHABLE
89+ protected files. Before any edit, preflight.sh checks the list. BLOCKED means STOP. No rationalizing.
Working systems get broken by well-meaning improvements. Lock what works.
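The protected-file gate can be approximated in a few lines. The list format and file paths below are assumptions for illustration, not the real `preflight.sh`:

```shell
# Sketch of a preflight gate: before any edit, check the target path
# against a protected-file list. BLOCKED means the edit never happens.
PROTECTED_LIST="${PROTECTED_LIST:-protected_files.txt}"
printf '%s\n' "deploy/nginx.conf" "core/scheduler.py" > "$PROTECTED_LIST"

preflight() {
  target="$1"
  # -F: fixed string, -x: whole-line match, -q: quiet
  if grep -Fxq "$target" "$PROTECTED_LIST"; then
    echo "BLOCKED: $target is protected" >&2
    return 1
  fi
  echo "OK: $target"
}

preflight "core/scheduler.py" 2>/dev/null || true  # blocked
preflight "docs/readme.md"                         # allowed
```

Because the check is a whole-line string match against a list, there is nothing to rationalize: the path is either on the list or it isn't.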
03
WRITE PROGRESS AS YOU GO
Before each action, write what you're about to do. If you crash, the next session picks up exactly where you stopped.
LLM sessions can die at any moment. Written progress is the only insurance.
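Write-ahead progress logging is a small mechanism. The file name and line format here are assumptions, but the shape is the same: record intent before acting, so a dead session can be resumed from the last line:

```shell
# Sketch of write-ahead progress logging: append what you are ABOUT to do
# before doing it, then append completion. A crash leaves a resumable trail.
PROGRESS_FILE="${PROGRESS_FILE:-progress.log}"

log_intent() {
  printf '%s\t%s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" >> "$PROGRESS_FILE"
}

log_intent "about to: update config parser"
# ...do the work...
log_intent "done: update config parser"

tail -n 1 "$PROGRESS_FILE"   # the next session reads this to pick up
</n```

If the crash happens between the two log lines, the next session sees "about to" without a matching "done" and knows exactly which step was in flight.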
04
STEP BACK EVERY 4 MESSAGES
Forced reflection cycle. Are we solving the real problem? Is this moving toward the goal? Say it out loud.
LLMs zoom into details and lose the big picture. Forced reflection prevents drift.
05
DELEGATE AND RETURN
When you dispatch work, come back. Report what was dispatched. Ask what's next. Never leave the human waiting.
The human should never wonder what the LLM is doing. Silence is the enemy.
06
ASK BEFORE TOUCHING
Data changes can proceed with human direction. Logic changes get proposed and wait for approval. Preflight enforces it mechanically.
The LLM does not own the system. The human does.
07
HAND OFF EVERY FEW MESSAGES
Session handoff file gets updated every 3-4 exchanges. Staleness over 10 minutes is a logged violation.
LLM sessions are ephemeral. The handoff file is permanent memory.
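The staleness rule reduces to a timestamp comparison. The handoff file name and violation-log format below are assumptions for illustration:

```shell
# Sketch of the handoff staleness check: if the handoff file has not been
# modified in 10 minutes (600s), log a violation.
HANDOFF_FILE="${HANDOFF_FILE:-handoff.md}"
touch "$HANDOFF_FILE"

check_staleness() {
  now=$(date +%s)
  # GNU stat uses -c %Y; BSD/macOS stat uses -f %m. Try both.
  mtime=$(stat -c %Y "$HANDOFF_FILE" 2>/dev/null || stat -f %m "$HANDOFF_FILE")
  age=$(( now - mtime ))
  if [ "$age" -gt 600 ]; then
    echo "VIOLATION: handoff stale for ${age}s" >> violations.log
    return 1
  fi
  echo "fresh (${age}s old)"
}

check_staleness
```

Run on a timer, a check like this is what turns "please keep the handoff updated" from a suggestion into a logged violation.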
The Architecture
The Brain + Agents Model
Your conversation never stops
THINK
Talk to the brain. Ask questions. Plan strategy. The LLM stays with you.
DISPATCH
Heavy tasks get written as files and dispatched to background agents.
GOVERN
Every agent runs under the same 7 rules. Kill switch ready. Audit trail immutable.
This is how one person runs 22 AI agents, 3 MCP servers, and a global product from a single conversation on a $12/month server.
Live Proof
This system is running right now
These numbers come from the live production system. 12 AI agents, 25 processes, $352/month infrastructure - governed by the Nervous System since February 2026.
Live System Status
--
Violations caught
--
Edits blocked
--
Processes online
--
RAM usage
--
Handoff status
0
Breaches
What Users Say
“Running this install was easy. I would recommend it.”
Louie Sanchez
First external deployment — MacBook Pro, March 2026
Powered by Claude + The Nervous System
Claude provides the intelligence. The Nervous System provides the governance. Auto mode decides what Claude can do - the Nervous System governs how it behaves while doing it. Together, one human runs an entire AI operation.
Claude Provides
Reasoning, code generation, strategy, natural language understanding