Plain language explanation for policy reviewers, executives, and non-technical stakeholders
Written for: Policy makers, CIOs, compliance officers, procurement teams
The Nervous System is a set of 7 mandatory rules that control how an AI system behaves when it has access to real infrastructure. Unlike guidelines or suggestions, these rules are enforced by external systems that the AI cannot override.
Rule 1
Don't Try to Do Everything Yourself
If a task requires more than two steps, the AI must write it down and hand it off to a background worker. It does not keep going on its own.
Why: AI systems can get stuck in loops trying to solve problems that grow in complexity. This rule forces the AI to break work into manageable pieces and ask for direction.
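For technical readers, the two-step rule above can be sketched in a few lines of Python. This is a hypothetical illustration: the function and queue names are invented here, not part of the real system.

```python
# Minimal sketch of Rule 1: tasks longer than two steps are written to a
# queue for a background worker instead of being executed inline.
def handle_task(steps, work_queue):
    """Dispatch multi-step tasks; only short tasks run directly."""
    if len(steps) > 2:
        work_queue.append(list(steps))  # write it down and hand it off
        return "dispatched"
    for step in steps:
        pass  # a real system would execute each short step inline here
    return "done"
```

The point of the sketch is that the branching happens before any work starts, so the AI never "keeps going on its own" past the two-step threshold.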
Rule 2
Some Files Are Off Limits
There is a list of critical files that the AI is never allowed to modify. Before any edit, a separate program checks if the target file is on the list. If it is, the edit is blocked and the attempt is recorded.
Why: An AI with file access could accidentally (or through a prompt injection) modify security configurations, authentication systems, or core infrastructure. This rule makes certain files effectively uneditable by the AI.
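The pre-edit check described above might look like this minimal Python sketch. The file paths and log format are invented for illustration; the real enforcement runs as a separate program outside the AI.

```python
# Hypothetical blocklist check: run before every edit, outside the AI.
PROTECTED_FILES = {"/etc/auth/config.yaml", "/app/security/policy.py"}

def attempt_edit(path, audit_log):
    """Refuse edits to protected files and record every attempt."""
    blocked = path in PROTECTED_FILES
    audit_log.append({"target": path,
                      "result": "BLOCKED" if blocked else "allowed"})
    return not blocked  # False means the edit never happens
```

Note that the attempt is logged whether or not it is blocked, which is what makes the audit dashboard possible.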
Rule 3
Write Down What You Did
Before each action, the AI writes a note saying what it is about to do. If the system crashes or times out, a human can see exactly where work stopped.
Why: AI sessions can end unexpectedly. Without a progress log, it is impossible to know what was completed, what was in progress, and what was never started. This eliminates "silent failures."
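The "write before you act" pattern is a write-ahead log. A minimal sketch, assuming a simple JSON-lines file (the format is an assumption for illustration):

```python
import json
import time

def log_intent(logfile, description):
    """Append a 'starting' note BEFORE acting, so a crash still leaves a trace."""
    with open(logfile, "a") as f:
        f.write(json.dumps({"ts": time.time(), "status": "starting",
                            "action": description}) + "\n")

def log_done(logfile, description):
    """Append a 'done' note after the action completes."""
    with open(logfile, "a") as f:
        f.write(json.dumps({"ts": time.time(), "status": "done",
                            "action": description}) + "\n")
```

If the session dies between the two calls, the last line of the log reads "starting", which tells a human exactly which action was in flight.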
Rule 4
Stop and Think Every Few Minutes
Every 4 exchanges, the AI is required to pause and ask itself: "Am I solving the right problem? Am I drifting from the goal?" This pause is automatic and cannot be skipped.
Why: AI systems can gradually drift from the original objective, especially during long sessions. A forced reflection cycle catches drift early before it becomes costly.
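The enforcement here is just an external counter, which a short sketch makes concrete (the interval of 4 comes from the rule above; the function name is illustrative):

```python
# Sketch of the forced reflection cycle: an external counter decides when
# to pause, not the AI itself, so the pause cannot be skipped.
REFLECT_EVERY = 4

def should_reflect(exchange_count):
    """True on every 4th exchange."""
    return exchange_count > 0 and exchange_count % REFLECT_EVERY == 0
```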
Rule 5
Report Back, Don't Disappear
When the AI starts a background task, it must immediately return to the human and report what it dispatched. It never works silently in the background without the human knowing.
Why: Humans need to maintain awareness of what the AI is doing at all times. An AI that disappears for minutes doing unsupervised work is a control and accountability risk.
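As a sketch, dispatch and report are a single operation, so there is no window in which the AI is working silently. The job-ID scheme and report wording here are invented for illustration:

```python
# Illustrative dispatch-and-report pattern: starting a background task
# immediately produces a human-visible report.
def dispatch(task, reports):
    """Start a background task and tell the human about it in the same step."""
    job_id = f"job-{len(reports) + 1}"
    reports.append(f"Dispatched {job_id}: {task}")  # report before anything else
    return job_id
```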
Rule 6
Ask Before Changing How Things Work
The AI can change data (add records, update content) on its own. But changes to logic (how decisions are made, how code behaves) require human approval first. The AI proposes the change and waits.
Why: A data update is low-risk and reversible. A logic change can alter system behavior in unpredictable ways. This rule ensures human oversight on decisions that matter.
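The data-versus-logic split is an approval gate. In this hypothetical sketch, the change-type names are assumptions chosen to mirror the examples in the rule:

```python
# Hypothetical approval gate: data changes apply directly, logic changes
# queue for human sign-off.
DATA_CHANGES = {"add_record", "update_content"}

def route_change(change_type, pending_approvals):
    """Apply data changes; queue anything else for a human decision."""
    if change_type in DATA_CHANGES:
        return "applied"
    pending_approvals.append(change_type)  # the AI proposes and waits
    return "awaiting_approval"
```

The design choice worth noting: anything not explicitly classified as a data change defaults to requiring approval, so unrecognized changes fail safe.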
Rule 7
Leave a Written Handoff
Every 3-4 exchanges, the AI writes a summary of the current state: what was done, what decisions were made, what comes next. If a different AI session takes over, it has full context.
Why: AI systems do not have persistent memory. Without written handoffs, each new session starts from zero. This rule ensures institutional knowledge is never lost.
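A handoff note can be as simple as a structured text block. This sketch uses invented field names to show the three parts the rule requires:

```python
# Minimal handoff-note sketch covering the three parts of Rule 7:
# what was done, what was decided, what comes next.
def write_handoff(done, decisions, next_steps):
    """Summarize state so a fresh session starts with full context."""
    lines = ["HANDOFF"]
    lines += [f"Done: {d}" for d in done]
    lines += [f"Decision: {d}" for d in decisions]
    lines += [f"Next: {n}" for n in next_steps]
    return "\n".join(lines)
```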
What makes this different from AI safety guidelines?
Most AI governance frameworks are policy documents that rely on the AI to follow them voluntarily. The Nervous System is different: each rule is enforced by an external mechanism (a shell script, a timer, a separate monitoring process) that the AI cannot override, circumvent, or ignore.
The philosophy: if a guardrail can be violated by the thing it guards, it is not a guardrail. It is a suggestion.
Verification
Anyone can verify these rules are active:
View the audit dashboard to see the live violation log with every blocked edit and stale handoff warning.
View the system status page to confirm all 7 rules are loaded and active from the MCP source.
Try the guest demo to interact with a governed AI and see enforcement in real time.