DOCUMENTATION
This page reflects the current implementation of PLANET (MARIA CODE). It connects doctrine → build → operations in one language, so your decisions and their outcomes can be preserved, and improvement can continue without relying on a few heroes.
Overview (MARIA CODE vs PLANET)
One philosophy, two products: personal developer execution (MARIA CODE) and enterprise organizational intelligence surface (PLANET).
MARIA is a Structural AGI operating system designed to explicitly model the structure of the world and organizations—OS, rules, flows, and causality—and to help you design, change, and invent those structures.
MARIA CODE
ALL PLANS
Personal developer execution platform (CLI + slash commands). Fast code generation, local + cloud model routing, and documentation → execution.
- /code - Code generation and fixes
- /image - Image generation
- /video - Video generation
- /develop - Development workflow
- Free: Basic AI Chat, 40 code gen/month
- Starter ($20/mo): 300 code gen/month
- Pro ($39/mo): 1,200 code gen/month
- Ultra ($99/mo): 5,000 code gen/month
PLANET
ENTERPRISE ONLY
Enterprise product surface powered by Maria OS (observe → present → hold → execute → preserve → learn), designed to preserve decisions and ways of thinking, with local execution by default (LOCAL_MODE-aligned).
- /structure - Structural analysis
- /cxo - Executive decision support
- /agents - Multi-agent coordination
- /universe - Long-term memory
- /evolve - Self-evolution loop
- /doctor - System health diagnosis
- /auto-dev - Automated development
- Connectors (GitHub, accounting/ERP, Google)
- Admin & Tenant Management
- Role-based Access Control
- Maria OS Seamless Access
- Security & Compliance
- Unlimited usage
The goal is not “plausible answers,” but structures that are reproducible, operable, and evolvable. That is why this repo is designed as an OS across code (src/), configuration (config/), and doctrine & operations (docs/).
Ops-grade AI (what failures Maria OS prevents)
Maria OS is designed to be operable under real constraints: reproducible, auditable, stoppable, and recoverable.
Most “AI agents” fail in production not because the model is weak, but because the system is not operable: you cannot reproduce runs, you cannot audit decisions, you cannot stop safely, and you cannot recover when something goes wrong.
Maria OS treats these as first-class product requirements. The question we optimize for is: “What incident does this mechanism prevent?”
- Partial replay with checkpoints (resume without re-running everything)
- Artifact refs with hashes (verify outputs before trusting cache hits)
- Tool/version pinning for replay safety
- Separate roles: Auto-Dev executes; QE decides pass/fail
- Required gates: lint:truth + tsc --noEmit (+ deterministic checks)
- "Fail closed" defaults: stop when evidence is insufficient
- Commit → EvidenceIndex (always traceable)
- If push/PR is blocked: Push Pending Ledger / pseudo-PR report
- Data pointers for reproducible analysis (hash + schemaRef + access)
- D0–D2: deterministic facts/transforms/parses
- D3: probabilistic inference must use an LLM (never hardcode "fuzzy" rules)
- When LLM is unavailable: report "insufficient evidence" and stop
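The "verify before trusting cache hits" and "fail closed" ideas above can be sketched with plain coreutils. The paths and artifact layout here are illustrative, not MARIA's actual artifact format:

```shell
work=$(mktemp -d)
mkdir -p "$work/artifacts"

# An artifact plus the hash recorded when it was produced.
printf 'build output v1\n' > "$work/artifacts/report.txt"
sha256sum "$work/artifacts/report.txt" > "$work/artifacts/report.txt.sha256"

# Later: trust the cached artifact only if the hash still matches.
if sha256sum -c "$work/artifacts/report.txt.sha256" >/dev/null 2>&1; then
  verdict="CACHE_HIT"
else
  verdict="FAIL_CLOSED"   # stop instead of guessing
fi
echo "before tamper: $verdict"

# Simulate drift/corruption: the same check must now fail closed.
printf 'tampered\n' >> "$work/artifacts/report.txt"
if sha256sum -c "$work/artifacts/report.txt.sha256" >/dev/null 2>&1; then
  verdict2="CACHE_HIT"
else
  verdict2="FAIL_CLOSED"
fi
echo "after tamper: $verdict2"
```

The point of the sketch is the default: a verification failure produces a stop, never a silent fallback to the unverified artifact.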
Many teams lose quality over time due to drift between code, config, docs, and operations. Maria OS treats drift as a production incident class, and designs the system to resist it.
Multi-layer SSOT (anti-drift architecture)
Why Maria OS keeps multiple “sources of truth” on purpose — and how it prevents quality decay in long-running organizations.
In most systems, “SSOT” is treated as a single document. In production, that is not enough. Drift happens across layers: the code changes but the runbook doesn’t, policies move but enforcement doesn’t, schemas evolve but outputs are not validated. When drift is quiet, reliability dies quietly.
- docs/: doctrine + operations (how the system is run, audited, recovered)
- config/: contracts, policies, agent profiles (what is allowed/expected)
- src/: implementation (what the system actually does)
- docs/schemas/ + schemas/: machine-validated shapes (what "valid output/evidence" means)
- artifacts/: immutable evidence (what happened, with refs + hashes)
docs/ (doctrine + runbooks)
  ↓
config/ (policies + contracts + agent profiles)
  ↓
src/ (implementation)
  ↓
artifacts/ (evidence: runs, reports, diffs, checkpoints)

docs/schemas/ + schemas/ validate outputs across all layers → drift becomes visible (and therefore fixable).
- Runbooks describe steps that no longer work
- Config claims policies exist, but enforcement is missing
- Outputs "look OK" but break downstream tools
- A single expert becomes the only "real SSOT"
- Strict schemas for outputs/evidence (validate, don't hope)
- Quality gates as defaults (lint + typecheck + deterministic checks)
- Git as Ledger (commit → evidence is always traceable)
- Replay safety (checkpoints + artifact integrity)
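One minimal way to make drift visible, in the spirit of the layers above, is to pin each layer to a hash manifest and re-check it later. The file names and layout here are a toy illustration, not the repository's real structure:

```shell
# Sketch: pin docs/config/src to a hash manifest, then detect drift.
root=$(mktemp -d)
mkdir -p "$root/docs" "$root/config" "$root/src"
echo "runbook: restart via systemctl" > "$root/docs/runbook.md"
echo "policy: lint required"          > "$root/config/policy.yaml"
echo "runLint()"                      > "$root/src/gate.ts"

# Snapshot: the recorded "source of truth" state across layers.
(cd "$root" && sha256sum docs/runbook.md config/policy.yaml src/gate.ts > manifest.sha256)

# Drift: the code changes but the other layers are not re-pinned.
echo "runBoth()" > "$root/src/gate.ts"

# Re-check: the mismatch is now visible instead of quiet.
drifted=$(cd "$root" && sha256sum -c manifest.sha256 2>/dev/null | grep -c FAILED)
echo "files drifted: $drifted"
```

A real system would validate shapes with schemas rather than raw hashes, but the mechanism is the same: drift is detected by a check, not by someone noticing.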
What is Universe?
Universe is the persistent container for Maria OS—where SSOT, runs, and evidence live.
A Universe is the long-term container that holds operating history as SSOT: runs, artifacts, decisions, and evidence. The UI does not invent facts—it reads immutable refs and SSOT views.
- Database analysis: SQL-first understanding (meaning, lineage, performance, refactoring) with stable output shape.
- Judgment OS: deterministic gates + approval boundaries so adoption does not drift.
- Evidence: artifacts and refs are preserved so reviews can be performed later without guessing.
DS Intake (Universe data onboarding)
Attach original sources (SQL/CSV) and run a proposal-first, evidence-backed intake flow in PLANET.
DS Intake is the process that turns raw sources into a governed Universe model. The flow is proposal-first: uncertain inference is never applied silently. You can inspect refs, query logs, and evidence before applying changes.
- Attach sources (SQL/CSV)
- PROPOSE (preview + blocked reasons)
- APPLY (produce refs)
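A proposal-first step like the one above can be sketched as follows; the file names, proposal shape, and blocking rule are hypothetical, chosen only to show that uncertain input blocks rather than applies silently:

```shell
# Sketch: attach → PROPOSE (with blocked reasons) → APPLY only when READY.
stage=$(mktemp -d)
printf 'id,amount\n1,100\n2,oops\n' > "$stage/source.csv"   # attached source

# PROPOSE: inspect the data and record blockers instead of applying silently.
bad=$(awk -F, 'NR>1 && $2 !~ /^[0-9]+$/ {n++} END {print n+0}' "$stage/source.csv")
if [ "$bad" -gt 0 ]; then
  echo "status=BLOCKED reason=non_numeric_amount rows=$bad" > "$stage/proposal.txt"
else
  echo "status=READY" > "$stage/proposal.txt"
fi

# APPLY only runs when the proposal has been inspected and is READY.
grep -q 'status=READY' "$stage/proposal.txt" && echo "APPLY" || echo "waiting for review"
cat "$stage/proposal.txt"
```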
Maria OS / EVOLVE / doctor (what makes PLANET “enterprise”)
Not “more AI” — an operating system that keeps decisions reproducible, auditable, and improving.
Principles (Structural AGI doctrine)
Essence before Solution / Safety by Structure / Human-first — plus enterprise requirements: determinism, traceability, and explicit gates.
- Essence before Solution: define “what structural problem is this?” in 1–3 lines before discussing solutions.
- Safety by Structure: safety must be enforced by boundaries, responsibilities, detection, redundancy, and fail-safe design—not by “good intentions.”
- Human-first: AI extends humans; final decisions and accountability remain with humans.
- Determinism: same state → same conclusion (especially for doctor and gates).
- Traceability: every decision must be explainable and link back to evidence and boundaries.
- Explicit gates: safe/guarded/risky classification + approval when needed, with rollback conditions.
- No heuristics: do not hardcode fuzzy judgments. Delegate ambiguity to an LLM layer (e.g. ai-proxy) with explicit contracts and logs.
- If the flow exists, improve the system prompt/contract first.
- If the flow does not exist, improve the flow before tuning prompts.
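The "explicit gates" principle (safe/guarded/risky, with approval when needed) can be sketched as a deterministic mapping from risk label to action. The labels come from the doctrine above; the specific actions, and the fail-closed handling of unknown labels, are illustrative rather than MARIA's actual policy table:

```shell
# Sketch: deterministic gate - same label always yields the same action.
gate() {
  case "$1" in
    safe)    echo "auto-apply with rollback armed" ;;
    guarded) echo "apply after required checks pass" ;;
    risky)   echo "STOP: human approval required" ;;
    *)       echo "STOP: unknown label, fail closed" ;;  # unknown input never passes
  esac
}
gate safe
gate risky
gate something-new
```

Because the mapping is a pure function of the label, the same state always yields the same conclusion, which is the determinism requirement stated above.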
Architecture (where things live)
CLI + slash commands + manifest + config + docs work together as one OS.
- src/: core implementation (CLI, commands, services, agents)
- config/: OS-layer configuration (agents, domains, brain profiles)
- docs/: doctrine & operations (meta layer)
- tests/: Vitest suites (unit/integration/contract/e2e)
- /structure - Analyze structural problems and propose stable processes
- /cxo - Executive decision support with go/no-go analysis
- /knowledge - Knowledge packs + HOT KNOWLEDGE + HITL operations
- /agents - Initialize agent team for organizational execution
- /agent - Automated agent execution from CXO decisions
- /a2a - Agent-to-agent coordination and ledger
- /a2a-log - Agent conversation logs and correlation
- /universe - Initialize Maria OS for long-term memory (PLANET)
- /code - Code generation and fixes with context awareness
- /auto-dev - Automated development with safety gates (PLANET)
- /develop - Goal → spec → design → tasks → initial steps
- /image - AI-powered image generation
- /video - AI video generation
- /evolve - Self-evolution loop: diagnose → decide → execute → verify
- /ooda - OODA cycle for current situation analysis
- /doctor - System health diagnosis with evidence and structure
- /init - Initialize MARIA configuration
- /update - Update MARIA to latest version
- /whoami - Show current user and plan information
- Main entry (LLM JSON diagnosis + deep mode): `src/services/doctor/ProjectDoctorService.ts`
- Deterministic check runner (non-LLM checks): `src/services/doctor/DoctorCore.ts`
- Maria OS init/validate/versioning: `src/services/ecosystem/UniverseLifecycleService.ts`
- Event sourcing (audit trail / replay): `src/services/memory-system/event-sourcing/*`
- Maria OS POC (local-only store; enterprise aligned): `src/services/universe-os-poc/UniverseOsPocService.ts`
- LLM-based boundary judgment (no heuristics in host code): `src/services/safety/BoundaryGuardService.ts`
- Role policy gate (STOP / HITL required / required artifacts): `src/services/decision-os/RolePolicy.ts`
- Command-level RBAC guard: `src/services/security/RBACCommandGuard.ts`
- Autonomous plan policy + approval requirement: `src/services/autonomous-agent/security/PolicyEngine.ts`
Maria OS prototypes (latest)
Concrete, auditable workflows that demonstrate what “Maria OS” means in practice.
- Inputs: PR metadata + diff + repo context + config (YAML) + optional graph/doctor context
- Outputs: inline findings + summary comment + ReviewReport + DecisionTrace + GateReport
- Determinism: same inputs → same findings (idempotency marker to avoid duplicates)
/code-review review --diff artifacts/pr.diff --repo acme/repo --pr 123 --base abc --head def --no-llm
/code-review deliver --run-id 12345678:abcd --repo acme/repo --pr 123 --tenant tenant_demo_a
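The determinism and idempotency properties described above can be sketched as: derive a run id purely from the inputs, and use a marker so re-delivering the same run is a no-op. The id format, marker location, and `deliver` helper are hypothetical, not the actual /code-review implementation:

```shell
# Sketch: deterministic run id + idempotency marker for delivery.
inputs="acme/repo#123:abc..def"                       # PR coordinates (illustrative)
run_id=$(printf '%s' "$inputs" | sha256sum | cut -c1-12)  # same inputs -> same id
markers=$(mktemp -d)

deliver() {
  if [ -e "$markers/$run_id.delivered" ]; then
    echo "skip: already delivered ($run_id)"          # duplicate run is a no-op
  else
    : > "$markers/$run_id.delivered"
    echo "delivered ($run_id)"
  fi
}
first=$(deliver)
second=$(deliver)
printf '%s\n%s\n' "$first" "$second"
```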
Recommended workflow (structure → build → evolve)
Enterprise flow: diagnosis-first, gated execution, and safe learning into Maria OS.
- Structure: define OS/boundaries/responsibilities/failure modes first
- Design: turn goals into spec/tasks with clear acceptance criteria
- Build: /code in plan-only → apply (rollback/guard as default)
- Diagnose: /doctor + quality gates to keep “evidence”
- Sync: update docs/knowledge so the OS stays consistent
- doctor: produce a diagnosis with evidence (boundaries, blast radius, risk)
- Decision: classify safe/guarded/risky; request approval when required
- Envelope: issue an explicit work order (constraints, do-not-touch, required tests, stop conditions)
- Execution: agents act as roles (implementation/testing/review/ops) and publish Artifacts
- Verification: GateReport + rollback readiness; then DoctorDelta updates long-term memory
# 1) List available commands (only READY are shown)
maria /help
# 2) Turn a goal into spec/design/tasks
maria /develop "<your goal>"
# 3) Preview first (safe-by-default)
maria /code "<what to build>" --plan-only
# 4) Apply (non-interactive if needed)
maria /code "<what to build>" --apply --yes --rollback on
# 5) Health check
maria /doctor
Specs (practical flags & contracts)
Details live in /help. This section highlights the “patterns” developers/operators use daily.
# Preview (safe default)
maria /code "requirements..." --plan-only
# Apply (non-interactive)
maria /code "requirements..." --apply --yes --rollback on
# Git-guarded (leave evidence)
maria /code "requirements..." --apply --yes --git-guard on --git-commit on
# Example: limit scope and attempts
maria /auto-dev run --goal "small fix" --target-files "src/..." --max-attempts 2
# Resume latest (summary mode)
maria /workflow/resume --latest --rehydrate summary
# Resume a specific task id (and pass flags to /code)
maria /workflow/resume <taskId> --tests --fix --apply
- /git is inspection-only. It runs a safe read-only subset and can capture outputs into artifacts as evidence. The design blocks dangerous flags and prevents pager hangs.
- /git-culture is the operational layer. It writes culture artifacts (evidence index, push-pending ledger, pseudo PR report) and can run publish flows that stop short of merge. Merge remains human-only.
- Meaning: summarize KPIs, lineage, and steps in a stable sectioned format.
- Performance: analyze EXPLAIN output (when provided) and propose indexes and rewrites with explicit assumptions.
- Refactoring: propose decompositions (views/materialized views) and safe migration strategy.
- Large inputs: chunk → per-chunk analysis → hierarchical merge so reports remain stable.
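The chunk → per-chunk analysis → hierarchical merge pattern above can be sketched with line counts standing in for per-chunk LLM reports. The chunk size and merge format are illustrative only:

```shell
# Sketch: chunked analysis with a hierarchical merge.
work=$(mktemp -d)
seq 1 100 > "$work/big.sql"                 # stand-in for a large SQL input
split -l 40 "$work/big.sql" "$work/chunk."  # -> chunk.aa chunk.ab chunk.ac

# Per-chunk "analysis": one stable section per chunk.
for c in "$work"/chunk.*; do
  echo "$(basename "$c"): $(wc -l < "$c") lines" >> "$work/sections.txt"
done

# Merge step: combine per-chunk sections into one stable report.
total=$(awk -F': | ' '{s+=$2} END {print s}' "$work/sections.txt")
echo "merged report: $total lines in $(wc -l < "$work/sections.txt") sections"
```

The value of the shape is that each chunk produces a bounded, predictable section, so the merged report stays stable regardless of input size.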
- BoundaryGuard (Safety Court): evaluate output risk and decide allow / warn / block. Reference: `src/services/safety/BoundaryGuardService.ts`
- Role policy gate: determines STOP/HITL and required artifacts/scopes. Reference: `src/services/decision-os/RolePolicy.ts`
- RBAC command guard: centralized authorization for commands. Reference: `src/services/security/RBACCommandGuard.ts`
- Deterministic risk labeling (safe/guarded/risky) for change planning. Reference: `src/services/evolve-ecosystem/doctor-to-task-spec.ts`
Command catalog (auto-generated from READY.manifest.json)
This list is generated at build time from the current READY manifest.
- Enterprise org doctor: `maria doctor-enterprise --models ...` (implementation: `src/cli/doctor-enterprise.ts`, service: `src/services/enterprise-os/EnterpriseOrgDoctorService.ts`)
- Project doctor: `maria /doctor` (entry: `src/services/doctor/ProjectDoctorService.ts`)
- BoundaryGuard: enforced boundary checks for enterprise outputs (reference: `src/services/safety/BoundaryGuardService.ts`)
- Approval gates: role policy + RBAC command authorization (references: `src/services/decision-os/RolePolicy.ts`, `src/services/security/RBACCommandGuard.ts`)
Deployment & operations (priority)
Never commit secrets. Absorb env differences via config. Enterprise runs locally.
- Never commit secrets (API keys, OAuth credentials, JWT secrets).
- Use Secret Manager (or equivalent) and avoid plaintext secrets in env/config files.
Local LLM Setup Guide (Ultra–Enterprise)
Run Ollama / LM Studio / vLLM on your own hardware (no cloud dependency) and connect MARIA to your local inference server.
- This guide targets the Local LLM Infrastructure feature for Ultra–Enterprise.
- Enterprise is designed for local execution by default (behavior equivalent to LOCAL_MODE=1).
# Prefer *_API_BASE (OpenAI-compatible base). *_API_URL is legacy compatibility (may be removed).
LMSTUDIO_API_BASE=http://localhost:1234/v1
OLLAMA_API_BASE=http://localhost:11434
VLLM_API_BASE=http://localhost:8000/v1
# Compatibility (deprecated)
# LMSTUDIO_API_URL=http://localhost:1234
# OLLAMA_API_URL=http://localhost:11434
# VLLM_API_URL=http://localhost:8000
# Recommended: force local mode (Enterprise-equivalent)
LOCAL_MODE=1
# Default provider/model (optional)
MARIA_PROVIDER=lmstudio   # or: ollama / vllm
MARIA_MODEL=gpt-oss-20b   # example (LM Studio)
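To see how the *_API_BASE variables above could resolve to a single endpoint, here is a small sketch. The resolution order and fallback defaults are an assumption for illustration, not MARIA's documented lookup behavior:

```shell
# Sketch: resolve a provider name to its OpenAI-compatible base URL.
# Defaults mirror the example values above (assumed, not authoritative).
LMSTUDIO_API_BASE=http://localhost:1234/v1
OLLAMA_API_BASE=http://localhost:11434
MARIA_PROVIDER=lmstudio

resolve_base() {
  case "$1" in
    lmstudio) echo "${LMSTUDIO_API_BASE:-http://localhost:1234/v1}" ;;
    ollama)   echo "${OLLAMA_API_BASE:-http://localhost:11434}" ;;
    vllm)     echo "${VLLM_API_BASE:-http://localhost:8000/v1}" ;;
    *)        echo "unknown provider: $1" >&2; return 1 ;;
  esac
}
resolve_base "$MARIA_PROVIDER"
```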
# 1) Start (skip if already running)
ollama serve
# 2) Pull models (examples)
ollama pull llama3.2:3b
ollama pull mistral:7b
ollama pull mixtral:8x7b
ollama pull deepseek-coder:6.7b
ollama pull phi3.5:3.8b
# 3) Confirm installation
ollama list
# 4) Verify API
curl http://localhost:11434/api/version
curl http://localhost:11434/api/tags
# 1) (GUI) Download a model (e.g., gpt-oss-120b / gpt-oss-20b)
# 2) (GUI) Start Local Server in "OpenAI Compatible" mode (default: http://localhost:1234/v1)
# If you have the CLI (lms)
lms ls
lms server start
# Verify
curl http://localhost:1234/v1/models
curl http://localhost:1234/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer lm-studio" \
-d '{"model":"gpt-oss-20b","messages":[{"role":"user","content":"ping"}],"stream":false}'
# Example: start an OpenAI-compatible server (follow vLLM's setup guide for dependencies)
python -m vllm.entrypoints.openai.api_server \
--model mistralai/Mistral-7B-Instruct-v0.2 \
--host 0.0.0.0 \
--port 8000
# Verify
curl http://localhost:8000/v1/models
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model":"mistralai/Mistral-7B-Instruct-v0.2","messages":[{"role":"user","content":"ping"}],"stream":false}'
# Example: run with LM Studio explicitly
maria /ceo --provider lmstudio --model gpt-oss-20b "Summarize the requirements"
# Example: run with Ollama explicitly
maria /ceo --provider ollama --model llama3.2:3b "Summarize the requirements"
Next steps (how to stay aligned with “latest”)
Because the repository is not public, the safest “source of truth” is what the product exposes at runtime.
- Use /help for the latest available commands (READY-only, manifest-backed).
- Use this page’s Command catalog section (auto-generated at build time from the READY manifest).
- For details on a specific command, run /help <command>.
- Keep secrets out of Git; use a secret manager; keep NEXTAUTH_SECRET stable.
- Prefer deterministic flows: preview → apply, and keep evidence (logs/manifests).
- Enterprise policy: run locally (LOCAL_MODE-aligned); avoid heuristics; route ambiguity via LLM contracts.