DOCUMENTATION

This page reflects the current implementation of PLANET (MARIA CODE). It connects doctrine → build → operations in one language, so your decisions and their outcomes can be preserved, and improvement can continue without relying on a few heroes.

Overview (MARIA CODE vs PLANET)

One philosophy, two products: a personal developer execution platform (MARIA CODE) and an enterprise organizational intelligence surface (PLANET).

MARIA is a Structural AGI operating system designed to explicitly model the structure of the world and organizations—OS, rules, flows, and causality—and to help you design, change, and invent those structures.

MARIA CODE

ALL PLANS

Personal developer execution platform (CLI + slash commands). Fast code generation, local + cloud model routing, and documentation → execution.

Available Commands:
  • /code - Code generation and fixes
  • /image - Image generation
  • /video - Video generation
  • /develop - Development workflow
Plans:
  • Free: Basic AI Chat, 40 code gen/month
  • Starter ($20/mo): 300 code gen/month
  • Pro ($39/mo): 1,200 code gen/month
  • Ultra ($99/mo): 5,000 code gen/month

PLANET

ENTERPRISE ONLY

Enterprise product surface powered by Maria OS (observe → present → hold → execute → preserve → learn), designed to preserve decisions and ways of thinking, with local execution by default (LOCAL_MODE-aligned).

Available Commands:
  • /structure - Structural analysis
  • /cxo - Executive decision support
  • /agents - Multi-agent coordination
  • /universe - Long-term memory
  • /evolve - Self-evolution loop
  • /doctor - System health diagnosis
  • /auto-dev - Automated development
Enterprise Features:
  • Connectors (GitHub, accounting/ERP, Google)
  • Admin & Tenant Management
  • Role-based Access Control
  • Maria OS Seamless Access
  • Security & Compliance
  • Unlimited usage
Plan:
Enterprise - $128/month

The goal is not “plausible answers,” but structures that are reproducible, operable, and evolvable. That is why this repo is designed as an OS across code (src/), configuration (config/), and doctrine & operations (docs/).

Enterprise quick links
/enterprise — PLANET Playground + overview
/enterprise/assessment — readiness assessment to deploy PLANET
/enterprise/universe/3d — UNIVERSE 3D (sample)
Sign-in is required for Playground and Assessment (Google / GitHub).
The core loop (EVOLVE)
doctor observes the system and presents evidence.
Parent MARIA (internally with agents) orchestrates work; to you, it presents what it sees so you can decide.
Agents execute with Envelopes and produce Artifacts.
GateReport → DoctorDelta verifies outcomes and preserves memory safely. Your decisions are held, not judged.

Ops-grade AI (what failures Maria OS prevents)

Maria OS is designed to be operable under real constraints: reproducible, auditable, stoppable, and recoverable.

Most “AI agents” fail in production not because the model is weak, but because the system is not operable: you cannot reproduce runs, you cannot audit decisions, you cannot stop safely, and you cannot recover when something goes wrong.

Maria OS treats these as first-class product requirements. The question we optimize for is: “What incident does this mechanism prevent?”

Reproducibility (checkpoint + artifact integrity)
  • Partial replay with checkpoints (resume without re-running everything)
  • Artifact refs with hashes (verify outputs before trusting cache hits)
  • Tool/version pinning for replay safety
Prevents: “we had to restart from zero”, “cache looked fine but was corrupted”, “same input produced a different run”.
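The hash check above can be sketched as follows. This is a minimal illustration of "verify outputs before trusting cache hits"; the `ArtifactRef` shape and function names are assumptions for this example, not Maria OS's actual types.

```typescript
// Sketch: verify an artifact's content hash before trusting a cache hit.
import { createHash } from "node:crypto";

interface ArtifactRef {
  path: string;   // where the artifact is expected to live (illustrative)
  sha256: string; // hash recorded when the artifact was produced
}

function sha256Hex(content: Buffer): string {
  return createHash("sha256").update(content).digest("hex");
}

// A cache hit is only trusted when the stored content still matches the ref.
function verifyArtifact(ref: ArtifactRef, content: Buffer): boolean {
  return sha256Hex(content) === ref.sha256;
}

const content = Buffer.from('{"report":"ok"}');
const ref: ArtifactRef = { path: "artifacts/report.json", sha256: sha256Hex(content) };

console.log(verifyArtifact(ref, content));                  // true: safe to reuse
console.log(verifyArtifact(ref, Buffer.from("corrupted"))); // false: re-run instead
```

A mismatch means the run is repeated from the last good checkpoint rather than trusting the corrupted cache.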
Independent quality judgment (QE + required gates)
  • Separate roles: Auto-Dev executes; QE decides pass/fail
  • Required gates: lint:truth + tsc --noEmit (+ deterministic checks)
  • “Fail closed” defaults: stop when evidence is insufficient
Prevents: “agent approved its own broken change”, “silent regressions”, “good demo / bad merge”.
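The "fail closed" default can be sketched as a gate aggregator: the run passes only when every required gate produced positive evidence, and missing evidence means stop, not pass. The `GateResult` shape and gate names are illustrative, not the real contract.

```typescript
// Sketch of a fail-closed gate aggregator. `undefined` models a gate that
// never produced evidence (e.g. the check did not run).
type GateResult = { gate: string; passed: boolean } | undefined;

function decide(results: GateResult[]): "pass" | "stop" {
  for (const r of results) {
    if (r === undefined) return "stop"; // insufficient evidence → fail closed
    if (!r.passed) return "stop";       // the executor cannot approve its own failure
  }
  return "pass";
}

console.log(decide([{ gate: "lint:truth", passed: true }, { gate: "tsc --noEmit", passed: true }])); // "pass"
console.log(decide([{ gate: "lint:truth", passed: true }, undefined]));                              // "stop"
```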
Auditability (Git as Ledger)
  • Commit → EvidenceIndex (always traceable)
  • If push/PR is blocked: Push Pending Ledger / pseudo-PR report
  • Data pointers for reproducible analysis (hash + schemaRef + access)
Prevents: “work existed only on someone’s laptop”, “no one can reproduce the report”, “audit cannot reconstruct why”.
No heuristics boundary (DecisionClass D0–D3)
  • D0–D2: deterministic facts/transforms/parses
  • D3: probabilistic inference must use an LLM (never hardcode “fuzzy” rules)
  • When LLM is unavailable: report “insufficient evidence” and stop
Prevents: “mysterious behavior drift”, “rules that seem smart but fail silently”, “unsafe guesswork in core logic”.
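The D0–D3 boundary can be sketched as a router. Only the routing logic is shown; the function and return labels are assumptions for illustration.

```typescript
// Sketch: routing by DecisionClass. D0–D2 stay deterministic in host code;
// D3 must go to an LLM, and when no LLM is available the only legal answer
// is "insufficient evidence" — never a hardcoded fuzzy rule.
type DecisionClass = "D0" | "D1" | "D2" | "D3";
type Route = "deterministic" | "llm" | "insufficient_evidence";

function route(cls: DecisionClass, llmAvailable: boolean): Route {
  if (cls !== "D3") return "deterministic";              // facts/transforms/parses
  return llmAvailable ? "llm" : "insufficient_evidence"; // probabilistic inference
}

console.log(route("D1", false)); // "deterministic"
console.log(route("D3", true));  // "llm"
console.log(route("D3", false)); // "insufficient_evidence" → report and stop
```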
Multi-layer SSOT (why it matters)

Many teams lose quality over time due to drift between code, config, docs, and operations. Maria OS treats drift as a production incident class, and designs the system to resist it.

Multi-layer SSOT (anti-drift architecture)

Why Maria OS keeps multiple “sources of truth” on purpose — and how it prevents quality decay in long-running organizations.

In most systems, “SSOT” is treated as a single document. In production, that is not enough. Drift happens across layers: the code changes but the runbook doesn’t, policies move but enforcement doesn’t, schemas evolve but outputs are not validated. When drift is quiet, reliability dies quietly.

The layers (what each layer is “true” about)
  • docs/: doctrine + operations (how the system is run, audited, recovered)
  • config/: contracts, policies, agent profiles (what is allowed/expected)
  • src/: implementation (what the system actually does)
  • docs/schemas/ + schemas/: machine-validated shapes (what “valid output/evidence” means)
  • artifacts/: immutable evidence (what happened, with refs + hashes)
Diagram (doctrine → contracts → execution → evidence)
docs/ (doctrine + runbooks)
   ↓
config/ (policies + contracts + agent profiles)
   ↓
src/ (implementation)
   ↓
artifacts/ (evidence: runs, reports, diffs, checkpoints)

docs/schemas/ + schemas/ validate outputs across all layers
→ drift becomes visible (and therefore fixable).
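A minimal sketch of "validate, don’t hope": a machine-checked output shape makes drift loud instead of silent. The `EvidenceRecord` fields here are illustrative, not a real Maria OS schema.

```typescript
// Sketch: when src/ changes what it emits, this check fails visibly
// instead of letting downstream tools break quietly.
interface EvidenceRecord {
  runId: string;
  sha256: string;
  createdAt: string; // ISO-8601 timestamp
}

function validateEvidence(x: unknown): x is EvidenceRecord {
  if (typeof x !== "object" || x === null) return false;
  const r = x as Record<string, unknown>;
  return (
    typeof r.runId === "string" &&
    typeof r.sha256 === "string" &&
    typeof r.createdAt === "string" &&
    !Number.isNaN(Date.parse(r.createdAt))
  );
}

console.log(validateEvidence({ runId: "r1", sha256: "ab12", createdAt: "2026-01-05T00:00:00Z" })); // true
console.log(validateEvidence({ runId: "r1" })); // false → drift is visible, not silent
```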
What drift looks like (failure modes)
  • Runbooks describe steps that no longer work
  • Config claims policies exist, but enforcement is missing
  • Outputs “look OK” but break downstream tools
  • A single expert becomes the only “real SSOT”
How Maria OS prevents drift (mechanisms)
  • Strict schemas for outputs/evidence (validate, don’t hope)
  • Quality gates as defaults (lint + typecheck + deterministic checks)
  • Git as Ledger (commit → evidence is always traceable)
  • Replay safety (checkpoints + artifact integrity)
Why this is “hard” (and why it is worth it)
Multi-layer SSOT increases upfront explicitness. In return, it makes onboarding repeatable, keeps incident response deterministic, and prevents quiet process corruption.
If you only remember one thing: drift is not a documentation problem — it is an operational reliability problem.

What is Universe?

Universe is the persistent container for Maria OS—where SSOT, runs, and evidence live.

A Universe is the long-term container that holds operating history as SSOT: runs, artifacts, decisions, and evidence. The UI does not invent facts—it reads immutable refs and SSOT views.

Data.OS (standard in Universe)
In PLANET, Data.OS is a standard layer bound to Universe. It is where database analysis and Judgment OS meet: raw sources become structured reports, then proposals are routed through explicit gates.
  • Database analysis: SQL-first understanding (meaning, lineage, performance, refactoring) with stable output shape.
  • Judgment OS: deterministic gates + approval boundaries so adoption does not drift.
  • Evidence: artifacts and refs are preserved so reviews can be performed later without guessing.
UNIVERSE.OS
Create / select a Universe (tenantId + universeId).
Run in SIM-first mode, preserve artifacts, and keep revisions reproducible.
Universe Visual Playground (3D)
Execution layer: planets (agents/commands) and envelopes (A2A).
Data layer: nodes/edges from SSOT SQL views (vw_unv_*).

DS Intake (Universe data onboarding)

Attach original sources (SQL/CSV) and run a proposal-first, evidence-backed intake flow in PLANET.

DS Intake is the process that turns raw sources into a governed Universe model. The flow is proposal-first: uncertain inference is never applied silently. You can inspect refs, query logs, and evidence before applying changes.

Web (PLANET)
  • Attach sources (SQL/CSV)
  • PROPOSE (preview + blocked reasons)
  • APPLY (produce refs)
Open DS Intake → (sign-in required)
CLI parity (concept)
Each OS mode should be available in both CLI and Web. DS Intake follows the same principle: the Web is a surface; execution itself is owned by MARIA OS.
Note: command names may differ depending on your deployment; the invariants are proposal-first, traceability, and policy boundaries.
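The proposal-first invariant above can be sketched as follows. PROPOSE produces a preview with blocked reasons; APPLY refuses anything still blocked, so uncertain inference is never applied silently. Type and function names are assumptions for this sketch.

```typescript
// Sketch of proposal-first intake: nothing blocked is ever applied silently.
interface Proposal {
  id: string;
  preview: string[];        // changes that would be applied
  blockedReasons: string[]; // why (parts of) the proposal cannot proceed
}

function propose(changes: string[], policyBlocks: string[]): Proposal {
  return { id: "p1", preview: changes, blockedReasons: policyBlocks };
}

function applyProposal(p: Proposal): { applied: string[]; refs: string[] } {
  if (p.blockedReasons.length > 0) {
    throw new Error(`blocked: ${p.blockedReasons.join(", ")}`);
  }
  // Every applied change produces a ref that can be inspected later.
  return { applied: p.preview, refs: p.preview.map((_, i) => `ref:${p.id}:${i}`) };
}

const ok = propose(["add table orders"], []);
console.log(applyProposal(ok).refs); // refs preserved as evidence

const blocked = propose(["drop table users"], ["destructive change requires approval"]);
try { applyProposal(blocked); } catch (e) { console.log((e as Error).message); }
```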

Maria OS / EVOLVE / doctor (what makes PLANET “enterprise”)

Not “more AI” — an operating system that keeps decisions reproducible, auditable, and improving.

Maria OS (long-term memory with structure)
Maria OS is not “logs.” It is an organized memory shelf where every run is recorded as Events, distilled into Lessons, and stabilized as Procedures.
Work is issued as an Envelope (objective, constraints, boundaries, required tests, stop conditions). Outcomes are stored as Artifacts linked back to the Envelope — including failures.
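The Envelope/Artifact relationship can be sketched as two record shapes. Field names follow the prose above but are illustrative, not the real schema.

```typescript
// Sketch: an Envelope is a work order; every Artifact links back to it,
// including failures — nothing is discarded.
interface Envelope {
  id: string;
  objective: string;
  constraints: string[];
  boundaries: string[];     // do-not-touch areas
  requiredTests: string[];
  stopConditions: string[];
}

interface Artifact {
  envelopeId: string;       // link back to the issuing Envelope
  kind: "diff" | "report" | "failure";
  ref: string;              // immutable pointer to the stored evidence
}

const env: Envelope = {
  id: "env-001",
  objective: "small fix in parser",
  constraints: ["no new dependencies"],
  boundaries: ["src/services/safety/**"],
  requiredTests: ["tsc --noEmit"],
  stopConditions: ["any boundary touched"],
};

const failed: Artifact = { envelopeId: env.id, kind: "failure", ref: "artifacts/run-42/report.json" };
console.log(failed.envelopeId === env.id); // true: failures stay linked to their work order
```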
doctor (diagnosis with evidence, not vibes)
doctor does not merely say “there is a problem.” It explains which boundaries are touched, what evidence supports the diagnosis, and what the expected blast radius is.
The same inputs should yield the same conclusions. Enterprise diagnosis must not “wobble.”
EVOLVE (safe improvement as a mechanism)
EVOLVE is the loop that makes improvement continuous: observe → present → hold → execute → preserve → learn.
Adoption is gated by deterministic rules (safe/guarded/risky). When stakes are high, PLANET presents what it sees with context and rollback conditions, so you can decide. It asks for your approval—not as a command, but as a companion standing beside you.
Verification produces a GateReport. Only after improvement is confirmed do we write back long-term memory (DoctorDelta).
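The adoption gate can be sketched in a few lines: the risk class decides whether approval is needed, and long-term memory is written only after a passing GateReport. The function and labels mirror the prose but are illustrative.

```typescript
// Sketch of gated adoption: hold for approval when risk is non-trivial,
// and write DoctorDelta only after verification confirms the improvement.
type Risk = "safe" | "guarded" | "risky";
interface GateReport { passed: boolean }

function adopt(risk: Risk, approved: boolean, gate: GateReport): "write_memory" | "hold" | "reject" {
  if (risk !== "safe" && !approved) return "hold"; // present context, wait for a human
  if (!gate.passed) return "reject";               // improvement not confirmed
  return "write_memory";                           // DoctorDelta only after verification
}

console.log(adopt("safe", false, { passed: true }));    // "write_memory"
console.log(adopt("risky", false, { passed: true }));   // "hold"
console.log(adopt("guarded", true, { passed: false })); // "reject"
```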
Enterprise default: local execution
Enterprise deployments assume local-first execution (LOCAL_MODE-aligned) to keep data boundaries and audit trails under control. When in doubt, do not hardcode environment differences — absorb via config/loaders and keep evidence.

Principles (Structural AGI doctrine)

Essence before Solution / Safety by Structure / Human-first — plus enterprise requirements: determinism, traceability, and explicit gates.

Core principles
  • Essence before Solution: define “what structural problem is this?” in 1–3 lines before discussing solutions.
  • Safety by Structure: safety must be enforced by boundaries, responsibilities, detection, redundancy, and fail-safe design—not by “good intentions.”
  • Human-first: AI extends humans; final decisions and accountability remain with humans.
Enterprise principles (non-negotiable)
  • Determinism: same state → same conclusion (especially for doctor and gates).
  • Traceability: every decision must be explainable and link back to evidence and boundaries.
  • Explicit gates: safe/guarded/risky classification + approval when needed, with rollback conditions.
Think in layers (recommended)
OS layer: what becomes “default” (values, boundaries, responsibilities)
Rules layer: decision criteria, allow/deny, exceptions
Process layer: workflows, reviews/approvals, operating loops
Implementation layer: code, config, tests, monitoring
Implementation rules (consistency & governance)
  • No heuristics: do not hardcode fuzzy judgments. Delegate ambiguity to an LLM layer (e.g. ai-proxy) with explicit contracts and logs.
  • If the flow exists, improve the system prompt/contract first.
  • If the flow does not exist, improve the flow before tuning prompts.

Architecture (where things live)

CLI + slash commands + manifest + config + docs work together as one OS.

Repo layout (as an OS)
  • src/: core implementation (CLI, commands, services, agents)
  • config/: OS-layer configuration (agents, domains, brain profiles)
  • docs/: doctrine & operations (meta layer)
  • tests/: Vitest suites (unit/integration/contract/e2e)
Command reliability model
Only READY commands are exposed to users. Readiness is mechanically enforced by the manifest.
Source of truth: /help and READY.manifest.json
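The readiness filter can be sketched as a manifest pass. The manifest shape and status values here are assumptions for illustration; the real source of truth is READY.manifest.json.

```typescript
// Sketch: /help only exposes commands the manifest marks READY.
interface ManifestEntry { name: string; status: "READY" | "WIP" | "HIDDEN" }

function visibleCommands(manifest: ManifestEntry[]): string[] {
  return manifest.filter((e) => e.status === "READY").map((e) => e.name);
}

const manifest: ManifestEntry[] = [
  { name: "/code", status: "READY" },
  { name: "/experimental-x", status: "WIP" },
  { name: "/doctor", status: "READY" },
];

console.log(visibleCommands(manifest)); // ["/code", "/doctor"]
```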
Command Structure (PLANET)
1. GOVERNANCE (Decision & Structure) · PLANET
  • /structure - Analyze structural problems and propose stable processes
  • /cxo - Executive decision support with go/no-go analysis
  • /knowledge - Knowledge packs + HOT KNOWLEDGE + HITL operations
2. AGENTS (Organization & Execution) · PLANET
  • /agents - Initialize agent team for organizational execution
  • /agent - Automated agent execution from CXO decisions
  • /a2a - Agent-to-agent coordination and ledger
  • /a2a-log - Agent conversation logs and correlation
3. EXECUTION (Maria OS & Build) · ALL PLANS
  • /universe - Initialize Maria OS for long-term memory (PLANET)
  • /code - Code generation and fixes with context awareness
  • /auto-dev - Automated development with safety gates (PLANET)
  • /develop - Goal → spec → design → tasks → initial steps
  • /image - AI-powered image generation
  • /video - AI video generation
4. EVOLUTION (Learning Loop) · PLANET
  • /evolve - Self-evolution loop: diagnose → decide → execute → verify
  • /ooda - OODA cycle for current situation analysis
5. HEALTH (Safety & Diagnostics) · PLANET
  • /doctor - System health diagnosis with evidence and structure
6. SETTING · ALL PLANS
  • /init - Initialize MARIA configuration
  • /update - Update MARIA to latest version
  • /whoami - Show current user and plan information
Enterprise implementation pointers (src/services/...)
doctor (diagnosis)
  • Main entry (LLM JSON diagnosis + deep mode): `src/services/doctor/ProjectDoctorService.ts`
  • Deterministic check runner (non-LLM checks): `src/services/doctor/DoctorCore.ts`
Maria OS (lifecycle + memory foundations)
  • Maria OS init/validate/versioning: `src/services/ecosystem/UniverseLifecycleService.ts`
  • Event sourcing (audit trail / replay): `src/services/memory-system/event-sourcing/*`
  • Maria OS POC (local-only store; enterprise aligned): `src/services/universe-os-poc/UniverseOsPocService.ts`
Boundary guard (Safety Court)
  • LLM-based boundary judgment (no heuristics in host code): `src/services/safety/BoundaryGuardService.ts`
Approval & authorization gates
  • Role policy gate (STOP / HITL required / required artifacts): `src/services/decision-os/RolePolicy.ts`
  • Command-level RBAC guard: `src/services/security/RBACCommandGuard.ts`
  • Autonomous plan policy + approval requirement: `src/services/autonomous-agent/security/PolicyEngine.ts`
Deterministic risk classification (safe/guarded/risky)
See `src/services/evolve-ecosystem/doctor-to-task-spec.ts`.

Maria OS prototypes (latest)

Concrete, auditable workflows that demonstrate what “Maria OS” means in practice.

GitHub Code Review Maria OS (prototype)
A comment-only PR reviewer that prioritizes auditability, consistency, and reproducibility.
  • Inputs: PR metadata + diff + repo context + config (YAML) + optional graph/doctor context
  • Outputs: inline findings + summary comment + ReviewReport + DecisionTrace + GateReport
  • Determinism: same inputs → same findings (idempotency marker to avoid duplicates)
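The idempotency marker mentioned above can be sketched as a deterministic hash of the review inputs embedded in the comment: reviewing the same inputs twice yields the same marker, so an existing comment with that marker means "skip, already posted". The marker format is an assumption for this sketch.

```typescript
// Sketch: deterministic idempotency marker for PR review comments.
import { createHash } from "node:crypto";

function reviewMarker(repo: string, pr: number, headSha: string): string {
  const h = createHash("sha256").update(`${repo}#${pr}@${headSha}`).digest("hex").slice(0, 12);
  return `<!-- code-review:${h} -->`; // hidden HTML comment in the posted body
}

function shouldPost(existingComments: string[], marker: string): boolean {
  return !existingComments.some((c) => c.includes(marker));
}

const m = reviewMarker("acme/repo", 123, "def456");
console.log(m === reviewMarker("acme/repo", 123, "def456")); // same inputs → same marker
console.log(shouldPost([], m));  // true: first run posts
console.log(shouldPost([m], m)); // false: re-run is a no-op
```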
Try it (developer commands)
Review a diff (local):
/code-review review --diff artifacts/pr.diff --repo acme/repo --pr 123 --base abc --head def --no-llm
Generate deliverables (from webhook runId):
/code-review deliver --run-id 12345678:abcd --repo acme/repo --pr 123 --tenant tenant_demo_a
Deliverables are generated under docs/deliverables/universe-github-code-review-spec-v1/<runId>/.

Recommended workflow (structure → build → evolve)

Enterprise flow: diagnosis-first, gated execution, and safe learning into Maria OS.

Fastest loop (recommended)
  1. Structure: define OS/boundaries/responsibilities/failure modes first
  2. Design: turn goals into spec/tasks with clear acceptance criteria
  3. Build: /code in plan-only → apply (rollback/guard as default)
  4. Diagnose: /doctor + quality gates to keep “evidence”
  5. Sync: update docs/knowledge so the OS stays consistent
Enterprise loop (what “EVOLVE” means in practice)
  1. doctor: produce a diagnosis with evidence (boundaries, blast radius, risk)
  2. Decision: classify safe/guarded/risky; request approval when required
  3. Envelope: issue an explicit work order (constraints, do-not-touch, required tests, stop conditions)
  4. Execution: agents act as roles (implementation/testing/review/ops) and publish Artifacts
  5. Verification: GateReport + rollback readiness; then DoctorDelta updates long-term memory
Developer quick start
# 1) List available commands (only READY are shown)
maria /help

# 2) Turn a goal into spec/design/tasks
maria /develop "<your goal>"

# 3) Preview first (safe-by-default)
maria /code "<what to build>" --plan-only

# 4) Apply (non-interactive if needed)
maria /code "<what to build>" --apply --yes --rollback on

# 5) Health check
maria /doctor

Specs (practical flags & contracts)

Details live in /help. This section highlights the “patterns” developers/operators use daily.

/code (generate → preview → apply)
Rule: start with plan-only, avoid risky ops, keep rollback available.
# Preview (safe default)
maria /code "requirements..." --plan-only

# Apply (non-interactive)
maria /code "requirements..." --apply --yes --rollback on

# Git-guarded (leave evidence)
maria /code "requirements..." --apply --yes --git-guard on --git-commit on
  • --plan-only / --dry-run: Preview diffs and align before applying
  • --rollback on|off: Control rollback strategy on failures
  • --interactive / --yes: Human review vs non-interactive execution
  • --git-guard / --git-commit: Safety gates + commit-level audit trail
/auto-dev (small & safe autonomous changes)
A non-breaking-first autonomous dev engine for small, test-driven changes.
# Example: limit scope and attempts
maria /auto-dev run --goal "small fix" --target-files "src/..." --max-attempts 2
For deterministic gates, copy config/templates/auto-dev.config.yaml into your project as auto-dev.config.yaml.
/workflow/resume (the resume contract)
For multi-day/week work: restores summary/artifacts/decisions and bridges to the next action (often /code).
# Resume latest (summary mode)
maria /workflow/resume --latest --rehydrate summary

# Resume a specific task id (and pass flags to /code)
maria /workflow/resume <taskId> --tests --fix --apply
Full spec: docs/RESUME_FUNCTION_DESIGN.md
Git operations (inspect vs publish)
In PLANET, Git is not treated as a casual activity log. It is treated as a ledger with evidence and deterministic operating steps.
  • /git is inspection-only. It runs a safe read-only subset and can capture outputs into artifacts as evidence. The design blocks dangerous flags and prevents pager hangs.
  • /git-culture is the operational layer. It writes culture artifacts (evidence index, push-pending ledger, pseudo PR report) and can run publish flows that stop short of merge. Merge remains human-only.
Principle: inspection is safe-by-default; publish is evidence-first and policy-gated.
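The inspection-only design can be sketched as a wrapper that allowlists read-only subcommands, rejects dangerous flags, and disables the pager so the process cannot hang. The specific allowlist and flag set are assumptions for this sketch, not the real policy.

```typescript
// Sketch of an inspection-only git wrapper.
const READ_ONLY = new Set(["status", "log", "diff", "show", "branch"]);
const BLOCKED_FLAGS = ["--force", "-f", "--hard", "--exec"];

function buildGitArgs(sub: string, args: string[]): string[] {
  if (!READ_ONLY.has(sub)) throw new Error(`blocked: '${sub}' is not read-only`);
  for (const a of args) {
    if (BLOCKED_FLAGS.includes(a)) throw new Error(`blocked flag: ${a}`);
  }
  return ["--no-pager", sub, ...args]; // --no-pager prevents interactive hangs
}

console.log(buildGitArgs("log", ["--oneline", "-5"]).join(" ")); // safe, non-interactive
try { buildGitArgs("push", []); } catch (e) { console.log((e as Error).message); }
```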
/ds (data source analysis as an operating workflow)
/ds is a SQL-first workflow for understanding and improving data sources. It is not “chat about data” — it is designed to produce structured artifacts you can re-run and review.
  • Meaning: summarize KPIs, lineage, and steps in a stable sectioned format.
  • Performance: analyze EXPLAIN output (when provided) and propose indexes and rewrites with explicit assumptions.
  • Refactoring: propose decompositions (views/materialized views) and safe migration strategy.
  • Large inputs: chunk → per-chunk analysis → hierarchical merge so reports remain stable.
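The chunk → per-chunk → hierarchical merge step can be sketched as a tree merge: pairwise merging keeps each merge step small, so report size stays stable for large inputs. `analyzeChunk` and `merge` are stand-ins for the real /ds analysis steps.

```typescript
// Sketch of hierarchical merging for large SQL inputs.
function chunk(sql: string, maxChars: number): string[] {
  const out: string[] = [];
  for (let i = 0; i < sql.length; i += maxChars) out.push(sql.slice(i, i + maxChars));
  return out;
}

// Stand-in for per-chunk LLM/deterministic analysis.
function analyzeChunk(c: string): string { return `summary(${c.length} chars)`; }

// Stand-in for merging two partial reports.
function merge(a: string, b: string): string { return `merged[${a} + ${b}]`; }

// Tree merge instead of one giant concatenation: each step sees two inputs.
function mergeAll(parts: string[]): string {
  while (parts.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < parts.length; i += 2) {
      next.push(i + 1 < parts.length ? merge(parts[i], parts[i + 1]) : parts[i]);
    }
    parts = next;
  }
  return parts[0];
}

const report = mergeAll(chunk("SELECT ...".repeat(100), 256).map(analyzeChunk));
console.log(report.startsWith("merged[")); // true
```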
Manifest (“only READY is exposed”)
Command exposure is mechanically enforced by contract (metadata/execute, dependencies, tests). Only READY is visible in /help.
Best practices: docs/BEST_PRACTICE/MANIFEST_BEST_PRACTICE.md
Enterprise governance (boundaries + approvals)
Enterprise operations depend on three explicit contracts: boundaries, role/permission gates, and approval/rollback conditions.
  • BoundaryGuard (Safety Court): evaluate output risk and decide allow / warn / block. Reference: `src/services/safety/BoundaryGuardService.ts`
  • Role policy gate: determines STOP/HITL and required artifacts/scopes. Reference: `src/services/decision-os/RolePolicy.ts`
  • RBAC command guard: centralized authorization for commands. Reference: `src/services/security/RBACCommandGuard.ts`
  • Deterministic risk labeling (safe/guarded/risky) for change planning. Reference: `src/services/evolve-ecosystem/doctor-to-task-spec.ts`
Rule of thumb: LLMs may propose. Adoption must be decided by deterministic gates and recorded as evidence.

Command catalog (auto-generated from READY.manifest.json)

This list is generated at build time from the current READY manifest.

Tip: /help is always the latest truth
For per-command details, run /help <command> in the CLI.
Enterprise: decision → deployment → operations (recommended starting points)
If you are adopting PLANET in an organization, start with diagnosis and governance surfaces before scaling agents.
  • Enterprise org doctor: `maria doctor-enterprise --models ...` (implementation: `src/cli/doctor-enterprise.ts`, service: `src/services/enterprise-os/EnterpriseOrgDoctorService.ts`)
  • Project doctor: `maria /doctor` (entry: `src/services/doctor/ProjectDoctorService.ts`)
  • BoundaryGuard: enforced boundary checks for enterprise outputs (reference: `src/services/safety/BoundaryGuardService.ts`)
  • Approval gates: role policy + RBAC command authorization (references: `src/services/decision-os/RolePolicy.ts`, `src/services/security/RBACCommandGuard.ts`)
For enterprise safety posture, prefer LOCAL_MODE-aligned execution and keep GateReport/DoctorDelta evidence for each change.
Current READY commands by category
Generated: 2026-01-05T12:52:18.984Z · Total: 144 · READY: 144
ai (8)
/a2a
A2A: Ops command to query/audit/replay Envelope SSOT (SQLite Ledger). Start by checking ledger status with /a2a status.
Usage: status [--json] | doctor [--json] [--limit <n>] | ledger --queue <queueId> [--limit <n>] [--json] | ledger --correlation <id> [--limit <n>] [--json] | ledger --envelope <envelopeId> [--system <a2a|decision-os|auto-dev|governance|universe-poc|unknown>] [--limit <n>] [--json] | audit --queue <queueId> [--limit <n>] [--json] | replay --queue <queueId> [--force] [--note <text>] [--json] | kg sync [--limit <n>] [--json] | kg show [--queue <queueId>] [--decision <decisionId>] [--approval <requestId>] [--approval-group <apg_...>] [--limit <n>] [--format mermaid|json|timeline] | approval reopen --approval-group <apg_...> [--json]
/a2a-bus
Command to inspect the A2A message bus (delivery queue) and manually drain (deliver) messages.
Usage: peek [--limit <n>] [--json] | stats [--json] | tail [--limit <n>] [--json] | verify [--json] | drain [--limit <n>] [--dry-run] [--transport <session|inbox|webhook>] [--json] | worker [--interval-ms <ms>] [--limit <n>] [--dry-run] [--max-ticks <n> | --forever] [--transport <session|inbox|webhook>] [--retry-failed] [--retry-max-attempts <n>] [--retry-backoff-ms <ms>] [--json]
/a2a-log
Command to list/show logs of sessions recorded via the A2A protocol (e.g., /cxo, /agents, /develop).
Usage: list [--source <cxo|agents|develop>] [--last <n>] [--json] | show <sessionId> [--json] [--layers <comma-separated>] [--diff] | verify-signatures <sessionId> [--json] | approve <sessionId> --decision <decisionId> [--deny] [--kind <hitl>] [--note <text>] | audit [--source agents] [--last <n>] [--json]
/agent
Experimental command that suggests specialized agent candidates from A2A logs. (/agent auto)
Usage: auto [--source <cxo|agents|develop>] [--last <n>] [--analyze last-30d] [--json] | evolve <candidateId> [--output-dir <path>] [--dry-run] [--json] | audit-a2a --agents <id> [--last <n>] [--json] | diagnose --agents <id> [--last <n>] [--json]
/agents
Hub command to create an AI team (agent org) to help drive a project end-to-end: planning, execution, and retrospectives. (/agents)
Usage: init "<goal>" [options] | envelope-dev | templates | recommend-template "<goal>" | universe --agents <id> --tenant <tenantId> --project <projectId> [--json] | plan [--agents <id>] | run --agents <id> [--mode <manual|local|staging>] [--max-steps <n>] [--concurrency <n>] [--coder-agents <n>] [--background] [--hitl] [--apply --hitl-approve <decisionId>] | trace [--correlation-id <id>] [--decision-id <id>] [--agents <id> --workload-id <id>] [--json] | status [--agents <id>] | pause --agents <id> | resume --agents <id> | interrupt --agents <id> | show <agentsId> | list [--product <id>] | members --agents <id> | member add/remove "<role>" [--agents <id>] | memory <list|add|pin|unpin|remove|review> [options] | save [--agents <id>] [--file <path>] | load [--file <path>]
/caio
A custom agent command for Human-AI architecture and UX design support as CAIO (MARIA).
Usage: /caio [--profile <id>] [--provider <lmstudio|ollama|vllm>] [--model <name>] [--inputs <json|@file>] [--use-latest off|on|auto] [--auto-rerun off|on|auto] "Your request"
/gpu
🎮 GPU management and monitoring for AI acceleration
Usage: [status|benchmark|devices|memory] [options]
/llm-catalog
Helper command to refresh and inspect the LLM model catalog (llm-model-catalog.json).
Usage: refresh [--provider openai|anthropic|google|xai|all] | show
analysis (13)
/ask-data
Interactive command: ask a business question in natural language and get a CXO-oriented analysis plan and insights (SQL/analytics framing) based on the specified data source.
Usage: /ask-data --source <datasource_id|path> [--persona ceo|cfo|chro|cto|auto] [--goal <text>] "question"
/ds
Starting from SQL, analyze structured data sources (SQL / NoSQL / CSV / VDB / GraphRAG, etc.) from a data science perspective, organizing meaning, quality, performance, and refactoring suggestions (/ds, currently focuses on SQL).
Usage: [analyze|perf|refactor] --file <path> [--driver <pg|bq|mysql|generic|nosql|csv|vdb|graph>] [--kind <sql|nosql|csv|vdb|graph>] [--schema <path>] [--explain <path>] [--max-chars <n>] [--goal <text>] [--persona <ceo|cfo|chro|cto|auto>] [--strict] [--mode <analyze|perf|refactor|perf-strict|analyze-strict|refactor-strict>] [--for <ceo|cfo|chro|cto>] [--export <path>] [--concurrency <n>] [--background]
/ds-chat
Wizard to use /ds /insight /ask-data from chat. Proposes optimal command lines from your goal and context.
Usage: /ds-chat [--file <path>] [--mode analyze|perf|refactor] [--persona ceo|cfo|chro|cto|auto] [--goal <text>] [notes or questions...]
/find-trace
Reverse-lookup correlationId (causal chain) from evidence(ref/type). Useful for tracing URL/PR/CI/Deploy/Freee via SSOT.
Usage: /find-trace --ref <evidenceRefOrUrl> | --deploy-url <url> [--type github|ci|deploy|freee|url|log|decision_check] [--limit 50] [--latest 10]
/generate-kpi
Generate performance KPI artifacts (performance.kpi.json + optional workitems.ndjson) with deterministic quality_gates + workitem templates.
Usage: /generate-kpi [--out artifacts/perf] [--workplan <path>] [--metrics-snapshot <path>] [--product-id <id>] [--initiative-id <id>] [--product-context <path>] [--mode normal_execution] [--platform local] [--baseline-source <s>] [--baseline-window 7d] [--baseline-aggregation median] [--error-rate-max 0.02] [--test-pass-rate-min 1] [--safety-filter-pass-rate-min 0.99] [--min-response-length-chars-min 200]
/note
Records human intervention (override/exception/force stop) as a human_override Envelope in the SSOT Ledger.
Usage: /note --cid <correlationId> --reason <reason_code> --review-after <YYYY-MM-DD> [--override-type override_recommendation|add_exception|force_stop] [--confidence 0.0-1.0] [--role CAIO] [--parent <envelopeId>] [--decision-ref <ref>] [--evidence "url=... github=..."] "body text"
/ooda
Command that returns a report and TODOs in an OODA loop (Observe / Orient / Decide / Act) structure for TSA / SymptomEvent or management challenges, and records it as an episode.
Usage: [--node <edgeNodeId> --symptom <symptomId>] [--domain <manufacturing|care|local-government|product-company|platform>] [--role <doctor|cxo|coo|mixed>] "Request to organize using OODA"
/replay
Generates an HTML replay UI under artifacts/replay to 'play back' the causal chain over time by specifying correlationId from Envelope SSOT (Ledger).
Usage: /replay --cid <correlationId> [--limit 2000]
/research
Research a topic or URL and produce a summary that separates "facts" from "interpretations", including concrete takeaways and source links (Principle-First OS / Structural AGI lens).
Usage: <_url> [_options] OR <_action> [params] [--background] [--concurrency <n>]
/review-overrides
Summarize human_override records past review_after and enqueue review request envelopes (review_request) into the SSOT Ledger.
Usage: /review-overrides [--cid <correlationId>] [--limit 200] [--dry-run]
/sma
SMA hub (sense-making). Ingest sensor envelopes and produce lightweight session artifacts (local, deterministic).
Usage: /sma ingest --sensor-envelope <path> [--out <artifactsRoot>] | /sma analyze --session <id> [--raw-root <rawRoot>] [--out <artifactsRoot>] | /sma verify --session <id> [--raw-root <rawRoot>] [--out <artifactsRoot>] | /sma report --session <id> [--artifacts-root <artifactsRoot>] [--out <envelopesOut>]
/trace
Show SSOT timeline for a correlationId (EnvelopeLedger). Fast text/JSON view; use /replay to generate HTML.
Usage: /trace --cid <correlationId> [--limit 2000] [--events 2000] [--json]
/tsa
A hub for TSA (tactile-sense-agent) utilities for on-site symptom sensors.
Usage: Run /tsa or /tsa help to show an overview of TSA workflows and example commands.
auth (4)
/account
Show current account, plan, usage and environment information
Usage: /account
/login
Sign in to MARIA
Usage: [--device] [--force] [status]
/logout
Sign out of MARIA
Usage: [--revoke] [--all-devices]
/usage
Check usage quota
Usage: /usage
business (21)
/biz
A hub for business topics such as revenue, KPIs, business planning, and strategy.
Usage: /biz [sales|roi|plan|launch|budget|strategy] [options]
/cai-clone
CAI Clone (commercial v1.2): decision-structure replay engine for AI architecture/AI UX boundaries with audit and local-only contract.
Usage: /cai-clone init | /cai-clone boundary set --file <boundary.json> | /cai-clone ingest --file <decisionlog.json> | /cai-clone query --file <decisionrequest.json>
/ceo
A custom agent command for executive decision support (capital policy, portfolio, org design) as CEO (MARIA).
Usage: /ceo [--profile <id>] [--provider <lmstudio|ollama|vllm>] [--model <name>] [--inputs <json|@file>] [--use-latest off|on|auto] [--auto-rerun off|on|auto] "Your question"
/ceo-clone
CEO Clone (commercial v1.2): decision-structure replay engine with local-only contract, audit, and bounded output.
Usage: /ceo-clone init | /ceo-clone boundary set --file <boundary.json> | /ceo-clone profile set --file <profile.json> | /ceo-clone profile show | /ceo-clone policy d3 set --file <d3-policy.json> | /ceo-clone policy review-slo set --file <review-slo.json> | /ceo-clone review init|enqueue|approve|discard ... | /ceo-clone ingest --file <decisionlog.json> | /ceo-clone query --file <decisionrequest.json>
/cfo
Run CFO-style natural-language Q&A using freee accounting data. Available after running /connect freee.
Usage: /cfo --company-id <freee_company_id> "Show the trend of revenue and profit this fiscal year"
/coo
A custom agent command for operations design, scaling, progress management, and incident response as COO (MARIA).
Usage: /coo [--profile <id>] [--provider <lmstudio|ollama|vllm>] [--model <name>] [--inputs <json|@file>] [--use-latest off|on|auto] [--auto-rerun off|on|auto] "Your request"
/cpo
A custom agent command for product vision, prioritization, UX, and roadmap design as CPO (MARIA).
Usage: /cpo [--profile <id>] [--provider <lmstudio|ollama|vllm>] [--model <name>] [--inputs <json|@file>] [--use-latest off|on|auto] [--auto-rerun off|on|auto] "Your request"
/cto-clone
CTO Clone (commercial v1.2): decision-structure replay engine for engineering/tech boundaries with audit and local-only contract.
Usage: /cto-clone init | /cto-clone boundary set --file <boundary.json> | /cto-clone ingest --file <decisionlog.json> | /cto-clone query --file <decisionrequest.json>
/cxo
Run multiple CxO agents (CEO/COO/CAIO/CFO) in parallel and return a synthesized executive committee report.
Usage: /cxo [--profile <id>] [--members "ceo,coo,caio"] "Decision question" [--json] [--background]
/cxo-meeting
A meeting OS command that automatically generates CXO meeting agendas based on Structure OS models and OS doctor reports.
Usage: /cxo-meeting [agenda] [--type exec|biz-review|ops] [--domain <domain>] [--id <id>] [--json] [--background]
/decision
Present options, considerations, and context for executive decisions—so you can decide. Your decisions are held here, preserved without judgment.
Usage: /decision support --question "<decision question>" | /decision show <decisionId> [--json] | /decision tune [--max-audit-lines N] [--max-chat-lines N] [--no-llm] [--json]
/ed
Executive Decision Core OS (ED-0001): draft → slot fill → commit (audited, deterministic, chain-hashed).
Usage: /ed create [--org <orgId>] [--role CEO|BOARD|EXECUTIVE] [--json] | /ed next <draftId> [--json] | /ed answer <draftId> --slot <slotPath> --value <value> [--json] | /ed status <draftId> [--json] | /ed patch <draftId> --patch '<json>' [--json] | /ed commit <draftId> --constitution <version> [--commit-reason '...'] [--ledger-chain-scope org|decision] [--json] | /ed show <decisionId> [--json] | /ed review run-queue [--now <iso>] [--limit <n>] [--no-resurface] [--json] | /ed review start <reviewId> | /ed review submit <reviewId> --result success|partial|failure --actual <text> --gap <text> --learning <text>
/failure
Identifies common failure patterns, their signs, impacts, prevention measures, and recovery plans for specified initiatives or AI deployment plans.
Usage: /failure "Description of initiative or AI deployment plan" [--domain saas|manufacturing|healthcare|gov|finance|ops] [--n 5]
/insight
Generate multi-angle insights and next actions for CXO audiences from SQL / CSV / KPI reports.
Usage: /insight --file <path.sql|path.csv> [--kind sql|csv] [--driver pg|bq|mysql|generic] [--persona ceo|cfo|chro|cto|auto] [--goal <text>]
/meta
Generate "good questions" you should ask yourself as CEO/CAIO/CPO, based on your current situation. This supports your question-framing OS, not the answer.
Usage: /meta "Describe your current situation / concern / topic" [--role ceo|cfo|coo|cpo|founder|manager] [--n 5]
/os-map
Generate an Enterprise OS map from a Structure OS model and show a high-level structural overview.
Usage: /os-map [--domain <domain>] [--id <id>] [--mermaid] [--json] [--latest]
/review
Generate a critical second opinion for the latest /ceo /coo /cpo /caio /doctor /evaluate result, or for any provided text.
Usage: /review [ceo|coo|cpo|caio|doctor|evaluate] [--focus risk|strategy|execution|numbers|people] ["text to review"]
/sales-dashboard
Interactive TUI sales dashboard with real-time updates
Usage: /sales-dashboard [--profile sales|executive|sales_manager] [--format text|json|tui|slack]
/sim
Simulate 3+ world-line scenarios (conservative/baseline/aggressive) for initiatives like new businesses, capital strategy, pricing changes, and channel strategy.
Usage: /sim "Decision or initiative to simulate" [--mode business|product|ops] [--horizon 1y|3y|5y] [--kpi "MRR, margin, churn"] [--background]
/structure
Structure OS modeling tools
Usage: /structure ...
/tune
Business tuning: identify levers and experiments to improve KPIs
Usage: /tune [scope] [--kpi "<kpi name>"]
configuration (5)
/config
Configuration management
Usage: /config
/hooks
Hook configuration
Usage: /hooks
/init
Initialize project guidance and generate MARIA.md at repo root
Usage: /init [--root <dir>] [--lang auto|ja|en] [--force] [--no-interactive]
/permissions
Permission management
Usage: /permissions
/setup
🚀 First-time environment setup wizard
Usage: [--quick] [--advanced] [--_config <file>] [--silent] [--fix] [--rollback]
conversation (2)
/clear
Clear conversation context
Usage: /clear
/clear/auto
Automatically choose a clear mode based on context usage and run /clear
Usage: /clear/auto
core (15)
/about
Display information about MARIA and the team
Usage: /about
/avatar
Create and manage a personalized ASCII pixel avatar (whoami personalization)
Usage: /avatar [status|list|create "<prompt>" [--style green_crt|mono]|use <id>|show <id>|clear]
/cat
List and reprint long outputs auto-saved under artifacts/<command>/ (to prevent TTY clipping).
Usage: /cat <command> [--list] [--limit <n>] | /cat --last <command> [--max-chars <n>]
/contact
Display contact information and support channels
Usage: /contact
/examples
Show practical usage examples for MARIA commands. Your decisions are held here, preserved without judgment.
Usage: /examples
/exit
🚪 Gracefully exit the application or conversation mode
Usage: [--force] [--save-session] [--no-confirm]
/feedback
Provide feedback and report issues
Usage: /feedback
/help
📚 Show how to use MARIA. MARIA holds your decisions without judgment—here you can explore what is available.
Usage: [command] [--category <category>] [--search <term>] [--stats] [--quickstart]
/identity
Show Maria Code identity and supported READY skills
Usage: /identity [--json] [--locale <tag>]
/open
Show the path to the latest file saved under artifacts/<command>/ (shortcut for opening it in your editor/OS).
Usage: /open --last <command>
/personalize
Personalize character/voice style using client profile overlays (without changing core capability)
Usage: /personalize [status|list|use <id>|clear|create <id> --base <profileId> --display-name <name> --tone <tone> [--avoid <csv>] [--values <csv>]]
/self
Show MARIA's Self-State (health, growth, mode), plus self-diagnosis / self-reflection reports and cognitive layer structure.
Usage: [doctor|reflect|layer-dump] [--json] [--last <days>] # /self: state only, /self doctor: self-diagnosis, /self reflect: self-reflection, /self layer-dump: cognitive layer dump
/update
🔄 Incremental codebase updates with Graph RAG delta detection (riskTier SSOT: MARIA.md > System Constitution > RiskTier Policy)
Usage: [--since <ref>] [--dry-run] [--verbose] [--json] | config --config <name> [--list] [--preset <...>] [--target <project-root|project-dot-maria|global>]
/version
Show version information
Usage: /version
/whoami
Show current brain composition summary (personality OS, industry mode, role, thinking mode, safety mode, etc.)
Usage: /whoami [--debug]
creative (1)
/novel
Design characters (dynamic cast) and setting (world), and generate the next chapter of a serialized novel via A2A (writers-room) for each theme prompt. Continues in Universe-style (envelope/runId/artifacts).
Usage: <theme> [--series <id|@path/to/series.json>] [--new-series] [--title <title>] [--lang <code>] [--format md|txt] [--genre <name>] [--quality p0|p1|p2] [--plan-only] [--out <dir>] [--dir <dir>] [--envelope <jsonOr@path>] [--confirm]
development (6)
/auto-dev
Slash command to run autonomous dev jobs based on a safe Non-Breaking Policy. (/auto-dev)
Usage: run [--mode <safe|execution>] ... | propose-pr ... | resume ... | supervise ... | self-improve ... | self-evo ... | run-from-next ... | extract-dataset ... | job-spec-from-doctor ... | chat-quality ... | init-config ... | events ... | attempts ...
/code-review
GitHub Code Review Universe (prototype): review a unified diff and generate deliverables from webhook artifacts (/code-review)
Usage: review --diff <path> [--out <dir>] [--config <path>] [--repo <owner/name>] [--pr <n>] [--base <sha>] [--head <sha>] [--use-local-only] [--no-llm] | deliver --run-id <id> --repo <owner/name> --pr <n> --tenant <tenantId> [--artifacts-root <dir>] [--out-root <dir>]
/develop
A development kickoff hub that turns a spec/goal into design, task breakdown, and initial code steps (/develop)
Usage: "<goal>" [--spec <path> ...] [--context <path> ...] [--product <id>] [--mode <spec|plan|full>] [--dry-run] [--background]
/flow
Show the latest Flow Quality Gate result (what is broken and what to fix next) in one shot.
Usage: /flow last [--json]
/next
Suggest exactly one "today's P0 action" from the latest /develop and/or /doctor results.
Usage: /next [--source auto|doctor|develop]
/retry
Assuming the latest attempt (/develop, /doctor, etc.) did not work well, propose alternative approaches, sub-problem decomposition, and exit/reframe lines.
Usage: /retry ["Why the last approach failed / error logs"] [--focus strategy|implementation|scope]
evaluation (1)
/evaluate
📊 General-purpose evaluation engine. Given a goal and input materials, returns analysis, issues, prioritization, scores, and recommendations.
Usage: [run|status|results|stop|assess] [--goal <text>] [--goal-file <path>] [--inputs <path> ...] [--file <path> ...] [--bundle <id>] [--mode <product|business|ops|tech|ml|rag|content>] [--format <markdown|json|text>] [--output <path>] [--profile <path>] [--language <ja|en>] [--config <path>] [--dataset <path>] [--compare-baseline] [--idea <text>] [--code <text>] [--criteria <path>] [--background]
evolution (1)
/evolve
🧬 Doctor-driven self-evolution protocol (P0: dry-run through taskSpecs + Commander report)
Usage: "<goal>" [--repo <repoId>] [--universe <universeId>] [--oep <profileId>] [--mode fast|deep] [--time-window-days <n>] [--max-tasks <n>] [--safe-window <n>] [--skip-update] [--policy <path>] [--approval interactive|auto] [--dry-run] [--dispatch-poc] [--execute] [--on-fail stop|retry|escalate] [--concurrency <n>] [--tenant-id <id>] [--trace-id <id>] [--apply-memory] [--confirm-memory | --approved-by <id>] [--json] | status --run <runId> [--tenant-id <id>] [--trace-id <id>] [--logs] [--deliver] [--doctor] [--recommend] [--json] | approve --run <runId> --change "<text>" [--tenant-id <id>] [--trace-id <id>] [--approved-by <id>] [--json] | resume --run <runId> --dispatch [--tenant-id <id>] [--trace-id <id>] [--json] | meta --propose [--repo <repoId>] [--window <n>] [--apply-memory] [--confirm-memory | --approved-by <id>] [--tenant-id <id>] [--json] | control --scan [--repo <repoId>] [--oep <profileId>] [--window <n>] [--json]
graphrag (6)
/boundary
🧱 Boundary/contract summary for a node (RepoGraph)
Usage: <nodeId|path|name> [--root <dir>] [--json]
/contract
📜 Contract summary for a node (on-demand, cached with TTL)
Usage: <nodeId|path|name> [--root <dir>] [--json]
/graph
🕸 GraphRAG graph utilities (audit orphans / hubs)
Usage: audit [--orphans] [--limit <n>] [--json]
/search
🔍 Deterministic repository search (RepoRAG) with SSOT boost + evidence-first artifacts (P0)
Usage: <query> [--lang <language>] [--top-k <number>] [--intent <intent>] [--sources bm25,vector,kg] [--vector-index] [--explain] [--kg-hops <0..2>] [--kg-top-neighbors <1..200>]
/search.llm
🔎 High-quality repository search: P0 deterministic retrieval + Local LLM reranking (P1). Writes artifacts/search-llm/<taskId>/ with a separate schema.
Usage: <query> [--top-k <number>] [--max-candidates <number>] [--sources bm25,vector,kg] [--intent <intent>] [--provider <p>] [--model <m>]
/why
🧭 Why summary for a node (intent-centric, on-demand, cached)
Usage: <module|service|nodeId|path|name> [--details] [--root <dir>] [--json]
implementation (4)
/akashic
Akashic: scan docs (pdf/docx/pptx/md/video) -> normalize+metadata -> index -> CLI chatbot (POC)
Usage: /akashic [scan|update] [--input <dir>] [--out <dir>] [--db <path>] [--llm] [--require-llm] [--enforce-acl] [--max-files <n>] [--concurrency <n>] [--no-progress] [--use-local-only] --confirm | /akashic monitor [--interval-sec <n>] [--once] [--require-llm] [--use-local-only] --confirm | /akashic dlq-envelope [--dead] [--dlq <path>] [--doc-type <csv>] [--error-code <csv>] [--envelope-out <path>] [--max <n>] [--require-llm] --confirm | /akashic ask --q <question> [--index <path>] [--top-k <n>] [--llm] [--use-local-only] | /akashic --envelope @docs/projects/akashic/envelopes/akashic-scan.example.json
/code
Generate code with AI
Usage: <request> [--plan-only|--sow] [--apply] [--dry-run] [--interactive] [--yes] [--max-files N] [--root DIR] [--rollback on|off] [--output names|summary|detail|diff] [--no-code] [--preview-lines N] [--only-attached] [--attach-mode strict|assist] [--max-attachments N] [--diff-lines N] [--diff-bytes N] [--diff-hunks N] [--diff-global-max-files N] [--diff-global-max-bytes N] [--verify] [--deliver]
/deliver
Generate delivery artifacts (DeliveryReport/Verification/Runbook/SaveReceipt) from a DeliveryOps Envelope
Usage: /deliver --envelope @path/to/envelope.json [--out docs|artifacts] [--templates <dir>] [--confirm]
/workflow/resume
Browse recent task snapshots, restore context (summary/decisions/artifacts), and suggest the next /code command to run
Usage: /workflow/resume [<taskId>] [--latest] [--date YYYY-MM-DD] [--limit N] [--rehydrate summary|full] [--open] [--tests] [--fix] [--apply] [--dry-run]
integration (3)
/connect
Connector status only (freee / GitHub / Google). Configuration is handled by Web UI.
Usage: /connect | /connect status | /connect --tenant <id> Note: Connector configuration is done via the Web UI (the CLI does not store connector secrets/tokens).
/mcp
MCP (Model Context Protocol) integration status and discovery tools.
Usage: /mcp | /mcp status | /mcp tools | /mcp resources | /mcp init | /mcp register notion [--config <path>] [--force] | /mcp start [all|<server>] | /mcp stop [all|<server>] | /mcp restart [all|<server>]
/vercel-env
Utility to sync a Vercel project's environment variables with local files. Wraps `vercel env push/pull` using the `/connect vercel` configuration.
Usage: /vercel-env push [--file .env.local] [--env production|preview|development] | /vercel-env pull [--file .env.vercel] [--env production|preview|development] Prereq: run /connect vercel to register the Vercel project name/scope, and complete `vercel login` + `vercel link`.
learning (2)
/accel
Self-acceleration layer: stats/propose/experiments/policy (v1.1 event model)
Usage: /accel
/l2r
Learning-to-Rank operations with 44-dimension feature system
Usage: [train|predict|status|features|explain|inspect|ml-init|ml-plan|ml-train] [options]
media (2)
/image
Describe the image you want in one line and it will generate images immediately (you can switch to plan mode if needed).
Usage: /image "prompt" [--size 1024x1024] [--format webp|png|jpg] [--count 1..8] [--model gemini-...] [--seed N] [--out dir] [--apply|--plan-only|--dry-run] [--concurrency N] [--retry N]
/video
Describe the product demo/promo video you want and it will generate a video immediately (you can switch to plan mode if needed).
Usage: /video "prompt" [--duration 8] [--fps 24] [--aspect 16:9|9:16] [--res 720|1080] [--format mp4|webm] [--model gemini-...] [--seed N] [--out dir] [--apply|--plan-only|--dry-run] [--concurrency N] [--retry N]
memory (6)
/forget
Remove memories from persistent storage
Usage: /forget
/memory
Memory Bank + Budgeted Recall (v1.1): store/search/explain/value/promote/prune/conflicts
Usage: /memory
/memory-status
Show memory usage statistics and health
Usage: /memory-status
/personalization
Control implicit personalization learning (consent/categories), list or clear learned user-profile memories
Usage: /personalization [status|list|consent on|off|set <category> on|off|sync on|off|memories [--category <cat>] [--limit <n>]|clear <category|all> --confirm [--block=0|1]|never <category|all> --confirm|sync-status [--json=1]|sync-flush [--max <n>] [--max-attempts <n>] --confirm]
/recall
Retrieve stored memories from persistent storage. What you decided before is still here, preserved without distortion.
Usage: /recall
/remember
Store important information in persistent memory. MARIA never forgets; your decisions and context are held here, unchanged and unjudged.
Usage: /remember
multilingual (1)
/language
🌍 Language detection, weights configuration, and multilingual processing
Usage: [detect|weights|supported|optimize] [<text>] [--lang <code>] [--verbose] [--format <format>]
multimodal (1)
/multimodal
Multimodal features
Usage: /multimodal
product (4)
/blog
Generate reproducible daily technical blog drafts (3/day) into blogs/
Usage: /blog generate [--date YYYYMMDD] [--slot 1|2|3] [--category benchmark|decision|reflection] [--out blogs] [--apply|--dry-run] [--force] [--replace] | /blog format [--in blogs] [--date YYYYMMDD] [--slot 1|2|3] [--apply|--dry-run] [--force] [--limit N] | /blog sync [--in blogs] [--date YYYYMMDD] [--slot 1|2|3] [--project <gcpProjectId>] [--apply] [--publish] [--limit N]
/factory
Factory AI OS helper – generate ASCII architecture diagrams from factory.yaml and related configs.
Usage: /factory diagram --project <id> [--view logical|components|agents] [--output <path>]
/pm
Product management
Usage: /pm
/tournament
Tournament-driven killer universe factory (run → score → bracket → winner)
Usage: /tournament run --theme "..." --n 100 [--run-id tr-YYYYMMDD-xxxx] [--timebox-days 14] [--top 16] [--bracket 16] [--apply|--dry-run] | /tournament score --run-id <runId> [--top 16] [--apply|--dry-run] | /tournament bracket --run-id <runId> [--size 16] [--apply|--dry-run] | /tournament match --run-id <runId> --round 16|8|4|2|1..6 [--apply|--dry-run] | /tournament winner --run-id <runId> [--winner <pitchId>] [--apply|--dry-run] | /tournament publish --run-id <runId> --channel blog|portal|blog,portal [--date YYYYMMDD] [--slot 1|2|3] [--category benchmark|decision|reflection] [--out blogs] [--replace] [--portal-webhook-url <url>] [--portal-webhook-secret <secret>] [--archive] [--apply|--dry-run]
quality (6)
/auto/fix-lint
Plan safe lint auto-fix commands (pnpm lint --fix / eslint --fix) without executing them automatically
Usage: /auto/fix-lint [--dry-run] [--verbose] [hint...]
/auto/fix-tests
Plan safe test recovery commands (pnpm test variants) without executing them automatically
Usage: /auto/fix-tests [--dry-run] [--verbose] [hint...]
/golden
Golden Test (GTDD/GTDO) helper: run/update/explain golden snapshots for CLI UX contracts (local-only safe).
Usage: /golden run [--case <id>] | /golden update [--case <id>] | /golden explain [--case <id>]
/langfix
Language policy tools: scan/fix Japanese characters in src/ using the common Auto-Dev LLM edit-plan flow.
Usage: /langfix scan [--roots "src,config"] [--json] | /langfix fix [--roots "src,config"] [--max-files <n>] [--max-attempts <n>] [--json]
/repair
Natural-heal auto repair pipeline (detect → plan → propose → verify) with HITL approval.
Usage: lint [--max-files <n>] [--stop-after <detect|plan|propose|verify>] [--json] [--correlation-id <id>] [--no-pr-preview] [--no-claim] [--max-attempts <n>] | detect [--mode <lint|typecheck|split|all>] --paths "<csv>" [--budget-ms <n>] [--report <path>] [--json] | plan --from <report.json> [--max-files <n>] [--report <path>] [--json] | propose --from <report.json> [--base-branch <name>] [--on-conflict <wait|fail|split|report-only>] [--wait-ms <n>] [--emit-diff] [--out-diff <path>] [--confirm] [--no-pr-preview] [--no-claim] [--max-attempts <n>] [--report <path>] [--json] | verify --from <report.json> [--budget-ms <n>] [--report <path>] [--json] | run --paths "<csv>" [--mode <...>] [--stop-after <detect|plan|propose|verify>] [--json]
/validate
OS governance validator (SSOT).
Usage: /validate [surface|outcome|evidence|tenant|quality|judgement|evolve|performance|ops|release|all]
research (3)
/knowledge
Batch-import project Knowledge Packs (YAML) and grow a knowledge base the AI can reference.
Usage: /knowledge [install-packs|sync-packs] [--root <dir>] [--user-id <id>]
/repo-graph
🧠 Repo Comprehension Graph stats (Repo + Universe + summaries + queue)
Usage: [--root <dir>] [--json]
/repo-queue
🧾 RepoGraph update queue (pending tasks)
Usage: [--root <dir>] [--limit <n>] [--json]
system (21)
/bench
Benchmark slash commands: manifest/init, doctor, list, matrix (P0)
Usage: /bench doctor [--provider ollama] [--base-url <url>] [--model <name>] [--mode latency|throughput|nightly] [--tag <suiteTag>] | /bench manifest:init [--out bench/manifest.json] [--provider ollama] [--base-url <url>] [--model <name>] [--mode latency|throughput|nightly] [--tag <suiteTag>] [--force] | /bench manifest:enable-local-safe [--manifest bench/manifest.json] [--provider ollama] [--base-url <url>] [--model <name>] [--profile lite|sweep|sweep-real] [--force] | /bench manifest:enable-cloud-safe [--manifest bench/manifest.json] [--provider ai-proxy] [--base-url <url>] [--profile lite|sweep|sweep-real] [--force] | /bench list [--manifest bench/manifest.json] | /bench matrix [--manifest bench/manifest.json] [--runs <n>] [--variants light,typical,heavy] [--dry-run] | /bench all [--manifest bench/manifest.json] [--runs <n>] [--variants light,typical,heavy] [--concurrency <n>] [--tag <suiteTag>] | /bench context:write [--tag <suiteTag>] [--manifest <path>] [--runs <n>] [--variants light,typical,heavy] [--concurrency <n>] [--force] | /bench report [--manifest bench/manifest.json] [--tag <suiteTag>] | /bench compare --base <suiteTagA> --target <suiteTagB> | /bench notes [--summary <path> | --tag <suiteTag>] [--top <n>] | /bench select [--summary <path>] [--mode slowP95|failRate|variance] [--top <n>] [--out <path>] | /bench deep:compare-kg [--manifest bench/manifest.json] [--base-tag <suiteTag>] [--summary <path>] [--mode slowP95|failRate|variance] [--top <n>] [--runs <n>] [--concurrency <n>] [--tag-prefix <prefix>] | /bench deep [--manifest bench/manifest.json] [--selection <path>] [--runs <n>] [--concurrency <n>] [--tag <suiteTag>]
/claim
WorkClaim (soft-lock) operations to avoid parallel edit collisions.
Usage: /claim acquire --scopes <csv> [--ttl <sec>] [--actor-id <id>] [--actor-type agent|human|system] [--mode <mode>] [--priority P0|P1|P2] [--risk-tier low|medium|high|critical] [--correlation-id <id>] [--repo-root <dir>] [--json] | /claim heartbeat <claimId> [--extend <sec>] [--json] | /claim release <claimId> [--reason <text>] [--json] | /claim list [--status active|released|expired|revoked|conflicted|any] [--scopes <csv>] [--repo-root <dir>] [--json] | /claim show <claimId> [--json] | /claim mine [--status active|any] [--json] | /claim resolve <claimId> --strategy wait|split|prioritize|manual [--decision-owner <id>] [--note <text>] [--json]
/dashboard
System dashboard
Usage: /dashboard
/dc
Decision Core (Evidence Layer) operations: ingest/search/harden (v1.1)
Usage: /dc evidence ingest --file <evidence.json> [--base-dir <dir>] | /dc evidence get --id <ev_...> [--base-dir <dir>] | /dc evidence search [--type E1,E2,...] [--tag <tag>] [--min-confidence 0..1] [--sort observedAt|confidenceScore|createdAt] [--order asc|desc] [--limit N] [--base-dir <dir>] | /dc claim upsert --file <claim.json> [--base-dir <dir>] | /dc claim get --id <cl_...> [--base-dir <dir>] | /dc plan harden --decision-key <key> --plan <planDraft.json> [--policy <coveragePolicy.json>] [--constitution <constitution.json>] [--out <path>] [--base-dir <dir>]
/debug
Debug app/command issues: identify likely root-cause files and provide a safe auto-fix flow.
Usage: /debug [<bug description>] | /debug [full|memory|performance|analyze|fix] [options]
/doctor
📋 Observes project health and presents what it sees (code/tests/graph/docs). Your decisions are held here, preserved without judgment.
Usage: /doctor (default; equivalent to /doctor scan --format v2) | /doctor scan --format v2 [--universe <id>] [--time-window-days <n>] [--focus-path <prefix>] [--focus-module <id>] [--max-issues <n>] [--json] [--background] | /doctor <folder|file|symptom text> [--deep-dive] | /doctor --request "<symptom text>" [--deep-dive] | /doctor continue "<follow up>" [--deep-dive] | /doctor explain --format v2 --issue-id <id> [--json] | /doctor trace --format v2 --issue-id <id> [--universe <id>] [--run <evolveRunId>] [--tenant-id <id>] [--trace-id <id>] [--json] | /doctor propose --format v2 [--max-tasks <n>] [--json] [--background] | /doctor --format legacy [legacy flags] (compatibility only; legacy mode supports older workflows such as --deep/--focus/--logs. Prefer v2.)
/edge
Edge Box / Micro Agent runtime hub (state inspection). For operations use `maria edge ...`.
Usage: /edge [status|nodes|agents|symptoms] [options]
/env
Display environment information and variables
Usage: /env [filter] | /env set <KEY> <VALUE> [--persist] [--file .env.local] | /env unset <KEY> [--persist] [--file .env.local] | /env load [--file .env.local]
/envelope
Subcommand hub for Envelope SSOT operations (/envelope <subcommand> ...).
Usage: /envelope replay --cid <correlationId> | /envelope find-trace --ref <evidenceRefOrUrl> | /envelope note --cid <correlationId> --reason <reason_code> --review-after <YYYY-MM-DD> "body" | /envelope review-overrides [--dry-run]
/evidence
Evidence UI (EvidencePack) viewer/exporter/verifier for CLI.
Usage: show <packId|runId|evidenceId|path> [--level l0|l1|l2] [--focus <claim|evidence|counter|repro|audit>] [--item <EV-..|CE-..>] [--max-items <n>] [--page <n>] [--page-size <n>] [--confidential <public|internal|restricted>] [--redact <auto|strict|off>] [--json] | summary <...> | verify <...> | export <...> --format md|json|zip [--out <path>] | open <ref>
/guardrails
Guardrails command to propose/simulate/apply/promote/rollback policies. Switching observe/enforce and performing rollbacks require explicit human responsibility.
Usage: /guardrails status|propose|simulate|apply|promote|rollback [--scope <user|project|org|session>] [--id <id>] ...
/hitl
Open Human-in-the-loop review center for knowledge / evolution / deploy
Usage: /hitl [knowledge|evolution|deploy|all] [--kind knowledge|evolution|deploy|all]
/mlops
🧪 MLOps autonomous-improvement operations (release/evidence)
Usage: /mlops release publish|promote|rollback|schedule-run --tenant <id> [--artifacts-dir <dir>] ... | /mlops evidence search --tenant <id> --q <query> [--limit 20]
/ping
Test system responsiveness and connectivity
Usage: /ping
/processes
Display running processes and system information
Usage: /processes [maria|full]
/shell
Shell command execution
Usage: /shell <cmd>
/status
📊 Display comprehensive system status and health information (standard)
Usage: /status
/terminal-setup
🖥️ Configure and optimize terminal integration for MARIA
Usage: [--detect] [--optimize] [--_shell <_shell>] [--install-integration] [--reset]
/universe
Operate Universe: ecosystem universe (design/apply) and Universe OS POC (contract delivery: Envelope/A2A/Tool Gateway).
Usage: /universe ...
/upgrade
Upgrade your subscription plan
Usage: /upgrade
/uptime
Display system and process uptime information
Usage: /uptime
workflow (8)
/approval-git
Git-like approval workflow management
Usage: /approval-git log [--number <count>] [--oneline] [--author <name>] [--since <date>] [--grep <pattern>] [--branch <name>] | /approval-git branch [<branch-name>] [--create <name>] [--delete <name>] [--force-delete <name>] [--merged] [--checkout <name>] | /approval-git merge <source-branch> [--target <branch>] [--message <msg>] [--no-ff] | /approval-git revert <commit-id> [--message <msg>] [--no-commit] | /approval-git tag [<tag-name>] [--delete <name>] [--force] [--message <msg>] [--list] | /approval-git status [--detailed] | /approval-git show [<commit-id>] [--diff] [--tags]
/drive
drive-analysis Universe workflow (P0: report/new + validate).
Usage: /drive <subcommand>
/git
Run safe, read-only git commands (native output)
Usage: /git [--capture|--native] [--save] [--correlation-id <id>] [--diff-default <stat|name-only>] <subcommand>, where <subcommand> is one of: branch [--all|-a] [--remote|-r] [--show-current] | status [--porcelain|-s] [--branch|-b] [-sb] | diff [--staged|--cached] [--stat|--name-only] [-- <path>] | log [--oneline] [--graph] [--decorate] [--max-count <n>|-n <n>] [<rev>] | rev-parse --is-inside-work-tree|--show-toplevel|--short <rev>|--abbrev-ref <name> | show [--stat|--name-only|--patch] <rev> | remote -v | grep [-n] [-i] [-w] [-m|--max-count <n>] <pattern> [-- <path>...] Modes: --capture (default) captures stdout/stderr and masks secrets (no file writes unless --save); --native uses stdio: inherit (native output; no capture). Saving: --save saves artifacts to artifacts/ (capture mode: write files; native mode: tee capture while printing).
/git-culture
Git Culture Layer helpers (doctor/evidence/pending/pseudo-pr) v1.1
Usage: /git-culture doctor [--run-id <id>] [--remote-mode github|gitlab|local-only] [--can-push 0|1] [--can-open-pr 0|1] [--correlation-id <id>] [--json] | /git-culture validate [same as doctor] | /git-culture evidence --run-id <id> [--json] | /git-culture sync --run-id <id> [--remote-mode github|gitlab|local-only] [--base main] [--json] | /git-culture start --run-id <id> --task-slug <slug> --agent <name> [--remote-mode github|gitlab|local-only] [--base main] [--json] | /git-culture pending --remote-mode <mode> --owner <owner> --reason <text> --next-step <text> --evidence-ref <path> [--run-id <id>] [--correlation-id <id>] [--who agent|user|ci] [--notes <text>] [--json] | /git-culture pseudo-pr --run-id <id> --title <text> --diff-summary <text> --evidence-index <path> --risk-tier <S1..S5> --rollback <text> [--owner <owner>] [--json] | /git-culture publish --run-id <id> --remote-mode github|gitlab|local-only --owner <owner> --task-id <id> --why <text> --decision <text> --risk-tier <S1..S5> [--changes <csv>] [--sync-main 0|1] [--commit 0|1] [--create-pr-api 0|1] [--json] | /git-culture kpi replay [--tail <n>] [--since <iso>] [--until <iso>] [--write-summary 0|1] [--json] | /git-culture ruleset replay [--tail <n>] [--since <iso>] [--until <iso>] [--json] | /git-culture doctor policy [--json] | /git-culture doctor taxonomy [--json]
/lock
Decision memo lock (fast): capture → compress (information compression) → provisional decision (not OS/file lock)
Usage: /lock --topic "<topic>" [--notes "<raw notes>"] [--provisional-decision "<text>"] [--sleep-window-hours 8]
/registry
Decision Registry: list decision memo logs from /sleep, /lock, /wake (not OS sleep)
Usage: /registry decision [--tail 50] [--id <sleepId>] [--json]
/sleep
Sleep Workflow: capture → compress (information compression) → provisional lock (decision memo workflow; not OS sleep)
Usage: /sleep --topic "<topic>" [--notes "<raw notes>"] [--sleep-window-hours 8] [--provider <lmstudio|ollama|vllm>] [--model <name>]
/wake
Wake Check (decision memo): short validation after /sleep or /lock → commit/rework/pending
Usage: /wake --id <sleepId> [--provider <lmstudio|ollama|vllm>] [--model <name>] [--gut ok|uneasy|unknown] [--result commit|rework|pending] [--uneasy-notes <text>]

Deployment & operations (priority)

Never commit secrets. Absorb env differences via config. Enterprise runs locally.

Secrets
  • Never commit secrets (API keys, OAuth credentials, JWT secrets).
  • Use Secret Manager (or equivalent) and avoid plaintext secrets in env/config files.
Common local auth issue (NextAuth JWE decryption)
If JWT sessions fail with a "JWE decryption failed" error, the cause is typically a mismatched or rotated NEXTAUTH_SECRET.
Fix: keep the secret stable, clear stale sessions (cookies/storage), then sign in again.
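A minimal sketch (assuming NextAuth reads NEXTAUTH_SECRET from the environment): generate one strong secret and keep it fixed across restarts instead of regenerating it on each deploy.

```shell
# Generate a stable secret once, store it in your secret manager,
# and reference it from the environment; never commit it.
SECRET="$(openssl rand -base64 32)"
echo "NEXTAUTH_SECRET=${SECRET}"   # 32 random bytes, base64-encoded
```

Rotating this value invalidates existing encrypted session tokens, which is why stale cookies must be cleared after any rotation.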
Enterprise execution policy
Enterprise runs locally (behavior aligned with LOCAL_MODE).
Do not hardcode environment differences. Absorb via config/loaders and keep evidence (logs/manifests).
Prefer deterministic safety rails: explicit boundaries, approval gates, and rollback conditions documented in the Envelope.
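A minimal sketch of "absorb via config" in shell: resolve endpoints from env with explicit defaults in one place, and record the resolved values as evidence. The log filename and variable defaults here are illustrative, not a fixed MARIA contract:

```shell
# Resolve environment differences through one loader instead of
# hardcoding per-environment endpoints at call sites.
LMSTUDIO_API_BASE="${LMSTUDIO_API_BASE:-http://localhost:1234/v1}"
OLLAMA_API_BASE="${OLLAMA_API_BASE:-http://localhost:11434}"
VLLM_API_BASE="${VLLM_API_BASE:-http://localhost:8000/v1}"

# Keep evidence: log the resolved configuration (never secrets) for later audit.
{
  echo "resolved_at=$(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "lmstudio=${LMSTUDIO_API_BASE}"
  echo "ollama=${OLLAMA_API_BASE}"
  echo "vllm=${VLLM_API_BASE}"
} >> config-evidence.log
```

Call sites then read the resolved variables only, so moving between environments changes the env, not the code.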

Local LLM Setup Guide (Ultra–Enterprise)

Run Ollama / LM Studio / vLLM on your own hardware (no cloud dependency) and connect MARIA to your local inference server.

Scope (important)
  • This guide targets the Local LLM Infrastructure feature for Ultra–Enterprise.
  • Enterprise is designed for local execution by default (behavior equivalent to LOCAL_MODE=1).
Start here (SSOT for config)
MARIA can auto-detect local inference servers, but for reliable operations you should set the environment variables explicitly.
# Prefer *_API_BASE (OpenAI-compatible base). *_API_URL is legacy compatibility (may be removed).
LMSTUDIO_API_BASE=http://localhost:1234/v1
OLLAMA_API_BASE=http://localhost:11434
VLLM_API_BASE=http://localhost:8000/v1

# Compatibility (deprecated)
# LMSTUDIO_API_URL=http://localhost:1234
# OLLAMA_API_URL=http://localhost:11434
# VLLM_API_URL=http://localhost:8000

# Recommended: force local mode (Enterprise-equivalent)
LOCAL_MODE=1

# Default provider/model (optional)
MARIA_PROVIDER=lmstudio   # or: ollama / vllm
MARIA_MODEL=gpt-oss-20b   # example (LM Studio)
Note: many slash commands can be overridden per-call via --provider / --model.
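One common misconfiguration is mixing the legacy `*_API_URL` form (no `/v1`) with the recommended `*_API_BASE` form. A small hedged check (`check_base` is an illustrative helper; it assumes LM Studio and vLLM expose OpenAI-compatible routes under `/v1`, while Ollama's native API does not):

```shell
# Warn when an OpenAI-compatible base is missing its /v1 suffix.
check_base() {
  name="$1"; base="$2"
  case "$base" in
    */v1) echo "$name: ok ($base)" ;;
    *)    echo "$name: warning, expected an OpenAI-compatible /v1 base ($base)" ;;
  esac
}

check_base LMSTUDIO "${LMSTUDIO_API_BASE:-http://localhost:1234/v1}"
check_base VLLM     "${VLLM_API_BASE:-http://localhost:8000/v1}"
```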
Ollama (install models, start server, verify)
Strengths: automatic model management (pull/list/run) and lightweight ops.
# 1) Start (skip if already running)
ollama serve

# 2) Pull models (examples)
ollama pull llama3.2:3b
ollama pull mistral:7b
ollama pull mixtral:8x7b
ollama pull deepseek-coder:6.7b
ollama pull phi3.5:3.8b

# 3) Confirm installation
ollama list

# 4) Verify API
curl http://localhost:11434/api/version
curl http://localhost:11434/api/tags
Large models (70B / 8x7B) depend heavily on GPU VRAM and system RAM. Start small (3B–7B), verify the pipeline, then scale up.
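As a rough rule of thumb (an approximation, not a guarantee): a 4-bit-quantized model needs on the order of 0.5-0.7 bytes per parameter for weights, plus extra headroom for the KV cache and runtime. A quick back-of-envelope estimate, assuming ~0.6 bytes per parameter (`estimate_vram_gib` is an illustrative helper):

```shell
# Approximate weight memory for a 4-bit-quantized model, in GiB.
# Usage: estimate_vram_gib <billions-of-parameters>
estimate_vram_gib() {
  awk -v b="$1" 'BEGIN { printf "%.1f\n", b * 1e9 * 0.6 / (1024^3) }'
}

estimate_vram_gib 7    # ~3.9 GiB: fits most modern GPUs
estimate_vram_gib 70   # ~39.1 GiB: needs a large or multi-GPU setup
```

Actual usage varies by quantization scheme and context length; treat the output as a lower bound when sizing hardware.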
LM Studio (install models, start server, verify)
Strengths: GUI model management, GPU acceleration, and an OpenAI-compatible API (/v1).
# 1) (GUI) Download a model (e.g., gpt-oss-120b / gpt-oss-20b)
# 2) (GUI) Start Local Server in "OpenAI Compatible" mode (default: http://localhost:1234/v1)

# If you have the CLI (lms)
lms ls
lms server start

# Verify
curl http://localhost:1234/v1/models
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer lm-studio" \
  -d '{"model":"gpt-oss-20b","messages":[{"role":"user","content":"ping"}],"stream":false}'
vLLM (high-throughput, production-oriented)
Strengths: high-throughput serving, tensor parallelism, continuous batching, PagedAttention, production-ready deployment.
# Example: start an OpenAI-compatible server (follow vLLM's setup guide for dependencies)
python -m vllm.entrypoints.openai.api_server \
  --model mistralai/Mistral-7B-Instruct-v0.2 \
  --host 0.0.0.0 \
  --port 8000

# Verify
curl http://localhost:8000/v1/models
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"mistralai/Mistral-7B-Instruct-v0.2","messages":[{"role":"user","content":"ping"}],"stream":false}'
MARIA uses VLLM_API_BASE (recommended) or VLLM_API_URL (compat) to locate the endpoint.
Use from MARIA (fastest path)
1) Set the `*_API_BASE` variables in `.env.local` or your shell environment
2) Explicitly pass provider/model per command for deterministic routing
# Example: run with LM Studio explicitly
maria /ceo --provider lmstudio --model gpt-oss-20b "Summarize the requirements"

# Example: run with Ollama explicitly
maria /ceo --provider ollama --model llama3.2:3b "Summarize the requirements"
If the local inference server is down, MARIA may attempt auto-detection/auto-start depending on the environment. For production operations, explicit environment configuration is recommended.
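Before relying on auto-detection, a pre-flight check keeps routing deterministic: probe each configured endpoint and report which servers are reachable. A sketch using the default URLs from this guide (`probe` is an illustrative helper; the probe paths are the providers' standard list/version endpoints):

```shell
# Report which local inference servers answer within 2 seconds.
probe() {
  name="$1"; url="$2"
  if curl -sf --max-time 2 "$url" >/dev/null 2>&1; then
    echo "$name: up"
  else
    echo "$name: down"
  fi
}

probe lmstudio "${LMSTUDIO_API_BASE:-http://localhost:1234/v1}/models"
probe ollama   "${OLLAMA_API_BASE:-http://localhost:11434}/api/version"
probe vllm     "${VLLM_API_BASE:-http://localhost:8000/v1}/models"
```

Run this before batch jobs or CI so a down server surfaces as an explicit "down" line rather than a silent fallback.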