2026-01-06 / slot 2 / DECISION

Why We Automated 3 Drafts/Day but Refused to Automate Publishing

Problem Statement

The MARIA OS development team needed a reliable mechanism to produce daily technical blog drafts that reflect recent code changes. The goal was to increase consistency of documentation without sacrificing editorial control or content quality. At the same time, the risk of publishing unvetted material—especially when automated title generation and metadata mapping are still immature—required a clear separation between draft creation and final publication.

Options Considered

1. Full automation: generate drafts and automatically push them to the public blog platform once they pass basic syntactic checks.
2. Semi‑automation with human review: generate drafts, store them alongside reproducibility artifacts (.md and .blog.json, sketched below), then require a reviewer to approve publishing.
3. Manual process only: continue writing drafts by hand, using existing internal guidelines.
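For illustration, here is a minimal sketch of how option 2's paired artifacts might be written. The metadata fields and the write_draft_artifacts helper are assumptions for this sketch, not the actual MARIA OS .blog.json schema.

```python
# Minimal sketch of option 2's paired reproducibility artifacts.
# Field names and the helper itself are illustrative assumptions,
# not the actual MARIA OS .blog.json schema.
import json
from datetime import datetime, timezone
from pathlib import Path

def write_draft_artifacts(slug: str, body_md: str, meta: dict, out_dir: Path) -> None:
    """Write a draft body (.md) and its metadata (.blog.json) side by side."""
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / f"{slug}.md").write_text(body_md, encoding="utf-8")
    stamped = {**meta, "generated_at": datetime.now(timezone.utc).isoformat()}
    (out_dir / f"{slug}.blog.json").write_text(
        json.dumps(stamped, indent=2, ensure_ascii=False), encoding="utf-8"
    )
```

Storing the pair side by side is what makes a draft reproducible for a reviewer: the .md is what they read, the .blog.json records how it came to exist.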

Decision

We adopted option 2 – semi‑automation. The system now creates up to three daily drafts directly from repository change evidence, applies a skip policy, enforces a title quality gate, and produces paired metadata files. Publication remains a manual step performed by an authorized human reviewer.
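A minimal sketch of that draft loop, assuming a toy Draft record and numeric scores; none of these names or thresholds come from the MARIA OS codebase:

```python
# Sketch of the daily draft loop: at most three drafts, a skip policy,
# a title gate, and no automatic publishing. All names and thresholds
# here are assumptions, not the MARIA OS implementation.
from dataclasses import dataclass

MAX_DRAFTS_PER_DAY = 3
MIN_TITLE_SCORE = 0.7  # assumed title-gate threshold

@dataclass
class Draft:
    slug: str
    title: str
    body_md: str
    evidence_confidence: float  # assumed output of change-evidence analysis
    title_score: float          # assumed output of the title quality gate

def select_drafts_for_review(candidates: list[Draft]) -> list[Draft]:
    """Return at most three drafts that survive the skip policy and title gate."""
    review_queue: list[Draft] = []
    for draft in candidates:
        if len(review_queue) >= MAX_DRAFTS_PER_DAY:
            break
        if draft.evidence_confidence < 0.5:      # skip policy: ambiguous change signal
            continue
        if draft.title_score < MIN_TITLE_SCORE:  # title gate: weak titles never queue
            continue
        review_queue.append(draft)
    # Deliberately no publish() call: an authorized human reviewer
    # publishes each queued draft manually.
    return review_queue
```

The design point is that the function returns a review queue rather than performing a publish action.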

Rationale

The chosen approach balances throughput with credibility. Automated draft generation reduces the repetitive effort of summarizing git diffs while preserving a “human‑in‑the‑loop” checkpoint that prevents low‑quality or misinterpreted content from reaching external audiences. The skip policy acts as a trust mechanism, allowing the system to defer drafting when change signals are ambiguous. Enforcing a title gate ensures that weak titles cannot be inadvertently published.
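To make the title gate concrete, a toy heuristic might look like the following; this record does not describe the real gate's criteria, so every rule here is an assumption:

```python
# Toy title-gate heuristic. The real gate's criteria are not documented
# in this record; these rules are illustrative assumptions.
BANNED_OPENERS = {"update", "updates", "misc", "changes", "wip"}

def passes_title_gate(title: str) -> bool:
    words = title.split()
    if not 4 <= len(words) <= 14:               # too terse or too rambling
        return False
    if words[0].lower().strip(":") in BANNED_OPENERS:
        return False                            # generic openers signal weak titles
    if not any(ch.isalpha() for ch in title):   # no real words at all
        return False
    return True

# e.g. passes_title_gate("Why We Automated 3 Drafts/Day "
#                        "but Refused to Automate Publishing")  # -> True
```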

We rejected the obvious approach—full automation—because early‑stage title quality gates and KPI‑to‑article mappings have not yet demonstrated sufficient precision. Automatic publishing would risk propagating drafts with incomplete context, as evidenced by the current repository diff (9 files changed, 94 insertions, 172 deletions). Maintaining editorial oversight mitigates this risk while still delivering measurable productivity gains from draft automation.
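For context, diff statistics like the ones quoted above can be pulled straight from git; a minimal sketch follows (the actual evidence source MARIA OS uses is not specified in this record):

```python
import subprocess

def diff_shortstat(repo_path: str, base: str = "HEAD~1", head: str = "HEAD") -> str:
    """Return git's summary line, e.g.
    '9 files changed, 94 insertions(+), 172 deletions(-)'."""
    result = subprocess.run(
        ["git", "-C", repo_path, "diff", "--shortstat", f"{base}..{head}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```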

Trade-offs

  • Increased file count: Storing both .md and .blog.json artifacts for each draft adds storage overhead and requires cleanup procedures (a pruning sketch follows this list). This was accepted to guarantee reproducibility.
  • Latency before public release: Human review introduces a delay between draft generation and publishing, reducing the maximum possible posting frequency.
  • Complexity of skip policy: Implementing a trust‑based skip mechanism adds configuration complexity (v1.1 specification update) and may cause occasional false negatives where a valid draft is skipped.
  • Operational overhead: Reviewers must allocate time to validate titles and metadata, which partially offsets the automation benefit.
  • Risk of inconsistency: Because publishing is manual, different reviewers may apply varying standards, potentially leading to inconsistent content quality across posts.
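A minimal pruning sketch for the artifact pairs mentioned in the first trade-off; the retention window and flat directory layout are assumptions:

```python
import time
from pathlib import Path

RETENTION_DAYS = 30  # assumed retention window

def prune_stale_drafts(draft_dir: Path) -> None:
    """Delete .md/.blog.json pairs older than the retention window."""
    cutoff = time.time() - RETENTION_DAYS * 86_400
    for md in draft_dir.glob("*.md"):
        if md.stat().st_mtime < cutoff:
            md.unlink()
            # Remove the paired metadata file so pairs stay in sync.
            md.with_suffix(".blog.json").unlink(missing_ok=True)
```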

Known Risks

  • The system’s reliance on repository change evidence means that external factors (e.g., CI pipeline results or production telemetry) are not reflected in drafts, possibly omitting important context.
  • Title gate thresholds may be set too conservatively, causing useful drafts to be skipped and reducing the intended output of three drafts per day.
  • Human error during review could still allow a low‑quality draft to be published, despite the automated safeguards.

This concludes today’s record of self-evolution. The interpretation of these observations is left to the reader.