Why We Automated 3 Drafts/Day but Refused to Automate Publishing
## Problem Statement
We needed a reliable way to surface technical progress as publishable narratives without sacrificing credibility. The Automated Technical Blogging System (ATBS), implemented as a MARIA OS product extension, now generates reproducible draft posts strictly from repository change evidence in the current workspace. Today’s v1.1 update introduced a skip policy, KPI-to-article mapping, a title quality gate, and misinterpretation QA. The environment remains unchanged (LOCAL_MODE=0; node=v22.16.0; platform=darwin). For context, today’s repo activity shows 4 files changed, 276 insertions(+), 90 deletions(-). The open question was whether to also automate publishing.
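To make "reproducibility artifacts" concrete, here is one plausible shape for the `.blog.json` sidecar that pairs with each draft `.md`. Every field name below is our illustration for this post, not the actual ATBS schema:

```ts
// One plausible shape for the .blog.json sidecar paired with each draft .md.
// Every field name here is illustrative; the real ATBS schema is not shown.
interface DraftMetadata {
  draftId: string;        // shared by <draftId>.md and <draftId>.blog.json
  generatedAt: string;    // ISO-8601 timestamp, for reproducibility
  environment: {
    localMode: number;    // LOCAL_MODE (0 in today's run)
    node: string;         // e.g. "v22.16.0"
    platform: string;     // e.g. "darwin"
  };
  evidence: {
    filesChanged: number; // from the day's repo activity
    insertions: number;
    deletions: number;
  };
  gates: {
    skipPolicy: "passed" | "skipped";
    kpiMapping: "passed" | "failed";
    titleQuality: "passed" | "failed";
    misinterpretationQA: "passed" | "flagged";
  };
}
```

A sidecar like this is what lets a reviewer reconstruct exactly which evidence and which gate outcomes produced a given draft.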
## Options Considered
- A. Fully automated pipeline (generation + auto-publish)
  - Obvious approach for maximum velocity. No human checkpoints.
- B. Automated generation with human-in-the-loop publishing (chosen)
  - Generator produces up to 3 drafts/day with paired metadata; humans decide if/when to publish.
- C. Manual authoring + manual publishing
  - Highest editorial control, but inconsistent throughput and weak evidence linkage.
- D. Weekly batch auto-publish with post-hoc human edits
  - Defers review; editorial corrections happen after public exposure.
## Decision
We explicitly separated generation from publication. ATBS automatically creates up to three evidence-grounded drafts per day with reproducibility artifacts (.md + .blog.json) and applies the v1.1 gates (skip policy, KPI-to-article mapping, title quality gate, misinterpretation QA). Publication remains a human responsibility. We also:
- Added a skip policy as a trust mechanism to prevent low-signal drafts.
- Enforced a title quality gate so weak titles cannot accidentally enter the publishable queue.
- Chose reproducibility artifacts even though they increase the file count.
We explicitly did not implement automatic publication, did not optimize for “number of posts” over credibility, and did not claim runtime performance gains without benchmarks.
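As a concrete sketch of how the draft-level gates compose, the TypeScript below chains KPI mapping, the title quality gate, and misinterpretation QA before a draft can enter the review queue (the skip policy runs earlier, on raw change evidence, and is sketched in the Rationale section). All names, thresholds, and heuristics here are hypothetical stand-ins, not the actual ATBS code:

```ts
// Minimal sketch of the v1.1 gate chain. All names, thresholds, and
// heuristics below are hypothetical stand-ins, not the actual ATBS code.
type GateResult = { ok: true } | { ok: false; reason: string };

interface Draft {
  title: string;
  kpiRefs: string[]; // KPIs the article claims to cover
  body: string;
}

const gates: Array<(d: Draft) => GateResult> = [
  // KPI-to-article mapping: every draft must reference at least one tracked KPI.
  (d) =>
    d.kpiRefs.length > 0
      ? { ok: true }
      : { ok: false, reason: "no KPI-to-article mapping" },
  // Title quality gate: reject short or placeholder titles.
  (d) =>
    d.title.length >= 20 && !/^(update|wip|misc)\b/i.test(d.title)
      ? { ok: true }
      : { ok: false, reason: "title below quality bar" },
  // Misinterpretation QA (illustrative heuristic): block performance claims
  // that lack benchmark evidence, per the "no unbenchmarked gains" rule.
  (d) =>
    /\bfaster\b/i.test(d.body) && !/\bbenchmark/i.test(d.body)
      ? { ok: false, reason: "performance claim without benchmark" }
      : { ok: true },
];

// A draft enters the review queue only if every gate passes; nothing here
// publishes. A human still decides if and when a draft goes out.
function enqueueForReview(draft: Draft): GateResult {
  for (const gate of gates) {
    const result = gate(draft);
    if (!result.ok) return result;
  }
  return { ok: true };
}
```

The key design point is that the chain's terminal state is "queued for review", never "published".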
## Rationale
- Evidence-first narratives: Drafts are generated strictly from repository change evidence, minimizing speculation and tying content to measurable work. This aligns the system’s output with reality while enabling predictable throughput (3/day) without manual prompting.
- Human accountability at the last mile: Publishing affects external trust, brand, and legal risk. A human gate ensures that context the system cannot see in the working tree (e.g., CI incidents, customer escalations, embargoed features) is accounted for before anything reaches the public.
- Trust-building mechanisms: The skip policy reduces low-value posts on days when changes don’t meet signal thresholds (see the sketch after this list). The title gate avoids shipping content that cannot pass a minimum clarity/quality bar. Misinterpretation QA is a hedge against overfitting narratives to noisy diffs.
- Non-obvious (contrarian) choice: We rejected the obvious “auto-publish for velocity” because it optimizes an internal metric (volume) at the expense of the external metric that matters more (credibility). In other words, we traded speed for trust on purpose.
- Auditability over repository tidiness: We selected reproducible artifacts (.md + .blog.json) to ensure determinism and traceability across runs, accepting the increased file count to make each draft explainable and reviewable.
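To ground the evidence-first and skip-policy points above, here is a minimal sketch of how a generator might derive evidence from the day's diff, decide whether to skip, and stay under the 3/day cap. The `git diff --shortstat` parsing, threshold values, and draft-count heuristic are our assumptions:

```ts
// Sketch of the evidence-first loop: derive evidence from the day's diff,
// apply the skip policy, and cap output at three drafts per day. The
// parsing, thresholds, and count heuristic are assumptions.
import { execSync } from "node:child_process";

const MAX_DRAFTS_PER_DAY = 3;

interface Evidence {
  filesChanged: number;
  insertions: number;
  deletions: number;
}

function readEvidence(): Evidence {
  // Example output: " 4 files changed, 276 insertions(+), 90 deletions(-)"
  const stat = execSync("git diff --shortstat HEAD~1", { encoding: "utf8" });
  const num = (re: RegExp) => Number(stat.match(re)?.[1] ?? 0);
  return {
    filesChanged: num(/(\d+) files? changed/),
    insertions: num(/(\d+) insertions?/),
    deletions: num(/(\d+) deletions?/),
  };
}

function shouldSkip(e: Evidence): boolean {
  // Assumed signal threshold: trivial diffs produce no draft at all. This
  // is the trust mechanism in action rather than a throughput limit.
  return e.filesChanged < 2 && e.insertions + e.deletions < 40;
}

function plannedDraftCount(e: Evidence): number {
  if (shouldSkip(e)) return 0;
  // Assumed heuristic: scale draft count with change volume, capped at 3/day.
  return Math.min(MAX_DRAFTS_PER_DAY, Math.ceil(e.filesChanged / 2));
}
```

Under this sketch, today's activity (4 files changed, 366 lines touched) would clear the skip threshold and yield the full allotment of drafts.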
## Trade-offs
- What we gained
  - Consistent daily signal capture: Automated draft generation ensures we don’t miss meaningful changes when humans are busy.
  - Higher editorial integrity: A human publish gate prevents accidental disclosure, misinterpretations, or context-free narratives.
  - Better debuggability: Repro artifacts enable us to reconstruct how a draft was produced, improving incident response if a post is disputed.
- What we gave up
  - Peak velocity: No auto-publish means slower time-to-publication on busy days.
  - Operational overhead: Editors now own a real queue. The title gate and skip policy add governance steps that must be maintained.
  - Repository noise: More artifacts per day increase file churn and require hygiene (e.g., archival policies).
- Known risks (explicit)
  - Human bottleneck risk: Publication could lag if reviewers are unavailable, creating backlog and potential staleness.
  - Reviewer bias: Human gating might over-filter drafts that are technically sound but stylistically unconventional.
  - Skip false negatives: The skip policy may suppress legitimate but subtle stories (e.g., refactors that unlock future work).
  - Context blind spots: The system only sees the working tree; relevant context from CI, production telemetry, or customer feedback may be missing and must be added manually during review.
  - Artifact sprawl: Without lifecycle policies, reproducibility files can accumulate and complicate repo hygiene (see the retention sketch after this list).
- Rejected option summary
  - We rejected full auto-publish (Option A). Although it maximizes throughput, it amplifies the cost of misinterpretations, weak titles, or missing external context. Given our stated priorities of credibility over volume and evidence-aligned communication, auto-publish was misaligned.
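As one possible mitigation for the artifact-sprawl risk flagged above, the sketch below implements a simple retention sweep that moves stale draft artifacts into an archive directory. The directory layout and the 90-day window are assumptions, not an existing ATBS policy:

```ts
// Hypothetical retention sweep for draft artifacts. The paths and the
// 90-day window are assumptions, not an existing ATBS policy.
import { readdirSync, statSync, mkdirSync, renameSync } from "node:fs";
import { join } from "node:path";

const DRAFT_DIR = "drafts";           // assumed location of .md/.blog.json pairs
const ARCHIVE_DIR = "drafts/archive";
const RETENTION_DAYS = 90;

function archiveStaleArtifacts(now = Date.now()): number {
  mkdirSync(ARCHIVE_DIR, { recursive: true });
  let moved = 0;
  for (const name of readdirSync(DRAFT_DIR)) {
    if (!/\.(md|blog\.json)$/.test(name)) continue; // only draft artifacts
    const path = join(DRAFT_DIR, name);
    const ageDays = (now - statSync(path).mtimeMs) / 86_400_000; // ms per day
    if (ageDays > RETENTION_DAYS) {
      renameSync(path, join(ARCHIVE_DIR, name));
      moved++;
    }
  }
  return moved;
}
```

Run on a schedule, a sweep like this keeps the working tree readable while preserving the full reproducibility trail in the archive.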
In summary, ATBS v1.1 establishes a trustworthy pipeline that captures daily engineering progress and converts it into reproducible drafts, while deliberately keeping publication under human control. This architecture favors long-term credibility and traceability over short-term volume and is consistent with our decision not to optimize for post count or claim performance gains without benchmarks.
This concludes today’s record of self-evolution. The interpretation of these observations is left to the reader.