Keyboard-first triage. Approve, edit, or reject. Edits create clean versions. Rejections feed a structured signal back to the prompt registry. Manifesto tenet 03, made operational.
- email_draft.subject, .body, .cta
- lead.pitchBrief — visible side-by-side for fact-check
- lead.enrichment.contacts[0] — addressed person
- workspace.reviewerId — who is reviewing
- pack.feedbackTaxonomy[] — controlled rejection reasons
- review.decision — approve | edit | reject
- review.editedDraft — full revision body if edited
- review.diff — char-level diff vs original draft
- review.rejectionReasons[] — taxonomy-bound
- review.reviewedAt, .reviewerId, .durationMs
- review.feedsBackTo — email_prompt_id for self-teaching

The reviewer sees one draft at a time, with the pitch brief and the enriched contact open in a side panel for fact-check. ⏎ approves and advances. E drops into an inline editor that produces a clean revision; the diff against the original draft is stored as the learning signal. X rejects with a taxonomy pick — never freeform text — so rejection reasons aggregate cleanly across leads and packs. The whole loop is built so a reviewer can clear two hundred drafts in roughly an hour.
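A minimal sketch of that loop in TypeScript. TriageQueue, handleKey, and every method name are hypothetical; only the shortcut map and the write-back fields come from the spec blocks further down.

```typescript
// triage.ts — hypothetical names throughout; the shortcut map and the
// write-back fields mirror the spec blocks below.

type Decision = "approve" | "edit" | "reject";

interface ReviewWriteBack {
  decision: Decision;
  diff: string | null;         // char-level diff, set on edit
  rejectionReasons: string[];  // taxonomy keys only
  reviewerId: string;
  feedsBackTo: string;         // email_prompt_id for self-teaching
}

type Action = Decision | "next" | "prev" | "factsPanel" | "showHelp";

const SHORTCUTS: Record<string, Action> = {
  Enter: "approve",
  e: "edit", E: "edit",
  x: "reject", X: "reject",
  j: "next", ArrowDown: "next",
  k: "prev", ArrowUp: "prev",
  f: "factsPanel", F: "factsPanel",
  "?": "showHelp",
};

interface TriageQueue {
  current(): { emailPromptId: string };
  advance(): void;
  retreat(): void;
  toggleFactsPanel(): void;
  showHelp(): void;
  openEditor(): void;        // inline editor; produces clean revision + diff
  openRejectPicker(): void;  // taxonomy-bound multi-select
  write(review: ReviewWriteBack): Promise<void>;
}

async function handleKey(key: string, q: TriageQueue, reviewerId: string) {
  switch (SHORTCUTS[key]) {
    case "approve":
      // ⏎ approves and advances in a single stroke
      await q.write({
        decision: "approve",
        diff: null,
        rejectionReasons: [],
        reviewerId,
        feedsBackTo: q.current().emailPromptId,
      });
      q.advance();
      break;
    case "edit":       q.openEditor();       break;
    case "reject":     q.openRejectPicker(); break;
    case "next":       q.advance();          break;
    case "prev":       q.retreat();          break;
    case "factsPanel": q.toggleFactsPanel(); break;
    case "showHelp":   q.showHelp();         break;
  }
}
```

Edit and reject open modal flows rather than writing immediately, so the write-back always carries either a diff or taxonomy codes.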
Edits version cleanly. Each edited draft is a new row keyed by the original email_prompt_id, with a parent pointer to the unedited version. The edit diff and the reviewer id are stored alongside. After a quarter's worth of edits, the pack's prompt team can ask the registry "what does my reviewer change about openers in legal_v17?" and get a corpus of paired before/after text — the kind of training data that actually moves draft quality.
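A sketch of the versioned rows and the before/after pairing. The DraftVersion shape and field names are assumptions; the structure follows the text: one new row per edit, a parent pointer, diff and reviewer id alongside.

```typescript
// Versioned edit rows, sketched. Field names are assumptions.

interface DraftVersion {
  id: string;
  emailPromptId: string;           // key back to the generating prompt
  parentVersionId: string | null;  // null for the unedited original
  body: string;
  diff: string | null;             // char-level diff vs parent; null on original
  reviewerId: string | null;
  createdAt: Date;
}

// Pair each edit with its parent to build the before/after corpus the
// prompt team queries ("what does my reviewer change about openers?").
function pairEdits(rows: DraftVersion[]): Array<{ before: string; after: string }> {
  const byId = new Map(rows.map((r) => [r.id, r]));
  const pairs: Array<{ before: string; after: string }> = [];
  for (const r of rows) {
    const parent = r.parentVersionId ? byId.get(r.parentVersionId) : undefined;
    if (parent) pairs.push({ before: parent.body, after: r.body });
  }
  return pairs;
}
```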
Rejections are the most valuable signal in the system. A taxonomy-bound reject (tone_off, facts_wrong, cta_weak, credit_link_off) carries a structured, comparable failure mode. The prompt registry can roll those up by version, by pack, or by reviewer, and surface the failure modes that recur most for the next prompt bump. The reviewer is, in a real sense, the editor of the next prompt.
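A rollup sketch. ReviewRow and its promptVersion/packId fields are assumptions; the aggregation itself is just counting taxonomy keys per prompt version.

```typescript
// Rejection rollup, sketched. Rows are assumed to carry the write-back
// fields plus the prompt version and pack they ran under.

interface ReviewRow {
  promptVersion: string;       // e.g. "legal_v17"
  packId: string;
  rejectionReasons: string[];  // taxonomy keys
}

// Count each taxonomy key per prompt version; the top entries are the
// candidates for the next prompt bump.
function rollUp(rows: ReviewRow[]): Map<string, Map<string, number>> {
  const byVersion = new Map<string, Map<string, number>>();
  for (const row of rows) {
    const counts = byVersion.get(row.promptVersion) ?? new Map<string, number>();
    for (const reason of row.rejectionReasons) {
      counts.set(reason, (counts.get(reason) ?? 0) + 1);
    }
    byVersion.set(row.promptVersion, counts);
  }
  return byVersion;
}
```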
--- shortcuts ---

```
{
  "approve": ["Enter"],
  "edit": ["e", "E"],
  "reject": ["x", "X"],
  "next": ["j", "ArrowDown"],
  "prev": ["k", "ArrowUp"],
  "factsPanel": ["f", "F"],
  "showHelp": ["?"]
}
```

--- feedback taxonomy (per pack) ---

```
[
  "tone_off",           # voice doesn't match brand profile
  "facts_wrong",        # claim contradicts pitchBrief
  "specific_obs_weak",  # observation isn't specific enough
  "cta_weak",           # question is generic
  "credit_link_off",    # wrong link for audience
  "wrong_addressee",    # named leader not the buyer
  "language_drift"      # language wrong
]
```

--- write-back ---

```
{
  "decision": "approve" | "edit" | "reject",
  "diff": string | null,           # on edit
  "rejectionReasons": string[],    # taxonomy keys only
  "reviewerId": uuid,
  "feedsBackTo": email_prompt_id   # self-teaching attribution
}

# <!-- PLACEHOLDER — taxonomy editable per pack in app -->
```
risk: Reviewer hits Enter on every draft without reading. Reputation cost on send.
mitigation: Per-reviewer throughput dashboard. Approve-rate > 95% with zero edits triggers a calibration prompt; admin can mandate sample re-review.
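The trigger condition, sketched with an assumed ReviewerStats shape; the thresholds are the ones from the text.

```typescript
// Calibration trigger, sketched: approve rate above 95% with zero edits
// flags the reviewer for sample re-review.

interface ReviewerStats {
  reviewerId: string;
  approvals: number;
  edits: number;
  rejects: number;
}

function needsCalibration(s: ReviewerStats): boolean {
  const total = s.approvals + s.edits + s.rejects;
  if (total === 0) return false;
  return s.edits === 0 && s.approvals / total > 0.95;
}
```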
risk: Reviewer rewrites the draft heavily; the original signal is gone.
mitigation: Edits are stored as a new versioned row keyed back to the original prompt id. Char-level diff persisted alongside both bodies.
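One way to persist that pair, reusing the DraftVersion shape from the earlier sketch. diff-match-patch is an assumption, not a mandated library; any char-level differ fits.

```typescript
// edit-writeback.ts — builds the new versioned row for a heavy edit.
// Both bodies survive: the parent row keeps the original, this row the revision.
import { diff_match_patch } from "diff-match-patch";

function buildEditRow(
  original: DraftVersion,
  editedBody: string,
  reviewerId: string,
): DraftVersion {
  const dmp = new diff_match_patch();
  const diffs = dmp.diff_main(original.body, editedBody);
  dmp.diff_cleanupSemantic(diffs);  // collapse char noise into readable chunks
  return {
    id: crypto.randomUUID(),
    emailPromptId: original.emailPromptId,  // keyed back to the prompt
    parentVersionId: original.id,           // pointer to the unedited version
    body: editedBody,
    diff: dmp.diff_toDelta(diffs),          // compact char-level delta
    reviewerId,
    createdAt: new Date(),
  };
}
```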
risk: Rejections come back as prose, not aggregable.
mitigation: Reject UI is a multi-select on the pack's feedbackTaxonomy. Optional note is freeform; the structured codes are required.
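A write-time guard, sketched. Names are assumptions; the rule is the spec's: structured codes required, taxonomy keys only, note stays freeform.

```typescript
// Enforcing taxonomy-bound rejects at write time. The pack's
// feedbackTaxonomy is the source of truth; codes outside it are refused.

function validateReject(
  rejectionReasons: string[],
  feedbackTaxonomy: string[],
): { ok: true } | { ok: false; error: string } {
  if (rejectionReasons.length === 0) {
    return { ok: false, error: "at least one structured code is required" };
  }
  const allowed = new Set(feedbackTaxonomy);
  const unknown = rejectionReasons.filter((r) => !allowed.has(r));
  return unknown.length === 0
    ? { ok: true }
    : { ok: false, error: `not in pack taxonomy: ${unknown.join(", ")}` };
}
```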
Try the keyboard loop. J / Enter / E / X. Feel the rhythm.