# The 11-stage pipeline at a glance
Eight machine stages. Three human gates. One page each in the Pipeline section.
Every lead in your CSV walks through these eleven steps in order. The machine stages are versioned prompts you can read and fork. The human stages are gates a person must clear in the app. No auto-send anywhere.
- 01 discover() (machine)
- 02 web_audit() (machine)
- 03 social_research() (machine)
- 04 qualify() (machine)
- 05 extract_signals() (machine)
- 06 draft_brief() (machine)
- 07 enrich_contact() (human gate)
- 08 draft_email() (machine)
- 09 human_review() (human gate)
- 10 send() (human gate)
- 11 track() (machine)
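The ordering and the machine/human split above can be sketched as plain data the runner iterates over. The stage names come from the list; the `Stage` type and the gate counts are an illustrative sketch, not the product's actual API.

```python
# Sketch of the pipeline as data. Stage names mirror the list above;
# the Stage type itself is a hypothetical illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Stage:
    order: int
    name: str
    human_gate: bool  # True: a person must clear this stage in the app

PIPELINE = [
    Stage(1, "discover", False),
    Stage(2, "web_audit", False),
    Stage(3, "social_research", False),
    Stage(4, "qualify", False),
    Stage(5, "extract_signals", False),
    Stage(6, "draft_brief", False),
    Stage(7, "enrich_contact", True),
    Stage(8, "draft_email", False),
    Stage(9, "human_review", True),
    Stage(10, "send", True),
    Stage(11, "track", False),
]

machine = [s for s in PIPELINE if not s.human_gate]
gates = [s for s in PIPELINE if s.human_gate]
```

Iterating a flat list in order, with gates marked explicitly, is what makes "no auto-send anywhere" checkable: the runner can refuse to advance past any stage where `human_gate` is true.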
## Why eleven and not three
Most "AI outbound" tools collapse this to three steps. Read the lead. Write the email. Send it. That works if you do not care what comes back.
We split the work into small stages for two reasons. Each stage gets its own prompt, its own evals, and its own cost line. And each handoff is a place a human can step in without redoing the whole run.
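A minimal sketch of why small stages make handoffs cheap: if each stage's output is persisted under its own key, stepping in after a human edit means rerunning only the stages downstream of the edit. Every name here (`run_from`, the store shape, the runner callables) is a hypothetical illustration, not the product's API.

```python
def run_from(lead_id, stages, start, store, runners):
    """Rerun `stages` from index `start`, reusing persisted outputs before it.

    Hypothetical sketch: `store` maps (lead_id, stage_name) -> output,
    `runners` maps stage_name -> callable taking the context so far.
    """
    # Load checkpointed outputs for every stage before the restart point.
    ctx = {name: store[(lead_id, name)] for name in stages[:start]}
    for name in stages[start:]:
        ctx[name] = runners[name](ctx)      # each stage sees prior outputs
        store[(lead_id, name)] = ctx[name]  # checkpoint for the next handoff
    return ctx
```

If a human edits the output of stage `k`, the caller writes the edit into the store and calls `run_from` with `start = k + 1`; nothing before the edit is recomputed.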
## What it costs to run once
Per lead, the median run uses about 14k input tokens and 1.2k output tokens across the eight machine stages. On Anthropic Sonnet routing, that is roughly $0.06 per lead. On a mixed BYOK setup with Groq for early stages and Sonnet only for the brief and email, it drops to about $0.018. Real numbers and the routing rules behind them are in the stage-by-stage cost tables.
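As a check on the per-lead figure, the arithmetic is just tokens times per-million-token price. The Sonnet rates below ($3 per million input tokens, $15 per million output) are an assumption for the sketch; substitute your own rates.

```python
def cost_per_lead(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Dollar cost of one run; prices are quoted per million tokens."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# Assumed Sonnet rates: $3/M input, $15/M output.
sonnet = cost_per_lead(14_000, 1_200, 3.0, 15.0)
# 14k * $3/M = $0.042 in, 1.2k * $15/M = $0.018 out -> $0.06 per lead
```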
## Where it fails
Honestly: most often at qualify(). About 38 percent of leads in a typical raw CSV do not survive scoring, and that is before we have written a word. The next biggest failure point is extract_signals() when there is genuinely nothing public about the company. We surface both as exit reasons in the run report so you know what to fix in your list, not in the prompt.
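The run report's exit reasons amount to a tally over completed runs. The record shape and reason strings below are a hypothetical example, not the actual report schema.

```python
from collections import Counter

def exit_reasons(runs):
    """Tally why leads left the pipeline early.

    Hypothetical shape: `runs` is a list of dicts, each with an
    optional 'exit_reason' key; leads that ran to completion have none.
    """
    return Counter(r["exit_reason"] for r in runs if r.get("exit_reason"))

runs = [
    {"lead": "a", "exit_reason": "failed_qualify"},
    {"lead": "b"},  # survived the whole pipeline
    {"lead": "c", "exit_reason": "no_public_signals"},
    {"lead": "d", "exit_reason": "failed_qualify"},
]
```

Reporting the tally per reason, rather than a single drop rate, is what points the fix at the list (bad leads) or the world (no public signal) instead of the prompt.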
## Next
Pick a stage and read its page. Each one follows the same shape: inputs, outputs, prompt version, model defaults, common failure modes, eval metrics. Start with the overview.