Methodology:
What is the Buyer Flywheel?
A repeatable revenue-marketing method for SaaS SMBs: research active buyers, run risk-managed micro-experiments, and use a forward signal to prioritize what moves deals.
How does it run (cadence)?
90-day sprints:
Days 1–14 research
Days 15–42 test
Days 43–90 scale; repeat.
Buyer vs Customer — why the split?
Buyers (pre-purchase) have different objections than customers (post-purchase); intelligence gathered from active prospects predicts acquisition better than post-purchase customer feedback does.
Is this a framework or a product?
Methodology + templates — not a proprietary platform; tools support the process.
Who should run it?
A senior marketing lead owns it; sales co-owns the qualification; small teams can start with a single practitioner.
Alternatively, Revenue Operations can own it, a model advocated by Dr. Debbie Qaqish.
When does it fit?
Use it when you’re an SMB SaaS with budget limits, product-market fit, and inconsistent conversions.
Metrics & Measurement:
What is LVR?
Lead Velocity Rate = month-over-month growth in qualified leads; a forward indicator of pipeline momentum.
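To make the calculation concrete, here is a minimal sketch in Python; the function name and the example figures are illustrative, not part of the Playbook.

```python
def lead_velocity_rate(qualified_this_month: int, qualified_last_month: int) -> float:
    """LVR: month-over-month percentage growth in qualified leads."""
    return (qualified_this_month - qualified_last_month) / qualified_last_month * 100

# e.g., 120 qualified leads this month vs. 100 last month:
print(lead_velocity_rate(120, 100))  # 20.0 (% month-over-month)
```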
Must we use LVR?
No. LVR is the recommended default; any substitute metric must still be leading, stable, and shared across sales and marketing.
How to define a ‘qualified lead’?
Agree on a sales-accepted SQL definition (demo requested, trial started, or sales-accepted MQL), then lock it for LVR calculations.
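A minimal sketch of that locked definition in code; the field names (demo_requested, trial_started, sales_accepted_mql) are hypothetical stand-ins for whatever your CRM exposes.

```python
def is_sql(lead: dict) -> bool:
    """A lead is qualified if any agreed trigger fired.
    Field names are hypothetical; map them to your CRM's schema."""
    return bool(
        lead.get("demo_requested")
        or lead.get("trial_started")
        or lead.get("sales_accepted_mql")
    )

print(is_sql({"trial_started": True}))  # True
```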
What benchmarks are realistic?
Use your own baseline to set targets; growth-stage SaaS companies often aim for 15–25% month-over-month qualified-lead growth as an aggressive target.
Where does attribution fit?
Keep attribution for revenue crediting; the Flywheel is a learning engine to validate messaging and acquisition tactics.
How frequently do we measure?
Track leading signals weekly; formally review LVR and experiments monthly during sprint retros.
Execution & Process:
How to start the first sprint?
Begin with 5–7 buyer interviews, lost-deal review, and friction mapping; then design 3 small tests.
What makes a valid test?
Single-variable hypothesis, defined audience, sample size, tracking plan, and a pre-registered success metric.
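Sample size is the element teams most often skip. A rough pre-registration sketch, assuming the test compares two conversion rates with a two-sided two-proportion z-test; the rates in the example are invented.

```python
from statistics import NormalDist

def sample_size_per_arm(p_base: float, lift: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-arm sample size to detect an absolute lift
    in a conversion rate (two-sided two-proportion z-test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_b = NormalDist().inv_cdf(power)          # 0.84 for 80% power
    p_test = p_base + lift
    p_bar = (p_base + p_test) / 2
    n = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
         + z_b * (p_base * (1 - p_base) + p_test * (1 - p_test)) ** 0.5) ** 2 / lift ** 2
    return int(n) + 1

# e.g., detecting a 2% -> 3% landing-page conversion lift needs ~3,800 visitors per arm:
print(sample_size_per_arm(p_base=0.02, lift=0.01))
```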
How many tests at once?
As many as your 5–20% experimentation budget and your team's capacity allow; prefer multiple small parallel tests to one big bet.
What documentation is required?
Experiment log (a schema sketch follows this list):
hypothesis,
audience,
timeline,
context,
sample size,
expected outcome (KPIs),
outcome,
confidence,
ROI estimate, and
next action.
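A minimal sketch of one log entry as a data structure; the schema simply mirrors the fields above, and the types are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExperimentLogEntry:
    """One row of the experiment log; fields mirror the list above."""
    hypothesis: str
    audience: str
    timeline: str
    context: str
    sample_size: int
    expected_outcome: str               # target KPIs
    outcome: str = "pending"
    confidence: Optional[float] = None  # e.g., 0.95 once significance is reached
    roi_estimate: Optional[float] = None
    next_action: str = ""
```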
How to hand results to sales?
Deliver validated messages, objection scripts, and a one-page playbook; run a short handover session.
Budget & Governance:
How much to reserve for experiments?
Allocate 5–20% of marketing budget to experimentation per quarter.
Rule of thumb: The higher the risk, the lower the budget.
Per-test cap?
Cap individual unproven tests at roughly 2% of the quarterly marketing budget to control downside.
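The two guardrails reduce to simple arithmetic; in this sketch the 10% pool share is an assumed midpoint of the 5–20% range.

```python
def budget_guardrails(quarterly_budget: float,
                      pool_share: float = 0.10,      # assumed midpoint of the 5-20% range
                      per_test_cap_share: float = 0.02) -> tuple[float, float]:
    """Return (experimentation pool, per-test cap for unproven tests)."""
    return quarterly_budget * pool_share, quarterly_budget * per_test_cap_share

# e.g., a $50k quarterly budget gives a $5,000 pool and a $1,000 cap per unproven test:
print(budget_guardrails(50_000))  # (5000.0, 1000.0)
```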
What if leadership objects to failed tests?
Reframe failed tests as paid learning: each one is logged and its lesson shared, and governance plus transparent logging demonstrate progress while preventing repeated mistakes.
How to budget if cash is very tight?
Trim test scope, prioritize qualitative buyer interviews, and run lower-cost validations (emails/landing pages) first.
Fit & Limits:
Is it only for SaaS?
Best fit for SaaS SMBs but adaptable wherever buyer research + small experiments can improve acquisition.
When NOT to use it?
When the core problem is product-market fit, pricing, or a fundamental product issue, solve product first.
Does Flywheel replace product or pricing work?
No — it surfaces evidence that may require product/pricing changes; it’s an early-warning system, not a cure for product-market-fit gaps.
Advanced:
What statistical threshold to require?
Target 95% confidence before scaling experiment winners; record effect size and ROI assumptions.
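A minimal significance check, assuming the experiment compares two conversion rates; the counts in the example are invented for illustration.

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates;
    scale the winner only when p < 0.05 (95% confidence)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# e.g., 40/1000 conversions on control vs. 62/1000 on the variant:
p = two_proportion_p_value(40, 1000, 62, 1000)
print(f"p = {p:.4f}; scale winner: {p < 0.05}")  # p ≈ 0.025; True
```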
What supporting KPIs to track?
Conversion rates by funnel stage,
marketing-attributed pipeline,
CAC,
LTV:CAC, and
pipeline velocity (see the sketch after this list).
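Two of these have simple closed-form definitions; a sketch assuming the common pipeline-velocity formula (opportunities × win rate × average deal size ÷ sales-cycle length), which the Playbook itself doesn't spell out.

```python
def ltv_to_cac(ltv: float, cac: float) -> float:
    """Healthy SaaS benchmarks usually look for a ratio of 3 or more."""
    return ltv / cac

def pipeline_velocity(open_opps: int, win_rate: float,
                      avg_deal_size: float, sales_cycle_days: float) -> float:
    """Expected revenue per day moving through the pipeline."""
    return open_opps * win_rate * avg_deal_size / sales_cycle_days

# e.g., 40 opportunities, 25% win rate, $6k deals, 60-day cycle:
print(pipeline_velocity(40, 0.25, 6_000, 60))  # 1000.0 ($/day)
```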
Can this be audited by analysts?
Yes. Keep experiment logs (a scientist's lab notebook is the right mental model), LVR history, and sprint retros available for external review, and cite sources from the Playbook bibliography.