FAQs for Curious Minds

Methodology:

  1. What is the Buyer Flywheel?

    A repeatable revenue-marketing method for SaaS SMBs: research active buyers, run risk-managed micro-experiments, and use a leading signal to prioritize what moves deals.

  2. How does it run (cadence)?

    90-day sprints:

    • Days 1–14 research

    • Days 15–42 test

    • Days 43–90 scale; repeat.

  3. Buyer vs Customer — why the split?

    Buyers (pre-purchase) raise different objections than customers (post-purchase); intelligence gathered from active prospects predicts acquisition outcomes better than post-purchase customer feedback does.

  4. Is this a framework or a product?

    Methodology + templates — not a proprietary platform; tools support the process.

  5. Who should run it?

    A senior marketing lead owns it; sales co-owns the qualification; small teams can start with a single practitioner.

    Alternatively, Revenue Operations can run it, an approach advocated by Dr. Debbie Qaqish.

  6. When does it fit?

    Use it when you’re an SMB SaaS with budget limits, product-market fit, and inconsistent conversions.

Metrics & Measurement:

  1. What is LVR?

    Lead Velocity Rate = month-over-month growth in qualified leads; a leading indicator of pipeline momentum.
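
    For teams that want the math spelled out, here is a minimal sketch of the LVR calculation in Python; the function name and sample numbers are illustrative, not part of the Playbook.

    ```python
    def lead_velocity_rate(qualified_this_month: int,
                           qualified_last_month: int) -> float:
        """Month-over-month growth in qualified leads, as a percentage."""
        change = qualified_this_month - qualified_last_month
        return change / qualified_last_month * 100

    # Example: 110 qualified leads this month vs 100 last month -> 10.0% LVR.
    print(lead_velocity_rate(110, 100))  # 10.0
    ```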

  2. Must we use LVR?

    No. LVR is the recommended default, but any substitute works as long as it is leading, stable in definition, and shared across marketing and sales; note that CAC, a common alternative, is a lagging indicator and fits this role less well.

  3. How to define a ‘qualified lead’?

    Agree on a single qualified-lead definition with sales (e.g., demo requested, trial started, or sales-accepted MQL) and lock it in for all LVR calculations.

  4. What benchmarks are realistic?

    Use your own baseline to set targets; growth-stage SaaS companies often treat 15–25% month-over-month qualified-lead growth as an aggressive target (15% M/M compounds to roughly 5x qualified leads over a year).

  5. Where does attribution fit?

    Keep attribution for revenue crediting; the Flywheel is a learning engine to validate messaging and acquisition tactics.

  6. How frequently do we measure?

    Track leading signals weekly; formally review LVR and experiments monthly during sprint retros.

Execution & Process:

  1. How to start the first sprint?

    Begin with 5–7 buyer interviews, a lost-deal review, and friction mapping; then design three small tests.

  2. What makes a valid test?

    A single-variable hypothesis, a defined audience, an adequate sample size, a tracking plan, and a pre-registered success metric; a sample-size rule of thumb is sketched below.
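
    As referenced above, one common rule of thumb for the sample-size item is Lehr's approximation (roughly 80% power at a two-sided 5% significance level); the function and numbers here are illustrative assumptions, not a Playbook requirement.

    ```python
    def min_sample_per_arm(baseline_rate: float, target_rate: float) -> int:
        """Lehr's rule of thumb: n ~ 16 * p(1 - p) / delta^2 per arm,
        where p is the average of the two rates and delta the lift sought."""
        p_bar = (baseline_rate + target_rate) / 2
        delta = target_rate - baseline_rate
        return round(16 * p_bar * (1 - p_bar) / delta ** 2)

    # Example: detecting a lift from 4% to 6% conversion needs ~1,900 per arm.
    print(min_sample_per_arm(0.04, 0.06))  # 1900
    ```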

  3. How many tests at once?

    As many as your 5–20% experimentation budget and your team's capacity allow; prefer multiple small parallel tests to one big bet.

  4. What documentation is required?

    Experiment log (a minimal record sketch follows this list):

    • hypothesis,

    • audience,

    • timeline,

    • context,

    • sample size,

    • expected outcome (KPIs),

    • outcome,

    • confidence,

    • ROI estimate, and

    • next action.
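
    For teams that prefer the log machine-readable, here is a minimal sketch of one record as a Python dataclass; the field names mirror the list above, while the types and sample values are illustrative assumptions.

    ```python
    from dataclasses import dataclass, asdict

    @dataclass
    class ExperimentLogEntry:
        hypothesis: str          # single-variable hypothesis under test
        audience: str            # defined segment the test targets
        timeline: str            # start/end dates of the test window
        context: str             # why this test, why now
        sample_size: int         # planned number of subjects
        expected_outcome: str    # pre-registered KPI and success threshold
        outcome: str             # what actually happened
        confidence: float        # statistical confidence in the result
        roi_estimate: float      # estimated return per dollar spent
        next_action: str         # scale, iterate, or kill

    entry = ExperimentLogEntry(
        hypothesis="Shorter demo form lifts demo requests",
        audience="SMB trial sign-ups",
        timeline="2024-03-01 to 2024-03-14",
        context="Sprint 2; friction mapping flagged form length",
        sample_size=800,
        expected_outcome="Demo-request rate +15% at 95% confidence",
        outcome="Demo-request rate +11%",
        confidence=0.90,
        roi_estimate=2.4,
        next_action="Iterate: test a shorter form variant",
    )
    print(asdict(entry))  # ready to append to a JSON or CSV log
    ```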

  5. How to hand results to sales?

    Deliver validated messages, objection scripts, and a one-page playbook; run a short handover session.

Budget & Governance:

  1. How much to reserve for experiments?

    Allocate 5–20% of marketing budget to experimentation per quarter.

    Rule of thumb: the riskier the test, the smaller its share of that budget.

  2. Per-test cap?

    Cap individual unproven tests at roughly 2% of the quarterly marketing budget to control downside (e.g., a $50,000 quarterly budget caps each unproven test at about $1,000); a worked sketch follows.
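
    A minimal sketch combining both budget rules above, assuming a hypothetical $50,000 quarterly budget; the 10% share is just one point inside the 5–20% range.

    ```python
    def experiment_budget(quarterly_budget: float,
                          experiment_share: float = 0.10,   # anywhere in the 5-20% range
                          per_test_cap_share: float = 0.02) -> tuple[float, float]:
        """Return (total experimentation pool, cap per unproven test)."""
        return (quarterly_budget * experiment_share,
                quarterly_budget * per_test_cap_share)

    # Example: $50,000 quarterly budget -> $5,000 pool, $1,000 per-test cap.
    pool, cap = experiment_budget(50_000)
    print(pool, cap)  # 5000.0 1000.0
    ```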

  3. What if leadership objects to failed tests?

    Failures are logged and treated as lessons; governance and transparent logging demonstrate learning and reduce repeated mistakes.

  4. How to budget if cash is very tight?

    Trim test scope, prioritize qualitative buyer interviews, and run lower-cost validations (emails/landing pages) first.

Fit & Limits:

  1. Is it only for SaaS?

    Best fit for SaaS SMBs but adaptable wherever buyer research + small experiments can improve acquisition.

  2. When NOT to use it?

    When the core problem is product-market fit, pricing, or a fundamental product issue: solve the product first, because marketing experiments cannot fix it.

  3. Does Flywheel replace product or pricing work?

    No — it surfaces evidence that may require product/pricing changes; it’s an early-warning system, not a cure for product-market-fit gaps.

Advanced:

  1. What statistical threshold to require?

    Target 95% confidence before scaling experiment winners; record effect size and ROI assumptions.
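
    A minimal sketch of that confidence check as a two-proportion z-test, using only the Python standard library; the function and numbers are illustrative, and teams with a stats stack may prefer an off-the-shelf test.

    ```python
    from math import sqrt
    from statistics import NormalDist

    def conversion_lift_significant(conv_a: int, n_a: int,
                                    conv_b: int, n_b: int,
                                    confidence: float = 0.95) -> bool:
        """Two-sided two-proportion z-test: does variant B's conversion rate
        differ from control A's at the given confidence level?"""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p-value
        return p_value < (1 - confidence)

    # Example: 40/1000 control conversions vs 62/1000 test conversions.
    print(conversion_lift_significant(40, 1000, 62, 1000))  # True at 95%
    ```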

  2. What supporting KPIs to track?

    • Conversion rates by funnel stage,

    • marketing-attributed pipeline,

    • CAC,

    • LTV:CAC, and

    • pipeline velocity (formula sketched below).
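
    Of the KPIs above, pipeline velocity is the only one with a non-obvious formula; a common formulation is (qualified opportunities x win rate x average deal size) / sales-cycle length, sketched here with illustrative numbers.

    ```python
    def pipeline_velocity(qualified_opps: int, win_rate: float,
                          avg_deal_size: float, sales_cycle_days: float) -> float:
        """Revenue moving through the pipeline per day, using the common
        (opportunities * win rate * deal size) / cycle-length formulation."""
        return qualified_opps * win_rate * avg_deal_size / sales_cycle_days

    # Example: 50 opps, 20% win rate, $6,000 deals, 90-day cycle -> ~$666.67/day.
    print(round(pipeline_velocity(50, 0.20, 6_000, 90), 2))  # 666.67
    ```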

  3. Can this be audited by analysts?

    Yes. Keep experiment logs the way scientists keep lab notebooks (think composition books), along with LVR history and sprint retros, ready for external review; cite sources via the Playbook bibliography.