- How did you determine your initial pricing?
Benchmarked against workforce SaaS and assessment platforms (per-seat / per-candidate).
Modeled “cost per successful placement” vs. what government and employers already pay (recruiters, training, churn).
Backed into a price that is neutral-to-cheaper than current spend, while supporting our margins.
- What feedback have customers given you about pricing?
Government: “This is reasonable if it’s tied to clear outcomes (placements, completions, retention).”
Employers: Price is acceptable if it replaces some recruiter/assessment spend and doesn’t feel like an extra line item.
General: Less pushback on total price, more on how it’s structured (per seat vs. per hire vs. per cohort).
- How sensitive are they to price changes?
Moderately sensitive on unit price, highly sensitive on budget category.
If it fits into existing workforce/recruiting budgets and replaces something, there’s flexibility.
If it feels “additive” or comes from innovation dollars only, sensitivity is high.
- What is your next pricing experiment (e.g., free trial → paid, usage tiers, value-based pricing)?
[current] Test a “program / placement” model with government: platform fee + a fee per validated hire.
Segment government buyers into three motion types: fast-adopters (regional workforce offices), credibility buyers (DoD pilots), and institutional renewers (DOL multi-year program) — then sequence resource allocation accordingly.
For employers, test a role- or site-based subscription (all candidates for X roles or locations included), instead of per-candidate fees.
Codify a repeatable employer-validation protocol (a 30-day trust-building process with simulation demos, supervisor calibration, and post-placement feedback) so employer adoption becomes scalable rather than relational. Prioritize employer demand signals: track placements per employer, supervisor satisfaction, and time-to-first-hire.
- How does your pricing reflect the real value delivered?
Anchored to outcomes they already track: time-to-fill, cost-per-hire, and retention.
Designed so that, when fully used, our cost per successful hire is lower than what they pay today for mis-hires, overtime, and recruiting — and so we only win when they do.
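The break-even logic above reduces to simple unit economics: total program cost divided by hires who stick, compared against what the buyer pays today. A minimal sketch, with all dollar figures and retention rates below being hypothetical placeholders rather than actual pricing:

```python
def cost_per_successful_hire(platform_fee, fee_per_hire, hires, retained_rate):
    """Total program cost divided by hires still retained
    past the retention window (hypothetical model)."""
    total_cost = platform_fee + fee_per_hire * hires
    successful_hires = hires * retained_rate
    return total_cost / successful_hires

# Hypothetical program: $50k platform fee, $2k per validated hire,
# 100 hires, 90% retained.
ours = cost_per_successful_hire(50_000, 2_000, 100, 0.90)

# Hypothetical incumbent baseline: recruiter fees plus mis-hire and
# overtime drag, spread per hire, with weaker retention.
incumbent = cost_per_successful_hire(0, 6_000, 100, 0.70)

# The pricing "wins" whenever ours < incumbent.
print(round(ours), round(incumbent))
```

The comparison only holds if both sides are measured over the same retention window, which is why the model anchors to metrics the buyer already tracks.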
Revenue Model Summary
Our biggest business-model risk is relying on large, slow government contracts where the payer is separate from daily users. To de-risk this, we’re testing tightly scoped, outcome-priced programs with a government partner and a small set of employers, proving unit economics and creating a repeatable playbook for multi-year deals.
Early willingness to pay is strongest from veteran/workforce programs and utilities/manufacturers hiring for critical roles, evidenced by funded pilots, data-sharing, and movement into budget/legal review rather than just verbal enthusiasm.
If we suddenly raised prices 2–3x, we’d expect government partners to push for narrow pilots and strict guarantees, and employers to shrink scope to single sites or roles, visible through lower win rates and more deals stalling at procurement/finance.
Our decision to keep or change the current pricing model will be driven by Net Revenue Retention plus usage; if usage is strong but NRR is flat, that’s our signal to move toward usage- or value-based pricing so revenue tracks the value we create.
Expansion revenue comes in two waves: first, within government contracts (more roles, sites, and programs per agency), and second, through a future B2B model where employers who participate in government-funded programs and see results convert into direct paying clients (site- or role-based subscriptions). The current government-led model is intentionally designed to bring employers into the system early, generate proof of impact for them, and then unlock that B2B expansion path.
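Net Revenue Retention, the trigger metric named above, has a standard definition: starting recurring revenue plus expansion, minus contraction and churn, over starting revenue. A minimal sketch with hypothetical cohort figures (none of these numbers come from our actuals):

```python
def net_revenue_retention(start_arr, expansion, contraction, churn):
    """NRR for a customer cohort over one period:
    (starting ARR + expansion - contraction - churn) / starting ARR."""
    return (start_arr + expansion - contraction - churn) / start_arr

# Hypothetical cohort: $1.0M starting ARR, $200k expansion
# (more roles/sites per agency), $50k downgrades, $100k churned contracts.
nrr = net_revenue_retention(1_000_000, 200_000, 50_000, 100_000)
print(f"{nrr:.0%}")  # above 100% means expansion outpaces churn
```

Reading this alongside usage is the point: flat NRR with rising usage means customers are getting more value without paying more, which is the signal to shift toward usage- or value-based pricing.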