How to build an ROI model for executive coaching programmes
by Mentor Group

Short answer: decide what coaching should change, measure those changes fairly, and convert them into financial value your finance team recognises. This guide covers the why, the what, and the how — with research to support the case.
1) Why an ROI model matters (and what “good” looks like)
Executive coaching is valuable when it improves the habits that drive performance: clearer goals, reliable follow-through, and resilience under pressure.
Meta-analyses show coaching has significant, positive effects on these outcomes at work — see Frontiers in Psychology (2023) and Theeboom et al. (2014) in The Journal of Positive Psychology.
When organisations do track benefits, case studies have reported strong financial returns — from 5.7:1 (Manchester Inc.) to over 500% ROI (MetrixGlobal) in specific contexts. Treat these as signals, not guarantees, but they show why a clean model is worth the effort.
2) Decide what to measure (leading → lagging)
A credible model starts with a short chain from behaviour to business result.
Leading indicators (what coaching should move):
- Coaching cadence and quality (e.g., % of 1:1s completed to a simple rubric)
- Decision cycle time (how long it takes to resolve the top recurring issues)
- Self-efficacy and resilience (brief, validated scales monthly)
These are the mechanisms research says coaching improves (goal attainment, psychological capital, resilience); see Frontiers (2023 RCT meta-analysis) and Theeboom et al. (2014).
Lagging outcomes (what the exec team sees):
- Revenue efficiency: win rate, average deal value, sales cycle time
- Forecast credibility: error, slippage, push rates
- People metrics: manager retention, internal mobility, avoided replacement costs
Why this works: the mapping from behaviour to business result is explicit, the same metrics (win rate, forecast error, decision cycle time) recur throughout the model, and the relationships are easy to summarise for an executive audience.
Success summary: Track leading indicators like coaching cadence, decision cycle time and self-efficacy alongside lagging outcomes such as win rate, forecast error, retention and productivity per manager.
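To keep the leading-to-lagging mapping explicit, it can be written down as a simple structure. A minimal sketch — the metric names below are illustrative, not a prescribed taxonomy:

```python
# Minimal sketch: map each leading indicator to the lagging outcomes
# it is expected to move. Metric names are illustrative assumptions.
INDICATOR_MAP = {
    "coaching_cadence": ["win_rate", "manager_retention"],
    "decision_cycle_time": ["sales_cycle_time", "forecast_error"],
    "self_efficacy": ["manager_retention", "productivity_per_manager"],
}

def lagging_for(leading: str) -> list[str]:
    """Return the lagging outcomes a leading indicator should influence."""
    return INDICATOR_MAP.get(leading, [])
```

Writing the map down once, before the programme starts, prevents post-hoc cherry-picking of whichever metric happened to move.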
3) Set the baseline and the comparison (keep it fair, not perfect)
You don’t need a laboratory. You need fairness.
- Baseline: collect 8–12 weeks of pre-coaching data for the selected leaders.
- Comparison: pick a matched cohort (similar tenure, team size, market). If that’s not possible, use a clear pre/post design and document other factors that might affect results.
- Attribution view: start with pre/post + matched cohort; if resources allow, add a difference-in-differences cut (how much more the coached group improved vs the comparison group in the same window).
This is simple enough to run in a spreadsheet, credible enough for finance, and recognisable to auditors.
Success summary: Use an 8–12 week baseline and a matched comparison cohort. Start with a pre/post analysis and, if possible, add a difference-in-differences view to see how much more the coached group improved in the same period.
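The difference-in-differences cut described above is a single subtraction, not a statistics project. A minimal sketch with hypothetical win-rate figures:

```python
def did_effect(coached_pre: float, coached_post: float,
               comparison_pre: float, comparison_post: float) -> float:
    """Difference-in-differences: how much MORE the coached group
    improved than the matched comparison group in the same window."""
    return (coached_post - coached_pre) - (comparison_post - comparison_pre)

# Hypothetical example, win rate in percentage points:
# coached group moved 25% -> 29%; comparison moved 24% -> 25%.
effect = did_effect(25.0, 29.0, 24.0, 25.0)
# effect = 3.0 points — the improvement attributable to coaching,
# under the design's assumption that both groups faced the same market.
```

If the comparison group improved just as much, the effect is zero and the model says so — which is exactly the fairness finance is looking for.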
4) Build the money bridges (the part finance signs off)
Translate improvements into financial value using accepted conversions in your business:
- Win rate → bookings: A +3–4 point lift at steady volume is a material revenue gain. Industry round-ups often cite ~28% relative win-rate improvements where coaching is structured and consistent (use directionally, not universally).
- Forecast error → resource allocation: Fewer “mirage” deals mean less wasted pursuit time and better prioritisation.
- Retention → avoided cost: Add hiring fees, ramp time, and lost productivity for senior roles.
- Time saved → capacity: Faster decisions and fewer escalations → hours saved → value per hour.
Tip: agree these bridges with RevOps/Finance before the programme starts. That turns the ROI maths into a straightforward receipt.
Success summary: Use agreed money bridges: win rate to bookings, forecast error to resource allocation, manager retention to avoided replacement cost, and time saved to capacity. Align these with RevOps and Finance before the programme.
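Each money bridge is a single agreed conversion. A minimal sketch with hypothetical figures — the opportunity counts, deal values, and replacement costs below are placeholders for numbers your Finance team supplies:

```python
def winrate_to_bookings(lift_points: float, opportunities: int,
                        avg_deal_value: float) -> float:
    """Convert a win-rate lift (in percentage points) into incremental bookings."""
    return opportunities * (lift_points / 100.0) * avg_deal_value

def retention_to_avoided_cost(avoided_departures: int,
                              replacement_cost: float) -> float:
    """Avoided replacement cost: hiring fees + ramp time + lost productivity."""
    return avoided_departures * replacement_cost

def time_saved_to_value(hours_saved: float, value_per_hour: float) -> float:
    """Capacity bridge: hours saved priced at an agreed value per hour."""
    return hours_saved * value_per_hour

# Hypothetical: +3 point win-rate lift across 400 opportunities at £50k each.
bookings = winrate_to_bookings(3.0, 400, 50_000)
# bookings = 400 * 0.03 * 50_000 = £600,000 incremental bookings
```

Because the conversions are agreed up front, the only debate each month is about the measured movement, not the maths.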
5) Calculate ROI (and share it clearly)
Keep the formula simple and transparent:
ROI (%) = (Total Benefits – Total Costs) ÷ Total Costs × 100
Payback period = months until cumulative net benefits cover total costs (breakeven)
Costs: coaching fees, internal time, light measurement overhead.
Benefits: incremental bookings from win-rate lift, value of time saved, avoided backfill costs, and any verified retention benefit.
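The two formulas above translate directly into a spreadsheet or a few lines of code. A minimal sketch with hypothetical programme figures:

```python
def roi_percent(total_benefits: float, total_costs: float) -> float:
    """ROI (%) = (Total Benefits - Total Costs) / Total Costs * 100"""
    return (total_benefits - total_costs) / total_costs * 100.0

def payback_months(total_costs: float, monthly_net_benefit: float) -> float:
    """Months until cumulative net benefit covers total costs."""
    return total_costs / monthly_net_benefit

# Hypothetical programme: £120k total costs, £540k verified benefits
# over the year, landing at roughly £45k net benefit per month.
roi = roi_percent(540_000, 120_000)       # 350.0 -> a 3.5:1 return
months = payback_months(120_000, 45_000)  # ~2.7 months to breakeven
```

Keeping the calculation this transparent means anyone on the exec team can audit it in seconds.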
Report monthly on a single page:
- Scope (who’s in the cohort and the comparison)
- Movement (leading → lagging)
- Money bridges (assumptions agreed with Finance)
- ROI and payback (to date and projected)
6) What “good” ROI looks like in the evidence
A reasonable expectation for measured, targeted programmes is often 3–7x ROI over time, with higher multiples in some contexts. That view balances the mechanism evidence with case study returns when organisations track benefits. For example:
- Manchester Inc.: average 5.7:1 ROI across a study of 100 executives
- MetrixGlobal (Fortune 500 telecom): 529% ROI, rising to 788% with retention savings
- Meta-analyses of workplace and executive coaching show consistent positive effects on work outcomes — see the 2023 RCT meta-analysis in Frontiers in Psychology and Theeboom et al. (2014)
Success summary: Context varies, but measured, targeted programmes often land in the 3–7x range over time, with higher multiples when benefits like retention are included and the programme is executed well.
7) Governance and privacy (don’t skip this bit)
Keep measurement light and data secure. Use aggregated views for sensitive people metrics, follow your DPA/ISO processes, and be explicit about purpose: improving decision quality and leadership effectiveness.