Introduction
Most sales organisations don’t lack content; they lack clarity on which capabilities matter most right now. A practical capability assessment names the few behaviours that unlock progress, gives you evidence in weeks (not months), and sets up a focused training plan your managers can run. This guide shows how to do that—simply, fairly and in a way that leaders trust.
For a full breakdown of what good sales training looks like, read our pillar guide here.
Define Scope and Outcomes
- Start with outcomes. Frame goals in business terms—Value, Volume and Velocity (the 3Vs). Examples: improve conversion from stage 2 to stage 3, reduce cycle time by 10%, lift average margin by 2 points.
- Choose a lane. Pick one segment, one product, or one region for the first pass. Breadth kills speed; focus builds credibility.
- Agree evidence types. Decide which artefacts you will review (call snippets, opportunity notes, proposals, practice clips) and who can supply them.
Build a Behaviour Model (EQ, IQ, XQ)
Express capability through three lenses to avoid over‑indexing on product knowledge:
- EQ—trusted human connection (relevance, empathy, senior access, stakeholder rapport).
- IQ—insight and decision quality (problem diagnosis, value linking, qualification rigour).
- XQ—execution in workflow (preparation, discovery depth, multithreading, next‑step hygiene).
Keep the model short: 6–8 observable behaviours you can spot in evidence, not 40 abstract competencies.
Collect Evidence Lightly
- Self evidence—reps submit two examples per behaviour (e.g., discovery call snippet, proposal excerpt) with a quick self‑rating and context.
- Manager review—managers rate the same items against a simple rubric (e.g., 1–4 with descriptors).
- Calibration—enablement samples 10–20% for quality and consistency; adjust guidance where scores drift (a short sampling sketch follows at the end of this section).
Make it easy to contribute (recorded clips, CRM notes, redacted proposals). Avoid long surveys; use artefacts instead.
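If you prefer to pull the calibration sample programmatically rather than by hand, a minimal Python sketch along these lines is enough. The record fields and the 15% rate are illustrative assumptions sitting inside the 10–20% range above, not a prescribed setup.

```python
import random

# Minimal sketch of the 10-20% calibration sample described above.
# Record fields and the 15% rate are illustrative assumptions.
random.seed(7)  # fixed seed so the review panel sees the same sample

submissions = [
    {"rep": "Rep A", "behaviour": "Discovery depth", "artefact": "call snippet"},
    {"rep": "Rep B", "behaviour": "Multithreading", "artefact": "opportunity notes"},
    {"rep": "Rep C", "behaviour": "Qualification rigour", "artefact": "proposal excerpt"},
    {"rep": "Rep D", "behaviour": "Next-step hygiene", "artefact": "practice clip"},
]

sample_size = max(1, round(len(submissions) * 0.15))
calibration_sample = random.sample(submissions, sample_size)
for item in calibration_sample:
    print(item["rep"], "-", item["behaviour"], "-", item["artefact"])
```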
Score Fairly with a Simple Rubric
- Clarity—define what “good” looks like with plain‑English descriptors for each score.
- Consistency—train managers on 3–4 sample artefacts so scoring aligns.
- Evidence over opinion—require a link or file for each score; no anecdotal ratings.
Target a simple dashboard: behaviour, current standard, example, next action.
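If the dashboard lives in a spreadsheet export or a small script rather than a dedicated tool, a minimal sketch like the one below can enforce the evidence rule at the point of scoring. The 1–4 scale mirrors the rubric above; the field names (behaviour, current_standard, evidence_link, next_action) are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass

RUBRIC_SCALE = (1, 2, 3, 4)  # simple 1-4 rubric, as described above

@dataclass
class BehaviourScore:
    behaviour: str          # e.g. "Discovery depth"
    current_standard: int   # manager rating on the 1-4 rubric
    evidence_link: str      # link or file reference backing the score
    next_action: str        # coaching step agreed with the rep

    def validate(self) -> None:
        # Evidence over opinion: reject scores without a link or file.
        if not self.evidence_link.strip():
            raise ValueError(f"No evidence supplied for '{self.behaviour}'")
        if self.current_standard not in RUBRIC_SCALE:
            raise ValueError(f"Score must be one of {RUBRIC_SCALE}")

# Example row for the one-page dashboard (illustrative values only).
row = BehaviourScore(
    behaviour="Discovery depth",
    current_standard=2,
    evidence_link="crm://opportunity/1234/call-snippet",
    next_action="Practise problem-diagnosis questions in next role-play",
)
row.validate()
```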
Analyse for Action, Not Perfection
Look for patterns that inform a short plan:
- Two behaviours with the biggest gap and business impact.
- Where the gaps sit (new reps vs. experienced; segment; region).
- Which artefacts show best practice you can reuse.
Publish a one‑slide summary: “What we learned, what we’ll change, how we’ll measure”.
Close the Loop: Learn → Practise → Embed
- Learn—build concise, contextual modules mapped to your stages and buyer language.
- Practise—deploy digital practice (AI role‑plays, simulations) for safe repetition with instant feedback.
- Embed—run a weekly manager coaching cadence with simple prompts and an accessible metrics dashboard for reference.
Instrument the behaviours you targeted and track movement in conversion, margin and cycle time. Report in board‑ready language.
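As a rough illustration of that instrumentation, the sketch below computes the three 3V signals from a CRM export. The record fields (stages_reached, margin_pct, opened, closed) and the two sample opportunities are assumptions for the example, not a specific CRM schema.

```python
from datetime import date
from statistics import mean

# Minimal sketch of 3V tracking from a CRM export; field names are
# illustrative assumptions, not a specific CRM schema.
opportunities = [
    {"stages_reached": [1, 2, 3], "margin_pct": 32.0,
     "opened": date(2024, 1, 8), "closed": date(2024, 2, 19)},
    {"stages_reached": [1, 2], "margin_pct": 28.5,
     "opened": date(2024, 1, 15), "closed": date(2024, 2, 1)},
]

# Value: average margin across the cohort.
avg_margin = mean(o["margin_pct"] for o in opportunities)

# Volume: conversion from stage 2 to stage 3.
reached_2 = [o for o in opportunities if 2 in o["stages_reached"]]
reached_3 = [o for o in reached_2 if 3 in o["stages_reached"]]
stage_2_to_3 = len(reached_3) / len(reached_2) if reached_2 else 0.0

# Velocity: average cycle time in days.
avg_cycle_days = mean((o["closed"] - o["opened"]).days for o in opportunities)

print(f"Stage 2->3 conversion: {stage_2_to_3:.0%}")
print(f"Average margin: {avg_margin:.1f} pts")
print(f"Average cycle time: {avg_cycle_days:.0f} days")
```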
Governance, Privacy and Adoption
- Privacy by design—agree where practice data lives, how long it’s retained, and who can view it.
- Accessibility—ensure content and tools work for all users (captions, transcripts, keyboard navigation).
- Flow of work—keep everything within CRM and collaboration tools (Teams/Slack); avoid new portals.
Common Pitfalls
- Gigantic competency models with vague language.
- Surveys instead of evidence.
- Activity counts mistaken for quality.
- Assessments that don’t lead to a manager‑run plan.
One-Page Checklist
- Outcomes agreed in 3Vs; scope narrowed.
- Behaviour model (EQ, IQ, XQ) with 6–8 observables.
- Evidence plan using real artefacts.
- Simple rubric and calibration.
- Pilot plan: Learn → Practise → Embed with manager cadence.
- Metrics wired to conversion, margin, cycle time.
Bottom Line
Name the few behaviours that matter most, gather light evidence against a simple rubric, and turn what you learn into a manager-run plan that moves conversion, margin and cycle time.
FAQs
Q1. What is a sales capability assessment?
A1. A structured, evidence‑based review of core sales behaviours—across EQ, IQ and XQ—to identify gaps and strengths that directly link to business outcomes.
Q2. How is capability assessment different from performance reviews?
A2. Performance reviews judge results; capability assessment examines how results are achieved using observable evidence, so you can target specific behaviours.
Q3. What methods should we use to assess?
A3. Review real artefacts (call snippets, opportunity notes, proposals, practice clips), add self and manager ratings against a simple rubric, and calibrate a sample.
Q4. What counts as good evidence?
A4. Short, representative artefacts that clearly show the behaviour in context—backed by links or files—so ratings rely on facts, not anecdotes.
Q5. How long should an assessment take?
A5. 2–4 weeks for a focused cohort—fast enough to act, thorough enough to trust.
Q6. Who should be involved?
A6. Reps contribute evidence; managers score and coach; enablement calibrates; leadership agrees outcomes and removes blockers.
Q7. What metrics should we report?
A7. Behavioural standards achieved plus 3V movement: stage conversion, margin discipline and cycle‑time reduction.
Q8. Which tools help?
A8. Digital practice platforms for safe repetition, simple scoring forms for managers, and integrations with CRM and collaboration tools to keep work in flow.