Introduction to the Brinkerhoff Success Case Method (SCM) and its historical origins

If you have followed this series from Kirkpatrick through Phillips, a pattern should already be clear. Most organisations do not struggle with knowing that learning happened. They struggle to explain where it made a difference. Completion rates, satisfaction scores, and even well-designed evaluations can still leave leaders asking the same question: what actually worked, and why?

This is where the Brinkerhoff Success Case Method fits into the measurement toolkit. Rather than trying to measure everything, SCM takes a deliberate step back. It focuses on identifying the few cases where learning led to meaningful performance improvement, as well as the cases where it did not, and learning deeply from both. The goal is insight that leads to action, not perfect data.

The method was developed by Robert O. Brinkerhoff in the early 2000s as a response to the growing complexity and cost of traditional evaluation approaches. Brinkerhoff argued that organisations were spending too much time measuring activity and not enough time understanding impact. His foundational work, most notably The Success Case Method, positioned SCM as a practical, evidence-based way to surface real results quickly and credibly.

SCM still matters because it is fast, focused, and grounded in reality. When time, budget, or data maturity are limited, it helps learning teams answer the questions that matter most and improve performance where it counts.

SCM in 60 seconds: Identify the most successful and least successful outcomes of a learning initiative, study them closely, and use those insights to strengthen learning impact and business results.

What the Success Case Method is

At its core, the Success Case Method is a focused evaluation approach that asks a very specific question: where did this learning make a real difference, and what can we learn from that? Instead of trying to measure average impact across an entire population, SCM deliberately zooms in on the extremes. It identifies the most successful and least successful instances of a learning intervention and studies them in depth.

The logic is simple but powerful. If learning led to meaningful performance improvement for some people, understanding how and why that happened is far more useful than knowing that most participants rated the course four out of five. And the reverse is true too. If learning did not translate into impact, that failure often hides barriers that metrics alone cannot explain. SCM prioritises insight over completeness, and explanation over scale.

Practically, this means SCM relies on targeted data collection rather than one-size-fits-all measurement. A short quantitative screening survey is used to surface likely success and non-success cases. These are followed by structured interviews and evidence gathering to understand context, behaviour change, and outcomes. The emphasis is on credible stories supported by evidence, not anecdotes without substance.

Seen alongside other frameworks in this series, SCM plays a complementary role. Kirkpatrick helps teams think systematically about reactions, learning, behaviour, and results. Phillips adds financial discipline through ROI. SCM, by contrast, is most valuable when the question is not how much impact there was on average, but how impact actually happens in the real world. It can stand alone for rapid evaluation, or be used alongside broader measurement approaches to add depth and meaning.

The outputs of an SCM evaluation are deliberately practical. They typically include a small number of well-evidenced success stories, a clear analysis of barriers and enablers, and concrete recommendations for improving learning design, support, and performance conditions. Where data allows, these insights can also contribute to a credible view of impact or value, even if a formal ROI calculation is not the goal.

The real value of the Success Case Method lies in what it enables next: better decisions about where to invest, what to fix, and how to scale learning that genuinely improves performance.

Steps for implementing SCM: identifying critical cases, collecting data, analysing results

This is where the Success Case Method becomes operational. The strength of SCM lies not in complexity, but in disciplined focus. Each step is designed to move quickly from insight to action, while still producing evidence that leaders can trust. What follows is a practical way to apply the method without turning it into a heavy evaluation exercise.

Step 1. Planning & scoping

Every SCM evaluation starts with clarity. Before any data is collected, the purpose of the evaluation needs to be explicit. This means agreeing on why the evaluation is being done and what decisions it is meant to inform. Typical stakeholders include a business sponsor who cares about performance outcomes, an L&D owner responsible for the intervention, and, where available, a data owner who can help access relevant performance information.

Success criteria should be defined upfront. In SCM, success is not completion or satisfaction. It is observable change, such as improved behaviour, better performance, or movement in a business metric that the learning was designed to influence. Being precise here prevents the evaluation from drifting later.

Scoping also includes deciding who is in and out of scope. This might involve specific roles, regions, or cohorts. Finally, agree on deliverables and timelines. SCM can be run as a rapid six-to-eight-week exercise for quick insight, or extended over a longer period when deeper evidence is required.

Step 2. Quantitative screening survey (identify likely success / non-success cases)

The screening survey is the engine that makes SCM efficient. Its purpose is not to measure impact precisely, but to quickly surface where impact is most and least likely to have occurred. The survey should be short and focused, typically no more than five to eight questions.

Effective questions concentrate on whether participants applied the learning, how often they did so, and what difference it made. This is also where respondents can indicate their willingness to be interviewed. The survey can be sent to the full participant group or to a representative sample, depending on size and feasibility.

The output of this step is a ranked view of responses that highlights potential success and non-success cases. This allows the evaluation effort to be concentrated where it will generate the most insight.
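
To make that ranking concrete, here is a minimal sketch of how screening responses might be scored and ordered. The field names, the 1-to-5 scales, and the equal weighting of the three items are illustrative assumptions, not part of the method itself:

```python
# Illustrative only: rank screening-survey respondents by a simple
# impact score so likely success and non-success cases surface first.
# Field names and the 1-5 scales are hypothetical.
responses = [
    {"id": "R01", "applied": 5, "frequency": 4, "difference": 5},
    {"id": "R02", "applied": 2, "frequency": 1, "difference": 1},
    {"id": "R03", "applied": 4, "frequency": 3, "difference": 4},
    {"id": "R04", "applied": 1, "frequency": 1, "difference": 2},
]

def impact_score(r):
    # Unweighted sum of the three screening items: whether the learning
    # was applied, how often, and what difference it made.
    return r["applied"] + r["frequency"] + r["difference"]

ranked = sorted(responses, key=impact_score, reverse=True)
print([r["id"] for r in ranked])  # most likely success cases first
```

In practice this would run over exported survey data rather than a hand-built list, and the weighting of items is a design choice worth agreeing with stakeholders upfront.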

Step 3. Selecting critical cases

From the screening results, a small number of critical cases are selected for deeper investigation. These usually include the strongest success cases and the weakest or non-impact cases. Selection can be based on a percentage of top and bottom responses, or a fixed number per role or region.

Purposeful sampling matters here. Including diversity across contexts helps avoid drawing conclusions from a narrow set of experiences. At this stage, consent should be confirmed and expectations about confidentiality and anonymisation made clear. Trust is essential if participants are going to share honestly.
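
The selection logic itself is simple enough to sketch. This hypothetical helper takes the ranked screening responses and picks the top and bottom slices, keeping only respondents who have consented to an interview. The 10% fraction and the minimum of three per group are assumptions, not fixed rules:

```python
# Illustrative only: pick critical cases from a ranked list of
# screening responses. "interview_ok" records interview consent.
def select_critical_cases(ranked, fraction=0.10, minimum=3):
    consenting = [r for r in ranked if r["interview_ok"]]
    n = max(minimum, round(len(consenting) * fraction))
    n = min(n, len(consenting) // 2)  # avoid overlap in tiny samples
    return consenting[:n], consenting[-n:]  # (success, non-success)

# Hypothetical input: already ranked from strongest to weakest case.
ranked = [{"id": f"R{i:02d}", "interview_ok": i % 4 != 0} for i in range(1, 21)]
success, non_success = select_critical_cases(ranked)
print([r["id"] for r in success], [r["id"] for r in non_success])
```

A percentage cut like this is only a starting point; the purposeful-sampling step above may still mean swapping cases in or out to cover different roles or regions.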

Step 4. In-depth qualitative interviews

Interviews are where SCM generates its richest insight. The aim is to understand what actually happened after the learning, in the context of real work. Interviews explore how the learning was used, what changed as a result, and what helped or hindered success.

A structured interview guide keeps conversations focused while allowing space for stories to emerge. Participants should be asked to provide examples and, where possible, supporting evidence such as reports, metrics, or manager feedback. This triangulation strengthens credibility and moves the conversation beyond opinion.

Step 5. Analysis & synthesis

Once interviews are complete, patterns begin to emerge. Analysis typically involves identifying common success factors and barriers, and linking them to organisational conditions such as manager support, workload, systems, or incentives.

The output is not a complex statistical model, but a clear synthesis. This often takes the form of short success-case narratives and a simple matrix showing what worked and what did not. Where relevant data exists, qualitative insights can be mapped back to performance indicators mentioned by participants.
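
One way to keep that synthesis honest is to tally enablers and barriers per group rather than recalling them impressionistically. A minimal sketch, using entirely hypothetical interview codings:

```python
from collections import Counter

# Illustrative only: tally factors mentioned in success vs non-success
# interviews to build the "what worked / what didn't" matrix.
interviews = [
    {"group": "success", "factors": ["manager coaching", "time to practise"]},
    {"group": "success", "factors": ["manager coaching", "access to data"]},
    {"group": "non-success", "factors": ["high workload", "no coaching"]},
    {"group": "non-success", "factors": ["high workload", "unclear role"]},
]

def tally(group):
    # Count how often each factor is mentioned within one group.
    return Counter(f for i in interviews if i["group"] == group
                   for f in i["factors"])

enablers, barriers = tally("success"), tally("non-success")
print(enablers.most_common())  # most frequently cited enablers first
print(barriers.most_common())  # most frequently cited barriers first
```

The counting is trivial; the real work is the consistent coding of interview notes into a shared set of factor labels before anything is tallied.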

Step 6. Reporting & actioning

The final step is turning insight into action. SCM reporting should be concise and decision-focused. Common deliverables include an executive one-pager, a small set of anonymised success stories, and a list of practical recommendations.

Recommendations should be prioritised, clearly owned, and realistic. Some actions can be implemented immediately, such as changes to learning design or support materials. Others may point to broader changes in roles, processes, or performance management. The value of SCM is realised not in the report itself, but in how quickly its insights are used to improve learning impact.

When and why to use SCM

The Success Case Method is not designed to be used everywhere. It is most effective when learning teams need clarity quickly and when the goal is to understand impact, not simply to report on activity. SCM works particularly well for pilot programmes, new learning initiatives, or interventions aimed at addressing a specific performance problem. In these situations, leaders are often less interested in averages and more interested in whether the learning can work, under what conditions, and for whom.

SCM is also a strong fit when outcomes vary widely across a group. When some people apply the learning successfully while others struggle, traditional evaluation approaches tend to flatten those differences. SCM does the opposite. It makes variation visible and turns it into insight that can be acted on.

One of the key benefits of SCM is speed. Because it focuses on a small number of cases, it can generate credible insight with relatively low cost and effort. The stories it produces are grounded in evidence and easy for stakeholders to understand, which makes them powerful tools for building confidence in learning initiatives and influencing decisions about improvement or scale.

SCM does not replace broader measurement approaches. Instead, it complements them. Use it when you need depth on impact quickly, and combine it with surveys, performance data, or ROI analysis when the organisation requires numeric evidence at scale. Used this way, SCM helps learning teams move faster, learn smarter, and focus investment where it delivers real performance results.

Limitations and pitfalls (when not to use SCM)

While the Success Case Method is powerful, it is not appropriate for every evaluation scenario. SCM is not designed for situations where statistically generalisable results are required across an entire population, nor for compliance-driven programmes where complete coverage and uniform evidence are essential. It is also a poor fit when stakeholders require precise, defensible dollar-value ROI calculations without access to supporting operational or financial data.

There are also inherent bias risks to manage. Because SCM deliberately focuses on extremes, selection bias can occur if success cases are overrepresented or if non-success cases quietly drop out. Survivorship bias and positive-story bias may further distort findings if evaluators prioritise compelling narratives over disconfirming evidence.

Common practical pitfalls include poorly designed screening surveys, leading or overly confirmatory interview questions, weak triangulation with performance data, and analysing cases without sufficient attention to organisational context such as incentives, systems, or managerial support.

These risks can be mitigated through transparent sampling logic, careful survey design, neutral and structured interview protocols, and triangulation with operational metrics wherever possible. Crucially, SCM reports should clearly state their limitations, ensuring findings are interpreted as directional, insight-rich evidence, not universal proof.

Case examples

The Success Case Method is most convincing when it is grounded in real work. Below are two practical examples of how SCM has been used to surface impact, improve learning design, and strengthen performance outcomes. Where specific public case studies are not available, the examples are anonymised to protect organisational privacy.

Example 1: South African retail bank (anonymised)
A South African retail bank introduced a blended sales capability programme for frontline consultants. The goal was to increase cross-sell performance and reduce customer drop-off. The learning team used SCM to understand why some consultants were consistently converting customers while others struggled, even though they had completed the same training.

A short screening survey identified high and low performers. The evaluation team then conducted in-depth interviews with a small number of consultants from each group, supplemented by manager input. The results revealed that success was strongly linked to manager coaching, access to customer data, and the ability to practise skills in real time. Non-success cases were often hindered by high workload, limited coaching, and a lack of role clarity.

The bank used these insights to redesign manager enablement, introduce micro-practice sessions, and adjust performance expectations. Within six months, the bank reported improved conversion rates in the branches where the changes were implemented.

Example 2: International manufacturing firm
A global manufacturing company used SCM to evaluate a safety training programme rolled out across multiple sites. The goal was to understand why safety behaviours improved in some plants but not others. SCM identified strong success cases in plants where leaders consistently reinforced safety practices and where employees had access to the right tools and time to apply what they learned.

In sites where impact was weak, the evaluation uncovered barriers such as production pressure, inadequate equipment, and unclear safety ownership. The company used these findings to shift focus from training alone to improving systems and leadership accountability.

How to explain SCM to a sceptical stakeholder

SCM is a fast, evidence-based way to answer the question: what works, for whom, and why? Instead of trying to measure everything, SCM focuses on the strongest and weakest outcomes, then uses interviews and evidence to explain the difference. The result is clear insight that can be acted on quickly, especially when time and data are limited.

How to anonymise and package SCM stories for stakeholders (dos and don’ts)

Do: Use role-based identifiers (e.g., “Branch Manager, Gauteng”), remove identifiable details, and focus on behaviours and outcomes.
Do: Pair stories with evidence (metrics, manager feedback, or work artefacts).
Do: Present both success and non-success cases to show balance.
Don’t: Use real names or details that could identify individuals.
Don’t: Treat stories as proof; treat them as evidence to inform decisions.

Further reading

Success Case Method: Find Out Quickly What’s Working and What’s Not
https://www.ojp.gov/library/abstracts/success-case-method-find-out-quickly-whats-working-and-whats-not

Success Case Method (overview and approach)
https://www.betterevaluation.org/methods-approaches/approaches/success-case-method

Training and capacity building evaluation: Maximizing resources and results with Success Case Method (abstract)
https://www.sciencedirect.com/science/article/abs/pii/S0149718915000415

Success Case Method: Combining hard and soft data – What’s the value added
https://www.brinkerhoffevaluationinstitute.com/post/success-case-method-combining-hard-and-soft-data—whats-the-value-added

The Benefits of the Brinkerhoff Success Case Method
https://www.l-ten.org/Web/Web/News—Insights/focus-articles/Benefits-of-the-Brinkerhoff-Success-Case-Method.aspx