Welcome to Unpacked: Measuring What Matters – a fortnightly series where we focus on how to measure learning success.
Learning teams need a clear map if they are to have real impact. There is no single silver bullet. Instead, there is a set of tried-and-tested frameworks, each suited to different questions and maturity levels. This article outlines the core frameworks for L&D, shows when to use each one, and offers actionable steps for combining them so that measurement genuinely drives better work.
Why frameworks matter
Frameworks give learning leaders a shared vocabulary and a repeatable set of steps. They turn loose conversation about “value” into concrete questions: Did behaviour shift? Did performance improve? Did the organisation gain a tangible benefit? Where measurement is clear, L&D is credible and effective. Where it is undefined, training is just another expense line on a spreadsheet.
Frameworks are not competing religions. Think of them as complementary paths. Some are strict about experimental design and statistical rigor, while others emphasize practical insight and narrative. Your job is to choose the path that answers the business question in front of you.
In this article, we’ll explore the most widely used frameworks for measuring learning impact, from classic models like Kirkpatrick and Phillips to newer, data-driven approaches such as LTEM and xAPI, and show how each one helps connect learning to performance and results.
The core frameworks: what they are and when to use them
1. Kirkpatrick Four Levels
Best for: when you need a widely understood, end-to-end structure for evaluating a program, from reaction through to business results.
What it is: Donald Kirkpatrick developed this model in the 1950s, and it remains a benchmark for evaluating training programs. It measures four levels:
- Reaction – how learners felt about the learning experience
- Learning – what knowledge and skills they acquired
- Behavior – how they apply those skills at work
- Results – the business outcomes achieved
Why it works: It gives structure to an otherwise vague process. Kirkpatrick challenges L&D teams to move beyond activity-level metrics (e.g., satisfaction scores) to evidence of behavior change and business results. It spans both the classroom and the work environment, a link many programs lack. Used effectively, it tells a story leaders respond to: from engagement to measurable results.
Limitations: It is easy to apply superficially. Most teams stall at Levels 1 and 2 and never get around to measuring behavior or results. To use Kirkpatrick properly, commit to data collection that follows people into the workplace, and review the New World Kirkpatrick guidance, which explicitly connects the levels and builds in accountability.
Practical start: On any new initiative, capture Level 1 reaction and Level 2 learning data. Schedule a Level 3 behaviour observation or a 30- to 90-day follow-up. Define a Level 4 business measure during design.
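To make that concrete, here is a minimal sketch of what a four-level evaluation plan could look like in code; the program, metrics, and timings are illustrative placeholders, not recommendations.

```python
# Illustrative only: a simple four-level evaluation plan for one program.
# Metric names, targets, and timings are hypothetical examples.
evaluation_plan = {
    "program": "New manager onboarding",
    "level_1_reaction": {"metric": "post-session survey score", "when": "end of session"},
    "level_2_learning": {"metric": "scenario-based assessment pass rate", "when": "end of program"},
    "level_3_behaviour": {"metric": "manager observation checklist", "when": "30-90 days after"},
    "level_4_results": {"metric": "team engagement and retention trend", "when": "quarterly"},
}

for level, plan in evaluation_plan.items():
    if level != "program":
        print(f"{level}: {plan['metric']} ({plan['when']})")
```

Even a plan this small forces the Level 3 and Level 4 conversation to happen at design time rather than after delivery.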
2. Phillips ROI Model
Best for: when stakeholders ask, “What is the monetary return on this investment?”
What it is: Jack Phillips extended Kirkpatrick’s model by adding a fifth level: Return on Investment (ROI). It quantifies the value of learning by comparing program costs with measurable business gains. The process includes isolating the training effect, converting improvements to financial metrics, and calculating a benefit-cost ratio.
Why it works: It speaks the language of executives: money. By translating learning into financial terms, the Phillips Model helps justify budgets and shift learning from a cost center to a performance enabler. While calculating ROI precisely can be complex, even partial application encourages sharper goal setting and clearer alignment with business metrics.
Limitations: Attribution is hard. Isolating the training effect requires good experimental design or reasonable assumptions. The process can be time consuming and may not be justified for every program. Use it selectively for major investments.
Practical start: Pick one flagship program. Agree baseline business metrics. Use a control group where possible or triangulate with multiple data sources. Document the assumptions openly.
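As a rough sketch of the arithmetic, assuming illustrative figures and the kind of attribution and confidence adjustments Phillips-style evaluations typically document, the benefit-cost ratio and ROI calculation might look like this:

```python
# Illustrative only: all figures below are hypothetical.
program_cost = 120_000     # design, delivery, participant time, etc.
gross_benefit = 500_000    # measured business gain over the evaluation period
attribution = 0.5          # estimated share of the gain attributable to training
confidence = 0.8           # confidence in that estimate (conservative adjustment)

adjusted_benefit = gross_benefit * attribution * confidence
bcr = adjusted_benefit / program_cost                            # benefit-cost ratio
roi_pct = (adjusted_benefit - program_cost) / program_cost * 100

print(f"Adjusted benefit: {adjusted_benefit:,.0f}")
print(f"BCR: {bcr:.2f}, ROI: {roi_pct:.0f}%")
```

Whatever figures you use, write the attribution and confidence assumptions down where stakeholders can challenge them.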
3. Brinkerhoff Success Case Method (SCM)
Best for: when you need practical, evidence-rich stories and quick insight into what works.
What it is: Developed by Robert Brinkerhoff, SCM is a targeted evaluation approach that identifies the participants who achieved the most success and those who achieved the least from a learning program. The method involves structured interviews, surveys, and sometimes observation to explore why these outcomes occurred. Unlike broad surveys or purely quantitative measures, SCM prioritizes the stories of real impact. The result is a mix of qualitative narratives and selective quantitative evidence that highlights both the enablers and barriers to effective learning transfer.
Why it works: SCM works because it combines human insight with measurable results. Stories resonate with executives and decision-makers, showing not just that learning occurred but how it translated into behaviour and performance. By spotlighting high-impact cases, teams can identify patterns, effective practices, and gaps, creating actionable intelligence for program improvement. It’s especially powerful in environments where organizational culture, leadership, or operational context significantly influence whether learning sticks. SCM helps turn abstract evaluation data into compelling, actionable insights.
Limitations: SCM is not a population-level, statistically rigorous evaluation. It can be subject to selection bias and does not capture every learner’s outcome. It is best paired with broader quantitative measures to provide a fuller picture of impact.
Practical start: Run SCM as a quarterly pulse on new programs. Use it to refine design, identify blockers, and collect the kinds of outcome evidence that feed OKRs or KPIs. Capture success stories that can be shared with stakeholders to demonstrate tangible impact and inspire adoption.
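If you already collect a post-program outcome score, a minimal sketch of the first SCM step, shortlisting the most and least successful participants to interview, could look something like this (the scores and cut-offs are hypothetical):

```python
# Illustrative only: pick extreme cases from a hypothetical outcome survey
# so interviews focus on the most and least successful participants.
responses = [
    {"learner": "A", "applied_score": 9},
    {"learner": "B", "applied_score": 2},
    {"learner": "C", "applied_score": 7},
    {"learner": "D", "applied_score": 3},
    {"learner": "E", "applied_score": 8},
]

ranked = sorted(responses, key=lambda r: r["applied_score"], reverse=True)
success_cases = ranked[:2]       # interview these to learn what enabled transfer
non_success_cases = ranked[-2:]  # interview these to surface barriers

print("Success cases:", [r["learner"] for r in success_cases])
print("Non-success cases:", [r["learner"] for r in non_success_cases])
```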
4. Learning Transfer Evaluation Model (LTEM)
Best for: practitioners focused on the science of learning and ensuring knowledge and skills transfer to the workplace.
What it is: Developed by Will Thalheimer, LTEM is a research-informed evaluation framework that goes beyond traditional measures of completion or satisfaction. It breaks learning evaluation into multiple levels, starting from engagement and moving through knowledge acquisition, skill demonstration, retention, and finally transfer to the job. LTEM emphasizes assessments and data points that better predict whether learners will apply their learning effectively in real-world contexts. The framework encourages designing evaluation in parallel with learning, not as an afterthought, making it a proactive tool for measuring impact.
Why it works: LTEM works because it aligns measurement with how people actually learn and apply knowledge. Rather than relying on superficial quizzes or surveys, it emphasizes meaningful assessments, scenario-based exercises, and observation methods that indicate real workplace application. This approach provides actionable insights into which learning elements are effective and which are not, enabling L&D teams to iterate on design and maximize ROI. LTEM bridges the gap between theoretical knowledge and practical performance, giving leaders confidence that learning initiatives contribute to measurable results.
Limitations: LTEM can demand more thoughtful assessment design and richer data capture, which may increase upfront effort. It is better suited for teams willing to invest in measurement quality and for programs where transfer to work is a critical success factor.
Practical start: Use LTEM to audit your existing assessments. Replace low-value quizzes with scenario tasks, applied exercises, or simulations that correlate strongly with on-the-job performance. Integrate measurement early in course design to continuously track whether learning is translating into action and impact.
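One lightweight way to begin that audit is to label each existing assessment with the kind of evidence it produces and flag the weak ones for redesign; the sketch below uses simplified, hypothetical tier labels rather than the full LTEM hierarchy.

```python
# Illustrative only: a simplified audit of assessments by evidence strength.
# The evidence labels are a hypothetical simplification, not the full LTEM model.
assessments = [
    {"item": "End-of-module quiz", "evidence": "knowledge recall"},
    {"item": "Customer-call simulation", "evidence": "task competence"},
    {"item": "Smile-sheet survey", "evidence": "learner perception"},
    {"item": "Manager observation at 60 days", "evidence": "transfer"},
]

weak_evidence = {"learner perception", "knowledge recall"}
to_redesign = [a["item"] for a in assessments if a["evidence"] in weak_evidence]
print("Candidates to replace with scenario or applied tasks:", to_redesign)
```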
5. xAPI and Learning Analytics
Best for: teams that need continuous, cross-system data to track learner behaviour, engagement, and outcomes over time.
What it is: xAPI, also known as the Experience API or Tin Can API, is a specification that allows tracking of detailed learning experiences across multiple platforms, devices, and formats, from online courses and simulations to mentoring sessions, mobile learning, and even offline activities. Paired with a Learning Record Store (LRS) and analytics tools, xAPI captures rich, granular data on what learners do, how they interact with content, and how they progress along their learning paths. Learning analytics then takes this data and converts it into actionable insights, visualized in dashboards, reports, or predictive models.
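To make the data model concrete, here is a minimal sketch of an xAPI statement (actor, verb, object) being posted to a Learning Record Store; the LRS endpoint, credentials, and activity ID are placeholders, and your vendor's authentication details will differ.

```python
# Illustrative only: a minimal xAPI "completed" statement posted to an LRS.
# The LRS URL, credentials, and activity ID below are placeholders.
import requests

statement = {
    "actor": {"mbox": "mailto:jane.doe@example.com", "name": "Jane Doe"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.com/courses/coaching-basics",
               "definition": {"name": {"en-US": "Coaching Basics"}}},
}

response = requests.post(
    "https://lrs.example.com/xapi/statements",   # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_user", "lrs_password"),           # placeholder credentials
)
response.raise_for_status()
```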
Why it works: xAPI and learning analytics work because they provide a real-time, comprehensive view of learning behaviour and outcomes. Unlike traditional LMS reports, which often only track completion or scores, xAPI captures the context, sequence, and quality of learner interactions. Analytics can then identify trends, correlations, and predictive signals. For example, which learning activities drive performance improvements or where learners are likely to struggle. This allows L&D teams to intervene proactively, improve learning design, and demonstrate concrete links between learning and business results.
Limitations: Implementing xAPI and analytics requires technical setup, proper governance, and attention to data privacy and integration with HR, CRM, or other business systems. Without careful planning, the volume of data can be overwhelming, and insights may be lost.
Practical start: Begin by identifying two or three key learning activities to track with xAPI. Send the statements to a learning record store and create a simple dashboard that links these learning behaviours to relevant business metrics. Over time, expand tracking and analytics to include more learning pathways and predictive insights.
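As a sketch of that “simple dashboard” step, the join itself can be as basic as merging an LRS export with a business metric by learner; the file names and columns below are assumptions about how your data might be laid out.

```python
# Illustrative only: join LRS activity data with a business metric per learner.
# File names and column names are hypothetical.
import pandas as pd

lrs_export = pd.read_csv("lrs_statements.csv")   # e.g. learner_id, activity, completed_at
sales_data = pd.read_csv("quarterly_sales.csv")  # e.g. learner_id, sales_per_quarter

completions = (
    lrs_export[lrs_export["activity"] == "negotiation-sim"]
    .groupby("learner_id").size().rename("sim_completions").reset_index()
)

merged = sales_data.merge(completions, on="learner_id", how="left").fillna({"sim_completions": 0})

# Compare the business metric for learners who did vs. did not complete the activity.
print(merged.groupby(merged["sim_completions"] > 0)["sales_per_quarter"].mean())
```

A comparison like this shows correlation, not causation, so treat it as a prompt for deeper investigation rather than proof of impact.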
6. OKRs and KPIs as Measurement Frameworks
Best for: aligning L&D efforts with business strategy and creating clear, time-bound goals that link learning to measurable outcomes.
What it is: Key Performance Indicators (KPIs) are specific, trackable metrics used to monitor ongoing processes and performance, such as completion rates, time-to-competency, or learner satisfaction. Objectives and Key Results (OKRs), on the other hand, are time-boxed goals designed to drive change. An objective defines what you want to achieve, while the key results define how you will measure success. In L&D, OKRs often translate strategic priorities — like reducing ramp-up time for new hires or increasing customer service scores — into actionable, measurable learning initiatives.
Why it works: OKRs and KPIs work together because they connect daily learning activity to strategic outcomes. KPIs provide steady-state visibility, showing whether programs are running as expected, while OKRs encourage learning teams to stretch and focus on real performance improvement. They create accountability, transparency, and alignment with broader organizational objectives. When paired with evaluation frameworks like Kirkpatrick, LTEM, or xAPI analytics, OKRs and KPIs ensure that learning is not just delivered, but drives measurable impact on the business.
Limitations: OKRs require careful design to avoid being gamed or seen as punitive. KPIs can sometimes anchor teams to maintaining the status quo. Without linkage to evaluation data, both frameworks risk becoming reporting exercises rather than instruments for change.
Practical start: Co-create one or two L&D OKRs with business stakeholders each quarter. Ensure each key result maps to a measurable KPI or evaluation level. Use KPIs to monitor ongoing program health, while OKRs drive focused initiatives to improve performance or outcomes.
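A minimal sketch of what one such OKR could look like, with each key result mapped to a monitored KPI (the objective, baselines, and targets are illustrative):

```python
# Illustrative only: one L&D OKR with each key result mapped to a monitored KPI.
okr = {
    "objective": "Cut ramp-up time for new customer-support hires",
    "key_results": [
        {"kr": "Reduce average time-to-competency from 12 to 8 weeks",
         "kpi": "time_to_competency_weeks", "baseline": 12, "target": 8},
        {"kr": "Raise first-contact resolution for new hires from 70% to 80%",
         "kpi": "first_contact_resolution_pct", "baseline": 70, "target": 80},
    ],
}

for kr in okr["key_results"]:
    print(f"{kr['kpi']}: {kr['baseline']} -> {kr['target']}")
```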
How to choose a framework: a decision guide
Start with the question you want to answer. Trying to show ROI on a major investment? Phillips ROI is the clear choice. Need to demonstrate behaviour change in your teams? Kirkpatrick Level 3 or LTEM paired with analytics will give you the evidence you're looking for. Need compelling stories for leadership? SCM stands out here. And if you need continuous insight across multiple learning platforms, xAPI with analytics is the answer.
Next, think about scale and risk. Big-ticket initiatives warrant more robust evaluation, such as Phillips ROI or controlled comparisons. Fast experiments and smaller projects benefit more from OKRs and LTEM, which provide timely feedback.
Also consider what data and capability you already have. If your systems are not integrated, start with Kirkpatrick and SCM as you build your data foundation. If your stack is in place, take advantage of xAPI and analytics, and consider Phillips ROI for your largest programs.
Finally, think about what the insights will be used for. Will they feed faster design iterations? Focus on LTEM and SCM. Are you building a budget case? Phillips ROI makes the most convincing argument. Do you need to demonstrate strategic alignment? OKRs and KPIs make that connection.
Practical playbook: consolidating frameworks into one flow
You do not have to use a single framework on its own. A simple way to combine them is to map them onto the stages of a learning program.
Design stage: Start by defining OKRs and KPIs that are tied to your business objectives. Define success in terms of quantifiable impact.
Build stage: Use LTEM to design tests that predict how learning will translate to work. Capture key learning activity with xAPI where feasible.
Deploy stage: Track early indicators such as Kirkpatrick Levels 1 and 2 to catch problems quickly. Collect SCM stories from early adopters to see real-world success in action.
Assess stage: Monitor behavior change (Level 3) through observation or system metrics. Use analytics to triangulate results and get a deeper understanding.
Prove stage: For big programs, translate results into monetary measures with Phillips ROI. For ongoing programs, update your KPIs and OKRs and revise them based on what you learn.
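Pulled together, the flow above could be captured as a lightweight per-stage checklist; the stage names follow this playbook, while the framework pairings and focus notes are illustrative.

```python
# Illustrative only: the blended flow above expressed as a per-stage checklist.
measurement_flow = [
    ("Design", "OKRs / KPIs", "agree business-aligned objectives and baselines"),
    ("Build", "LTEM + xAPI", "design predictive assessments; instrument key activities"),
    ("Deploy", "Kirkpatrick L1-L2 + SCM", "track early signals; collect early success stories"),
    ("Assess", "Kirkpatrick L3 + analytics", "monitor behaviour change via observation and system data"),
    ("Prove", "Phillips ROI / OKR review", "monetise results for flagship programs; update goals"),
]

for stage, frameworks, focus in measurement_flow:
    print(f"{stage}: {frameworks} -> {focus}")
```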
By combining frameworks like this, you can have both the numbers and the stories without overwhelming your team. Each method supports the others, giving you a clearer picture of impact.
Why blend frameworks?
No single framework captures every dimension of learning impact. Each excels in different areas: some focus on behavior, others on cost, assessment validity, or compelling narrative. By blending frameworks, learning teams can cover multiple bases, meet stakeholder requirements, and build strong evidence of impact. This layered approach lets L&D move beyond activity reporting and start to show hard value without overcomplicating the process.
Common pitfalls
Even experienced teams fall into the same traps. One of the biggest is stopping at activity metrics. Completion rates on their own aren't sufficient; always connect learning to at least one behavior or business metric.
Another is poor baselines. Absent a baseline, it’s tough to show change.
Data silos can trap you too. Don’t rely solely on LMS data. Combine it with HR, finance, or other relevant systems.
Over-measuring is another pitfall. Concentrate on a few meaningful measures instead of spreading your effort too thin.
Lastly, be careful with OKRs. They should challenge and motivate your team, not feel punitive or be tied directly to performance reviews.
Closing thought
Measurement in L&D is less about checking boxes and more about making results visible. The models and tools you apply should provide confidence, not add undue complexity. Start with a few small but meaningful measures, blend approaches deliberately, and use the findings to continually improve programs.
Once learning moves from measuring activity to proving concrete impact, it stops being just a cost and becomes a genuine business asset. Over time, these lessons can be used to make better decisions, establish credibility, and empower your team to decide which initiatives really create meaningful change.
Further reading
- Kirkpatrick Partners — The Kirkpatrick Model and companion guides. https://www.kirkpatrickpartners.com
- Whatfix — The Phillips ROI Model explained. https://www.whatfix.com
- Phillips ROI Institute — Measuring ROI: process and trends. https://www.roiinstitute.net
- Watershed / TTC Innovations — Brinkerhoff Success Case Method overview. https://www.watershedlrs.com
- Will Thalheimer — Learning Transfer Evaluation Model (LTEM). https://www.work-learning.com
- xAPI.com and Watershed — xAPI overview and learning analytics resources. https://xapi.com
- Coursera — OKRs for Learning and Development guidance. https://www.coursera.org
- Tability & OKRify — Practical OKR examples and templates for L&D. https://www.tability.io
- WhatFix / Digital Adoption Blog — Comparative guides to evaluation models. https://www.whatfix.com/blog