Earlier this week I had the pleasure of speaking with a local company’s education team on the topic of measurement in learning and performance. It was an engaging discussion, and it left me thinking about my own journey toward my current definition of ‘measurement’.
When I first took the lead for my former employer’s learning and development team (in what seems like a lifetime ago), I remember being given the tour of the department by one of the senior team members. In one office, I noticed four three-inch binders on a shelf, each of which had about four inches of paper in it.
These binders contained the evaluations participants completed at the end of each workshop the trainers delivered. I thought this might be a good place to start in terms of seeing how the group was performing. This is a summary of our conversation.
Trainer: These binders contain the evaluations trainees complete at the end of each workshop.
Me: And what do we do with the evaluations?
Trainer: We put them in the binder…
Me: And what do we do with them at that point?
Trainer: Well… nothing. We do kinda flip through them during our annual performance review though.
Even though I had no formal education on the measurement of learning at that point, this conversation bothered me. It bothered me because it conflicted with some of the basic foundations of effective feedback: why spend energy on completing, collecting, and filing feedback forms if you don’t plan on doing anything with the information?
In addition, there was no answer to the question “What do we do when the trainees leave?” As a manager, I had seen our training department as a provider of quality workshops, but I never felt that a large percentage of the classroom training (which was all we offered at the time) was being used on the job. From the perspective of both the organization and the learner, why waste the effort if people aren't going to use the skills?
The one thing I was sure of was that I needed to treat this new department the same way I did my old one. We needed to contribute to the business, and we needed to show it; I just didn't know what the tangible contribution of learning was yet.
In accepting the role, I also accepted the responsibility to do it right. It was then that I began to formally educate myself on adult learning. In doing so, I found my passion and have been a student of the profession ever since, constantly absorbing whatever I can. In my early studies, I was searching for something that could help us answer a simple question: Are we effective in what we do?
In that search I came across a certification program that trained individuals on how to calculate the value of training. Better still, it did so in dollars and cents, presenting it as an ROI. While a common concept in the learning profession at the time, this was my first real exposure to it. I was immediately attracted to it for two primary reasons. First, I'm kind of a numbers and stats geek. Second, and more importantly, the executive who sought me out and was monitoring my progress was the company's Chief Financial Officer. When I proposed that I go through the certification process and described the value I thought it would bring, he was all but salivating.
That certification helped me get a much better understanding of how what learning and development professionals do can impact a business. Over the last decade, I have read and followed much of the work on the subject of evaluation. I have taken courses, attended conferences, and yes, listened to all sides of the endless ROI debate. I have learned a great deal and consider the concepts and processes of educational evaluation to be one of my strong points.
If you're at all familiar with the works of Kirkpatrick, Phillips, Brinkerhoff, and others, you know there are substantially different perspectives on evaluation, and they often conflict with one another. So what's the answer to the endless evaluation question then? Is it ROI or ROE? Are there four, five, or forty-five levels of evaluation? Is it even possible to measure learning?
In truth, I'm not sure there is a single answer to these questions. I don't believe there is a one-size-fits-all solution. The fact is that the definition of the 'Return' part of the equation varies. It varies based on the program, and it also varies based on the stakeholders.
There are a number of different stakeholders for any learning and performance program, each of whom would likely define ROI for the program differently. As learning professionals, we don't get to dictate what ROI means; we can educate others on how we measure our effectiveness, and work internally towards consistency, but if the stakeholders define ROI differently, well... they win.
Over the years I have used a number of evaluation techniques from all sources in order to evaluate the effectiveness of my programs. Sometimes it’s heavy on metrics, other times it’s more about intangibles; sometimes I try to isolate the effects of the program, other times just showing a correlation is good enough. There are a number of different metrics that can be applied to learning and performance programs, and plenty of different paths to get to those metrics.
That’s why for me, there is no single answer to the ROI question. It’s like the famous quote from Abraham Maslow: If you only have a hammer, you tend to see every problem as a nail. Similarly, if you only subscribe to a single theory on evaluation, you tend to see every program through that lens as well.
Having studied the subject of evaluation over the better part of a decade, I do believe there is one theme that is consistent across the evaluation spectrum.
It’s not what happens during a learning and performance program that matters; it’s what happens after the program that’s important.
Regardless of the labels used, applying the skills gained from a program, and having those skills impact important areas of growth for the individual and the organization, are the critical areas we should focus on. We should keep that in mind at every stage of our learning and performance programs.
Side Note: During the Q&A there was a discussion about the differences between the concepts of ROI and ROE. I remembered an article from a magazine that provided a good foundation to answer that question, but could not at the time recall where I saw it. It turns out it was an article from the August 2010 issue of T&D Magazine, written by James and Wendy Kirkpatrick. The article is still available online HERE.
Dave, agreed, depending upon the program you can look at ROE and/or apply more qualitative approaches like Brinkerhoff's Success Case. It's important that L&D advocate for metrics and partner, not avoid.
Dave, this is yet another excellent reflection. I don't know that we can truly measure learning in a snapshot -- we can trace a journey, maybe, if we first understand as observers that we each have a perspective that we're bringing into it, and that we can describe change that takes place over time. I think inherent in your title is that recognition -- it's a journey: one that transforms you and those you encounter as it continues.
Dave,
Nice piece - got my wheels turning. I agree wholeheartedly that event-based learning measures are really insignificant in the grand scheme. To me, all but business results are formative measures. When the rubber hits (and stays on) the road - that's summative. The problem with learning events is just that - the expectation that when the event ends, all is well. Real outcomes should be measured through and in real-world application of new knowledge and skills. If formal learning could be seen as a sandwich (sorry, it's lunchtime and I'm hungry), then the event is only the cold cuts. The bread is the precursor analysis and post-event support, which are critical but tragically, more often than not, left off completely.
David - What was the certification program you referred to that teaches how to value training?
Hi Dan-
The certification I was referencing was the Certified ROI Professional (CRP) offered by the ROI Institute. As I mentioned in the post, I don't think there is one solution to the evaluation question, but I do think this program did a great job of instilling the importance of Application and Business Impact as they apply to training, and it offers some very actionable techniques that can be used in appropriate situations.