
Wednesday, September 21, 2011

Resources Shared at Recent ASTD / SHRM Presentations



I recently had the privilege of speaking at a joint ASTD / SHRM Chapter Meeting. The topic of the evening's discussion was The Importance of Measurement in Learning and Performance.

The discussion covered a brief history of measurement of learning and performance, as well as where measurement may be going in the future.  This blog post collects the resources that I shared as well as other resources that can continue the discussions and learning from the presentation.

BOOKS
COMMUNITIES
WEBSITES AND WEB PAGES

Tuesday, April 5, 2011

Reflections on #lrnchat: Learning Analytics

Image use courtesy of lrnchat and Kevin Thorn (@LearnNuggets)

Each week that I am able to participate in the #lrnchat discussion, I post a summary of the discussion to my blog. I do this both for my personal development and to share with the Learning and Development profession at large. This summary is based on my own interpretations of the chat; others who participated may have differing opinions or interpretations of the discussion. I welcome those who do to add their ideas in the comments.

 
The topic of this week's #lrnchat session was "Learning Analytics".

I always find that looking at the questions used to loosely guide the chat is a nice way to see the overall theme of the discussion. Here are the discussion questions that were presented to the group:

Q1) What learning data does your org collect? Why? What problem is org trying to solve?
Q2) Are there ethical issues in collecting learning data?
Q3) How are you collecting formative and summative data now?
Q4) What about informal and social learning? Can we/should we try to measure that? How?

If you want to gauge the effectiveness or progress of something, you need to collect some sort of data that can be analyzed.  Sometimes the data is very simple, like the data collected from a gas tank, analyzed and displayed to show you how much gas is left in the tank.  Sometimes the data collected is much more detailed and robust, like the seemingly endless statistics collected by sports organizations.

Collecting data for analysis as a means of gauging performance is a practice that exists in almost every aspect of life, including the learning and performance departments of organizations.

When you hear about the effectiveness of a learning program, the discussion often focuses on the effects of the program.  You'll hear about changes in performance, the effect the changes in performance are having on the organization, and perhaps a discussion on the ROI of the program.

What you don't hear about as often is the data that has been collected and analyzed that enabled the learning department to come to these conclusions.  An analysis is only as credible as the data behind it.

This week's #lrnchat focused on the data collection methods used in Learning and Performance departments.

The discussion started by sharing the types of data being collected by learning departments, as well as exploring the purposes for collecting the data.

The majority of the responses confirmed some of the shackles we place on ourselves within our organizations.  What we are tracking are mostly widgets: number of course completions, test grades, and the proverbial 'butts in the seats'.  The other data collected by many was reaction data from program participants, commonly referred to as 'Smile Sheets'.

Think about that for a moment.  The two most commonly collected forms of data are respected so little by organizations that there are commonly known phrases that openly mock their value.  That's a problem.

I think part of the problem, historically, is the learning function providing the data that is asked for as opposed to the data that is needed.  If you don't understand the metrics of a field, chances are you're going to ask about the obvious widgets associated with the topic.

In the case of learning, the most common widget is 'butts in the seats'.  I point the finger of blame for this in two directions.

The first area I see causing this issue is Compliance Training.  In many organizations, this is the first, and possibly only, learning program for which data is requested.  It's also training that is often less about performance improvement and more about being able to report that the required training took place.

In my experience, most stakeholders ask compliance-related questions that fall under the theme of "Did everyone complete it?" and not "Is everyone's performance meeting the compliance standards?"  The data being asked for, by both the organization and the regulators that enforce the compliance standards, is 'butts in the seats'.

The other area I see as contributing to the data problem is the learning field itself.  We should not be waiting to be asked for data; we should always be showing the value being created by our efforts and by learning in general.

In the organizations I have been a part of, there is not a great understanding of the link between learning and performance.  There's an accepted connection between the two, but not an understanding of the pathway that leads from one to the other.

The organization cares about performance, not learning.  Those that do not understand the pathways from learning to performance will likely ask about the numbers.

I rarely share learning 'data'.  Honestly, I don't have much interest in the reports I am able to get from my LMS, and if I don't have any interest in them, why should I expect anyone else to?

I do collect data, but unless pushed for it, I don't share data.  It's not an ownership issue, though that is something that will be considered later in this post.  I just don't think the data itself has value.  The value of learning isn't something you will find on a spreadsheet.  The value of the data I collect is in the story the data enables me to tell to a stakeholder.

Stories feel much more real than sterile and ultimately non-valued data.  In my experience, most of the stakeholders value and trust the story more than the data, though such trust does need to be earned.  For stakeholders that are interested in the data behind the story, I have that and can share it too.

The main point here is that I'm not going to wait for someone to tell me what data to provide them.  Ideally that would be a discussion during the needs assessment conversation, but in some organizations, that's not a standard.  Even if I do not walk away with set metrics to report back on, I will walk away from those initial discussions with an understanding of the types of metrics that need to be impacted by the program.  That understanding gives me the framework for the story I need to tell.

I believe that in organizations with a more mature learning culture, much of what I'm describing is formally built into the structure of the workflow, and that's a good thing.  In the absence of that, though, learning professionals need to step up and fill the gap.

From here the discussion moved on to the ethical issues that may exist with the very concept of collecting learning data.  For me, this is a question that needs to be analyzed beyond the initial knee-jerk reaction.

I think for many, and this was represented in the discussions, the immediate reaction is to say that collecting learning data is unethical; it goes against the principles of organic growth.

I think there's truth in the ethical concerns, but I don't think it has to do with the data collection.  Learning data is just that: data.  It's not the data itself that is unethical; it's how we collect it and how the data is used that raises ethical concerns.

I think it starts with the plan, specifically having one related to your data.  Any time data is collected without a set plan for its use, you're opening the door to ethical issues.  For example, many managers use the data for 'Gotcha Management'; they use the data to hold people accountable and punish those that have not completed courses.

Don't get me wrong: accountability is a huge piece of the performance improvement puzzle.  In the context of data, though, it's important that the data is used effectively.  Much of the data I collect is supplied directly or indirectly by the participants.  If we use the information they supply us with against them, how forthcoming with information will they be in the future?

Another pet peeve I have related to the ethics of data collection has to do with what we tell learners about the data we collect.  How many organizations make learners complete Level 1 evaluations - even explaining that their feedback is important and will impact future programs - and then do nothing with the data?

Is it ethical to tell someone their opinions matter when in truth the data is not changing anything?  I'm always amazed by the amount of data that is collected and never used.  Make a decision regarding your data: either collect the data you need and use it, or don't collect it at all.  Your time and your learners' time are too valuable to waste collecting unneeded data.

The discussion then moved towards the methods we are using to collect our formative and summative data.  Of course, in order to answer that we need to have an understanding of what the differences are between formative and summative data. 

While there are a number of levels at which we could explore the difference, at the core the difference between the two is not about the data itself; it's about when the data is being collected.  Formative data is data that is actively collected during a learning program, whereas summative data is data that is collected after the program is over.

While I do see a place for formal data collection methods like focus groups, surveys, and assessments, they are not my primary tools for data collection.  I cannot overemphasize the fact that the most important data collection tools we have are simply our eyes and our ears.

If I want to collect data on how participants are learning and performing, my first step is quite simply to stop talking.  Getting people to talk about their learning in reflective ways, hearing that they see the connections between their learning and their work, and listening to them share ideas with each other on how to apply their new or enhanced skills provides me with more powerful data than I would likely ever get from an assessment or a smile sheet.

The other tool I mentioned is our eyes.  Harold Jarche simplified learning to its core when he said, "Work is learning and learning is the work."  That being the case, if I want to see if someone is learning, one of the best ways I can do that is to watch the person working.  Not only does it provide learning professionals with some of the most accurate data related to performance, it also provides an excellent opportunity for real-time performance support and coaching.  It fosters continuous improvement.

I also believe that it's important that we define work in this context.  A work-based role play or a simulation of the work environment is not a representation of work.  They are excellent tools to reinforce performance, but the only way to truly collect data related to work performance is to observe the work itself, where it happens.  Anything else pales in comparison.

Need proof of this?  Ask yourself how many times you have heard "That's great, but that's not how it works in the real world" from a participant.  Learners already know this is true; we just need to follow their lead.

The chat concluded with an exploration of informal and social learning, and what data, if any, we should collect regarding it.  What surprised me was how much of the discussion centered on a different question: whether one type of learning is better than another.

To me that subtheme underscored one of the main issues related to learning analytics.  The learning is actually irrelevant in most cases.  What really matters is the performance. 

In most cases, we don't have a Social Learning Program or an Informal Learning Program; we have a learning program that incorporates Formal, Informal, and Social learning techniques.  It's the collective influence of the program and all other external factors that ultimately leads to the performance.

Ultimately we design learning with the desired performance in mind.  When the time comes to collect summative data, we should be collecting data related to the performance, not the learning.  We can always backtrack to what factors influenced any performance change, including the effectiveness of each aspect of the learning program.

As I mentioned earlier, when I talk about the results of a learning program with stakeholders, I'm sharing a story, not data.  Therefore, I rarely talk about measurement of the learning programs.  I'll talk about measurements related to performance, and then I'll build credible connections between the performance and the learning programs.

Until next week #lrnchat-ers!

Tuesday, February 15, 2011

Do We Need to Change the Language of Employee Learning?

Language is a fascinating thing.  It is the means through which we use words to communicate our thoughts and ideas and subsequently, develop relationships.  Of course, used ineffectively, language can also damage the very relationships we are trying to build.
It’s with that background in mind that I’ve been pondering the language we use within the field of Organizational Learning and Performance, and whether or not the language needs to change.
When I speak with peers in the field, be it in person or virtually, I am always amazed at the amount of time we spend discussing, debating, and examining the language of our profession.  Here are just a few examples of what I mean:
“We’re in the business of Learning, not Training”
“Executives don’t care about ROI; they care about ROE”
“I hate it when people call me a ‘Trainer’”
The problem I see with many of these discussions is that they’re placing too much emphasis on the label, and not enough emphasis on the definitions.
Let me explain what I mean.  Let’s start with the definition of language according to dictionary.com:
Language: –noun  1.  a body of words and the systems for their use common to a people who are of the same community or nation, the same geographical area, or the same cultural tradition.
I think the most important part of that definition is ‘people who are of the same community’.  That’s where the labels we sometimes focus too much on create a problem.
It depends on context and frame of reference.  When I am speaking with a colleague in the learning field, I can use the terms training, learning, performance, design, e-learning, and many others, and my counterpart will understand the subtle differences in my message.  We are all part of the same community of professionals, so the words and their usage have an agreed-upon meaning, adding value to the overall discussion.
In many organizations, business leaders are not members of the community of learning professionals.  The same terms that added value within the community could reduce the value outside of it.  The terminology runs the risk of becoming jargon, which is a huge barrier to communication.
Last week’s #lrnchat discussion on the topic “If we could wipe the slate clean…” got me thinking about some of the mistakes we have made in our profession that have contributed to the baggage the profession carries with it today.  I think language is a big part of that.
For years, when I have heard debates about the language of learning, it’s been about the labels – more specifically, a focus on incorrect labels that are placed by non-learning professionals.  Someone describes an individual as a Trainer, and the individual spends 10-15 minutes explaining why that’s the wrong label to use.
And therein lies the problem – we’re focusing on the label instead of the definition.
Here’s a non-learning example. When people describe my eating habits, they use the phrase “David is a vegetarian”.  Technically speaking, that label is incorrect.  I am a Lacto-Ovo-Vegetarian, which means I eat no meat or fish, but do eat eggs and dairy.  Ultimately I couldn’t care less what label people place on my eating habits; I’m more concerned with not creating social awkwardness by having someone serve me a plate of food I don’t eat.  If they want to label me as a vegetarian, I’m fine with that – as long as we’re defining it the same way.
The same applies to organizational learning and performance; don’t focus too much on the labels. If the CEO defines it as training, don't try to have him or her re-label it as corporate learning, performance, or anything else.  It's more important that you change the way the organization looks at and defines the contribution. 
I really couldn't care less if the CEO labeled what I do as 'Dave's mystical, magical voodoo', as long as the CEO understands what I am trying to do, that we agree on what is truly important about the outcomes, and that I have support on the path we choose to get there.
In truth, that’s the way labels emerge from a community anyway.  A label is not created in advance; it’s created when something already exists.  Think about Social Learning.  No one really invented that concept; it grew organically through the technologies that enabled it.  People were learning more and more through these new social connections, the community of learning professionals noticed it, and the label ‘Social Learning’ was born.
Do we need to change the language of Employee Learning? I think in most organizations the answer would be yes - but it has to start with the definitions behind the language.  If you want to change the language and labels that are applied to learning in your organization, then change the way people define ‘Training’.  When you change that, actually changing the labels becomes easy.

Friday, January 14, 2011

Evaluating Learning and Performance - One Man's Journey

Earlier this week I had the pleasure of speaking with a local company’s education team on the topic of measurement in learning and performance.  It was an engaging discussion, and it left me thinking about my own journey towards my current definition of ‘measurement’.
When I first took the lead for my former employer’s learning and development team (in what seems like a lifetime ago), I remember being given the tour of the department by one of the senior team members.  In one office, I noticed four three-inch binders on a shelf, each of which had about four inches of paper in it. 
These binders contained the evaluations participants completed at the end of each workshop the trainers delivered.  I thought this might be a good place to start in terms of seeing how the group was performing.  This is a summary of our conversation.
Trainer: These binders contain the evaluations trainees complete at the end of each workshop.
Me: And what do we do with the evaluations?
Trainer: We put them in the binder…
Me: And what do we do with them at that point?
Trainer: Well… nothing. We do kinda flip through them during our annual performance review though.
Even though I had no formal education in the measurement of learning at that point, this conversation bothered me.  It bothered me because it conflicted with some of the basic foundations of effective feedback; why bother spending energy on completing, collecting, and filing feedback forms if you don't plan on doing anything with the information?
In addition, there was no answer to the question “What do we do when the trainees leave?”  As a manager, I saw our training department as a provider of quality workshops, but I never felt that a large percentage of the classroom training - which was all we offered at the time - was being used on the job.  From the perspective of both the organization and the learner, why waste the effort if people aren't going to use the skills?
The one thing I was sure of was that I needed to treat this new department the same way I did my old one.  We needed to contribute to the business, and we needed to show it; I just didn't know what the tangible contribution of learning was yet.
In accepting the role, I also accepted the responsibility to do it right. It was then that I began to formally educate myself on adult learning.  In doing so, I found my passion and have been a student of the profession ever since, constantly absorbing whatever I can.  In my early studies, I was searching for something that could help us answer a simple question: Are we effective in what we do?
In that search I came across a certification program that trained individuals on how to calculate the value of training.  Better still, it did so in dollars and cents, presenting it as an ROI.  While a common concept in the learning profession at the time, this was my first real exposure to it.  I was immediately attracted to this for two primary reasons.  First, I'm kind of a numbers and stats geek. Second, and more importantly, the executive who sought me out and was monitoring my progress was the company's Chief Financial Officer.  When I proposed that I go through the certification process and described the value I thought it would bring, he seemed to be practically salivating.
That certification helped me get a much better understanding of how what learning and development professionals do can impact a business.  Over the last decade, I have read and followed much of the work on the subject of evaluation.  I have taken courses, attended conferences, and yes, listened to all sides of the endless ROI debate. I have learned a great deal and consider the concepts and processes of educational evaluation to be one of my strong points.
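For readers who haven't seen that kind of calculation before, the arithmetic generally reduces to comparing monetized program benefits against fully loaded program costs. Here is a minimal sketch in Python with purely hypothetical figures; it illustrates the formula only, not the much harder work of monetizing the benefits and isolating the program's effect:

    # A rough sketch of the dollars-and-cents ROI arithmetic described above.
    # All figures are hypothetical and exist only to illustrate the formula.
    program_benefits = 120_000   # monetized benefits attributed to the program ($)
    program_costs = 80_000       # fully loaded program costs ($)

    benefit_cost_ratio = program_benefits / program_costs
    roi_percent = (program_benefits - program_costs) / program_costs * 100

    print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")  # 1.50
    print(f"ROI: {roi_percent:.0f}%")                       # 50%

The arithmetic itself is easy; as the rest of this post argues, the credibility of the benefits figure is where the real debate lives.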
If you're at all familiar with the works of Kirkpatrick, Phillips, Brinkerhoff, and others, you know there are substantially different perspectives on evaluation, and they often conflict with one another.  So what's the answer to the endless evaluation question then? Is it ROI or ROE?  Are there four, five, or forty-five levels of evaluation?  Is it even possible to measure learning?
In truth, I'm not sure there is a single answer to these questions.  I don't believe there is a one-size-fits-all solution.  The fact is that the definition of the 'Return' part of the equation varies.  It varies based on the program, and it also varies based on the stakeholders.
There are a number of different stakeholders for any learning and performance program, each of whom would likely define ROI for the program differently.  As learning professionals, we don't get to dictate what ROI means; we can educate on how we can measure our effectiveness, and work internally towards consistency, but if the stakeholders define ROI differently, well... They win.  
Over the years I have used a number of evaluation techniques from all of these sources in order to evaluate the effectiveness of my programs.  Sometimes it’s heavy on metrics, other times it’s more about intangibles; sometimes I try to isolate the effects of the program, other times just showing a correlation is good enough. There are a number of different metrics that can be applied to learning and performance programs, and plenty of different paths to get to those metrics.
That’s why, for me, there is no single answer to the ROI question.  It’s like the famous quote from Abraham Maslow: “If you only have a hammer, you tend to see every problem as a nail.”  Similarly, if you only subscribe to a single theory of evaluation, you tend to see every program through that lens as well.
Having been a student of the subject of evaluation for the better part of a decade, I do believe there is one theme that is consistent across the evaluation spectrum.
It’s not what happens during a learning and performance program that matters; it’s what happens after the program that’s important.
Regardless of the labels used, the concepts of applying the skills gained from a program, and having those skills impact important areas of growth for the individual and organization are the critical areas we should focus on. We should always keep that in mind in every stage of our learning and performance programs.
Side Note: During the Q&A there was a discussion about the differences between the concepts of ROI and ROE.  I remembered an article from a magazine that provided a good foundation to answer that question, but could not at the time recall where I saw it.  It turns out it was an article from the August 2010 issue of T&D Magazine, written by James and Wendy Kirkpatrick.  The article is still available online HERE.