How do you measure success? Part 1 – Reaction

You can do a search for measuring and evaluating training and come up with a ton of reading material on the subject (about 1.4 million results). 🙂

The problem is, most companies either don't do it at all, or knee-jerk their way into a solution because they think they should do it, but they aren't sure what to do with what they collect.

The fact is that most practitioners run the "smile sheet" reaction evaluation because they are supposed to (I hate that term, by the way; it's so much more than that), and put a learning assessment in place because measuring application is such a key part of any adult learning program. In reality, most companies stick the "smile sheets" in a box in a corner and slowly build up a fire hazard, and even where reporting functionality is available to managers and leaders, reviewing the learning assessments just tells them completion occurred (or didn't), but not much about what that means.

I believe in the concept of the Kirkpatrick evaluation model.  I believe in it because it's comprehensive and covers all the bases.  When you break it down, what it really drives you to do is test the satisfaction of the program, test that learning occurred, check whether behaviours have changed, and determine whether you impacted the business as expected.  There are arguments and nitpicking that can be had until you turn blue, but this is the nuts and bolts when you boil it down for most companies.  It aggravates me when speaking with folks who want to split hairs over the levels. Is there a level 5 or not?  I'm not a learning theorist, I'm a practitioner.  I need something that works.

A few things come out of this for me, so let me start with the "smile sheet", or Reaction level.

The reaction evaluation is generally a survey that asks the learner whether they liked elements of the session (hence "smile").  For most companies it's only applied to ILTs, not WBTs, and I've seen these damn things run pages long.  When you walk around asking learning leaders and specialists for questions that mean something to them so you can build this survey, invariably the list of things they want to learn will be long.  It makes sense; it's information they feel they need in order to get better.

The problem is four-fold, in my opinion:

  • What questions to ask
  • Where the learner is an expert
  • Getting useful feedback
  • Survey length and fatigue
  1. The first is the easiest.  For every question, ask yourself, "Will I modify this element based on anyone's response?"  If the answer is probably not, don't include it.  I've seen questions over the years like "Rate your experience with the LMS".  If this six-to-seven-figure LMS implementation is not negotiable and will not be reconfigured, why ask the question?
  2. The second problem is a little tougher.  The reason you should focus on questions for which the learner is an expert is the credibility of your data.  I hate questions that ask learners to rate a facilitator's techniques, because how many learners really even know what those are?  I've seen questions around the format, tools, scenarios, technology, etc.  The learner WILL respond to your Likert-based question, but how valid is your data?

    Always asking another question, "Is the learner an expert in this?", when generating questions will help you ensure the data is good.  Remember, this is a reaction-level evaluation, not a skill competency assessment.  Learners are ALWAYS experts in their own satisfaction.  Instead of learning-theory mumbo jumbo about a facilitator's techniques, ask whether they were satisfied that the facilitator was engaging, or how interactive they found the session, etc. (and ask why!!).

    You will up the value of the data and increase the response rate because they won't be confused.

  3. I often see reaction surveys using a Likert-type scale.  But if I put a "3" on "Rate the quality of the student materials", what does that allow you as an organization to do?  Can you walk over to the Instructional Designer, knock them out of their chair and demand they make it a 4?

    I think the Likert scale is a great, fast way to gather trending data, but that's only useful for the dashboard, not the ID.  A tool I've used that I think works really well is to simply ask, right after that question, "Why did you select that rating?"  I based it on NPS (Net Promoter Score) methodology, and it will give you two very useful types of feedback for the ID.

    It will tell you why the learner hated it, so the ID can go and fix it, or it will tell you why the learner loved it, so the ID can learn what works and add it to their best practices.

    It works.  The learner is always happy to tell you why something sucks, which is information we need to know.  They are also happy to tell you why they LOVED something, which we also need to be aware of.  Use the emotional response of the student!

  4. Lastly, response rates for remote surveys are typically low.  Much of this is due to the length of the survey.  People are extremely time conscious, and the less you impact their productivity, the better.

    If you follow the rules above, remove extraneous questions, ask questions that mean something to them and leverage the emotional response, you are halfway there.  If you are focusing on their satisfaction and digging for them to tell you why they liked or disliked the material, you can arguably ask only a few questions.  Instead of piling on the things you care about and applying the thumbscrews, why not let them drive the feedback?  (A rough sketch of what such a stripped-down survey could look like follows after this list.)

    They will provide the Likert metrics for your dashboard and will give you great information on what they loved or hated about the facilitator.  They will give you the same for the design and content of the material, and they will be happy to provide feedback about the things that will keep them away or bring them back.

    What more do you want?
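
To make this concrete, here is a minimal sketch of what a stripped-down reaction survey could look like, assuming you model it in Python. The question wording, the "actionable" flag and the 1-5 rating thresholds are my own illustrative assumptions, not something prescribed by Kirkpatrick; the point is simply to pair each Likert item with the NPS-style "Why did you select that rating?" follow-up and to split the answers into fix-it items and best practices for the ID.

```python
# Illustrative sketch only: a short reaction survey with NPS-style "why" follow-ups.
# Question wording, the "actionable" flag and the rating thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Question:
    text: str                 # the Likert question shown to the learner
    actionable: bool          # "Will I modify this element based on anyone's response?"
    follow_up: str = "Why did you select that rating?"

# Candidate questions; only the actionable ones make the final survey.
candidates = [
    Question("How satisfied were you that the facilitator kept the session engaging?", True),
    Question("How satisfied were you with the design and content of the materials?", True),
    Question("Rate your experience with the LMS.", False),  # not negotiable, so leave it out
]
survey = [q for q in candidates if q.actionable]

def bucket_feedback(responses):
    """Split the free-text 'why' comments by their 1-5 Likert rating for the ID."""
    fix_it, best_practices = [], []
    for rating, why in responses:
        if rating <= 2:
            fix_it.append(why)           # why the learner hated it -> go fix it
        elif rating >= 4:
            best_practices.append(why)   # why the learner loved it -> keep doing it
    return fix_it, best_practices

# Example: (rating, "why") pairs collected for one question.
sample = [(2, "Too much lecture, no time to practise."),
          (5, "The scenarios matched my actual job.")]
print(bucket_feedback(sample))
```

Two or three questions like these, each with a "why" follow-up, still feed the Likert trend line for your dashboard while keeping the survey short enough that response rates don't suffer.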


About Andrew Ambrose
I am passionate about the learning longtail for formal and informal learning solutions, leveraging social media and networking technology for learning projects, innovation through mLearning, collaborative learning, and applying solutions that fit within the learner's personal learning environment.

One Response to How do you measure success? Part 1 – Reaction

  1. Using this methodology, you can very easily apply it as a completion survey at the back of a WBT as well. If it's only a few questions, you'll see a huge increase in response rate.
