Learner surveys: Dusty storage boxes to massive impact in 1 day

Smile sheets, satisfaction surveys, Kirkpatrick Level 1s. Most training teams have these in place. Why?

Because we should gather learner feedback; it's industry standard; it's part of the process, the adult learning model, the evaluation methodology, etc. All good reasons.

Not too long ago, I posted an entry called "How to measure success: Part 1 – Reaction". In that post I talked about some of the pitfalls and challenges with level 1 surveys. I wanted to go a bit more in-depth on an extremely impactful and easy solution: the Net Promoter Score.

If you aren't familiar with NPS, this primer I put together to walk a team through the methodology is a useful introduction to how NPS works.



The NPS methodology lets you target the hardest part of feedback gathering: the "what now?"

Many companies diligently ask the right questions and follow a good process to deploy surveys and gather the feedback. Now what? In many cases that energy winds down as feedback piles up and other priorities come to the fore. Surveys come in, get labelled nicely and stored.

The single most important survey objective should be the actionables put in place BASED on the feedback. If your fiscal plan includes a key goal focused on this, the payoff is huge for an almost negligible cost. Using a Net Promoter Score (NPS) methodology for Level 1 surveys is a very effective way of driving great scoreboard metrics and very useful feedback!

NPS can help you deal with the main obstacles that pop up for Reaction Surveys:

  1. The annoyance factor increases in proportion to the number of questions. Fewer questions are ALWAYS better and will increase the response rate.

    I spoke with an analyst at a large company who saw 6,000 to 8,000 surveys go out to every support customer engaged, and a decent response rate for a typical 10-question remote survey would sit around 10-15%. This increased drastically the fewer questions were asked. Now take into account that to get decent trending data for learner feedback, you would need, say, 15 to 20 solid responses to make good decisions on changes to a course. At that response rate, you would need to send out 150 to 200 of those 10-question surveys.

    Because an NPS survey is designed to be short and sweet, it drives the response rate up. Learners are less annoyed by a survey that takes them less than a minute and draws from the information that is key in their mind: what they loved or hated.

  2. Getting data you can use. I rate the LMS as a “3”.

    What are you going to do with a 3? I mentioned in another post about slapping around a bunch of IDs, demanding they make the design a 4! Traditional surveys don't allow for this; they are metric based, not feedback based. Asking "Why did you rate it X?" will drive the learner to tell you exactly what you want to know and provide useful trending data to empower the design or support teams to act.

  3. Metrics are king. A 30-minute discussion with the executive team or a 3-second scoreboard?

    The reaction survey has an important secondary objective: demonstrate your success and growth. If you have a 15-question survey, how are you detailing trends in an understandable fashion for the organization?

    Because NPS provides a clear metric you can graph from session to session, you can instantly set objectives, benchmarks and comparison models with different audiences. This data doesn't show that learning has occurred (you still need a good level 2 model for that!), but it is what will convince executive management that you are impacting the audience positively and that learners are eager to come back for more.

Managing feedback from this type of survey is relatively easy. If you use a tool like SurveyMonkey, the output gathered as CSV allows you to use an Excel formula to quickly tabulate the NPS result (total Promoter% – total Detractor% = NPS). Some online tools like SurveyGizmo have steps you can use to auto-report the NPS result of your questions (here!).
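
If you'd rather skip the spreadsheet step, the same tabulation is only a few lines of code. Here's a minimal sketch in Python, assuming the export has been boiled down to a CSV with a single 0-10 score column; the file name and column name are placeholders I've made up, not a real SurveyMonkey export format.

```python
import csv

def compute_nps(csv_path, score_column="score"):
    """Return NPS = promoter % - detractor % on the standard 0-10 scale."""
    scores = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            value = (row.get(score_column) or "").strip()
            if value.isdigit():                 # skip blanks and junk rows
                scores.append(int(value))
    if not scores:
        raise ValueError("no usable scores in " + csv_path)
    promoters = sum(1 for s in scores if s >= 9)   # 9-10 = promoters
    detractors = sum(1 for s in scores if s <= 6)  # 0-6  = detractors
    return round((promoters - detractors) / len(scores) * 100)

# e.g. print(compute_nps("session1_responses.csv"))
```

Run once per session, that single number drops straight onto the trending graph mentioned above.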

As the data will be short and impactful, triaging the results is fast and easy. One person on my team was holding daily NPS triage sessions during a multi-day program, reviewing the results with the facilitator and actioning the feedback for the next day's session. (Way to go, Lynda!)

The result? An upward-tracking graph of NPS results to show leaders, satisfied learners and better-equipped facilitators/instructional designers.

What more can you ask for?

How do you measure success? Part 3 – Performance and Business Impact

Although they are very separate, I think it's prudent to discuss the evaluation of behavioural changes in the learner and the expected business impact together.

The reason I think these two should be linked in discussion is that although there are many techniques to gather the data for both of them (sales ROI, impact on call volume, employee pulse surveys on management effectiveness, etc.), in my experience the single hardest question for a lot of organizations and learning generalists isn't how to get the data, but "What should I be measuring?"

In some cases, like sales or call centres, there are some pretty standard behavioural measurements, leading and lagging indicators and dashboards available.  But in other cases it can be tougher.

The single most valuable tool for both level 3 and level 4 evaluations is that initial, pre-design assessment.  Hopefully, energy has been spent on solid behavioural objectives.  If you are able to help the client, through your research, articulate what they want these people to be able to do in the context of learning objectives, it's a no-brainer to go back three or six months after the learning solution launches and evaluate whether those behaviours have taken hold.

The hardest element to measure is, for sure, the business impact.  If no discussions and clear targets around this have occurred pre-design, it's not just hard, it's impossible.  You are crazy to go to a client and make statements about ROI or business unit growth due to a learning solution after the fact.  ROI is such a contentious topic. There are so many stimuli around learners in a corporate environment; good luck tying a company's growth directly to a learning solution and being able to stand by it.

Instead, when gathering research from the client at the needs assessment phase, talk about impact on business continuity, expansion, quality and efficiency, but drive toward more SMART-based goals.

A good example for a new-hire program is time to competency. Typically, smaller organizations leverage veteran employees to coach and upskill new hires.  This pulls the experienced individual away from their job for a significant amount of time.  Define "competent" first (I like the ability to perform a task or tasks without support), and dig to figure out how long it takes, on average, to reach that benchmark.  Next, in concert with the team experts, decide on a reasonable but impactful goal.

In this example, the benefits should be clear to you and the client. You get a clear business-level measurement to target post-launch, and the client has confidence their key veterans will be able to spend more time where they need to: building products.

This has other benefits as well.  On top of the obvious opportunity to measure your own success and to calibrate support resources or next-phase solutions, it's also something I strongly suggest building into your charters or agreements.  So many times I've heard of vendors who deploy, then cut and run.  There is so much value in going back for praise or abuse from a client.  You can leverage the relationships and success to open doors to other opportunities.

Going back to let an unhappy client beat you with a pool cue has value as well (arguably more!).  You can learn what failed, avoid that pitfall next time and get better.  At the very least, you show courage, grace and how much the relationship meant to you.

How do you measure success? Part 2 – Learning

I remember years ago creating daily quizzes for my students.

The style (multiple choice, scenario, interaction, etc) varied and doesn’t really matter for the purposes of this article.

The objective was two-fold:

  • First, because the quizzes counted as a percentage of their final grade, students were encouraged to arrive on time to complete the 10-15 minute quiz at the start of each class.
  • Secondly, it got the juices flowing around the previous day's content to build on.

I would generally write them the morning before, and the quiz would be ready the following morning.  It was an interesting exercise that taught me a lot about how to present the content for the day.  At the time, I was not facilitating content I had created or designed; they were canned courses, but I had some flexibility in the style of execution and the elaborations I made to the lessons.

In retrospect, I think taking this initiative, which at the time was actually selfish in that it ensured the students were more tuned in to the day's lesson, made me a much better designer.  Once I learned solid tools and techniques for gathering a learning need effectively, all my evaluation-writing experience, including those damn quizzes, really upped my design.  There's such a key dependency between that initial gathering of the learning need, the design of the collateral and the evaluation that the learning took place.  The flow from one to the next is so critical.

Let's assume due diligence has been done in the initial need-gathering exercises and a strong scope is outlined for the project.  Post-analysis, you should have a very good grasp of the learner's "story": what their experience should look like at deployment of your solution.  At the very least, you should have a list of the knowledge areas required, the medium selected for each element and some strong behavioural or performance objectives.

At this stage most designers I've met go forward, meet with SMEs and get to work.  There's nothing wrong with this; however, I think it sometimes feels like you know your starting point and can see the finish line, but have to muddle through an unlabelled map to find your way there.  Once the material is completed, many designers then get to work on creating effective evaluations.

The problem with this approach, in my opinion, is this: shouldn't you already know what the learner should walk away being able to do, based on those behavioural objectives?  If you know what the learner's story is, say, for example, a client support representative has to answer client calls, use a tool to log a ticket and troubleshoot specific product features in X amount of time, why not gather your content and write the evaluations FIRST?

In writing your evaluation before the design, you end up building a guide for the courseware you will be creating.  In essence, you are creating signposts on the road.  If a particular topic, like being able to triage error logs, has been identified in the needs assessment, writing an evaluation for that module right away, using the top 10 errors encountered by clients, will force your design to cover that topic effectively.

This approach allows an effective learning-level evaluation to be put in place with more consistency and, for me, tended to help hit deadlines.

The only caution with this approach is not to "teach to the test".  I mentioned how critical the triad of need, design and evaluation is.  If a strong behavioural objective is outlined first and the evaluations are then written to meet that objective, the danger of "teaching to the test" is reduced.

Further, because the design also flows from the need, a good designer will leverage the spirit of the needs assessment and those performance objectives and be guided by them.  In short, the evaluation and design both stem in parallel from the same behavioural objectives and are guided by each other, as opposed to being created in a more traditional linear fashion.

How do you measure success? Part 1 – Reaction

You can do a search for measuring and evaluating training and come up with a ton of reading material on the subject. (about 1.4 million results)  🙂

The problem is, most companies either don't do it, or knee-jerk to a solution because they think they should do it but aren't sure what to do with it.

The fact is that most practitioners follow the "smile sheet" reaction evaluation because they are supposed to (I hate that term, by the way; it's so much more than that), and put a learning assessment in place because it's such a key part of any adult learning training program to measure application. In reality, most companies stick the "smile sheets" in a box in a corner and slowly build up a fire hazard, and if there even is reporting functionality available to managers and leaders, reviewing the learning assessments just tells them completion occurred (or didn't), but not much about what that means.

I believe in the concept of the Kirkpatrick evaluation model.  I believe in it because it's comprehensive and covers all the bases.  When you break it down, what it really drives you to do is test satisfaction with the program, test that learning occurred, check whether behaviours have changed and determine whether you impacted the business as expected.  There are arguments and nitpicking that can be had until you turn blue, but this is the nuts and bolts when you boil it down for most companies.  It aggravates me when speaking with folks who want to split hairs over the levels. Is there a level 5 or not?  I'm not a learning theorist, I'm a practitioner.  I need something that works.

So a few things come out of this for me; let me start with the "smile sheet", or Reaction level.

The reaction evaluation is generally a survey that asks the learner if they liked elements of the session (hence "smile").  For most companies it's only applied to ILTs, not WBTs, and I've seen these damn things run pages long.  When you walk around asking learning leaders and specialists for questions that mean something to them so you can build this survey, invariably, the list of things they want to learn will be long.  It makes sense.  It's info they feel they need to get better.

The problem is four-fold in my opinion.

  • What questions to ask
  • Where the learner is an expert
  • Getting useful feedback
  • Length of survey and survey fatigue
  1. The first is the easiest.  For every question, ask yourself "Will I modify this element based on anyone's response?"  If the answer is probably not, don't include it.  I've seen questions over the years like "Rate your experience with the LMS".  If this six-to-seven-figure LMS implementation is not negotiable or will not be reconfigured, why ask the question?
  2. The second problem is a little tougher.  The reason you should focus on questions for which the learner is an expert is the credibility of the data.  I hate questions around rating a facilitator's techniques, because how many learners really even know what those are?  I've seen questions around the format, tools, scenarios, technology, etc.  The learner WILL respond to your Likert-based question, but how valid is your data?

    Always asking another question, "Is the learner an expert in this?", when generating questions will help you ensure the data is good.  Remember, this is a reaction level, not a skill competency assessment.  Learners are ALWAYS experts in their own satisfaction.  Instead of learning-mumbo-jumbo questions about a facilitator's techniques, ask whether they found the facilitator engaging, or how interactive they found the session, etc. (and ask why!).

    You will up the value of the data and increase the response rate because they won't be confused.

  3. I often see reaction surveys using a Likert-type scale.  But if I put a "3" on "Rate the quality of the student materials", what does that allow you as an organization to now do?  Can you walk over to the Instructional Designer, knock them out of their chair and demand they make it a 4?

    I think the Likert scale is a great, fast way to gather trending data, but that's only useful for the dashboard, not the ID.  A tool I've used which I think works really well is to simply ask, after that question, "Why did you select that rating?"  I based it on the NPS (Net Promoter Score) methodology, and this will give you one of three very useful types of feedback for the ID.

    It will tell you why the learner hated it, so the ID can go and fix it, or it will tell you why the learner loved it, so the ID can learn what works and add it to their best practices.

    It works.  The learner is always happy to tell you why something sucks, which is info we need to know.  They are also happy to tell you why they LOVED something, which we also need to be aware of.  Use the emotional response of the student! (There's a minimal triage sketch after this list.)

  4. Lastly, response rates for remote surveys are typically low.  Much of this is due to the length of the survey.  People are extremely time conscious, and the less you continue to impact their productivity, the better.

    If you follow the rules above, remove extraneous questions, ask questions that mean something to them and leverage the emotional response, you are halfway there.  If you are focusing on their satisfaction and digging for them to tell you why they liked or disliked the material, you could arguably ask only a few questions.  Instead of piling on the things you care about and applying the thumbscrews, why not let them drive the feedback?

    They will provide the Likert metrics for your dashboard and will now give you great information on what they loved or hated about the facilitator.  They will give you the same for the design and content of the material and will be happy to provide feedback about the things that will keep them away or bring them back.

    What more do you want?
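
To make the triage in point 3 concrete, here's a minimal sketch, assuming each response is a record with a 0-10 rating and a free-text "why" comment (both field names are illustrative, not a fixed survey schema). It simply groups the comments into loved/neutral/hated buckets, mirroring the NPS promoter/passive/detractor split, so the ID can scan what to fix first and what to keep doing.

```python
from collections import defaultdict

def triage_feedback(responses):
    """Group free-text 'why' comments by rating bucket for fast ID triage."""
    buckets = defaultdict(list)
    for r in responses:
        rating = int(r["rating"])
        why = r.get("why", "").strip()
        if not why:
            continue                          # nothing to action without a comment
        if rating >= 9:
            buckets["loved"].append(why)      # keep doing this
        elif rating <= 6:
            buckets["hated"].append(why)      # fix this first
        else:
            buckets["neutral"].append(why)
    return dict(buckets)

# e.g. triage_feedback([{"rating": 9, "why": "Great scenarios"},
#                       {"rating": 4, "why": "Labs felt rushed"}])
```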
