How do you measure success? Part 2 – Learning

I remember years ago creating daily quizzes for my students.

The style (multiple choice, scenario, interaction, etc.) varied and doesn’t really matter for the purposes of this article.

The objective was two-fold:

  • First, because the quizzes counted toward a percentage of their final grade, students were encouraged to arrive on time to complete the 10-15 minute quiz at the start of each class.
  • Second, it got the juices flowing around the previous day’s content, giving us something to build on.

I would generally write them the morning before, so each quiz was ready for the following day’s class.  It was an interesting exercise that taught me a lot about how to present the content for the day.  At the time, I was not facilitating content I had created or designed; they were canned courses, but I had some flexibility in the style of execution and in the elaborations I made to the lessons.

In retrospect, I think taking this initiative, which at the time was actually selfish in ensuring the students were more tuned into the day’s lesson, made me a much better designer.  Once I learned solid tools and techniques for gathering a learning need effectively, all my evaluation-writing experience, including those damn quizzes, really sharpened my design.  There’s a key dependency between the initial gathering of the learning need, the design of the collateral and the evaluation confirming that learning has taken place.  The flow from one to the other is critical.

Let’s assume due diligence has been done in the initial needs-gathering exercises and a strong scope is outlined for the project.  Post-analysis, you should have a very good grasp of the learner’s “story”: what should their experience look like at deployment of your solution?  At the very least, you should have a list of required knowledge areas, the medium selected for each element and some strong behavioural or performance objectives.

At this stage, most designers I’ve met go forward, meet with SMEs and get to work.  There’s nothing wrong with this; however, I think it sometimes feels like you know your starting point and can see the finish line, but have to muddle through an unlabelled map to find your way there.  Once the material is completed, many designers then get to work on creating effective evaluations.

The problem with this approach, in my opinion, is that you should already know what the learner should walk away being able to do, based on those behavioural objectives.  If you know what the learner’s story is (say, a client support representative has to answer client calls, use a tool to log a ticket and troubleshoot specific product features in X amount of time), why not gather your content and write the evaluations FIRST?

In writing your evaluation before design, you end up building a guide for the courseware you will be creating; in essence, you are creating signposts on the road.  If a particular topic, like being able to triage error logs, has been identified in the needs assessment, then writing an evaluation for that module right away, using the top 10 errors encountered by clients, will immediately force your design to cover that topic effectively.

This approach allows an effective learning-level evaluation to be put in place with more consistency, and for me it tended to help hit deadlines.

The only caution with this approach is not to “teach to the test”.  I mentioned how critical the triad of need, design and evaluation is.  If a strong behavioural objective is outlined first, and the evaluations are then written to meet that objective, the danger of “teaching to the test” is reduced.

Further, because the design also flows from the need, a good designer will leverage the spirit of the needs assessment and those performance objectives and be guided by them.  In short, the evaluation and the design both stem in parallel from the same behavioural objectives and guide each other, as opposed to being created in a more traditional linear fashion.


About Andrew Ambrose
I am passionate about the learning long tail for formal and informal learning solutions, leveraging social media and networking technology for learning projects, innovation through mLearning, collaborative learning and applying solutions that fit within the learner’s personal learning environment.

3 Responses to How do you measure success? Part 2 – Learning

  1. Anonymous says:

    Interesting point about the dangers of teaching to the test. I've used courseware development software that has the designer start by creating objectives, then draft questions directly tied to the objectives, and then associate specific content items that are tied to the objectives. While this approach certainly helps less experienced designers follow a systematic structure for content, there is a high probability and risk that one will slip into this "teach to the test" conundrum… The tool is SkillSoft CCT, by the way. 🙂 – RM

  2. I think there are a few additional ways I've found to really reduce the chance as well. One is to really dig in and create well-formed and clearly articulated objectives. The more clearly these are articulated to everyone involved, the clearer the yardstick when actually creating the collateral. The other big one is to consider the type of evaluation. I'm not a fan of multiple choice or T/F. In these cases, it's really hard to test learning unless the number of questions is very high. I'm a much bigger fan of scenario and interactive testing, where the learner has to dig deeper to demonstrate some true understanding. If you teach to that test, generally that's not only OK, but super impactful!

  3. It's interesting that you bring up tools, however. In retrospect, I also hadn't considered in this article how tools might encourage minimalist behaviour when creating content tied to evaluations created first. I haven't used SkillSoft, but I have seen tools where you can connect evaluation to design topics, and I can see where a danger would exist in a designer deciding they had met the evaluation criteria rather than targeting what they should be: satisfying the objectives and using the eval as a guide. Thanks for the response!
