Post-Training Evaluation: How to Take Action

A while back I wrote about 8 transferable lessons from my Fitbit that I’ve applied to my L&D practice. As part of that post, I complained that the Fitbit sometimes gave me data, but I couldn’t do anything with it. Specifically, I was talking about my sleep pattern.

A typical night could look like this:

Fitbit - Sleepless

FORTY-ONE TIMES RESTLESS! That’s a lot of restlessness. It’s not good. But what am I supposed to do about it? It reminded me of my post-training evaluation scores.

Sometimes learners would give my sessions an average of 4.2. And sometimes those same learners would give a colleague’s presentation an average of 4.1 or 4.3 (even though I knew in my heart of hearts that my presentation was more engaging!!). But what could I do with these post-training evaluation scores? I’ll come back to this point in a minute.

As for my restlessness, my wife suggested something and suddenly my Fitbit sleep tracker looked a lot different.

My Mid-year Resolution: Using What I’ve Learned

Think about all the stuff you’ve learned – in formal workshops, conferences, meetings, webinars, classes – or perhaps things you’ve read or re-tweeted or blogged about. Is there some concept or tool or idea or theory you plan to re-visit and make an effort to incorporate into your daily routine? Even if it’s something you picked up a year ago, it’s not too late to re-visit it!

In my June 5 blog post, I declared that I would review my notes from presentations and the highlighted parts of books I’ve read and I would return to this space to turn some of the things I’ve learned over the past year into specific actions and habits.

Since the beginning of 2013, I’ve spent more than 60 hours in formal professional development sessions – conferences, webinars, day-long or half-day workshops. I’ve read some or all of 15 books to improve my knowledge base and performance. I’ve read countless blog posts and Twitter links and had many conversations with other people in the L&D field. I’ve written over 160 blog posts. I’ve eaten, breathed and lived L&D.

I have a lot of notes from all of these experiences. A LOT. While reviewing these notes has given me many, many ideas and reminded me of many cool things I’d like to try on Monday morning when I get to the office, I can’t do everything at once. Between now and the end of the year, I will try to up my game in many areas, but there are two specific things I plan to focus on and incorporate into my personal and professional life as much as possible:

  1. Kegan & Lahey’s Immunity Map. This was first introduced to me during my master’s program in an organizational development class. It’s a tool based on Kegan and Lahey’s change management work, and I was reminded of it during a recent Immunity to Change workshop I attended.
  2. Using variables in Articulate Storyline. I am truly a sucker for any blog post from Tom Kuhlmann or Mike Taylor or David Anderson or Nicole Legault or anyone else connected to Articulate. They have lots of tips and tricks, and I’ve tried some of them out – sometimes the tips make my elearning projects better; sometimes I don’t do it correctly and I screw up the whole module. Regardless, one thing still lacking in my elearning is the use of variables to lock or unlock elements on the screen before learners can proceed to the next item (the basic logic is sketched below). Before I learn any more Storyline tips or tricks, I commit to learning how to use variables better, in order to make the learner experience better or more challenging (or both).
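For anyone curious what that lock/unlock pattern looks like under the hood: Storyline builds it visually, with True/False variables and triggers rather than written code, but the underlying logic is simple enough to sketch. Here’s a minimal sketch in TypeScript – the names (itemOneViewed and so on) and the console messages are hypothetical stand-ins for Storyline variables and a Next-button state change, not Storyline’s actual API.

```typescript
// A minimal sketch of the lock/unlock pattern, assuming one True/False
// "variable" per clickable element on the slide. In Storyline this lives
// in triggers and variables, not code; all names here are hypothetical.
const viewed = {
  itemOneViewed: false,
  itemTwoViewed: false,
  itemThreeViewed: false,
};

// Like a trigger: "set variable to True when the user clicks item N."
function markViewed(item: keyof typeof viewed): void {
  viewed[item] = true;
  updateNextButton();
}

// Like a trigger with a condition: "change the state of the Next button
// when ALL of the variables are equal to True."
function updateNextButton(): void {
  const unlocked = Object.values(viewed).every(Boolean);
  console.log(unlocked ? "Next button unlocked" : "Next button still locked");
}

// Simulate a learner clicking through all three on-screen elements.
markViewed("itemOneViewed");   // -> still locked
markViewed("itemTwoViewed");   // -> still locked
markViewed("itemThreeViewed"); // -> unlocked
```

The design point is the condition, not the code: nothing unlocks until every element has flipped its variable, which is what a “when all variables are True” trigger condition enforces.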

Please forgive me if you see fewer tweets or re-tweets or comments about new things I’m reading over the next few weeks and months. I’ll be focused on bringing some specific new skills and habits into my daily routine.

Train Like A Champion will not have its usual Thursday post this week. I’ll be immunity mapping or learning how to use variables (or maybe I’ll just be getting ready for the 4th of July). We’ll see you back here on Monday, July 7.

3 Training Lessons from the Kirkpatrick Evaluation Model

In May of 2014, I learned that Donald Kirkpatrick had passed away.

Don Kirkpatrick evaluation model for training programs

Like many others in the learning and development space, I found that his work made a life-changing impact on how I view my own. Here are three simple yet profound ways in which his work – specifically what is often called the Kirkpatrick Evaluation Model, or the Four Levels of Learning Evaluation – influenced me:

1. Smile sheets don’t justify a giant ego (if the feedback is good), nor are they the end of the world (if the feedback is bad).

I first landed a job with “training” in its title about 8 years ago, and I measured my work by the end-of-training evaluation forms. I viewed them as my report card. Great evaluation scores would have me on top of the world for the rest of the afternoon. Poor scores were absolutely devastating.

I don’t remember where I first heard of Kirkpatrick’s 4 levels of training evaluation – perhaps on an ASTD discussion board, perhaps in T+D magazine. When I learned that post-training evaluation scores were the bottom rung of training evaluation, I felt liberated… until I realized that I had to be intentional about ensuring people actually learned something, that they could use new skills on the job, and that they could improve their performance in a measurable way. It was a completely different outlook on how to approach my work – a new and exciting challenge that wasn’t limited to the whims of post-training smile sheets.

2. Training should actually be transferred.

Kirkpatrick’s 3rd level – behavior, often described as “transfer” – had perhaps the most profound impact on my work. It’s a level that I, and I’m sure many of my colleagues (yes, even you, dear reader), continue to struggle with and do poorly at. After all, once people leave the training room, what control do we have over whether or how they choose to apply our brilliant lessons and content on the job?

Simply being consciously aware that transfer of learning to the job is the Holy Grail of training design and evaluation transforms training sessions from presenter-centered to learner-centered. And while transfer is extremely difficult to measure, top-notch trainers will always strive for efficacy at this level.

3. Bottom line: it’s a process, not an event.

The first two items above naturally lead to this point: if training is about more than high evaluation scores, if training is about transfer to the job and the subsequent results that transfer can yield, then training must be a process, not a 1-hour or 1-day or 1-week event.

Aiming for the highest levels of the Kirkpatrick evaluation model has inspired me to figure out what kinds of job aids might live beyond the training room, what kinds of communication with supervisors are necessary to forge partnerships with those who have more influence on whether learning is transferred to the job, and what kinds of longer-term evaluation instruments need to be integrated into the training’s design.

We spend more waking hours at work than we do with our families. When we learn how to be better at work, it can improve our quality of life. These three lessons made me a better training professional, and in turn, improved my quality of life.

Though he’s no longer with us physically, I believe his legacy will continue to transform training programs for generations to come. Thank you, Donald Kirkpatrick, for sharing your talents and your work.

Who are your influences? Did the Kirkpatrick evaluation model influence you? How has your understanding grown?

Evaluating Training Efforts

Moments ago, I cleared security at London’s Heathrow Airport. As I was re-packing my bag with my laptop and iPad, I noticed this little machine.

Security Feedback

I tapped the super-smiley button.

I wonder what they do with the feedback. I’m guessing they use it the way many training professionals use similar feedback: if the percentage of super-smiley responses is high, they probably advertise it internally, and perhaps even externally, to demonstrate a job well done.

Level 1 Feedback Limitations

The limitations of the “smile sheet” evaluation form are many. All it can really tell us is whether someone enjoyed their experience. Low scores should be a concern, but high scores don’t necessarily mean value was added. This sort of quantitative feedback can’t tell us why someone gave a low score. In the Heathrow example, I could hit the grumpy face, but that wouldn’t help any of their supervisory or training staff improve anything. Did I hit the grumpy face because I had a terrible interaction with the security staff? Because I was pulled aside for random, extra screening? Because a group of other passengers claimed to be rushing to a plane departing in 10 minutes and was allowed to cut the lengthy security line while the rest of us waited patiently?

The Question Matters

I know many organizations – large and small – that measure training success by post-training evaluation scores. I understand the reason: as training professionals, we want some type of metric that demonstrates our value. But the minute someone starts asking “tough” questions like “What value is a 4.3 out of 5 adding to our company’s bottom line?”, the smile sheet metric can quickly lose its luster.

I wonder if the Heathrow staff would get more useful data if they changed their question. Some ideas that came to mind include:

  • How did today’s security experience compare to previous experiences?
  • Will your flight be safer because of today’s security check?
  • Were you treated with respect and dignity during today’s security screening?

The list could go on. I understand that in order to have people participate, they’re limited to one question, and it needs to be simple. But “How was your security experience today?” depends on so many variables.

When it comes to post-training evaluation forms, I try to limit the number of questions I ask to three per module/topic:

  • This session will help me do my job better
  • The facilitator was an expert in this topic
  • The facilitator presented the topic in a way that kept me curious and interested

Depending on what I want to get out of the feedback, I may also ask a question about whether or not each learning objective was accomplished. At the end of the evaluation form, I’ll also include these two questions (after the list, I’ve sketched how ratings like these roll up into the averages I mentioned earlier):

  • I felt engaged and participated throughout the day
  • I felt my fellow attendees were engaged and participated throughout the day
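As promised, here’s a quick sketch of what happens to ratings like these once the forms come back. It assumes a 1-to-5 scale (which is what my “4.3 out of 5” example implies), and the question names and responses below are made up for illustration – this is just the arithmetic behind the averages, not any particular survey tool’s output.

```typescript
// A minimal sketch of rolling raw smile-sheet ratings up into per-question
// averages. Assumes a 1-5 Likert scale; all data below is invented.
type EvalForm = Record<string, number>; // question id -> rating (1-5)

const forms: EvalForm[] = [
  { helpsMyJob: 5, facilitatorExpert: 4, keptMeCurious: 4 },
  { helpsMyJob: 4, facilitatorExpert: 5, keptMeCurious: 3 },
  { helpsMyJob: 4, facilitatorExpert: 4, keptMeCurious: 5 },
];

// Average each question across every returned form.
function averageByQuestion(responses: EvalForm[]): Record<string, number> {
  const sums: Record<string, { total: number; count: number }> = {};
  for (const form of responses) {
    for (const [question, rating] of Object.entries(form)) {
      sums[question] ??= { total: 0, count: 0 };
      sums[question].total += rating;
      sums[question].count += 1;
    }
  }
  return Object.fromEntries(
    Object.entries(sums).map(([q, { total, count }]) => [q, total / count])
  );
}

console.log(averageByQuestion(forms));
// -> { helpsMyJob: 4.33..., facilitatorExpert: 4.33..., keptMeCurious: 4 }
```

Which is precisely the problem from the Fitbit story at the top: the computation is trivial, and a 4.33 by itself still doesn’t tell you what to do differently on Monday morning.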

Again, these types of “Level 1” evaluation forms just take a snapshot of participants’ subjective feelings about how things went. Including blank text boxes for attendees to write additional comments can add some clues as to why they gave certain scores. Ultimately, though, the value of training initiatives should be measured by specific behavior changes or performance improvements, and those measurements require additional feedback down the road – from attendees and, ideally, their supervisors.

Nonetheless, evaluation forms like this can begin to offer a hint of the value that training adds… if the questions are crafted well.

What questions have I missed that you’re asking your attendees? If you create elearning, would you ask anything different in the post-module evaluation?
