Hmmmm… What to measure?

I was scrolling through my Facebook feed last week and shrieked in horror when I noticed that a friend of mine was spreading this nonsense:

[Image: Nonprofit Overhead]

I understand that sharing things like this comes from a good place, but just because things have numbers in them doesn’t mean those numbers mean anything. In this particular case, these numbers are simply made up (don’t get me started on the topic of “overhead” and nonprofit funding). In many other cases, especially in the world of learning and development, the numbers may not be made up, but they need to be combined with a few more data points to be useful.

Let’s take a big one, ripped straight from ATD’s 2016 State of the Industry Report: Employees averaged 33.5 hours of training.

A 4-question Level 1 Evaluation Form

[Image: Evaluation Form]

Level 1 evaluation: gauging the learner’s reaction to training.

Why is this such a tough thing to get right?

My world used to come crashing down when participants would score me anything under a 4 on a 5-point scale.

Then I started reading up on post-training evaluation forms and learned that there was no correlation between high scores and on-the-job performance, so I stopped doing them completely.

Then my boss insisted I do evaluation forms, so I tried to make them more meaningful. I added a net promoter score component. I read Will Thalheimer’s excellent book on the subject and tried using every sample question he offered. Participants revolted at an evaluation form that took 20 minutes to complete.
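For readers unfamiliar with the term, a net promoter score is typically calculated from a single 0–10 “How likely are you to recommend this training?” question: the percentage of promoters (9–10) minus the percentage of detractors (0–6). Here’s a minimal sketch of that arithmetic in Python, using hypothetical responses invented purely for illustration:

```python
def net_promoter_score(ratings):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical responses from a single workshop (not real data)
responses = [10, 9, 8, 7, 9, 6, 10, 8, 9, 5]
print(net_promoter_score(responses))  # 5 promoters, 2 detractors -> 30.0
```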

A few weeks ago I reached deep into my bag of tricks and tried something new.

Tired of presenters who miss the mark? Show them what they’ll be evaluated on!

Last week I sat with a colleague, walking through her line-up of speakers for an upcoming conference. She asked if I had any suggestions to help the presenters deliver more effective presentations.

It’s an age-old, intractable question. Do conference speakers (or consultants who may come into your organization to train your staff on one specific topic) really care?

Post-training evaluation data is nice… but what should we do with it?

A while back I wrote about 8 transferable lessons from my Fitbit that I’ve applied to my L&D practice. As part of that post, I complained that the Fitbit sometimes gave me data, but I couldn’t do anything with it. Specifically, I was talking about my sleep pattern.

A typical night could look like this:

[Image: Fitbit - Sleepless]

FORTY-ONE TIMES RESTLESS! That’s a lot of restlessness. It’s not good. But what am I supposed to do about it? It reminded me of my post-training evaluation scores.

Sometimes learners would give my sessions an average of 4.2. And sometimes those same learners would give a colleague’s presentation an average of 4.1 or 4.3 (even though I knew in my heart of hearts that my presentation was more engaging!!). But what could I do with these post-training evaluation scores? I’ll come back to this point in a minute.
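To put some arithmetic behind why a 4.1 versus a 4.2 means so little: with a typical class size, the uncertainty around a smile-sheet average dwarfs a 0.1-point gap. Here’s a rough Python sketch, using hypothetical ratings invented purely for illustration, that puts an approximate 95% confidence interval around each mean:

```python
import math
import statistics

def mean_with_ci(ratings, z=1.96):
    """Mean of 1-5 ratings with a rough 95% confidence interval (normal approximation)."""
    m = statistics.mean(ratings)
    se = statistics.stdev(ratings) / math.sqrt(len(ratings))
    return m, (m - z * se, m + z * se)

# Hypothetical ratings from two 10-person sessions (not real data)
my_session = [5, 4, 4, 5, 3, 4, 5, 4, 4, 4]         # mean 4.2
colleague_session = [4, 4, 5, 3, 4, 5, 4, 4, 4, 4]  # mean 4.1

for label, ratings in [("my session", my_session), ("colleague", colleague_session)]:
    m, (low, high) = mean_with_ci(ratings)
    print(f"{label}: mean {m:.1f}, roughly {low:.1f} to {high:.1f}")
```

The two intervals overlap almost completely, which is the numerical version of the problem: the score by itself doesn’t tell you what to change.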

As for my restlessness, my wife suggested something and suddenly my Fitbit sleep tracker looked a lot different.

Want to improve your organization’s training? Some people may be suspicious of your intent.

[Image: Suspicious]

Readers of the Train Like a Champion blog will not be surprised that I am smitten with Will Thalheimer’s new book, Performance-focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form.

You can read a review of why every training professional should read this book here, and you can see several examples of how I integrated concepts from the book by having my own post-training evaluation forms undergo an extreme makeover here.

It just makes sense. Better post-training evaluation questions lead to better analysis of the value of a training program, right? So it was with some surprise that I was pulled aside recently and asked to explain all the changes I’d made to our evaluation forms.

Extreme Makeover: Smile Sheet Edition

[Image: Eval Form Cover]

A few weeks ago I finished reading Will Thalheimer’s book, Performance-focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form (here’s my brief review of the book).

A colleague recently made fun of me, suggesting that I read “geeky books” in my spare time. That would be true if I just read books about smile sheets for fun. And while I did have fun reading this book (so I guess I am kind of geeky), I’ve been attempting to integrate lessons learned from the book into my work.

Following are two examples of improvements I’ve made to existing smile sheets, and the logic behind the changes (based upon my interpretation of the book).

Book Review: Will Thalheimer’s Performance-focused Smile Sheets

[Image: Performance-focused Smile Sheets]

102-word Summary: “The ideas in this book are freakin’ revolutionary.” So Will Thalheimer begins chapter 9 of his book. It’s hard to argue against the statement. In a world where the vast majority of training is evaluated on a 1-5 Likert-style post-training evaluation form, Will Thalheimer proposes a different way to perform a basic-level assessment of a training program. His thesis: while “smile sheets” aren’t the be all and end all of training evaluation, they’re the most common type of evaluation, so if we’re going to have our learners fill them out, we may as well get some good, useful, actionable information from them.