For and Against 12 Training Evaluation Metrics

On Monday I asked readers to share their thoughts on the most compelling of twelve training evaluation metrics. Whether you’re trying to build a case for funding a training program you feel is essential or trying to market the value of your existing professional development offerings, qualitative and quantitative training evaluation metrics will be an important part of your argument.

The Best Training Metric

Training can often be a tricky thing to measure. Just because it’s difficult doesn’t mean we shouldn’t try. After all, when the people who make budget decisions ask what your training program is worth, you’ll need a good answer.

With today’s blog, I’m going to try something a little different. I’d like to get your thoughts, dear reader, on the following question. Share your thoughts in the comment section. On Thursday I’ll come back to summarize the ideas contributed here and add my own.

The question: Which of the following is the BEST metric for measuring a training program?

Case Study: The Impact of Training One Year After Launch

A year ago I found myself in Birmingham, AL, helping to lead a train-the-trainer session as part of the launch and roll-out of a new sales training program.

A year later, we’ve been able to look at the impact of the training through the lens of Kirkpatrick’s four levels of evaluation and see results at each level, including double-digit sales growth in the stores that implemented the program compared with those that didn’t.
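For readers who want to run that kind of Level 4 (results) comparison on their own numbers, here is a minimal Python sketch. The store IDs and sales figures below are invented for illustration; only the implemented-versus-not comparison mirrors the approach described above.

```python
# Minimal sketch of a Level 4 (results) comparison: year-over-year sales
# growth for stores that implemented the program versus stores that didn't.
# All store IDs and sales figures are hypothetical.

def yoy_growth(before: float, after: float) -> float:
    """Year-over-year growth as a percentage."""
    return (after - before) / before * 100

# (store_id, sales_before_launch, sales_one_year_later, implemented_program)
stores = [
    ("A", 100_000, 114_000, True),
    ("B", 100_000, 103_000, False),
    ("C", 120_000, 135_000, True),
    ("D", 120_000, 122_000, False),
]

for implemented in (True, False):
    growth = [yoy_growth(b, a) for _, b, a, flag in stores if flag == implemented]
    avg = sum(growth) / len(growth)
    label = "implemented" if implemented else "did not implement"
    print(f"Stores that {label} the program: average growth {avg:.1f}%")
```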

Use Data to Make a Compelling Business Case for Training

Shortly after this year’s Super Bowl, this statistical analysis began making its way around the Internet:

[Image: Statistics used poorly]

As a Buffalo Bills fan, I appreciated it.

As a training professional, though, I took it as a good reminder that numbers can be deceiving.

So how do we make the best business case for training and professional development? Numbers can be helpful… if we use the right ones.

Hmmmm… What to measure?

I was scrolling through my Facebook feed last week and shrieked in horror when I noticed that a friend of mine was spreading this nonsense:

[Image: Nonprofit Overhead]

I understand that sharing things like this comes from a good place, but just because something has numbers in it doesn’t mean those numbers mean anything. In this particular case, the numbers are simply made up (don’t get me started on the topic of “overhead” and nonprofit funding). In many other cases, especially in the world of learning and development, the numbers may not be made up, but they need to be combined with a few more data points to be useful.

Let’s take a big one, ripped straight from ATD’s 2016 State of the Industry Report: Employees averaged 33.5 hours of training.
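To make that point concrete, here is a hedged sketch of why the figure needs company. The 33.5-hour average comes from the report cited above; every other number (headcount, cost per learning hour, share applied on the job) is invented purely for illustration.

```python
# The 33.5-hour average is from ATD's 2016 State of the Industry Report
# (cited above); every other figure here is invented for illustration.

avg_hours_per_employee = 33.5   # from the report
headcount = 250                 # hypothetical organization size
cost_per_learning_hour = 80.0   # hypothetical fully loaded cost, USD
share_applied_on_job = 0.40     # hypothetical fraction applied at work

total_hours = avg_hours_per_employee * headcount
print(f"Total training hours: {total_hours:,.0f}")
print(f"Total training spend: ${total_hours * cost_per_learning_hour:,.0f}")
print(f"Hours plausibly applied on the job: {total_hours * share_applied_on_job:,.0f}")

# 33.5 hours, on its own, says nothing about cost, relevance, or transfer;
# it only becomes useful alongside data points like the ones above.
```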

A 4-question Level 1 Evaluation Form

Level 1 evaluation: gauging the learner’s reaction to training

Why is this such a tough thing to get right?

My world used to come crashing down when participants would score me anything under a 4 on a 5-point scale.

Then I started reading up on post-training evaluation forms and learned that there was no correlation between high scores and on-the-job performance, so I stopped using them entirely.

Then my boss insisted I use Level 1 evaluation forms, so I tried to make them more meaningful.
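If you’d like to test that “no correlation” finding against your own program, pairing each participant’s Level 1 score with a later on-the-job performance measure takes only a few lines. The numbers below are made up, and the sketch assumes Python 3.10+ for statistics.correlation.

```python
# Made-up data: each position pairs one participant's Level 1 (smile sheet)
# score with a later on-the-job performance rating from their manager.
from statistics import correlation  # requires Python 3.10+

smile_sheet_scores = [4.8, 4.5, 3.9, 4.2, 4.9, 3.7, 4.4, 4.1]
performance_ratings = [3.1, 4.0, 3.8, 2.9, 3.3, 4.1, 3.5, 3.9]

r = correlation(smile_sheet_scores, performance_ratings)
print(f"Pearson r = {r:.2f}")  # values near zero: scores don't predict performance
```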

Post-Training Evaluation: How to Take Action

A while back I wrote about 8 transferable lessons from my Fitbit that I’ve applied to my L&D practice. As part of that post, I complained that the Fitbit sometimes gave me data I couldn’t do anything with. Specifically, I was talking about my sleep pattern.

A typical night could look like this:

[Image: Fitbit - Sleepless]

FORTY-ONE TIMES RESTLESS! That’s a lot of restlessness. It’s not good. But what am I supposed to do about it? It reminded me of my post-training evaluation scores.

Sometimes learners would give my sessions an average of 4.2. And sometimes those same learners would give a colleague’s presentation an average of 4.1 or 4.3 (even though I knew in my heart of hearts that my presentation was more engaging!!). But what could I do with these post-training evaluation scores? I’ll come back to this point in a minute.

As for my restlessness, my wife suggested something, and suddenly my Fitbit sleep tracker looked a lot different.
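On the evaluation-score point, here is a quick, hedged illustration of why a 4.2-versus-4.1 gap is so hard to act on: with invented ratings averaging 4.2 and 4.1, the 95% confidence intervals around the two means overlap almost entirely.

```python
# Rough sketch with invented ratings: a 0.1 gap between session averages
# is usually well inside the noise on a 5-point scale.
from math import sqrt
from statistics import mean, stdev

my_session = [5, 4, 4, 5, 3, 4, 5, 4, 4, 4]  # averages 4.2
colleagues = [4, 4, 5, 3, 4, 5, 4, 4, 4, 4]  # averages 4.1

def ci95(scores):
    """95% confidence interval for the mean (normal approximation)."""
    half_width = 1.96 * stdev(scores) / sqrt(len(scores))
    return mean(scores) - half_width, mean(scores) + half_width

for label, scores in (("my session", my_session), ("colleague's", colleagues)):
    lo, hi = ci95(scores)
    print(f"{label}: mean {mean(scores):.1f}, 95% CI ({lo:.2f}, {hi:.2f})")
# The intervals overlap heavily, so the 0.1 difference is
# indistinguishable from noise.
```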

Want to improve your organization’s training? Some people may be suspicious of your intent.

[Image: Suspicious]

Readers of the Train Like a Champion blog will not be surprised that I am smitten with Will Thalheimer’s new book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form.

You can read a review of why every training professional should read this book here, and you can see several examples of how I integrated concepts from the book by having my own post-training evaluation forms undergo an extreme makeover here.

It just makes sense. Better post-training evaluation questions lead to better analysis of the value of a training program, right? So it was with some surprise that I was pulled aside recently and asked to explain all the changes I’d made to our evaluation forms.

Extreme Makeover: Smile Sheet Edition

[Image: Eval Form Cover Image]

A few weeks ago I finished reading Will Thalheimer’s book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form (here’s my brief review of the book).

A colleague recently made fun of me, suggesting that I read “geeky books” in my spare time. That would be true if I just read books about smile sheets for fun. And while I did have fun reading this book (so I guess I am kind of geeky), I’ve been attempting to integrate lessons learned from the book into my work.

Following are two examples of improvements I’ve made to existing smile sheets, and the logic behind the changes (based on my interpretation of the book).
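The actual before-and-after examples appear in the original post. Purely as a hypothetical illustration of the general pattern the book advocates (replacing a vague 1-to-5 agreement scale with concrete, descriptive answer choices), here is a sketch; the wording below is invented, not taken from the book or from the forms described above.

```python
# Hypothetical illustration only: the pattern of replacing a vague Likert
# item with descriptive, performance-focused answer choices. The wording
# is invented, not taken from Thalheimer's book or the author's forms.

likert_question = {
    "prompt": "The training was valuable. (1 = strongly disagree, 5 = strongly agree)",
    "choices": [1, 2, 3, 4, 5],
}

performance_focused_question = {
    "prompt": "How able are you to put what you learned into practice on the job?",
    "choices": [
        "I am NOT yet able to apply it.",
        "I have general awareness but would need more guidance to apply it.",
        "I can apply it, but only with more practice or support.",
        "I can apply it fully and could walk a colleague through the basics.",
    ],
}

# Descriptive choices tell you what learners can actually do next, which is
# something a bare 4.2 average never reveals.
```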