Extreme Make-over: Smile Sheet Edition


A few weeks ago I finished reading Will Thalheimer’s book, Performance-focused Smile Sheets: A Radical Rethinking of a Dangerous Artform (here’s my brief review of the book).

A colleague recently made fun of me, suggesting that I read “geeky books” in my spare time. That would be true if I just read books about smile sheets for fun. And while I did have fun reading this book (so I guess I am kind of geeky), I’ve been attempting to integrate lessons learned from the book into my work.

Following are two examples of improvements I’ve made on existing smile sheets, and the logic behind the changes (based upon my interpretation of the book):

Book Review: Will Thalheimer’s Performance-focused Smile Sheets

Performance-focused Smile Sheets

102-word Summary: “The ideas in this book are freakin’ revolutionary.” So Will Thalheimer begins chapter 9 of his book. It’s hard to argue against the statement. In a world where the vast majority of training is evaluated on a 1-5 Likert-style post-training evaluation form, Will Thalheimer proposes a different way to perform a basic-level assessment of a training program. His thesis: while “smile sheets” aren’t the be-all and end-all of training evaluation, they’re the most common type of evaluation, so if we’re going to have our learners fill them out, we may as well get some good, useful, actionable information from them.

The Case for Net Promoter Score as a Measure of Presentation Effectiveness

When it comes to post-training evaluation forms, the rule of thumb to which I’ve adhered is: high scores may not guarantee learning happened, but low scores often guarantee learning didn’t happen.

For years I’ve tabulated and delivered feedback for countless sessions that have received Likert-scale scores well above 4 (on a 5-point scale), yet I knew deep down that some of these presentations weren’t as effective as they could be. How can we tell if the presentation was engaging and effective if the post-training evaluation scores are always high?

Several weeks ago I attended a Training Magazine-sponsored webinar on training metrics (view the recording here) and I was introduced to the idea of Net Promoter Score as a way to evaluate presentations. After some deliberation, my colleagues and I decided to test this concept out during a recent 2-day meeting. We added one question to our evaluation forms: Would you recommend this session to a colleague?
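For readers who haven’t computed one before, here’s a minimal sketch of how a Net Promoter Score is conventionally calculated. This assumes the standard NPS convention (responses on a 0-10 scale, promoters scoring 9-10, detractors 0-6); our actual form may have been worded differently, so treat this as an illustration, not a record of our method.

```python
def net_promoter_score(responses):
    """Return the NPS (-100 to 100) for a list of 0-10 ratings.

    Standard NPS convention: promoters rate 9-10, detractors rate 0-6,
    and passives (7-8) count toward the total but neither bucket.
    """
    if not responses:
        raise ValueError("need at least one response")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / len(responses))

# Example: a session with mixed reactions.
# 4 promoters, 2 passives, 2 detractors out of 8 -> (4 - 2) / 8 = 25
print(net_promoter_score([10, 9, 9, 8, 7, 6, 10, 5]))  # 25
```

Because detractors subtract from the score, a room full of polite 7s and 8s yields an NPS of 0 — which is exactly why the metric separates lukewarm sessions from ones people would actually vouch for.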

Following are the scores from our traditional question about whether people were exposed to new insights, information or ideas on a scale of 1-5 (5 representing “Strongly Agree”):

[Chart: average Likert-scale scores by session]

Not too shabby. Apparently we were exposing people to new insights, information and ideas! So that’s good, right? Who cares whether presenters were engaging or boring, stuck to their lesson plans or went off script? All the scores averaged between 4 and 5. Yay for us!

Then we took a look at the same sessions through the lens of Net Promoter Score, and this is what we found:

[Chart: Net Promoter Scores by session]

These scores showed some variance, but they didn’t tell much of a story until we put them side-by-side:

[Chart: Likert-scale scores and Net Promoter Scores, side by side]

People may have been exposed to some new ideas or insights in each session, but would they put their own reputation on the line and recommend the session to any of their colleagues? It depends. There’s a huge difference between presentations that scored above 4.5 and presentations that drew an average of 4.2.

Here are three reasons why I think this matters:

1. A Wake-up Call. In the past, someone could walk away from a meeting with a score of 4.13 and think to himself: “Well, I wasn’t as sharp as I could have been, but people still liked that session, so I don’t really need to work to improve my delivery… and furthermore, who cares about all these adult learning principles that people keep telling me I need to include?!”

However, if that same presenter sees a Net Promoter Score of 6 or 19 or 31 (with a high score potential of 100), the reaction is very different. People suddenly seem a little more interested in tightening up their next presentation – rehearsing a little more seriously, having instructions for activities down cold, sticking more closely to their lesson plans.

2. Before On-the-Job Application, People Need To Remember Your Content. Some L&D practitioners care less about whether a presentation was engaging, focusing instead on whether someone actually does something new, different or better on the job. To this, I say: “Yes, and…”

Yes, better performance is the ultimate goal for most training programs.

And, you can’t do something new, different or better if you weren’t paying attention to the presentation. You can’t do something better if you forgot what you learned before you went to bed that evening. While better job performance matters, the presenter plays a big role in whether or not people remember the content and are excited to use it when they return to their offices.

3. Marketing Matters. The principal objection to Net Promoter Score as a training evaluation tool, articulated very well in this post from Dr. Will Thalheimer, is that it is designed for marketing, not for training effectiveness. I would argue that L&D professionals must have some sort of marketing chops in order to generate interest in their programs. After all, Dr. Thalheimer also cited a significant body of research that “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” If influential people wouldn’t recommend your presentation, research shows that you have a problem.

What do you think? Is Net Promoter Score something you’ve used (or are thinking about using)? Or is it a misguided metric, not suitable for L&D efforts?