Extreme Make-over: Smile Sheet Edition


A few weeks ago I finished reading Will Thalheimer’s book, Performance-focused Smile Sheets: A Radical Rethinking of a Dangerous Artform (here’s my brief review of the book).

A colleague recently made fun of me, suggesting that I read “geeky books” in my spare time. That would be true if I just read books about smile sheets for fun. And while I did have fun reading this book (so I guess I am kind of geeky), I’ve been attempting to integrate lessons learned from the book into my work.

Following are two examples of improvements I’ve made to existing smile sheets, and the logic behind the changes (based upon my interpretation of the book).

Book Review: Will Thalheimer’s Performance-focused Smile Sheets


102-word Summary: “The ideas in this book are freakin’ revolutionary.” So Will Thalheimer begins chapter 9 of his book. It’s hard to argue against the statement. In a world where the vast majority of training is evaluated on a 1-5 Likert-style post-training evaluation form, Will Thalheimer proposes a different way to perform a basic-level assessment of a training program. His thesis: while “smile sheets” aren’t the be-all and end-all of training evaluation, they’re the most common type of evaluation, so if we’re going to have our learners fill them out, we may as well get some good, useful, actionable information from them.

The Case for Net Promoter Score as a Measure of Presentation Effectiveness

When it comes to post-training evaluation forms, the rule of thumb to which I’ve adhered is this: high scores don’t guarantee that learning happened, but low scores are a strong sign that it didn’t.

For years I’ve tabulated and delivered feedback for countless sessions that received Likert-scale scores well above 4 (on a 5-point scale), yet I knew deep down that some of these presentations weren’t as effective as they could have been. How can we tell whether a presentation was engaging and effective if the post-training evaluation scores are always high?

Several weeks ago I attended a Training Magazine-sponsored webinar on training metrics (view the recording here) and I was introduced to the idea of Net Promoter Score as a way to evaluate presentations. After some deliberation, my colleagues and I decided to test this concept out during a recent 2-day meeting. We added one question to our evaluation forms: Would you recommend this session to a colleague?

Following are the scores from our traditional question about whether people were exposed to new insights, information or ideas on a scale of 1-5 (5 representing “Strongly Agree”):

[Chart: average 1-5 ratings by session for the traditional question]

Not too shabby. Apparently we were exposing people to new insights, information and ideas! So that’s good, right? Who cares whether presenters were engaging or boring, stuck to their lesson plans or went off script? All the scores averaged between 4 and 5. Yay for us!

Then we took a look at the same sessions through the lens of Net Promoter Score, and this is what we found:

[Chart: Net Promoter Score by session]

These scores showed some variance, but they didn’t tell much of a story until we put them side by side with the Likert averages:

[Chart: 1-5 averages and Net Promoter Scores, side by side]

People may have been exposed to some new ideas or insights in each session, but would they put their own reputation on the line and recommend the session to any of their colleagues? It depends. Viewed through Net Promoter Score, there was a huge difference between the presentations whose Likert averages topped 4.5 and those that averaged 4.2.
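If you haven’t worked with Net Promoter Score before, the arithmetic behind those numbers is simple. Here’s a minimal sketch in Python; the response data is hypothetical (not our actual session results), and it assumes the standard NPS convention of a 0-10 “would you recommend?” scale, where 9-10 count as promoters, 0-6 count as detractors, and NPS is the percentage of promoters minus the percentage of detractors.

```python
# Hypothetical responses -- illustrative only, not the actual session data.

likert_1to5 = {                # "I was exposed to new insights" (1-5 scale)
    "Session A": [5, 5, 4, 5, 4, 5, 4, 5, 5, 4],
    "Session B": [4, 4, 4, 5, 4, 4, 4, 5, 4, 4],
}

recommend_0to10 = {            # "Would you recommend this session?" (assumed 0-10 scale)
    "Session A": [9, 10, 7, 9, 6, 9, 8, 10, 6, 8],
    "Session B": [9, 8, 7, 9, 6, 8, 7, 9, 6, 8],
}

def likert_average(scores):
    """Traditional smile-sheet metric: the mean of the 1-5 ratings."""
    return sum(scores) / len(scores)

def net_promoter_score(scores):
    """Percent promoters (9-10) minus percent detractors (0-6); ranges from -100 to +100."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

for session in likert_1to5:
    print(f"{session}: Likert average = {likert_average(likert_1to5[session]):.2f}, "
          f"NPS = {net_promoter_score(recommend_0to10[session]):+.0f}")
```

With made-up numbers like these, two sessions that both look strong on the 1-5 question (4.60 vs. 4.20) land twenty NPS points apart (+30 vs. +10), which is exactly the kind of separation the side-by-side view revealed.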

Here are three reasons why I think this matters:

1. A Wake-up Call. In the past, someone could walk away from a meeting with a score of 4.13 and think to himself: “Well, I wasn’t as sharp as I could have been, but people still liked that session, so I don’t really need to work to improve my delivery… and furthermore, who cares about all these adult learning principles that people keep telling me I need to include?!”

However, if that same presenter sees a Net Promoter Score of 6 or 19 or 31 (with a high score potential of 100), the reaction is very different. People suddenly seem a little more interested in tightening up their next presentation – rehearsing a little more seriously, having instructions for activities down cold, sticking more closely to their lesson plans.

2. Before On-the-Job Application, People Need To Remember Your Content. Some L&D practitioners care less about whether a presentation was engaging and focus instead on whether someone actually does something new, different, or better on the job. To this, I say: “Yes, and…”

Yes, better performance is the ultimate goal for most training programs.

And, you can’t do something new, different, or better if you weren’t paying attention to the presentation. You can’t do something better if you’ve forgotten what you learned by the time you go to bed that evening. While better job performance matters, the presenter plays a big role in whether or not people remember the content and are excited to use it when they return to their offices.

3. Marketing Matters. The principal objection to Net Promoter Score as a training evaluation tool, articulated very well in this post from Dr. Will Thalheimer, is that it is designed for marketing, not for training effectiveness. I would argue that L&D professionals must have some marketing chops in order to generate interest in their programs. After all, Dr. Thalheimer also cited a significant body of research that “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” If influential people wouldn’t recommend your presentation, that research suggests you have a problem.

What do you think? Is Net Promoter Score something you’ve used (or are thinking about using)? Or is it a misguided metric, not suitable for L&D efforts?


Evaluating Training Efforts

Moments ago, I cleared security at London’s Heathrow Airport. As I was re-packing my bag with my laptop and iPad, I noticed this little machine.

[Photo: smiley-button feedback machine at Heathrow security]

I tapped the super-smiley button.

I wonder what they do with the feedback. I’m guessing they use it the way many training professionals use similar feedback: if the percentage of super-smiley responses is high, they probably advertise it internally, and perhaps even externally, to demonstrate a job well done.

Level 1 Feedback Limitations

The limitations of the “smile sheet” evaluation form are many. All it can really tell us is whether someone enjoyed their experience or not. Low scores should be a concern, but high scores don’t necessarily mean value was added. This sort of quantitative feedback also can’t tell us why someone gave a low score. In the example at Heathrow Airport, I could hit the grumpy face, but that wouldn’t help any of their supervisory or training staff improve anything. Did I hit the grumpy face because I had a terrible interaction with the security staff? Did I hit the grumpy face because I was pulled aside for random, extra screening? Did I hit the grumpy face because a group of other passengers claimed to be rushing to a flight departing in 10 minutes and was therefore allowed to cut the lengthy security line while the rest of us waited patiently?

The Question Matters

I know many organizations – large and small – that measure training success by post-training evaluation scores. I understand the reason – as training professionals we want some type of metric that can demonstrate our value. But the minute someone starts asking “tough” questions like “What value is a 4.3 out of 5 adding to our company’s bottom line?”, the smile-sheet metric quickly loses its luster.

I wonder if the Heathrow staff would get more useful data if they changed their question.  Some ideas that came to mind include:

  • How did today’s security experience compare to previous experiences?
  • Will your flight be safer because of today’s security check?
  • Were you treated with respect and dignity during today’s security screening?

The list could go on. I understand that in order to have people participate, they’re limited to one question, and it needs to be simple. But “How was your security experience today?” depends on so many variables.

When it comes to post-training evaluation forms, I try to limit the number of questions I ask to three per module/topic:

  • This session will help me do my job better
  • The facilitator was an expert in this topic
  • The facilitator presented the topic in a way that kept me curious and interested

Depending on what I want to get out of the feedback, I may also ask a question about whether or not each learning objective was accomplished.  At the end of the evaluation form, I’ll also include these two questions:

  • I felt engaged and participated throughout the day
  • I felt my fellow attendees were engaged and participated throughout the day

Again, these types of “Level 1” evaluation forms just take a snapshot of participants’ subjective feelings about how things went. Including blank text boxes for attendees to write additional comments can add some clues as to why they gave certain scores, but ultimately the value of training initiatives should be measured by specific behavior changes or performance improvements. Those measurements require additional feedback down the road – from attendees and, ideally, their supervisors.

Nonetheless, evaluation forms like this can begin to offer a hint of the value that training adds… if the questions are crafted well.

What questions have I missed that you’re asking your attendees? If you create elearning, would you ask anything different in the post-module evaluation?

The Train Like A Champion Blog is published Mondays, Wednesdays and Fridays.  If you think someone else might find this interesting, please pass it along.  If you don’t want to miss a single, brilliant post, be sure to click “Follow”!  And now you can find sporadic, 140-character messages from me on Twitter @flipchartguy.