The Case for Net Promoter Score as a Measure of Presentation Effectiveness

When it comes to post-training evaluation forms, the rule of thumb to which I’ve adhered is: high scores may not guarantee learning happened, but low scores often guarantee learning didn’t happen.

For years I’ve tabulated and delivered feedback for countless sessions that received Likert-scale scores well above 4 (on a 5-point scale), yet I knew deep down that some of these presentations weren’t as effective as they could have been. How can we tell whether a presentation was engaging and effective if the post-training evaluation scores are always high?

Several weeks ago I attended a Training Magazine-sponsored webinar on training metrics (view the recording here) and I was introduced to the idea of Net Promoter Score as a way to evaluate presentations. After some deliberation, my colleagues and I decided to test this concept out during a recent 2-day meeting. We added one question to our evaluation forms: Would you recommend this session to a colleague?

Following are the scores from our traditional question about whether people were exposed to new insights, information or ideas on a scale of 1-5 (5 representing “Strongly Agree”):

[Chart: average scores by session on the “new insights, information or ideas” question, each between 4 and 5]

Not too shabby. Apparently we were exposing people to new insights, information and ideas! So that’s good, right? Who cares whether presenters were engaging or boring, stuck to their lesson plans or went off script? All the scores averaged between 4 and 5. Yay for us!

Then we took a look at the same sessions through the lens of Net Promoter Score, and this is what we found:

[Chart: Net Promoter Score by session]

These scores showed some variance, but they didn’t tell much of a story until we put the two sets of scores side by side:

[Chart: Likert averages and Net Promoter Scores, side by side]

People may have been exposed to some new ideas or insights in each session, but would they put their own reputation on the line and recommend the session to any of their colleagues? It depends. There’s a huge difference between presentations that scored above 4.5 and presentations that drew an average of 4.2.
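
For anyone who hasn’t worked with Net Promoter Score before, the arithmetic behind those numbers is simple: assuming the standard 0-to-10 recommendation scale, responses are grouped into promoters (9 or 10), passives (7 or 8) and detractors (0 through 6), and the score is the percentage of promoters minus the percentage of detractors. Here is a minimal sketch in Python; the function and the sample ratings are made up for illustration, not pulled from our actual evaluation data.

```python
def net_promoter_score(ratings):
    """NPS from 0-10 'would you recommend?' ratings.

    Promoters rate 9-10, detractors 0-6, passives 7-8;
    the score is %promoters minus %detractors (range -100 to 100).
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Made-up responses for one session: a respectable-looking set of ratings
# can still yield a modest NPS once passives and detractors are separated out.
session_ratings = [10, 9, 9, 8, 8, 7, 7, 7, 6, 5]
print(round(net_promoter_score(session_ratings)))  # -> 10
```

This is also why NPS punishes lukewarm sessions in a way a Likert average never will: a room full of 7s and 8s averages nicely but contributes zero promoters.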

Here are three reasons why I think this matters:

1. A Wake-up Call. In the past, someone could walk away from a meeting with a score of 4.13 and think to himself: “Well, I wasn’t as sharp as I could have been, but people still liked that session, so I don’t really need to work to improve my delivery… and furthermore, who cares about all these adult learning principles that people keep telling me I need to include?!”

However, if that same presenter sees a Net Promoter Score of 6 or 19 or 31 (with a high score potential of 100), the reaction is very different. People suddenly seem a little more interested in tightening up their next presentation – rehearsing a little more seriously, having instructions for activities down cold, sticking more closely to their lesson plans.

2. Before On-the-Job Application, People Need To Remember Your Content. Some L&D practitioners care less about whether a presentation was engaging, instead being wholly focused on whether or not someone actually does something new or differently or better on the job. To this, I say: “Yes, and…”

Yes, better performance is the ultimate goal for most training programs.

And, you can’t do something new or differently or better if you weren’t paying attention to the presentation. You can’t do something better if you’ve forgotten what you learned by the time you go to bed that evening. While better job performance matters, the presenter plays a big role in whether or not people remember the content and are excited to use it when they return to their offices.

3. Marketing Matters. The principal objection to Net Promoter Score as a training evaluation tool, articulated very well in this post from Dr. Will Thalheimer, is that it is designed for marketing, not for measuring training effectiveness. I would argue that L&D professionals must have some marketing chops in order to generate interest in their programs. After all, Dr. Thalheimer also cited a significant body of research that “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” If influential people wouldn’t recommend your presentation, research suggests you have a problem.

What do you think? Is Net Promoter Score something you’ve used (or are thinking about using)? Or is it a misguided metric, not suitable for L&D efforts?

Too much interaction, not enough lecture? Impossible! Or is it?


A little introduction to the topic. Here are a few discussion prompts. Break into small groups with table facilitators to guide the conversation. Large group de-brief. No bullet-pointed PowerPoint slides. Heck, no slides at all! This is a textbook example of well-designed training built upon a strong foundation of adult learning, right?

Not so fast.

Earlier this week I had an opportunity to attend a 60-minute session on the topic of measuring training impact. Training that has a measurable impact – it’s the holy grail of the learning and development profession, right? Sign me up. In fact, sign my colleagues up too! I dragged a colleague to this workshop as well. We need to learn as much as we can on this topic because we certainly haven’t found a consistent way to crack this nut.

During the session, a facilitator framed the topic, then turned us loose in small groups to discuss it. In my own small group, I felt I was able to offer brilliant insights into the challenges we face when trying to isolate training as a reason for improved business results. I took a look around the room and everyone was engaged. The room was abuzz.

Toward the end, each small group reported its insights. Time expired, a little end-of-session networking took place, and then we all headed our separate ways. It was fun.

Later, I reached out to my colleague who attended and asked about her take-aways. She said: “I don’t know that I took away any new/better way to measure training. How about you?”

The truth was, I didn’t have any concrete take-aways either. I was kind of hoping my colleague was going to mention something that I somehow missed.

Last week, during a #chat2lrn Twitter chat, Patti Shank took a lot of flak (including from me) when she wrote this:

When I reflected on the training experience I had this week, Patti’s words suddenly resonated with me. This training was ultra-engaging. And yet my colleague and I left without being able to do something new or differently or better.

Perhaps there should have been a more vigorous de-brief. Perhaps there should have been more instructor-led content, maybe even <gasp> lecture – either before or after the small group discussions.

I may not have new ways to measure the impact of my training initiatives, but I did carry three concrete take-aways from this experience:

  1. Sometimes, lecture isn’t completely evil.
  2. Sometimes, too many discussion-based activities can be counter-productive.
  3. Reflection is an essential habit following a learning experience. Even when concrete take-aways from the topic at hand prove to be elusive, learning can still happen.

And you? What have you learned unexpectedly from a training session whose actual topic didn’t quite deliver for you? Leave your thoughts in the comments section.

4 Ways to Better Measure Corporate Training Results

I think results come out in lots of different ways, and some of them you measure, and some of them you feel.

In the January issue of TD magazine, SAP CEO Bill McDermott makes the point that training results aren’t always numbers-driven. I’ve seen this first-hand.

An India-based colleague has spent the last several years holding monthly training sessions focused on our company values and discussing “soft” topics such as teamwork and collaboration. When I dropped by one of these training sessions last month, one of her trainees commented: “In other organizations people try to pull other people down. Our organization is unique in that everybody tries to help each other and boost each other’s performance.”

Sometimes you can feel the results of a training program. But as I mentioned in Monday’s post, companies around the world spend over $75 billion (with a b!) on training annually and have no idea whether or not their efforts have produced any results. This isn’t good.

If you happen to be interested in the ability to show other people (your boss, for example) that your training efforts don’t just feel good, but have made a measurable difference, here are four ways to do that:

1. Make sure you ask what should be different as a result of the training.

This one may sound like a no-brainer, but you’d be surprised at how many times training is planned and executed without specifically identifying what should be done new or differently or better as a result.

2. Pay some attention to Kirkpatrick’s Four Levels of Evaluation…

About 60 years ago, Donald Kirkpatrick espoused four “levels” of evaluation to help training practitioners begin to quantify their results. First come post-training evaluation scores (“smile sheets”), then learning (most of the time through pre/post testing), then skill transfer on the job (maybe a self-reported survey, or a survey of a trainee’s supervisor) and finally impact (did sales increase? did on-the-job safety accidents decrease?). Levels 1 and 2 are most common, but trainers and organizations can certainly strengthen their Level 3 and 4 efforts.
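
If it helps to see the four levels laid out as a structure you could actually plan against, here is a rough, hypothetical sketch; the instrument examples are the ones mentioned above, not a prescribed toolkit.

```python
# Hypothetical outline of an evaluation plan against Kirkpatrick's four levels.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "post-training evaluation ('smile sheet') scores"),
    2: ("Learning", "pre/post testing"),
    3: ("Behavior", "self-reported survey, or a survey of the trainee's supervisor"),
    4: ("Results",  "business metrics such as sales or safety incidents"),
}

def unplanned_levels(planned):
    """Return the levels a given evaluation plan doesn't cover."""
    return sorted(set(KIRKPATRICK_LEVELS) - set(planned))

# A typical program that stops at smile sheets and a post-test:
print(unplanned_levels({1, 2}))  # -> [3, 4]
```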

3. …and then go beyond Kirkpatrick.

According to a research paper entitled The Science of Training and Development in Organizations, Kirkpatrick’s Four Levels is a model that can be helpful, but there is data to suggest it is not the be-all and end-all that training professionals have pinned their evaluation hopes on. The authors of this paper offer the following example as a specific way to customize the measurement of a training program’s success or failure:

“If, as an example, the training is related to product features of cell phones for call center representatives, the intended outcome and hence the actual measure should look different depending on whether the goal of the training is to have trainees list features by phone or have a ‘mental model’ that allows them to generate recommendations for phones given customers’ statements of what they need in a new phone. It is likely that a generic evaluation (e.g., a multiple-choice test) will not show change due to training, whereas a more precise evaluation measure, tailored to the training content, might.”

4. Continue to boost retention while collecting knowledge and performance data.

Cognitive scientist Art Kohn offers a model he calls 2/2/2. This is a strategy to boost learner retention of content following a training program. Two days after a training program, send a few questions about the content to the learners (this gives you data on how much they still remember days after leaving your training program). Two weeks later, send a few short-answer questions (again, this helps keep your content fresh in their minds and gives you a data point on how much they’ve been able to retain). Finally, two months after the training program, ask a few questions about how your content has been applied on the job (which offers data on the training’s impact).
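
To make that cadence concrete, here is a minimal scheduling sketch; the delivery mechanism (email, LMS notification, a calendar reminder for yourself) is left open, and I’ve approximated “two months” as 60 days.

```python
from datetime import date, timedelta

# Art Kohn's 2/2/2 cadence: recall questions at 2 days, short-answer questions
# at 2 weeks, application questions at 2 months ("two months" taken as 60 days).
FOLLOW_UPS = [
    (timedelta(days=2), "recall questions (what do learners still remember?)"),
    (timedelta(weeks=2), "short-answer questions (keeps content fresh, measures retention)"),
    (timedelta(days=60), "application questions (how has the content been used on the job?)"),
]

def schedule_follow_ups(training_date):
    """Return (send_date, purpose) pairs for the 2/2/2 follow-ups."""
    return [(training_date + offset, purpose) for offset, purpose in FOLLOW_UPS]

for send_date, purpose in schedule_follow_ups(date(2024, 3, 1)):
    print(send_date.isoformat(), "-", purpose)
```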

If companies are spending billions of dollars on training and never know whether or not those efforts were effective, there’s a problem. Spending a few hours thinking through your evaluation strategy before deploying your next training program can make your efforts well worth your time.

Learning and Development Thought Leader: Will Thalheimer

In this age of social media, where anyone with a computer and an Internet connection can post something online and proclaim themselves a “thought leader,” it can be difficult to find an industry’s true leaders.

This is the first in a new, periodic series from the Train Like A Champion blog that will highlight L&D professionals who have proven effective in moving the industry to better results and higher performance.

Thought Leader #1: Dr. Will Thalheimer

Will Thalheimer leads Work-Learning Research and is simply on a quest to cut through all the noise and questionable research that’s out there in order to help L&D professionals be aware of evidence-based practices and well-conducted research.

He has the gall to question the effectiveness of Kirkpatrick’s 4 levels of evaluation (and the research to back it up). And don’t even get him started on learning styles.

If you have a sliver of interest in the research behind what truly works in training and presentations, you should be reading his blog, Will At Work Learning.

Two Resources from Dr. Thalheimer that You Should Check Out ASAP:

Research Study: While there’s a lot of good stuff on his blog, one post I found particularly helpful revolved around a study titled The Science of Training & Development: What Matters in Practice. In my day job, I work with a lot of medical professionals who insist on the science behind things. While facilitation is indeed an art form, having research-based best practices lends necessary credibility to conversations about why lecture and didactic delivery of content aren’t effective.

Slide Design: I’ve never liked the idea of slide templates. I never had a very good argument against them until I watched this 10-minute video:

In Sum:

I could write a lot more about Dr. Thalheimer and why he’s someone you should be listening to. But then those would just be my words. And the Train Like A Champion blog hasn’t (yet) declared me a thought leader in the L&D field. So, check out some of these resources and discover for yourself why you should be paying attention to his work.

Start Worrying (A Lot) More About Level 1

I generally consider Level 1 evaluation forms to be a waste of time and energy, so when I read this week’s post on Todd Hudon’s The Lean CLO Blog, Stop Worrying About Level 1, I cheered and said YES! And…

Todd’s point is right on. The most valuable learning experiences are generally uncomfortable moments, and they often don’t even happen in the training room. Even in the training room, though, trainers can often tell when participants are engaged by observing their behavior (not by using an evaluation form).

The best argument I can think of for Level 1 feedback is that it provides institutional memory. What happens if you – the rock star training facilitator of the organization – win the lottery and retire to your own private island in the Caribbean tomorrow? Or perhaps something more likely happens – you need to deliver the same presentation a year from now. Will you be able to remember the highlights (and the sections of your lesson that need to be changed)?

This point was brought home to me earlier this week when a co-worker was asked to facilitate a lesson someone else had presented back in the spring. I shared the lesson plan with my co-worker and his first question was: do we have any feedback on this session?

Searching through my files, I realized that my disdain for Level 1 feedback had led me to create a quick, too-general post-training evaluation form for that meeting, and it didn’t yield any useful feedback for this particular session.

In addition to questions about the overall meeting, I should have asked specific questions (both Likert-scale and open-ended) about each session during the meeting. Yes, this makes for a longer evaluation form, but if we’re going to ask learners to take the time to fill out the forms anyway, we may as well get some useful information from them!

I absolutely agree with the idea that the best, most powerful learning experiences happen on the job. And in a world where formal training experiences are still part of our annual professional development, we training professionals need to ensure we continue to build better and better learning experiences for our audiences, both by noting our own observations of each session and by crafting more effective ways of capturing our learners’ reactions.

What are some questions you’ve found particularly helpful on post-training evaluation forms?

Let me know in the comments section below (and perhaps it will be the subject of a future blog post!).

3 Training Lessons from Donald Kirkpatrick (Rest in Peace, Mr. Kirkpatrick)

Earlier in the week, I learned that Donald Kirkpatrick passed away.

[Photo: Donald Kirkpatrick]

As he did for many others in the learning and development space, he made a life-changing impact on how I view my work. Here are three simple yet profound ways in which his work influenced mine:

1. Smile sheets don’t justify a giant ego (if the feedback is good), nor are they the end of the world (if it’s bad). I first landed a job with “training” in its title about 8 years ago, and the way I measured my work was through end-of-training evaluation forms. I viewed them as my report card. Great evaluation scores would have me walking on top of the world for the rest of the afternoon. Poor scores were absolutely devastating.

I don’t remember where I first heard of Kirkpatrick’s 4 levels of training evaluation – perhaps on an ASTD discussion board, perhaps in T+D magazine. When I learned that post-training evaluation scores were the bottom rung of training evaluation, I felt liberated… until I realized that I had to be intentional about ensuring people learned something, that they could use new skills on the job, and that they could improve their performance in a measurable way. It was a completely different outlook on how to approach my work – a new and exciting challenge that wasn’t limited to the whims of post-training smile sheets.

2. Training should actually be transferred. Kirkpatrick’s 3rd level, “transfer,” had perhaps the most profound impact on my work. It’s a level that I, and I’m sure many of my colleagues (yes, even you, dear reader), continue to struggle with and do poorly at. After all, once people leave the training room, what control do we have over whether or how they choose to apply our brilliant lessons and content on the job?

It’s the simple act of being consciously aware that transfer of learning to the job is the Holy Grail of training design and evaluation that transforms training sessions from presenter-centered to learner-centered. And while transfer is extremely difficult to measure, top-notch trainers will always strive for efficacy at this level.

3. Bottom line: it’s a process, not an event. The first two items above naturally lead to this point: if training is about more than high evaluation scores, if training is about transfer to the job and the subsequent results that transfer can yield, then training must be a process, not a 1-hour or 1-day or 1-week event.

Aiming for the highest levels of Kirkpatrick’s evaluation model has inspired me to figure out what kinds of job aids might live beyond the training room, what kinds of communication with supervisors are necessary to forge partnerships with those who have more influence on whether learning is transferred to the job, and what kinds of longer-term evaluation instruments need to be integrated into the training’s design.

We spend more waking hours at work than we do with our families. When we learn how to be better at work, it can improve our quality of life. These three lessons made me a better training professional, and in turn improved my quality of life.

Though he’s no longer with us physically, I believe his legacy will continue to transform training programs for generations to come. Thank you, Donald Kirkpatrick, for sharing your talents and your work.

Evaluating Training Efforts

Moments ago, I cleared security at London’s Heathrow Airport. As I was re-packing my bag with my laptop and iPad, I noticed this little machine.

[Photo: smiley-button feedback machine at Heathrow security]

I tapped the super-smiley button.

I wonder what they do with the feedback. I’m guessing they use it the way many training professionals use similar feedback: if the percentage of super-smiley feedback is high, they probably advertise it internally, and perhaps even externally, to demonstrate a job well done.

Level 1 Feedback Limitations

The limitations of the “smile sheet” evaluation form are many. All it can really tell us is whether someone enjoyed their experience or not. Low scores should be a concern, but high scores don’t necessarily mean value was added. This sort of quantitative feedback can’t tell us why someone might give a low score. In the example at Heathrow Airport, I could hit the grumpy face, but it doesn’t help any of their supervisory or training staff improve anything. Did I hit the grumpy face because I had a terrible interaction with the security staff? Did I hit the grumpy face because I was pulled aside for random, extra screening? Did I hit the grumpy face because a group of other passengers claimed to be rushing to their airplane, which was departing in 10 minutes, and were therefore allowed to cut the lengthy security line while the rest of us waited patiently?

The Question Matters

I know many organizations – large and small – that measure training success by post-training evaluation scores. I understand the reason: as training professionals we want some type of metric that can demonstrate our value. But the minute someone starts asking “tough” questions like “What value is a 4.3 out of 5 adding to our company’s bottom line?”, the smile sheet metric can quickly lose its luster.

I wonder if the Heathrow staff would get more useful data if they changed their question.  Some ideas that came to mind include:

  • How did today’s security experience compare to previous experiences?
  • Will your flight be safer because of today’s security check?
  • Were you treated with respect and dignity during today’s security screening?

The list could go on. I understand that in order to have people participate, they’re limited to one question, and it needs to be simple. But “How was your security experience today?” depends on so many variables.

When it comes to post-training evaluation forms, I try to limit the number of questions I ask to three per module/topic:

  • This session will help me do my job better
  • The facilitator was an expert in this topic
  • The facilitator presented the topic in a way that kept me curious and interested

Depending on what I want to get out of the feedback, I may also ask a question about whether each learning objective was accomplished. At the end of the evaluation form, I’ll also include these two questions (a quick sketch of how per-question scores might be tallied follows the list below):

  • I felt engaged and participated throughout the day
  • I felt my fellow attendees were engaged and participated throughout the day
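
Tallying these is straightforward; here is a minimal sketch, assuming 5-point Likert responses keyed by question text. The response numbers are invented for illustration.

```python
from statistics import mean

# Invented 5-point Likert responses for one module, keyed by question.
responses = {
    "This session will help me do my job better":      [5, 4, 5, 4, 3],
    "The facilitator was an expert in this topic":     [5, 5, 5, 4, 5],
    "The facilitator kept me curious and interested":  [4, 3, 4, 5, 4],
}

for question, scores in responses.items():
    print(f"{mean(scores):.2f}  {question}")
```

Free-text comment boxes don’t reduce to a number like this, of course, which is exactly why they’re worth including alongside the scores.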

Again, these types of “Level 1” evaluation forms only take a snapshot of participants’ subjective feelings about how things went. Including blank text boxes for attendees to write additional comments can add some clues as to why they gave certain scores, but ultimately the value of training initiatives should be measured by specific behavior changes or performance improvements. Those types of measurements require additional feedback down the road – from attendees and, ideally, their supervisors.

Nonetheless, evaluation forms like this can begin to offer a hint of the value that training adds… if the questions are crafted well.

What questions have I missed that you’re asking your attendees? If you create elearning, would you ask anything different in the post-module evaluation?

The Train Like A Champion Blog is published Mondays, Wednesdays and Fridays.  If you think someone else might find this interesting, please pass it along.  If you don’t want to miss a single, brilliant post, be sure to click “Follow”!  And now you can find sporadic, 140-character messages from me on Twitter @flipchartguy.