The Case for Net Promoter Score as a Measure of Presentation Effectiveness

When it comes to post-training evaluation forms, the rule of thumb to which I’ve adhered is: high scores may not guarantee learning happened, but low scores often guarantee learning didn’t happen.

For years I’ve tabulated and delivered feedback for countless sessions that have received Likert-scale scores well above 4 (on a 5-point scale), yet I knew deep down that some of these presentations weren’t as effective as they could be. How can we tell if the presentation was engaging and effective if the post-training evaluation scores are always high?

Several weeks ago I attended a Training Magazine-sponsored webinar on training metrics (view the recording here) and I was introduced to the idea of Net Promoter Score as a way to evaluate presentations. After some deliberation, my colleagues and I decided to test this concept out during a recent 2-day meeting. We added one question to our evaluation forms: “Would you recommend this session to a colleague?”
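For anyone unfamiliar with the mechanics: standard Net Promoter methodology asks the recommend question on a 0-10 scale, counts 9s and 10s as promoters and 0-6 as detractors, and reports the percentage of promoters minus the percentage of detractors, yielding a score between -100 and +100. Here is a minimal sketch of that arithmetic in Python; the function name and the sample responses are invented for illustration, not taken from our actual forms:

```python
def net_promoter_score(ratings):
    """Percentage of promoters (ratings of 9-10) minus percentage of
    detractors (0-6). Passives (7-8) count toward the total but
    neither add nor subtract."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Ten hypothetical responses to the recommend question
ratings = [10, 9, 9, 8, 8, 7, 7, 6, 6, 5]
print(net_promoter_score(ratings))  # 0.0 (three promoters, three detractors)
```

Notice that those same ten responses average a respectable-looking 7.5, while the NPS of 0 tells a far less flattering story. That gap is exactly what we stumbled onto in our own data.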

Following are the scores from our traditional question about whether people were exposed to new insights, information or ideas on a scale of 1-5 (5 representing “Strongly Agree”):

[Chart NPS 1: average 1-5 Likert scores by session]

Not too shabby. Apparently we were exposing people to new insights, information and ideas! So that’s good, right? Who cares whether presenters were engaging or boring, stuck to their lesson plans or went off script? All the scores averaged between 4 and 5. Yay for us!

Then we took a look at the same sessions through the lens of Net Promoter Score, and this is what we found:

[Chart NPS 2: Net Promoter Scores by session]

These scores showed some variance, but they didn’t tell much of a story until we put the two sets of scores side by side:

[Chart NPS 3: Likert averages and Net Promoter Scores side by side]

People may have been exposed to some new ideas or insights in each session, but would they put their own reputations on the line and recommend the session to their colleagues? It depends. Through the Net Promoter lens, there’s a huge difference between the presentations whose Likert averages topped 4.5 and the presentations that drew an average of 4.2.
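To make that gap concrete, here is a hypothetical comparison, with invented numbers rather than our actual results, showing how two sessions with similar-looking Likert averages can land far apart on the Net Promoter scale:

```python
def net_promoter_score(ratings):
    # Same calculation as the sketch above: % promoters (9-10) minus % detractors (0-6)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Invented 0-10 recommend responses for two sessions whose 1-5 Likert
# averages looked similar (roughly 4.5 and 4.2)
strong_session = [10, 10, 9, 9, 9, 9, 9, 8, 10, 9]
tepid_session = [9, 8, 8, 7, 7, 7, 6, 6, 6, 5]

print(net_promoter_score(strong_session))  # 90.0 (nine promoters, no detractors)
print(net_promoter_score(tepid_session))   # -30.0 (one promoter, four detractors)
```

A 0.3-point gap on a 5-point Likert scale looks trivial; a 120-point gap on the Net Promoter scale is impossible to shrug off.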

Here are three reasons why I think this matters:

1. A Wake-up Call. In the past, someone could walk away from a meeting with a score of 4.13 and think to himself: “Well, I wasn’t as sharp as I could have been, but people still liked that session, so I don’t really need to work to improve my delivery… and furthermore, who cares about all these adult learning principles that people keep telling me I need to include?!”

However, if that same presenter sees a Net Promoter Score of 6 or 19 or 31 (on a scale where the highest possible score is 100), the reaction is very different. People suddenly seem a little more interested in tightening up their next presentation – rehearsing a little more seriously, having instructions for activities down cold, sticking more closely to their lesson plans.

2. Before On-the-Job Application, People Need To Remember Your Content. Some L&D practitioners care less about whether a presentation was engaging, focusing instead on whether someone actually does something new, different or better on the job. To this, I say: “Yes, and…”

Yes, better performance is the ultimate goal for most training programs.

And, you can’t do something new, different or better if you weren’t paying attention to the presentation. You can’t do something better if you’ve forgotten what you learned before you go to bed that evening. While better job performance matters, the presenter plays a big role in whether people remember the content and are excited to use it when they return to their offices.

3. Marketing Matters. The principal objection to Net Promoter Score as a training evaluation tool, articulated very well in this post from Dr. Will Thalheimer, is that it is designed for marketing, not for measuring training effectiveness. I would argue that L&D professionals must have some marketing chops in order to generate interest in their programs. After all, Dr. Thalheimer also cited a significant body of research that “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” If influential people wouldn’t recommend your presentation, research shows that you have a problem.

What do you think? Is Net Promoter Score something you’ve used (or are thinking about using)? Or is it a misguided metric, not suitable for L&D efforts?


Start Worrying (A Lot) More About Level 1

I generally consider Level 1 evaluation forms to be a waste of time and energy, so when I read Todd Hudon’s post on The Lean CLO Blog this week, Stop Worrying About Level 1, I cheered and said YES! And…

Todd’s point is right on. The most valuable learning experiences are generally uncomfortable moments, and they generally don’t happen in the training room at all. Even in the training room, trainers can often tell when participants are engaged by observing their behavior (not by consulting an evaluation form).

The best argument I can think of for Level 1 feedback is that it provides institutional memory. What happens if you – the rock star training facilitator of the organization – win the lottery and retire to your own private island in the Caribbean tomorrow? Or perhaps something more likely happens – you need to deliver the same presentation a year from now. Will you be able to remember the highlights (and the sections of your lesson that need to be changed)?

This point was brought home to me earlier this week when a co-worker was asked to facilitate a lesson someone else had presented back in the spring. I shared the lesson plan with my co-worker and his first question was: do we have any feedback on this session?

Searching through my files, I realized that my disdain for Level 1 feedback had led me to create a quick, too-general post-training evaluation form for that meeting, and it didn’t yield any useful feedback for this particular session.

In addition to questions about the overall meeting, I should have asked specific questions (both Likert-scale and open-ended) about each session within the meeting. Yes, this makes for a longer evaluation form, but if we’re going to ask learners to take the time to fill out the forms anyway, we may as well get some useful information from them!

I absolutely agree with the idea that the best, most powerful learning experiences happen on the job. And in a world where formal training is still part of our annual professional development, we training professionals need to keep building better and better learning experiences for our audiences, both by noting our own observations of each session and by crafting more effective ways of capturing our learners’ reactions.

What are some questions you’ve found particularly helpful on post-training evaluation forms?

Let me know in the comments section below (and perhaps it will be the subject of a future blog post!).


Sugar? Spice? Everything Nice? What Are The 5 Most Fundamental Ingredients For An Effective Trainer?

This week, I’ve been in India working with a training colleague to prepare for the rollout of a new curriculum. This colleague is a novice trainer, and her supervisor doesn’t have any training background. But the quality and effectiveness of this colleague’s work will have a significant impact on the work of our entire global mission, so it’s vital that she blossoms into a high-quality training professional.

In order to focus our efforts to develop this colleague professionally, I built a competency rubric by which we can rate her current level of training proficiency, create a development plan with several specific goals based on her needs, and then work toward those goals.

To create this rubric, I wanted to capture the essence of what I feel are the five most basic, fundamental, foundational competencies for anyone who trains. The five competencies I included in the rubric (which can be downloaded by clicking here) are:

  1. Presentation Skills
  2. Creativity
  3. Body Language
  4. Adult Learning Principles
  5. Subject Matter Expertise (if you download the rubric, you’ll see that this section is customized to the specific industry I’m working in – eye banking)

Getting Started

This is not a scientifically developed list. It’s not intended to be. There are many factors that will impact any facilitator’s ability to be effective: experience level, support from trainees’ supervisors, policies and procedures that allow trainees to implement new skills and abilities when they return to their jobs after training, and so on. That list of factors can indeed be overwhelming, especially for someone new to the learning and development profession.

This list is simply intended to get a novice trainer focused on some of the key elements within their circle of influence (to borrow a term from Stephen Covey). It is a tool for establishing a baseline of current skills, abilities, knowledge and behaviors, and one I plan to use to identify performance gaps and set specific goals.

What Do You Think?

If you find this rubric to be helpful in assessing your own current set of training skills and abilities, please use it… and then tell me how it works for you.  If you feel there are more important, more basic, more foundational skills that trainers (especially novice trainers) should be focused on, I invite you to share those ideas in the comments section below.

What, me “adequate”?!

“While you certainly did an… adequate… job of facilitating today, it would be nice if you gave us some more time to talk things through tomorrow.” More was said, but I stopped hearing it after my facilitation was called “adequate.”

It was June 2007, and I was facilitating a 2-day workshop on diversity for the first time. At the end of the first day, we allowed the participants to provide some feedback. I was called adequate.

They say feedback is a gift. I suppose good (as in quality, not as in positive) feedback is a gift. But feedback can also hurt. Feedback can anger. Especially when a presenter’s perception of the event differs from the feedback giver’s. This is one reason high-performing athletes review so much video before and after game day. The videotape doesn’t lie.

In preparing for their 1999 World Cup final match against China, the coaching staff of the US women’s national team traveled with a library of up to 80 videotapes. In The Girls of Summer, Jere Longman chronicles how the coaches used a sophisticated piece of video technology that “recorded every touch, every pass of the ball made by every Chinese player… [Assistant Coach] Gregg believed that, if videotape accounted for five percent of the American preparation, it provided valuable readiness.” The American women wanted as much analysis of their own performances, and of their opponents’, as they could get. And they went on to win their second World Cup that year.

Few presenters have access to (or need) the sophisticated technology used by championship athletes. But even simple videotape, used to review, reflect on and improve presentations, has been shown to boost performance. All the way back in 1975, researcher Lowell Ellett published a study in AV Communication Review demonstrating the benefits of pairing videotape with a self-assessment tool. Teachers using the videotape and the self-assessment tool “significantly outperformed teachers… who did not use the self-rating instrument and consistently improved their performance from tape to tape.”

As powerful a tool as videotaping can be, it’s not always feasible. That shouldn’t stop a presenter from reflecting on and evaluating past performances in order to ensure an ever-improving delivery.

In The Fifth Discipline, Peter Senge highlights the Army’s use of the after-action review to evaluate performance. It’s a simple, three-step process that presenters would be wise to take advantage of in the immediate aftermath of a presentation. The three steps are:

1. What did I intend?
2. What actually happened?
3. What accounted for the gap between what happened and what I intended?

For any presenter working from a lesson plan, here is an expanded lesson plan format that can be filled out during a presentation or immediately thereafter.

Truth be told, “adequate” was probably an appropriate way to describe my facilitation in that diversity workshop. I just didn’t care to be given that type of feedback in front of the group. Actually, who am I kidding? I wouldn’t like to hear that feedback in any setting – public or private. But the feedback was so powerful (and something I never wanted to hear again) that I made a mental note of it and took it to heart. In every presentation since the “adequate incident” I’ve made sure to balance the task at hand (delivering content) with the process (spending enough time in any given area to ensure participants are actually “getting it”). This modification in my style was only possible as a result of feedback, reflection and then corrective action.