Evaluating Training Efforts

Moments ago, I cleared security at London’s Heathrow Airport. As I was re-packing my bag with my laptop and iPad, I noticed this little machine.

Security Feedback

I tapped the super-smiley button.

I wonder what they do with the feedback. I’m guessing they use it the way many training professionals use similar feedback: if the percentage of super-smiley responses is high, they probably advertise it internally, and perhaps even externally, to demonstrate a job well done.

Level 1 Feedback Limitations

The limitations of the “smile sheet” evaluation form are many. All it can really tell us is whether someone enjoyed the experience. Low scores should be a concern, but high scores don’t necessarily mean value was added. This sort of quantitative feedback also can’t tell us why someone gave a low score. In the Heathrow example, I could hit the grumpy face, but that doesn’t help any of the supervisory or training staff improve anything. Did I hit the grumpy face because I had a terrible interaction with the security staff? Did I hit the grumpy face because I was pulled aside for random, extra screening? Did I hit the grumpy face because a group of other passengers claimed their flight was departing in 10 minutes and was therefore allowed to cut the lengthy security line while the rest of us waited patiently?

The Question Matters

I know many organizations – large and small – that measure training success by post-training evaluation scores. I understand the reason: as training professionals, we want some type of metric that can demonstrate our value. But the minute someone starts asking “tough” questions like “What value is a 4.3 out of 5 adding to our company’s bottom line?”, the smile sheet metric quickly loses its luster.

I wonder if the Heathrow staff would get more useful data if they changed their question.  Some ideas that came to mind include:

  • How did today’s security experience compare to previous experiences?
  • Will your flight be safer because of today’s security check?
  • Were you treated with respect and dignity during today’s security screening?

The list could go on. I understand that, in order to get people to participate, they’re limited to one question, and it needs to be simple. But the answer to “How was your security experience today?” depends on so many variables.

When it comes to post-training evaluation forms, I try to limit the number of questions I ask to three per module/topic:

  • This session will help me do my job better
  • The facilitator was an expert in this topic
  • The facilitator presented the topic in a way that kept me curious and interested

Depending on what I want to get out of the feedback, I may also ask whether each learning objective was accomplished. At the end of the evaluation form, I’ll also include these two questions:

  • I felt engaged and participated throughout the day
  • I felt my fellow attendees were engaged and participated throughout the day

Again, these types of “Level 1” evaluation forms only capture a snapshot of participants’ subjective feelings about how things went. Including blank text boxes for additional comments can add some clues as to why attendees gave certain scores. Ultimately, though, the value of training initiatives should be measured by specific behavior changes or performance improvements, and those measurements require additional feedback down the road – from attendees and ideally their supervisors.

Nonetheless, evaluation forms like this can begin to offer a hint of the value that training adds… if the questions are crafted well.

What questions have I missed that you’re asking your attendees? If you create elearning, would you ask anything different in the post-module evaluation?


What, me “adequate”?!

“While you certainly did an… adequate… job of facilitating today, it would be nice if you gave us some more time to talk things through tomorrow.” More was said, but I stopped hearing it after my facilitation was called “adequate.”

It was June 2007, and I was facilitating a 2-day workshop on diversity for the first time. At the end of the first day, we allowed the participants to provide some feedback. I was called adequate.

They say feedback is a gift. I suppose good (as in quality, not as in positive) feedback is a gift. But feedback can also hurt. Feedback can anger. Especially when a presenter’s perception of the event is different from the feedback giver’s perception. This is one reason high-performing athletes review a lot of video before and after game day. The videotape doesn’t lie.

In preparing for their 1999 World Cup final against China, the coaching staff of the US women’s national team traveled with a library of up to 80 videotapes. In The Girls of Summer, Jere Longman chronicles how the coaches used a sophisticated piece of video technology that “recorded every touch, every pass of the ball made by every Chinese player… [Assistant Coach] Gregg believed that, if videotape accounted for five percent of the American preparation, it provided valuable readiness.” The American women wanted as much analysis of their own performances, as well as their opponents’, as they could get. And they went on to win their second World Cup in 1999.

Few presenters have access to (or a need for) the sophisticated technology used by championship athletes. But even simple videotape, used to review, reflect on, and improve presentations, has been shown to improve performance. All the way back in 1975, researcher Lowell Ellett published the results of a study in AV Communication Review demonstrating the benefits of pairing videotape with a self-assessment tool. Teachers using the videotape and the self-assessment tool “significantly outperformed teachers… who did not use the self-rating instrument and consistently improved their performance from tape to tape.”

As powerful a tool as videotaping can be, it’s not always feasible. That shouldn’t stop a presenter from reflecting on and evaluating past performances to ensure an ever-improving delivery each time.

In The Fifth Discipline, Peter Senge highlights the Army’s use of the after-action review to evaluate performance. It’s a simple, three-step process that presenters would be wise to take advantage of in the immediate aftermath of a presentation. The three steps:

1) What did I intend?
2) What actually happened?
3) What accounted for the gap between what happened and what I intended?

For any presenter who uses a lesson plan, here is an expanded lesson plan format that can be filled out during a presentation or immediately thereafter.

Truth be told, “adequate” was probably an appropriate way to describe my facilitation in that diversity workshop. I just didn’t care to be given that type of feedback in front of the group. Actually, who am I kidding? I wouldn’t like to hear that feedback in any setting – public or private. But the feedback was so powerful (and something I never wanted to hear again) that I made a mental note of it and took it to heart. In every presentation since the “adequate incident,” I’ve made sure to balance the task at hand (delivering content) with the process (spending enough time in any given area to ensure participants are actually “getting it”). This modification in my style was only possible as a result of feedback, reflection, and then corrective action.