Shortly after this year’s Super Bowl, this statistical analysis began making its way around the Internet:
As a Buffalo Bills fan, I appreciated it.
As a training professional, it served as a good reminder that numbers can be very deceiving.
So how do we make the best business case for training and professional development? Numbers can be helpful… if we use the right ones.
Level 0: “Butts in Seats”
A colleague recently told me the story of a highly placed training director in her organization boasting that his team had trained over 1,000 people in the past year. For an organization that has 30,000 staff, that number rang hollow for a number of reasons.
First, if his employer espouses the leadership principle of "hire and develop the best," then this training leader had better quickly find a way to reach the other 29,000 people. Second, the "butts in seats" metric is truly a vanity metric unless it's connected to other measures of training success.
The value of the butts in seats metric: This metric demonstrates activity of the training team and the general interest level that others have in your training offerings. It can show the effectiveness of your marketing strategy to bring people to your training and it can show how valuable others perceive your offerings.
There are too many times when training has been requested – especially eLearning – only for people to never actually complete the course.
Level 1: Participant Reaction
This is a measure of training evaluation that carries a lot of strong feelings on both sides. Some deride post-training evaluations with the condescending term: “smile sheets”. Others, particularly decision-making executives who are looking for some sort of way to measure the effectiveness of training initiatives, often request Level 1 feedback since it’s the easiest and most readily available measure of training.
In my early years as a trainer, I used to live and die by the post-training evaluation forms. If I had a better average score than other sessions at a conference, I had… what did I have?
Was my session better? No, not necessarily.
Meh. Whatever. I had bragging rights!
Of course, that’s not necessarily the reason that training sessions are offered, is it?
And then, if I had a few negative comments, I’d be shattered. I’d like to think my opinion of this particular metric has matured.
The value of the participant reaction metric: Will Thalheimer wrote an entire book on the importance of making sure you’re asking the right questions so that you’re getting participant feedback on the right things.
Simply saying “participants rated this training as a 4.5 out of 5” won’t mean much if an executive is savvy enough to ask a follow-up question such as: “What does 4.5 out of 5 actually mean?”
My team was able to use post-training evaluation on an eLearning module we built to demonstrate that, by a 2:1 margin, people who completed it found the choose-your-own-adventure-style customer scenarios to be the most valuable piece of the module. Open-ended feedback shed more light on this data point when people wrote things such as: "It put me in real-life scenarios and I learn better from hands-on training."
We asked specific post-training feedback questions about this feature because we had been hired to design a different type of training. This data allowed us to offer evidence that the different approach to training design was noticed and appreciated by participants.
Level 2: Are People Smarter?
People should probably know more when they leave your training experience than they knew beforehand.
This is typically done using some sort of pre/post testing. These numbers can look sexy as long as the person you’re showing these numbers to doesn’t ask a lot of questions.
A few years ago, I designed a multi-week training program for people working in the foster care system and we were required to show examples of training program effectiveness to our funder. We used pre/post-test data that was quite compelling – the post-test scores of participants rose by 19%.
The problem here is that, according to everything we know about how the brain works, people begin forgetting things when they leave the training room. So if the post-test is administered at the end of the training session, the information will still be fresh in people’s minds. A more compelling post-test data set would come by administering the post-test a week or two after the training ends – this will give a more realistic idea of whether people actually learned and retained new concepts.
The value of the “are people smarter?” metric: If you can offer quantitative numbers that show that people have learned and retained new concepts (following the training, 100% of participants can correctly list the three most important principles of effective business writing), this number could be a more compelling business case for training than reporting that people rated your program 4.5 out of 5.
Level 3: Applying the Skills in Real Life
This is why people attend your session or complete your module – to bring new skills to their craft. This data can be gleaned from post-training surveys in which participants self-report what they're doing, or from managers sharing their own observations.
The value of the on-the-job application metric: After one training session, I followed up with an executive to find out if he'd heard whether people were actually using the skills they'd learned in the training and he wrote: "One manager took the knowledge and skills back to his store and realized a 67% increase over the average weekly sales within the first four days back in his store." That's a powerful testimonial that actually combines Levels 3 and 4.
The thing about collecting data to make the business case for training programs is that the higher the level, the more difficult it can be to collect the information. But it doesn’t mean we should stop trying. Simply asking managers or training participants is one way to collect the data. Response rates will naturally go down when time has passed after a training program, but it doesn’t mean we shouldn’t try to find some data or anecdotes.
If participants or managers are too busy to respond to requests for information, then get out from behind your desk and go observe others in action. This is more labor-intensive and may not offer a large sample size for meaningful quantitative data, but could offer powerful case studies.
Level 4: Impact
Some people group return-on-investment and impact in the same data. This is a tricky level at which to measure because there are a lot of factors that could impact ultimate performance – training being one of them.
Of course, there are a lot of factors that go into any business metric. Identifying numbers that could offer a correlation between training and business results can help with your business case for training (as long as you’re prepared to admit that other factors could also come into play).
Sales training might be one of the easiest types of training efforts to try to quantify. One client reported to me that the stores that had gone through our training had realized a 13% increase in sales compared to those stores that hadn't yet rolled the training out. While many other factors were at play, this did provide a sort of control group (those who hadn't yet rolled out the training) and test group (those who had).
I’ve also used overall organizational performance before and after the addition of a training staff member to illustrate the benefit of adding training headcount to an organization (here is a case study of this specific example that was published in TD magazine). In short, we had metrics to demonstrate that managers could spend less time training and more time managing, which allowed the department to grow and scale with the addition of one person wholly focused on training.
The value of the impact metric: At the end of the day, if you have 1,000 people attending your training, that’s really just an extra cost to the organization. You’re costing the organization with your own salary and benefits and you’re costing the organization with the time of the 1,000 people who are spending lord knows what kind of average hourly salary attending your training.
However, if you can connect the dots between what those 1,000 people are doing differently and how it's impacting the organization – through greater efficiencies, reduced errors and re-work, growth in sales or productivity – you've made a case for training, for you, and for your team.
What success have you had using data to make a case for training? Willing to share?