A month and a half ago, I was sitting in a training session when someone from the marketing department stood up at the front of the room. She shared how she worked with the training team to follow up training events with communication and additional resources for participants.
Recently I asked this marketing professional to sit down and help me better understand how she works with the training team, and what’s in it for both the training and marketing teams. Here is what Emily Ledbetter had to say.
Recently, gamified LMS platform creators Growth Engineering released their list of “the top 20 L&D experts and influencers you need to know about in 2021.” Along with some of the titans of our industry such as Josh Bersin and Karl Kapp, there were also some nose-to-the-grindstone practitioners on the list.
One of the people on the list was Kevin Yates, a self-described L&D detective doing some of the most important work that can be done in our industry: uncovering the impact and value of learning initiatives.
I had a chance to talk with Kevin about what we should all be doing in the field of learning and development to play the role of L&D detective and uncover the impact and value of our own training initiatives.
On Monday, we shared a 10-minute podcast in which we spoke with ATD Puget Sound’s director of research, Sarah Schillen, about some of the things that stood out for us in ATD’s 2019 State of the Industry Report. Two questions should arise any time data like this is compiled into a report: what does this information mean, and how can it be acted upon?
Course design often includes creating an assessment of the skills gained during training. To truly assess a learner’s knowledge of a subject, you need more than just a question and a correct answer for them to choose; you need good distractors. There is magic in a good set of distractors that makes the learner analyze the choices in front of them and consider what the question is really asking. How do we accomplish that?
On Monday I asked readers to share their thoughts on the most compelling of twelve training evaluation metrics. Whether you’re trying to create a case for funding for a training program you feel is essential or you’re trying to market the value of your existing professional development offerings, using qualitative and quantitative training evaluation metrics will be an important part of your argument. Continue reading
Training can often be a tricky thing to measure. Just because it’s difficult doesn’t mean we shouldn’t try. After all, when the people who make budget decisions ask what the value of your training program is, you’ll need a good answer.
With today’s blog, I’m going to try something a little different. I’d like to get your thoughts, dear reader, on the following question. Share your thoughts in the comment section, and I’ll come back on Thursday to summarize everything contributed here and add my own thoughts as well.
The question: Which of the following is the BEST metric to measure a training program? Continue reading
A year ago, I found myself in Birmingham, AL, helping to lead a train-the-trainer session as part of the launch and rollout of a new sales training program.
A year later, we’ve been able to look at the impact of training through the lens of Kirkpatrick’s four levels of evaluation and see the results on each level, including double-digit sales growth for those stores that have implemented this program compared to those that haven’t. Continue reading
Shortly after this year’s Super Bowl, this statistical analysis began making its way around the Internet:
As a Buffalo Bills fan, I appreciated it.
As a training professional, I took it as a very good reminder that numbers can be deceiving.
So how do we make the best business case for training and professional development? Numbers can be helpful… if we use the right ones. Continue reading
I was scrolling through my Facebook feed last week and shrieked in horror when I noticed that a friend of mine was spreading this nonsense:
I understand that sharing things like this comes from a good place, but just because things have numbers in them doesn’t mean those numbers mean anything. In this particular case, these numbers are simply made up (don’t get me started on the topic of “overhead” and nonprofit funding). In many other cases, especially in the world of learning and development, the numbers may not be made up, but they need to be combined with a few more data points to be useful.
Let’s take a big one, ripped straight from ATD’s 2016 State of the Industry Report: Employees averaged 33.5 hours of training. Continue reading
Level 1 evaluation: gauging the learner’s reaction to training. Why is this such a tough thing to get right?
My world used to come crashing down when participants would score me anything under a 4 on a 5-point scale.
Then I started reading up on post-training evaluation forms and learned that there was no correlation between high scores and on-the-job performance, so I stopped using them completely.
Then my boss insisted I do level 1 evaluation forms, so I tried to make them more meaningful. Continue reading