Kirkpatrick's Ten Requirements for an Effective Training Program

Learning & Development

Training & Evaluation guidance for delivering quality education programs: “We strongly suggest that you take the right steps to ensure that training is actually accomplishing what it was intended to do and contributing to the bottom line. Don’t think about evaluation in terms of demonstrating overall value until you are sure you have done all you can to ensure that your training programs are effective” (pg. 3, Kirkpatrick & Kirkpatrick, 2009).

Ten Requirements for an Effective Training Program

  1. Base the program on the needs of the participants.
    • Conduct a needs analysis with the learners: what do they need to learn, and what does the organisation need to develop?
    • View from the perspective of managers and the organisation.
  2. Set learning objectives.
    • What is expected to be learned
    • Any behaviour or cultural changes?
  3. Schedule the program at the right time.
    • Best method of delivery and time/day for the learners. Engage a positive mindset from the start.
  4. Hold the program at the right place with the right amenities.
    • Right location for appropriate amenities and travel time.
  5. Invite the right people to attend.
    • The right number of attendees, with the right mix of seniority among team members.
  6. Select effective instructors.
    • Internal or external subject matter experts.
  7. Use effective techniques and aids.
  8. Accomplish the program objectives (return to point 2).
  9. Ensure participant satisfaction.
  10. Evaluate the program.


Kirkpatrick, D. L. & Kirkpatrick, J. D. (2009). Implementing the four levels: A practical guide for effective evaluation of training programs. Berrett-Koehler Publishers [excerpt].

An Evaluation of Course Evaluations

Journal Club Article: Stark, P. & Freishtat, R. (2014). An Evaluation of Course Evaluations. University of California, Berkeley.

The magic of evaluation: every nurse educator’s way of measuring teaching effectiveness. Or is it?

This discussion paper by Stark & Freishtat (2014) provides a well-balanced discussion of evaluations. In my experience as an educator in both hospital and university settings, evaluation is deemed important in measuring the quality of education.

“Among faculty, student evaluations of teaching (SET) are a source of pride and satisfaction - and frustration and anxiety.” (pg. 2)

Because tenure and promotion are tied to evaluation results, the pressure is on to make sure your students provide a positive evaluation. Imagine the pressure this potentially places on educators to please students. What about holding students to account, or daring to fail one, knowing the poor evaluation you may receive? This puts a different spin on the magic of evaluations.

“students are in a good position to evaluate some aspects of training, but SET are at best tenuously connected to teaching effectiveness.” (pg. 3)

A rating scale from “effective” to “not effective at all” is easy to administer and takes little time and few resources. Student feedback is without doubt an essential part of course evaluation, but more thorough and in-depth approaches could be utilised; they simply require more resources.

Can one educator really be effective for the mix of students across age, sex, background, learning style and abilities?

“there’s no reason nonresponders should be like responders - and good reasons they might not be. For instance, anger motivates people to action more than satisfaction does.” (pg. 4)

Statistics and SETs:

  • What is the minimum response rate required to be classified as ‘representative’ for further analysis?
  • Is using these statistics to create averages to compare to past courses, against departmental averages or wider organisation data meaningful?
  • Do we really consider these comparisons safe assumptions about the quality of learning?

“What is effective teaching? One definition is that an effective teacher is skillful at creating conditions conducive to learning. Some learning happens no matter what the instructor does. Some students do not learn much no matter what the instructor does. How can we tell how much the instructor helped or hindered?” (pg. 10)

Stark & Freishtat highlight that measuring learning is extremely hard: at present we measure what students say, not actual teacher effectiveness, and pretend the two are the same. Of course, the same arguments apply to using course grades to measure quality, so what is left? Observation of teaching over a period of time, with feedback provided to educators, could be aligned with the SET data to give a bigger-picture overview.

To Counteract: The Data Argument

[Image: “Without Data”]


Stark, P. & Freishtat, R. (2014). An Evaluation of Course Evaluations. University of California, Berkeley.