Summary of "Evaluation and eLearning" by Debra Peak

Week 5 & 6
Review
Turkish Online Journal of Distance Education-TOJDE January 2006 ISSN 1302-6488 Volume: 7 Number: 1 Article: 11 by Debra PEAK & Zane L. BERGE

This article by Peak considers levels of evaluation as modeled by Kirkpatrick in 1994, which I had to look up, as follows:

[Image: pyramid diagram of Kirkpatrick's four levels of evaluation]
Kirkpatrick's Four Levels of Evaluation
In Kirkpatrick's four-level model, each successive evaluation level is built on information provided by the lower level. The four levels are: 1) Reaction, 2) Learning, 3) Behavior, and 4) Results.
ASSESSING TRAINING EFFECTIVENESS often entails using the four-level model developed by Donald Kirkpatrick (1994). According to this model, evaluation should always begin with level one, and then, as time and budget allow, should move sequentially through levels two, three, and four. Information from each prior level serves as a base for the next level's evaluation. Thus, each successive level represents a more precise measure of the effectiveness of the training program, but at the same time requires a more rigorous and time-consuming analysis.

This paper compares the evaluation of elearning with the evaluation of traditional classroom instruction. It considers why level 4 evaluations are so difficult to do and why elearning evaluation has evolved to include return-on-investment calculations.


Peak suggests that the main difference between the two delivery methods centers on the data collection methods. Both level 1 and level 2 evaluations (student evaluations and student results) are easier to build into elearning courseware, thanks to the use of computer technology.
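As a rough illustration of what "built into the courseware" could look like, here is a minimal sketch (the function name and data fields are my own invention, not from Peak's article) of courseware logging level 1 and level 2 data automatically as learners work through a module:

```python
import csv
from datetime import datetime, timezone

# Hypothetical event logger: the courseware appends one row per learner event,
# so level 1 data (reaction ratings) and level 2 data (quiz scores) are captured
# as a side effect of taking the course, with no separate paper survey or marking pass.
def log_evaluation_event(path, learner_id, level, measure, value):
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # when the event happened
            learner_id,                              # who generated it
            level,                                   # Kirkpatrick level (1 or 2)
            measure,                                 # e.g. "satisfaction" or "quiz_score"
            value,                                   # the rating or score itself
        ])

# Level 1 (reaction): rating collected by a pop-up at the end of a module
log_evaluation_event("evaluation_log.csv", "student042", 1, "satisfaction", 4)
# Level 2 (learning): score from a quiz embedded in the module
log_evaluation_event("evaluation_log.csv", "student042", 2, "quiz_score", 85)
```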

Level 4 evaluation - results - measures the success of the program (Winfrey, 1999). Peak asks why these are so difficult to measure and suggests two reasons:

  1. Organisations think that the results must be quantifiable and that a negative result implies that training has failed.

  2. "Trainers belive that they must be able to show proof beyond a reasonable doubt that business improvements were a direct result of training." (peak)

There is now a push for level 5 evaluations. ROI evaluations are liked by managers and senior executives. Peak points out that there is difficulty in converting educational outcomes into monetary values, and many organisations do not even attempt to. Peak then gives steps towards finding a rate of return.
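Peak's steps are not reproduced here, but the rate-of-return formula normally used in training evaluation is net programme benefits divided by programme costs, times 100. A small sketch with made-up figures (the numbers below are purely illustrative and not from the article; as Peak notes, estimating the benefits figure in monetary terms is the hard part):

```python
# Standard training ROI formula: ROI (%) = (net benefits / costs) * 100,
# where net benefits = monetary value of benefits - programme costs.
def roi_percent(benefits, costs):
    net_benefits = benefits - costs
    return (net_benefits / costs) * 100

benefits = 120_000  # estimated monetary value of improved performance (the difficult estimate)
costs = 80_000      # total cost of designing and delivering the elearning
print(f"ROI = {roi_percent(benefits, costs):.0f}%")  # prints: ROI = 50%
```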

Peak concludes with this statement in support of ROI evaluations: 'Regardless of what is measured or how, the consensus seems to be that what is important is that business values are finally being attached to the corporate learning experience. Holly Burkett, an ROI evaluator at Apple Computer, stated “For me, it’s more empowering to know that our department’s work has a direct impact on performance, productivity, or sales than it is to know that people enjoy the training program” (Purcell, 2000).'


Ref

  1. Peak, D. & Berge, Z. L. (2006). Evaluation and eLearning. Turkish Online Journal of Distance Education (TOJDE), 7(1), Article 11, ISSN 1302-6488. UMBC, 1000 Hilltop Circle, Baltimore MD 21250.

  2. Winfrey, E. C. (1999). Kirkpatrick's Four Levels of Evaluation. In B. Hoffman (Ed.), Encyclopedia of Educational Technology. Retrieved April 14, 2009, from http://coe.sdsu.edu/eet/Articles/k4levels/start.htm

1 comment:

  1. This pyramid looks interesting. One question I had was whether we would need to push through from the initial reaction to investigate the actual learning. Sometimes the initial reaction to elearning can be "go away, it's too much to take on right now", when if you actually implement it and students/staff get used to it, it improves learning. Just a thought?
