Summary of EASEIT-ENG project
Development of a practical Evaluation Methodology for the production of Engineering learning Technology Case Studies by the EASEIT-ENG project.
I found this paper through its reference to engineering and evaluation. EASEIT-ENG stands for Evaluative and Advisory Support to Encourage Innovative Teaching – Engineering, a group established in the UK whose aim is to assist engineering academics in selecting learning technology material, for use in their teaching, that is right for both them and their students. They have done this by getting a number of tertiary education providers to evaluate the engineering computer-based learning and teaching materials they are using or trialling. These evaluations are then put into a databank and made available to academic tutors. The document summarizes each aspect of the project and provides references.
The project put together a manual from which evaluations can be carried out. The manual includes:
1. Client contact details
2. Fact sheets
3. Pre-evaluation questionnaire
4. Student questionnaire
5. Observation schedule
6. Guidance on focus groups
7. Tutor interview structures
8. Evaluation report template and feedback questionnaire
That is, a lot of the same type of data collection that I (we) intend to do. The actual questionnaires are available online and will be a valuable resource as a template for my evaluation.
As a list of evaluated software it was only a little useful, but the evaluation process that was described gave an idea of the detail required in doing an evaluation, and that was very useful. It gave a summary of what the project revealed about engineering learning technology materials, which was very interesting and varied. One point that interested me was "that academics seemed convinced of the pedagogic benefits of using learning technology but were less convinced that the right material was available for them or that there would be any time saving associated with its use."
Finally, the reference section gave good references to elearning materials for engineering.
Reference
Baker, P., Lamb, F., & Rothberg, S. (October 2005). Development of a practical Evaluation Methodology for the production of Engineering Learning Technology Case Studies by the EASEIT-ENG project. The Higher Education Academy – Engineering Subject Centre.
URL http://www.easeit-eng.ac.uk/
Outline of Evaluation Project
Needs Assessment
- Who the users are
- Type of users
- Cost-effectiveness
- Where the gaps are
Reason for a 'needs analysis'
I believe that a review of current practice would reveal that little elearning is encompassed within the teaching of engineering. At best there are electronic notes and assignments made available to students in an electronic filing cabinet. I would like to see more dynamic use of elearning in the form of quizzes and possibly group discussions.
Students who study engineering tend to be kinesthetic learners and already use computers in some engineering courses, such as computer aided design, so they have a familiarity with computers. I suspect that the main reason elearning instruction is not more widely used is that those who teach engineering do not have the skill or knowledge to do so.
The strategic plan of this polytechnic is to provide a state-of-the-art facility and learning environment, which must include the use of modern techniques (such as elearning) where appropriate, and to be student focused. So there is a directive from above that encourages elearning to be developed into our courses. Since we have the job of running the courses for the first time, we have the opportunity to do this.
There is a need for a ‘needs analysis’ to find out whether what I believe is true and whether elearning would be aligned with the strategic plan.
What is to be evaluated.
It is intended that elearning be introduced slowly, as instructors get up to speed with the technology. So a pilot is proposed, one which includes the use of interactive learning material with follow-up quizzes, and a site which will make material from face-to-face (f2f) instruction available for review.
I would also like to check the viability of using a discussion board. I am not sure how this could be used within the engineering curriculum, but there may well be instances where it would be appropriate.
How the evaluation will work
It will use a 'multiple methods' model to collect data and information. The methods I aim to use are:
Student survey – to find student demographics and learner types.
Focus groups – get students to trial a quiz and give their opinion. I will endeavour to test whether learning outcomes can be met. I will give them a small assessment before the trial and another assessment after the trial, to try to measure effective change to learning outcomes. Both tests cover a range of outcome levels and should give some quantifiable results (a rough sketch of how the pre- and post-trial scores could be compared follows this list). I am hoping to have 10 to 15 students in my focus group.
Post-trial survey – a separate survey after students have completed the trial, to see what their feelings were on using elearning as part of their learning. This survey would have open-ended questions and the results would have to be considered in consultation with other instructors.
Expert review – peers to trial a quiz and be interviewed for their opinions.
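As a rough sketch of how the focus group's pre- and post-trial marks could be turned into quantifiable results, the snippet below computes a simple gain per student. This is my own illustration, not taken from any of the papers reviewed here; the marks and list names are invented.

# Minimal sketch (my own, not from any of the papers reviewed here) of how
# the focus group's pre- and post-trial quiz marks could be compared.
# Assumes each student's two marks are out of the same total and are
# stored in the same order in both lists.
from statistics import mean, stdev

pre_scores = [12, 15, 9, 14, 11, 16, 10, 13, 12, 15]    # hypothetical marks
post_scores = [15, 17, 12, 16, 14, 18, 12, 15, 14, 17]  # hypothetical marks

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]

print(f"Mean gain per student: {mean(gains):.2f}")
print(f"Spread of gains (standard deviation): {stdev(gains):.2f}")
print(f"Students who improved: {sum(g > 0 for g in gains)} of {len(gains)}")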
Summary – Needs Assessment by Allison Rossett
Rossett gives a general overview of needs assessment with the emphasis on occupational instructional training. She gives a number of examples that are corporate based – one that runs through the whole discussion is of soft and limp French fries in a fast food joint.
She defines it as follows: “Needs assessments gather information to assist professionals in making data-driven and responsive recommendations about how to solve the problem or introduce the new technology. While needs assessment may lead to the development of instruction, it does not always do so. The important role that needs assessment plays is to give us information, at the beginning of the effort, about what is needed to improve performance.” She defines five purposes for doing needs assessments:
- To define Optimal performance
- To find out what is happening with actual performance
- To know how learners feel about the problem and the training
- To find the cause of performance problems
a. Lack the skill or knowledge
b. The environment is in the way
c. There are no or improper incentives
d. Lack of motivation
- To find solutions
Rossett then talks about how to conduct a needs assessment, giving five steps:
- Determine purpose based on Initiators
a. Performance problems
b. New stuff/technologies
c. Mandates from above
- Identify sources
- Select tools
a. Interviewing
b. Observing performance
c. Examine records and outcomes
d. Facilitating groups
e. Surveying through questionnaires
- Conduct the needs assessment in stages
- Use findings for decision making.
“Deciding to observe, interview or survey isn't the hard part. The challenge is to determine sources, plan stages of assessment, and frame questions that will enable a professional to move crisply from a general problem with limp French fries to a series of actions and recommendations that will guarantee yummy ones.”
Summary of "Evaluation and eLearning" by Debra Peak
Review
Turkish Online Journal of Distance Education-TOJDE January 2006 ISSN 1302-6488 Volume: 7 Number: 1 Article: 11 by Debra PEAK & Zane L. BERGE
This article by Peak considers levels of evaluation as modelled by Kirkpatrick in 1994, which I had to look up, as follows:
Kirkpatrick's Four Levels of Evaluation
In Kirkpatrick's four-level model, each successive evaluation level is built on information provided by the lower level.
ASSESSING TRAINING EFFECTIVENESS often entails using the four-level model developed by Donald Kirkpatrick (1994). According to this model, evaluation should always begin with level one, and then, as time and budget allows, should move sequentially through levels two, three, and four. Information from each prior level serves as a base for the next level's evaluation. Thus, each successive level represents a more precise measure of the effectiveness of the training program, but at the same time requires a more rigorous and time-consuming analysis.
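Since the article refers to the levels only by number from here on, here is a small note-to-self sketch (my own shorthand, not from Peak or Winfrey) of the four levels and what each one measures:

# Kirkpatrick's four levels as a quick lookup: level -> (name, what it measures).
# This is just my own shorthand for reference while reading the article.
kirkpatrick_levels = {
    1: ("Reaction", "how learners felt about the training"),
    2: ("Learning", "what knowledge or skills were gained"),
    3: ("Behaviour", "whether on-the-job behaviour changed"),
    4: ("Results", "the impact on organisational outcomes"),
}

for level, (name, measures) in kirkpatrick_levels.items():
    print(f"Level {level} - {name}: {measures}")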
This paper compares evaluating elearning with evaluating traditional classroom instruction. It considers why level 4 evaluations are so difficult to do and why elearning evaluation has evolved to include return-on-investment calculations.
Peak suggests that the main difference between the two delivery methods centres on the data collection methods. Both level 1 and level 2 evaluations (student evaluations and student results) are easier to build into elearning courseware, due to the use of computer technology.
Level 4 evaluation – results – measures the success of the program (Elaine C. Winfrey). Peak asks why these are difficult to measure and suggests two reasons:
- Organisations think that the results must be quantifiable and that a negative result implies that training has failed.
- "Trainers belive that they must be able to show proof beyond a reasonable doubt that business improvements were a direct result of training." (peak)
There is now a push for level 5 evaluations. ROI evaluations are liked by managers and senior executives. Peak points out that there is a difficulty in converting educational outcomes into monetary values, and many organisations do not even attempt to. Peak then gives steps towards finding a rate of return; a rough sketch of the basic calculation is given below.
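The usual rate-of-return arithmetic behind level 5 is net programme benefits over programme costs. A minimal sketch with invented figures, assuming the learning outcomes have already been converted to monetary benefits (the step Peak notes is the hard part):

# Minimal sketch of the standard ROI calculation used in level 5 evaluation.
# The figures are invented; monetising the educational outcomes is the
# difficult step Peak describes, and is simply assumed to be done here.

def roi_percent(benefits: float, costs: float) -> float:
    """Return on investment as a percentage: net benefits divided by costs."""
    return (benefits - costs) / costs * 100

programme_costs = 20_000     # development plus delivery (hypothetical)
programme_benefits = 28_000  # monetised outcomes (hypothetical)

print(f"ROI: {roi_percent(programme_benefits, programme_costs):.1f}%")  # ROI: 40.0%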
Peak concludes with this statement in support of ROI evaluations: 'Regardless of what is measured or how, the consensus seems to be that what is important is that business values are finally being attached to the corporate learning experience. Holly Burkett, an ROI evaluator at Apple Computer, stated “For me, it’s more empowering to know that our department’s work has a direct impact on performance, productivity, or sales than it is to know that people enjoy the training program” (Purcell, 2000).'
References
Peak, D., & Berge, Z. L. (January 2006). Evaluation and eLearning. Turkish Online Journal of Distance Education – TOJDE, 7(1), Article 11. ISSN 1302-6488.
Winfrey, E.C. (1999). Kirkpatrick's Four Levels of Evaluation. In B. Hoffman (Ed.), Encyclopedia of Educational Technology. Retrieved
April 14, 2009, from http://coe.sdsu.edu/eet/Articles/k4levels/start.htm