Types of Evaluations in Instructional Design
Evaluations are normally divided into two broad categories: formative and summative.
A formative evaluation (sometimes referred to as internal) is a method for judging the worth of a program while the program activities are forming (in progress). This part of the evaluation focuses on the process.
Thus, formative evaluations are basically performed on the fly. They permit the designers, learners, and instructors to monitor how well the instructional goals and objectives are being met. Their main purpose is to catch deficiencies so that the proper learning interventions can take place, allowing the learners to master the required skills and knowledge.
Formative evaluation is also useful in analyzing learning materials, student learning and achievements, and teacher effectiveness.... Formative evaluation is primarily a building process which accumulates a series of components of new materials, skills, and problems into an ultimate meaningful whole. - Wally Guyot (1978)
In addition, prototyping is used in formative evaluations to test a particular design aspect by using one or more iterations. For more information on prototyping and iterations, see Validating Instructional Design.
A summative evaluation (sometimes referred to as external) is a method of judging the worth of a program at the end of the program activities (summation). The focus is on the outcome.
All assessments can be summative (i.e., have the potential to serve a summative function), but only some have the additional capability of serving formative functions. - Scriven (1967)
The various instruments used to collect the data are questionnaires, surveys, interviews, observations, and testing. The model or methodology used to gather the data should be a specified step-by-step procedure. It should be carefully designed and executed to ensure the data is accurate and valid.
Questionnaires are the least expensive procedure for external evaluations and can be used to collect data from large samples of graduates. Questionnaires should be trialed (tested) before use to ensure that the recipients understand them the way the designer intended. When designing questionnaires, keep in mind that the most important feature is the guidance given for their completion. All instructions should be clearly stated...let nothing be taken for granted.
History of the Two Evaluations
Scriven (1967) first suggested a distinction between formative evaluation and summative evaluation. Formative evaluation was intended to foster development and improvement within an ongoing activity (or person, product, program, etc.). Summative evaluation, in contrast, is used to assess whether the results of the object being evaluated (program, intervention, person, etc.) met the stated goals.
Scriven saw the need to distinguish the formative and summative roles of curriculum evaluation. While Scriven preferred summative evaluation (performing a final evaluation of the project or person), he did come to acknowledge the merits of Cronbach's formative evaluation: part of the process of curriculum development, used to improve the course while it is still fluid, which Cronbach believed contributes more to the improvement of education than evaluation used to appraise a finished product.
Later, Misanchuk (1978) delivered a paper on the need to tighten up these definitions in order to obtain more accurate measurements. The point that seems to cause the greatest disagreement is his insistence that fluid revisions or changes be kept strictly to the prerelease versions (before the material reaches the target population).
In his history of instructional technology, Paul Saettler (1990, pp. 430-431) describes the two evaluations in the context of how they were used in developing Sesame Street and The Electric Company at the Children's Television Workshop. CTW used formative evaluations for identifying and defining program designs that could provide reliable predictors of learning for particular learners. It later used summative evaluations to prove its efforts (to quite good effect, I might add). While Saettler praises CTW's work as a significant landmark in the technology of instructional design, he warns that it is still tentative and should be seen more as a point of departure than a fixed formula.
Saettler defines the two types of evaluations thus: 1) formative evaluation is used to refine goals and evolve strategies for achieving them, while 2) summative evaluation is undertaken to test the validity of a theory or determine the impact of an educational practice so that future efforts may be improved or modified.
Thus, using Misanchuk's defining terms will normally achieve more accurate measurements; however, the cost is higher because the approach is highly resource intensive, particularly in time, owing to all the pre-work that must be performed in the design phase: create, trial, redo, trial, redo, trial, redo, and so on, all preferably without using the target population.
However, most organizations are demanding shorter design times. Thus the formative part is shifted to other methods, such as rapid prototyping and using testing and evaluation methods to improve the design as one moves along. This is, of course, not as accurate, but it is more appropriate for most organizations, as they are not really interested in accurate measurements of the content but rather in the end product: skilled and knowledgeable workers.
Misanchuk's defining terms basically put all the water in a container for accurate measurement, while the typical organization estimates the volume of water flowing in a stream.
Thus, if you are a vendor or researcher, or you need highly accurate measurements, you will probably define the two evaluations in the same manner as Misanchuk. If you need to push the training or learning out faster and are not all that worried about highly accurate measurements, then you will define them closer to the way most organizations do and the way Saettler describes the CTW example.
References
Misanchuk, E.R. (1978). Uses and Abuses of Evaluation in Continuing Education Programs: On the Frequent Futility of Formative, Summative, and Justificative Evaluation. Paper presented at the Adult Education Research Conference, San Antonio, Texas, April 1978.
Rossett, A., & Sheldon, K. (2001). Beyond the Podium: Delivering Training and Performance to a Digital World. San Francisco: Jossey-Bass/Pfeiffer.
Saettler, P. (1990). The Evolution of American Educational Technology. Englewood, Colorado: Libraries Unlimited.
Scriven, M. (1967). The methodology of evaluation. In R.W. Tyler, R.M. Gagne, & M. Scriven (Eds.), Perspectives of Curriculum Evaluation (pp. 39-83). Chicago, IL: Rand McNally.