Validating Instructional Design
The last step in the Development phase is to validate the material by using representative samples of the target population and then revising as needed. The heart of the systems approach to training is revising and validating the instructional material until the learners meet the planned learning objectives. Validation should not be thought of as a single-shot affair; success or failure is not measured at a single point.
ISD or ADDIE is iterative, NOT linear. In traditional waterfall-type projects, training is developed in lengthy sequential phases, so flaws in the learning methods and delivery are normally discovered only during the delivery or evaluation phases. Fixing these defects wastes resources and delays the learning platform or process because of the rework required. This is often referred to as the “1-100-1,000 rule:” if a defect costs one unit to fix in the initial stages of the project, it will cost 100 times more to fix at the end of the project and up to 1,000 times more to fix once the product is delivered.
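The rule above is simple arithmetic, and a minimal sketch makes the escalation concrete. The multipliers are the rule's rough rounded values, not measured data:

```python
# Illustrative sketch of the "1-100-1,000 rule". The multipliers are the
# rule's rough rounded values, not measured project data.
BASE_COST = 1  # relative cost of fixing a defect in the initial stages

multipliers = {
    "initial stages": 1,
    "end of project": 100,
    "after delivery": 1_000,
}

for stage, factor in multipliers.items():
    print(f"Fixing a defect at the {stage}: {BASE_COST * factor} unit(s)")
```

The point of the sketch is only that the cost of a fix grows by orders of magnitude the later it is caught, which is why iterating early is cheaper than reworking after delivery.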
Types of Validation
There are normally two types of validating learning or training platforms:
- prototyping parts of the learning platform throughout the entire ADDIE process through small iterations
- trialing the entire learning platform before it is implemented to see if it does what it is supposed to do
Bill Moggridge (2007) wrote that iterative prototyping, understanding people, and synthesis are the core skills of design:
- Iterative Prototyping - successive small-scale tests on variations of a limited function prototype in order to permit continual design refinements
- Understanding People - having a basic foundation of the cognitive and behavioral sciences
- Synthesis - applying prior knowledge and skills to produce a new, innovative, or original whole
Prototyping allows designers to see their concept in real-world use before final design decisions are committed, which makes it quite useful for solving highly complex problems. Understanding people has always been a big part of designing for performance; it now extends to the real world and to the concepts and products we create. Synthesis then brings all of this together into a unified whole.
Iterations are normally performed using two methods (Saffer, 2007):
- Design Iteration (interpretive) — the iteration is performed to test a learning method, function, feature, etc. of the learning platform with a small set of learners to see if it is valid
- Release Iteration (statistical) — the iteration is released as a product to the business unit or customer. Although it may not be fully complete or functional, the designers believe it is good enough to be of use to the learners and the business unit
A design iteration is a micro-technique in that it uses a small set of learners to test part of the learning platform so that you can make an interpretation of its effectiveness. This method is normally used for innovative design. A design iteration will generally use two types of prototypes:
- Drawing or print prototypes use paper-and-pencil models. This allows the design to be sketched out quickly so that you can get input from the learners. It normally solicits more input than a realistic model does, as the learners do not think of the design as being locked in and are thus more willing to make suggestions. In addition, it is quite versatile, as you can add post-it notes to the paper drawing to simulate drop-down menus, dialog boxes, etc.
- Interactive prototypes use a more realistic model of the learning platform. Their advantage is that they get you closer to where you need to be. In addition, the learners think the design is more locked in; thus, once you have captured their basic needs with the paper-and-pencil prototypes, they are more hesitant to offer suggestions unless there is a real need for the changes (this helps to prevent running around in circles with design changes).
A release iteration is a macro-technique in that it uses a large set of learners in order to satisfy two requirements:
- It gets the learning platform out as fast as possible, even though it may not be fully ready. While it won't fully solve the performance problem, it will partially help to alleviate it.
- It allows large scale testing of the platform before it is polished. A large and difficult or innovative project might use several Design Iterations and then perform a Release Iteration. In turn, this process is repeated until the learning platform is completed.
Large-scale testing of the learning platform before its final release is often referred to as trialing. The validation will depend upon the complexity of the training material and your resources. Listed below is a five-step procedure that provides an effective validation of a large, complex training program. Adjust it as needed to fit the size and complexity of your program, but keep in mind that the closer your validation follows this one, the fewer problems you will encounter when the program is released for delivery.
1. Select the participants that will be in the trials:
The participants should be randomly selected, but they must represent all strata of the target population. They should be clearly told what their roles in the validation process are. Let them know that they are helping to develop and improve the lessons and that they should feel free to tell you what they think about them. The participants should be pretested so you can verify that they learn from the instructional material and not from past experience.
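The selection requirement above (random, yet covering every stratum) can be sketched as a simple stratified random draw. The strata names, sizes, and sampling fraction below are invented for illustration:

```python
import random

# Hypothetical strata of the target population (names and sizes invented).
population = {
    "novice":      ["N1", "N2", "N3", "N4", "N5", "N6"],
    "experienced": ["E1", "E2", "E3", "E4"],
    "supervisor":  ["S1", "S2"],
}

def stratified_sample(strata, fraction, rng=random):
    """Randomly sample each stratum so every stratum is represented."""
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))  # at least one per stratum
        sample.extend(rng.sample(members, k))      # random draw, no repeats
    return sample

participants = stratified_sample(population, fraction=0.5)
print(participants)
```

Drawing within each stratum, rather than from the pooled population, is what guarantees that small groups (here, the supervisors) still appear in the trial.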
2. Conduct individual trials:
This trial presents the instruction to one learner at a time. The separate pieces of instruction, tests, practice periods, etc., should be timed to ensure they match the estimated training times. Do not tutor unless the learner cannot understand the directions. Whenever you help the learner, or observe the learner having difficulty with the material, document it.
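The timing check in this step can be sketched as a comparison of observed times against the estimates. The segment names, times, and tolerance below are assumptions for illustration:

```python
# Hypothetical timing check for an individual trial: compare the time a
# learner actually spent on each piece of instruction against the estimate.
estimated = {"lesson 1": 20, "practice": 15, "test": 10}  # minutes (invented)
observed  = {"lesson 1": 28, "practice": 14, "test": 12}

def timing_gaps(estimated, observed, tolerance=0.2):
    """Return segments whose observed time deviates from the estimate
    by more than the given tolerance (20% by default)."""
    gaps = {}
    for segment, est in estimated.items():
        obs = observed[segment]
        if abs(obs - est) / est > tolerance:
            gaps[segment] = (est, obs)
    return gaps

print(timing_gaps(estimated, observed))
```

Segments flagged this way are candidates for revision: a segment that runs far over its estimate often signals material the learner struggled with, which is exactly what the documentation from the trial should capture.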
3. Revise instruction as needed:
Using the documentation from the individual trials, revise the material as needed. Closely go over any evaluations that were administered. A large number of wrong answers for an item indicates a trouble area. Conversely, a large number of correct answers for an item could indicate that the learners already knew the material, the test item was too easy, or the lessons overtaught the material. For more information, see Test Item Analysis.
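A minimal test item analysis along these lines can be sketched as follows. The answer data and the two thresholds are invented for illustration, not fixed standards:

```python
# Minimal test item analysis sketch: flag items most learners missed
# (possible trouble areas) and items nearly everyone got right (possibly
# too easy, already known, or overtaught). Data and thresholds invented.
# Each row is one learner's results; True = correct answer on that item.
results = [
    [True,  False, True,  True ],
    [True,  False, True,  True ],
    [True,  False, False, True ],
    [True,  True,  True,  True ],
]

def item_difficulty(results):
    """Proportion of learners answering each item correctly."""
    n = len(results)
    return [sum(row[i] for row in results) / n for i in range(len(results[0]))]

def flag_items(results, too_hard=0.5, too_easy=0.9):
    """Flag items whose proportion-correct falls outside the thresholds."""
    flags = []
    for i, p in enumerate(item_difficulty(results)):
        if p < too_hard:
            flags.append((i, "trouble area: many wrong answers"))
        elif p > too_easy:
            flags.append((i, "check: too easy, already known, or overtaught"))
    return flags

print(flag_items(results))
```

In this invented data set, item 1 (only one learner correct) surfaces as a trouble area, while items 0 and 3 (everyone correct) are flagged for the opposite reason described above.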
4. Repeat individual trials until the lesson does what it is supposed to do:
There is no magic number of individual trials; three to five is typical. If you are trialing a large course, you might need to trial the whole course only once, and thereafter trial only the specific troublesome areas rather than the entire course.
5. Conduct group trial:
After you are satisfied with the results of the individual trials, move on to the group trials. These can be of any size: several small groups, one large group, or a combination of both. The procedure is the same as for the individual trials, with one difference: at some point in the trials you must determine whether the program can be accepted or whether it needs major revision. Usually a minimum of two successful group trials is conducted to ensure the program does what it is supposed to do. Minor problems should not hold up implementing the program. As was stated earlier in this section, revisions do not stop at the first implementation of the program, but are performed throughout the life of the program.
Moggridge, B. (2007). Designing Interactions. Cambridge, Massachusetts: The MIT Press.
Saffer, D. (2007). Designing for Interaction: Creating Smart Applications and Clever Devices. Berkeley, CA: New Riders.