

Testing Instruments in Instructional Design

In this step, tests are constructed to evaluate the learner's mastery of the learning objectives. You might wonder why the tests are developed this early in the design phase, instead of in the development phase after all of the training material has been built. In the past, tests were often the last items developed in an instructional program. As a result, many tests ended up testing the instructional material itself or nice-to-know information, with items not directly related to the learning objectives.

The purpose of the test is to promote the development of the learner. It ascertains whether the desired performance change has occurred following the training activities. It does this by evaluating the learner's ability to accomplish the Performance Objective. It is also a great way to provide feedback to both the learner and the instructor.

Test Design

The Performance Objective should be a good simulation of the conditions, behaviors, and standards of the performance needed in the real world; hence, the evaluation at the end of the instruction should match the objective. The methodology and contents of the learning program should directly support the performance and learning objectives. The instructional media should explain, demonstrate, and provide practice. Then, when students learn, they can perform on the test, meet the objective, and perform as they must in the real world. The diagram shown above illustrates how it all flows together.

Testing Terms

Tests are often referred to as evaluations or measurements. However, to avoid confusion, the three terms are distinguished here:

  1. Test: a systematic device or procedure for measuring a sample of a learner's behavior.
  2. Measurement: the process of assigning numbers to a performance according to a set of rules.
  3. Evaluation: the process of making value judgments about a performance on the basis of tests and measurements (Brown, 1971).

Planning the Test

Before plunging directly into test item writing, a plan should be constructed. Without an advance plan, some topics may be over-represented while others go untested. It is often easier to build test items on some topics than on others, so these easier topics tend to be over-represented. It is also easier to build test items that require the recall of simple facts than questions calling for critical evaluation, integration of different facts, or application of principles to new situations.

A good evaluation plan has a descriptive scheme that states what the learners may or may not do while taking the test. It includes behavioral objectives, content topics, the distribution of test items, and what the learner's test performance really means.
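As an illustration, such a plan can be laid out as a simple blueprint that distributes the planned items across content topics and behavior levels, making over- and under-representation easy to spot. The topics, levels, and counts in this sketch are hypothetical, not prescribed:

```typescript
// A minimal test blueprint: each row is a content topic, each column the
// kind of behavior an item demands. All names and counts are illustrative.
type BehaviorLevel = "recall" | "application" | "evaluation";

interface BlueprintRow {
  topic: string;                         // content topic from the objectives
  items: Record<BehaviorLevel, number>;  // planned item count per level
}

const blueprint: BlueprintRow[] = [
  { topic: "Operating the cash register", items: { recall: 2, application: 4, evaluation: 1 } },
  { topic: "Pricing and discounts",       items: { recall: 3, application: 2, evaluation: 1 } },
];

// Row totals make it easy to see whether any topic dominates the test.
for (const row of blueprint) {
  const total = row.items.recall + row.items.application + row.items.evaluation;
  console.log(`${row.topic}: ${total} items`, row.items);
}
```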

Types of Tests

There are several varieties of tests. The most commonly used in training programs are criterion-referenced Written Tests, Performance Tests, and Attitude Surveys. Although there are exceptions, normally one of the three types of test is given to test one of the three learning domains (Krathwohl, et al., 1964). Although most tasks require the use of more than one learning domain, there is generally one that stands out. The dominant domain should be the focal point of one of the following evaluations:

  1. Cognitive domain (knowledge): written tests
  2. Psychomotor domain (skills): performance tests
  3. Affective domain (attitudes): attitude surveys

Whenever possible, criterion-referenced performance tests should be used. Having a learner perform the task under realistic conditions is normally a better indicator of a person's ability to perform the task under actual working conditions than a test that asks them questions about the performance.

If a performance test is not possible, then a criterion-referenced written test should be used to measure the learners' achievement against the objectives. The test items should determine the learner's acquisition of the KSAs (knowledge, skills, and attitudes) required to perform the task. Since a written measuring device samples only a portion of the population of behaviors, the sample must be representative of the behaviors associated with the task; to be representative, it must also be comprehensive enough to cover the full range of those behaviors.

eLearning and Drag and Drop

While testing learners on an elearning platform uses many of the same techniques as the classroom, elearning merits a few words, as its tests can differ. One of these differences is that the tests can offer immediate feedback. For example, one of the more popular methods is what is known as Drag and Drop: clicking on a virtual object and dragging it onto another virtual object. While some Drag and Drop tests score the results at the end of the exercise, others provide immediate feedback by allowing only the correct object to be dropped onto its answer. This allows the learners to practice their skills and knowledge before moving on to a test instrument that is scored.
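A minimal browser sketch of the immediate-feedback variant, using the standard HTML5 drag-and-drop events; the element IDs and the data-answer attribute are illustrative, not taken from any particular platform:

```typescript
// Assumes the page defines something like:
//   <div id="term" draggable="true">mean</div>
//   <div id="target" data-answer="mean">Drop the measure of central tendency here</div>
const term = document.getElementById("term")!;
const target = document.getElementById("target")!;

term.addEventListener("dragstart", (e: DragEvent) => {
  e.dataTransfer?.setData("text/plain", term.textContent ?? "");
});

target.addEventListener("dragover", (e: DragEvent) => {
  e.preventDefault(); // required so the element will accept a drop
});

target.addEventListener("drop", (e: DragEvent) => {
  e.preventDefault();
  const dropped = e.dataTransfer?.getData("text/plain");
  // Immediate feedback: only the correct object "sticks" in the answer slot.
  if (dropped === target.dataset.answer) {
    target.textContent = dropped;
  } else {
    target.classList.add("try-again"); // illustrative styling hook
  }
});
```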

Drag and Drop Testing

This Instructional System Design Game is an example of drag and drop. Clicking on the link brings the game up in a small window. To see how it works and to get the template, click on the “Licensed by URL” link at the bottom of the screen.

Performance Tests

A performance test allows the learner to demonstrate a skill that has been taught in a training program. Performance tests are also criterion-referenced in that they require the learner to demonstrate the behavior stated in the objective. For example, the learning objective “Calculate the exact price of a sale using a cash register” could be tested by having the learners ring up the total of a given number of sales items on a cash register. The evaluator should have a check sheet that lists all the performance steps the learner must perform to pass the test. If the standard is met, then the learner passes. If any of the steps are missed or performed incorrectly, then the learner should be given additional practice and coaching, and then retested.
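The evaluator's check sheet can be treated as data, as in this sketch; the steps are hypothetical ones for the cash-register objective, and a single missed step triggers coaching and a retest:

```typescript
// A check sheet for the cash-register objective. Step names are illustrative.
interface ChecklistStep {
  step: string;
  performed: boolean; // marked by the evaluator while observing the learner
}

const checkSheet: ChecklistStep[] = [
  { step: "Enters each item price",   performed: true },
  { step: "Applies the correct tax",  performed: true },
  { step: "Rings up the exact total", performed: false },
];

const missed = checkSheet.filter((s) => !s.performed);
if (missed.length === 0) {
  console.log("Pass");
} else {
  // Missed steps drive the additional practice and coaching before a retest.
  console.log("Retest after coaching on:", missed.map((s) => s.step));
}
```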

There are three critical factors in a well-conceived performance test:

  1. Conditions: the test simulates, as closely as possible, the actual working conditions under which the task is performed.
  2. Behavior: the learner actually performs the behavior stated in the objective, rather than describing it.
  3. Standards: the check sheet states the standards the performance must meet for the learner to pass.

Written Tests

A written test may contain any of these types of questions:

  1. Open-ended question: This is a question with an unlimited answer. The question is followed by an ample blank space for the response.
  2. Checklist: This question lists items and directs the learner to check those that apply to the situation.
  3. Two-Way question: This type of question has alternate responses, such as yes/no or true/false.
  4. Multiple-Choice question: This gives several choices, and the learner is asked to select the most correct one.
  5. Ranking Scales: This type of question requires the learner to rank a list of items.
  6. Essay: Requires an answer in a sentence, paragraph, or short composition. The chief criticism leveled at essay questions is the wide variance in how instructors grade them. A chief criticism of the other types of questions (multiple choice, true/false, etc.) is that they emphasize isolated bits of information, and thus measure a learner's ability to recognize the right answer but not the ability to recall or reproduce it. In spite of this criticism, learners who score high on these types of questions also tend to do well on essay examinations, so the two kinds of tests appear to measure the same competencies.

Multiple Choice

The most commonly used question in training environments is the multiple-choice question. Each question is called a test item. The parts of the test item are labeled as:

1. This part of the test item is called the "stem".
_____a. The incorrect choices are called "distracters".
_____b. Correct response (marked with an asterisk on the answer key)
_____c. Distracter
_____d. Distracter
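In code, the same anatomy might be captured as in this sketch, which reuses the objectives item from the next example; the field names are illustrative:

```typescript
// One multiple-choice test item: a stem, the keyed (correct) response,
// and the distracters. Field names are illustrative.
interface TestItem {
  stem: string;
  choices: string[];
  correctIndex: number; // position of the keyed response within choices
}

const item: TestItem = {
  stem: "The written objectives statement should reflect the identified needs of the",
  choices: [
    "learner and developer",
    "learner and organization",
    "developer and organization",
    "learner and instructor",
  ],
  correctIndex: 1,
};

const isCorrect = (answer: number): boolean => answer === item.correctIndex;
console.log(isCorrect(1)); // true
```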

When writing multiple-choice questions, follow these points to build a well-constructed test instrument. First, put as much of the wording as possible in the stem rather than repeating it in every choice:

Poor example:

1. The written objectives statement should
_____a. reflect the identified needs of the learner and developer
_____b. reflect the identified needs of the learner and organization
_____c. reflect the identified needs of the developer and organization
_____d. reflect the identified needs of the learner and instructor

Better example:

1. The written objectives statement should reflect the identified needs of the
_____a. learner and developer
_____b. learner and organization
_____c. developer and organization
_____d. learner and instructor

The distracters should be believable and in sequence:

Poor example:

2. A student who earns a score of 60, 70, 75, 95, and 95 would have a mean score of
_____a. 79
_____b. 930
_____c. 3
_____d. 105

In the above example, all the distracters were simply chosen at random. A better example with believable distracters and numbers in sequence would be:

2. A student who earns a score of 60, 70, 75, 95, and 95 would have a mean score of
_____a. 5 (total number of scores)
_____b. 75 (median)
_____c. 79 (correct response)
_____d. 95 (mode)
(also notice that the choices are in numerical order)

If an item analysis is performed on the above example, we might discover that none of the learners chose the first distracter, (a). In our search for a better distracter, the instructor informs us that some learners enter the class with the myth that the mean is found by using the incorrect formula shown on the left below, instead of the correct formula shown on the right:

Incorrect: mean = Σx / (n + 1)        Correct: mean = Σx / n
(where Σx is the sum of the scores and n is the number of scores)

That is, they are adding 1 to the total number of scores (dividing by n + 1 instead of n). We could change the first distracter (a) as follows:

2. A student who earns a score of 60, 70, 75, 95, and 95 would have a mean score of
_____a. 65 (answer if incorrect formula is used)
_____b. 75 (median)
_____c. 79 (correct response)
_____d. 95 (mode)
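As a quick arithmetic check, this sketch recomputes where each of the four choices comes from; Math.floor is assumed here only to match the 65 shown above, and the median shortcut assumes an odd number of scores:

```typescript
const scores = [60, 70, 75, 95, 95].sort((a, b) => a - b);
const sum = scores.reduce((a, b) => a + b, 0);          // 395

const mean = sum / scores.length;                       // 395 / 5 = 79 (correct response)
const badMean = Math.floor(sum / (scores.length + 1));  // 395 / 6 -> 65 (the myth's formula)
const median = scores[Math.floor(scores.length / 2)];   // middle value = 75
// Mode: the score that occurs most often.
const count = (v: number) => scores.filter((x) => x === v).length;
const mode = scores.reduce((best, s) => (count(s) > count(best) ? s : best));

console.log(mean, badMean, median, mode);               // 79 65 75 95
```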

Although a new item analysis might show that the learners are not choosing the new distracter because the instructor is adequately dispelling the myth, it can be left in so the instructor knows that the myth stays dispelled.
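The item analysis itself can start as a simple tally of how often each choice is selected, which is enough to spot a distracter that nobody picks; the learner responses in this sketch are invented:

```typescript
// Tally how often each choice on one item was selected.
const responses = ["c", "c", "b", "c", "d", "c", "a", "c", "b", "c"];

const tally: Record<string, number> = { a: 0, b: 0, c: 0, d: 0 };
for (const r of responses) tally[r]++;

// A choice with a near-zero count is doing no work as a distracter and is
// a candidate for replacement, as with choice (a) above.
console.log(tally); // { a: 1, b: 2, c: 6, d: 1 }
```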

If a plausible distracter cannot be found, then use fewer distracters. Although four choices are considered the standard for multiple-choice questions, as they allow only a 25% chance of the learner guessing the correct answer, go with three if another believable distracter cannot be constructed. A distracter should never be used just to provide four choices, as it wastes the learner's time reading through the possible choices.

Also, notice that the layout of the above example makes an excellent score sheet for the instructor as it gives all the required information for a full review of the evaluation.

True and False

True and false questions provide an adequate method for testing learners when three or more distracters cannot be constructed for a multiple-choice question, or to break up the monotony of a long test.

Multiple-choice questions are generally preferable, as a learner who does not know the answer has only a 25 percent chance of correctly guessing a question with four choices, or approximately 33 percent for a question with three choices. With a true/false question, the odds improve to a 50 percent chance of guessing the correct answer.
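The odds quoted above are simply one over the number of choices, as this small sketch shows:

```typescript
// Chance of guessing a single item correctly, by number of choices.
const guessChance = (choices: number): number => 1 / choices;

console.log(guessChance(4)); // 0.25  (four-choice multiple choice)
console.log(guessChance(3)); // ~0.33 (three-choice multiple choice)
console.log(guessChance(2)); // 0.5   (true/false)
```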

True and false questions are constructed as follows:

__T __F 1. There should always be twice the number of true statements versus false statements in a True/False test.
__T __F 2. Double negative statements should not be used in True/False test statements.

Question 1 is false as there should be approximately an equal number of true and false items. Question 2 is true for any type of question. Other pointers when using True and False tests are:

  1. Use definite and precise meanings in the statements.
  2. Do not lift statements directly from books or notes.
  3. Distribute the true and false statements randomly in the test instrument (see the shuffle sketch after this list).
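For the third pointer, a standard Fisher-Yates shuffle is one way to randomize the order; the statements and the TFItem shape in this sketch are illustrative:

```typescript
interface TFItem {
  statement: string;
  answer: boolean;
}

// Fisher-Yates shuffle: returns a copy with the items in random order.
function shuffle<T>(items: T[]): T[] {
  const a = [...items];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Statements are illustrative; build the test from the shuffled order.
const test: TFItem[] = shuffle([
  { statement: "Double negative statements should not be used.", answer: true },
  { statement: "Use twice as many true statements as false ones.", answer: false },
]);
console.log(test);
```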

Open Ended Questions

Although open-ended questions provide a method of testing superior to multiple-choice or true/false questions, as they allow little or no guessing, they take longer to construct and are more difficult to grade. Open-ended questions are constructed as follows:

1. In what phase of the Instructional System Design model are tests constructed? ____________________
(This is an example of a direct question.)

2. Open-ended test statements should not begin with a _________________________ .
(This is an example of an incomplete statement.)

The blank should be placed near the end of the sentence:

Poor example:

3. ____________________ is the formula for computing the mean.

Better example:

3. The formula for computing the mean is _____________________.

Placing the blank at or near the end of a statement allows the learner to concentrate on the intent of the statement. Also, the overuse of blanks tends to create ambiguity. For example:

Poor example:

4. _________________ theory was developed in opposition to the _____________ theory of _______________________ by ___________________ and ____________________.

Better example:

4. The Gestalt theory was developed in opposition to the ____________________ theory of psychology by ______________________ and _____________________________.

 

Attitude Surveys

Attitude surveys measure the attitudinal results of a training program, of an organization, or of selected individuals. The goal might be to change the entire organization (Organizational Development) or to measure a learner's attitude in a specific area. Since attitudes are latent constructs that are not observable in themselves, the developer must identify some behavior that is representative of the attitude in question. This behavior can then be measured as an index of the attitude construct.

Often, the survey must be administered several times, as employees' attitudes vary over time. Before-and-after measurements should be taken to show changes in attitude. Generally, a survey is conducted one or more times to assess the attitude in a given area, then a program is undertaken to change the employees' attitudes. After the program is completed, the survey is administered again to test the program's effectiveness.
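A sketch of the before-and-after comparison, assuming responses on a 5-point Likert scale; the response data below is invented:

```typescript
// Mean rating before and after the attitude-change program (5-point scale).
// The survey responses are invented for illustration.
const before = [2, 3, 2, 4, 3];
const after  = [4, 4, 3, 5, 4];

const mean = (xs: number[]): number => xs.reduce((a, b) => a + b, 0) / xs.length;

// A shift in the mean rating is the index of attitude change; whether the
// shift is meaningful is a judgment for the evaluator and a larger sample.
console.log(`before: ${mean(before)}, after: ${mean(after)}`); // before: 2.8, after: 4
```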

A survey example can be found at Job Survey.

One test is worth a thousand expert opinions. — Bill Nye the Science Guy

Next Steps

Go to the next section: Identify Learning Steps

Read about Item Analysis (testing the test)



References

Brown, F.G. (1971). Measurement and Evaluation. Itasca, Ill: F. E. Peacock.

Krathwohl, D.R., Bloom, B.S., & Masia, B.B. (1964). Taxonomy of Educational Objectives: Handbook II: Affective Domain. New York: David McKay.

Wolansky, W.D. (1985). Evaluating Student Performance in Vocational Education. Ames, Iowa: Iowa State University Press.