Evaluating Instructional Design

Evaluation is the systematic determination of the merit, worth, and significance of a learning or training process by comparing criteria against a set of standards. While the evaluation phase is often listed last in the ISD process, it is actually ongoing throughout the entire process. This is partly what makes ISD or ADDIE a dynamic process rather than a waterfall or strictly linear one. This dynamic process of evaluation can best be shown with this model (Department of the Army, 2011):

[Figure: non-linear ADDIE model]


The primary purpose of evaluation is to ensure that the stated goals of the learning process will actually meet the required business need. Thus, it is performed not only at the end of the process, but also during the first four phases of the ISD process.

The evaluation phase has two main requirements: 1) ensuring that learners can actually meet the new performance standards once they have completed their training and returned to their jobs; and 2) ensuring that the business need or goal is actually being met.

The most exciting place in teaching is the gap between what the teacher teaches and what the student learns. This is where the unpredictable transformation takes place, the transformation that means that we are human beings, creating and dividing our world, and not objects, passive and defined. - Alice Reich (1983).

Evaluations help to measure Reich's gap by determining the value and effectiveness of a learning program. They use assessment and validation tools to provide the data for the evaluation. Assessment is the measurement of the practical results of the training in the work and learning environments, while validation determines whether the objectives of the training goal were met.
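The assessment/validation distinction above can be sketched in a short example. Everything here is illustrative: the function names, the passing score, and the sample data are assumptions for the sketch, not part of any evaluation standard.

```python
# Illustrative sketch of the two data sources an evaluation draws on:
# validation checks the training objectives, assessment checks results
# back in the work environment.

def validate(objective_scores, passing_score=80):
    """Validation: were the stated training objectives met?"""
    return all(score >= passing_score for score in objective_scores.values())

def assess(baseline, post_training):
    """Assessment: did the measured on-the-job metric improve after training?"""
    return post_training > baseline

# Hypothetical course data
objective_scores = {"safety procedures": 92, "equipment checks": 85}
print(validate(objective_scores))                  # objectives met? -> True
print(assess(baseline=12.5, post_training=15.0))   # job metric improved? -> True
```

Both answers feed the evaluation itself, which then judges the overall value of the program.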

Bramley and Newby (1984) identified five main purposes of evaluation: feedback, control, research, intervention, and power games.

A literature review covering the seventeen-year period leading up to 1986 suggested that there is widespread under-evaluation of learning and performance processes, and that what evaluation is done is of uneven quality (Foxon, 1989).

Patel (2010) measured the use of Kirkpatrick's levels and discovered that organizations measured the following:

However, in today's economy, with organizations looking to cut programs that do not work, this lackadaisical attitude towards training evaluation is changing: not necessarily towards ROI (Return on Investment), but rather towards ensuring that training supports the stakeholders' needs. This is primarily because most stakeholders view training not as a profit center, but as a strategic partner that supports their goals.

Backwards Planning

Throughout this ISD guide, we have been using the ADDIE Backwards Planning Model to ensure the learning platform is designed in a manner that achieves the specified business need or objective:

[Figure: Backwards Planning Model]

It is based on Kirkpatrick's Four Levels of Evaluation (1975); however, it starts with the last level and works backwards to achieve the desired goals. Three of the levels, Reaction (motivation), Learning, and Behavior (performance), are measured using formative evaluations, while the last level, Results, is used to judge the overall worth of the learning platform (summative evaluation).
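The mapping between Kirkpatrick's four levels and the two evaluation types described above can be tabulated in a minimal sketch. The data-structure shape and field names are illustrative assumptions, not an official taxonomy:

```python
# Kirkpatrick's four levels, tagged with the evaluation type used
# in the Backwards Planning Model described above.
KIRKPATRICK_LEVELS = [
    {"level": 1, "name": "Reaction (motivation)",  "evaluation": "formative"},
    {"level": 2, "name": "Learning",               "evaluation": "formative"},
    {"level": 3, "name": "Behavior (performance)", "evaluation": "formative"},
    {"level": 4, "name": "Results",                "evaluation": "summative"},
]

# Backwards planning starts from the desired Results and works back down.
for entry in reversed(KIRKPATRICK_LEVELS):
    print(f"Level {entry['level']}: {entry['name']} ({entry['evaluation']})")
```

Iterating in reverse mirrors the backwards-planning order: the summative Results level defines the goal, and the three formative levels are then planned to support it.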

Next Steps

Go to the next section: Formative and Summative Evaluations

Return to the Table of Contents


Bramley, P., & Newby, A.C. (1984). The evaluation of training Part I: Clarifying the concept. Journal of European Industrial Training, 8(6), 10-16.

Department of the Army (2011). Army Learning Policy and Systems. TRADOC Regulation 350-70.

Foxon, M. (1989). Evaluation of training and development programs: A review of the literature. Australian Journal of Educational Technology, 5(2), 89-104.

Kirkpatrick, D.L. (1975). Techniques for evaluating training programs. In D.L. Kirkpatrick (Ed.), Evaluating training programs. Alexandria, VA: ASTD.

Patel, L. (2010). ASTD State of the Industry Report 2010. Alexandria, VA: American Society for Training & Development.