Learner Self-Assessment Ratings
Students' ratings can correlate well with external measures of their learning and with the instructor's self-ratings (Cashin, 1995).
Student ratings are (L'Hommedieu, Menges, & Brinko, 1990; d'Apollonia & Abrami, 1997; Ory, Braskamp, & Pieper, 1980; Centra, 1993):
statistically reliable (they have internal stability and are consistent over time),
more statistically reliable than colleague ratings, and
not easily or automatically manipulated by grades.
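The "internal stability" in the first point refers to internal-consistency reliability, which is commonly quantified with Cronbach's alpha. As an illustration only (the data below are made up, not drawn from the cited studies), a minimal sketch in Python:

```python
# Cronbach's alpha: internal-consistency reliability of a rating form.
# Rows = students, columns = rating items (illustrative data only).
ratings = [
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(data):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(data[0])  # number of rating items
    item_vars = [variance([row[i] for row in data]) for i in range(k)]
    total_var = variance([sum(row) for row in data])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(round(cronbach_alpha(ratings), 2))  # → 0.93; values near 1.0 indicate high internal consistency
```

When students who rate an instructor highly on one item tend to rate him or her highly on the others, item variances are small relative to the variance of the totals and alpha approaches 1.0, which is the sense in which a rating form is "internally stable."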
Imagine: your learners are better able to rate you than your fellow instructors are (Dancer & Dancer, 1992)!
Just as professionals learn to rate their efforts, students can do the same.
Donnelly and Woolliscroft (1989) reviewed student ratings, using 12 descriptive items over the period of a year, and concluded that the learners' evaluations were reliable and that their judgments were sophisticated and well thought out.
Also, intellectually challenging classes average higher ratings than do easier courses with light workloads (Cohen, 1981). This closely relates to Snow's (1998) research, which holds that a learner's future potential depends upon his or her current cognitive state and that we can increase the learner's potential by raising the standards. (Aptitude-Treatment Interaction [ATI] suggests that optimal learning occurs when the instruction is matched to the aptitudes of the learner.)
Even the learners know when they are being challenged and they appreciate it!
What Should We Ask?
Part of the problem with achieving valid ratings from learners is that we do not always know what to ask. These types of questions produce the most reliable results (Abrami, 1989; Cashin & Downey, 1992):
An overall rating of the instructor.
An overall rating of the course.
An estimate of how much the learner has learned in the course.
A rating of the effectiveness of the instructor in stimulating the learner's interest in the subject.
A rating of the effectiveness of the course in challenging the learner intellectually. (This is important if we are supposed to stimulate continued learner interest and positively influence the ways learners think and act.)
Do not ask the students questions about teaching methods: an instructor might get high marks on “how much the learners learned” (which tends to be valid) and low marks on “how well the course was carefully planned and organized” (which tends not to be). Even if these process questions were valid, they do not tell us anything that the “result” questions cannot.
In the questions above, note that the word “rating” rather than “evaluation” is used. A rating implies a source of data, while an evaluation implies that we have an answer. That is, the learners provide us information, and we then combine it with other sources of information to arrive at a total evaluation. Learners are not always on target; their ratings can provide valuable information, but they cannot tell evaluators everything needed to make a valid assessment of the training.
Perhaps the most unreliable question is, “How much did you enjoy the class?” Learners generally enjoy the courses that are the most intellectually challenging and meaningful, yet they may also enjoy a class that contributes little to their learning. Nevertheless, when the same learners are asked to assess their learning, rate the instructor and/or course, or assess its intellectual contributions, the students as a whole are able to distinguish fluff from substance (Kaplan, 1974; Naftulin & Ware, 1973).
Prior learner interest in a subject does influence student ratings of effectiveness (Marsh & Dunkin, 1992). For example, an instructor taking a train-the-trainer class will normally give a higher rating than if she were taking a class in which she had no real interest.
Also, learners do not give higher ratings to classes in which they receive the highest grades (Howard & Maxwell, 1980). Again, the highest marks often go to the most challenging courses. However, a learner's ratings tend to be slightly higher if the learner expects to receive higher grades; the research suggests that the difference is due to the learner being highly motivated: he or she is learning more and can thus expect to get higher grades (Howard & Maxwell, 1982).
To collect immediate feedback, end the session about five minutes early and ask: 1) What major conclusion did you draw from today's session? and 2) What major questions remain in your mind?
These two questions give the learners valuable insight into the feedback process, as they inform them: 1) What have I learned? and 2) What do I need to learn now?
The two questions also give the instructor feedback, such as discovering whether the learners are drawing conclusions quite different from the ones intended. This allows the instructor to respond to the patterns that emerge and adjust the learning methods in the next session.
Learners who have little or no previous experience often give the most inconsistent feedback, partly because they have nothing to base their initial feedback on. Using the two questions across multiple sessions helps the students learn what they do and do not know, so their feedback improves greatly.
Traditional testing methods do not often fit well with such goals as lifelong learning, reflective thinking, critical thinking, the capacity to evaluate oneself, and problem-solving (Dochy & Moerkerke, 1997). For these, self-assessment plays an important role. Self-assessment refers to the involvement of learners in making judgments about their own learning, particularly about their achievements and the outcomes of their learning (Boud & Falchikov, 1989). It increases the role of learners as active participants in their own learning (Boud, 1995), and is mostly used for formative assessment in order to foster reflection on one's own learning processes and results.
Overall, it can be concluded that research reports positive findings concerning the use of self-assessment in educational practice. Students who engage in self-assessment tend to score most highly on tests. Self-assessment, used in most cases to promote the learning of skills and abilities, leads to more reflection on one's own work, a higher standard of outcomes, responsibility for one's own learning and increasing understanding of problem-solving. The accuracy of the self-assessment improves over time. This accuracy is enhanced when teachers give feedback on students' self-assessment. - Dochy, Segers, Sluijsmans, 1999
Boud (1992, 1995) developed a self-assessment schedule in order to provide a comprehensive and analytical record of learning in situations in which students had substantial responsibility for what they did. The main guidance was a handout that suggested the headings a student might use—goals, criteria, evidence, judgments, and further action.
Weaker learners often overrate themselves. Adams and King (1995) identified a three-step framework to help them develop self-assessment skills:
Learners work on understanding the assessment process, such as: discussing good and bad characteristics of sample work, discussing what was required in an assessment, and critically reviewing the literature.
Learners work to identify important criteria for assessment.
Learners work towards playing an active part in identifying and agreeing on assessment criteria and being able to assess peers and themselves competently.
Another assessment framework looks at the various dimensions (Garfield, 1994):
What to assess, which may be broken down into: concepts, skills, applications, attitudes, and beliefs.
Purpose of assessment: why the information is being gathered and how the information will be used (e.g., to inform students about strengths and weaknesses of learning, or to inform the teacher about how to modify instruction).
Who will do the assessment: the student, peers (such as members of the student's work group), or the teacher. Students need to be given opportunities to step back from their work and think about what they did and what they learned.
Method to be used (e.g., quiz, report, group project, individual project, writing, or portfolio).
Action that is taken and the nature of the action.
Feedback given to students. This is a crucial component of the assessment process that provides the link between assessment and improved student learning.
Negative attitudes toward student ratings are especially resistant to change, and it seems that faculty and administrators support their belief in student-rating myths with personal and anecdotal evidence, which [for them] outweighs empirically based research evidence. - Cohen, as cited in Cashin & Downey, 1992
The research on student SETEs [Student Evaluations of Teacher Effectiveness] has provided strong support for their reliability, and there has been little dispute about it. - Hobson & Talbot, 2001
The learners' ratings will serve their purpose if (Centra, 1993):
- you learn something new from them,
- you value the information,
- you understand how to make improvements, and
- you are motivated to make the improvements.
Abrami, P.C. (1989). How Should We Use Student Ratings to Evaluate Teaching? Research in Higher Education, vol. 30, 221-227.
Adams, C., King, K. (1995). Towards a framework for student self-assessment. Innovations in Education and Training International, vol. 32, pp. 336-343.
Boud, D. (1992). The use of self-assessment schedules in negotiated learning. Studies in Higher Education, vol. 17, pp. 185-200.
Boud D. (1995). Enhancing Learning through Self-assessment. London and Philadelphia: Kogan Page.
Boud, D., Falchikov, N. (1989). Quantitative studies of self-assessment in higher education: a critical analysis of findings. Higher Education, vol. 18, pp. 529-549.
Cashin, W.E. (1995). Student Ratings of Teaching: The Research Revisited. Center for Faculty Evaluation and Development, Kansas State University, Manhattan, KS. IDEA Paper No. 32, Sept.
Cashin, W.E., Downey R.G. (1992). Using Global Student Ratings for Summative Evaluation. Journal of Educational Psychology, vol. 84, 563-572.
Centra, J.A. (1993). Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. San Francisco: Jossey-Bass.
Cohen P.A. (1981). Student Ratings of Instruction and Student Achievement: A Meta-analysis of Multisection Validity Studies. Review of Educational Research, vol. 51 Fall, 281-309.
Cohen, P. (1980). Effectiveness of Student-Rating Feedback for Improving College Instruction: A Meta-Analysis of Findings. Research in Higher Education, vol. 13, 321-341.
Dancer, W.T., Dancer, J. (1992). Peer rating in higher education. Journal of Education for Business, vol. 67, pp. 306-309.
d'Apollonia, S., Abrami, P.C. (1997). Navigating student ratings of instruction. American Psychologist, vol. 52, 1198-1208.
Dochy, F., Moerkerke, G. (1997). The present, the past and the future of achievement testing and performance assessment. International Journal of Educational Research, vol. 27, pp. 415-432.
Dochy, F., Segers, M., Sluijsmans, D. (1999). The Use of Self-, Peer and Co-Assessment in Higher Education: A Review. Studies in Higher Education, November, 24(3), p.331.
Donnelly, M., Woolliscroft, J. (1989). Evaluation of Clinical Instructors by Third-Year Medical Students. Academic Medicine, vol. 64, 159-164.
Garfield, J. (1994). Beyond Testing and Grading: Using Assessment To Improve Student Learning. Journal of Statistics Education, vol.2, no.1.
Howard, G., Maxwell, S. (1980). Correlation Between Student Satisfaction and Grades: A Case of Mistaken Causation. Journal of Educational Psychology, no. 72, December, 810-820.
Howard G., Maxwell, S. (1982). Do Grades Contaminate Student Evaluations of Instruction? Research in Higher Education no. 16, 175-188.
Hobson, S., Talbot, D. (2001). Understanding Student Evaluations. College Teaching, vol. 49(1) January, p. 26.
Kaplan, R. (1974). Reflections on the Doctor Fox Paradigm. Journal of Medical Education, no. 49 March, 310-312.
L'Hommedieu, R., Menges, R., Brinko K. (1990). Methodological Explanations for the Modest Effects of Feedback from Student Ratings. Journal of Educational Psychology, vol. 82(2), 232–241.
Marsh, H.W., Dunkin, M. (1992). Students' Evaluations of University Teaching: A Multidimensional Perspective. In J.C. Smart (Ed.), Higher Education: Handbook of Theory and Research, vol. 8, 143-233. New York: Agathon.
Naftulin, D., Ware J. (1973). The Dr. Fox Lecture: A Paradigm of Educational Seduction. Journal of Medical Education, vol. 48, July, 630-635.
Snow, R. (1998). Abilities and Aptitudes as Achievements in Learning Situations. In J. McArdle, R. Woodcock (Eds.), Human Cognitive Abilities in Theory and Practice. Hillsdale, NJ: Erlbaum.
Ory, J.C., Braskamp L., Pieper, D.M. (1980). Congruency of Student Evaluative Information Collected by Three Methods. Journal of Educational Psychology, vol. 72, 181-185.