When student ratings first started being widely used and the research associated with them was prolific, many faculty objected to the focus on teaching effectiveness. "It shouldn't be about me! I'm a good teacher if my students are learning. Measure how much my students have learned and then make judgments about my teaching." The problem has always been the intervening variables between the teacher and the students that make it hard to tie the learning to the teaching. Is the student learning a lot because she's highly motivated and wants to learn? Is the student not learning because his life is full of work and family, with no time to study? Variables like these affect how much a student learns regardless of the quality of the instruction.

And there's a second persistent problem: end-of-course ratings are anonymous, which makes it impossible to tie a student's ratings of the instructor to that student's performance in the course. Are the high teacher ratings being given by the students who learned the most, or by students who liked the course because it was easy?

A new study with an interesting design looked at the learning-ratings link from a perspective relevant to the questions raised above. It was a small study (188 students), done at one institution, in three sections of an undergraduate management course, all taught by the same instructor. The course instructor identified two learning objectives as essential and important. At the end of the course, students rated the progress they had made in achieving these two objectives. They also rated their progress on 10 other objectives that the instructor had identified to the students. During the course the students completed five multiple-choice exams. The research team hypothesized "that exam performance would be positively correlated with student self-reported progress on learning objectives identified by the instructor as relevant to the course but unrelated to objectives identified as less relevant." (p. 378) The instructor scored the exams and submitted grades without knowing the results of the ratings. Various mechanisms within the research design guaranteed the anonymity of individual students' ratings of the instructor and their exam scores.

As for the results, "average exam scores were significantly and positively correlated with student ratings of learning on objectives identified as either essential or important by the faculty member. … Exam scores did not correlate significantly with student ratings of progress on less relevant objectives." (p. 385) The authors write that the results "all indicate that students are capable of assessing what they have learned at the time they complete course ratings." (p. 385)

Evidence like this continues to underscore that statistically significant correlations between student ratings and achievement measures such as exam scores are not necessarily a sign of easy courses taught by instructors who give lots of high grades. As notable researchers such as McKeachie have pointed out, those positive correlations should be expected. If students believe they are learning a lot in a course and they see the instructor as instrumental in that learning, that instructor deserves high ratings. This research shows that students' beliefs about their learning (in this case, related to achievement of two objectives) are valid: there is a correlation between their sense of how much they've learned and their exam scores. If students are accurately assessing how well they're learning in a course (and this study documents that they are), then students with high grades who give the teacher a high evaluation are doing so because that teaching effectively facilitated their learning. Those high ratings are given for a legitimate reason.

Reference: Benton, S. L., Duchon, D., & Pallett, W. H. (2013). Validity of student self-reported ratings of learning. Assessment & Evaluation in Higher Education, 38(4), 377-388.