
What Can We Learn from End-of-Course Evaluations?

No matter how much we debate the issue, end-of-course evaluations count. How much they count is a matter of perspective. They matter if you care about teaching. They frustrate you when you try to figure out what they mean. They haven’t changed much, and they are still regularly administered in ways at odds with research-recommended practices. And faculty aren’t happy with the feedback they provide. A survey of biology faculty members from a wide range of institutions (Brickman et al., 2016) found that 41% were not satisfied with the current official end-of-course student evaluations at their institutions, and another 46% were satisfied only “in some ways.”

But are these approaches to assessing teaching likely to go away any time soon? I’m not feeling the winds of change. For that reason, I’d like to use this post to suggest several ways faculty can work around and move beyond end-of-course ratings.

A good place to start is with how we orient toward the feedback provided by these summative assessments, and for this there’s literature to help. Golding and Adam (2016) used focus groups to explore how award-winning teachers approached the feedback provided on student evaluations. Among a number of findings, these faculty talked about an improvement mindset—about always confronting themselves with how they could improve, always being on the lookout for ways to increase student learning, and always accepting that no matter how high (or low) the scores, improvement is an option. Hodges and Stanton (2007) looked at a collection of common student complaints (e.g., “Problems on the exam weren’t like the ones done in class”) for what they indicated about the intellectual challenges faced by novice learners. Gallagher (2000) received a set of low ratings. After some rationalizing and blaming, he decided to see whether he could learn something from the feedback. Reading the comments through this new lens, he saw that they could be used to improve his teaching.

The global judgments frequently offered by end-of-course ratings (how does this instructor compare with all others on the planet?) should be viewed as a place to start. Rather than offering answers, they can be used to raise questions: “What am I doing that’s causing students to view my teaching this way?” Such questions need to lead us to specific, concrete behaviors—things teachers are or aren’t doing. The Teaching Practices Inventory developed by Wieman and Gilbert (2014) is a great place to start acquiring this detailed, nuts-and-bolts understanding of one’s instructional practice. It was developed for use in science and math courses, but slight adjustments can make it relevant in many other disciplines.

The Brickman et al. (2016) study of biology faculty also asked them what kinds of instructional feedback they thought they needed. The faculty reported that they value the feedback peers could provide, but they usually don’t get it. Classroom observations for promotion and tenure were seen more as rubber stamps than as real opportunities for critical analysis of teaching. Classroom observations can do so much more, as two recently developed instruments (COPUS and PORTAAL; see references) demonstrate. COPUS collects data on teacher and student actions at regular time intervals, and PORTAAL provides observational feedback on the use of 21 active learning elements with proven positive effects on learning. Observations don’t have to be this formal, either: a colleague, even one from another discipline, can sit in on a session not to judge but to experience it as a student would. When was it easy to understand? What examples made sense? When was it confusing? What questions should have been asked?

We also can obtain more useful input from students. We need to ask for feedback in the middle of the course, when there’s still time to make changes and students feel they have a stake in the action. We need to provide ground rules that give students the opportunity to practice the principles of constructive feedback. And we need to ask more specific questions formatted in different ways. Hoon et al. (2015) showed that even the simple start-stop-continue format, in which you ask students what you should start doing, what you should stop doing, and what you should continue doing, improved the quality of student feedback, as did the collaborative online evaluations used by Veeck et al. (2016). Finally, we need to close the loop by talking with students about what we’ve learned from the feedback, what we’ve decided to change, and what will remain the same.

Brickman et al. wrote, “Our findings reveal a large, unmet desire for greater guidance and assessment data to inform pedagogical decision making” (p. 1). This post illustrates some things faculty can do about that.

References

Brickman, P., Gormally, C., and Martella, A. M. (2016). Making the grade: Using instructional feedback and evaluation to inspire evidence-based teaching. Cell Biology Education, 15 (1), 1-14.

Eddy, S. L., Converse, M., and Wenderoth, M. P. (2015). PORTAAL: A classroom observation tool assessing evidence-based teaching practices for active learning in large science, technology, engineering, and mathematics classes. Cell Biology Education, 14 (Summer), 1-16.

Gallagher, T. J. (2000). Embracing student evaluations of teaching: A case study. Teaching Sociology, 28 (April), 140-146.

Golding, C., and Adam, L. (2016). Evaluate to improve: Useful approaches to student evaluation. Assessment & Evaluation in Higher Education, 41 (1), 1-14.

Gormally, C., Evans, M., and Brickman, P. (2014). Feedback about teaching in higher ed: Neglected opportunities to promote change. Cell Biology Education, 13 (Summer), 187-199.

Hodges, L. C., and Stanton, K. (2007). Translating comments on student evaluations into the language of learning. Innovative Higher Education, 31, 279-286.

Hoon, A., Oliver, E., Szpakowska, K., and Newton, P. (2015). Use of the Stop, Start, Continue method is associated with the production of constructive qualitative feedback by students in higher education. Assessment & Evaluation in Higher Education, 40 (5), 755-767.

Smith, M. K., Jones, F. H. M., Gilbert, S. L., and Wieman, C. E. (2013). The classroom observation protocol for undergraduate STEM (COPUS): A new instrument to characterize university STEM classroom practices. Cell Biology Education, 12 (Winter), 618-625.

Veeck, A., O’Reilly, K., MacMillan, A., and Yu, H. (2016). The use of collaborative midterm student evaluations to provide actionable results. Journal of Marketing Education, 38 (3), 157-169.

Wieman, C., and Gilbert, S. (2014). The teaching practices inventory: A new tool for characterizing college and university teaching in mathematics and science. Cell Biology Education, 13 (Fall), 552-569.
