There’s a lot to be gained from considering ideas and arguments at odds with current practice. In higher education, many instructional practices are accepted and replicated with little thought. Fortunately, there are a few scholars who keep asking tough questions and challenging conventional thinking. Australian D. Royce Sadler is one of them. His views on feedback and assessment are at odds with the mainstream, but his scholarship is impeccable, well-researched, and logically coherent. His ideas merit our attention, make for rich discussion, and should motivate us to delve into the assumptions that ground current policies and practices.
Sadler’s 2016 article proposes three radical assessment reforms. I have space in today’s post to explore one: “accumulation of marks” or grading systems where students collect points across the course. This is probably the most commonly used grading system in North America. Here’s Sadler’s position: “Whether the actual path of learning is smooth or bumpy, and regardless of the effort the student has (or has not) put in, only the final achievement status should matter in determining the course grade” (p. 1087).
He makes three arguments against “accumulating performance measures.” First, continuing summative assessments rob students of the opportunity to learn from failure. Students need opportunities for “false starts,” “bumbling attempts,” and “time spent going up blind alleys” (p. 1088). Those are the very experiences that lead to deep understanding. Developing competence takes time. It requires experiences that occur across months, not days.
Sadler’s second objection involves how final grades “mishmash” the data, mingling credit for behaviors like effort, engagement in preferred activities, completion of exercises, and participation with evaluations of academic achievement. “These behaviors and activities may well assist learning, but they do not constitute the final level of achievement or even part of it. . . . The cost of using marks to modify behavior is contamination of the grade” (p. 1088).
Finally, although earning points throughout the course motivates students and keeps them working, that focus ends when they’ve acquired enough points to satisfy their grade needs. Moreover, this grading approach reinforces the idea that everything students do in the course merits points. But even more significantly, “A steady stream of extrinsic rewards is a poor substitute for developed intrinsic rewards where students take primary responsibility for their own learning” (p. 1088). These are not grading systems that encourage autonomy and self-direction in learners.
The alternative? Sadler advocates formative assessment, clearly separated from summative evaluation. Purely formative assessments have high stakes for learning and zero influence on the final, end-of-course assessment. So, students do all the regular course assignments and exams (save the final) but not for credit. Sadler also believes students should be much more involved in assessing their work. In a 2010 article, he offers ample evidence that few students act on the feedback teachers provide—something many of us have experienced with our own students. It’s another example of how teachers tell students what they should be discovering for themselves. “Students need to become competent not only in making judgments about their own works, but also in defending those judgments and figuring out how those works could have been made better” (p. 1089).
As for the final summative assessment at the end of the course, Sadler pushes us to think in new ways here as well. Student knowledge and understanding should be tested in more authentic ways. Final assessments should allow students to take advantage of “the technologies and tools of production currently used in most workplaces” (p. 1090). Time limits should be more generous, with responses written and then revised. “The quality of a student’s response as appraised against standards rather than against other students’ work is a clearer indicator of their capability than the speed of task completion” (p. 1091).
Clearly, these ideas hold students responsible to a far greater degree than current practices do. Sadler acknowledges as much, noting that these reforms “shift a significant measure of responsibility from the educational environment [teachers, program directors, the institution, for example] to the students themselves” (p. 1091). Most of our students are not prepared to accept this level of responsibility, and there are institutional barriers associated with class sizes and teaching loads.
These aren’t easily implemented reforms. But that shouldn’t prevent us from considering the ideas, debating their merits, and if we concur in principle, searching for small changes that might move assessment practices in these directions.
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550.
Sadler, D. R. (2016). Three in-course assessment reforms to improve higher education learning outcomes. Assessment & Evaluation in Higher Education, 41(7), 1081–1099.