As faculty, we tend to chalk up students’ failure on assessments to a lack of effort or a lack of understanding of the material. But often that failure is due instead to the gap between instruction and performance, where misunderstandings intervene to undermine students’ work. Guided evaluation of past assessments as exemplars can be a powerful tool for closing this gap and improving student performance.
Consider the example of coaching. Coaches focus on improving player performance on the field, and so much of their teaching is done through guided evaluation of performances. The first thing NFL players do the day after a game is sit down to study film of that game. They learn about their own performance: what they did right or wrong and what they need to do to improve. But they also learn from seeing the evaluation of other players’ performances. They see the mistakes they need to avoid and what they should emulate. These evaluations bring the coach’s instruction to life by showing players how to apply it in their performance, thus helping to close the gap between instruction and performance. They also correct misconceptions players might have formed about how to apply the techniques they have learned.
Similarly, faculty have large repositories of student work from prior classes, a fertile (if seldom used) source of teaching material. Using past student work as a guide, faculty can clarify their expectations and establish standards of excellence with current students. For instance, students tend to read assigned articles for simple facts, whereas the instructor wants them to read for underlying concepts (Rhem, 2009). By walking through different examples of student essays, an instructor can point out that discussing concepts is the goal of the work and thereby correct any misunderstanding.
Another benefit of this process is that it can demonstrate how an expert in the field analyzes the types of problems that students are given. Instructors in quantitative fields like math and physics teach students how to solve problems by applying procedures to examples. But they often leave out the process of analyzing problems that determines which procedures to use. As a result, students search for commonalities between examples to pick the correct procedures, but they often latch onto the wrong commonalities. One study, for instance, found that physics students tend to classify problems by superficial features, such as “circular problems,” because they see shape as common to a particular procedure when it is just a coincidental feature of the problems used as examples (Chi et al., 1981). Their instructors instead classify problems according to physics principles, such as conservation of energy, to determine the proper procedures. Instructors do not notice this disconnect because of the expert blind spot: the tendency of experts not to understand the problems of novices because they no longer see the world the way novices do.
Part of the problem is that instructors tend to show only the correct ways to solve problems, not common errors to avoid. Also, instructors use problems that they have seen and solved before and so tend to bypass the first step of analysis. Walking students through the process of solving problems on an exam allows the instructor to step outside their own perspective by taking the position of a student who is seeing the problem for the first time. They can make their own thinking visible by explaining how they analyze a new problem, demonstrating the correct categories to use, and using student errors on former exams to illuminate common mistakes to avoid.
Faculty can use parts of past student work, scrubbed of any identifying content, for their examples or create their own. I like to do both. I pick or create essay examples that illustrate common student errors, as well as exemplars that demonstrate achievement of the standards students should strive toward. As faculty we often teach only the correct processes, forgetting that errors are also a powerful learning device. Errors also capture our attention. There is a reason advertisements use headlines like “The five most common investment errors that ruin retirement savings” rather than “The principles of retirement investing.” We want to hear about errors so we can avoid them, and students will perk up their ears when a faculty member tells them that they will learn about the common mistakes they need to avoid to be successful.
After guiding students through various examples, the instructor can then give them new ones to evaluate on their own. This step can include peer evaluations, in which students help one another improve their work. Numerous studies show that peer evaluations benefit not only the person being evaluated but also the person doing the evaluation. Baniya et al. (2019) found that students learned quite a bit from doing peer evaluations. As one put it,
I liked how I could view other [students’] works and see what they did right in order to improve myself. I thought this was helpful because it allowed me to read criticism that I wouldn’t think my project would have. (p. 89)
A similar sentiment was expressed in a peer evaluation study by Canty (2012): “Having completed the assessment I am more confident in my ability to think for myself and produce work to a fairly high level that is also creative and interesting” (p. 231).
The upshot is that as faculty, we tend to focus so much on conveying information that we forget about the inference gap between our teaching and students’ application of it, where students can legitimately draw different conclusions from our instruction. That gap makes guided evaluation of performance examples a valuable tool for improving learning and student performance.
Baniya, S., Chesley, A., Mentzer, N., Bartholomew, S., Moon, C., & Sherman, D. (2019). Using adaptive comparative judgment in writing assessment: An investigation of reliability among interdisciplinary evaluators. Journal of Technology Studies, 45(1), 24–35. https://doi.org/10.21061/jots.v45i1.a.3
Canty, D. (2012). The impact of holistic assessment using adaptive comparative judgment of student learning [Doctoral dissertation, University of Limerick]. https://ulir.ul.ie/handle/10344/6766
Chi, M., Feltovich, P., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5(2), 121–152. https://doi.org/10.1207/s15516709cog0502_2
Rhem, J. (2009). Deep/surface approaches to learning in higher education: A research update. Essays on Teaching Excellence: Toward the Best in the Academy, 21(8). https://podnetwork.org/content/uploads/V21-N8-Rhem.pdf