It happens regularly in educational research: two studies look at the same intervention and report findings that disagree. That frustrates teachers looking for a clear answer about a technique’s effectiveness. But are the findings all that matter in these empirical explorations of instructional strategies?
Let’s consider a case in point: two studies that looked at exam wrappers (Edlund, 2020; Soicher & Gurung, 2017). Used as part of an exam debrief, a wrapper is a simple intervention in which students complete a short questionnaire that asks them how much time they spent preparing for the exam, what kinds of mistakes they made, and how they plan to prepare for the next exam. Approximately 10 different studies have explored the effects of exam wrappers, mostly on exam scores, and the results are mixed.
Given all their similarities, these two studies seem like they should produce consistent results. Both asked students almost the same exam-wrapper questions. Both used the exam wrappers in undergraduate psychology courses. The cohort sizes were close. The researchers analyzed how exam wrappers affected academic performance, specifically on course exams. Both used statistical analyses appropriate to the study designs. Yet their findings were exactly opposite. Soicher and Gurung report “no improvement on any of three exams, final grades or metacognitive ability” (p. 64). Edlund’s exploration provides “strong evidence that the use of an exam wrapper . . . improved student performance on later exams in the course.” In study 1, the improvement was a 6.4-point gain, and in study 2 it was a 7.25-point gain (p. 160).
Despite their similarities, there were differences between the studies, some small and some more significant. In one, the students attended a community college; in the other, a selective, four-year institution. Besides exam performance, Soicher and Gurung also tested whether the intervention developed students’ metacognitive skills: Did the questions on the exam wrapper make students more aware of their study approaches and motivate behavior changes? And there was a timing issue. In both studies students completed the exam wrapper during the debrief and turned it in, but Edlund returned the completed wrapper to students one week before the next exam. And there were some small differences in the exam wrappers themselves. The instructions to students differed, the questions appeared in a different order, the wording varied slightly, and Soicher and Gurung asked how the current exam score compared to the previous one while Edlund asked what else he could do to support students’ learning and exam preparation.
So, which of those differences, singly or in combination, might account for the very different results? Edlund, whose study was conducted after Soicher and Gurung’s, attributes the difference to returning the exam wrapper just before the next exam (p. 160). Soicher and Gurung cite research indicating that the development of metacognitive skills takes time and repetition. To have positive effects, exam wrappers may need to be used in multiple courses.
Both of those explanations are conjectures. I think the more important takeaway is an appreciation of the role details play in determining outcomes. Exam wrappers are not an inherently good or bad technique; what matters is how they’re used. Even though understanding more about the details might make the desired results more likely, no amount of further research will guarantee that this intervention improves exam scores. Teachers use wrappers with different content, different students, and in a wide range of courses. Instructional situations are unique, with enough variables to make the outcome less than assured.
So, should teachers just ignore the results? Forget reading research? Absolutely not! But we need to stop looking at research just for the results and instead focus on details. These two studies identify a host of details involved in using exam wrappers—and they’re all important to consider if you are using or plan to use them. Beyond the results, these articles include the exam wrapper questions used in the research. They give background on the theory and research that provides the rationale for exam wrappers. They describe implementation details. And they get teachers thinking about what (if any) learning outcomes exam wrappers might be affecting in their courses. Figuring that out is far more important than the research results.
Edlund, J. E. (2020). Exam wrappers in psychology. Teaching of Psychology, 47(2), 156–161. https://doi.org/10.1177/0098628320901385
Soicher, R. N., & Gurung, R. A. R. (2017). Do exam wrappers increase metacognition and performance? A single course intervention. Psychology Learning & Teaching, 16(1), 64–73. https://doi.org/10.1177/1475725716661872