
Collaborative Testing: Conversations That Promote Learning

For Those Who Teach


Interest in group exams and quizzes continues to grow, as does the research on how they affect learning. The process of having students take an exam or quiz individually and then collectively in a group goes by several different names, including collaborative testing and two-stage testing. Beyond that is an array of design details associated with the practice: for example, how the groups are formed, the number of collaborative or traditional testing events (or both) in a course, the kinds of test questions, the length of the group collaboration, and the relative weights of the individual and group scores. In studies of group testing experiences, these design details are equally varied. So although we have some good research, it’s difficult to integrate the findings and offer a definitive set of conclusions.

Teaching Professor Blog

That’s clearly revealed in a recently published and commendable review of research on the topic. It’s another review that’s written with practitioners in mind. It’s well organized and accessible and makes recommendations. Author Sandra Efu (2019) looked at 16 empirical studies that investigated collaborative assessment (yet another name). In addition to their varying design details, the studies were conducted in several different disciplines. All 16 are concisely laid out in a table that summarizes the collaborative approach and the findings—Efu’s work is a valuable resource on numerous fronts. “Of the 16 studies examined in this paper,” Efu writes, “nine found that collaborative assessments improved student learning, while seven found no difference in learning between students that completed their exams individually versus in groups” (p. 81). Here again, as is so often the case with discipline-based pedagogical scholarship, the results are mixed.

Efu delves deeply into the studies to see what might explain these contradictory results. She finds a variety of interesting possibilities, one of which I want to explore here. In some group testing approaches, students take the exam individually and then take it again in a small group. Those groups get one answer sheet, which means either everyone agrees on the answer or a majority does. In other collaborative approaches, students take the exam individually but then can discuss answers with others. After those discussions they have the option of changing any of their answers.

As Efu writes, “Students’ commitment to the learning process, and participation level in the collaborative process are influenced by their perception of the impact collaboration with peers has on their grade” (p. 78). In other words, if students think they can get high grades regardless of the performance of those in their groups, they aren’t nearly as motivated to collaborate. And in six of the seven studies that found a collaborative exam experience made no difference in exam scores, students did not participate in groups where answer agreement was required.

Those of us who’ve used group exam structures frequently experience resistance from the bright students. They don’t want to work on exam questions with others. In my case, students could opt out of the group exam experience, and by and large the students who did were the best ones. That was true even though I’d designed the experience so that the individual grade could only be helped, not hurt, by the group grade; those bright students still saw no value in collaboration.

It makes sense that discussion, argument, and debate will increase if students have to select a group answer. I could not believe the heated conversations my students had over the various answer options; they talked about the course content with way more enthusiasm than they did in class discussions. I could see how their interactions over test questions helped their understanding. They were forced to explain why they thought an answer was correct and then field questions and challenges to those explanations.

But the choice between the two approaches isn’t open-and-shut. For students other than the brightest, the ones who sort of know an answer or have at least some inkling of it, talking with others provides the opportunity to clarify or expand their understanding. Yes, it may be that the conversation enables them to get an answer correct, but if it also means they’ve learned the material, isn’t that the goal? There’s potential learning in these conversations on another front. If a student doesn’t know or isn’t sure and asks others, the issue of whom to believe emerges. Should the student stick with his or her original answer? At what point does a student decide to go with what another person confidently says is right?

If there are points to be had on an exam or quiz, most students are willing to collaborate, and that can lead to an exchange that deepens their understanding of the content. But that’s not all. Students also get to experience the value of collaboration, learn more about making and defending arguments, and confront when and how others should change their beliefs.

Reference

Efu, S. I. (2019). Exams as learning tools: A comparison of traditional and collaborative assessment in higher education. College Teaching, 67(1), 72–82. https://doi.org/10.1080/87567555.2018.1531282
