At this point, clickers and other electronic tools that encourage student interaction are accepted instructional practices and commonly used in large courses. What they offer that other instructional strategies don’t is a means for every student to participate. Their effects are also relatively easy to study, and consequently, there’s a plethora of research that explores how they affect learning. As we have come to expect with empirical analyses of instructional approaches, the effects are mixed: some results report learning gains and others don’t.
In general, these are well-designed studies, but faculty employ clickers and solicit electronic participation in many different ways. Sometimes every student has a clicker, and students work individually before reporting their answers with the device. Sometimes there's an opportunity but not a requirement to talk with others before and after seeing the correct answer. Sometimes students work on problems in groups that they form; other times the teacher may assign them to a group. Sometimes the group has only one clicker, so the group must agree or vote on the answer they submit. Most of the tasks involve problem solving, though not all. And finally, faculty are using clickers with content from different fields, mostly but not exclusively the sciences. With all that variance, conflicting study and classroom results are not surprising.
Recent research is trying to isolate some of these variables, and a study by Weiss et al. (2020) is a good example. Notable in this work is the size of the cohort: 26 sections of the same introductory chemistry course, offered across 13 years, with a total of 1,500 students, all taught by the same instructor. In the first five sections, students learned in a traditional lecture mode; in the next 14 sections, students used clickers individually or in groups (their choice); and in the last seven sections, students solved problems in teacher-assigned groups and individually used clickers to report their answers. Lecture time decreased to 50 percent of class time in the first clicker treatment and to 30 percent in the second.
The goal was to compare the three approaches, with particular interest in what happens when problem solving occurs in groups with stable membership. Weiss et al. found that course grades improved in both clicker treatments. Seventy-two percent of the students in the traditional lecture format earned As, Bs, or Cs; that number jumped to 76 percent in the first clicker treatment and 77 percent in the second. The research team also looked at student performance on homework, quizzes, the midterm exams (there were three of them), and the final. "The most substantive improvement between the traditional lecture strategy and the peer-assisted learning with assigned groups was with midterm exams and the final examination" (p. 62).
More statistical analysis revealed this key finding: "students need to work with other students consistently in the assigned groups to achieve real impact on their grades" (p. 63). Simply adding clickers did not have as significant an effect as that provided by ongoing group interaction. The authors cite another recent study finding that students who worked on problems in groups subsequently performed better when they answered more complex problems individually. In other words, collaboration with others may be instrumental in developing individual problem-solving skills.
Did you note the decrease in time devoted to lecture and imagine that meant less content coverage? That's what the authors anticipated, but they report, "We did not have to sacrifice any lecture material that we would have covered [in] the traditional lecture approach" (p. 64). The content was transformed into clicker questions, and class time was devoted to solving and discussing the problems and related content. And there was still some lecture. Students wanted a mix, and the instructor found that in some cases students needed more introduction to the material and help with the mathematics.
Talking about the content helps students understand it. That deeper understanding extends to the group member who seeks to explain an approach to the problem and the group member who doesn’t yet understand how to solve the problem. Given the chance and a teacher’s guidance and support, students will learn from and with each other. It isn’t always easy for teachers to listen to students’ talk, but what they’re hearing is the hard, messy work of learning.
Weiss, D. J., McGuire, P., Clouse, W., & Sandoval, R. (2020). Clickers are not enough: Results of a decade-long study investigating instructional strategies in chemistry. Journal of College Science Teaching, 49(3), 58–65.