Teaching through Assessment


There is an unfortunate tendency among higher education publications to measure the quality of online education by surveying faculty on whether they think online education is as good as face-to-face learning. But do these surveys ask whether the faculty answering have actually taught an online course? And why assume face-to-face learning is the standard of quality? Why not ask whether face-to-face learning is as good as online learning?

By contrast, in a Forbes article, Brandon Busteed cites numerous studies showing that the outcomes of online education are not only as good as those of face-to-face learning but often better (Busteed, 2019). For instance, the average first-time pass rate for distance learners at law schools taking California's First-Year Law Students' Examination was slightly more than twice that of learners at traditional law schools in the state: 34.8 percent versus 17.1 percent.

Similar results appear when students are surveyed about their experiences with online education. The University of Essex’s online degrees were in the top 18 percent of all U.K. degrees in Britain’s National Student Survey, and a study by Learning House found that 85 percent of students surveyed who had experience in both face-to-face and online learning said online courses are as good or better than face-to-face courses.

There could be any number of reasons for these results, but one jumps out: online courses tend to have more assessments than face-to-face courses. Online course developers generally build at least one assessment per week into a course, and more if you count discussions. By contrast, most face-to-face courses have roughly one assessment every few weeks, well after students have covered the learning content. Studies have consistently shown that more frequent, shorter assessments yield better learning outcomes than fewer, longer ones (Holmes, 2015).

An obvious way to add assessments to a course is to break larger assessments into shorter weekly ones. While this is undoubtedly an improvement, an end-of-module assessment still does not help the student understand the material itself. Such after-the-fact assessments embody the "pitch and catch" theory of assessment: information is pitched to students via readings and lectures, and assessments merely measure how much of it was caught. By contrast, assessments can help teach the content when they are integrated into the content itself.

One method to tie assessments to content is to begin a module with a short, ungraded quiz on the coming topic. In a face-to-face course this can be done with an audience response system like Kahoot; in an online course, the LMS quizzing tool serves the same purpose. These pre-module assessments activate students' prior understanding of the topic, which matters because knowledge building happens by connecting new information to what learners already know. They also spark interest in the topic when they expose incorrect assumptions.

For instance, a physics instructor might start a unit on gas mechanics by asking students what would happen if a balloon were filled with helium, tied to the seat of a car with the windows shut, and the car driven forward. Will the balloon stay in one place, go backward, or go forward? The answer is that it goes forward, which is counterintuitive, and thus most students will get it wrong. That counterintuitive answer creates interest in finding out why they were wrong. Not only are students more engaged in what follows, but they also see how their prior assumptions and reasoning about the situation were incorrect. The lesson thus teaches them more than a single fact about gas dynamics; it teaches them something about how to approach problems in the field.

A second method is to intersperse assessment within the learning content. The Case Scenario/Critical Reader Builder developed by the University of Wisconsin–Madison does this by allowing faculty to put course content (readings, videos, case scenarios, and the like) into an online module and then add assessments at different points within it. A question can be set to open as soon as a new concept is introduced; if the student answers incorrectly, they must go back through the recent content and reattempt the question until they get it right. Instead of moving forward without understanding something, students have that understanding secured at the moment they encounter the concept.

The system can also set up a branching scenario that takes students to content specific to their answers. The Department of Biology uses it to put students in the role of responding to a disease outbreak in Bangladesh. The choices that students make at different junctures determine the course of the outbreak, meaning that students can see the results of wrong choices.

Another use of assessments is to teach active reading skills. The instructor can upload a course text into a module and ask students questions about it, such as how they interpret particular claims, whether the article fulfills the promise set up in its introduction, or what questions the text raises about the topic. In other words, the instructor can pose the kinds of questions that the instructor would ask when reading an academic article, modeling how to gain more from a text by reading with a questioning mind. The instructor is using assessments to scaffold learning.

EdPuzzle, which we've written about many times, is an excellent way to add assessments and other interactions to online videos. The instructor can load a video they have created or pick one from any of the numerous video hosting sites integrated into the system, such as YouTube, Khan Academy, or TED Talks. The instructor then adds stops in the video that open questions, notes about tricky concepts just covered, or even other videos that help explain a point. Once again, questions can be posed before the video to prime the pump for learning, as well as during it to prompt reflection on the points made or simply to test comprehension. The system allows assessments to be graded or ungraded, and a large repository of lessons shared by other users is available at no cost. Faculty can also start with lessons created by others and customize them to their own purposes.

Assessments are more than a way to measure how much learning a student has caught. Integrated into the content itself, they become teaching devices. Adding graded or ungraded assessments to teaching content will deepen learning and increase knowledge retention.

References

Busteed, B. (2019, March 5). Online education: From good to better to best? Forbes. https://www.forbes.com/sites/brandonbusteed/2019/03/05/online-education-from-good-to-better-to-best

Holmes, N. (2015). Student perceptions of their learning and engagement in response to the use of a continuous e-assessment in an undergraduate module. Assessment & Evaluation in Higher Education, 40(1), 1–14. https://doi.org/10.1080/02602938.2014.881978