
Category: Course Design

Supriya Sarnikar, associate professor in the Economics and Management Department at Westfield State University, enrolled in several MOOCs (Massive Open Online Courses) offered through Coursera for a few reasons: personal enrichment, to learn of any pedagogical or technological innovations these courses offered, and to better understand students' experiences in online courses. “I was curious about the Coursera platform and wanted to learn whether it offered any features that were different from or better than the learning management systems commonly used on most campuses.” In an email interview with Online Classroom, Sarnikar shared her reflections on the Coursera experience.

OC: How did your experience compare to your expectations?

Sarnikar: I expected to receive a good introduction with leads for further reading in the disciplines with which I was unfamiliar. All courses met this expectation.

I did not expect the courses to be pedagogically innovative or universally accessible to students with different learning styles and preferences, and those expectations were borne out. Most courses were online versions of chalk-and-talk. Even a course that was supposed to teach instructors how to design effective online courses failed to follow its own design principles and was cancelled. This is not to say that instructors of Coursera courses are somehow inept, but that the challenge of adapting existing pedagogical practices to the online platform is one that seems to trip up even the most seasoned and resource-rich academic institutions.

I had hoped the Coursera platform would offer some features that would be technologically innovative since it is a brainchild of two computer science professors. I was both favorably impressed and disappointed in this respect. I will begin with the good features of the platform.

  1. The default interface is simple and easily navigable for students and seemingly also for instructors. (I got a brief preview of the instructor's interface when a course that I had enrolled in was accidentally made available before any content was uploaded and the instructions for the course designer were visible.)
  2. Videos are routinely offered in multiple formats and students can choose the default format that is suitable for their hardware and software settings.
  3. Students have the option of streaming or downloading the video lectures.
  4. Other course materials, when made available by the instructors, come in multiple formats (e.g., PowerPoint slides or PDF).

Of course, all these features can be offered through common learning management systems and are not particularly innovative. But what struck me was that they were standardized across all courses. Whenever video lectures were available, they were available in multiple formats.

In terms of offering technologically innovative tools for instruction, Coursera disappoints utterly. The platform possibly has better back-end features, which allow for massive data collection that could be helpful for research on student learning. But in terms of what it offers to students, it lacks imagination. The tools offered for incorporating interaction into the courses are the same unimaginative threaded discussion forums that are available in any low-end technological solution. Using discussion forums for collaborative learning is even more inefficient in such large classes than it is in smaller ones. Most courses did not use discussion forums as an integrated part of the learning structure, but as an additional tool for students to interact as they pleased. Instructors who wanted to incorporate more deliberate interaction most often turned to other publicly available collaborative tools, such as Google Hangouts, which come with their own limitations.

OC: What did you like about these courses?

Sarnikar:

  1. They do not entail any additional monetary investments on the part of the student.
  2. The default interface is simple and easy to navigate. But it is customizable, and some instructors managed to clutter the interface with too many customized icons and turned their courses into hyperlink-heavy monstrosities. Courses in the beginning were more standardized. Humanities courses, which were later additions, generally did not use the default format.
  3. I noticed a rapid improvement in the quality of video lectures within the short time frame of a semester. A course that I took in fall 2012 had video lectures captured from a classroom lecture at Stanford, with all the “ums,” “ahs,” and awkward pauses of an impromptu classroom lecture. The hour-long lectures were cut into smaller segments after the fact. But later courses, taken in spring 2013, seemed to be designed with shorter, segmented lectures, each segment with its own theme. The videos also seemed to be professionally edited and had more accurate closed captions.
  4. Some courses (in the computer science or related fields) offered customized levels of difficulty. Those who wanted a broad introduction watched fewer video lectures and completed fewer assignments. Those who wanted to delve a little deeper could watch additional video lectures and complete more assignments, such as programming assignments.

OC: What did you dislike about these courses?

Sarnikar:

  1. Courses that could not rely on chalk-and-talk, such as courses on first-year composition, and courses that chose not to rely entirely on multiple-choice quizzes placed a heavy emphasis on peer evaluations. But only one or two courses were able to use peer evaluations effectively. Many used poorly designed assignments and seemed to have included peer evaluations solely as a way to offset the workload created by the impossible student-teacher ratios in these courses. Many students complained about peer evaluations, but in a manner sensitive to the problem of student-teacher ratios and the fact that instructors were not directly compensated for their efforts. Many offered solutions for making peer evaluations more effective and less subjective. If Coursera begins charging a fee for its courses, I would imagine that the complaints about poorly thought-out peer evaluations will become more strident.
  2. Courses from different institutions had assignment deadlines in different time zones. Since the instructors are not manually grading any assignments, there is no reason why the deadlines should be set in the instructor's time zone. I was simultaneously enrolled in courses from universities situated in California, Pennsylvania, Tel Aviv, Edinburgh, and Hong Kong, each with deadlines in its own time zone. It should be entirely possible to standardize the time zones used for deadlines. That would make it easier for students taking courses from different institutions to manage their time and plan their studies.

OC: Did you participate in discussions? If so, did you find them valuable?

Sarnikar: I did not find discussions to be valuable. Very few courses tried to use focused discussions in a structured manner, and even in these, there was a lot of irrelevant or uninformed chatter that made it time-consuming to separate the wheat from the chaff. Discussions are generally effective in small online classes if they are focused and properly designed by the instructor. But perhaps because Coursera discussions are not graded, one did not see the same quality of discourse even in the few instances where instructors tried to focus the discussions. Whatever quality there was often got lost in the incessant noisy chatter.

OC: Would you take another course in this format?

Sarnikar: If “this format” means existing chalk-and-talk-style online courses, the answer is yes. But only if (1) the courses remain free of additional monetary cost, (2) peer evaluations and discussions do not become the major form of assessments, and (3) the purpose of taking the courses is personal enrichment rather than career advancement.

OC: What lessons did you learn that you will apply to the online courses you teach?

Sarnikar: Lessons learned:

  1. Do not clutter course pages with too much information and too many hyperlinks. The number of hyperlinks increases cognitive load and distracts from, rather than facilitates, student learning. An instructor's job is not to show off her knowledge by throwing everything but the kitchen sink at the student but to carefully pick those resources that are most useful and helpful.
  2. Do not use peer evaluations solely as a means of transferring the workload of providing feedback from the instructor to the students. This is not an effective way of teaching or learning.
  3. Peer evaluations can be a valuable learning tool if they are designed properly. Assignments with questions that can be easily automated are not good candidates for peer evaluation.
  4. Do not expect students to provide constructive feedback in a peer review assignment if they are not trained to do so, and do not count on it even when they are trained. It is important to pay attention to the unintended incentives created by the design of an assignment.
  5. Be clear about why you are requiring peer evaluation of an assignment. And if using teaching assistants, make sure they are trained properly to stick to the goals of the assignment. In one course, a student complained that his writing sample was graded on content rather than on its technical qualities. He thought that the peer had given his submission a low score because the peer did not agree with the student's position or topic. This student was told by the teaching assistant that it was the student's fault for choosing a topic that was likely to be controversial or provocative. The assignment's rubric, however, did not include an evaluation of choice of topic.