Presentation on SoTL research re: peer feedback

In mid-November I gave a presentation at the SoTL Symposium in Banff, Alberta, Canada, sponsored by Mount Royal University.

It’s a little difficult to describe this complex research, so I’ll let my (long) abstract for the presentation tell at least part of the story.


750-word abstract

Title: Tracking a dose-response curve for peer feedback on writing

There is a good deal of research showing that peer feedback can contribute to improvements in student writing (Cho & MacArthur, 2010; Crossman & Kite, 2012). Though one might intuitively think that students would benefit most from receiving peer comments on their written work, several studies have shown that student writing benefits both from comments given and from comments received; indeed, sometimes the former more than the latter (Li, Liu & Steckelberg, 2010; Cho & MacArthur, 2011).

There are, however, some gaps in the literature on the impact of peer feedback on improving student writing. First, most published studies on this topic consider the effect of peer feedback on revisions to a single essay, rather than whether students apply peer comments received on one essay when writing another. Cho and MacArthur (2011) is an exception: the authors found that students who wrote reviews of writing samples by students in a past course produced better writing on a different topic than those who either only read those samples or read something else. In addition, there is little research on what one might call a “dose-response” curve for the impact of peer feedback on student writing: how are the “doses” of peer feedback related to the “response” of improvement in writing? It could be that peer feedback becomes more effective in improving writing after a certain number of feedback sessions, and/or that it yields diminishing returns after many sessions.

To address these gaps in the literature, we designed a research study focusing on peer feedback in a first-year, writing-intensive course at a large university in North America. In this course students write an essay every two weeks, and they meet every week for a full year in groups of four, plus their professor, to give comments on each other’s essays (the same group stays together for half the year or the full year, depending on the instructor). With between 20 and 22 such meetings per year, students receive a heavy dose of peer feedback sessions, making this a good opportunity to measure the dose-response curve mentioned above. We can also test whether the dose-response curve differs between peer feedback groups that change halfway through the year and those that remain the same all year. Further, we can evaluate the degree to which students use comments given to them by others, as well as comments they give to others, in later essays.

Researchers sometimes try to gauge improvement in student work attributable to peer feedback by comparing coarse evaluations of quality before and after feedback (e.g., Sullivan & Pratt, 1996; Braine, 2001). But because many things besides peer feedback can go into improving the quality of student work, more specific links between what is said in peer feedback and changes in student work are preferable. Thus, we will compare each student’s later essays with the comments given to them (and those they gave to others) on previous essays, to see whether the comments are reflected in the later essays, using a process similar to that described in Hewett (2000).

During the 2013-2014 academic year we ran a pilot study with a single section of the course (sixteen students, thirteen of whom agreed to participate) to refine our data collection and analysis methods. For the pilot we collected ten essays from each participating student, the comments they received from their peers on those essays, and the comments they gave to their peers. For each essay, students received comments from three other students plus the instructor. We will use the instructor comments in two ways: first, to see whether student comments begin to approach instructor comments over time; and second, to isolate points that only students (not the instructor) commented on, so we can see whether students use those in their essays or mainly focus on points the instructor also made.

In this session, the Principal Investigator will report on the results of this pilot study: what we have learned about dealing with such a large data set, whether we can see any patterns in this pilot group of thirteen students, and how we will design a larger study on the basis of these results.


It turned out that we were still in the process of coding all the data when I gave the presentation, so we don’t yet have full results. We have coded all the comments on all the essays (10 essays from each of the 13 participants), but are still coding the essays themselves (at the time we had finished 10 essays each from 6 participants, a total of 60 essays).
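
In the meantime, for anyone curious what a “dose-response” analysis could look like in practice, here is a minimal sketch in Python. Everything in it is hypothetical (the scores, the 1-6 scale, and the saturating model are assumptions for illustration, not our data or our settled method); it just shows one way to test for diminishing returns once each student has a writing-quality score after each feedback session.

    # A minimal sketch of one possible dose-response analysis, using
    # made-up numbers (not our data). Assumes a holistic writing-quality
    # score is available after each feedback session, and fits a
    # saturating curve to look for diminishing returns.

    import numpy as np
    from scipy.optimize import curve_fit

    def saturating(dose, base, gain, rate):
        # Quality rises from `base` toward `base + gain` as sessions
        # accumulate; `rate` controls how quickly returns diminish.
        return base + gain * (1 - np.exp(-rate * dose))

    # Hypothetical data: feedback sessions completed vs. mean score (1-6).
    sessions = np.array([1, 3, 5, 7, 9, 11, 13, 15, 17, 19])
    scores = np.array([2.1, 2.8, 3.4, 3.9, 4.1, 4.4, 4.5, 4.6, 4.6, 4.7])

    params, _ = curve_fit(saturating, sessions, scores, p0=[2.0, 3.0, 0.1])
    base, gain, rate = params
    print(f"estimated baseline quality: {base:.2f}")
    print(f"estimated maximum gain: {gain:.2f}")
    print(f"sessions to reach ~63% of that gain: {1 / rate:.1f}")

A curve that only starts rising after several sessions, or one that flattens out quickly, would correspond to the two possibilities raised in the abstract: a threshold effect, or diminishing returns.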

I’m not sure the slides themselves tell the whole story very clearly, but I’m happy to answer questions if anyone has any. I’m holding off on writing a narrative about the results until the full results are in (hopefully in a couple of months!).

We’re also putting in a grant proposal to run the study with a larger sample (we applied for a grant last year and didn’t get it…we’ll try again this year).

Here are the slides!