How does giving comments in peer assessment affect students? (Part 3)

This is the third post in a series summarizing empirical studies that attempt to answer the question posed in the title. The first two can be found here and here. This will be the last post in the series, I think, unless I find some other pertinent studies.

Lundstrom, K., & Baker, W. (2009). To give is better than to receive: The benefits of peer review to the reviewer’s own writing. Journal of Second Language Writing, 18, 30–43. doi:10.1016/j.jslw.2008.06.002

This article focuses on students in “L2” classes (second language, or additional language), and asks whether students who review peers’ papers improve their own (additional language) writing more than students who only receive peer reviews and attempt to incorporate the feedback, without giving comments on peers’ papers.

Participants were 91 students enrolled in nine sections of additional language writing classes at the English Language Center at Brigham Young University. The courses were at two levels out of a possible five: half the students were in level 2, “high beginning,” and half were in level 4, “high intermediate” (33). The students were then divided into a control and experimental group:

The first group was composed of two high beginning and three high intermediate classes, (totaling forty-six students). This group was the control group (hereafter ‘‘receivers’’) and received peer feedback but did not review peers’ papers (defined as compositions written by students at their same proficiency level). The second group was composed of two high beginning classes and two high intermediate classes, (totaling forty-five students), and made up the experimental group, who reviewed peer papers but did not receive peer feedback (hereafter ‘‘givers’’). (33; emphasis mine)

Research questions and procedure to address them

Research questions:

1. Do students who review peer papers improve their writing ability more than those who revise peer papers (for both beginning and intermediate students)?

2. If students who review peer papers do improve their writing ability more than those who revise them, on which writing aspects (both global and local) do they improve? (32)

I found it difficult to read these research questions at first, because I didn’t know what “revise peer papers” meant. I thought the “receivers” of comments would be revising their own papers, but that’s not the case; they really are revising peer papers.

Instead of engaging in the usual kind of peer review, where groups of students trade papers and give feedback on each other’s work (and receive feedback from others), the students in this study read papers by other students at the same proficiency level, but not in the same course. The procedure was not, as I had at first imagined, that one group of students in the classes wrote papers that the other group read and commented on, with the first group receiving comments but giving none, and the second group giving comments but receiving none on their own papers.

The authors explained that they used sample essays instead, “in order to control for differences in student writing (since with different papers there would be the possibility of wide differences in both how well the papers were written and what types of changes were needed)” (33). It also occurred to me that my initial picture of the design might have been problematic anyway: asking students to only give, or only receive, comments on their own work might strike students as unfair to one or both groups, in a way that using sample papers avoids.

Specific procedures

Both the experimental and control group were given training in peer review, but the “givers” were trained to give feedback, and the “receivers” to use feedback to improve writing.

Then, both groups got the same sample essay from another student not in that class, and both were asked the same questions about it, such as “‘How would you improve the thesis statement?’”

The receivers were to take the feedback written in the margins of the example essay and then rewrite the thesis statement. The ‘‘givers’’ were to provide their own suggestions for improving the thesis statement and write it in the margins of the example essay (their copy of the essay did not include suggestions already written in the margins). (33)

The students also took pre- and post-tests to determine improvement in writing; these were 30-minute, timed writing exercises. They were scored using a grading rubric taken from Paulus (1999), which yields scores out of ten for each of the following categories: “organization, development, cohesion, structure, vocabulary, and mechanics” (34). All essays were scored by two raters and their scores averaged, unless the two differed by more than one point in any of the rubric categories. In the latter case, “a third rater also graded the disputed aspects of the essay before the scores for the essay were averaged” (34). The raters also did a significant amount of practice marking and discussion until they consistently scored within one point of each other on the practice essays.
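To make the scoring procedure concrete, here is a minimal sketch of the adjudication logic as I read it. The six rubric categories and the one-point threshold come from the article; everything else, including the assumption that all available ratings get averaged, is my own hypothetical reconstruction.

```python
# Hypothetical reconstruction of the two-rater scoring logic described in
# the article. The categories and the one-point threshold are from the
# paper; the names and the decision to average all available ratings are
# my own assumptions.
CATEGORIES = ["organization", "development", "cohesion",
              "structure", "vocabulary", "mechanics"]

def score_essay(rater1, rater2, get_third_rating):
    """Average two raters' scores per category; bring in a third rater
    for any category where the first two differ by more than one point."""
    final = {}
    for cat in CATEGORIES:
        ratings = [rater1[cat], rater2[cat]]
        if abs(rater1[cat] - rater2[cat]) > 1:
            # a "disputed aspect": a third rater also grades it
            ratings.append(get_third_rating(cat))
        final[cat] = sum(ratings) / len(ratings)
    return final

# Example: raters agree everywhere except "development".
r1 = {c: 7 for c in CATEGORIES}
r2 = dict(r1, development=4)
print(score_essay(r1, r2, get_third_rating=lambda cat: 6))
```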

To further account for scorer differences, the tests were run through a statistical program (FACETS) that adjusted the scores for “rater severity or leniency,” in order to achieve “more accurate representations of student writing ability independent of raters” (35). I wish I understood better how this worked, but that’s all the information they gave in the article.
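From what I can gather, FACETS fits a many-facet Rasch model, which I won’t pretend to reproduce. As a much cruder stand-in, here is a sketch of the basic idea of removing a rater’s systematic severity or leniency; the data and the mean-centering method are entirely my own illustration, and considerably simpler than what FACETS actually does.

```python
# A crude illustration of adjusting for rater severity -- NOT the
# many-facet Rasch model that FACETS fits. All data here are made up.
import pandas as pd

# Hypothetical scores: each essay rated by two raters on a 10-point scale.
scores = pd.DataFrame({
    "essay": [1, 1, 2, 2, 3, 3],
    "rater": ["A", "B", "A", "B", "A", "B"],
    "score": [6.0, 7.5, 5.0, 6.5, 7.0, 8.0],
})

# Estimate each rater's "severity" as their average deviation from the
# per-essay mean, then subtract it out of their scores.
essay_mean = scores.groupby("essay")["score"].transform("mean")
deviation = scores["score"] - essay_mean
severity = deviation.groupby(scores["rater"]).transform("mean")
scores["adjusted"] = scores["score"] - severity
print(scores)
```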

Results

First research question

Results for the first research question (whether “givers” improve writing more than “receivers”) were obtained using the following analysis, which I quote for those who understand it (I don’t, entirely):

… we performed a repeated measures ANOVA with the 7 writing aspects (overall, organization, development, cohesion, vocabulary, mechanics, grammar) and time (pre-test and post-test) as within-subjects factors and treatment group (receiver vs. giver) as between-subjects factors. These analyses determined which of the two groups made significant gains from pre-test to post-test, and they were run separately for the beginning and intermediate levels to determine if differences occurred on both levels. (35)
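For readers who, like me, find this easier to grasp as code: here is a minimal sketch of a mixed-design repeated measures ANOVA in Python, simplified to a single overall score rather than seven writing aspects. The data, column names, and the choice of the pingouin library are all my own hypothetical illustration, not the authors’ analysis.

```python
# A minimal sketch of a mixed-design repeated measures ANOVA -- NOT the
# authors' code or data. Simplified to one score instead of seven
# writing aspects; column names are hypothetical.
import pandas as pd
import pingouin as pg

# Toy long-format data: one row per student per time point.
df = pd.DataFrame({
    "student": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":   ["giver"] * 6 + ["receiver"] * 6,
    "time":    ["pre", "post"] * 6,
    "score":   [5.0, 7.5, 4.5, 7.0, 5.5, 7.5,   # givers: larger gains
                5.5, 6.0, 6.0, 6.5, 5.0, 5.5],  # receivers: smaller gains
})

# "time" (pre vs. post) is the within-subjects factor and "group"
# (giver vs. receiver) the between-subjects factor; the time-by-group
# interaction is what indicates whether givers gained more.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     between="group", subject="student")
print(aov.round(3))
```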

For the beginning group (givers and receivers in the level 2 courses), the givers improved more between their pre- and post-tests than did the receivers (35).

For the intermediate group (givers and receivers in the level 4 courses), there was no significant difference between the gains from pre- to post-test in the giver group and those in the receiver group (36).

The authors hypothesized that perhaps the intermediate group had received more peer assessment training and practice than the beginners, and that this could account for the difference. To test this, they divided the intermediate students into those for whom this was the first course in English in the United States and those for whom it was not, hypothesizing that for the former, this might be their first experience with peer review. [But why think this? See commentary below.]

When running a two-way ANOVA (treatment × writing aspects) on the pre- and post-test gains of those for whom this was the first class in English in the United States, the authors found results “suggesting that for those students who were new to peer review, the intermediate giver students made greater gains than did the students in the receiver group” (37). A similar analysis for those who had taken previous courses in English in the U.S. did not reveal greater gains for givers than receivers.

Second research question

The results for the second research question (if givers and receivers differ in amount of improvement in writing, on which aspects of writing do they differ?) were obtained through “[f]ollow-up post-hoc Tukey tests” (35). Again, this is something I am not familiar with.
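As I understand it, after an ANOVA indicates that some difference exists, Tukey’s test asks which specific groups differ, while controlling for the multiple comparisons being made. Here is a sketch on hypothetical gain scores, again not the authors’ data or code.

```python
# A sketch of a Tukey HSD post-hoc comparison on hypothetical
# pre-to-post gain scores -- not the authors' data.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
gains = np.concatenate([
    rng.normal(2.0, 0.5, 20),   # hypothetical givers' gains
    rng.normal(0.8, 0.5, 20),   # hypothetical receivers' gains
])
groups = ["giver"] * 20 + ["receiver"] * 20

# Reports, for each pair of groups, the mean difference and whether it
# is significant after the Tukey HSD correction.
print(pairwise_tukeyhsd(endog=gains, groups=groups, alpha=0.05))
```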

For the beginner group, the givers were found to have greater writing gains overall, and also specifically in three “global” aspects (organization, development, cohesion/coherence) (35).

For the intermediate group, both the givers and receivers improved in organization, development and structure, but not in cohesion/coherence, vocabulary or mechanics (36). For those intermediate students for whom this was the first course in English in the U.S., the givers improved more than the receivers in overall scores, as well as in organization and development aspects of writing (37).

Discussion

The authors discussed several issues, but I just want to point to their hypothesis about why there may have been a difference between givers and receivers in the beginning group but not the intermediate group. They suggest first that the intermediate students may have already received the initial boost in writing ability that comes from engaging in peer review, so it wouldn’t show up in this study. The authors also suggest something that may be more specific to L2 contexts:

the intermediate group may be developing skills that take longer than one semester to develop; thus the benefits from learning how to better revise student papers at their level will not be manifest over the course of only one semester. (39)

My comments

This study seems largely well-designed, especially in the care the authors have taken to ensure the validity of the pre- and post-test scores. I cannot, of course, comment on the statistical analyses used at this time — not until I’ve learned more about research methods.

The authors acknowledge that there may be intervening factors in the improvement gains from pre-test to post-test that aren’t captured here, but they argue that the following (among others) help to support the claim that it was the difference in treatment (giver vs. receiver) that accounts for much of the effect: (1) both the beginning givers and the intermediate givers who had not had previous exposure to peer review made the same kinds of gains; (2) it would be surprising if the intermediate givers with no exposure to peer review differed so much from those with some, were that not due to differences in treatment (37).

The biggest flaw I see is the assumption that the students for whom this was the first course in English in the U.S. had not previously been exposed to peer review. This rests on the assumption that peer review is common only in the U.S., which is problematic. Perhaps the authors no longer had access to the students by the time they were doing the analysis, and so couldn’t ask them whether they had previously engaged in peer review, which would have been the ideal line along which to separate the students.

In addition, I still think that just measuring improvement in writing after engaging in peer review is fairly coarse. What the authors did here was ask students to give comments on a paper, or to revise it based on comments, across several different aspects of writing, and then measure students’ gains in writing on those (? or similar?) aspects. They didn’t look at what sorts of things students said on the sample papers as “givers” and whether they incorporated those kinds of things into their own writing later (or, vice versa, at the sorts of things students tried to revise as “receivers”).

The authors themselves suggest that that kind of study should be done in the future:

… to greater support the findings of this study, further research should examine the effects of these two tasks [giving and receiving feedback] in a qualitative study that closely identifies which aspects students discuss while in the reviewing part of peer review and whether these same aspects are improved in the reviewer’s own writing. (39)

Precisely. I think my colleague who suggested this in the first place is right on target!

Earlier studies

Finally, I just want to mention that this is the only article of the four on this topic I’ve looked at that mentions much earlier studies on the same topic. Were it not for this article, I wouldn’t have been aware of these. I’ll have to go back and look at some of them (but I won’t go into detail about them on this blog, if I mention them at all): Sager (1973), Bruffee (1978), Marcus (1984), and Graner (1987). The authors claim that little has been done since then on the question of whether there’s a difference in writing gains between “givers” and “receivers” of peer feedback.


Works Cited

Bruffee, K. (1978). The Brooklyn Plan: Attaining intellectual growth through peer-group tutoring. Liberal Education, 64, 447–468.

Graner, M. (1987). Revision workshops: An alternative to peer editing groups. The English Journal, 76, 40–45.

Marcus, H. (1984). The writing center: Peer tutoring in a supportive setting. The English Journal, 73, 66–67.

Paulus, T. M. (1999). The effect of peer and teacher feedback on student writing. Journal of Second Language Writing, 8, 265–289.

Sager, C. (1973, November). Improving the quality of written composition through pupil use of rating scale. Paper presented at the annual meeting of the National Council of Teachers of English, Philadelphia, PA.
