Peer assessment: face to face vs. asynchronous, online (Pt. 1)

I have been reading a good deal of research on peer assessment lately, especially studies that look at the differences and benefits/drawbacks of doing peer assessment face to face, orally, versus through writing, both asynchronous writing in online environments (e.g., comments on a discussion board) and synchronous writing online (e.g., in text-based “chats”). I summarized a few studies on oral vs. written peer assessment in this blog post, and then set out a classification structure for different methods of peer assessment in this one.

Here, I summarize a few studies I’ve read that look at written, online, asynchronous peer feedback. In another post I’ll summarize some studies that compare oral, face to face feedback with written, online, synchronous feedback (text-based chats). I hope some conclusions about the differences and benefits of each kind can be drawn after the results are summarized.

1. Tuzi, F. (2004) The impact of e-feedback on the revisions of L2 writers in an academic writing course, Computers and Composition 21, 217–235. doi:10.1016/j.compcom.2004.02.003

This study is a little outside of my research interest, as it doesn’t compare oral feedback to written (in any form). Rather, the research focus was to look at how students revised essays after receiving e-feedback from peers and their teacher. Oral feedback was only marginally part of the study, as noted below.

20 L2 students (students for whom English was an additional language) in a first-year writing course at a four-year university participated in this study. Paper drafts were uploaded onto a website where other students could read them and comment on them. The e-feedback could be read on the site, but was also sent via email to students (and the instructor). Students wrote four papers as part of the study, and could revise each paper up to five times. 97 first drafts and 177 revisions were analyzed in the study. The author compared comments received digitally to later revised drafts, to see what had been incorporated. He also interviewed the authors of the papers to ask what sparked them to make the revisions they did.

Tuzi combined the results from analyzing the essay drafts and e-feedback (to see what of the feedback had been incorporated into revisions) with the results of the interviews with students, to identify the stimuli for changes in the drafts. From these data he concludes that 42.1% of the revisions were instigated by the students themselves, 15.6% came from e-feedback, 14.8% from the writing centre, 9.5% from oral feedback (from peers, I believe), and for 17.9% of the revisions the source was “unknown.” He also did a few finer-grained analyses, showing how e-feedback fared in relation to these other sources at different levels of writing (such as punctuation, word, sentence, and paragraph), in terms of the purpose of the revision (e.g., new information, grammar), and more. In many analyses, the source of most revisions was the students themselves, but e-feedback ranked second in some (such as revisions at the sentence, clause, and paragraph levels, and adding new information). Oral feedback was always low on the list.
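Just to make the arithmetic behind such a breakdown concrete, here is a minimal sketch in Python, using entirely made-up stimulus codes rather than Tuzi’s data or coding scheme, of how coded revisions could be tallied into percentages like those above:

```python
from collections import Counter

# Hypothetical stimulus code for each recorded revision (made-up data,
# not Tuzi's): who or what prompted the change.
revision_stimuli = [
    "self", "self", "e-feedback", "self", "writing centre",
    "oral feedback", "self", "e-feedback", "unknown", "self",
]

counts = Counter(revision_stimuli)
total = len(revision_stimuli)

# Share of revisions attributed to each stimulus.
for stimulus, count in counts.most_common():
    print(f"{stimulus:15} {count:3}  {100 * count / total:.1f}%")
```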

In the “discussion” section, Tuzi states:

Although e-feedback is a relatively new form of feedback, it was the cause of a large number of essay changes. In fact, e-feedback resulted in more revisions than feedback from the writing center or oral feedback. E-feedback may be a viable avenue for receiving comments for L2 writers. Another interesting observation is that although the L2 writers stated that they preferred oral feedback, they made more e-feedback-based changes than oral-based changes.

True, but note that in this study oral feedback was not emphasized. It was something students could get if they wanted, but only the e-feedback was a focus of the course. So little can be concluded here about oral vs. e-feedback. To be fair, though, that wasn’t really the point of the study. The point was simply to see how students use e-feedback, whether it is incorporated into revisions, and what kinds of revisions e-feedback tends to be used for. And Tuzi is clear towards the end: “Although [e-feedback] is a useful tool, I do not believe it is a replacement for oral feedback or classroom interaction …”. Different means of feedback should be available; this study just shows, he says, that e-feedback can be useful as one of them.

What I found especially helpful was the description of comparing drafts to the feedback that had been provided, to see whether the feedback was incorporated into the revisions. This was a lengthy process, involving reading all the drafts of an essay, recording all the changes, and then comparing these with the feedback received. He also took the extra step of interviewing the authors to find out what they were thinking when they made the changes, and what they remembered the stimulus being. This made me realize that one probably shouldn’t look only at the drafts and the feedback: the fact that feedback was given doesn’t mean it was the source of a revision. Still, one can’t necessarily get a full picture from interviews either; students may think something came from their own ideas when they have forgotten that someone else suggested it to them. Perhaps a combination of these two strategies is best, though I wish I could have seen whether there was any difference in the results from each; for example, whether one might have concluded from the drafts and feedback alone that the feedback was more of a stimulus than the students themselves reported in interviews.
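As a rough illustration of why combining the two strategies helps, here is a small sketch (in Python, with hypothetical records rather than Tuzi’s actual procedure or data) that pairs each recorded revision with what the draft/feedback comparison suggests and with what the author reported in an interview, and flags the cases where the two disagree:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Revision:
    change: str                      # what changed between drafts
    matched_feedback: Optional[str]  # feedback comment it appears to respond to, if any
    interview_source: str            # what the author said prompted the change

# Entirely hypothetical records for one essay.
revisions = [
    Revision("reworded thesis statement", "Your thesis is unclear", "e-feedback"),
    Revision("added example to paragraph 2", None, "self"),
    Revision("fixed verb tenses throughout", "Check your tenses", "self"),
]

for r in revisions:
    from_documents = "e-feedback" if r.matched_feedback else "self"
    note = "" if from_documents == r.interview_source else "  <-- sources disagree"
    print(f"{r.change}: drafts/feedback suggest {from_documents}, "
          f"author reports {r.interview_source}{note}")
```

The third record is the interesting case: looking only at the drafts and the feedback, one would credit the peer comment, while the interview credits the writer.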

2. Figl, K., Bauer, C., Mangler, J., Motschnig, R. (2006) Online versus Face-to-Face Peer Team Reviews, Proceedings of Frontiers in Education Conference (FIE). San Diego: IEEE. See here for the online version (behind a paywall).

In this study, students in a computer science project management course participated in peer assessment in teams, with each team reviewing and evaluating another team’s materials. Students created software development projects in teams of three, and there were two peer evaluation activities: first, teams were paired up to do face to face evaluations (paper documents were handed out and discussions took place face to face); later, new pairs of teams were formed for the online peer evaluation (documents were presented online, with asynchronous, written comments and a chance to engage in further asynchronous, written discussion). The same evaluation form was used for both the face to face and online reviews.

The research questions for the study were:

Do the online and the face-to-face version show differences in communication and discussion? How do collaboration within and among teams as well as workload distribution differ? Are there any differences in the perceived review quality and work efficiency?

These questions were addressed, however, using only a questionnaire given to students who had completed both sets of peer evaluation activities. The questionnaire asked students to rate the value of each form of peer evaluation for things such as reviewing the other team’s documents, discussing with the partner team, revising the project through discussion within the team, the quality of the comments, and more. The researchers received completed questionnaires from 16 students.

The results were fairly mixed, with the ratings for the face to face and the online, asynchronous formats being pretty close for most questions (and since the number of students was so small, it’s hard to conclude very much here). The one result that really stood out was that most students preferred the face to face peer evaluation for discussing the projects with the partner teams: comments could be clarified, questions answered, misunderstandings cleared up, and so on.

A similar conclusion was reached in the next study.

3. Guardado, M., Shi, L. (2007) ESL students’ experiences of online peer feedback, Computers and Composition 24, 443–461. doi:10.1016/j.compcom.2007.03.002

This study was similar to that by Tuzi, above, in that it focused specifically on electronic, asynchronous, written feedback and did not compare that with other means of providing feedback.

The research questions were:

  1. What types of online peer feedback did student authors receive?
  2. Did student authors follow peer comments in their revisions? And if so, how did they perceive such experiences?
  3. Did student authors ignore peer comments in their revisions? And if so, how did they perceive such experiences?

22 students participating in an exchange program at a large Canadian university (they were all from Japan) were part of the study, in a course on intercultural communication. Students wrote three 500-word essays, the first two of which underwent face to face peer evaluation (in groups of three). For the third essay, students posted their essays on a course website with their names attached, and other students commented on the essays anonymously through a discussion board. The essay authors could interact with reviewers on the discussion board as well, asking questions, making clarifications, and generally discussing comments further as needed. The online feedback was mostly given during class time (students gave the feedback in a room full of computers), but students could also continue the discussions for two weeks afterwards. Students then revised their essays and submitted them to the instructor. The researchers also interviewed the students about their experiences with online feedback, including whether they thought it was useful, whether they preferred online or face to face feedback, and whether they preferred to give and receive feedback anonymously.

The data for the study consisted of the transcribed interviews, the students’ third paper drafts, and the electronic feedback they had received on those drafts. The authors compared the first draft and the final essay for changes, and compared these with the peer feedback received. The authors did not record the face to face peer feedback sessions, so did not compare oral vs. written, online, asynchronous feedback in terms of its incorporation into revisions.

Just 13 of the 22 students revised their essays, and of those, 10 used peer feedback in their revisions; the other three made only self-generated revisions. In total, 60 revisions were suggested by peers, and just 27 of those suggestions were taken up in the revised essays. I’m not sure what to make of these numbers, really. Are they good? Are they comparable to the numbers of peer-feedback-generated revisions in face to face feedback?

The interviews revealed numerous things, but I want to highlight one of particular interest to the topic of face to face vs. mediated, asynchronous feedback: many students felt that the online, asynchronous process did not provide the kind of interaction they wanted during feedback (to ask questions, clarify things, etc.). Though that opportunity was provided on the discussion board, none of the participants engaged in further discussion there.

I can partly imagine why that might be the case: once one moves away from the review process in time, the motivation to go back and revisit it may be lower. In addition, even if the author wanted to ask a reviewer a question, it wouldn’t get answered unless the reviewer happened to go back online and check the discussion board. I know that I often fail to do that sort of thing myself, having moved on to other activities. Clarifying issues, asking questions, and making further comments are all, I think, best done synchronously, when everyone is thinking about them in the moment. That’s hard to re-create when the exchange is stretched out over time.

Some students, in interviews, also pointed out that without knowing the names of reviewers, they couldn’t easily go and ask those people to clarify their comments. Though anonymity of reviews has upsides (students may feel more comfortable saying what they really think), it also has this downside when the feedback is asynchronous.

As the authors note in the conclusion:

The lack of interaction turned the online peer feedback in the present study into a one-way communication process, leaving a good portion of peer comments unaddressed and, thus, opportunities missed. (458)

And like Tuzi, they conclude: “Our study suggests that online peer feedback is not a simple alternative to face-to-face feedback and needs to be organized carefully to maximize its positive effect” (458).

Conclusions

Not much can be concluded here except that electronic, asynchronous feedback can be useful, but probably shouldn’t be a replacement for face to face discussion. The students in both Figl et al. (2006) and Guardado and Shi (2007) noted that face to face discussions of feedback are useful in ways that online, asynchronous media don’t easily allow.

This is not terribly surprising, though I wonder whether a more synchronous online environment would allow for the same kind of discussion as a face to face environment, and so be just as useful. In a later post I’ll look at some studies that discuss synchronous, written chats as an alternative to face to face discussion.

A possible study comes to mind after summarizing these: one could look at two sections of the same course taught by the same teacher, one in which only written peer feedback is given, and one in which students receive written peer feedback plus a chance to discuss that feedback and the essay orally in pairs or groups. Then one could compare whether the feedback is better incorporated in one situation than the other. It seems that students do tend to prefer face to face discussion of feedback, but perhaps that plus written feedback would be best for revision purposes.

Of course, one would somehow have to factor into this the fact that some feedback should not be incorporated into drafts, because it’s off the mark. I think that’s where interviews could come in: at least then you could find out whether the feedback was taken seriously enough to be weighed before the writer decided not to use it.
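If someone ran that comparison, the most basic analysis might just compare incorporation rates across the two sections. Here is a minimal sketch in Python with invented placeholder counts (and treating each feedback point as independent, which is itself a simplification), using a Fisher’s exact test:

```python
from scipy.stats import fisher_exact

# Invented counts of peer-feedback points incorporated vs. not incorporated.
# Section A: written peer feedback only.
# Section B: written peer feedback plus oral discussion.
incorporated_a, ignored_a = 27, 33
incorporated_b, ignored_b = 41, 19

table = [[incorporated_a, ignored_a],
         [incorporated_b, ignored_b]]
odds_ratio, p_value = fisher_exact(table)

rate_a = incorporated_a / (incorporated_a + ignored_a)
rate_b = incorporated_b / (incorporated_b + ignored_b)
print(f"Written only: {rate_a:.0%} incorporated; written + oral: {rate_b:.0%}")
print(f"Fisher's exact test: odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

Raw incorporation counts would still need to be paired with the interview step described above, since, as noted, some feedback is rightly ignored.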


Any thoughts on these studies or my comments? Please let me know!

I summarize one more study looking at face to face discussions vs. online, asynchronous discussions, here.