Literature on written and oral peer feedback

For context on why I’m interested in this, see the previous post.

I’ve done some searches on the question of oral versus written peer feedback, and been surprised at the paucity of results. Or rather, the paucity of results outside the field of language teaching, or of teaching courses in a language that is an “additional language” for students. I also have yet to look into the literature on online vs. face-to-face peer review. Outside of those areas, I’ve found only a few articles.

1. Van den Berg, I., Admiraal, W. & Pilot, A. (2006) Designing student peer assessment in higher education: analysis of written and oral peer feedback, Teaching in Higher Education, 11:2, 135-147. http://dx.doi.org/10.1080/13562510500527685

In this article Van den Berg et al. report on a study of peer feedback in seven different courses in the discipline of history (131 students). The courses had peer feedback designs that differed in features such as: the kind of assignment that was the subject of peer feedback, whether peer feedback took place alongside teacher feedback or on its own, whether students who commented on others’ work received comments from those same students on their own work, how many students were in each feedback group, and more. Most of the courses included both written and oral peer feedback, though one of the seven had written peer feedback only.

The authors coded both the written and oral feedback along two sets of criteria: feedback functions and feedback aspects. I quote from their paper to explain these two things, as they are fairly complicated:

Based on Flower et al. (1986) and Roossink (1990), we coded the feedback in relation to its product-oriented functions (referring directly to the product to be assessed): analysis, evaluation, explanation and revision. ‘Analysis’ includes comments aimed at understanding the text. ‘Evaluation’ refers to all explicit and implicit quality statements. Arguments supporting the evaluation refer to ‘Explanation’, and suggested measures for improvement to ‘Revision’. Next, we distinguished two process-oriented functions, ‘Orientation’ and ‘Method’. ‘Orientation’ includes communication which aims at structuring the discussion of the oral feedback. ‘Method’ means that students discuss the writing process. (141-142)

By the term ‘aspect’ we refer to the subject of feedback, distinguishing between content, structure, and style of the students’ writing (see Steehouder et al., 1992). ‘Content’ includes the relevance of information, the clarity of the problem, the argumentation, and the explanation of concepts. With ‘Structure’ we mean the inner consistency of a text, for example the relation between the main problem and the specified research questions, or between the argumentation and the conclusion. ‘Style’ refers to the ‘outer’ form of the text, which includes use of language, grammar, spelling and layout. (142)

They found that students tended to focus on different things in their oral and written feedback. Across all the courses, written feedback tended to be more product-oriented than process-oriented, with a focus on evaluating quality rather than explaining that evaluation or offering suggestions for revision. In terms of feedback aspect, written feedback focused more on content and style than on structure (143).

Students’ oral feedback was less product-oriented, the authors found, “about one-third of it being aimed at structuring the discourse, or discussing subject matter which was related to, but not written in the text” (144). “If feedback on the writing process was provided at all, it was oral, rather than in written form” (145). The product-oriented oral comments were not focused mostly on evaluation, as the written feedback was, but tended to include analysis, evaluation, explanation, and suggestions for revision. Interestingly, regarding feedback aspect, oral feedback also focused mostly on content and style, with little on structure (144).

Still, when comparing the two kinds of feedback, the authors claim that, in regard to feedback aspect, “In their written feedback students commented on structure more than in their oral feedback. In their oral feedback students commented more on style” (145).

The authors also analyzed differences between course designs, but I won’t focus on that here. They also have an article arguing for the most effective peer feedback designs, using the same data set: Van den Berg, I., Admiraal, W. & Pilot, A. (2006) Peer assessment in university teaching: evaluating seven course designs, Assessment & Evaluation in Higher Education, 31:1, 19-36. http://dx.doi.org/10.1080/02602930500262346

Because of the different nature of written and oral feedback, the authors conclude that “for PA [peer assessment] to yield the most complete feedback, a combination of written and oral feedback is essential” (146).

This study is right along the lines of what I’m interested in. It would be possible to try a smaller-scale replication with my own courses in Philosophy, or with those plus courses taught by others in Philosophy, to see if I get similar results. But I’m also interested in whether students perceive oral feedback, written feedback, or a combination of both to be best. Finally, I’d like to know which actually tends to lead to improvement in writing, but I don’t know how to measure that. Maybe by looking at how much students consciously use the feedback on their next assignment? Of course, that can be influenced by professors asking them directly to reflect on how much they have done so.

 

2. Krych-Appelbaum, M. & Musial, J. (2007) Students’ Perception of Value of Interactive Oral Communication as Part of Writing Course Papers, Journal of Instructional Psychology, 34:3, 131-136.

Here, the authors specifically consider students’ own views of written vs. oral feedback, though not quite in the way I am thinking of myself. The subjects of the study were 20 undergraduate students in a psychology of language course (the authors don’t state the level of the course: first year, second, third, etc.). Students were randomly assigned to one of two groups: one gave and received written feedback on a draft of a paper, via email, and the other met twice, face-to-face, to discuss and give comments, once before writing the draft (to share ideas on what they were thinking of writing) and once after. The feedback concerned one paper only, so students in the first group gave and received feedback just once, while those in the second group did so twice on the same paper. After the students submitted their papers, they filled out “a questionnaire about their experience and the techniques they would use to write papers in the future” (133). I’m not sure, but I gather from one of the tables that only 16 of the 20 filled out the questionnaire.

In the results section, the authors focus only on students’ views of techniques for improving writing, rather than on what students said about the feedback experience; it’s not clear from the article what sorts of questions were asked about that experience. There was a difference in which writing techniques the two groups rated most highly: the oral feedback group rated talking about one’s paper with others more highly as a technique for improving writing than the written feedback group did, while the written feedback group gave higher ratings than the oral group to evaluating another’s paper, reading a sample paper, and outlining one’s own paper.

The authors also report which techniques students predicted they would use in the future. Again, there was a difference: 50% of the oral group planned to talk with someone else before writing a paper in the future, while no students in the written group did. 50% of the written group said they’d use outlining in the future, while only 25% of the oral group did. 25% of the written group said they’d evaluate a friend’s draft in the future, while none of the oral group did. Interestingly, 25% of each group said they planned in the future to talk to someone else about a draft of a paper after writing it. Even more interestingly, the authors hardly comment on that at all. It suggests that even those who didn’t try talking to someone else after writing their draft would like to try it in the future. (Or maybe that question covered talking with someone else either orally or in writing? It’s not clear, which is a problem.)

Krych-Appelbaum and Musial conclude that the study provides some evidence that students value oral peer feedback:

These results provide some initial evidence that talking with another person about what one writes or what one plans to write may be very useful. In particular, students indicated they were likely to converse with someone in the future about their writing. Half of the students who were in the conversational condition rated it as their top choice of technique for writing in the future, whereas none of the students in the written condition did so. (135)

I don’t find this study design as useful, though, as one that would allow students to try both kinds of feedback and then give their views. It’s not surprising, really, that those who tried oral feedback would consider using it in the future, while those who didn’t try it wouldn’t consider using it at all. It would be more interesting to have more than one assignment, with students doing written feedback on one and oral on another (maybe half doing oral and half written on the first, then switching groups for the second), and then see what they think of each kind.

 

3. Cartney, P. (2010) Exploring the use of peer assessment as a vehicle for closing the gap between feedback given and feedback used, Assessment & Evaluation in Higher Education, 35:5, 551-564. http://dx.doi.org/10.1080/02602931003632381

This article is a case study that forms part of a larger project examining the use of peer assessment as a way to help students “close the gap” referred to in the article title. There isn’t much about that in this particular article, actually. The results reported here are of a focus group of ten students who engaged in a peer feedback exercise in a first year social work course. The author uses these results to discuss some potential complications of using peer assessment.

Students in the social work course were divided into groups of five, and within the group each student gave feedback on a draft essay of each of the other students. Students sent each other essays via email, and then exchanged feedback forms on those essays via email as well. They were then encouraged to engage in online discussion of the feedback. The focus group discussion appears to have taken place a few months after the peer feedback exercise.

A number of issues are discussed in the article, such as anxiety about being assessed as well as being an assessor, whether the peer feedback process helped students get a better understanding of assessment criteria (and what pitfalls there can be in wording those criteria), and the degree to which students felt they used the feedback from their peers in their later work. One part, though, does relate to the issue of giving oral vs. written feedback. Though students were invited to discuss each other’s feedback online, apparently a number of groups elected to engage in discussion face-to-face instead. A quote from one of the students in the focus group is particularly interesting here:

Verbal feedback is essential – what you have written on paper doesn’t translate what is in your head. You would have to write a whole essay for some feedback where you could just explain it in a few words.

This makes sense to me. I find that I spend quite a bit of time writing comments that I could provide more quickly by speaking. I expect that would be the case for students as well. Another student commented on how oral communication allows students to explain misunderstandings. Of course, that could be done through an online discussion as well.

What I found particularly interesting is that the author treats this development (that a number of groups bypassed the online discussion for an oral one) as a kind of failure. She suggests that part of the problem may be that e-learning is not a very common part of students’ experience, and so they don’t have as much desire to use such technology as they might if they were asked to do so more often. She also notes that in her programme, “e-learning technology is often a source of information-giving rather than a vehicle for dialogue,” so students have little experience of using it for dialogue. She concludes that:

A wider debate at programme level is suggested here, … about how we can encourage students to communicate in writing to each other – and later to other professionals in their work.

and

… something of a cultural shift may be necessary to encourage students to embrace e-learning. (558)

Why not see this as evidence that students consider oral, face-to-face feedback more useful than online, written feedback and dialogue, rather than as showing a need to get students more practice in the latter so they can see its value? Perhaps Cartney’s points are also true, but she doesn’t consider the possibility that even if students were more versed in online, written feedback and dialogue, they would still prefer oral dialogue. I’m not saying that’s necessarily the case, just that it’s a possibility that’s not discussed here.

 

4. Reynolds, J. & Russell, R. (2008) Can You Hear Us Now? A comparison of peer review quality when students give audio versus written feedback, The WAC Journal, 19:1, 29-44. http://wac.colostate.edu/journal/vol19/index.cfm

This study doesn’t quite fit with the others, because it’s about comparing students’ experiences of giving and receiving audio (recorded into digital files) vs. written feedback. The authors had found that providing audio feedback was much more efficient than writing it out, and the fact that their university had given every student an iPod ensured that all students could listen to the feedback. They experimented with asking students to give and receive both audio and written feedback to each other.

This design did what I suggested Krych-Appelbaum and Musial should have done: students were split into two groups, with one group giving audio feedback and the other written feedback for one assignment, then switching for a second assignment. The authors then asked the students to fill out a questionnaire about their experiences giving and receiving each type of feedback. They also evaluated the “quality” of the feedback using two measures: the number of lower- vs. higher-order concerns addressed (where lower-order concerns include things like the mechanics of writing, and higher-order concerns include things like arguments, ideas, use of evidence, organization, tone, and audience), and generic vs. specific comments.

The results showed that while audio feedback provided higher quality comments (more specific comments, and more higher-order comments), students preferred both to give and to receive written feedback. The reasons for preferring to give feedback in written form included that students felt they could organize their thoughts better in writing, and that they felt more comfortable writing their comments than speaking them. The reasons for preferring to receive written feedback included that with audio comments students felt they needed to write the feedback down to have it for future reference, which took more time than simply receiving it in writing in the first place.

The authors point out, though, that audio feedback, if students use it carefully, requires that they think about it and interpret it while writing it down, and perhaps even respond to it in their own mind. This is a good thing to be doing, they note, and it might not happen as much if students receive written feedback:

… students remarked that they had to spend more time thinking about audio feedback; they indicated that they had to interpret the reviewers’ comments and then decide how to respond. Ideally, all forms of feedback should prompt students to make these writing decisions, so we found it particularly interesting that students may not be reflecting critically on written feedback. (36)

Reynolds and Russell also note that “some evidence suggests that students comprehend and retain information better when they receive it from more than one sensory channel (Mayer & Moreno, 2003; Paivio, 1986), suggesting that audio comments may complement other modes of feedback” (36-37).

Even though this study wasn’t looking at face-to-face, in-person feedback of the kind I’m interested in, I really liked the authors’ point that receiving feedback through audio (which can occur face-to-face too) can stimulate students to think about it further before writing it down for future reference, which can be an important thing to do. A face-to-face discussion clearly requires this sort of process directly, as students have to engage in a give-and-take to figure out what the other person means and to formulate a response.

Finally, I find it intriguing that the idea of recording my feedback seems more intimidating to me than engaging in an oral discussion in person. Because the feedback is kept in a semi-permanent form, I would feel more pressure to be careful in my speech than I would in person. So I can see why some students felt less comfortable giving audio feedback and preferred written feedback, which they could review and revise as desired. Perhaps this is a feeling one could get over with practice.

Regardless, providing audio feedback along the lines of this study is, like written feedback, just another form of one-way communication, whereas I’m more interested in whether some form of dialogue (in-person or online) in addition to one-way feedback is better than one-way feedback alone.

 

Those are the things I’ve found so far, outside of the areas noted above that I haven’t looked at yet. If anyone reading this knows of more, please let me know in the comments!

 

Works cited

Flower, L., Hayes, J. R., Carey, L., Schriver, K. & Stratman, J. (1986) Detection, diagnosis, and the strategies of revision, College Composition and Communication, 37, 16-55.

Mayer, R. E., & Moreno, R. (2003) Nine ways to reduce cognitive load in multimedia learning, Educational Psychologist, 38(1), 43-52.

Paivio, A. (1986). Mental representations: A dual coding approach. Oxford, England: Oxford University Press.

Roossink, H. J. (1990) Terugkoppelen in het natuurwetenschappelijk onderwijs, een model voor de docent [Feeding back in science education, a feedback model for the teacher]. Doctoral dissertation, Enschede, University of Twente.

Steehouder, M., Jansen, C., Maat, K., van de Staak, J. & Woudstra, E. (1992) Leren communiceren [Learning to communicate] (Groningen, Wolters-Noordhoff).

 
