How does giving comments in peer assessment impact students? (Part 2)

This is the second post looking at published papers that use empirical data to answer the question in the title. The first can be found here. As noted in that post, I’m using “peer assessment” in a broad way, referring not just to activities where students give grades or marks to each other, but also (and especially) to the qualitative feedback they provide to each other (as that is the sort of peer assessment I usually use in my courses).

Here I’ll look at just one article on how giving peer feedback affects students, as this post ended up being long. I’ll look at one last article in the next post (as I’ve only found four articles on this topic so far).

Lu, J., & Law, N. (2012). Online peer assessment: Effects of cognitive and affective feedback. Instructional Science, 40, 257–275. DOI: 10.1007/s11251-011-9177-2. This article has been made open access, and can be viewed or downloaded at: http://link.springer.com/article/10.1007%2Fs11251-011-9177-2

In this study, 181 students aged 13–14 in a Liberal Studies course in Hong Kong participated in online peer review of various parts of their final projects for the course. They were asked both to engage in peer grading and to give peer feedback to each other, in groups of four or five. The final project required various subtasks, and peer grading/feedback was not compulsory: students could choose which subtasks to grade and comment on for their peers. The grades were given using rubrics created by the teacher for each subtask, and both grades and feedback were given through an online program specially developed for the course.

Research Questions

  1. Are peer grading activities related to the quality of the final project for both assessors and assessees?
  2. Are different types of peer …  feedback related to the quality of the final projects for both assessors and assessees? (261)

The authors were looking specifically at whether providing grades and/or feedback to peers affects the quality of final projects, and whether one of these seems to be more important than the other.

They also considered different types of peer feedback, using a distinction from Nelson and Schunn (2009): cognitive vs. affective feedback:

Cognitive feedback targets the content of the work and involves summarizing, specifying and explaining aspects of the work under review. Affective feedback targets the quality of works and uses affective language to bestow praise (‘‘well written’’) and criticism (‘‘badly written’’), or uses non-verbal expressions, such as facial expressions, gestures and emotional tones. (259)

I find this distinction difficult to understand, especially since the authors go on to clarify that “When assessors give cognitive feedback they summarize arguments, identify problems, offer solutions, and explicate comments” (259). It seems to me that when one is identifying problems one is evaluating the quality of the work, which I thought fell under “affective” comments.

Further, the authors later give examples of various types of cognitive and affective comments, and some examples of cognitive comments include: “The topic is too broad,” “Your writing is to [sic] colloquial”; examples of affective comments are “Badly written,” and “Very good” (265). So is it perhaps that “affective” feedback comments don’t refer to the content at all, but are quite general, whereas “cognitive” comments can be evaluative but also refer to some specific aspect of the content?

I’ll admit straight away that I haven’t read the paper they cite on this distinction between cognitive and affective feedback (listed below, under “Works Cited”), so that’s likely contributing to my confusion. Still, the distinction should be clear in this paper as well.

Data and Analysis

After it was collected through the online site, the peer feedback was coded as follows:

  • Affective comments were divided into positive and negative
  • Cognitive comments were divided into “(1) identify problem; (2) suggestion; (3) explanation; and (4) comment on language” (264).

The students’ previous grades in two courses, Computer Literacy and Humanities, were used as controls, and the authors tried to determine whether the number of peer grades and/or peer feedback comments given and received would affect final project scores, as well as whether the types of peer feedback given and received would do so (266). A sketch of how such counts might be assembled is below.
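To make this setup concrete, here is a minimal sketch of how coded comments might be tallied into per-student counts that could then serve as predictor variables. The coding categories follow the paper’s scheme, but the table layout, data, and column names are my own invention for illustration, not the authors’ actual data or code.

```python
import pandas as pd

# Hypothetical log of coded peer comments; each row is one comment.
# The code labels follow the paper's categories, but the students
# and data here are made up for illustration.
comments = pd.DataFrame({
    "assessor": ["s01", "s01", "s02", "s03", "s03"],
    "code": ["suggestion", "identify_problem", "affective_positive",
             "suggestion", "affective_negative"],
})

# Count how many comments of each type each student gave; counts like
# these (given and, analogously, received) become predictor variables.
counts = pd.crosstab(comments["assessor"], comments["code"])
print(counts)
```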

They used hierarchical multiple regression to analyze the data:

By entering blocks of independent variables in sequence, we identified additional variance explained by newly introduced variables in each step. In the first step, two control variables, examination scores in the Humanities and Computer Literacy courses, were entered into the regression equations. Peer grading measures to assessees and by assessors were entered in the second step. Peer feedback measures to assessees and by assessors were entered in the third step. (266)

I still don’t quite understand this fully, but I’m sure some people out there do. It has something to do with determining which variables can account for the variance in final project scores: in this case, is it the grades in previous courses, the peer grading students gave and received, and/or the feedback given and received?
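For readers who, like me, find this opaque: the idea is to fit a series of nested regression models, adding one block of predictors at a time, and watch how much the explained variance (R²) increases at each step. Below is a rough sketch of that procedure using Python’s statsmodels on randomly generated data; every variable name is a placeholder of mine, not from the paper, and the numbers it prints are meaningless since the data are random.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-student data; one row per student. All names and
# values are illustrative placeholders, not the authors' data.
rng = np.random.default_rng(0)
n = 181
df = pd.DataFrame({
    "humanities": rng.normal(70, 10, n),         # control: prior grade
    "computer_literacy": rng.normal(70, 10, n),  # control: prior grade
    "grades_given": rng.poisson(3, n),           # peer grading as assessor
    "grades_received": rng.poisson(3, n),        # peer grading as assessee
    "feedback_given": rng.poisson(4, n),         # peer feedback as assessor
    "feedback_received": rng.poisson(4, n),      # peer feedback as assessee
    "project_score": rng.normal(70, 10, n),      # outcome: final project
})

# Blocks entered in sequence, mirroring the three steps in the paper.
blocks = [
    ["humanities", "computer_literacy"],      # step 1: controls
    ["grades_given", "grades_received"],      # step 2: peer grading
    ["feedback_given", "feedback_received"],  # step 3: peer feedback
]

predictors, prev_r2 = [], 0.0
for i, block in enumerate(blocks, start=1):
    predictors += block
    X = sm.add_constant(df[predictors])
    fit = sm.OLS(df["project_score"], X).fit()
    # Delta R^2: variance newly explained by this block,
    # over and above the blocks entered earlier.
    print(f"Step {i}: R^2 = {fit.rsquared:.3f}, "
          f"delta R^2 = {fit.rsquared - prev_r2:.3f}")
    prev_r2 = fit.rsquared
```

The change in R² at step 3 is what tells you how much the peer feedback measures explain over and above prior grades and peer grading, which is why the authors can say feedback, not grading, accounted for the variance.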

Results

The authors found that, controlling for grades in the two previous courses, what had the most impact on the variance in scores on the final project was peer feedback, not peer grading. Specifically, the following three things were found to be most influential: “Giving suggestions to peers (t = 2.17, p < .05), giving feedback on identifying problems to peers (t=2.16, p < .05), and positive affective feedback from peers (t=2.25, p < .05)” (267).

They conclude that

  • peer grading alone may be less effective for learning than peer grading plus peer feedback (269)
  • assessors in peer grading and feedback may benefit more from the process than assessees, “particularly with regard to comments that identify problems and make suggestions” (270)
  • “The more problems assessors identified, and the more suggestions they made, the better they performed in their own LS projects” (270), so it’s reasonable to think that giving comments to others in these ways helps students to think more carefully about their own projects.

My thoughts

As to the last bullet point, above: it’s also possible, of course, that the students who gave more comments about problems and suggestions were already thinking more carefully about the assessment criteria and what counts as quality work, and so it makes sense that their final projects would be better. I think controlling for earlier grades is supposed to help account for this in some way, but I don’t think it does a good job of that, since there are numerous reasons why one might do better or worse in a previous course that have little to do with the tasks needed for this particular course.

I’m not convinced that counting the number of comments given of various types is particularly helpful here, either. Is it necessarily the case that making more comments means one is thinking more carefully about the rubric and assessment criteria? Noting which comments were on the mark in terms of the criteria, and whether they were made where they were warranted, would be better (though more work, of course!).

I have also determined that before I embark on any serious research project of my own I’m going to need to find a collaborator who understands the statistics needed (unlike me, who was trained as a philosopher).

Your thoughts

Have I missed something in my criticisms here? Let me know in comments!


The next post in this series can be found here.


Works Cited

Nelson, M. M., & Schunn, C. D. (2009). The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science, 37(4), 375–401.