Some colleagues and I are brainstorming various research we might undertake regarding peer assessment, and in our discussions the question in the title of this post came up. I am personally interested in the comments students can give to each other in peer assessment, more than in students giving marks/grades to each other. Students who give comments on each other’s work are not only affected by receiving peer comments, of course, but also through the process of giving them. How does practice in giving comments and evaluating others’ work affect students’ own work or the processes they use to produce it?
I’ve already looked at a couple of articles that address this question from a somewhat theoretical (rather than empirical) angle (see earlier posts here and here). As discussed in those posts, it makes sense to think that practice in evaluating the work of peers could help students get a better sense of what counts as “high quality,” and thus have that understanding available to use in self-monitoring so as to become more self-regulated.
In this post I summarize the findings of two empirical articles looking at the question of whether and how providing feedback to others affects the quality of students’ own work. I will continue this summary in another post, where I look at a few more articles.
(1) Li, L., Liu, X. and Steckelberg, A.L. (2010) Assessor or assessee: How student learning improves by giving and receiving peer feedback, British Journal of Educational Technology 41:3, 525-536. DOI: 10.1111/j.1467-8535.2009.00968.x
In this study, 43 undergraduate teacher-education students engaged in online peer assessment of each other’s WebQuest projects. Each student evaluated the projects of two other students. They used a rubric, and I believe they gave both comments and marks to each other. Students then revised their projects, having been asked to take the peer assessment into account and decide what to use from it. The post-peer-assessment projects were marked by the course instructor.
The research questions for the study:
(1) When the quality of students’ initial projects (prior to peer assessment) is controlled for, is there a relationship between the quality of students’ final projects (post-peer assessment) and the quality of peer feedback students provide to others?
(2) When the quality of student initial project is controlled for, is there a relationship between the quality of students’ final projects and the quality of peer feedback these students receive? (527)
One of the researchers and an independent rater evaluated the projects (pre- and post-revision) and the peer assessments. These two used an assessment rubric for the feedback that asked, about each section of the project, whether the assessor identified “critical issues” in that section, and whether that person also provided “constructive suggestions” for the issues identified (appendix, pp. 535-536). There was one point for each of these questions in each of the five sections of the project, meaning that peer feedback that identified critical issues and provided constructive suggestions in all sections was given 10 points.
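To make the scoring concrete, here is a minimal sketch (in Python) of how a review might be scored under a rubric like this. The field names and the example data are my own, not the article’s; the actual rubric is in the article’s appendix.

```python
# Minimal sketch of the feedback-scoring rubric described above.
# The dictionary keys are my own labels, not terms from the article.

def score_feedback(sections):
    """One point per section for identifying critical issues, one point per
    section for offering constructive suggestions; max 10 for the five
    sections of a WebQuest project."""
    points = 0
    for section in sections:
        points += 1 if section["identified_critical_issues"] else 0
        points += 1 if section["gave_constructive_suggestions"] else 0
    return points

# Invented example: substantial, helpful feedback on two sections and none
# on the other three (perhaps because they needed none) scores only 4/10.
review = [
    {"identified_critical_issues": True,  "gave_constructive_suggestions": True},
    {"identified_critical_issues": True,  "gave_constructive_suggestions": True},
    {"identified_critical_issues": False, "gave_constructive_suggestions": False},
    {"identified_critical_issues": False, "gave_constructive_suggestions": False},
    {"identified_critical_issues": False, "gave_constructive_suggestions": False},
]
print(score_feedback(review))  # 4
```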
Results
Using hierarchical multiple regression (which I’ll admit I don’t fully understand, though I sketch the general idea just below these results in case it helps), the researchers found that
When the quality of their project before peer assessment was controlled for, students who gave better feedback to their peers produced significantly better final projects than those who gave poor feedback. (532)
and that
the quality of reviews students received was not significantly related to the quality of the final project …. (532)
They conclude that these results suggest that giving feedback may be more important to improving the quality of students’ work than receiving it.
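For readers who, like me, aren’t entirely sure what hierarchical multiple regression involves: the general idea is to enter predictors in blocks and see how much each new block adds to the variance explained, over and above the blocks entered before it. Below is a rough, hypothetical sketch of that general technique, not the authors’ actual model or data; the column names and the data file are invented.

```python
# Rough sketch of hierarchical multiple regression in general, not the
# authors' actual model; column names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("projects.csv")  # one row per student (hypothetical data)

# Block 1: control for the quality of the initial (pre-peer-assessment) project.
m1 = smf.ols("final_quality ~ initial_quality", data=df).fit()

# Block 2: add the quality of the feedback the student gave to peers.
m2 = smf.ols("final_quality ~ initial_quality + feedback_given_quality",
             data=df).fit()

# The increase in R-squared from block 1 to block 2 is the variance in final
# project quality explained by feedback-giving quality over and above what
# initial project quality already explains.
print(m1.rsquared, m2.rsquared, m2.rsquared - m1.rsquared)
```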
A potential problem with the way feedback quality was judged
As noted above, the rubric used for assessing quality of feedback gave more points for finding critical issues and offering constructive advice in each section of a project. There was just one point given for any amount of feedback on critical issues in each section, and one point for any amount of constructive suggestions.
So, if a student reviewed projects that didn’t have critical issues in each section, did the feedback get rated as lower quality just because the original project was already good? Or, if a student gave a significant amount of helpful feedback in two sections but none in three other sections (because they were already good, perhaps), it seems their feedback would be counted as lower quality than a student who gave helpful feedback in three or four sections.
In other words, what’s needed is discussion of whether “quality” of feedback was considered against the background of how good the project was to begin with. If there are few critical issues to note, then feedback shouldn’t be considered of lower quality when it doesn’t note them. Perhaps this sort of thing was taken into account, but it wasn’t discussed in the article.
One might argue that this concern isn’t of great importance to this particular study, because it could be that students improve their own work when they get practice actually giving feedback, rather than just recognizing when it is and isn’t needed (so the more feedback that is needed and given, the more students improve). I’m not sure that’s an adequate response, but nothing like that was discussed in the article, either.
(2) Cho, Y.H. and Cho, K. (2011) Peer reviewers learn from giving comments, Instructional Science 39, 629-643. DOI: 10.1007/s11251-010-9146-1
Data for this study were collected from 72 undergraduates in an introductory physics course. Students engaged in online peer feedback (through the SWoRD system) on a draft of a lab report.
Peer evaluation: Each student was given three or four drafts to evaluate. They gave written comments and a numerical evaluation of the overall writing quality of each draft, from 1 (worst) to 7 (best).
Feedback on the feedback: Students then received their drafts and comments back, and gave feedback on the feedback, providing comments and a rating from 1 (least helpful) to 7 (most helpful). Student reviewers received this feedback and could also see what other reviewers had said about the drafts they reviewed.
Second round of peer evaluation: Students then commented on and rated the revised lab reports in the same way as they had done in the first round; the students evaluated the same reports they had read in the first round.
Research questions
- What types of comments do reviewers generate with regard to peer drafts?
- How do the types of given-comments and received-comments influence writing improvement?
- How do the reviewers’ initial writing skills and the qualities of the peer drafts influence the types of given-comments? (632)
Regarding the “types of comments” referred to in these research questions, the researchers coded the comments (I assume the comments in both rounds of peer review noted above) according to two dimensions: “evaluation (strength vs. weakness) and scope (surface features, micro-meaning, or macro-meaning)” (634). Surface features include things like grammar, punctuation, and style. The “meaning” dimension includes comments on “focus, development, validity or organization” (634); micro-meaning comments are about content within a paragraph, and macro-meaning comments are about content that spans more than one paragraph.
Results
I am only going to look at the results pertaining to question 2, above, as that is most relevant to the content of this post.
Again, the authors used a multiple regression analysis, here with “the quality of revised drafts as a dependent variable and the types of given-comments and received-comments as independent variables” (636). In other words, they were trying to determine whether the quality of students’ revised drafts changed in relation to the types of comments they gave to others as well as (in a separate analysis) the types of comments they received from others. They also controlled for the effects of initial writing skill (as measured by the quality of students’ first drafts, judged by averaging the peer ratings on those drafts) and the quality of the peer drafts students reviewed (again measured by averaging the peer ratings on those drafts).
Using the mean of peer-provided ratings as the measure of draft quality was further supported by having all of the drafts evaluated by PhD students who had previously taught similar courses. Interrater reliability between the PhD students and the student peers on the ratings of the drafts was acceptable: “r = 0.80, p < 0.01 in the first writing and r = 0.75, p < 0.01 in the revised writing” (634).
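For illustration, here is a small, hypothetical sketch of how an interrater-reliability check like this could be computed, as a Pearson correlation between the expert (PhD student) ratings and the mean peer ratings for each draft; the numbers are invented, not data from the study.

```python
# Hypothetical sketch of an interrater-reliability check: Pearson correlation
# between expert (PhD-student) ratings and mean peer ratings for each draft.
# The ratings below are invented, not data from the study.
from scipy.stats import pearsonr

expert_ratings    = [5.0, 3.5, 6.0, 4.0, 5.5, 2.5]
mean_peer_ratings = [4.8, 3.9, 5.7, 4.2, 5.1, 3.0]

r, p = pearsonr(expert_ratings, mean_peer_ratings)
print(f"r = {r:.2f}, p = {p:.3f}")
```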
Not unexpectedly, the quality of the revised drafts was most strongly correlated with initial writing quality; that is, students who started off with a higher-quality draft produced a higher-quality revised draft (again, where writing quality is measured by the mean of the ratings given by peer reviewers).
In answer to question 2, above, giving comments on others’ work was correlated with improvement in students’ own work, while receiving comments from others was much less strongly correlated with improvement. Specifically,
when student reviewers commented more on the strength of macro-meaning and the weakness of micro-meaning, the revision qualities of their own drafts tended to improve. (636-637)
Giving comments at the surface level did not seem to influence the quality of revised drafts.
The authors conclude:
Student reviewers appear to learn writing by giving comments at the meaning-level rather than at the surface-level. Furthermore, students seem to improve their writing more by giving comments than by receiving comments. (640)
My thoughts
One thing that is different about this study from other peer assessment studies I have read is that the students received feedback on their feedback (from the students whose work they had reviewed). This could, as the authors note, affect the quality of the feedback they provide, allowing them (depending on the quality of the feedback on the feedback, and if they pay attention to it) to adjust the sorts of comments they give, and how they give them. It could conceivably also have an indirect effect on their own writing, as they may adjust what they thought was needed for good quality writing.
I would like to see what the authors themselves point out they did not do here: a discussion of whether there is a link between the types of comments students give to others and the types of revisions they themselves make on later drafts of a work. That’s the sort of thing one of my colleagues suggested we might consider. So many things can affect the quality of a revised draft that tracing whether the sorts of things students say in feedback to others show up in their own later drafts would, I think, provide a clearer connection between giving feedback and improving writing.
The next two posts in this series, focused on the question in the title, can be found here and here.
Interesting. I think we should be incorporating peer feedback more into Arts One (though it’s more work for everyone). I’ve been trying to do this, but it’s also hard to get them to provide useful feedback… and to persuade those who get the feedback to revise thoroughly. More, next semester!
Sorry for the late reply! I have been thinking of Arts One as actually being a standout forum for peer feedback, since we are able to provide weekly, one-hour tutorials in which students in groups of four give each other feedback on their essays. That’s a luxury that can’t be had in most courses! Agreed that another challenge is closing the loop by trying to ensure the feedback is actually used for revision. That’s another interesting research area: how to encourage revision based on teacher and peer feedback. Lots of research in that area already, I think, and sometime in the future I plan to discuss a couple of papers about that on this blog.
Yes, it could indeed be a “standout forum for peer feedback.” But at the moment, as far as I can see, students provide feedback almost exclusively on work that is not going to be revised subsequently.
Right, that’s true. That’s the only thing about the peer feedback that could be a drawback in Arts One. However, of course, we do treat the process as one where students are to use the feedback from each essay to “feed forward” to the next one. That’s harder to do than to use it to revise the essay that had feedback itself, though. We’ve talked before in Arts One about how to incorporate revision of papers, but the structure really makes that difficult (unless students were to do it on the week they are not writing a new paper, possibly).