The value of peer review for effective feedback

No matter how expertly and conscientiously constructed, it is difficult to comprehend how feedback, regardless of its properties, could be expected to carry the burden of being the primary instrument for improvement. (Sadler 2010, p. 541)

… [A] deep knowledge of criteria and how to use them properly does not come about through feedback as the primary instructional strategy. Telling can inform and edify only when all the referents – including the meanings and implications of the terms and the structure of the communication – are understood by the students as message recipients. (Sadler 2010, p. 545)

In “Beyond feedback: developing student capability in complex appraisal” (Assessment & Evaluation in Higher Education, 35:5, 535-550), D. Royce Sadler points out how difficult it can be for instructor feedback to work the way we might want–to allow students to improve their future work. Like Nicol and Macfarlane-Dick 2006 (discussed in the previous post), Sadler here argues that effective feedback should help students become self-regulated learners:

Feedback should help the student understand more about the learning goal, more about their own achievement status in relation to that goal, and more about ways to bridge the gap between their current status and the desired status (Sadler 1989). Formative assessment and feedback should therefore empower students to become self-regulated learners (Carless 2006). (p. 536)

The issue Sadler focuses on here is that students simply cannot use feedback to improve their work and develop self-regulation unless they share some of the same knowledge as the person giving the feedback. Much of that knowledge is complex or tacit, and not easily conveyed through lists of criteria or marking rubrics. Instructors may try to make their marking criteria and their feedback as clear as they can,

Yet despite the teachers’ best efforts to make the disclosure full, objective and precise, many students do not understand it appropriately because, as argued below, they are not equipped to decode the statements properly. (p. 539)

Student challenges in interpreting feedback

Sadler lists four challenges students face in interpreting feedback (p. 540):

  1. Students may focus in part on what they intended their work to be like and in part on how it appears, thus making it difficult for them to grasp how the feedback connects to the work.
  2. Students may not understand concepts or criteria used in the feedback in the same way as the instructor does.
  3. Students may lack tacit knowledge needed to see how some of the feedback applies to one or more parts of their work.
  4. Even if the above problems are solved, students need to incorporate the feedback into their own working knowledge somehow, so that it can be used for later work. Just reading the feedback, for example, isn’t necessarily going to perform this function.

What is needed to address some of these issues (it is not entirely clear which, but at least #2, probably #3, and maybe #1) is for students to gain the sort of knowledge their teachers use in providing feedback, at least insofar as this is possible. That knowledge is gained in large part through experience in appraising works.

Teachers, depending on how long they have been teaching, often have extensive experience assessing work of the kind they assign to students. This gives them a sense of the various ways students have been able to complete the assignment, the sorts of “moves” they have been able to make, the kinds of original responses they have given, and so on. It allows teachers to appraise current assignments in light of what they think students are capable of, and to suggest improvements that are within the realm of what typical students can do. They can also judge when a particular submission is especially original or creative, or shows a novel approach, which may be part of assessing quality for a particular assignment. As Sadler puts it, teachers’ assessment experience gives them the ability to judge the quality of assignments because they have a sense of what “quality” means for that particular sort of assignment. It also allows them to compare different assignments that are similar in quality but different in execution (p. 541).

Peer review

How might students be able to gain the sort of knowledge teachers use in assessment and in their feedback?

The overall aim is to induct students into sufficient explicit and tacit knowledge of the kind that would enable them to recognise or judge quality when they see it and also explain their judgements. (p. 542)

The solution proposed here is to provide learners with appraisal experience that is similar to the teacher’s. Desirably, it should be as close in scope and kind as resources will allow. (p. 541)

The only clear candidate in my mind, and the one Sadler suggests, is peer feedback. As Sadler notes, for it to serve the above purposes it would have to be a significant part of the curriculum, not an occasional or one-off activity. Students need experience in assessing works in order to gain an understanding of three main concepts: task compliance (whether the work adheres to the assignment), quality (“the degree to which a work comes together as a whole to achieve its intended purpose” (p. 544)), and criteria (what individual criteria mean and how to apply them correctly).

To achieve these purposes, students need not only to read works by others (including works of varying quality) but also to verbalize (in written or oral form) their judgments about task compliance, quality, and criteria, and to discuss these with other students and instructors (p. 544). In this way, and through repeated experience, students can gradually build up tacit knowledge resembling that of their teacher:

This experiential activity gives rise not just to an appraisal or judgement but also to a body of unseen, unarticulated and often unheralded know-how of the intricate relationships between the appraisal elements and how they are applied. (p. 546)

No preset criteria

Finally, Sadler argues that the common practice of providing students with a set of criteria upon which to base their assessment of others’ work is misguided. It does not allow students to develop the skills needed to determine which criteria are relevant to a particular task–skills that are required in the world beyond schooling, where there are rarely preset rubrics to follow. Providing sets of criteria and rubrics can also “inhibit the formation of a full-bodied concept of quality because they tend to prioritise specific qualities (criteria) rather than quality as a global property” (p. 548). The argument here is that for complex tasks, the global concept of quality is not simply a sum of judgments about whether the work adheres to a set of criteria.

In practice, quality is often easier to recognise when it presents itself than it is to define in the abstract, or account for fully in the particular. Not uncommonly, something significant is lost when attempts are made to express quality in propositional or declarative form, that is, in words, including rubrics and expansions of fixed criteria. (p. 544)

My responses to these arguments

Intuitively, it makes a lot of sense to say that a wealth of experience goes into my evaluative judgements and feedback on students’ essays, and that this could make it difficult for students to grasp fully what I’m saying and why, even if I provide as much information as I can on criteria and give out a rubric (with comments tied to the rubric). Indeed, I say at the top of the rubric I give out that the criteria on it are the best approximation I can offer of the sorts of things I look for in an essay, and that they may not be completely exhaustive. I also point out that there are inherently subjective aspects to marking that cannot be eliminated. I don’t like the way I put that, because it could signal to students that the decisions might be arbitrary. That’s not what I mean; I mean that there are some judgements I make that I can’t articulate fully, such as judgements about overall quality (as Sadler points out), which are about how the work is put together as a whole. What I mean to say, and probably should clarify, is that some amount of assessment is done on the basis of knowledge gained by experience and immersion in the field, knowledge that is difficult, if not impossible, to articulate in the form of criteria and rubrics.

It also makes sense intuitively to say that peer review could provide the best available path for students to gain a kind of experience similar to the teacher’s. Of course, it will not be the same–through one class in philosophy they will not gain exposure to the same number and diversity of philosophy essays that I have. If peer review were more extensive throughout the university, students would get a lot of experience with different kinds of quality and criteria, some of which would transfer across courses and disciplines and some of which likely would not. But still, it may be the best option we have for giving students a chance to understand our feedback and the sense of what counts as “good” work in a particular field.

I agree with Sadler’s point that students need to discuss their assessments, and their explanations for them, not only with other students but also with instructors. As I noted in the previous post, I fear that misunderstandings could be perpetuated by being reinforced through several students agreeing on them. In order for students to understand what a particular professor means by a term or criterion, they need to compare their own understanding with that of the professor too. I suppose this need not be done through face-to-face discussion, though I think that is a particularly effective way to do it, since it can clarify potential confusions and answer questions quickly and efficiently. Of course, the sheer size of so many university classes makes that impossible in many cases.

Finally, I am not yet convinced by Sadler’s argument against the use of preset criteria. I can understand the idea of providing opportunities for acting as one will need to act after university, but I wonder whether, in the early years, providing preset criteria might be a useful means of getting students into the process of peer assessment and feedback in an effective way. I have seen firsthand the effects of students using vastly different criteria to evaluate each other’s work, and of students not understanding the criteria others are using. Perhaps one could ask students to work together to come up with agreed-upon criteria to use. Maybe that’s what Sadler means after all. He has another paper (Sadler 2009) that I plan to read, in which I think he sets out his argument for that more fully.

Works cited

Carless, D. 2006. Differing perceptions in the feedback process. Studies in Higher Education 31: 219–33.

Sadler, D.R. 1989. Formative assessment and the design of instructional systems. Instructional Science 18: 119–44.

Sadler, D.R. 2009. Indeterminacy in the use of preset criteria for assessment and grading in higher education. Assessment & Evaluation in Higher Education 34: 159–79. A pre-publication version of this essay can be found here: http://www98.griffith.edu.au/dspace/bitstream/handle/10072/30686/59464_1.pdf?sequence=1

Sadler, D.R. 2010. Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education 35: 535–50.

5 comments

  1. Hello there, this is Royce. The point I want to make is that preset criteria also predispose students to look for certain aspects or properties, and there is no guarantee (in advance) that these will turn out to be the ones that matter most in making a judgment about quality. I believe students need to learn to ‘see’, literally, what is there and evaluate it. For anyone interested, I am happy to send the manuscript of a book chapter coming out in 2013 called “Opening up feedback: Teaching learners to see”. M. L. J. Abercrombie’s little book called The Anatomy of Judgment is a lovely contribution to this debate. To get my email address, look me up at either Griffith University or the University of Queensland, or just do a Google search.

    1. Thanks, Royce, for your offer–I will email you and take you up on it! I do understand your point, and I agree that ultimately it is best for students to learn how to see what aspects of a work are important in judging quality. I just need to do some further thinking and reading about how that might best be done. I may end up agreeing with you completely. Soon I plan to do an overview in my blog of your article, “Transforming Holistic Assessment and Grading into a Vehicle for Complex Learning” (book chapter), and I expect that will help me grasp your argument even better. Perhaps the manuscript of the 2013 book chapter will help as well!
