
Literature on written and oral peer feedback

For context on why I’m interested in this, see the previous post.

I’ve done some searches on the question of oral and written peer feedback, and have been surprised at the paucity of results. Or rather, the paucity of results outside the field of language teaching, or of teaching courses in a language that is an “additional language” for students. (I also have yet to look into the literature on online vs. face-to-face peer review.) Outside of those areas, I’ve found only a few articles.

1. Van den Berg, I., Admiraal, W., & Pilot, A. (2006). Designing student peer assessment in higher education: analysis of written and oral peer feedback. Teaching in Higher Education, 11:2, 135-147. http://dx.doi.org/10.1080/13562510500527685

In this article Van den Berg et al. report on a study of peer feedback in seven different courses in the discipline of history (131 students). The peer feedback designs differed across the courses in things such as: the kind of assignment that was the subject of peer feedback, whether peer feedback took place alongside teacher feedback or on its own, whether students who commented on others’ work also received comments from those same students on their own work, how many students were in each feedback group, and more. Most of the courses included both written and oral peer feedback, though one of the seven had written peer feedback only.

The authors coded both the written and oral feedback along two sets of criteria: feedback functions and feedback aspects. I quote from their paper to explain these two things, as they are fairly complicated:

Based on Flower et al. (1986) and Roossink (1990), we coded the feedback in relation to its product-oriented functions (referring directly to the product to be assessed): analysis, evaluation, explanation and revision. ‘Analysis’ includes comments aimed at understanding the text. ‘Evaluation’ refers to all explicit and implicit quality statements. Arguments supporting the evaluation refer to ‘Explanation’, and suggested measures for improvement to ‘Revision’. Next, we distinguished two process-oriented functions, ‘Orientation’ and ‘Method’. ‘Orientation’ includes communication which aims at structuring the discussion of the oral feedback. ‘Method’ means that students discuss the writing process. (141-142)

By the term ‘aspect’ we refer to the subject of feedback, distinguishing between content, structure, and style of the students’ writing (see Steehouder et al., 1992). ‘Content’ includes the relevance of information, the clarity of the problem, the argumentation, and the explanation of concepts. With ‘Structure’ we mean the inner consistency of a text, for example the relation between the main problem and the specified research questions, or between the argumentation and the conclusion. ‘Style’ refers to the ‘outer’ form of the text, which includes use of language, grammar, spelling and layout. (142)

They found that students tended to focus on different things in their oral and written feedback. Across all the courses, written feedback tended to be more product-oriented than process-oriented, with a focus on evaluating quality rather than explaining that evaluation or offering suggestions for revision. In terms of feedback aspect, written feedback focused more on content and style than on structure (143).


The value of peer review for effective feedback

No matter how expertly and conscientiously constructed, it is difficult to comprehend how feedback, regardless of its properties, could be expected to carry the burden of being the primary instrument for improvement. (Sadler 2010, p. 541)

… [A] deep knowledge of criteria and how to use them properly does not come about through feedback as the primary instructional strategy. Telling can inform and edify only when all the referents – including the meanings and implications of the terms and the structure of the communication – are understood by the students as message recipients. (Sadler 2010, p. 545)

In “Beyond feedback: developing student capability in complex appraisal” (Assessment & Evaluation in Higher Education, 35:5, 535-550), D. Royce Sadler points out how difficult it can be for instructor feedback to work the way we might want: to allow students to improve their future work. Like Nicol and Macfarlane-Dick 2006 (discussed in the previous post), Sadler argues here that effective feedback should help students become self-regulated learners:

Feedback should help the student understand more about the learning goal, more about their own achievement status in relation to that goal, and more about ways to bridge the gap between their current status and the desired status (Sadler 1989). Formative assessment and feedback should therefore empower students to become self-regulated learners (Carless 2006). (p. 536)

The issue that Sadler focuses on here is that students simply cannot use feedback to improve their work and develop self-regulation unless they share some of the same knowledge as the person giving the feedback. Much of this is complex or tacit knowledge, not easily conveyed in things such as lists of criteria or marking rubrics. Instructors may try to make their marking criteria and their feedback as clear as they can,

Yet despite the teachers’ best efforts to make the disclosure full, objective and precise, many students do not understand it appropriately because, as argued below, they are not equipped to decode the statements properly. (p. 539)


Seven Principles of Effective Feedback Practice

I recently read an article by David J. Nicol and Debra Macfarlane-Dick that I found quite thought-provoking:

David J. Nicol & Debra Macfarlane-Dick (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice, Studies in Higher Education, 31:2, 199-218.   http://dx.doi.org/10.1080/03075070600572090

The basic belief guiding their argument is that formative assessment should be aimed at helping students become more self-regulated; they define formative assessment, following Sadler 1998, as “assessment that is specifically intended to generate feedback on performance to improve and accelerate learning” (199). What does it mean for students to be self-regulated? The authors state that it manifests in monitoring and regulating processes such as “the setting of, and orientation towards, learning goals; the strategies used to achieve goals; the management of resources; the effort exerted; reactions to external feedback; the products produced” (199). Later in the article (p. 202) they also cite a definition from Pintrich and Zusho (2002):

Self-regulated learning is an active constructive process whereby learners set goals for their learning and monitor, regulate, and control their cognition, motivation, and behaviour, guided and constrained by their goals and the contextual features of the environment. (Pintrich and Zusho (2002), 64)

Students who are self-regulated learners, Nicol and Macfarlane-Dick explain on p. 200, set goals for themselves (usually shaped by external goals in the educational setting) against which they can measure their performance. They generate internal feedback about the degree to which they are reaching these goals, what they need to do to improve progress towards them, and so on. They incorporate external feedback (e.g., from instructors and peers) into their own sense of how well they are doing in relation to their goals. The better they are at self-regulation, the better they are able to use their own and external feedback to progress towards their goals (here the authors cite Butler and Winne, 1995). The authors also point to research providing evidence that “learners who are more self-regulated are more effective learners: they are more persistent, resourceful, confident and higher achievers (Pintrich, 1995; Zimmerman & Schunk, 2001)” (205).

Nicol and Macfarlane-Dick note, however, that the existing literature says little about how formative feedback can improve student self-regulation. That is the argument they develop here.


Potential problems with comments on students’ essays

[The following is from my monthly reflections journal for the UBC Scholarship of Teaching and Learning Leadership program I’m attending this year (a year-long workshop focused in part on general improvements in pedagogy, but also in large part on learning about SoTL and developing a SoTL project). Warning–a long post!]

I recently read an article by Ursula Wingate of the Department of Education and Professional Studies at King’s College London, entitled “The impact of formative feedback on the development of academic writing,” Assessment & Evaluation in Higher Education Vol. 35, No. 5 (August 2010): 519-533. I am very interested in this article because it deals with a question I have wondered about myself in relation to my teaching in Arts One: why do some students improve so much in their writing over the course of the year, while others fail to do so? I was thinking this might be a future SoTL project for me, and I’m glad to see that there is literature on this…so I’ve got some work ahead of me, looking through that literature!

In this article Wingate reports on a study focused on two research questions:

(1) Can improvements in student writing be linked to the use of the formative feedback?
(2) What are the reasons for engaging or not engaging with the assessment feedback? (p. 523)

The sample was a set of essays by 62 students in a first-year course focused in part on writing (the course was part of a program in applied linguistics). Comments on the essays were coded according to which assessment criterion they addressed, and the researchers compared the comments in each category on an early essay to those on a later essay from each student. They separated students into three main groups: those whose essays showed equally high achievement on the two assignments, those whose marks improved by at least 10% between the two essays, and those whose marks showed little difference between the two essays (plus or minus 5%). After this separation they ended up with 39 essays. They list a few reasons why they didn’t include other students in the study, but those aren’t crucial to what I want to comment on here (I think). Mostly they were looking to find out why some students improve a lot and some don’t, so the second two groups make sense; the first group (the consistently high achievers) was included to see whether interviews with them could yield useful information about how and why they are academically engaged.
