Seven Principles of Effective Feedback Practice

I recently read an article by David J. Nicol and Debra Macfarlane-Dick that I found quite thought-provoking:

David J. Nicol & Debra Macfarlane-Dick (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice, Studies in Higher Education, 31:2, 199-218.

The basic belief guiding their argument is that formative assessment (which they define, referring to Sadler 1998, as “assessment that is specifically intended to generate feedback on performance to improve and accelerate learning” (199)) should be aimed at helping students become more self-regulated. What does it mean for students to be self-regulated? The authors state that it manifests in behaviours such as monitoring and regulating processes such as “the setting of, and orientation towards, learning goals; the strategies used to achieve goals; the management of resources; the effort exerted; reactions to external feedback; the products produced” (199). They also cite later in the article (p. 202) a definition from Pintrich and Zusho, 2002:

Self-regulated learning is an active constructive process whereby learners set goals for their learning and monitor, regulate, and control their cognition, motivation, and behaviour, guided and constrained by their goals and the contextual features of the environment. (Pintrich and Zusho (2002), 64)

Students who are self-regulated learners, Nicol and Macfarlane-Dick explain on p. 200, set goals for themselves (usually affected by external goals in the educational setting) against which they can measure their performance. They generate internal feedback about the degree to which they are reaching these goals, what they need to do to improve progress towards them, etc. They incorporate external feedback (e.g., from instructors and peers) into their own sense of how well they are doing in relation to their goals. The better they are at self-regulation, the better they are able to use their own and external feedback to progress towards their goals (here they cite Butler and Winne, 1995). The authors point to research providing evidence that “learners who are more self-regulated are more effective learners: they are more persistent, resourceful, confident and higher achievers (Pintrich, 1995; Zimmerman & Schunk, 2001)” (205).

Nicol and Macfarlane-Dick note, however, that the current literature contains little work arguing for how formative feedback can improve student self-regulation. That is the gap they aim to fill here.

Too often, the authors argue, formative feedback has been conceived along a “transmission” model, wherein the teacher transmits the feedback and the student uses it to make changes. They emphasize instead the students’ active role in this process, noting that feedback must be interpreted by students (and those interpretations may not match what instructors intended) and incorporated into students’ own sense of their goals and their own current strengths and weaknesses. They develop a theoretical model that emphasizes the processes that occur internally in students in the movement from external feedback to changes in work.

The authors then provide seven principles of good feedback practice, defining such practice as that which promotes self-regulation in students. They developed these through analysis of recent literature, which they discuss in the article.

Good feedback practice:

  1. helps clarify what good performance is (goals, criteria, expected standards);
  2. facilitates the development of self-assessment (reflection) in learning;
  3. delivers high quality information to students about their learning;
  4. encourages teacher and peer dialogue around learning;
  5. encourages positive motivational beliefs and self-esteem;
  6. provides opportunities to close the gap between current and desired performance;
  7. provides information to teachers that can be used to help shape teaching.

I’ll elaborate a bit on five of these, and reflect on my own practice in the process.

1. “Helps clarify what good performance is”: I try to do this through providing intricately detailed instructions for assignments and marking rubrics explaining the categories I use to mark essays and the things that would be in an “A,” “B,” etc. essay in each category. Sometimes I think these things are too detailed, in that they provide so much information it can be overwhelming. But so far I’ve erred on the side of providing as much clarity as I can on what I am looking for in a “good” essay. The authors point out, though, that assessment of complex tasks like writing is so complicated and multidimensional, and relies enough on tacit assumptions, beliefs and values of the assessor, that it’s difficult to convey the criteria to students (here they cite Yorke, 2003). Several options are suggested, including providing exemplars along with written instructions, and engaging students in peer and self-assessment according to the criteria to help them grasp in a deeper way what “good performance” is.

I haven’t done much in the way of offering exemplars, though that’s a good idea, especially if the students could work on assessing them according to the criteria together, in groups. Perhaps exemplars of different sorts of essays: A-level, B-level, C-level, etc. Still, to best ensure that they are grasping the criteria, it would be ideal if the professor could sit in when they engage in assessment of the exemplars (or peer assessment of each other’s work), to clear up any misconceptions. We have the luxury of doing this in four-student tutorials in the Arts One program, but it’s not something instructors can often do.

2. “Facilitates the development of self-assessment (reflection) in learning”: Self-assessment is just what it sounds like: students assessing their own work. The thought here is that this process goes on all the time anyway (students already engage in internal feedback about the quality of their work), but that it can be made more conscious, and the quality of that assessment improved, through training in self-assessment practice and structured exercises. This can take the form of narrative comments on their own work, or of students giving themselves a grade (which may or may not count in the final grade given to the work).

The ways I’ve used self-assessment have all been in Arts One. I’ve asked students: (1) to bring to peer-review tutorial sessions in Arts One a list of things they think they did well and/or could improve; (2) to bring to tutorial sessions a list of ways they have changed an essay based on peers’ and my comments on previous essays; and (3) to reflect at the end of the year on how their writing has improved, providing some early and late essays and discussing specific improvements from one to the other (this one was optional–I said it would help me decide whether their mark at the end of the year should be higher than their assignment average would otherwise be, based on improvement). I haven’t really done any training for students in how to assess their own work, but just asked them to try to do it. I guess I hoped with practice it would get better. Maybe, maybe not. Nicol and Macfarlane-Dick argue that self-assessment can improve with practice in peer assessment, which students do a lot of in Arts One but not necessarily in my Philosophy courses. Something to consider trying in the future in Philosophy.

3. “Delivers high quality information to students about their learning”: What sort of information is “high quality”? They explain:

Good quality external feedback is information that helps students troubleshoot their own performance and self-correct: that is, it helps students take action to reduce the discrepancy between their intentions and the resulting effects. (208)

How to provide this sort of feedback? The authors cite work by Lunsford (1997) arguing that feedback that is effective in this way shows how the reader perceived the argument more than it provides judgments–thus allowing the student to see the difference between their intentions and how the work is read by others. Any judgmental comments should focus on corrective advice rather than just strengths and weaknesses. Lunsford also argues that no more than three complex comments should be made per essay, because otherwise students will have a hard time acting on all of them.

On that last point I fail miserably. My comments are so copious that students sometimes say I’ve written more than they have. They may be right, at times. And it’s not just long explanations of a few issues; it’s long explanations of every good or problematic thing I find. I have erred on the side of completeness, thinking that otherwise students won’t understand the grade I’ve given them or know how much they have to improve. But I do see that this must be balanced against not overwhelming them.

4. “Encourages teacher and peer dialogue around learning”: Here, Nicol and Macfarlane-Dick argue that feedback is best conceived as a dialogue rather than as a one-way transmission. Feedback should be framed as a kind of dialogue, and there should be opportunity, if possible, for students to discuss it with the instructor (to clarify meaning, to defend work or recognize the need for change, etc.). We are very fortunate in Arts One that we can do this: every week we have tutorials of four students plus the professor in which the students peer review each other’s essays and the professor offers comments too. In this oral discussion there can be a good amount of give-and-take between students and professor on the former’s work (however, I need to look further into how to structure that dialogue so that the professor’s perceived authority does not silence or hamper the students’ own contributions to the dialogue).

Large class sizes make this difficult for many, so the authors suggest peer discussion of feedback, where students talk with other students about the feedback they have received. I am having trouble seeing how this would serve the same sort of purpose as student-instructor dialogue. The peers might need to have read the essay, if it’s essays that are being discussed, in order to understand the feedback. Perhaps the students could read out parts of their essays to the group along with the feedback they received, and the group could help each student better understand that feedback. Still, this would be more like guesswork on the peers’ part, it seems. Unless, that is, one has followed the other principles of good feedback…?

6. “Provides opportunities to close the gap between current and desired performance”: I find this one particularly important. How often do instructors feel they are spending hours on giving feedback without knowing whether or how it will be used to improve performance? How do we encourage that feedback to be used well? The authors suggest providing clear opportunities to use the feedback, such as requiring (or allowing) re-submission of the same work, and/or providing feedback at early stages in the work, before it is completed (perhaps at specific sub-task levels). That way, students have a clear reason to go back to the feedback and use it in their further work. Another option is to give students “action points” (214) in relation to the feedback or, better (in my view), to ask students to create their own–ask them, e.g., to think about what they will do next on the basis of the feedback they have received.

I have often allowed re-submission of work in my Philosophy courses, and at times have assigned essays that build upon one another, so that a short essay gets developed into a longer one, one or two times during a class term. What I need to work on further is how to ensure that the response to feedback isn’t just a one-sentence change that doesn’t really affect the rest of the essay, or an insertion made just because the professor says to do it rather than because the student really grasps why it would be helpful.

Finally, I have experimented a bit with the idea of “action points,” by providing specific things to work on at the end of students’ essays, and by asking them to reflect on past feedback and discuss what they have changed in a later essay based on it. But getting them to develop their own action points, in a more regular and structured way, is a good idea to consider.


There are interesting things to think about in relation to the other two principles, but as this post is already quite long, I’ll stop here!


Works cited

Butler, D. L. & Winne, P. H. (1995) Feedback and self-regulated learning: a theoretical synthesis, Review of Educational Research, 65(3), 245–281.

Lunsford, R. (1997) When less is more: principles for responding in the disciplines, in: M. Sorcinelli & P. Elbow (Eds) Writing to learn: strategies for assigning and responding to writing across the disciplines (San Francisco, CA, Jossey-Bass).

Pintrich, P. R. (1995) Understanding self-regulated learning (San Francisco, CA, Jossey-Bass).

Pintrich, P. R. & Zusho, A. (2002) Student motivation and self-regulated learning in the college classroom, in: J. C. Smart & W.G. Tierney (Eds) Higher Education: handbook of theory and research (vol. XVII) (New York, Agathon Press).

Sadler, D. R. (1998) Formative assessment: revisiting the territory, Assessment in Education, 5(1), 77–84.

Yorke, M. (2003) Formative assessment in higher education: moves towards theory and the enhancement of pedagogic practice, Higher Education, 45(4), 477–501.

Zimmerman, B. J. & Schunk, D. H. (2001) Self-regulated learning and academic achievement: theoretical perspectives (Mahwah, NJ, Lawrence Erlbaum Associates).