Category Archives: Instructor feedback

Closing the feedback loop

I attended the biennial meeting of the American Association of Philosophy Teachers July 30-Aug 2, 2014, and got some fantastic suggestions and ideas for future teaching, as I did the last time I attended this conference. The AAPT workshop/conference is easily one of my favourite conferences: it is so friendly, inviting, and supportive, and there are great people to talk to about teaching philosophy as well as about life in general. I haven’t laughed this much, for so many days in succession, in a long time. It’s too bad this meeting is only held every two years, as these are people I’d sure like to see more often!

I’m going to take a few blog posts to write down some of the (many) things that inspired me at this conference, that I’d like to try in my own teaching one way or another. There were many more things than I’m going to write about here—I have pages and pages of notes that I typed out during the conference. But in this and a couple of future posts, I’ll focus on just a few.

Broken feedback loop: when did you not respond well to feedback?

Rebecca Scott from Loyola University Chicago facilitated a session on closing the feedback loop, which started off in a really helpful way: she asked us to consider (among other things) times when we received feedback from someone (whether in the context of our academic lives or other aspects of our lives) and didn’t respond in the way that we now think would be most helpful.

Kawazu Loop Bridge, Flickr photo shared by Tanaka Juuyoh, licensed CC BY 2.0


I won’t give details on either situation, but one of them had to do with feedback I received at the end of a course that utterly shocked and floored me. More than one student said that I did something that was so very far from who I think I am that I just couldn’t believe it was true. All I could think of was: “How could someone think I was doing that? There’s no way I did that! They must be wrong.” I didn’t entertain (at first) the idea that the feedback could be right in some way. It just didn’t fit with who I thought I was.

Remembering this situation helped put me into the mindset of students receiving critical feedback (or at least moved me closer to that mindset, I hope): not believing it, getting angry, indignant, even lashing out. When that happens you are not even allowing yourself to consider that the feedback might be true; since it doesn’t fit with who you think you are, or with your own evaluation of the quality of your work, the truth must be that whoever said it is simply wrong. I’m reminded of Socrates who, at least in Plato’s texts, would show his interlocutors that they didn’t know what they thought they knew; for some, the reaction was to assume that Socrates must be wrong and to get angry with him.

Why might feedback not be incorporated into future work?

We came up with numerous reasons during the session, which I wrote down:

  • Getting emotional; taking things too personally; losing sight of the goal of feedback
  • Not caring about the work, just trying to get credit
  • Too motivated by grade, not enough by learning
  • Not believing that the feedback is true; e.g., coming into class with the mindset that one is an A student because one has gotten A’s so far, and so not believing the instructor who gives a lower mark
  • Distrust of the instructor, institution, due to larger social issues/context
  • Not thinking that you could do any better, or that you’re capable of improving even with feedback; this includes getting discouraged at how much there is to change and thinking you can’t
  • Not seeing work as formative process; thinking that when the assignment is done you are done and don’t need to revisit it, to learn from it
  • Professor and students seeing different goals for feedback; students might think that feedback is there to explain why they got the grade they did, while for the professor it might be there to show ways to improve
  • Not understanding the feedback
  • Not connecting feedback from past to future situations
  • Thinking that just reading the comments is enough to improve for later
  • Not having a clear idea of what good work looks like to aim for
  • Too much feedback; overwhelmed; don’t know what to do with it

The one that I find hardest to deal with (though many are quite challenging) is the first: the emotional reaction. It kept me from addressing my situation as well as I could have, and I can see how student emotional reactions could lead them to not want to even look at the feedback again or think about it at all.

A reflective assignment to close the feedback loop

Rebecca shared with us an assignment she gives students that asks them to reflect on their feedback: it requires them to read and consider it, and to decide what they want to change in the future based on it. The first item on that assignment asks them what their immediate reaction was on receiving the feedback. The idea is that if they have an outlet to write it down, to let you know their emotional reaction, this might help them move past it.

But I think the rest of the assignment might help with that too, because it goes on to ask students to

  • write down how many comments they got in each of several categories (to help them see which areas they need to work on, and to ensure that they read, or at least skim, the comments),
  • note what grade they expected, what grade they got, and what they think explains the difference,
  • say how much of the feedback they feel they understand,
  • identify two things they want to work on for the next assignment, and
  • raise any questions or comments about the feedback they received.

How might all of this help with the emotional reaction issue? Besides making students continue to think about the feedback, even if they get angry, instead of just ignoring it, it gives them a chance to give feedback on the feedback. Trying to figure out what could explain the difference between the grade they expected and the grade they got may lead them to think about the feedback and how it might suggest that the grade makes at least some sense. Or, if they disagree with the feedback, it gives them an outlet to say so, and the instructor can follow up with them later to discuss the issue.

How I’d like to adapt this assignment, and also address a couple of the other problems above

I like this idea of a reflection on the feedback that students submit to the instructor, but I also want them to have a kind of running record of the feedback they’ve received: the 2-3 things they want to work on for next time, what they did well and want to keep doing, etc. In addition, I want to make sure that they look back at this feedback for the next paper they write.

So, here’s an idea.

1. For the Arts One course I teach, in which students write a paper every 2 weeks (12 over the course of a year), I think I’ll ask them to include on each new essay:

  • a list of at least two things they tried to do better on this one, based on feedback from the last one
  • at least one thing they themselves noticed from their previous essay that either they think was good or that they would like to improve on, that no one else pointed out
    • this is so that they don’t just look back at the feedback but also back at their previous essay and see what they themselves think, in order to do some self-assessment

2. I would also like to institute a policy for my own feedback: I will point out one or two instances of a certain type of mistake and ask them to look for more instances (if I saw more in the essay, that is). Then, also on the next essay:

  • Point out at least one other place in the previous essay where one of the comments I made applies elsewhere too.
    • This is again so that they need to do some self-assessment of their work, and so I don’t need to go through and point out every single mistake. I think this could help with the issue of being overwhelmed by too much feedback.

3. Finally, I think it would be great if they could keep a digital learning log where they keep track of, for each essay: the comments they’ve gotten from peers, at least two things from my feedback that they want to work on, and the things they’re doing well and want to keep doing. That way they have a running record, and periodically I can ask them to reflect on whether there are any patterns or repeated comments, or whether they are improving because certain sorts of comments no longer come up.

These things could hopefully all help with the issue of not connecting feedback on previous work to later work. But I have to figure out whether this adds too much work for the students, or whether it is pedagogically valuable enough to be worth it.

Back to when I didn’t respond well

At first, I just shut down. So I can understand when students do that. I didn’t want to think about it and just wanted to move past it. But I did eventually do something: I emailed all my students and asked them to fill in another feedback form, anonymously, that would go just to me. I asked them to be as specific as possible, because I didn’t get quite enough detail the first time. I got a few more details on this second round, which helped me understand some of the concerns expressed and how students may have come to the conclusions they did (and even that I might have been unconsciously doing some of what they thought, though I’m still reluctant to believe that). But not entirely. I think there was some miscommunication somewhere that I just can’t rectify now.

All the more reason to give students more of a chance to give feedback during the course so problems can be solved earlier! (I just did it once, during the first term, and not at all during the second: lesson learned!)

 

Providing feedback to students for self-regulation

On Nov. 21, 2013, I did a workshop with graduate students in Philosophy at UBC on providing effective feedback on essays. I tried to ground as much of it as I could in work from the Scholarship of Teaching and Learning.

Here are the slides for the workshop (note, we did more than this…this is just all I have slides for):

 

Here is the works cited for the slides:

Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31(2), 219-233.

Chanock, K. (2000). Comments on essays: Do students understand what tutors write? Teaching in Higher Education, 5(1), 95-105.

Lizzio, A. and Wilson, K. (2008). Feedback on assessment: Students’ perceptions of quality and effectiveness. Assessment and Evaluation in Higher Education, 33(3), 263-275.

Lunsford, R.F. (1997). When less is more: Principles for responding in the disciplines. New Directions for Teaching and Learning, 69, 91-104.

Nicol, D.J. and Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.

Sadler, D.R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.

Walker, M. (2009). An investigation into written comments on assignments: Do students find them usable? Assessment and Evaluation in Higher Education, 34(1), 67-78.

Weaver, M.R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment and Evaluation in Higher Education, 31(3), 379-394.

Contract grading, Part 1

I first came across the phrase “contract grading” in some Twitter feed or another (I really should write down which feeds I get things from, so I can give proper credit!). I couldn’t for the life of me figure out what contract grading could be–surely it didn’t mean that one would sign a contract with a student, promising to give them a certain grade, right?

Wrong. That’s exactly what it means. Though, of course, the student has to hold up their end of the bargain too.

I’ve looked into the idea a bit more, and I’m still ambivalent about it, though intrigued.

The Bok Blog (Harvard) starts off a post on contract grading this way:

The central feature of contract grading is the contract: a clear and detailed set of guidelines that stipulate exactly what a student needs to do in order to earn each possible grade.

Surely that’s not all there is to it, since a syllabus already provides that (a point also made in a post by Billie Hara on contract grading over at The ProfHacker blog, from The Chronicle of Higher Education). The first versions of contract grading I have come across, though, do resemble sections of syllabi more than contracts.

Continue reading

Oral and written peer feedback

This post is part of my ongoing efforts to develop a research project focusing on the Arts One program–a team-taught, interdisciplinary program for first-year students in the Faculty of Arts at the University of British Columbia. As noted in some earlier posts, one of the things that stands out about Arts One is what we call “tutorials”: weekly meetings of four students plus the professor in which all read and comment on each other’s essays (students write approximately one essay every two weeks). Thus, peer feedback on essays is an integral part of this course, occurring as a regular part of the course meeting time every week.

In a recent survey of Arts One alumni (see my post summarizing the results), students cited tutorials as one of the things that helped them improve their writing the most, and as one of the most important aspects of the program. In that earlier post I speculated on what might be so valuable about these tutorials: the frequency of providing and getting peer feedback (giving feedback every week, getting feedback on your own paper every two weeks); the fact that professors are there in the meetings to give their comments too, and to comment on the students’ comments; the fact that students revisit their work in an intensive way after it’s written; and the possibility that students feel pressure to improve the work before submitting it because they know they’ll have to present and defend it to their peers. That last point is perhaps even more important when you consider that the students get to know each other quite well, meeting every week for at least one term (the course is two terms, or one year, long, but some of us switch students into different tutorial groups halfway through so they get the experience of reading other students’ papers too).

One thing I didn’t consider before, but am thinking about more now, is whether the fact that the feedback is done mostly, if not exclusively, orally and synchronously (and face-to-face) rather than through writing and asynchronously, makes a difference.

Continue reading

The value of peer review for effective feedback

No matter how expertly and conscientiously constructed, it is difficult to comprehend how feedback, regardless of its properties, could be expected to carry the burden of being the primary instrument for improvement. (Sadler 2010, p. 541)

… [A] deep knowledge of criteria and how to use them properly does not come about through feedback as the primary instructional strategy. Telling can inform and edify only when all the referents – including the meanings and implications of the terms and the structure of the communication – are understood by the students as message recipients. (Sadler 2010, p. 545)

In “Beyond feedback: developing student capability in complex appraisal” (Assessment & Evaluation in Higher Education, 35:5, 535-550), D. Royce Sadler points out how difficult it can be for instructor feedback to work the way we might want–to allow students to improve their future work. Like Nicol and Macfarlane-Dick 2006 (discussed in the previous post), Sadler here argues that effective feedback should help students become self-regulated learners:

Feedback should help the student understand more about the learning goal, more about their own achievement status in relation to that goal, and more about ways to bridge the gap between their current status and the desired status (Sadler 1989). Formative assessment and feedback should therefore empower students to become self-regulated learners (Carless 2006). (p. 536)

The issue that Sadler focuses on here is that students simply cannot use feedback for improvement and development of self-regulation unless they share some of the same knowledge as the person giving the feedback. Much of this is complex or tacit knowledge, not easily conveyed in things such as lists of criteria or marking rubrics. Instructors may try to make their marking criteria and their feedback as clear as they can,

Yet despite the teachers’ best efforts to make the disclosure full, objective and precise, many students do not understand it appropriately because, as argued below, they are not equipped to decode the statements properly. (p. 539)

Continue reading

Seven Principles of Effective Feedback Practice

I recently read an article by David J. Nicol and Debra Macfarlane-Dick that I found quite thought-provoking:

David J. Nicol & Debra Macfarlane-Dick (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. http://dx.doi.org/10.1080/03075070600572090

The basic belief guiding their argument is that formative assessment (which they define, referring to Sadler 1998, as “assessment that is specifically intended to generate feedback on performance to improve and accelerate learning” (199)) should be aimed at helping students become more self-regulated. What does it mean for students to be self-regulated? The authors state that it manifests in behaviours such as monitoring and regulating processes such as “the setting of, and orientation towards, learning goals; the strategies used to achieve goals; the management of resources; the effort exerted; reactions to external feedback; the products produced” (199). They also cite later in the article (p. 202) a definition from Pintrich and Zusho, 2002:

Self-regulated learning is an active constructive process whereby learners set goals for their learning and monitor, regulate, and control their cognition, motivation, and behaviour, guided and constrained by their goals and the contextual features of the environment. (Pintrich and Zusho (2002), 64)

Students who are self-regulated learners, Nicol and Macfarlane-Dick explain on p. 200, set goals for themselves (usually affected by external goals in the educational setting) against which they can measure their performance. They generate internal feedback about the degree to which they are reaching these goals, what they need to do to improve progress towards them, etc. They incorporate external feedback (e.g., from instructors and peers) into their own sense of how well they are doing in relation to their goals. The better they are at self-regulation, the better they are able to use their own and external feedback to progress towards their goals (here they cite Butler and Winne, 1995). The authors point to research providing evidence that “learners who are more self-regulated are more effective learners: they are more persistent, resourceful, confident and higher achievers (Pintrich, 1995; Zimmerman & Schunk, 2001)” (205).

Nicol and Macfarlane-Dick note, however, that the current literature contains little argument for how formative feedback can improve student self-regulation. That’s what they offer here.

Continue reading

Potential problems with comments on students’ essays

[The following is from my monthly reflections journal for the UBC Scholarship of Teaching and Learning Leadership program I’m attending this year (a year-long workshop focused in part on general improvements in pedagogy, but also in large part on learning about SoTL and developing a SoTL project). Warning–a long post!]

I recently read an article by Ursula Wingate of the Department of Education and Professional Studies at King’s College London, entitled “The impact of formative feedback on the development of academic writing,” Assessment & Evaluation in Higher Education Vol. 35, No. 5 (August 2010): 519-533. I am very interested in this article because it deals with a question I myself have wondered about in relation to my teaching in Arts One: why do some students improve so much in their writing over the course of the year, and why do some fail to do so? I was thinking this might be a future SoTL project for me, and I’m glad to see that there is literature on this…so I’ve got some work to do in the future, looking through that literature!

In this article Wingate reports on a study focused on two research questions:

(1) Can improvements in student writing be linked to the use of the formative feedback?
(2) What are the reasons for engaging or not engaging with the assessment feedback? (p. 523)

The sample was a set of essays by 62 students in a first-year course focused in part on writing (the course was part of a program in applied linguistics). Comments on essays were coded according to the assessment criterion they addressed, and the researchers compared the comments in each category from an early essay to those on a later essay from each student. They separated students into three main categories: those whose essays showed equally high achievement over the two assignments, those whose marks improved by at least 10% between the two essays, and those whose marks didn’t show much of a difference between the two essays (within ±5%). After this separation they ended up with 39 essays. They list a few reasons why they didn’t include other students in the study, but those aren’t crucial to what I want to comment on here (I think). Mostly they were looking to find out why some students improve a lot and some don’t, so the second two groups make sense; the first group (those who were consistently high achievers) was included to see whether interviews with them could yield useful information about how and why they are academically engaged.

Continue reading