Category Archives: Scholarship of Teaching and Learning

Peer assessment: face to face vs. asynchronous, online (Pt. 1)

I have been reading a good deal of research on peer assessment lately, especially studies that look at the differences and benefits/drawbacks of doing peer assessment face to face, orally, and through writing–both asynchronous writing in online environments (e.g., comments on a discussion board) and synchronous writing online (e.g., in text-based “chats”). I summarized a few studies on oral vs. written peer assessment in this blog post, and then set out a classification structure for different methods of peer assessment in this one.

Here, I summarize a few studies I’ve read that look at written, online, asynchronous peer feedback. In another post I’ll summarize some studies that compare oral, face-to-face feedback with written, online, synchronous feedback (text-based chats). I hope some conclusions about the differences between, and benefits of, each kind can be drawn after summarizing the results.

1. Tuzi, F. (2004) The impact of e-feedback on the revisions of L2 writers in an academic writing course, Computers and Composition 21, 217–235. doi:10.1016/j.compcom.2004.02.003

This study is a little outside of my research interest, as it doesn’t compare oral feedback to written (in any form). Rather, the research focus was to look at how students revised essays after receiving e-feedback from peers and their teacher. Oral feedback was only marginally part of the study, as noted below.

20 L2 students (students for whom English was an additional language) in a first-year writing course at a four-year university participated in this study. Paper drafts were uploaded onto a website where other students could read them and comment on them. The e-feedback could be read on the site, but was also sent via email to students (and the instructor). Students wrote four papers as part of the study, and could revise each paper up to five times. 97 first drafts and 177 revisions were analyzed in the study. The author compared comments received digitally to later revised drafts, to see what had been incorporated. He also interviewed the authors of the papers to ask what sparked them to make the revisions they did.

Tuzi combined the results from analyzing the essay drafts and e-feedback (to see what of the feedback had been incorporated into revisions) with the results of the interviews with students, to identify the stimuli for changes in the drafts. From these data he concludes that 42.1% of the revisions were instigated by the students themselves, 15.6% came from e-feedback, 14.8% from the writing centre, 9.5% from oral feedback (from peers, I believe), and for 17.9% of the revisions the source was “unknown.” He also did a few finer-grained analyses, showing how e-feedback fared in relation to these other sources at different levels of writing (such as the punctuation, word, sentence and paragraph levels), in terms of the purpose of the revision (e.g., new information, grammar), and more. In many analyses, the source of most revisions was the students themselves, but e-feedback ranked second in some (such as revisions at the sentence, clause and paragraph levels, and adding new information). Oral feedback was always low on the list.
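As a quick arithmetic sanity check on Tuzi’s breakdown (taking the writing-centre figure as 14.8%, the only reading on which the five sources account for essentially all revisions), the percentages should sum to roughly 100%, with a small shortfall from rounding:

```python
# Tuzi (2004): reported sources of essay revisions, as percentages of all revisions
sources = {
    "author (self-initiated)": 42.1,
    "e-feedback": 15.6,
    "writing centre": 14.8,
    "oral feedback": 9.5,
    "unknown": 17.9,
}

# The five categories should account for (almost) all revisions.
total = sum(sources.values())
print(f"total = {total:.1f}%")  # total = 99.9%
```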

In the “discussion” section, Tuzi states:

Although e-feedback is a relatively new form of feedback, it was the cause of a large number of essay changes. In fact, e-feedback resulted in more revisions than feedback from the writing center or oral feedback. E-feedback may be a viable avenue for receiving comments for L2 writers. Another interesting observation is that although the L2 writers stated that they preferred oral feedback, they made more e-feedback-based changes than oral-based changes.

True, but note that in this study oral feedback was not emphasized. It was something students could seek out if they wanted, but only e-feedback was a focus of the course. So little can be concluded here about oral vs. e-feedback. To be fair, that wasn’t really the point of the study. The point was simply to see how students use e-feedback, whether it is incorporated into revisions, and what kinds of revisions e-feedback tends to be used for. And Tuzi is clear towards the end: “Although [e-feedback] is a useful tool, I do not believe it is a replacement for oral feedback or classroom interaction …”. Different means of feedback should be available; this study just shows, he says, that e-feedback can be useful as one of them.

Continue reading

Problems with grading rubrics for complex assignments

In an earlier post I discussed a paper by D. Royce Sadler on how peer marking could be a means for students to learn to become better assessors of their own and others’ work. This could not only help them become more self-regulated learners, but also prepare them for roles outside the university in which they will need to evaluate the work of others. In that essay Sadler argues against giving students preset marking criteria to use to evaluate their own work or that of other students (when that work is complex, such as an essay), because:

  1. “Quality” is more of a global concept that can’t easily be captured by a set of criteria, as it often includes things that can’t be easily articulated.
  2. As Sadler pointed out in a comment to the post noted above, having a set of criteria in advance predisposes students to look for only those things, and yet in any particular complex work there may be other things that are relevant for judging quality.
  3. Giving students criteria in advance doesn’t prepare them for life beyond their university courses, where they won’t often have such criteria.

I was skeptical about asking students to evaluate each other’s work without any criteria to go on, so I decided to read another of his articles, in which this point is argued for more extensively.

Here I’ll give a summary of Sadler’s book chapter entitled “Transforming Holistic Assessment and Grading into a Vehicle for Complex Learning” (in Assessment, Learning and Judgement in Higher Education, ed. G. Joughin. Dordrecht: Springer, 2009. doi:10.1007/978-1-4020-8905-3_4).

[Update April 22, 2013] Since the above is behind a paywall, I am attaching here a short article by Sadler that discusses similar points, and that I’ve gotten permission to post (from both Sadler and the publisher): Are we short-changing our students? The use of preset criteria in assessment. TLA Interchange 3 (Spring 2009): 1-8. This was a publication from what is now the Institute for Academic Development at the University of Edinburgh, but these newsletters are no longer online.

Note: this is a long post! That’s because it’s a complicated article, and I want to ensure that I’ve got all the arguments down before commenting.

Continue reading

Using SoTL to create guidelines for teaching

On the “Resources” page of the International Society for the Scholarship of Teaching and Learning (ISSOTL), I came across a link to “Guidelines on Learning that Inform Teaching,” a project by Adrian Lee, Emeritus Professor at the University of New South Wales in Sydney, Australia. Through his work as Pro Vice Chancellor (Education & Quality Improvement) at the University of NSW, Lee decided to work with others to create a set of guidelines, based on research, on what “works” in teaching.

The main ideas behind the project are stated as follows:

  • As academics, our task is to help students learn.
  • There is a vast research literature on how students learn and examples of good teaching based on this research.
  • As we claim to be research-intensive institutions, should not our teaching be based on this research?

But, as Lee notes, most faculty are very busy, and don’t have time to look into the relevant research on teaching, on top of their own research and teaching activities (I am lucky to be able to spend my sabbatical time doing lots of research into teaching, given that I’m in a teaching position, but most people don’t have that luxury). So he decided to distill the research into a set of guidelines, with links to relevant research papers.

Lee’s paper, “From Teaching to Learning,” linked at the bottom of the front page of the website, provides information (towards the end) on how the guidelines were developed. It also provides a great summary of some other excellent work done at UNSW on professional development for faculty re: teaching and learning–some things to learn from there, for institutions!

There are 16 guidelines, each with its own dedicated page. On those pages are quotes from various research papers that are meant to illustrate the guideline. I found these somewhat helpful, but would have liked to see more narrative description of each guideline and the kind of research that supports it. Perhaps one or two disciplinary examples would have illustrated each guideline a bit better, though those are all collected in a different section (see below). There are also links to research papers related to each particular guideline, for those interested in seeing some of its sources. Of course, this effort is hampered by the fact that so many SoTL articles are not open access, so they can’t be easily linked to. Or rather, he could have included links to them, with notes that they are subscription only, so that those whose institutions do have subscriptions could read them. Still, interested faculty could look up some of the non-linked papers themselves if their institutions have subscriptions.

There is also a section of “Discipline-specific exemplars,” which has links to ways people are putting the guidelines into action (whether they are consciously aware of these specific guidelines or not!) in different disciplines. The “humanities” section is fairly sparse at this point, but Lee asks for exemplars and provides his email address.

Finally, each guideline has a “toolkit” designed to encourage individual teachers to reflect on it (linked at the bottom of each guideline page). It asks instructors to think of examples, to reflect on the guideline, to consider obstacles to implementing it, and more. I find the toolkit a bit sparse, and think more guidance could be helpful: more specific questions to direct reflection, for example, and perhaps a prompt asking instructors to come up with a way they could use the guideline in their own teaching in the future.

Lee’s larger project is to encourage institutions to set up their own guidelines, for a similar purpose–to help faculty think about the current research in a distilled fashion, and see examples of work being done at their own institution and elsewhere that conforms to the guidelines. On the homepage he links to three universities that have used the 16 guidelines as a model for their own set of guidelines. For example, MIT has a page listing exemplars of the guidelines from MIT itself.


I think this is a very important idea: for those who don’t have the time to go to workshops on teaching where they might get similar information (or those for whom such workshops are not available), it’s helpful to have research-based information on quality teaching in higher ed available. Of course, a project like this requires upkeep, and some of the links on the site are currently broken (including the link to one of the three universities with their own similar guidelines, the University of Bedfordshire in the UK). In addition, the latest papers linked on the site are from the mid-2000s, and it would be nice to have more recent papers available as well. Still, what Lee is really hoping is that universities will create their own guidelines and sites, and keep these up to date; and that’s an excellent idea.

Do you know of any colleges or universities that have similar sorts of guidelines, backed by research? If so, please give links in the comments and I’ll send them to Adrian Lee! Or, if you have some discipline-specific exemplars for one or more of the guidelines, please post those below.
P.S. I’m off and on out of town for the next month, as it is summer here in Australia (where I am for sabbatical), so I may be a little slow in responding to comments!


How does giving comments in peer assessment affect students? (Part 3)

This is the third post in a series summarizing empirical studies that attempt to answer the question posed in the title. The first two can be found here and here. This will be the last post in the series, I think, unless I find some other pertinent studies.

Lundstrom, K. and Baker, W. (2009) To give is better than to receive: The benefits of peer review to the reviewer’s own writing, Journal of Second Language Writing 18, 30-43. doi:10.1016/j.jslw.2008.06.002

This article is focused on students in “L2” classes (second language, or additional language), and asks whether students who review peers’ papers do better in their own (additional language) writing than students who only receive peer reviews and attempt to incorporate the feedback rather than giving comments on peers’ papers.

Participants were 91 students enrolled in nine sections of additional language writing classes at the English Language Center at Brigham Young University. The courses were at two levels out of a possible five: half the students were in level 2, “high beginning,” and half were in level 4, “high intermediate” (33). The students were then divided into a control and experimental group:

The first group was composed of two high beginning and three high intermediate classes, (totaling forty-six students). This group was the control group (hereafter ‘‘receivers’’) and received peer feedback but did not review peers’ papers (defined as compositions written by students at their same proficiency level). The second group was composed of two high beginning classes and two high intermediate classes, (totaling forty-five students), and made up the experimental group, who reviewed peer papers but did not receive peer feedback (hereafter ‘‘givers’’). (33; emphasis mine)

Research questions and procedure to address them

Research questions:

1. Do students who review peer papers improve their writing ability more than those who revise peer papers (for both beginning and intermediate students)?

2. If students who review peer papers do improve their writing ability more than those who revise them, on which writing aspects (both global and local) do they improve? (32)

Continue reading

How does giving comments in peer assessment impact students? (Part 2)

This is the second post looking at published papers that use empirical data to answer the question in the title. The first can be found here. As noted in that post, I’m using “peer assessment” in a broad way, referring not just to activities in which students give grades or marks to each other, but also to the qualitative feedback they provide to each other (as that is the sort of peer assessment I usually use in my courses).

Here I’ll look at just one article on how giving peer feedback affects students, as this post ended up being long. I’ll look at one last article in the next post (as I’ve only found four articles on this topic so far).

Lu, J. and Law, N. (2012) Online peer assessment: effects of cognitive and affective feedback, Instructional Science 40, 257-275. doi:10.1007/s11251-011-9177-2. This article has been made open access, and can be viewed or downloaded at:

In this study, 181 students aged 13-14 in a Liberal Studies course in Hong Kong participated in online peer review of various parts of their final projects for the course. They were asked both to engage in peer grading and to give peer feedback to each other in groups of four or five. The final project required various subtasks, and peer grading/feedback was not compulsory: students could choose which subtasks to give grades and feedback on. The grades were given using rubrics created by the teacher for each subtask, and both grades and feedback were given through an online program specially developed for the course.

Research Questions

  1. Are peer grading activities related to the quality of the final project for both assessors and assessees?
  2. Are different types of peer …  feedback related to the quality of the final projects for both assessors and assessees? (261)

Continue reading

Learnist boards on SoTL and Open Access

During the past week or so I’ve been working on finishing a couple of things before their deadlines, and during breaks I’ve been having fun with Learnist, which is sort of like Pinterest but focused more on learning about new things. The idea is that someone who has some knowledge about a topic designs a “learnboard” about it, collects things from the web (including PDFs, Google docs and books, Prezi presentations, Kickstarter campaigns, and more), plus their own materials if they like, and puts them together in an order that helps others learn the topic. That person also adds commentary to help explain each artifact posted. There are places to comment on each artifact or on the whole board, and others can suggest new things to be put on the board.

Right now Learnist is in beta, and I am a bit frustrated by the fact that there is nothing on the front page to explain it (though there are learnboards on Learnist itself that give you an overview–you just have to type in “learnist” on the search bar at the top to find them).

You have to send a request to them if you want an account to create your own boards or make comments, but you don’t have to have an account to see all the existing boards. You can see mine by following the links below. They are still in progress, so more will likely be added later (esp. to the first and third ones).


1. What is the Scholarship of Teaching and Learning and How Can I Get Involved?

2. Open Access Journals in the Scholarship of Teaching and Learning

3. Open Access Scholarly Publishing (and Why It’s Important)


And you could also check out my personal page on Learnist, which lists any new learnboards I may have added since writing this post, as well as the boards I myself “like” and “follow.”

Update Dec. 6, 2012

I received an email from someone over at Grockit, the company that runs Learnist, asking for suggestions to address the confusion I noted above about how it all works when you arrive at the page for the first time. Nice to hear they are concerned and open to suggestions!

They also let me know I could invite others to use Learnist, and that through that means they could bypass the usual waiting period. So if you want an invite, send me an email (I may have to ask you a couple of questions to make sure you’re a real person and really interested in Learnist, since I’ll personally be inviting you!). Find my email address on my personal website at:


How does giving comments in peer assessment impact students? (Part 1)

Some colleagues and I are brainstorming research we might undertake regarding peer assessment, and in our discussions the question in the title of this post came up. I am personally more interested in the comments students give each other in peer assessment than in the marks/grades they assign each other. Students who give comments on each other’s work are affected not only by receiving peer comments, of course, but also through the process of giving them. How does practice in giving comments and evaluating others’ work affect students’ own work, or the processes they use to produce it?

I’ve already looked at a couple of articles that address this question from a somewhat theoretical (rather than empirical) angle (see earlier posts here and here). As discussed in those posts, it makes sense to think that practice in evaluating the work of peers could help students get a better sense of what counts as “high quality,” and thus have that understanding available to use in self-monitoring so as to become more self-regulated.

In this post I summarize the findings of two empirical articles looking at the question of whether and how providing feedback to others affects the quality of students’ own work. I will continue this summary in another post, where I look at another few articles.

(1) Li, L., Liu, X. and Steckelberg, A.L. (2010) Assessor or assessee: How student learning improves by giving and receiving peer feedback, British Journal of Educational Technology 41:3, 525-536. DOI: 10.1111/j.1467-8535.2009.00968.x

In this study, 43 undergraduate teacher-education students engaged in online peer assessment of each other’s WebQuest projects. Each student evaluated the projects of two other students. They used a rubric, and I believe they gave both comments and marks to each other. Students then revised their projects, having been asked to take the peer assessment into account and decide what to use from it. The post-peer-assessment projects were marked by the course instructor.

Continue reading

Literature on written and oral peer feedback

For context on why I’m interested in this, see the previous post.

I’ve done some searches into the question of oral vs. written peer feedback, and been surprised at the paucity of results. Or rather, the paucity of results outside the field of language teaching, or of teaching courses in a language that is an “additional language” for students. I have also yet to look into the literature on online vs. face-to-face peer review. Outside of those areas, I’ve found only a few articles.

1. Van den Berg, I., Admiraal, W., & Pilot, A. (2006) Designing student peer assessment in higher education: analysis of written and oral peer feedback, Teaching in Higher Education, 11:2, 135-147.

In this article Van den Berg et al report on a study in which they looked at peer feedback in seven different courses in the discipline of history (131 students). These courses had peer feedback designs that differed in things such as: the kind of assignment that was the subject of peer feedback, whether the peer feedback took place alongside teacher feedback or there was peer feedback only, whether students who commented on others’ work received comments from those same students on their own work, how many students participated in feedback groups, and more. Most of the courses had both written and oral peer feedback, though one of the seven had written peer feedback only.

The authors coded both the written and oral feedback along two sets of criteria: feedback functions and feedback aspects. I quote from their paper to explain these two things, as they are fairly complicated:

Based on Flower et al. (1986) and Roossink (1990), we coded the feedback in relation to its product-oriented functions (referring directly to the product to be assessed): analysis, evaluation, explanation and revision. ‘Analysis’ includes comments aimed at understanding the text. ‘Evaluation’ refers to all explicit and implicit quality statements. Arguments supporting the evaluation refer to ‘Explanation’, and suggested measures for improvement to ‘Revision’. Next, we distinguished two process-oriented functions, ‘Orientation’ and ‘Method’. ‘Orientation’ includes communication which aims at structuring the discussion of the oral feedback. ‘Method’ means that students discuss the writing process. (141-142)

By the term ‘aspect’ we refer to the subject of feedback, distinguishing between content, structure, and style of the students’ writing (see Steehouder et al., 1992). ‘Content’ includes the relevance of information, the clarity of the problem, the argumentation, and the explanation of concepts. With ‘Structure’ we mean the inner consistency of a text, for example the relation between the main problem and the specified research questions, or between the argumentation and the conclusion. ‘Style’ refers to the ‘outer’ form of the text, which includes use of language, grammar, spelling and layout. (142)

They found that students tended to focus on different things in their oral and written feedback. Written feedback over all the courses tended to be more product-oriented than process-oriented, with a focus on evaluation of quality rather than explaining that evaluation or offering suggestions for revision. In terms of feedback aspect, written feedback focused more on content and style than structure (143).

Continue reading

Oral and written peer feedback

This post is part of my ongoing efforts to develop a research project focusing on the Arts One program–a team-taught, interdisciplinary program for first-year students in the Faculty of Arts at the University of British Columbia. As noted in some earlier posts, one of the things that stands out about Arts One is what we call “tutorials”: weekly meetings of four students plus the professor in which all read and comment on each other’s essays (students write approximately one essay every two weeks). Thus peer feedback on essays is an integral part of this course, occurring as a regular part of the course meeting time every week.

In a recent survey of Arts One alumni (see my post summarizing the results), students cited tutorials as one of the things that helped them improve their writing the most, and as one of the most important aspects of the program. In that earlier post I speculated on what might be so valuable about these tutorials: the frequency of providing and receiving peer feedback (giving feedback every week, getting feedback on your own paper every two weeks); the fact that professors are there in the meetings to give their own comments and to comment on the students’ comments; the fact that students revisit their work in an intensive way after it’s written; and the pressure students may feel to improve their work before submitting it, knowing they’ll have to present and defend it with their peers. That last point is perhaps made even more important when you consider that the students get to know each other quite well, meeting every week for at least one term (the course is two terms, or one year, long, but some of us switch students into different tutorial groups halfway through so they get the experience of reading other students’ papers too).

One thing I didn’t consider before, but am thinking about more now, is whether it makes a difference that the feedback is given mostly, if not exclusively, orally, synchronously, and face to face, rather than in writing and asynchronously.

Continue reading

The value of peer review for effective feedback

No matter how expertly and conscientiously constructed, it is difficult to comprehend how feedback, regardless of its properties, could be expected to carry the burden of being the primary instrument for improvement. (Sadler 2010, p. 541)

… [A] deep knowledge of criteria and how to use them properly does not come about through feedback as the primary instructional strategy. Telling can inform and edify only when all the referents – including the meanings and implications of the terms and the structure of the communication – are understood by the students as message recipients. (Sadler 2010, p. 545)

In “Beyond feedback: developing student capability in complex appraisal” (Assessment & Evaluation in Higher Education, 35:5, 535-550), D. Royce Sadler points out how difficult it can be for instructor feedback to work the way we might want–to allow students to improve their future work. Like Nicol and Macfarlane-Dick 2006 (discussed in the previous post), Sadler here argues that effective feedback should help students become self-regulated learners:

Feedback should help the student understand more about the learning goal, more about their own achievement status in relation to that goal, and more about ways to bridge the gap between their current status and the desired status (Sadler 1989). Formative assessment and feedback should therefore empower students to become self-regulated learners (Carless 2006). (p. 536)

The issue that Sadler focuses on here is that students simply cannot use feedback for improvement and the development of self-regulation unless they share some of the same knowledge as the person giving the feedback. Much of this is complex or tacit knowledge, not easily conveyed in things such as lists of criteria or marking rubrics. Instructors may try to make their marking criteria and their feedback as clear as they can, but:

Yet despite the teachers’ best efforts to make the disclosure full, objective and precise, many students do not understand it appropriately because, as argued below, they are not equipped to decode the statements properly. (p. 539)

Continue reading