Rubrics and peer feedback

I’ve been participating in an open, online course called Human MOOC: Humanizing Online Instruction. It’s officially over now, but I’m just completing a couple of final things from it.

One of the sections was on peer review/peer feedback by students on each other's work. It linked to a very helpful resource on peer feedback from the teaching and learning centre at Washington University in St. Louis. A second page, linked from the first, is also very useful: "How to Plan and Guide In-class Peer Review Sessions." A couple of things struck me about these resources that I wanted to comment on briefly.

What rubric/criteria should students use to do peer review?

On the first resource linked above, the following is stated:

Some instructors ask their students to evaluate their peers’ writing using the same criteria the instructor uses when grading papers (e.g., quality of thesis, adequacy of support, coherence, etc.). Undergraduate students often have an inadequate understanding of these criteria, and as a result, they either ignore or inappropriately apply such criteria during peer-review sessions (Nilson 2003).

The second resource states similarly:

The role of the peer-reviewer should be that of a reader, not an evaluator or grader. Do not replicate the grading criteria when designing these worksheets. Your students will not necessarily be qualified to apply these criteria effectively, and they may feel uncomfortable if they are given the responsibility to pronounce an overall judgment on their peers’ work.

This makes sense, though at the same time it's troubling: if students can't understand the rubrics we use to mark their work, how can they understand why they got the mark they did, or what they need to do to improve? It seems to me the answer is not to ask students to use a different rubric for peer review than the one we use for marking, but to change the marking rubric so that it makes more sense to students (if there are comprehension problems). Now, I haven't read the work by Nilson cited above, but it would be interesting to look more carefully into what undergraduate students tend to understand or not understand, and why, and then change one's rubric accordingly.

One way to do this, perhaps, is to ask students to use one's marking rubric to evaluate sample essays, and to invite feedback on the rubric as or after they do so. That way one can catch some of the things students don't understand before using the rubric to mark their actual essays.

Mock peer review session

The second resource suggests holding a mock session to begin with, which seems an excellent idea. It connects with the importance of training students in peer review before asking them to engage in it on work for the course (as discussed in Sluijsmans et al., 2002).

The idea would be to give students a "fake" essay of a kind similar to what they need to write, give them the peer review worksheet, and ask them to come up with comments on the paper. This can be done individually or in groups. Then, in class, have students give their comments to the whole group while the instructor writes them down on something that can be shown on the screen (or, alternatively, one could have them write the comments on a shared document online so they could be projected easily and the instructor doesn't have to re-write them!). Then the class can have a discussion of the essay, the comments, and the marking worksheet/rubric, to clear up any confusion or help students improve their comments: e.g., moving from "good introduction" to saying what, in particular, is good about the introduction.

This is an excellent idea, and I'm going to incorporate it in my upcoming philosophy class this summer. In Arts One we meet every week to do peer review of essays, in groups of four students plus the prof, so we can help students learn how to do peer review well on an almost one-to-one basis. And, since they do it every week for a year, they get quite good at it after a while (even after a very short time, actually!).


Self-assessment

I could have sworn that the resources linked above from Washington University also talked about the value of students doing self-assessment of their own work, but now I can't find that on those pages. Still, I was thinking that after students do peer feedback on each other's work, it would be useful for them to go back to their own work and give feedback on it. It seems to me that after reading and commenting on others' work, and seeing what works and what doesn't, one can come back to one's own with fresh eyes, having learned from others' essays and also having distanced oneself from one's own a bit.

I think I’ll try asking students to submit the peer review worksheet on their own essays after doing the peer feedback on others’, when they turn in their drafts post-peer-feedback.


Works cited

Nilson, L. B. (2003). Improving student peer feedback. College Teaching, 51(1), 34–38.

Sluijsmans, D. M. A., Brand-Gruwel, S., van Merriënboer, J. J. G., & Bastiaens, T. J. (2002). The training of peer assessment skills to promote the development of reflection skills in teacher education. Studies in Educational Evaluation, 29(1), 23–42. http://doi.org/10.1016/S0191-491X(03)90003-4


Trying out VoiceThread

I'm trying to participate here and there in #HumanMOOC, which is an open online course about humanizing online instruction: how to make online courses feel engaging, connect students to the instructor and each other, etc. Here's the main course page on the Canvas network, and here's a more "open" version of the general topics and activities of the course, on WordPress.

This week, one of the activities is to try out VoiceThread. The facilitators had created a set of slides and invited participants to comment on them using VoiceThread (a free account lets you comment on already-created presentations), using audio, video, or text comments.

I found myself frustrated right away with a couple of things.

1. I'm in a coffee shop where the wifi is not that great, and I kept getting kicked off. It was difficult to move through the program with spotty wifi!

2. I was looking at the embedded version of the presentation on the course website and found it very hard to deal with. Only one or two lines of each text comment are visible, and you have to scroll to see the rest; but the scrolling is so fast that you can't really read it smoothly. See the screenshot below (I blacked out names and faces because I haven't asked permission to post them here! When I blacked out one of them, it inexplicably turned into a star!).

[Screenshot: embedded VoiceThread presentation, with a text comment truncated to a couple of lines]

You can see how there are just a couple of lines visible and you have to scroll down to see the rest.

Turns out this is solved if you go to the “full screen” option at the top left. You can actually see the whole comment!

3. When I clicked on the next slide, the writing on it was not visible, due to some dark bar across it:

[Screenshot: slide with its text obscured by a dark bar]

Turns out this was fixed when I went to full screen as well. It would be good to highlight these issues for students and tell them to use the full-screen view.


Otherwise, it seems like a good tool! It plays through all the comments one by one, automatically, but you can pause them if you want. The audio and video comments seem to work just fine (so long as my wifi stays up).

One downside I can see is that if a lot of people leave comments, it's quite a long process to go through them all. Of course, you can tell students they don't have to go through all of them, but then it's likely that only the first few people's comments will get read/heard/watched. You can jump to the middle or the end easily using the timeline at the bottom, but I expect most people would stick with the beginning ones, because those start playing right away when you open the file. It would be cool if there were an option to shuffle the comments randomly for different viewers, to avoid this problem.

Unfortunately, I'm not sure I could use this tool in my courses, at least not easily. British Columbia has a law, the Freedom of Information and Protection of Privacy Act, which states (Part 3, section 30.1) that any personal information collected by public bodies (including public universities) must be stored in Canada unless people give written permission otherwise. Vancouver Island University has put together a useful guide on how to deal with this law in public post-secondary institutions in BC, and at the end there is a sample consent form for students to sign. It's quite an affair, though, because you need to present students with the privacy policies of the tool, what could happen to their information (whom it may be shared with), and more. Plus, if one has many, many students in the course, it's quite a pain to collect all these forms and make sure everyone has filled one out. And then, what do you do about those who don't agree? You need to come up with an alternative. And since you have to have an alternative anyway, it's just easier not to use tools that store personal information outside Canada!

Many educators are frustrated with this law, but it doesn’t look like it’s changing any time soon, unfortunately. Why does it exist? I’m told it started with the Patriot Act in the U.S., and British Columbia not wanting identifiable information to be shared by its public institutions with the U.S. unless the individuals had given informed consent.

I could use this tool if it had a way to log in and post anonymously. Or maybe students could make up a fake name and a fake email address, but that's rather a pain for them!

Apparently, though, one university in BC has integrated VoiceThread with Moodle (see presentation here), so it is at least possible…but I think they had to go through a fair bit of work to make that happen, and I don't see anything about VoiceThread and UBC when I do a web search. Hmmmm….