Category Archives: Teaching

Peer Assessment: Face to face vs. online, synchronous (Part 2)

Here I look at one last study I’ve found that focuses on the nature of student peer feedback discussions when they take place in a synchronous, online environment (a text-based chat). Part 1 corresponding to this post can be found here.

Jones, R.H., Garralda, A., Li, D.C.S. & Lock, G. (2006) Interactional dynamics in on-line and face-to-face peer-tutoring sessions for second language writers, Journal of Second Language Writing 15,  1–23. DOI: http://dx.doi.org/10.1016/j.jslw.2005.12.001

This study is rather different from the ones I looked at in Part 1 of face to face vs. online, synchronous peer assessment, because here the subjects of the study are students and peer tutors in a writing centre rather than peers in the same course. Still, at least some of the results regarding the nature of peer talk in the tutoring situation may be relevant for peer assessment in courses.

Participants and data

The participants in this study were five peer tutors in a writing centre in Hong Kong, dedicated to helping non-native English speakers write in English. For both tutors and clients, English was an additional language, but the tutors were further along in their English studies and had more proficiency in writing in English than the clients. Data was collected from transcripts of face to face consultations of the tutors with clients, as well as transcripts of online, text-based chat sessions of the same tutors, with many of the same clients.

Face to face tutoring was only available in the daytime on weekdays, so if students wanted help after hours, they could turn to the online chat. Face to face sessions lasted between 15 and 30 minutes, and students “usually” emailed a draft of their work to the tutor before the session. Chat sessions could be anywhere from a few minutes to an hour, and though tutors and clients could send files to each other through a file exchange system, this was only done “sometimes” (6). These details will become important later.

Model for analyzing speech

To analyze the interactions between tutors and clients, the authors used a model based on “Halliday’s functional-semantic view of dialogue (Eggins & Slade, 1997; Halliday, 1994)” (4). In this model, one analyzes conversational “moves,” which are different than “turns”–a “turn” can have more than one “move.” The authors explain a move as “a discourse unit that represents the realization of a speech function” (4).

In their model, the authors use a fundamental distinction drawn by Halliday between “initiating moves” and “responding moves”:

Initiating moves (statements, offers, questions, and commands) are those taken independently of an initiating move by the other party; responding moves (such as acts of acknowledgement, agreement, compliance, acceptance, and answering) are those taken in response to an initiating move by the other party. (4-5)

They then subdivide these two categories further; some of the subdivisions are discussed briefly below.

Results

Conversational control

In the face to face meetings, the tutors exerted the most control over the discussions. Tutors had many more initiating moves (around 40% of their total moves, vs. around 10% of those for clients), whereas clients had more responding moves (around 33% of clients’ total moves, vs. about 14% for tutors). In the chat conversations, on the other hand, initiating and responding moves were about equal for both tutors and clients (7).

Looking more closely at the initiating moves made by both tutors and clients, the authors report:

In face-to-face meetings, tutors controlled conversations primarily by asking questions, making statements, and issuing directives. In this mode tutors asked four times more questions than clients. In the on-line mode, clients asked more questions than tutors, made significantly more statements than in the face-to-face mode, and issued just as many directives as tutors. (10)

Types of questions

However, the authors also point out that even though the clients asserted more conversational control in the online chats, it was “typical” of the chats to consist of questions by students asking whether phrases, words, or sentences were “correct” (11). They did not often ask for explanations, just a kind of check of their work from an expert and a quick answer as to whether something was right or wrong. On the other hand, when tutors controlled the conversations with their questions, it was often the case that they were using strategies to try to get clients to understand something themselves, to understand why something is right or wrong and to be able to apply that later. So “control” over the conversation, and who asks the most questions or issues the most directives, are not the only important considerations here.

The authors also divided the questions into three different types. Closed questions: “those eliciting yes/no responses or giving the answerer a finite number of choices”; open questions: “those eliciting more extended replies”; rhetorical questions: “those which are not meant to elicit a response at all” (12).

In the face to face sessions, tutors used more closed questions (about 50% of their initiating questions) than open questions (about 33%); the opposite was true in the online chats: tutors used more open questions (about 50% of their initiating questions) than closed (about 41%).

Continue reading

Peer assessment: Face to face vs. online, synchronous (Part 1)

This is another post in the series on research literature that looks at the value of doing peer assessment/peer feedback in different ways, whether face to face, orally, or through writing (mostly I’m looking at computer-mediated writing, such as asynchronous discussion boards or synchronous chats). Earlier posts in this series can be found here, here, here and here.

In this post I’ll look at a few studies that focus on peer assessment through online, synchronous discussions (text-based chats).

1. Sullivan, N. & Pratt, E. (1996) A comparative study of two ESL writing environments: A computer-assisted classroom and a traditional oral classroom, System 24(4), 491-501. DOI: http://dx.doi.org/10.1016/S0346-251X(96)00044-9

Thirty-eight second-year university students studying English writing for the first time (where English was an additional language) participated in the study. They were distributed in two classes taught by the same professor, where all the teaching materials were the same except that in one class all class discussions and peer evaluation discussions were held orally, face to face, and in the other all class discussions and peer group discussions were held online, in a synchronous “chat” system. In the computer-assisted class, students met often in a computer lab, where they engaged in whole-class discussions and peer group discussions using the chat system.

[I see the reason for doing this sort of thing, so that students don’t have to spend time outside of class doing online chats, but I do always find it strange to have a room full of students and the teacher sitting together but only communicating through computers.]

Research questions:

(1) Are there differences in attitudes toward writing on computers, writing apprehension, and overall quality of writing between the two groups after one semester?; and

(2) Is the nature of the participation and discourse in the two modes of communication different?

In what follows I will only look at the last part of question 1 (the overall quality of writing), as well as question 2.

Writing scores

At the beginning of the term, students produced a writing sample based on a prompt given by the instructor. This was compared with a similar writing sample given at the end of the term. These were “scored holistically on a five point scale by two trained raters” (494).

In the oral class, strangely, the writing scores went down by the end of the term: at the beginning the mean was 3.41 (out of 5), with a standard deviation of 0.77, and at the end it was 2.95 with a SD of 0.84. The authors do not comment on this phenomenon, though the difference (0.46) is not great. In the computer class, the writing scores went up slightly: from a mean of 3.19 (SD 0.77) at the beginning to 3.26 (SD 0.70) at the end. The authors note, though, that “[t]he students in the two classes did not differ significantly (below the 0.05 probability level) at the beginning nor at the end of the semester” (496).

They did find some evidence that the students in the computer-assisted class did improve their writing:

However, some evidence was found for improved writing in the computer-assisted class by comparing the writing score changes of the two classes (computer-assisted classroom’s gain (+0.07) to oral classroom’s loss (-0.46)). A t-test showed the difference to be significant at the 0.08 probability level. (496)

The authors conclude, however, that the data does not support saying one environment is better than another in terms of improving writing (nor, incidentally, for the rest of research question (1), above).
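As a rough check on those numbers: using the reported means and standard deviations, and assuming the 38 students were split evenly into classes of 19 (the exact group sizes are my assumption, not reported in my notes), a pooled two-sample t-test on the end-of-term scores comes out well under the usual 0.05 critical value, which is consistent with the authors’ report of no significant difference between the classes. A minimal sketch:

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic with pooled variance (equal-variance assumption)."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# End-of-term writing scores from Sullivan & Pratt: computer class 3.26 (SD 0.70),
# oral class 2.95 (SD 0.84). n = 19 per class is an assumed even split of 38.
t = pooled_t(3.26, 0.70, 19, 2.95, 0.84, 19)
print(round(t, 2))  # comfortably below ~2.03, the 0.05 critical value for df = 36
```

With these assumptions the statistic is around 1.24, nowhere near significance, so the blog’s reading of the paper holds up arithmetically.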

Discourse patterns in peer group discussions 

[The authors also looked at discourse patterns in the whole-class discussions, but as I don’t plan to do whole-class discussions via chats in the near future, I’m skipping that portion of the article here.]

There were more comments made in the oral class, during peer assessment discussions, than in the online chat groups: 40-70 turns per group for the oral discussions and 14-25 turns per group for the online chats (498). However, the authors found that the discussion in the oral class was, as they put it, “less focused” (498), in the sense that there were more interjections of personal narratives and repetitions of what other students had said. In the computer class, the talk was more “focused on the task of criticizing the writing rather than conversing with their fellow students while on the network” (499).

The tone of the article here indicates that the talk in the online chat was better than that in the oral discussion. But as noted in Hewett (2000), the sort of talk that might be interpreted as “unfocused” could also be interpreted as an important part of participating in an interactive discussion. Repetitions indicate that one is listening, following along, and being an engaged participant in a discussion. Personal narratives can both help to make a point as well as forge some connections between discussion group members, perhaps bringing them closer together and thereby helping them feel more comfortable (which could contribute to more productive peer evaluation).

In addition, in the oral groups the author of the paper being discussed often dominated the discussion, while the author spoke less in the online chats, making for more equal participation.

Continue reading

(etmooc) Digital Storytelling, you’re looking better every day

In a recent post I explained that I just haven’t been very into digital storytelling, the second topic in etmooc. While many of the other participants have been busy creating animated gifs, 5 card stories, photo stories and more, I just wasn’t engaged enough to try to do much myself.

But then something happened. Well, Cogdog (Alan Levine) happened.

He gave a presentation on digital storytelling for etmooc, which I was able to join live. I’m not sure what was so inspiring about it, really–he introduced some tools, talked about how to write stories, asked some of the participants to play pechaflickr during the session. But somehow, partway through, I started getting excited.

Probably it was Cogdog’s enthusiasm. He just is so into storytelling, and digital storytelling, that I thought, well, there must be something to this. His excitement was infectious. I caught it.

The part of the presentation that really got me, though, was when he talked about how professional writing could be more like storytelling, that we could provide information, but do it in a more engaging way. He cited a book by Randy Olson called Don’t be Such a Scientist, which discusses the need for scientists to reach a broader audience and the power of storytelling to help do so. Olson was a professor at a university and then moved into filmmaking, and argues that scientists could learn a lot from the world of storytellers, in order to make what they do more accessible.

So could philosophers

And it hit me that this could be a great way to try to make my class lectures, the presentations I do for classes more engaging. I already try to ensure I don’t do too much lecturing and also have a good deal of activities for students to engage in during class time, discussions, working together in groups, etc. But why not find a way to make the lectures themselves more like stories?

This is challenging, but it’s a challenge I’m suddenly wanting to take on. I just needed to find something that I felt passionate about, and getting students as excited as I am about philosophy is that something.

Why not start small, by trying to incorporate some of the aspects of good storytelling practice in some lectures (it will take a while to change many or all of them!)? Why not, for example, start with a hook, something that draws people in, present an obstacle, resolve it, and then set up for a new story? (As discussed here, where storytelling meets math.) This could be done fairly easily without requiring too much in the way of time or learning new technological tools.

But there’s more

Somehow I also got excited about the digital part of digital storytelling. I mean, I started to want to spend time with some of the tools. I started coming up with ideas for stories–like telling the story of a recent trip to New Zealand (some of the photos are posted on flickr, though the ones with people are private), or the story behind the name of this blog–and I was motivated to look around Cogdog’s 50+ ways to tell a digital story site to find tools that would work.

My previous reluctance was due to numerous reasons, but partly because I didn’t want to put a lot of time into learning a new tool and creating something with it, only to discover that in a couple years’ time the tool would disappear. It’s hard to know which of these applications will stick around and which will die off. It seemed a waste of time.

But then in his presentation Cogdog pointed out: sure, some of the tools will disappear, but you will still have all your source photos, video, text, transcripts, etc., and it’s not that hard to create the story again in something new. Good point. I’m still worried about making things for my son that will still be viewable 20 or 30 years down the road, so I’m making a photo book that will be printed; that way, technology obsolescence won’t destroy it (though dirt, water, and forgetting it in a box might).

A true story

So I got up this morning and re-recorded my “true story of open sharing” for Cogdog’s collection. I tried to start with something that was a little more engaging … “I got a comment on my blog.” Okay, that’s not very exciting in itself, but it could make you think about what sort of comment on my blog could lead me to want to tell a story. It might get people wondering.

The rest of the story is rather like it was before, but at least it’s a start. And I played around with iMovie (an application that comes with Mac computers) to add in a couple of titles, at the beginning and end, and put in some transitions from the titles to the video.

I spent a good deal of time trying to lessen the background noise–an airplane, and my husband trying to get the pilot light on the gas fireplace lit. (I was originally going to film this in front of our fireplace, with the gas flames going, but it’s summer here in Australia and we turned off the pilot light. It turned out there was a trick to getting it back on, and it took a while to figure out! So I just filmed outside instead.) I couldn’t completely remove the background noise without making my voice sound very, very strange, but it is better than it was.

Then, I put the video into Mozilla Popcorn maker, because I wanted to include some relevant links (e.g., to my home page, to my blog). Here’s the result.

Okay, so it took me a couple hours longer than I thought it would, but now I have the hang of Popcorn Maker. And special thanks to Glenn Hervieux (@SISQITMAN), who came to my aid on Twitter when I ran into a problem with it!

Peer assessment: face to face vs. online, asynchronous (Pt. 2)

This is part of a series of posts in which I summarize and comment on research literature about different methods of doing peer assessment. Earlier posts in this series can be found here and here, and part 1 corresponding to this particular post is here.

In this post I summarize, as briefly as I can, a complex study on differences between how students speak to each other when doing peer assessment when it’s in person versus on a discussion board (mostly asynchronous, but students also did some posting to the discussion boards in a nearly synchronous environment, during class time).

Hewett, B. (2000) Characteristics of Interactive Oral and Computer-Mediated Peer Group Talk and Its Influence on Revision, Computers and Composition 17, 265-288. DOI: 10.1016/S8755-4615(00)00035-9

This study looked at differences between ways peers talk in face to face environments and computer-mediated environments (abbreviated in the article as CMC, for computer-mediated communication). It also looked at whether there are differences in the ways students revise writing assignments after these different modes of peer assessment and feedback.

There were several research questions for the study, but here I’ll focus just on this one:

How is peer talk that occurs in the traditional oral and in the CMC classroom alike and different? Where differences exist, are they revealed in the writing that is developed subsequent to the peer-response group sessions? If so, how? (267)

Participants and data

Students in two sections of an upper-level course (Argumentative writing) at a four-year university participated; one section engaged in face to face peer assessment, and the other used computer-mediated peer assessment, but otherwise the two courses were the same, taught by the same instructor. The CMC course used a discussion board system with comments organized chronologically (and separated according to the peer groups), and it was used both during class, synchronously (so students were contributing to it while they were sitting in a class with computers) and outside of class, asynchronously.

Peer group conversations were recorded in the face to face class, and the record of conversations from the CMC class could just be downloaded. The author also collected drafts of essays on which the peer discussion took place. Data was collected from all students, but only the recordings of conversations in one peer group in each class (oral and CMC) were used for the study. I’m not sure how many students this ended up being–perhaps 3-4 per peer group? [update (Feb. 28, 2013)] Looking at the article again, a footnote shows that there were four students in each group.

One of those groups, the CMC group, engaged in both computer-mediated peer discussion as well as oral discussion at a later point–so this group provides a nice set of data about the same people, discussing together, in two different environments. Below, when talking about the “oral” groups, the data include the group that was in the oral only class, plus the CMC group when they discussed orally.

Results

Nature of the talk in both environments

Not surprisingly, the student discussion in the face to face groups was highly interactive; the students’ statements often referred to what someone else had said, asked questions of others, clarified their own and others’ statements, and used words and phrases that cued to others that they were listening and following along, encouraging dialogue (e.g., saying “yes,” “right,” “okay,” “exactly”) (269-270).

In the CMC discussions, the talk was less interactive. Multiple threads of discussion occurred on the board, and each student’s comments could pick up on several at a time. This created a “multivocal tapestry of talk” that individuals would have to untangle in order to participate (270). At times, students in a peer group would respond to the paper being discussed, but not to each other (271), so that the comments were more like separate, standalone entities than part of an interactive conversation.

In addition, the possibility for asynchronous communication, though it could be convenient, also left some students’ comments and questions unanswered, since others may or may not return to the board after the synchronous group “chat” time had ended.

Subjects of the talk in each environment

Hewett found that face to face discussion had more talk about ideas, wider issues raised in the papers, and information about the contexts surrounding the claims and issues discussed in the papers, than in the CMC discussion (276). The CMC groups tended to focus more on the content of what was written, and showed less evidence of working together to develop new ideas about the topics in the essays. Hewett suggests: “Speculative thinking often involves spinning fluid and imperfectly formed ideas; it requires an atmosphere of give-and-take and circumlocution,” which is more characteristic of oral speech (276).

Continue reading

Peer assessment: face to face vs. asynchronous, online (Pt. 1)

I have been doing a good deal of reading research on peer assessment lately, especially studies that look at differences and benefits/drawbacks of doing peer assessment face to face, orally, and through writing–both asynchronous writing in online environments (e.g., comments on a discussion board) and synchronous writing online (e.g., in text-based “chats”). I summarized a few studies on oral vs. written peer assessment in this blog post, and then set out a classification structure for different methods of peer assessment in this one.

Here, I summarize a few studies I’ve read that look at written, online, asynchronous peer feedback. In another post I’ll summarize some studies that compare oral, face to face with written, online, synchronous (text-based chats). I hope some conclusion about the differences and the benefits of each kind can be drawn after summarizing the results.

1. Tuzi, F. (2004) The impact of e-feedback on the revisions of L2 writers in an academic writing course, Computers and Composition 21, 217–235. doi:10.1016/j.compcom.2004.02.003

This study is a little outside of my research interest, as it doesn’t compare oral feedback to written (in any form). Rather, the research focus was to look at how students revised essays after receiving e-feedback from peers and their teacher. Oral feedback was only marginally part of the study, as noted below.

Twenty L2 students (students for whom English was an additional language) in a first-year writing course at a four-year university participated in this study. Paper drafts were uploaded onto a website where other students could read them and comment on them. The e-feedback could be read on the site, but was also sent via email to students (and the instructor). Students wrote four papers as part of the study, and could revise each paper up to five times. 97 first drafts and 177 revisions were analyzed in the study. The author compared comments received digitally to later revised drafts, to see what had been incorporated. He also interviewed the authors of the papers to ask what sparked them to make the revisions they did.

Tuzi combined the results from analyzing the essay drafts and e-feedback (to see what of the feedback had been incorporated into revisions) with the results of the interviews with students, to identify the stimuli for changes in the drafts. From these data he concludes that 42.1% of the revisions were instigated by the students themselves, 15.6% came from e-feedback, 14.8% from the writing centre, 9.5% from oral feedback (from peers, I believe), and for 17.9% of the revisions the source was “unknown.” He also did a few finer-grained analyses, showing how e-feedback fared in relation to these other sources at different levels of writing (such as punctuation, word, sentence, and paragraph), in terms of the purpose of the revision (e.g., new information, grammar), and more. In many analyses, the source of most revisions was the students themselves, but e-feedback ranked second in some (such as revisions at the sentence, clause, and paragraph levels, and adding new information). Oral feedback was always low on the list.
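Taking the writing-centre share as 14.8% (the reading that makes the figures sum to essentially 100%), the revision sources Tuzi reports can be tallied and ranked; a quick sketch:

```python
# Revision sources reported by Tuzi (2004), as percentages of all revisions.
sources = {
    "self": 42.1,
    "e-feedback": 15.6,
    "writing centre": 14.8,
    "oral feedback": 9.5,
    "unknown": 17.9,
}

total = sum(sources.values())
ranked = sorted(sources, key=sources.get, reverse=True)

print(f"total: {total:.1f}%")  # ~100%, allowing for rounding
print("ranking:", ranked)
```

Note that in this overall tally “unknown” actually outranks e-feedback; it is only in some of the finer-grained analyses that e-feedback came second to the students themselves.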

In the “discussion” section, Tuzi states:

Although e-feedback is a relatively new form of feedback, it was the cause of a large number of essay changes. In fact, e-feedback resulted in more revisions than feedback from the writing center or oral feedback. E-feedback may be a viable avenue for receiving comments for L2 writers. Another interesting observation is that although the L2 writers stated that they preferred oral feedback, they made more e-feedback-based changes than oral-based changes.

True, but note that in this study oral feedback was not emphasized. It was something students could get if they wanted, but only the e-feedback was focused on in the course. So little can be concluded here about oral vs. e-feedback, though to be fair, that wasn’t really the point of the study. The point was simply to see how students use e-feedback, whether it is incorporated into revisions, and what kinds of revisions e-feedback tends to be used for. And Tuzi is clear towards the end: “Although [e-feedback] is a useful tool, I do not believe it is a replacement for oral feedback or classroom interaction …”. Different means of feedback should be available; this study just shows, he says, that e-feedback can be useful as one of them.

Continue reading

Peer feedback: oral, written, synchronous, asynchronous…oh my

In an earlier post I summarized a few studies of peer assessment/peer feedback, and since then I’ve done some more research and found several more studies. What I also realized is that “oral vs written peer feedback” is not an adequate description of the myriad options. There is also synchronous vs. asynchronous, and face-to-face vs. mediated (often computer mediated).

So, written feedback can be asynchronous or synchronous (such as with online “chats”), and done on paper or computer mediated (typed digitally and emailed or uploaded into some online system that allows students to retrieve the comments on their papers).

Oral feedback, in turn, can be face-to-face or computer mediated, and the latter can be asynchronous (such as recorded audio or video) or synchronous (such as real-time video or audio chatting).

Thus, the possibilities are (at least insofar as I understand them at this point) the eight combinations of oral vs. written, face-to-face vs. computer-mediated, and synchronous vs. asynchronous:

Oral: face-to-face synchronous; face-to-face asynchronous; computer-mediated synchronous; computer-mediated asynchronous
Written: face-to-face synchronous; face-to-face asynchronous; computer-mediated synchronous; computer-mediated asynchronous

The situations in two of the boxes are quite rare:

  1.  Oral, asynchronous, face-to-face feedback situations are probably rare, such as showing videos of one’s feedback to another person in person (or doing the same with an audio recording).
  2.  Written, face-to-face, synchronous feedback by itself is likely also rare, since it’s more probable that students will be writing comments on each other’s papers in each other’s presence while also discussing comments together–in which case the situation would be a blend of written, face-to-face, synchronous and oral, face-to-face, synchronous.

Also, I’m not really sure about the written, face-to-face, asynchronous box; that is only face-to-face insofar as the comments are given to the peer face-to-face, on paper.
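For my own bookkeeping, the eight permutations (and the two flagged above as rare) can be enumerated mechanically; a minimal sketch, where the label strings are just my shorthand for the categories:

```python
from itertools import product

modes = ["oral", "written"]
media = ["face-to-face", "computer-mediated"]
timing = ["synchronous", "asynchronous"]

# The two combinations identified above as rare in practice.
rare = {
    ("oral", "face-to-face", "asynchronous"),
    ("written", "face-to-face", "synchronous"),
}

combos = list(product(modes, media, timing))
for combo in combos:
    marker = "  (rare)" if combo in rare else ""
    print(", ".join(combo) + marker)
```

Eight boxes in all, which is exactly why a table (and an ugly abbreviation scheme) starts to look necessary.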

The reason why I’m taking the time to distinguish these various permutations is that the literature I’ve been reading lately falls into various boxes on the table. For example, Reynolds and Russell (2008) (see this post) would fall into the oral, computer-mediated, asynchronous box. Most of the literature that talks about “oral” feedback is talking about oral, face-to-face, synchronous feedback (as I was using the term “oral” before now).

So now I guess I’ll need to come up with a new naming convention, and it will likely be an ugly set of abbreviations, such as:

Oral, face-to-face (assumed synchronous): OFTF
— though maybe written face-to-face is rare enough that this could just be FTF?

Oral, computer mediated, synchronous: OCMS

Oral, computer mediated, asynchronous: OCMA

Etc. Quite ugly.

In the next few days I’ll summarize and comment on a few more articles that fall into these various boxes, and then see if I can come up with any conclusions about the different types of oral vs. written peer feedback from those summaries and the ones in the post linked at the beginning, above. Does the research allow for any stable conclusions at this point? I’ll have to see after I think through the various papers I’m about to summarize in the next few days…

 

Works cited

Reynolds, J. & Russell, R. (2008) Can You Hear Us Now?: A comparison of peer review quality when students give audio versus written feedback, The WAC Journal 19(1), 29-44. http://wac.colostate.edu/journal/vol19/index.cfm

Problems with grading rubrics for complex assignments

In an earlier post I discussed a paper by D. Royce Sadler on how peer marking could be a means for students to learn how to become better assessors themselves, of their own and others’ work. This could not only allow them to become more self-regulated learners, but also fulfill roles outside of the university in which they will need to evaluate the work of others. In that essay Sadler argues against giving students preset marking criteria to use to evaluate their own work or that of other students (when that work is complex, such as an essay), because:

  1. “Quality” is more of a global concept that can’t easily be captured by a set of criteria, as it often includes things that can’t be easily articulated.
  2. As Sadler pointed out in a comment to the post noted above, having a set of criteria in advance predisposes students to look for only those things, and yet in any particular complex work there may be other things that are relevant for judging quality.
  3. Giving students criteria in advance doesn’t prepare them for life beyond their university courses, where they won’t often have such criteria.

I was skeptical about asking students to evaluate each others’ work without any criteria to go on, so I decided to read another one of his articles in which this point is argued for more extensively.

Here I’ll give a summary of Sadler’s book chapter entitled “Transforming Holistic Assessment and Grading into a Vehicle for Complex Learning” (in Assessment, Learning and Judgement in Higher Education, Ed. G. Joughin. Dordrecht: Springer, 2009). DOI: 10.1007/978-1-4020-8905-3_4).

[Update April 22, 2013] Since the above is behind a paywall, I am attaching here a short article by Sadler that discusses similar points, and that I’ve gotten permission to post (by both Sadler and the publisher): Are we short-changing our students? The use of preset criteria in assessment. TLA Interchange 3 (Spring 2009): 1-8. This was a publication from what is now the Institute for Academic Development at the University of Edinburgh, but these newsletters are no longer online.

Note: this is a long post! That’s because it’s a complicated article, and I want to ensure that I’ve got all the arguments down before commenting.

Continue reading

The Power of Space in the Classroom

Most of us know very well the importance of space in the classroom–how the room is set up can really change the dynamics of a class. For example, in a discussion course, I try to set up the room in as much of a circle as possible (which, given the configuration of some rooms, is sometimes impossible). Once I had a seminar-style class in a room where there simply wasn’t enough space to put the tables and chairs into a circle, so we had to leave them in rows. That was the worst term I’ve ever had for discussion.

A colleague of mine in the Arts One Program was even more innovative in her use of space than I’ve ever thought of being myself.

I have had the chance to view the classes of some of my colleagues in Arts One over the past few years. I wish I had more such chances to see others teach, since I always learn from what others are doing in their classes.

Arts One has two 75-80 minute seminar-style discussion classes per week, with a maximum of 20 students, so most of the rooms we have allow for circular (actually rectangular) seating: tables arranged in a ring, with a big space in the middle. That works pretty well, since everyone can see everyone else.

Still, the professor usually sits at one of the “heads” of the table, on one of the shorter ends (we don’t have to do this, of course, but I’ve often seen it done). Subtly, then, we are still making ourselves the focal point by making sure most students can see us well (often students avoid sitting right next to the prof, and sit on the longer sides of the table instead).

This sort of setup is good for having books, paper and computers (if they’re allowed) out on the desk while engaging in discussion, but the tables with the big space in the middle cut us off from one another in a sense, putting a fair amount of physical distance between us.

Continue reading

How does giving comments in peer assessment affect students? (Part 3)

This is the third post in a series summarizing empirical studies that attempt to answer the question posed in the title. The first two can be found here and here. This will be the last post in the series, I think, unless I find some other pertinent studies.

Lundstrom, K. and Baker, W. (2009) To give is better than to receive: The benefits of peer review to the reviewer’s own writing, Journal of Second Language Writing 18, 30-43. DOI: 10.1016/j.jslw.2008.06.002

This article focuses on students in “L2” classes (second language, or additional language), and asks whether students who review peers’ papers improve their own (additional language) writing more than students who only receive peer reviews and attempt to incorporate that feedback, without giving comments on peers’ papers themselves.

Participants were 91 students enrolled in nine sections of additional language writing classes at the English Language Center at Brigham Young University. The courses were at two levels out of a possible five: half the students were in level 2, “high beginning,” and half were in level 4, “high intermediate” (33). The students were then divided into a control and experimental group:

The first group was composed of two high beginning and three high intermediate classes, (totaling forty-six students). This group was the control group (hereafter ‘‘receivers’’) and received peer feedback but did not review peers’ papers (defined as compositions written by students at their same proficiency level). The second group was composed of two high beginning classes and two high intermediate classes, (totaling forty-five students), and made up the experimental group, who reviewed peer papers but did not receive peer feedback (hereafter ‘‘givers’’). (33; emphasis mine)

Research questions and procedure to address them

Research questions:

1. Do students who review peer papers improve their writing ability more than those who revise peer papers (for both beginning and intermediate students)?

2. If students who review peer papers do improve their writing ability more than those who revise them, on which writing aspects (both global and local) do they improve? (32)

Continue reading

How does giving comments in peer assessment impact students? (Part 2)

This is the second post looking at published papers that use empirical data to answer the question in the title. The first can be found here. As noted in that post, I’m using “peer assessment” in a broad way, referring not just to activities where students give grades or marks to each other, but also to the qualitative feedback they provide to each other (as that is the sort of peer assessment I usually use in my courses).

Here I’ll look at just one article on how giving peer feedback affects students, as this post ended up being long. I’ll look at one last article in the next post (as I’ve only found four articles on this topic so far).

Lu, J. and Law, N. (2012) Online peer assessment: effects of cognitive and affective feedback, Instructional Science 40, 257-275. DOI 10.1007/s11251-011-9177-2. This article has been made open access, and can be viewed or downloaded at: http://link.springer.com/article/10.1007%2Fs11251-011-9177-2

In this study, 181 students aged 13-14 in a Liberal Studies course in Hong Kong participated in online peer review of various parts of their final projects for the course. They were asked to both engage in peer grading and give peer feedback to each other in groups of four or five. The final project required various subtasks, and peer grading/feedback was not compulsory — students could choose which subtasks to give grades and feedback to their peers about. The grades were given using rubrics created by the teacher for each subtask, and both grades and feedback were given through an online program specially developed for the course.

Research Questions

  1. Are peer grading activities related to the quality of the final project for both assessors and assessees?
  2. Are different types of peer …  feedback related to the quality of the final projects for both assessors and assessees? (261)

Continue reading