Peer assessment: Face to face vs. online, synchronous (Part 1)

This is another post in the series on research literature that looks at the value of doing peer assessment/peer feedback in different ways, whether face to face, orally, or through writing (mostly I’m looking at computer-mediated writing, such as asynchronous discussion boards or synchronous chats). Earlier posts in this series can be found here, here, here and here.

In this post I’ll look at a few studies that focus on peer assessment through online, synchronous discussions (text-based chats).

1. Sullivan, N. & Pratt, E. (1996) A comparative study of two ESL writing environments: A computer-assisted classroom and a traditional oral classroom, System 24(4), 491-501. DOI: http://dx.doi.org/10.1016/S0346-251X(96)00044-9

38 second-year university students, for whom English was an additional language and who were studying English writing for the first time, participated in the study. They were divided between two classes taught by the same professor, and all the teaching materials were the same, except that in one class all class discussions and peer evaluation discussions were held orally, face to face, while in the other they were held online, in a synchronous “chat” system. In the computer-assisted class, students often met in a computer lab, where they engaged in whole-class discussions and peer group discussions using the chat system.

[I see the reason for doing this sort of thing, so that students don’t have to spend time outside of class doing online chats, but I do always find it strange to have a room full of students and the teacher sitting together but only communicating through computers.]

Research questions:

(1) Are there differences in attitudes toward writing on computers, writing apprehension, and overall quality of writing between the two groups after one semester?

(2) Is the nature of the participation and discourse in the two modes of communication different?

In what follows I will only look at the last part of question 1 (the overall quality of writing), as well as question 2.

Writing scores

At the beginning of the term, students produced a writing sample based on a prompt given by the instructor. This was compared with a similar writing sample produced at the end of the term. Both were “scored holistically on a five point scale by two trained raters” (494).

In the oral class, strangely, the writing scores went down by the end of the term: at the beginning the mean was 3.41 (out of 5), with a standard deviation of 0.77, and at the end it was 2.95, with an SD of 0.84. The authors do not comment on this drop, though the difference (0.46) is not great. In the computer class, the writing scores went up slightly: from a mean of 3.19 (SD 0.77) at the beginning to 3.26 (SD 0.70) at the end. The authors note, though, that “[t]he students in the two classes did not differ significantly (below the 0.05 probability level) at the beginning nor at the end of the semester” (496).

They did find some evidence that the students in the computer-assisted class improved their writing:

However, some evidence was found for improved writing in the computer-assisted class by comparing the writing score changes of the two classes (computer-assisted classroom’s gain (+0.07) to oral classroom’s loss (-0.46)). A t-test showed the difference to be significant at the 0.08 probability level. (496)

The authors conclude, however, that the data do not support saying that one environment is better than the other in terms of improving writing (nor, incidentally, do they for the rest of research question (1), above).
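Out of curiosity, the between-class comparisons can be roughly reconstructed from the summary statistics above. The sketch below is my own, not the authors’: it assumes the 38 students were split evenly between the two classes (n = 19 each), which the article doesn’t restate here, and uses a standard independent-samples t-test. The gain-score comparison (the 0.08 result) can’t be recovered this way, since that would require the standard deviations of the individual gain scores.

```python
from scipy.stats import ttest_ind_from_stats

# Means and SDs as reported by Sullivan & Pratt (1996, p. 496).
# The even class split (n = 19 per class) is an assumption.
n = 19

# Beginning of term: oral M = 3.41 (SD 0.77) vs. computer M = 3.19 (SD 0.77)
t_pre, p_pre = ttest_ind_from_stats(3.41, 0.77, n, 3.19, 0.77, n)

# End of term: oral M = 2.95 (SD 0.84) vs. computer M = 3.26 (SD 0.70)
t_post, p_post = ttest_ind_from_stats(2.95, 0.84, n, 3.26, 0.70, n)

print(f"beginning of term: t = {t_pre:.2f}, p = {p_pre:.2f}")
print(f"end of term:       t = {t_post:.2f}, p = {p_post:.2f}")
```

With those assumed class sizes, neither comparison comes close to the 0.05 level (p ≈ 0.38 and p ≈ 0.22, respectively), which is consistent with the passage quoted above.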

Discourse patterns in peer group discussions 

[The authors also looked at discourse patterns in the whole-class discussions, but as I don’t plan to do whole-class discussions via chats in the near future, I’m skipping that portion of the article here.]

During the peer assessment discussions, more comments were made in the oral class than in the online chat groups: 40-70 turns per group in the oral discussions versus 14-25 turns per group in the online chats (498). However, the authors found that the discussion in the oral class was, as they put it, “less focused” (498), in the sense that there were more interjections of personal narratives and repetitions of what other students had said. In the computer class, the talk was more “focused on the task of criticizing the writing rather than conversing with their fellow students while on the network” (499).

The tone of the article here suggests that the authors considered the talk in the online chat better than that in the oral discussion. But as noted in Hewett (2000), the sort of talk that might be interpreted as “unfocused” could also be interpreted as an important part of participating in an interactive discussion. Repetitions indicate that one is listening, following along, and being an engaged participant in a discussion. Personal narratives can both help to make a point and forge connections between discussion group members, perhaps bringing them closer together and thereby helping them feel more comfortable (which could contribute to more productive peer evaluation).

In addition, in the oral groups the author of the paper being discussed often dominated the discussion, whereas in the online chats the author spoke less, making for more equal participation.

My thoughts

I’m not terribly keen on trying to measure differences in writing quality with regard to methods of doing peer evaluation, mostly because it is very difficult to tease out whether differences in writing quality can be attributed to the different peer group environments or whether they have more to do with other factors. I’m more interested in looking at whether suggestions made in peer group discussions end up appearing in students’ later essay drafts. Perhaps small improvements in specific aspects of writing can be made this way, even if writing quality as a whole doesn’t improve much in a short amount of time.

I found it interesting that the two types of environments differed in how much the author spoke in the discussion; but then again, I’m not sure that this is terribly important. After all, the author needs to be able to make sure she understands what is being said, and to ask questions to clarify things. It’s also a good thing for the author to try to defend what she has written when she feels that critical comments are not on the mark. So really, just noting the different number of comments made by authors in the two situations doesn’t tell us much, and doesn’t suggest that one type of peer evaluation is superior to the other in this regard.

 

2. Liu, J. & Sadler, R.W. (2003) The effect and affect of peer review in electronic versus traditional modes on L2 writing, Journal of English for Academic Purposes 2, 193–227. DOI: http://dx.doi.org/10.1016/S1475-1585(03)00025-0

This study compared peer assessment that took place in a “traditional” way (comments written on paper essays, then discussed within a small group face to face) with a computer-enhanced process, in which students commented on each other’s essays digitally, using MS Word, and then discussed the essays and comments through a synchronous, online chat system. As in the study above, the chats took place during class time, with the students in the room together on computers.

Research questions: The authors asked whether both the asynchronous, written comments (paper vs. MS Word) and the synchronous comments (oral vs. text-based chat) differed in nature, according to “area (global versus local), the type (evaluation, clarification, suggestion, alteration), and the nature of comments (i.e. revision-oriented versus non revision-oriented)” (197). They also asked whether the two modes of commenting differed in terms of their effect on later revisions to essays.

In what follows I look just at the analysis of differences in “global” vs. “local” comments. “Global” comments have to do with “idea development, audience and purpose, and organization of writing,” while “local” ones are about “copy-editing, such as wording, grammar, and punctuation” (202; here they cite McGroarty & Zhu, 1997).

Participants and data: Data were gathered from two groups of four students, one in each of two first-year composition courses: one course used the “traditional” method of peer review described above, and the other the computer-mediated method. For all the students in the two groups, English was an additional language.

Results

Oral and text-based chat discussions

[I’m not interested in the differences between digital comments on essays and handwritten comments on essays, so I skip most of that part of the study.]

The comments made in the oral and text-based chat discussions were both predominantly global in nature: 86% of the total comments in the oral group’s discussion, and 97% in the computer-mediated group’s (209). Compare this with the significantly higher proportions of local comments when students were writing comments on each other’s essays: 58% of the total comments in the “traditional” group, writing on paper, were “local,” and 72% of those in the computer-mediated group were local (204). The authors note that spelling and grammar revision comments were far more numerous in the computer-mediated group’s written comments via MS Word, likely due to Word’s automated spelling and grammar checking. Given the small sample size these results are only suggestive, but they indicate that oral or chat-based discussions may focus more on global concerns with writing than local ones.

An important difference between the two types of discussions was the amount of “conversation maintenance”: talk that helps to move the conversation along and organize it, a category that also includes greetings and other social talk that helps the group engage in an effective and respectful conversation. 43% of the turns taken in the face to face group fell into this category, compared to 68% in the text-based chat group (209). The authors suggest that more of this work has to be done in a text-based chat environment, devoid of body language and facial expressions, to keep conversations going and to avoid misunderstanding.

There were also significantly more revision-oriented comments in the face to face discussion than in the text-based chat discussion. Excluding the maintenance turns, 97% of the remaining turns in the face to face discussion were revision-oriented, compared to 76% of those in the text-based chat group. [added March 6, 2013: Including maintenance turns, 55% of the total conversational turns for the face to face group were revision-oriented, compared to 24% for the e-chat group (210).] One reason for this, the authors suggest, could be that the students in the oral group often referred to the peer papers and the review sheets they had prepared during the discussion, while those in the computer chat did not (211). Why might this have been the case? Partly because the essays and comments were digital, and partly because even if the students had printed them out, they had to pay attention to the comments scrolling by on the screen and couldn’t spend much time looking at the essays.
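As a quick sanity check (my arithmetic, not the authors’), the excluding- and including-maintenance percentages are consistent with each other: the revision-oriented share of all turns should equal the non-maintenance share of turns times the revision-oriented rate within them.

```python
# Consistency check on the turn percentages from Liu & Sadler (2003, pp. 209-210).
# maintenance: share of all turns that were conversation maintenance
# rev_rate:    share of the remaining turns that were revision-oriented
for group, maintenance, rev_rate in [
    ("face to face",    0.43, 0.97),
    ("text-based chat", 0.68, 0.76),
]:
    overall = (1 - maintenance) * rev_rate  # revision-oriented share of ALL turns
    print(f"{group:15s}: (1 - {maintenance}) * {rev_rate} = {overall:.0%} of all turns")
```

Both come out to the figures the authors report (55% and 24%), so the bracketed addition above is just the same data expressed against a different denominator.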

Effect of the two types of peer discussion on revisions

Here, the authors consider the effect of both the written comments (on paper and on MS Word, respectively) and the discussions (face to face and text-based chat, respectively).

In the “traditional” group, students acted on more of the “global” revision comments than in the computer-mediated group: 68% of the global revision-oriented comments were used in a second draft in the traditional group, vs. 43% in the computer-mediated group. The computer-mediated group actually received more global revision comments (69, or 22% of that group’s total comments, vs. 47, or 26% of the total for the traditional group); in other words, even though the traditional group had fewer global revision comments to work with, its students acted on a larger proportion of them (214; table 6).

The use of “local” revision comments was similar in both groups: 27% of local revision comments were used in the traditional group, and 22% in the computer-mediated group. This is interesting, because there were quite a few more local revision comments in the computer-mediated group: 222 comments in the computer-mediated group were local and revision-oriented (70% of the total comments for that group), vs. 89 in the traditional group (49% of the total comments). Thus, even though there were more local, revision-oriented comments in the computer group, they were acted on at about the same rate as those in the traditional group (214; table 6).
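To make these uptake figures concrete, here is a back-of-the-envelope check of my own on the numbers reported from table 6; the absolute counts are approximate, since I am recovering them from the reported percentages.

```python
# Approximate counts of comments acted on, back-computed from the
# percentages Liu & Sadler (2003) report in table 6 (p. 214).
# For each group and comment area: (revision-oriented comments, uptake rate).
groups = {
    "traditional":       {"global": (47, 0.68), "local": (89, 0.27)},
    "computer-mediated": {"global": (69, 0.43), "local": (222, 0.22)},
}

for name, areas in groups.items():
    used_total, n_total = 0, 0
    for area, (n_comments, rate) in areas.items():
        used = round(n_comments * rate)  # approx. comments used in draft 2
        used_total += used
        n_total += n_comments
        print(f"{name:17s} {area:6s}: ~{used:2d} of {n_comments:3d} acted on ({rate:.0%})")
    print(f"{name:17s} overall: ~{used_total} of {n_total} acted on "
          f"({used_total / n_total:.0%})")
```

Recovered this way, the overall uptake rates come out to about 41% for the traditional group and 27% for the computer-mediated group, matching the figures quoted in the discussion below. It also shows that in absolute terms the two groups acted on roughly the same number of global comments (~32 vs. ~30), while the computer-mediated group’s much larger pool of local comments still yielded about twice as many local revisions (~49 vs. ~24).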

Discussion

There were more revision-oriented comments overall (both written digitally on the papers and made in the text-based chat) in the computer-mediated group than in the traditional group: 291 in the computer-mediated group (92% of the total comments in that group) vs. 136 in the traditional group (76% of the total comments in that group) (214; fig. 8). However, the traditional group acted on more of the revision-oriented comments given in their group, as evidenced by the second drafts of their essays. Including both global and local revision-oriented comments,

the percentage of revisions made based on revision-oriented comments was much higher for the traditional group (41% versus 27%) in comparison to the technology-enhanced group. Therefore, even though the technology-enhanced group did have a larger number of revisions, the comments made do appear to be less effective overall. (218)

Part of the problem with the computer chat discussions had to do with the need for more explicit conversation maintenance, which would otherwise be taken care of non-verbally, as noted above. Another problem was noted as well, though, having to do with the time it takes to type something and the fact that while one student is typing, the others may have already moved on:

…it is hard to determine turn-taking, and each student feels rushed to type his or her comments in order to follow the flow of communication. As one student described in the survey, ‘‘I have some ideas but after I type one sentence, my peers have already switched to the next paragraph. I feel very frustrated, and I simply quit commenting.’’ (220)

The authors conclude that face to face discussion in peer assessment settings is best (though they also suggest that students use digital, asynchronous, written comments on essays, based on results given in a part of the article I did not discuss) (221-222).

My thoughts

It is interesting that this study and the one reported in Sullivan and Pratt (1996) came to different conclusions about the online chat environment. Whereas Sullivan and Pratt (1996) found that there were fewer comments overall, and more focused discussion in the online chats, Liu and Sadler (2003) found that there were more comments given in the online chats, and at the same time much more “conversation maintenance” needed. I honestly don’t know what to make of these differences, except to note that the Liu and Sadler study collected data from only 4 students in each group. Perhaps with a larger sample the results would have been different.

In this study, both groups had written comments as well as a synchronous discussion about them, so the “oral” group still had a written record of comments to use for later revision. That the comments in the oral group seemed to be more effective, in terms of being used in revisions, is therefore striking. There may be something about oral discussion vs. text-based chat that allows students to better understand, see the value of, and otherwise feel motivated to use peer comments. Perhaps it is the ease with which one can follow the discussion, without getting left behind when the conversation moves on while one is still typing, or the more personal nature of the exchange. Maybe, as one gets to know one’s peers better, as could potentially happen in a face to face situation with body language and facial expressions, one might be more likely to take their comments seriously. That’s purely speculation, of course.

There is one more article on face to face vs. online, synchronous discussion of peer feedback that I plan to comment on, in an upcoming post. This one is quite long already.

Photo Credit: Extra Ketchup via Compfight cc, CC-BY license

 

Works Cited

Hewett, B. (2000) Characteristics of Interactive Oral and Computer-Mediated Peer Group Talk and Its Influence on Revision, Computers and Composition 17, 265-288. DOI: http://dx.doi.org/10.1016/S8755-4615(00)00035-9

McGroarty, M.E. & Zhu, W. (1997) Triangulation in classroom research: A study of peer revision, Language Learning 47(1), 1-43.