Here I look at one last study I’ve found that focuses on the nature of student peer feedback discussions when they take place in a synchronous, online environment (a text-based chat). Part 1 of this series can be found here.
Jones, R.H., Garralda, A., Li, D.C.S. & Lock, G. (2006) Interactional dynamics in on-line and face-to-face peer-tutoring sessions for second language writers, Journal of Second Language Writing 15, 1–23. DOI: http://dx.doi.org/10.1016/j.jslw.2005.12.001
This study is rather different from the ones I looked at in Part 1 on face-to-face vs. online, synchronous peer assessment, because here the subjects of the study are students and peer tutors in a writing centre rather than peers in the same course. Still, at least some of the results regarding the nature of peer talk in the tutoring situation may be relevant for peer assessment in courses.
Participants and data
The participants in this study were five peer tutors in a writing centre in Hong Kong dedicated to helping non-native English speakers write in English. For both tutors and clients, English was an additional language, but the tutors were further along in their English studies and were more proficient in writing in English than the clients. Data were collected from transcripts of the tutors’ face-to-face consultations with clients, as well as from transcripts of online, text-based chat sessions between the same tutors and many of the same clients.
Face-to-face tutoring was available only during the daytime on weekdays, so students who wanted help after hours could turn to the online chat. Face-to-face sessions lasted between 15 and 30 minutes, and students “usually” emailed a draft of their work to the tutor before the session. Chat sessions could last anywhere from a few minutes to an hour, and though tutors and clients could send files to each other through a file exchange system, this was done only “sometimes” (6). These details will become important later.
Model for analyzing speech
To analyze the interactions between tutors and clients, the authors used a model based on “Halliday’s functional-semantic view of dialogue (Eggins & Slade, 1997; Halliday, 1994)” (4). In this model, one analyzes conversational “moves,” which are different from “turns”–a single “turn” can contain more than one “move.” The authors explain a move as “a discourse unit that represents the realization of a speech function” (4). For example, a single turn such as “Yes, that’s right. Now, what about your next paragraph?” contains two moves: an agreement followed by a question.
In their model, the authors adopt a fundamental distinction, drawn from Halliday, between “initiating moves” and “responding moves”:
Initiating moves (statements, offers, questions, and commands) are those taken independently of an initiating move by the other party; responding moves (such as acts of acknowledgement, agreement, compliance, acceptance, and answering) are those taken in response to an initiating move by the other party. (4-5)
They then subdivide these two categories further; some of the subcategories are discussed briefly below.
Results
Conversational control
In the face-to-face meetings, the tutors exerted the most control over the discussions. Tutors made many more initiating moves (around 40% of their total moves, vs. around 10% for clients), whereas clients made more responding moves (around 33% of their total moves, vs. about 14% for tutors). In the chat conversations, on the other hand, the proportions of initiating and responding moves were about equal for both tutors and clients (7).
Looking more closely at the initiating moves made by both tutors and clients, the authors report:
In face-to-face meetings, tutors controlled conversations primarily by asking questions, making statements, and issuing directives. In this mode tutors asked four times more questions than clients. In the on-line mode, clients asked more questions than tutors, made significantly more statements than in the face-to-face mode, and issued just as many directives as tutors. (10)
Types of questions
However, the authors also point out that even though the clients asserted more conversational control in the online chats, it was “typical” of the chats to consist of questions from students asking whether phrases, words, or sentences were “correct” (11). The students did not often ask for explanations; they wanted a quick check of their work by an expert and a simple answer as to whether something was right or wrong. On the other hand, when tutors controlled the conversations with their questions, they were often using strategies to get clients to understand something themselves–to understand why something was right or wrong and to be able to apply that knowledge later. So “control” over the conversation, and who asks the most questions or issues the most directives, are not the only important considerations here.
The authors also divided the questions into three types (12):
- Closed questions: “those eliciting yes/no responses or giving the answerer a finite number of choices”
- Open questions: “those eliciting more extended replies”
- Rhetorical questions: “those which are not meant to elicit a response at all”
In the face-to-face sessions, tutors used more closed questions (about 50% of their initiating questions) than open questions (about 33%); the opposite was true in the online chats, where tutors used more open questions (about 50% of their initiating questions) than closed ones (about 41%).
Directives (a subcategory of initiating moves)
The authors note that “in face-to-face sessions, tutors issued more than six times more directives than clients,” but in the online chats the number of directives was about the same for tutors and clients (14). They subdivided directives into requests, suggestions, and commands, and found that:
- In face-to-face meetings tutors made over twice as many requests as clients, whereas in the online chats clients made about three times more requests than tutors.
- In face-to-face meetings clients made very few commands, whereas in the online chats the number of commands from clients and tutors was about the same.
- In both modes, unsurprisingly, tutors made significantly more suggestions than clients. (14)
Topics of discussion
In the face-to-face mode, there were many more conversational turns devoted to “textual” issues such as grammar, word choice, and style than in the online chat mode. On the other hand, in the online chats there were more conversational turns devoted to the “higher order goals” (Bruffee, 1986; Harris, 1986) related to content and the writing process than in the face-to-face mode.
Some of the other results could be grounded in this difference in topic, the researchers point out. Focusing on grammar, word choice, and other “local,” “textual” issues may tend to produce tutor-controlled discussions with fewer open-ended questions than a focus on larger writing issues does.
Finally, the authors point out that there was much more “relational” talk in the online chat mode than in the face-to-face mode. Some of this is clearly due to the fact that establishing and maintaining relationships is harder in a text-based chat than in a face-to-face meeting, and so must be done through explicit talk rather than through tone, gesture, or expression. But the authors suggest that more was happening than this: the tutor/client relationship itself was different in the online mode than in the face-to-face mode, they argue (16). Relying mostly on the sheer amount of relational talk in the online mode (as well, presumably, as an analysis of the recorded chats, though they don’t mention this), the authors state that the online chats were more like conversations–more open and fluid, with more sharing of personalities–than the face-to-face meetings. The latter were structured more like “lessons,” with a hierarchical relationship between tutor and client.
Discussion
In the last part of the article, the authors ask,
what can account for the dramatic shift in interactional dynamics when the tutoring sessions were conducted on-line? (17)
They suggest, relying in part on other research, that computer-mediated communication can lead to more personal sharing and disclosure and to the development of more egalitarian relationships, partly because of the perceived distance from the other person. They also note that going into the writing centre meant clients were on the tutors’ “turf,” whereas online chats could be carried out from the clients’ own “turf.”
The conclusion reached here is that both types of tutoring might be useful, since they seem to focus on different topics (local, textual vs. global).
My thoughts
It may be that many of these results can be explained by the tutors having the essays before the face-to-face meetings (and thus being able to come prepared with detailed comments), and by tutors and clients both looking at the essay during the session. Neither of these was usually the case for the online chats. Thus, perhaps it was not so much the medium that made the difference–except insofar as the medium of the online chat made it more difficult to share files (I don’t know whether that was true or not).
The different topics of discussion could also be explained by this: it’s much easier to discuss textual issues with the essay in front of you during the meeting. And if, as the authors suggest when discussing the different topics of the two modes (local vs. global writing concerns), the differing topics can explain at least some of the other differences found–such as those in the types of questions and in the power relations–then many of the results found here may be due to tutors having the essay in front of them (and beforehand) during the face-to-face session but not during the online chat.
For example, couldn’t the differences in the initiating moves, including questions, requests, and commands, be due in large part to this difference? If the tutor has the paper beforehand, s/he can come ready with numerous questions, requests, and commands. In the online chats, by contrast, the initiator of the discussion is the client, who chooses to contact the tutor with a request or question in mind. The client is now the one prepared beforehand with something to say, and the discussion is centred around his/her question, request, or other initiating move.
If the tutor and client both had the essay in front of them during the online chat, and if the tutor had had it before the chat and thus came ready with comments already prepared, would this have changed the nature of the dialogue in the chat?
Relating this study to peer assessment
I’m less concerned in peer assessment with the issues of power hierarchies that Jones et al. examined between tutors and clients, though I think such issues can and do come up. It is quite possible that some students see themselves as more expert on a subject or in a skill than other students–and others might see them this way as well–which could lead to some problematic power dynamics in peer assessment.
I’m more interested in whether the two modes (face-to-face vs. online, text-based chat) differ in terms of how students interact in them. I expect they do, but I am concerned about the issue noted above: the differences found here may pertain largely to whether participants had the essay in hand, and ahead of time, or not. In all my work with peer assessment, students read and comment on each other’s essays before discussing them together, so an online chat with this kind of peer assessment may look very different from what Jones et al. have reported.
In the next post I’m going to try to summarize, and draw some conclusions from (if possible), all the research I’ve covered so far on different modes of peer assessment.
Works Cited
Bruffee, K. A. (1986). Social construction, language, and the authority of knowledge: A bibliographical essay. College English, 48, 773–790.
Eggins, S., & Slade, D. (1997). Analyzing casual conversation. London: Cassell.
Halliday, M. A. K. (1994). An introduction to functional grammar (2nd ed.). London: Edward Arnold.
Harris, M. (1986). Teaching one-to-one: The writing conference. Urbana, IL: National Council of Teachers of English.
Comments

Thanks for the fine overview of this issue. I want to add some comments from a rhizomatic point of view. One of the things I noticed in all three of the peer review studies that you examine is the apparent emphasis on content (the text itself) independent of context (the situation that elicited the text, including at least the writer, the reader, and the issue under discussion). In other words, all three studies seem to take a very traditional approach to assessment of a text: judging a text from a privileged position of authority. This is a particular habit of school, and I’m not sure it ever works very well. It isn’t the way texts are judged outside of school, where context is unavoidable.
For instance, you and I have engaged in conversations across both our blogs as a result of the ETMOOC community. We have engaged in conversation because we have a common interest (rhizomatic learning), but we have persisted because we find value in the conversation. This value comes only partly from our texts themselves. Much of the value of our texts here (I will insist that most of the value) comes from the connectivity to lively, interesting discussions with lively, interesting people. We both strive to write well, in terms of standard academic English, so as not to undermine the conversation and lose the value in that conversation, but merely writing correctly is not the point. True, if either you or I had been unable to write in an academic vernacular, the conversation would never have developed as the other would have declined to engage, but mere correctness is not a sufficient basis for real conversation. Likewise if a privileged outsider (let’s say, Alec Couros) had stepped in to judge our conversation, editing this and that and giving us a grade, then likely the conversation would have died another death.
People talk and write to connect to others and to issues. Moreover, people will work really hard to master the language of a conversation space, IF (and usually only if) they really want to connect to that conversation. Millions of people have learned the quirky grammar, punctuation, and spelling of texting with no training. Why? Because they wanted to connect to the conversations afforded by that peculiar language.
I suspect that too many of my students don’t master standard academic English for two main reasons: first, they are afraid of having their wrists whacked every time they make the slightest misteak (yes, I know. Slap my wrist. No toddler would learn to speak under such a regime), and second, they really don’t care if they connect to me, their teacher (they usually assume that I am the only reader who counts, even if I try to arrange it otherwise), or to the topics I propose. Even if I convince them to write about their own topics to their fellow students, they struggle to connect to those artificial contexts created in a classroom.
cMOOCs work against these two issues. If we cannot speak or write well, then the MOOC simply ignores us, and we drop out. Or we lurk (the way toddlers do) until we find the confidence to speak. And MOOCs have no penalty for testing a conversation and simply turning away if we find nothing engaging. I suspect that this explains why so many people drop out of MOOCs.
I take several lessons from this for my classes. First, I always provide a space for people to write without fear of grading, often in a blog. The blogs generate some of the best conversations ever in my classes, and I engage them as a fellow blogger, NOT as a teacher. To be fair, some of the blogs are deadly boring and poorly constructed, engaging no one. I ignore those, as do the other students.
Second, I work really hard to find ways for students to bring their own value to the class. Even if I assign a topic, I encourage them to approach it from their own interests and points of view. I look for real reasons for writing to real readers, and I accept different dialects for different readers. I watch for their interests and passions, and I try to cultivate those and connect their writing to them. It isn’t easy, but I have some success.
I look for ways for students to really play at writing, not just practice. When they are playing, they write so much better. Nobody likes practice much.
Wow–excellent comments here, Keith. Really got me thinking, so thank you.
It makes complete sense to me that a large part (perhaps most) of the value in writing comes from the ability to connect with others through it and engage in conversations. Of course, some writing doesn’t do this at all, as it’s done (e.g.) in order to instruct and does not become part of a conversation or exchange of any sort. It also makes sense that this applies to teaching and learning: students will learn to write better if they find this sort of value in their writing. Blogs are a good way to have students’ written work become part of a conversation with others.
I teach in a program in which even the academic papers can be part of a larger conversation. In Arts One, students meet every week in a group of four, along with their instructor, to give peer feedback on each other’s essays. Now, one of the many benefits that emerges from this practice is that students are able to discuss their work in a small group, listen to questions and comments, and defend or decide to alter what they’re saying. But that’s not all; sometimes (and I love it when this happens) other students pick up on an argument that someone has made in an essay and respond to it in their own essays. They might use it and apply it in a new way, or possibly criticize it (constructively), even while writing about a different topic and text. I think this sort of academic conversation might be facilitated by the fact that the students get to know each other so well and work closely together for a year.
So I’d like to think that peer feedback practices can involve student writing in larger conversations. It’s just that it probably rarely happens.
You’re right that the empirical studies I’ve discussed all talk about the content of essays (insofar as they judge the essays at all; some talk just about the nature of the comments given in the peer groups). The context is given: a university course, of some kind or another. And writing for the instructor and for the course, mainly, is treated as the norm.
Would peer feedback be different if students’ written work were more connected to their own contexts, their own interests, and/or to larger conversations that the work is a part of? Possibly. I can imagine that the comments given would change if students were more invested in the writing–more engaged in what they were saying and in the conversations they are a part of. I haven’t seen anyone looking at that issue in the literature I’ve found so far (though I have by no means looked at all the peer assessment literature–a vast amount!). What a great thing to think about.