Category Archives: Scholarship of Teaching and Learning

Peer Assessment: Face to face vs. online, synchronous (Part 2)

Here I look at one last study I’ve found that focuses on the nature of student peer feedback discussions when they take place in a synchronous, online environment (a text-based chat). Part 1 of this pair of posts can be found here.

Jones, R.H., Garralda, A., Li, D.C.S. & Lock, G. (2006) Interactional dynamics in on-line and face-to-face peer-tutoring sessions for second language writers, Journal of Second Language Writing 15, 1–23. DOI: http://dx.doi.org/10.1016/j.jslw.2005.12.001

This study is rather different from the ones I looked at in Part 1 of face to face vs. online, synchronous peer assessment, because here the subjects are students and peer tutors in a writing centre rather than peers in the same course. Still, at least some of the results regarding the nature of peer talk in the tutoring situation may be relevant for peer assessment in courses.

Participants and data

The participants in this study were five peer tutors in a writing centre in Hong Kong dedicated to helping non-native English speakers write in English. For both tutors and clients, English was an additional language, but the tutors were further along in their English studies and more proficient in writing in English than the clients. Data were collected from transcripts of the tutors’ face to face consultations with clients, as well as from transcripts of online, text-based chat sessions between the same tutors and many of the same clients.

Face to face tutoring was only available in the daytime on weekdays, so if students wanted help after hours, they could turn to the online chat. Face to face sessions lasted between 15 and 30 minutes, and students “usually” emailed a draft of their work to the tutor before the session. Chat sessions could be anywhere from a few minutes to an hour, and though tutors and clients could send files to each other through a file exchange system, this was only done “sometimes” (6). These details will become important later.

Model for analyzing speech

To analyze the interactions between tutors and clients, the authors used a model based on “Halliday’s functional-semantic view of dialogue (Eggins & Slade, 1997; Halliday, 1994)” (4). In this model, one analyzes conversational “moves,” which are different from “turns”–a “turn” can have more than one “move.” The authors explain a move as “a discourse unit that represents the realization of a speech function” (4).

In their model, the authors start from a fundamental distinction Halliday draws between “initiating moves” and “responding moves”:

Initiating moves (statements, offers, questions, and commands) are those taken independently of an initiating move by the other party; responding moves (such as acts of acknowledgement, agreement, compliance, acceptance, and answering) are those taken in response to an initiating move by the other party. (4-5)

They then subdivide these two categories further, some of which is discussed briefly below.
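To make the model more concrete, here is a minimal sketch (mine, not the authors’) of how coded moves could be represented and tallied to produce the kinds of percentages reported below; the category labels follow the passage quoted above, while the function, the toy transcript, and the rest are purely illustrative.

```python
from collections import Counter

# Category labels from the Halliday-based model as the authors describe it;
# the tallying code itself is an illustrative sketch, not the authors' method.
INITIATING = {"statement", "offer", "question", "command"}
RESPONDING = {"acknowledgement", "agreement", "compliance", "acceptance", "answer"}

def move_shares(coded_moves):
    """Given (speaker, move_type) pairs, return each speaker's share of
    initiating and responding moves out of their total moves."""
    totals, initiating, responding = Counter(), Counter(), Counter()
    for speaker, move in coded_moves:
        totals[speaker] += 1
        if move in INITIATING:
            initiating[speaker] += 1
        elif move in RESPONDING:
            responding[speaker] += 1
    return {
        speaker: {
            "initiating": initiating[speaker] / totals[speaker],
            "responding": responding[speaker] / totals[speaker],
        }
        for speaker in totals
    }

# Toy transcript: a single turn can contain more than one move.
coded = [
    ("tutor", "question"), ("client", "answer"),
    ("tutor", "statement"), ("client", "acknowledgement"),
    ("client", "question"), ("tutor", "answer"),
]
print(move_shares(coded))
```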

Results

Conversational control

In the face to face meetings, the tutors exerted more control over the discussions. Tutors made many more initiating moves (around 40% of their total moves, vs. around 10% of clients’ total moves), whereas clients made more responding moves (around 33% of their total moves, vs. about 14% of tutors’). In the chat conversations, on the other hand, initiating and responding moves were about equal for both tutors and clients (7).

Looking more closely at the initiating moves made by both tutors and clients, the authors report:

In face-to-face meetings, tutors controlled conversations primarily by asking questions, making statements, and issuing directives. In this mode tutors asked four times more questions than clients. In the on-line mode, clients asked more questions than tutors, made significantly more statements than in the face-to-face mode, and issued just as many directives as tutors. (10)

Types of questions

However, the authors also point out that even though the clients asserted more conversational control in the online chats, it was “typical” of the chats to consist of questions from students asking whether phrases, words, or sentences were “correct” (11). They did not often ask for explanations, just a quick check of their work by an expert and a yes-or-no answer as to whether something was right or wrong. On the other hand, when tutors controlled the conversations with their questions, they were often using strategies to get clients to understand something for themselves: to see why something is right or wrong and to be able to apply that understanding later. So “control” over the conversation, and who asks the most questions or issues the most directives, are not the only important considerations here.

The authors also divided the questions into three types: closed questions, “those eliciting yes/no responses or giving the answerer a finite number of choices”; open questions, “those eliciting more extended replies”; and rhetorical questions, “those which are not meant to elicit a response at all” (12).

In the face to face sessions, tutors used more closed questions (about 50% of their initiating questions) than open questions (about 33%); the opposite was true in the online chats: tutors used more open questions (about 50% of their initiating questions) than closed (about 41%).

Continue reading

Peer assessment: Face to face vs. online, synchronous (Part 1)

This is another post in the series on research literature that looks at the value of doing peer assessment/peer feedback in different ways, whether face to face, orally, or through writing (mostly I’m looking at computer-mediated writing, such as asynchronous discussion boards or synchronous chats). Earlier posts in this series can be found here, here, here and here.

In this post I’ll look at a few studies that focus on peer assessment through online, synchronous discussions (text-based chats).

1. Sullivan, N. & Pratt, E. (1996) A comparative study of two ESL writing environments: A computer-assisted classroom and a traditional oral classroom, System 24, 491-501. DOI: http://dx.doi.org/10.1016/S0346-251X(96)00044-9

Thirty-eight second-year university students studying English writing for the first time (where English was an additional language) participated in the study. They were divided between two classes taught by the same professor, where all the teaching materials were the same except that in one class all class discussions and peer evaluation discussions were held orally, face to face, and in the other all class discussions and peer group discussions were held online, in a synchronous “chat” system. In the computer-assisted class, students often met in a computer lab, where they engaged in whole-class discussions and peer group discussions using the chat system.

[I see the reason for doing this sort of thing, so that students don’t have to spend time outside of class doing online chats, but I do always find it strange to have a room full of students and the teacher sitting together but only communicating through computers.]

Research questions:

(1) Are there differences in attitudes toward writing on computers, writing apprehension, and overall quality of writing between the two groups after one semester?; and

(2) Is the nature of the participation and discourse in the two modes of communication different?

In what follows I will only look at the last part of question 1 (the overall quality of writing), as well as question 2.

Writing scores

At the beginning of the term, students produced a writing sample based on a prompt given by the instructor. This was compared with a similar writing sample given at the end of the term. These were “scored holistically on a five point scale by two trained raters” (494).

In the oral class, strangely, the writing scores went down by the end of the term: at the beginning the mean was 3.41 (out of 5), with a standard deviation of 0.77, and at the end it was 2.95 with a SD of 0.84. The authors do not comment on this phenomenon, though the difference (0.46) is not great. In the computer class, the writing scores went up slightly: from a mean of 3.19 (SD 0.77) at the beginning to 3.26 (SD 0.70) at the end. The authors note, though, that “[t]he students in the two classes did not differ significantly (below the 0.05 probability level) at the beginning nor at the end of the semester” (496).

They did find some evidence that the students in the computer-assisted class improved their writing:

However, some evidence was found for improved writing in the computer-assisted class by comparing the writing score changes of the two classes (computer-assisted classroom’s gain (+0.07) to oral classroom’s loss (-0.46)). A t-test showed the difference to be significant at the 0.08 probability level. (496)
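For readers who want to see what the reported between-class comparisons look like numerically, here is a minimal sketch using the summary statistics quoted above. The even split of the 38 students into classes of 19 is my assumption (only the total is reported), and the gain-score test at the 0.08 level cannot be reconstructed from these figures, since the standard deviations of the gains are not given.

```python
# Illustrative reconstruction of independent-samples t-tests from the reported
# means and SDs; class sizes of 19 each are an assumption, not from the paper.
from scipy.stats import ttest_ind_from_stats

n_oral = n_computer = 19  # assumed split of the 38 participants

# Beginning of term: oral class 3.41 (SD 0.77) vs. computer class 3.19 (SD 0.77)
pre = ttest_ind_from_stats(3.41, 0.77, n_oral, 3.19, 0.77, n_computer)

# End of term: oral class 2.95 (SD 0.84) vs. computer class 3.26 (SD 0.70)
post = ttest_ind_from_stats(2.95, 0.84, n_oral, 3.26, 0.70, n_computer)

print(f"beginning of term: t = {pre.statistic:.2f}, p = {pre.pvalue:.2f}")
print(f"end of term:       t = {post.statistic:.2f}, p = {post.pvalue:.2f}")
# Both p-values come out above 0.05, consistent with the authors' report that
# the two classes did not differ significantly at either point.
```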

The authors conclude, however, that the data does not support saying one environment is better than another in terms of improving writing (nor, incidentally, for the rest of research question (1), above).

Discourse patterns in peer group discussions 

[The authors also looked at discourse patterns in the whole-class discussions, but as I don’t plan to do whole-class discussions via chats in the near future, I’m skipping that portion of the article here.]

There were more turns taken during the peer assessment discussions in the oral class than in the online chat groups: 40-70 turns per group for the oral discussions and 14-25 turns per group for the online chats (498). However, the authors found that the discussion in the oral class was, as they put it, “less focused” (498), in the sense that there were more interjections of personal narratives and repetitions of what other students had said. In the computer class, the talk was more “focused on the task of criticizing the writing rather than conversing with their fellow students while on the network” (499).

The tone of the article here indicates that the talk in the online chat was better than that in the oral discussion. But as noted in Hewett (2000), the sort of talk that might be interpreted as “unfocused” could also be interpreted as an important part of participating in an interactive discussion. Repetitions indicate that one is listening, following along, and being an engaged participant in a discussion. Personal narratives can both help to make a point as well as forge some connections between discussion group members, perhaps bringing them closer together and thereby helping them feel more comfortable (which could contribute to more productive peer evaluation).

In addition, in the oral groups the author of the paper being discussed often dominated the discussion, while in the online chats the author spoke less, making for more equal participation.

Continue reading

Peer assessment: face to face vs. asynchronous, online (Pt. 1)

I have been doing a good deal of reading research on peer assessment lately, especially studies that look at differences and benefits/drawbacks of doing peer assessment face to face, orally, and through writing–both asynchronous writing in online environments (e.g., comments on a discussion board) and synchronous writing online (e.g., in text-based “chats”). I summarized a few studies on oral vs. written peer assessment in this blog post, and then set out a classification structure for different methods of peer assessment in this one.

Here, I summarize a few studies I’ve read that look at written, online, asynchronous peer feedback. In another post I’ll summarize some studies that compare oral, face to face with written, online, synchronous (text-based chats). I hope some conclusion about the differences and the benefits of each kind can be drawn after summarizing the results.

1. Tuzi, F. (2004) The impact of e-feedback on the revisions of L2 writers in an academic writing course, Computers and Composition 21, 217–235. DOI: 10.1016/j.compcom.2004.02.003

This study is a little outside of my research interest, as it doesn’t compare oral feedback to written (in any form). Rather, the research focus was to look at how students revised essays after receiving e-feedback from peers and their teacher. Oral feedback was only marginally part of the study, as noted below.

Twenty L2 students (students for whom English was an additional language) in a first-year writing course at a four-year university participated in this study. Paper drafts were uploaded onto a website where other students could read them and comment on them. The e-feedback could be read on the site, but was also sent via email to students (and the instructor). Students wrote four papers as part of the study, and could revise each paper up to five times; 97 first drafts and 177 revisions were analyzed in the study. The author compared the comments received digitally to later revised drafts, to see what had been incorporated. He also interviewed the authors of the papers to ask what sparked them to make the revisions they did.

Tuzi combined the results from analyzing the essay drafts and e-feedback (to see what of the feedback had been incorporated into revisions) with the results of the interviews with students, to identify the stimuli for changes in the drafts. From these data he concludes that 42.1% of the revisions were instigated by the students themselves, 15.6% by e-feedback, 14.8% by the writing centre, and 9.5% by oral feedback (from peers, I believe), while for 17.9% of the revisions the source was “unknown.” He also did a few finer-grained analyses, showing how e-feedback fared in relation to these other sources at different levels of writing (such as punctuation, word, sentence, and paragraph), in terms of the purpose of the revision (e.g., new information, grammar), and more. In many analyses, the source of most revisions was the students themselves, but e-feedback ranked second in some (such as revisions at the sentence, clause, and paragraph levels, and adding new information). Oral feedback was always low on the list.
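Since the percentages above are easy to lose track of, here is a quick sketch tabulating the reported breakdown of revision sources; the figures are Tuzi’s, while the labels and the code itself are mine.

```python
# Revision sources as percentages of analyzed revisions, as reported by Tuzi (2004).
revision_sources = {
    "writer's own initiative": 42.1,
    "e-feedback from peers": 15.6,
    "writing centre": 14.8,
    "oral feedback": 9.5,
    "unknown": 17.9,
}

# The categories account for essentially all revisions (99.9%, with rounding).
print(f"total accounted for: {sum(revision_sources.values()):.1f}%")

# Ranked, mirroring the observation that self-initiated changes come first and
# e-feedback second in many of the finer-grained analyses.
for source, share in sorted(revision_sources.items(), key=lambda kv: -kv[1]):
    print(f"{share:5.1f}%  {source}")
```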

In the “discussion” section, Tuzi states:

Although e-feedback is a relatively new form of feedback, it was the cause of a large number of essay changes. In fact, e-feedback resulted in more revisions than feedback from the writing center or oral feedback. E-feedback may be a viable avenue for receiving comments for L2 writers. Another interesting observation is that although the L2 writers stated that they preferred oral feedback, they made more e-feedback-based changes than oral-based changes.

True, but note that in this study oral feedback was not emphasized. It was something students could get if they wanted, but only the e-feedback was focused on in the course, so little can be concluded here about oral vs. e-feedback. To be fair, that wasn’t really the point of the study. The point was simply to see how students use e-feedback, whether it is incorporated into revisions, and what kinds of revisions e-feedback tends to be used for. And Tuzi is clear towards the end: “Although [e-feedback] is a useful tool, I do not believe it is a replacement for oral feedback or classroom interaction …”. Different means of feedback should be available; this study just shows, he says, that e-feedback can be useful as one of them.

Continue reading

Problems with grading rubrics for complex assignments

In an earlier post I discussed a paper by D. Royce Sadler on how peer marking could be a means for students to learn how to become better assessors themselves, of their own and others’ work. This could not only help them become more self-regulated learners, but also prepare them for roles outside the university in which they will need to evaluate the work of others. In that essay Sadler argues against giving students preset marking criteria to use to evaluate their own work or that of other students (when that work is complex, such as an essay), because:

  1. “Quality” is more of a global concept that can’t easily be captured by a set of criteria, as it often includes things that can’t be easily articulated.
  2. As Sadler pointed out in a comment to the post noted above, having a set of criteria in advance predisposes students to look for only those things, and yet in any particular complex work there may be other things that are relevant for judging quality.
  3. Giving students criteria in advance doesn’t prepare them for life beyond their university courses, where they won’t often have such criteria.

I was skeptical about asking students to evaluate each other’s work without any criteria to go on, so I decided to read another one of his articles in which this point is argued for more extensively.

Here I’ll give a summary of Sadler’s book chapter entitled “Transforming Holistic Assessment and Grading into a Vehicle for Complex Learning” (in Assessment, Learning and Judgement in Higher Education, ed. G. Joughin. Dordrecht: Springer, 2009; DOI: 10.1007/978-1-4020-8905-3_4).

[Update April 22, 2013] Since the above is behind a paywall, I am attaching here a short article by Sadler that discusses similar points, and that I’ve gotten permission to post (from both Sadler and the publisher): Are we short-changing our students? The use of preset criteria in assessment. TLA Interchange 3 (Spring 2009): 1-8. This was a publication from what is now the Institute for Academic Development at the University of Edinburgh, but these newsletters are no longer online.

Note: this is a long post! That’s because it’s a complicated article, and I want to ensure that I’ve got all the arguments down before commenting.

Continue reading

Using SoTL to create guidelines for teaching

On the “Resources” page of the International Society for the Scholarship of Teaching and Learning (ISSOTL), I came across a link to “Guidelines on Learning that Inform Teaching,” a project by Adrian Lee, Emeritus Professor at the University of New South Wales in Sydney, Australia. Through his position as Pro Vice Chancellor (Education & Quality Improvement) at the University of NSW, Lee decided to work with others to create a set of guidelines, based on research, on what “works” in teaching.

The main ideas behind the project are stated as follows:

  • As academics, our task is to help students learn.
  • There is a vast research literature on how students learn and examples of good teaching based on this research.
  • As we claim to be research-intensive institutions, should not our teaching be based on this research?

But, as Lee notes, most faculty are very busy, and don’t have time to look into the relevant research on teaching, on top of their own research and teaching activities (I am lucky to be able to spend my sabbatical time doing lots of research into teaching, given that I’m in a teaching position, but most people don’t have that luxury). So he decided to distill the research into a set of guidelines, with links to relevant research papers.

Lee’s paper, “From Teaching to Learning,” linked at the bottom of the front page of the website, provides information (towards the end) on how the guidelines were developed. It also provides a great summary of some other excellent work done at UNSW on professional development for faculty re: teaching and learning–some things for other institutions to learn from!

There are 16 guidelines, each with its own dedicated page. On those pages are quotes from various research papers that are meant to illustrate the guideline. I found these somewhat helpful, but would have liked to see more narrative description of each guideline and what kind of research supports it. Perhaps one or two disciplinary examples would have helped illustrate the guideline a bit better, though those are all collected in a different section (see below). There are also links to research papers related to each particular guideline, for those who are interested in seeing some of its sources. Of course, this effort is hampered by the fact that so many SoTL articles are not open access, so they can’t be easily linked to. Or rather, he could have put links to them, with notes that they are subscription only–so those whose institutions do have subscriptions could read them. Still, interested faculty could look up some of the non-linked papers themselves if their institutions have subscriptions.

There is also a “Discipline-specific exemplars” section, which has links to ways people are putting the guidelines into action (whether they are consciously aware of these specific guidelines or not!) in different disciplines. The “humanities” section is fairly sparse at this point, but Lee asks for exemplars and provides his email address.

Finally, each guideline has a “toolkit” (linked at the bottom of its page) designed to encourage reflection by individual teachers. It asks instructors to think of examples, to engage in reflection on the guideline, to consider obstacles to implementing it, and more. I find the toolkit a bit sparse, and think that more instruction could be helpful: more specific questions to guide reflection, for example, and perhaps a prompt asking instructors to come up with a way they could use the guideline in their own teaching in the future.

Lee’s larger project is to encourage institutions to set up their own guidelines, for a similar purpose–to help faculty think about the current research in a distilled fashion, and see examples of work being done at their own institution and elsewhere that conforms to the guidelines. On the homepage he links to three universities that have used the 16 guidelines as a model for their own set of guidelines. For example, MIT has a page listing exemplars of the guidelines from MIT itself.

 

I think this is a very important concept–for those who don’t have the time to go to workshops on teaching where they might get similar information (or those for whom such workshops are not available), it’s helpful to have research-based information on quality teaching in higher ed available. Of course, a project like this requires upkeep, and some of the links on the site are currently broken (including the link to one of the three universities with their own, similar guidelines, University of Bedfordshire-UK). In addition, the latest papers linked to on the site are from the mid-2000s, and it would be nice to have later papers available as well. Still, what Lee is really hoping will happen is that universities create their own guidelines and sites, and keep these up to date; and that’s an excellent idea.

Do you know of any colleges or universities that have similar sorts of guidelines, backed by research? If so, please give links in the comments and I’ll send them to Adrian Lee! Or, if you have some discipline-specific exemplars for one or more of the guidelines, please post those below.
P.S. I’m off and on out of town for the next month, as it is summer here in Australia (where I am for sabbatical), so I may be a little slow in responding to comments!

 

How does giving comments in peer assessment affect students? (Part 3)

This is the third post in a series summarizing empirical studies that attempt to answer the question posed in the title. The first two can be found here and here. This will be the last post in the series, I think, unless I find some other pertinent studies.

Lundstrom, K. and Baker, W. (2009) To give is better than to receive: The benefits of peer review to the reviewer’s own writing, Journal of Second Language Writing 18, 30-43. DOI: 10.1016/j.jslw.2008.06.002

This article focuses on students in “L2” classes (second language, or additional language), and asks whether students who review peers’ papers improve their own (additional language) writing more than students who only receive peer reviews and attempt to incorporate the feedback, without giving comments on peers’ papers themselves.

Participants were 91 students enrolled in nine sections of additional language writing classes at the English Language Center at Brigham Young University. The courses were at two levels out of a possible five: half the students were in level 2, “high beginning,” and half were in level 4, “high intermediate” (33). The students were then divided into a control and experimental group:

The first group was composed of two high beginning and three high intermediate classes, (totaling forty-six students). This group was the control group (hereafter ‘‘receivers’’) and received peer feedback but did not review peers’ papers (defined as compositions written by students at their same proficiency level). The second group was composed of two high beginning classes and two high intermediate classes, (totaling forty-five students), and made up the experimental group, who reviewed peer papers but did not receive peer feedback (hereafter ‘‘givers’’). (33; emphasis mine)

Research questions and procedure to address them

Research questions:

1. Do students who review peer papers improve their writing ability more than those who revise peer papers (for both beginning and intermediate students)?

2. If students who review peer papers do improve their writing ability more than those who revise them, on which writing aspects (both global and local) do they improve? (32)

Continue reading

How does giving comments in peer assessment impact students? (Part 2)

This is the second post looking at published papers that use empirical data to answer the question in the title. The first can be found here. As noted in that post, I’m using “peer assessment” in a broad way, referring not just to activities where students give grades or marks to each other, but more to the qualitative feedback they provide to each other (as that is the sort of peer assessment I usually use in my courses).

Here I’ll look at just one article on how giving peer feedback affects students, as this post ended up being long. I’ll look at one last article in the next post (as I’ve only found four articles on this topic so far).

Lu, J. and Law, N. (2012) Online peer assessment: effects of cognitive and affective feedback, Instructional Science 40, 257-275. DOI 10.1007/s11251-011-9177-2. This article has been made open access, and can be viewed or downloaded at: http://link.springer.com/article/10.1007%2Fs11251-011-9177-2

In this study, 181 students aged 13-14 in a Liberal Studies course in Hong Kong participated in online peer review of various parts of their final projects for the course. They were asked both to engage in peer grading and to give peer feedback to each other in groups of four or five. The final project required various subtasks, and peer grading/feedback was not compulsory; students could choose which subtasks to give their peers grades and feedback on. The grades were given using rubrics created by the teacher for each subtask, and both grades and feedback were given through an online program specially developed for the course.

Research Questions

  1. Are peer grading activities related to the quality of the final project for both assessors and assessees?
  2. Are different types of peer …  feedback related to the quality of the final projects for both assessors and assessees? (261)

Continue reading

Learnist boards on SoTL and Open Access

During the past week or so I’ve been working on finishing a couple of things before their deadlines, and during breaks in working on those I’ve been having fun with Learnist: http://learni.st/. Learnist is sort of like Pinterest but focused more on learning about new things. The idea is that someone who has some knowledge about something designs a “learnboard” about it, and collects things from the web (including PDFs, Google docs and books, Prezi presentations, Kickstarter campaigns, and more), plus their own materials if they like, and puts them together in an order that helps others learn the topic. That person also adds commentary to help explain the artifact posted. There are places to comment on each artifact or on the whole board, and others can suggest new things to be put on the board.

Right now Learnist is in beta, and I am a bit frustrated by the fact that there is nothing on the front page to explain it (though there are learnboards on Learnist itself that give you an overview–you just have to type in “learnist” on the search bar at the top to find them).

You have to send a request to them if you want an account to create your own boards or make comments, but you don’t have to have an account to see all the existing boards. You can see mine by following the links below. They are still in progress, so more will likely be added later (esp. to the first and third ones).

 

1. What is the Scholarship of Teaching and Learning and How Can I Get Involved?

2. Open Access Journals in the Scholarship of Teaching and Learning

3. Open Access Scholarly Publishing (and Why It’s Important)

 

And you could also check out my personal page on Learnist, which lists any new learnboards I may have added since writing this post, as well as the boards I myself “like” and “follow.”

Update Dec. 6, 2012

I received an email from someone over at Grockit, the company that runs Learnist, asking for suggestions to address the problem I noted above: that it’s confusing to figure out how it all works when you arrive at the page for the first time. Nice to hear they are concerned and open to suggestions!

They also let me know I could invite others to use Learnist and they could bypass the usual waiting period through that means. So if you want an invite, send me an email (I may have to ask you a couple of questions to make sure you’re a real person and really interested in Learnist, since I’ll personally be inviting you!). Find my email address on my personal website at: https://blogs.ubc.ca/christinahendricks

 

How does giving comments in peer assessment impact students? (Part 1)

Some colleagues and I are brainstorming various research we might undertake regarding peer assessment, and in our discussions the question in the title of this post came up. I am personally more interested in the comments students can give to each other in peer assessment than in students giving marks/grades to each other. Students who give comments on each other’s work are not only affected by receiving peer comments, of course, but also by the process of giving them. How does practice in giving comments and evaluating others’ work affect students’ own work or the processes they use to produce it?

I’ve already looked at a couple of articles that address this question from a somewhat theoretical (rather than empirical) angle (see earlier posts here and here). As discussed in those posts, it makes sense to think that practice in evaluating the work of peers could help students get a better sense of what counts as “high quality,” and thus have that understanding available to use in self-monitoring so as to become more self-regulated.

In this post I summarize the findings of two empirical articles looking at the question of whether and how providing feedback to others affects the quality of students’ own work. I will continue this summary in another post, where I look at another few articles.

(1) Li, L., Liu, X. and Steckelberg, A.L. (2010) Assessor or assessee: How student learning improves by giving and receiving peer feedback, British Journal of Educational Technology 41:3, 525-536. DOI: 10.1111/j.1467-8535.2009.00968.x

In this study, 43 undergraduate teacher-education students engaged in online peer assessment of each other’s WebQuest projects. Each student evaluated the projects of two other students. They used a rubric, and I believe they gave both comments and marks to each other. Students then revised their projects, having been asked to take the peer assessment into account and decide what to use from it. The post-peer-assessment projects were marked by the course instructor.

Continue reading

Literature on written and oral peer feedback

For context on why I’m interested in this, see the previous post.

I’ve done some searches into the question of oral and written peer feedback, and been surprised at the paucity of results. Or rather, the paucity of results outside the field of language teaching, or teaching courses in a language that is an “additional language” for students. I have yet to look into literature on online vs. face-to-face peer review as well. Outside of those areas, I’ve found only a few articles.

1. Van den Berg, I., Admiraal, W., & Pilot, A. (2006) Designing student peer assessment in higher education: analysis of written and oral peer feedback, Teaching in Higher Education 11:2, 135-147. DOI: http://dx.doi.org/10.1080/13562510500527685

In this article Van den Berg et al report on a study in which they looked at peer feedback in seven different courses in the discipline of history (131 students). These courses had peer feedback designs that differed according to things such as: what kind of assignment was the subject of peer feedback, whether the peer feedback took place alongside teacher feedback or whether there was peer feedback only, whether students who commented on others got comments from those same others on their own work or not, how many students participated in feedback groups, and more. Most of the courses had both written and oral peer feedback, though one of the seven had just written peer feedback.

The authors coded both the written and oral feedback along two sets of criteria: feedback functions and feedback aspects. I quote from their paper to explain these two things, as they are fairly complicated:

Based on Flower et al. (1986) and Roossink (1990), we coded the feedback in relation to its product-oriented functions (referring directly to the product to be assessed): analysis, evaluation, explanation and revision. ‘Analysis’ includes comments aimed at understanding the text. ‘Evaluation’ refers to all explicit and implicit quality statements. Arguments supporting the evaluation refer to ‘Explanation’, and suggested measures for improvement to ‘Revision’. Next, we distinguished two process-oriented functions, ‘Orientation’ and ‘Method’. ‘Orientation’ includes communication which aims at structuring the discussion of the oral feedback. ‘Method’ means that students discuss the writing process. (141-142)

By the term ‘aspect’ we refer to the subject of feedback, distinguishing between content, structure, and style of the students’ writing (see Steehouder et al., 1992). ‘Content’ includes the relevance of information, the clarity of the problem, the argumentation, and the explanation of concepts. With ‘Structure’ we mean the inner consistency of a text, for example the relation between the main problem and the specified research questions, or between the argumentation and the conclusion. ‘Style’ refers to the ‘outer’ form of the text, which includes use of language, grammar, spelling and layout. (142)

They found that students tended to focus on different things in their oral and written feedback. Written feedback over all the courses tended to be more product-oriented than process-oriented, with a focus on evaluation of quality rather than explaining that evaluation or offering suggestions for revision. In terms of feedback aspect, written feedback focused more on content and style than structure (143).

Continue reading