Category Archives: Assignments

Authentic assessments in two PHIL classes

For the blended learning course I’m taking on teaching a blended learning course, we were asked to design an “authentic assessment” for one of our courses. An authentic assessment, from what I understand, is one in which students are either simulating or actually doing the very sorts of activities you hope they will be able to do outside of class, after they take the course. In addition, here is how some of the text of the course I’m taking describes it:

According to Eisner (1993), authentic assessment projects should reveal how students go about solving the problems (process) and should have more than one correct solution. They should:

  • Promote ‘how’ knowledge as opposed to the ‘what’ knowledge measured in ‘traditional’ assignments;
  • Provide a way for students to develop an understanding of complex course material that will serve them beyond the classroom;
  • Encourage higher-order cognitive skills;
  • Involve students more extensively in the development of the assessment and the grading criteria.

PHIL 102: Introduction to Philosophy

Here is an idea for an authentic assessment activity for my Introduction to Philosophy course.

Rationale

In PHIL 102, Introduction to Philosophy, the main theme is investigating what philosophy is, what philosophers do, and the value of these things: both by reading what philosophers themselves have said about these questions, and by considering what the philosophers whose texts we are reading are doing with their lives and their writing.

One of the things I’d like students to be able to do by the end of the course is to recognize ways in which they themselves engage in philosophical activity, in their everyday lives.

Activity

Students will write a reflective blog post towards the end of the term in which they discuss two things they do in their lives that could count as philosophical thinking or as addressing philosophical questions. They will also add a short summary of their post to a class wiki page on this question.

Learning objective addressed: “Explain at least two ways in which you yourself use philosophical thinking or address philosophical questions in your everyday life.”

Instructions

Now that the course is nearly over, you should have a pretty good idea of what philosophy is and what philosophers do. It’s time to consider the ways in which you yourself engage in philosophy. This assignment consists of two parts:

1. Write a blog post on the class blog in which you do the following:

  • Discuss at least two ways in which you yourself use philosophical thinking or consider philosophical questions in your own life, your own day-to-day activities, your major life decisions, etc.
  • Explain why these could be considered “philosophical,” referring to at least one of the philosophers or texts or ideas we’ve discussed in class.
  • This blog post should be at least 300 words long, but no longer than 800 words.

2. After you’ve completed your blog post, contribute your two ways to the class wiki page for this assignment [give URL for this here].

  • Write a one or two-sentence summary of each of the ways you engage in philosophical thinking or activity and put them as bullet points on the wiki page.
  • Christina will then organize these under general categories after they are posted, to make them easier to read through, and we’ll discuss the results in class.

Marking criteria

This assignment will be marked using a three-level system:

1. Plus:

  • Your blog post discusses at least two ways in which you engage in philosophical thinking or address philosophical questions in your life
  • Your blog post adequately explains how these things are philosophical, referring to at least one of the philosophers/texts/ideas we’ve discussed in class.
  • Your blog post is between 300 and 800 words long.
  • You wrote a one- or two-sentence summary of each of the two things you discussed in your post, on the class wiki page.
  • Both the post and the wiki entry were completed by the due date and time.

2. Minus:

  • Your blog post discusses only one way in which you engage in philosophical thinking or address philosophical questions in your life, or
  • Your blog post does not adequately explain how this/these activities are philosophical, and/or doesn’t refer to at least one of the philosophers/texts/ideas we’ve discussed in class, or
  • Your blog post is less than 300 words or more than 800 words, or
  • Your blog post was fine, but you didn’t submit your one- or two-sentence summary of each point discussed in the post on the wiki page, or
  • Your blog post and/or wiki entry were submitted after the due date and time, but no later than six days afterwards.

3. Zero:

  • Your post and/or wiki page entry was not completed, or
  • Your blog post and/or wiki entry were completed seven or more days after the due date.

 

Thoughts/questions

I wanted this assignment to be useful not only for the students writing the posts, by getting them to think about how philosophy plays a role in their own lives, but also for others. That’s why I thought of having them post to a wiki page–there are often over 100 students in this course, and reading that many different blog posts would be too much for anyone else visiting the course (my courses are on open sites, on UBC Blogs, so anyone can visit them; students always have the option of posting under a pseudonym, or with a password so only the rest of the class can read, or privately to me if they choose).

But just having a list of one- or two-sentence summaries on a wiki page would be too messy, too. So I thought I’d try to categorize them myself after they’re posted, and say something like: 15 people said x, 8 people said y, etc.

Of course, this is more work for me. Any ideas on how to make it so that we have a kind of summary document that might be useful for students in the class as well as others, without me having to go through and categorize all the entries? It’s okay if I have to do so (it’s just busy work, and easy), but if there are other ways I’d love to hear them!
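One possibility I can imagine (just a sketch of an idea, not something I’ve tried): have each student start their wiki bullet with a short, self-chosen tag followed by a colon, and then tally the tags with a small script. Everything in the sketch below is hypothetical: the tag convention, the file name, and the assumption that the wiki page can be exported as plain text with one bullet per line.

    # Count how many students gave each kind of answer, assuming each
    # wiki bullet starts with a self-chosen tag and a colon, e.g.
    # "questioning assumptions: I often ask why we do things the way we do"
    from collections import Counter

    counts = Counter()
    # "wiki_entries.txt" is a hypothetical plain-text export of the wiki page
    with open("wiki_entries.txt", encoding="utf-8") as f:
        for line in f:
            line = line.strip().lstrip("*-• ").strip()  # drop any list markers
            if ":" not in line:
                continue  # skip untagged lines
            tag = line.split(":", 1)[0].strip().lower()
            counts[tag] += 1

    # Print a "15 people said x, 8 people said y" style summary
    for tag, count in counts.most_common():
        print(f"{count} people said: {tag}")

Students would no doubt word their tags differently, so I’d still do a quick manual pass to merge near-duplicates, but the counting itself would be automatic.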

 

PHIL 230: Introduction to Moral Theory

Here is an idea for an authentic assessment for this course. Students will be writing in a “moral issue” journal throughout the course, starting with what they think about a particular moral issue, then comparing this with what they think each of the philosophers we study would say about it, and then concluding with their thoughts on the value of trying to come up with moral theories such as the ones we’ve studied. For this assignment, I’d like students to be able to take what they’re reflecting on in their moral issue journals and refine part of it into a formal essay.

This way, they’ll be using what they have learned in the course in thinking about moral issues they may face around them in their everyday lives.

Moral issue paper

For this paper, you’ll be using what you’ve reflected on in your moral issue journal and writing a formal paper. The idea here, as with the moral issue journal, is to apply the moral theories we’ve been studying to a moral issue that you might face in your life, or one that involves a larger group of people such as a community or nation. In this way, you’ll be making connections between what we’re studying in class and your life beyond.

Instructions

Using the moral issue you’ve been focusing on in your moral issue journal, write a paper arguing for a view about how a consequentialist and a Kantian would each approach the issue. Also include your own view on whether one approach is better than the other for this particular issue, and why (or why not; it may just be that the two are very different and there’s no clear reason to choose one over the other).

Parts of the essay

Note from the Guidelines for essays handout that your essay should have an introduction with your thesis statement, a conclusion that wraps up the essay in some way, and body paragraphs that provide adequate arguments for the conclusion.

Your thesis should include (note that a thesis can be more than one sentence):

  • A summary statement of what a consequentialist and a Kantian would say about the issue
  • A summary of your view on whether or not one approach is better

Be sure to explain the moral issue you’re addressing, early on in the essay.

Length

The essay should be between 5 and 8 pages long, typed and double-spaced, with margins between 0.75 and 1 inch and a font size between 11 and 12 points. [Or 2000-3000 words?]

Quotes, paraphrases, and citing sources

Quotes vs paraphrases: It’s usually best to have a mixture of both. You should use quotes where it’s important to give the author’s exact words, where the words themselves help you to make a point. This is often the case when a passage can be interpreted in more than one way, and you want to justify your interpretation with the words of the author. You can also use quotes where you need an extended passage to make your point (be sure to indent quotes over 4 lines long, 5 spaces on the left).

Citing sources in the paragraphs: Whether you quote or paraphrase a specific point from the text, you should give a page number or section/paragraph number to show where the information can be found in the text. You may choose your favourite citation style, or you can just give the author’s last name plus the page or section number, in parentheses: (Kant 55). (This is the MLA style.) If you are citing more than one text by an author, give a shortened version of the title of the text in the parentheses as well: (Kant, Religion 99).

Citing sources at the end of the essay: Be sure to give a works cited page that includes all the texts you cited in parentheses in the essay. Again, you can use any citation style you wish, but be sure to include all the information that that citation style requires. For example, you can see how to create a Works Cited list in MLA style here [give URL].

Avoid plagiarism: It is the policy of the Instructor to prosecute plagiarism to the fullest extent allowed by UBC. Any use of another’s words, including just a sentence or part of a sentence, without citation constitutes plagiarism; so does use of another’s ideas without citation. To avoid plagiarism, always give a citation whenever you have taken ideas or direct words from another source. Please see this page on the course website for information on how to avoid plagiarism, especially when you’re paraphrasing ideas or quoting from another source: quite a lot of plagiarism is not deliberate, but happens because students don’t understand the rules! https://blogs.ubc.ca/phil102/resources/

Depth of explanation and narrowness vs. breadth and superficiality: It’s usually best to focus your paper on a small number of claims and argue for them in some depth, rather than trying to range widely over a very large number of claims that you then only have space to justify very quickly. Pick the strongest points for each of consequentialism and Kantianism, and focus on those.

Audience you should write for: Write this essay as if you were writing for someone who is not in the class, has not read the texts, and has not attended the class meetings (say, a friend or family member). Explain your view, and the arguments of the philosophers you discuss, in as much depth as would be needed to make them clear to such an audience.

Marking: See the marking rubric posted here on the course website [give URL].

Late penalty: 5 points off per weekday late, unless otherwise agreed to by the Instructor (may require documentation). I do not generally give extensions due to students’ workloads, only for things that are unexpected and unavoidable such as medical issues; so plan ahead if you have multiple assignments due around the time that this essay is due!

“Students as Producers” Assignments in Intro to PHIL

For the blended learning course I’m doing on teaching a blended learning course, we were asked to think about possible assignments that could fit the “students as producers” model, where that involves projects that “encompass open-ended problems or questions, an authentic audience and a degree of autonomy” (according to the text in the course). Here’s a nice overview by Derek Bruff of the idea of “students as producers.”

 

Here are two ideas for “student as producer” assignments for my Introduction to Philosophy course (PHIL 102).

1. Shared notes on the reading

One person in each small group (of 4-5 students) is responsible for taking notes on the reading and posting them before any lecture on that section. Students will sign up for specific dates to finish their notes by.

Notes must include:

  • A statement of what you think the main point/main conclusion in this section of the reading is. If there is more than one, pick just one of the main conclusions in the reading. Refer to a page number where this conclusion can be found (or section and paragraph number, if the reading has no page numbers).
  • How the author argues for this point: give the reasons/premises the author gives to support the conclusion. Refer to page numbers where these premises can be found (or section and paragraph numbers, if the reading has no page numbers).
  • Give one or more comments about what you’ve discussed above: Is there anything you disagree with? If so, why? Or is there something in it that you find particularly interesting? Why? Or do you have any questions about it?

These notes must be typed and shared with the class, on the class blog [insert URL for where to share them]. Be sure to tag the post you’ve written with the last name of the author (e.g., Plato, Epicurus).

Anyone in the class can review the sets of notes for each author, which is a great resource for reviewing the text! Any student can respond to a question posed in one of the posts, or make a comment in response to what a student has said about the reading; you don’t have to just do it for the person from your small group.

 

Since the above is only partly open-ended (the first two required parts of the notes are not very open-ended), I thought of another assignment as well.

 

2. What would it be like to live like an Epicurean or a Stoic?

For this activity, you will need to imagine what it would be like to live as either an Epicurean or a Stoic (choose one). You’ll need to describe some aspects of your current life and then explain how they would change if you lived that way. For example, you could consider how the following might be different (or anything else you deem relevant):

a. What you choose to study/what your career might be

b. What you spend your money on

c. What your day-to-day routine is like, the main choices you make each day, and how they might change

Write a blog post on the class blog describing how your life would be different if you were an Epicurean or a Stoic. Discuss at least two ways that your life would be different. Include in your post a reflection on whether you think this would be a good way to live or not, and why.

  • Be sure to tag it either “Epicureanism” or “Stoicism,” and put it under the category “Live like a…”
  • Your blog post should be at least 400 words long, but no more than 900
  • Refer to the text with page numbers or section/paragraph numbers to show where the author says something that justifies why your life would be the way you say it would

This activity will be marked on a three-level scale:

1. Plus:

  • You have described at least two aspects of your life that would be different and why, with specific page or paragraph references to at least one of the texts we’ve read
  • You have included a reflection on whether you think this would be a good way to live or not, and why
  • The blog post is between 400 and 900 words long

2. Minus:

  • You have described only one aspect of your life that would be different, and/or
  • You have not adequately explained why your life would be different, and/or
  • You have not given specific references to the text(s) where needed to support your claims, and/or
  • You have not included a reflection on whether you think this would be a good way to live or not, and why, and/or
  • The post is less than 400 words or more than 900 words long, and/or
  • The post is late (one to six days), without an acceptable excuse for being so

3. Zero:

  • The post was not completed, or
  • It was completed seven or more days late

 

How are these related to the “student as producer” idea?

I was thinking of “student as producer” as having to do with students making things to share with a wider audience, producing content that would be useful to others. The first assignment does that for other students in the course; the second, if the blog posts are on a public site (which my class blogs usually are) rather than a closed site, may provide information that could be interesting and useful to a wider audience trying to understand what Epicureanism and Stoicism are all about.

I was also thinking that the second assignment could be considered a kind of “authentic assignment,” in that many of the ancient philosophers thought that the purpose of philosophy was to change your life, to cause you to live in a better way, to be happier. I considered making them actually live like Epicureans or Stoics for a day, but I’m not sure one would get much out of just one day of doing so. Maybe a week would give you a taste, but that may be too much to ask! So I decided to do a simulation instead.

I’d love to hear anyone’s thoughts on how I might make either one of these assignments more useful to students or a wider audience, or more “authentic.” I considered adding a collaborative element to the second one, having them do it in groups, but I got stuck on whose life they would start with to consider how that life would change if lived as an Epicurean or Stoic, and then I got stuck on how they’d share the duties for writing the blog post about it. Any suggestions here would be great!

Providing feedback to students for self-regulation

On Nov. 21, 2013, I did a workshop with graduate students in Philosophy at UBC on providing effective feedback on essays. I tried to ground as much of it as I could in work from the Scholarship of Teaching and Learning.

Here are the slides for the workshop (note, we did more than this…this is just all I have slides for):

 

Here is the works cited for the slides:

Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31(2), 219-233.

 

Chanock, K. (2000). Comments on essays: Do students understand what tutors write? Teaching in Higher Education, 5(1), 95-105.

 

Lizzio, A. and Wilson, K. (2008). Feedback on assessment: Students’ perceptions of quality and effectiveness. Assessment and Evaluation in Higher Education, 33(3), 263-275.

 

Lunsford, R.F. (1997). When less is more: Principles for responding in the disciplines. New Directions For Teaching and Learning, 69, 91-104.

 

Nicol, D.J. and Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.

 

Sadler, D.R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.

 

Walker, M. (2009). An investigation into written comments on assignments: do students find them usable? Assessment and Evaluation in Higher Education, 34(1), 67-78.

 

Weaver, M.R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment and Evaluation in Higher Education, 31(3), 379-394.

Summary of research on modes of peer assessment

I have been doing quite a few “research reviews” of articles on peer assessment–posts where I summarize the articles and offer comments about them. Lately I’ve been reading articles on different modes of peer assessment: written, oral, online, face to face, etc. Here, I am going to try to pull together what that research has said, to see whether anything can really be concluded about these issues from it.

In what follows, I link to the blog posts discussing each article. Links to the articles themselves can be found at the bottom of this post.

I created PDF tables to compare/contrast the articles under each heading. They end up being pretty small here on the blog, so I also have links to each one of them, below.

Peer feedback via asynchronous, written methods or synchronous, oral, face to face methods

This is the dichotomy I am most interested in: is there a difference when feedback is given asynchronously, in written form, and when it is given synchronously, as spoken words face to face? Does the feedback itself differ? Might one form of feedback be more effective than the other in terms of being taken up in later revisions of essays?

Do the comments differ in the two modes of peer feedback, and are they used differently in later drafts?

The PDF version of the table below can be downloaded here.

van den Berg, Admiraal and Pilot (2006) looked at differences in what was said in peer feedback on writing assignments when it was written (on standardized peer feedback forms, used for the whole class) and when it was given in oral, face to face discussions. They found that written feedback tended to be more focused on evaluating the essays, saying what was good or bad about them, and less on giving explanations for those evaluative comments or on providing suggestions for revision (though this result differed between the courses they analyzed). In the oral discussions, there was more of a balance between evaluating content, explaining that evaluation, and offering revisions. They also found that both written and oral feedback focused more on content and style than on structure, though there were more comments on structure in the written feedback than in the oral. The authors note, though, that in the courses in which peer feedback took place on early drafts or outlines, there was more feedback on structure than when it took place on later drafts. They conclude: “A combination of written and oral feedback is more profitable than written or oral feedback only” (146).

Hewett (2000) looked at differences in peer feedback between an oral, face to face environment and an electronic, text-based environment. She found that the talk in the oral communication was much more interactive, with students responding to each other’s comments, giving verbal cues that they were following along, and also working together to generate new ideas. The text-based, online feedback was much less like a conversation, with students commenting on the papers at hand but not interacting very much with each other. Perhaps unsurprisingly, then, while the feedback in the written environment was mostly focused on the content of the essay being evaluated, the discussion in the oral environment ranged more widely. Hewett also analyzed essay drafts and peer comments from both environments to see if the peer discussion and comments influenced later drafts of essays. She found that in the oral environment, students made more use in their own work of ideas that came up in the peer discussion of others’ essays, or that they had voiced themselves. Hewett concludes that a combination of oral discussion and asynchronous, written comments would be good, using the former for earlier stages of writing–since in oral discussion there can be more talk in which students speculate about wider issues and work together to come up with new ideas–and the latter for revisions focused more on content.

What are students’ views of each mode?

A PDF version of the following table can be downloaded here.

Figl et al. (2006) surveyed students in a computer science course who had engaged in peer assessment of a software project both face to face and through an online, asynchronous system that allows criticisms to be recorded and comments to be added, as on a discussion board. There wasn’t a clear preference for one mode over another overall, except in one sense: about half of the students preferred using the face to face mode for discussion within their own teams, and with their partner teams (those they were giving feedback to and receiving feedback from). Students reported that there was not as much discussion of the feedback in the online format, whether within the team or with the partner teams, and they valued the opportunity for that discussion. Figl et al. conclude that it would be best to combine online, asynchronous text reviews with face to face activities, perhaps even with synchronous chat or voice options.

The study reported in Guardado & Shi (2007) focused on asynchronous, written feedback for the most part; the authors recorded online, discussion-board feedback on essays and compared it with a later draft of each essay. They wanted to know whether students used or ignored these peer comments, and what they thought of the experience of receiving the asynchronous, written feedback (they interviewed each student as well). All of the students had engaged in face to face peer feedback before the online mode, but the face to face sessions were not recorded, so the nature of the comments in each mode was not compared. Thus, the results from this study that are most relevant to the present concern are those that come from the interviews, in which the students compared their experiences of face to face peer feedback with the online, written, asynchronous exchange of feedback. Results were mixed, as noted in the table, but quite a few students said they felt more comfortable giving feedback without their names attached, while a significant number of students preferred the face-to-face mode because it made interacting with the reviewer/reviewee easier. The authors conclude that “online peer feedback is not a simple alternative to face-to-face feedback and needs to be organized carefully to maximize its positive effect” (458).

Cartney (2010) held a focus group with ten first-year students in a social work course who had engaged in a peer feedback exercise in which essays, comments on essays, and follow-up discussion were all to take place over email. Relevant to the present concern is that the focus group discussion revealed that several groups did not exchange feedback forms via email but decided to meet up in person instead, in order to have a more interactive discussion. Some groups did exchange written, asynchronous, online feedback, citing discomfort with giving feedback to others to their “faces.” The author concludes that there may be a need to use more e-learning in curricula in order for students to become more accustomed to using it for dialogue rather than one-way communication. But I also see this as an indication that some students recognized a value in face to face, oral, synchronous communication.

Peer feedback via electronic, synchronous text-based chat vs. oral, face to face methods

This dichotomy contrasts two sorts of synchronous methods for peer feedback and assessment: those taking place online, through text-based systems such as “chats,” and those taking place face to face, orally.

Do comments given synchronously through text-based chats differ from those given orally, face to face? And do these two modes of commenting affect students’ revisions of work differently?

A PDF version of both of the tables below can be downloaded here.

Sullivan & Pratt (1996) looked at two writing classes: in one class, all discussions and peer feedback took place through a synchronous, electronic, text-based chat system, and in the other, discussions and peer feedback took place face to face, orally. They found that writing ability increased slightly more in the computer-assisted class than in the traditional class, and that there were differences in how the students spoke to each other in the electronic, text-based chat vs. face to face, orally. The authors stated that the face to face discussion was less focused on the essay being reviewed than the online chats were (but see my criticisms of this interpretation here). They also found that the electronic chats were more egalitarian, in that the author of the essay did not dominate the conversation in them in the same way as happened in the face to face discussions. The authors conclude (among other things) that discussions through online chats may be beneficial for peer assessments, since their study “showed that students in the computer-assisted class gave more suggestions for revision than students in the oral class” (500), and since there was at least some evidence for greater writing improvement in the “chat” class.

Braine (2001) (I haven’t done an earlier summary of this article on my blog) looked at students in two different types of writing classes in Hong Kong (taught in English), similar to those discussed in Sullivan & Pratt (1996): one class had all discussions and peer assessment taking place orally, and the other had these taking place on a “Local Area Network” that allows for synchronous, electronic, text-based chats. He looked at improvement in writing between a draft of an essay and a revision of that essay (final version) after peer assessment. Braine was testing students’ ability to write in English only, through the “Test of Written English.” He found that students’ English writing ability improved a bit more in the face-to-face class than in the computer-mediated class, and that there were significant differences in the nature of discussions in the two modes. He concluded that oral, face-to-face discussions are more effective for peer assessment.

Liu & Sadler (2003) contrasted two modes of peer feedback in two composition classes, one of which wrote comments on essays by hand and engaged in peer feedback orally, face to face, and the other of which wrote comments on essays digitally, through MS Word, and then engaged in peer discussion through an electronic, synchronous, text-based chat during class time. The authors asked about differences in these modes of commenting, and whether they had a differential impact on later essay revisions. Liu & Sadler were not focused on comparing the asynchronous commenting modes with the synchronous ones, but their results show that there was a higher percentage of “global” comments in both of the synchronous modes, and a higher percentage of “local” comments in the asynchronous ones. They also found a significantly higher percentage of “revision-oriented” comments in the oral discussion than in the electronic chat. Finally, students acted more often on the revision-oriented comments given in the “traditional” mode (handwritten, asynchronous comments plus oral discussion) than in the computer-mediated mode (digital, asynchronous comments plus electronic, text-based chat). They conclude that for asynchronous modes of commenting, using digital tools is more effective than handwriting (for reasons not discussed here), and for synchronous modes of commenting, face to face discussions are more effective than text-based, electronic chats (219-221). They suggest combining these two methods for peer assessment.

Jones et al. (2006) studied interactions between peer tutors in an English writing centre in Hong Kong and their clients, both in face to face meetings and in online, text-based chats. This is different from the other studies, which were looking more directly at peer assessment in courses, but the results here may be relevant to what we usually think of as peer assessment. The authors were looking at interactional dynamics between tutors and clients, and found that in the face-to-face mode, the relationship between tutors and clients tended to be more hierarchical than in the electronic, online chat mode. They also found that the subjects of discussion differed between the two modes: the face-to-face mode was used most often for “text-based” issues, such as grammar and word choice, while in the electronic chats the tutors and clients spoke more about wider issues such as the content of essays and the process of writing. They conclude that since the two modes differ and both serve important purposes, it would be best to use both.

Implications/discussion

This set of studies is not the result of a systematic review of the literature; I did not follow up on all the other studies that cited these, for example. A systematic review of the literature might add more studies to the mix. In addition, there are more variables that should be considered (e.g., whether the students in the study underwent peer assessment training, how much/what kind; whether peer assessment was done using a standardized sheet or not in each study, and more).

Nevertheless, I would like to consider briefly whether these studies provide any clear direction regarding written vs. oral, face-to-face peer assessment.

For written, asynchronous modes of peer assessment (e.g., writing on essays themselves, writing on peer assessment forms) vs. oral, face-to-face modes, the studies noted here (van den Berg, Admiraal and Pilot (2006) and Hewett (2000)) suggest that in these two modes students give different sorts of comments, and that for a fuller picture peer assessment should probably be conducted in both modes. Regarding student views of the two modes (Figl et al. (2006), Guardado & Shi (2007), Cartney (2010)), evidence is mixed, but there are at least a significant number of students who prefer face-to-face, oral discussions if they have to choose between those and asynchronous, written peer assessment.

For written, synchronous modes of peer assessment (e.g., electronic, text-based chats) vs. oral, face-to-face modes, the evidence here is all from students for whom English is a foreign language, but some of the results might still be applicable to other students (determining this would require further discussion than I can engage in now). All that can be said here is that the results are mixed. Sullivan & Pratt (1996) found some, though not a lot of, evidence that students using e-chats improved their writing more than those using oral peer assessment, but Braine (2001) found the opposite. However, they were using different measures of writing quality. Sullivan & Pratt also concluded that the face-to-face discussions were less focused and effective than the e-chat discussions, while Braine concluded the opposite. This probably comes down in part to interpretation of what “focused” and “effective” mean.

Liu & Sadler (2003) argued that face-to-face modes of synchronous discussion are better than text-based, electronic, synchronous chats–opposing Sullivan & Pratt–because there was a higher percentage of “revision-oriented” conversational turns (as a % of total turns) in the face-to-face mode, and because students acted on the revision-oriented comments more in the traditional class (both writing comments on paper and oral, face-to-face peer discussion) than in the computer-mediated class (digital comments in MS Word and e-chat discussions). Jones et al. (2006) found that students and peer tutors talked about different types of things, generally, in the two modes and thus concluded that both should be used. But that study was about peer tutors and clients, which is a different situation from peer assessment in courses.

So really, little can be concluded, I think, from looking at all these studies, except that it does seem that students tend to say different types of things in different modes of communication (written/asynchronous, written/synchronous, oral/face-to-face/synchronous), and that those things are all valuable; so perhaps what we can say is that using a combination of modes is probably best.

Gaps in the literature

Besides the need for more studies to see whether clearer patterns emerge (and perhaps they are out there–as noted above, my literature search has not been systematic), one gap is that none of the studies I found considered video chats, such as Google Hangouts, for peer assessment. Perhaps the differences between those and face-to-face meetings might not be as great as those between face-to-face meetings and text-based modes (whether synchronous chats or asynchronous, written comments). And this sort of evidence might be useful for courses that are distributed geographically, so students could have a kind of face-to-face peer assessment interaction rather than just giving each other written comments and carrying on a discussion over email or an online discussion board. Of course, the problem there would be that face-to-face interactions are best if supervised, even indirectly, so as to reduce the risk of people treating each other disrespectfully, or offering criticisms that are not constructive.

So, after all this work, I’ve found what I had guessed before starting: it’s probably best to use both written, asynchronous comments and oral, face-to-face comments for peer assessment.

 

Works Cited

Braine, G. (2001) A study of English as a foreign language (EFL) writers on a local-area network (LAN) and in traditional classes, Computers and Composition 18, 275–292. DOI: http://dx.doi.org/10.1016/S8755-4615(01)00056-1

Cartney, P. (2010) Exploring the use of peer assessment as a vehicle for closing the gap between feedback given and feedback used, Assessment & Evaluation in Higher Education, 35:5, 551-564. DOI: http://dx.doi.org/10.1080/02602931003632381

Figl, K., Bauer, C., Mangler, J., Motschnig, R. (2006) Online versus Face-to-Face Peer Team Reviews, Proceedings of Frontiers in Education Conference (FIE). San Diego: IEEE. See here for online version (behind a paywall).

Guardado, M., Shi, L. (2007) ESL students’ experiences of online peer feedback, Computers and Composition 24, 443–461. DOI: http://dx.doi.org/10.1016/j.compcom.2007.03.002

Hewett, B. (2000) Characteristics of Interactive Oral and Computer-Mediated Peer Group Talk and Its Influence on Revision, Computers and Composition 17, 265-288. DOI: http://dx.doi.org/10.1016/S8755-4615(00)00035-9

Jones, R.H., Garralda, A., Li, D.C.S. & Lock, G. (2006) Interactional dynamics in on-line and face-to-face peer-tutoring sessions for second language writers, Journal of Second Language Writing 15, 1–23. DOI: http://dx.doi.org/10.1016/j.jslw.2005.12.001

Liu, J. & Sadler, R.W. (2003) The effect and affect of peer review in electronic versus traditional modes on L2 writing, Journal of English for Academic Purposes 2, 193–227. DOI: http://dx.doi.org/10.1016/S1475-1585(03)00025-0

Sullivan, S. & Pratt, E. (1996) A comparative study of two ESL writing environments: A computer-assisted classroom and a traditional oral classroom, System 29, 491-501. DOI: http://dx.doi.org/10.1016/S0346-251X(96)00044-9

Van den Berg, I., Admiraal, W., & Pilot, A. (2006) Designing student peer assessment in higher education: analysis of written and oral peer feedback, Teaching in Higher Education, 11:2, 135-147. DOI: http://dx.doi.org/10.1080/13562510500527685

Peer Assessment: Face to face vs. online, synchronous (Part 2)

Here I look at one last study I’ve found that focuses on the nature of student peer feedback discussions when they take place in a synchronous, online environment (a text-based chat). Part 1 corresponding to this post can be found here.

Jones, R.H., Garralda, A., Li, D.C.S. & Lock, G. (2006) Interactional dynamics in on-line and face-to-face peer-tutoring sessions for second language writers, Journal of Second Language Writing 15, 1–23. DOI: http://dx.doi.org/10.1016/j.jslw.2005.12.001

This study is rather different from the ones I looked at in Part 1 of face to face vs. online, synchronous peer assessment, because here the subjects of the study are students and peer tutors in a writing centre, rather than peers in the same course. Still, at least some of the results regarding the nature of peer talk in the tutoring situation may be relevant for peer assessment in courses.

Participants and data

The participants in this study were five peer tutors in a writing centre in Hong Kong dedicated to helping non-native English speakers write in English. For both tutors and clients, English was an additional language, but the tutors were further along in their English studies and had more proficiency in writing in English than the clients. Data was collected from transcripts of face to face consultations of the tutors with clients, as well as from transcripts of online, text-based chat sessions of the same tutors with many of the same clients.

Face to face tutoring was only available in the daytime on weekdays, so if students wanted help after hours, they could turn to the online chat. Face to face sessions lasted between 15 and 30 minutes, and students “usually” emailed a draft of their work to the tutor before the session. Chat sessions could be anywhere from a few minutes to an hour, and though tutors and clients could send files to each other through a file exchange system, this was only done “sometimes” (6). These details will become important later.

Model for analyzing speech

To analyze the interactions between tutors and clients, the authors used a model based on “Halliday’s functional-semantic view of dialogue (Eggins & Slade, 1997; Halliday, 1994)” (4). In this model, one analyzes conversational “moves,” which are different from “turns”: a “turn” can have more than one “move.” The authors explain a move as “a discourse unit that represents the realization of a speech function” (4). For example, a single turn like “Yes, I see; but what about your second premise?” contains both a responding move (an acknowledgement) and an initiating move (a question).

In their model, the authors use a fundamental distinction from Halliday between “initiating moves” and “responding moves”:

Initiating moves (statements, offers, questions, and commands) are those taken independently of an initiating move by the other party; responding moves (such as acts of acknowledgement, agreement, compliance, acceptance, and answering) are those taken in response to an initiating move by the other party. (4-5)

They then subdivide these two categories further, some of which is discussed briefly below.

Results

Conversational control

In the face to face meetings, the tutors exerted the most control over the discussions. Tutors had many more initiating moves (around 40% of their total moves, vs. around 10% of those for clients), whereas clients had more responding moves (around 33% of clients’ total moves, vs. about 14% for tutors). In the chat conversations, on the other hand, initiating and responding moves were about equal for both tutors and clients (7).

Looking more closely at the initiating moves made by both tutors and clients, the authors report:

In face-to-face meetings, tutors controlled conversations primarily by asking questions, making statements, and issuing directives. In this mode tutors asked four times more questions than clients. In the on-line mode, clients asked more questions than tutors, made significantly more statements than in the face-to-face mode, and issued just as many directives as tutors. (10)

Types of questions

However, the authors also point out that even though the clients asserted more conversational control in the online chats, it was “typical” of the chats to consist of questions by students asking whether phrases, words, or sentences were “correct” (11). They did not often ask for explanations, just a kind of check of their work from an expert and a quick answer as to whether something was right or wrong. On the other hand, when tutors controlled the conversations with their questions, it was often the case that they were using strategies to try to get clients to understand something themselves, to understand why something is right or wrong and to be able to apply that later. So “control” over the conversation, and who asks the most questions or issues the most directives, are not the only important considerations here.

The authors also divided the questions into three types. Closed questions: “those eliciting yes/no responses or giving the answerer a finite number of choices”; open questions: “those eliciting more extended replies”; rhetorical questions: “those which are not meant to elicit a response at all” (12).

In the face to face sessions, tutors used more closed questions (about 50% of their initiating questions) than open questions (about 33%); the opposite was true in the online chats: tutors used more open questions (about 50% of their initiating questions) than closed (about 41%).


Peer assessment: Face to face vs. online, synchronous (Part 1)

This is another post in the series on research literature that looks at the value of doing peer assessment/peer feedback in different ways, whether face to face, orally, or through writing (mostly I’m looking at computer-mediated writing, such as asynchronous discussion boards or synchronous chats). Earlier posts in this series can be found here, here, here and here.

In this post I’ll look at a few studies that focus on peer assessment through online, synchronous discussions (text-based chats).

1. Sullivan, S. & Pratt, E. (1996) A comparative study of two ESL writing environments: A computer-assisted classroom and a traditional oral classroom, System 29, 491-501. DOI: http://dx.doi.org/10.1016/S0346-251X(96)00044-9

38 second-year university students studying English writing for the first time (where English was an additional language) participated in the study. They were divided between two classes taught by the same professor, where all the teaching materials were the same, except that in one class all class discussions and peer evaluation discussions were held orally, face to face, and in the other they were held online, in a synchronous “chat” system. In the computer-assisted class, students met often in a computer lab, where they engaged in whole-class discussions and peer group discussions using the chat system.

[I see the reason for doing this sort of thing, so that students don’t have to spend time outside of class doing online chats, but I do always find it strange to have a room full of students and the teacher sitting together but only communicating through computers.]

Research questions:

(1) Are there differences in attitudes toward writing on computers, writing apprehension, and overall quality of writing between the two groups after one semester?; and

(2) Is the nature of the participation and discourse in the two modes of communication different?

In what follows I will look only at the last part of question 1 (the overall quality of writing) and at question 2.

Writing scores

At the beginning of the term, students produced a writing sample based on a prompt given by the instructor. This was compared with a similar writing sample given at the end of the term. These were “scored holistically on a five point scale by two trained raters” (494).

In the oral class, strangely, the writing scores went down by the end of the term: at the beginning the mean was 3.41 (out of 5), with a standard deviation of 0.77, and at the end it was 2.95, with an SD of 0.84. The authors do not comment on this phenomenon, though the difference (0.46) is not great. In the computer class, the writing scores went up slightly: from a mean of 3.19 (SD 0.77) at the beginning to 3.26 (SD 0.70) at the end. The authors note, though, that “[t]he students in the two classes did not differ significantly (below the 0.05 probability level) at the beginning nor at the end of the semester” (496).
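To make the gain and loss figures in the quote below explicit, they are simply the end-of-term means minus the beginning-of-term means:

    \Delta_{\text{computer-assisted}} = 3.26 - 3.19 = +0.07, \qquad \Delta_{\text{oral}} = 2.95 - 3.41 = -0.46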

They did find some evidence that the students in the computer-assisted class did improve their writing:

However, some evidence was found for improved writing in the computer-assisted class by comparing the writing score changes of the two classes (computer-assisted classroom’s gain (+0.07) to oral classroom’s loss (-0.46)). A t-test showed the difference to be significant at the 0.08 probability level. (496)

The authors conclude, however, that the data does not support saying one environment is better than another in terms of improving writing (nor, incidentally, for the rest of research question (1), above).

Discourse patterns in peer group discussions 

[The authors also looked at discourse patterns in the whole-class discussions, but as I don’t plan to do whole-class discussions via chats in the near future, I’m skipping that portion of the article here.]

There were more comments made in the oral class, during peer assessment discussions, than in the online chat groups: 40-70 turns per group for the oral discussions and 14-25 turns per group for the online chats (498). However, the authors found that the discussion in the oral class was, as they put it, “less focused” (498), in the sense that there were more interjections of personal narratives and repetitions of what other students had said. In the computer class, the talk was more “focused on the task of criticizing the writing rather than conversing with their fellow students while on the network” (499).

The tone of the article here indicates that the authors took the talk in the online chats to be better than that in the oral discussions. But as noted in Hewett (2000), the sort of talk that might be interpreted as “unfocused” could also be interpreted as an important part of participating in an interactive discussion. Repetitions indicate that one is listening, following along, and being an engaged participant in a discussion. Personal narratives can both help to make a point and forge connections between discussion group members, perhaps bringing them closer together and thereby helping them feel more comfortable (which could contribute to more productive peer evaluation).

In addition, in the oral groups the author of the paper being discussed often dominated the discussion, while the author spoke less in the online chats, making for more equal participation.


Peer assessment: face to face vs. online, asynchronous (Pt. 2)

This is part of a series of posts in which I summarize and comment on research literature about different methods of doing peer assessment. Earlier posts in this series can be found here and here, and part 1 corresponding to this particular post is here.

In this post I summarize, as briefly as I can, a complex study on differences between how students speak to each other when doing peer assessment when it’s in person versus on a discussion board (mostly asynchronous, but students also did some posting to the discussion boards in a nearly synchronous environment, during class time).

Hewett, B. (2000) Characteristics of Interactive Oral and Computer-Mediated Peer Group Talk and Its Influence on Revision, Computers and Composition 17, 265-288. DOI: http://dx.doi.org/10.1016/S8755-4615(00)00035-9

This study looked at differences between ways peers talk in face to face environments and computer-mediated environments (abbreviated in the article as CMC, for computer-mediated communication). It also looked at whether there are differences in the ways students revise writing assignments after these different modes of peer assessment and feedback.

There were several research questions for the study, but here I’ll focus just on this one:

How is peer talk that occurs in the traditional oral and in the CMC classroom alike and different? Where differences exist, are they revealed in the writing that is developed subsequent to the peer-response group sessions? If so, how? (267)

Participants and data

Students in two sections of an upper-level course (Argumentative Writing) at a four-year university participated; one section engaged in face to face peer assessment, and the other used computer-mediated peer assessment, but otherwise the two courses were the same, taught by the same instructor. The CMC course used a discussion board system with comments organized chronologically (and separated according to the peer groups), and it was used both during class, synchronously (so students were contributing to it while they were sitting in a class with computers), and outside of class, asynchronously.

Peer group conversations were recorded in the face to face class, and the record of conversations from the CMC class could simply be downloaded. The author also collected drafts of the essays on which the peer discussion took place. Data was collected from all students, but only the recordings of conversations in one peer group in each class (oral and CMC) were used for the study. I’m not sure how many students this ended up being–perhaps 3-4 per peer group? [Update (Feb. 28, 2013): Looking at the article again, a footnote shows that there were four students in each group.]

One of those groups, the CMC group, engaged in both computer-mediated peer discussion and oral discussion at a later point, so this group provides a nice set of data about the same people discussing together in two different environments. Below, when talking about the “oral” groups, the data include the group that was in the oral-only class, plus the CMC group when they discussed orally.

Results

Nature of the talk in both environments

Not surprisingly, the student discussion in the face to face groups was highly interactive: the students’ statements often referred to what someone else had said, asked questions of others, clarified their own and others’ statements, and used words and phrases that cued others that they were listening and following along, encouraging dialogue (e.g., saying “yes,” “right,” “okay,” “exactly”) (269-270).

In the CMC discussions, the talk was less interactive. Multiple threads of discussion occurred on the board, and each student’s comments could pick up on several at a time. This created a “multivocal tapestry of talk” that individuals would have to untangle in order to participate (270). At times, students in a peer group would respond to the paper being discussed, but not to each other (271), so that the comments were more like separate, standalone entities than parts of an interactive conversation.

In addition, the possibility for asynchronous communication, though it could be convenient, also left some students’ comments and questions unanswered, since others may or may not return to the board after the synchronous group “chat” time had ended.

Subjects of the talk in each environment

Hewett found that the face to face discussions included more talk about ideas, about wider issues raised in the papers, and about the contexts surrounding the claims and issues discussed in the papers than the CMC discussions did (276). The CMC groups tended to focus more on the content of what was written, and showed less evidence of working together to develop new ideas about the topics in the essays. Hewett suggests: “Speculative thinking often involves spinning fluid and imperfectly formed ideas; it requires an atmosphere of give-and-take and circumlocution,” which is more characteristic of oral speech (276).


Peer assessment: face to face vs. asynchronous, online (Pt. 1)

I have been doing a good deal of reading research on peer assessment lately, especially studies that look at the differences and benefits/drawbacks of doing peer assessment face to face, orally, versus through writing–both asynchronous writing in online environments (e.g., comments on a discussion board) and synchronous writing online (e.g., in text-based “chats”). I summarized a few studies on oral vs. written peer assessment in this blog post, and then set out a classification structure for different methods of peer assessment in this one.

Here, I summarize a few studies I’ve read that look at written, online, asynchronous peer feedback. In another post I’ll summarize some studies that compare oral, face to face with written, online, synchronous (text-based chats). I hope some conclusion about the differences and the benefits of each kind can be drawn after summarizing the results.

1. Tuzi, F. (2004) The impact of e-feedback on the revisions of L2 writers in an academic writing course, Computers and Composition 21, 217–235. DOI: http://dx.doi.org/10.1016/j.compcom.2004.02.003

This study is a little outside of my research interest, as it doesn’t compare oral feedback to written (in any form). Rather, the research focus was to look at how students revised essays after receiving e-feedback from peers and their teacher. Oral feedback was only marginally part of the study, as noted below.

20 L2 students (students for whom English was an additional language) in a first-year writing course at a four-year university participated in this study. Paper drafts were uploaded onto a website where other students could read them and comment on them. The e-feedback could be read on the site, but was also sent via email to students (and the instructor). Students wrote four papers as part of the study, and could revise each paper up to five times. 97 first drafts and 177 revisions were analyzed in the study. The author compared comments received digitally to later revised drafts, to see what had been incorporated. He also interviewed the authors of the papers to ask what sparked them to make the revisions they did.

Tuzi combined the results from analyzing the essay drafts and e-feedback (to see which parts of the feedback had been incorporated into revisions) with the results of the interviews with students, to identify the stimuli for changes in the drafts. From these data he concludes that 42.1% of the revisions were instigated by the students themselves, 15.6% came from e-feedback, 14.8% from the writing centre, 9.5% from oral feedback (from peers, I believe), and for 17.9% of the revisions, the source was “unknown.” He also did a few finer-grained analyses, showing how e-feedback fared in relation to these other sources at different levels of writing (punctuation, word, sentence, paragraph), in terms of the purpose of the revision (e.g., new information, grammar), and more. In many analyses, the source of most revisions was the students themselves, but e-feedback ranked second in some (such as revisions at the sentence, clause, and paragraph levels, and adding new information). Oral feedback was always low on the list.
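As a quick arithmetic check, those five shares do account for essentially all of the revisions. Here is a minimal Python sketch (the category labels are my paraphrases; only the numbers come from the study):

    # Revision-source shares reported by Tuzi (2004), in percent.
    # Labels are paraphrases; only the numbers come from the study.
    revision_sources = {
        "self-initiated": 42.1,
        "e-feedback": 15.6,
        "writing centre": 14.8,
        "oral feedback": 9.5,
        "unknown": 17.9,
    }

    total = sum(revision_sources.values())
    print(f"total share: {total:.1f}%")  # prints 99.9%; the small gap is presumably rounding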

In the “discussion” section, Tuzi states:

Although e-feedback is a relatively new form of feedback, it was the cause of a large number of essay changes. In fact, e-feedback resulted in more revisions than feedback from the writing center or oral feedback. E-feedback may be a viable avenue for receiving comments for L2 writers. Another interesting observation is that although the L2 writers stated that they preferred oral feedback, they made more e-feedback-based changes than oral-based changes.

True, but note that in this study oral feedback was not emphasized: it was something students could get if they wanted, but only the e-feedback was a focus of the course. So little can be concluded here about oral vs. e-feedback. To be fair, though, that wasn’t really the point of the study. The point was simply to see how students use e-feedback, whether it is incorporated into revisions, and what kinds of revisions e-feedback tends to be used for. And Tuzi is clear towards the end: “Although [e-feedback] is a useful tool, I do not believe it is a replacement for oral feedback or classroom interaction …”. Different means of feedback should be available; this study just shows, he says, that e-feedback can be useful as one of them.


Peer feedback: oral, written, synchronous, asynchronous…oh my

In an earlier post I summarized a few studies of peer assessment/peer feedback, and since then I’ve done some more research and found several more studies. What I also realized is that “oral vs. written peer feedback” is not an adequate description of the myriad options. There is also synchronous vs. asynchronous, and face-to-face vs. mediated (often computer-mediated).

So, written feedback can be asynchronous or synchronous (such as with online “chats”), and done on paper or computer-mediated (typed digitally and emailed, or uploaded into some online system that allows students to retrieve the comments on their papers).

Oral feedback, in turn, can be face-to-face or computer-mediated, and the latter can be asynchronous (such as recorded audio or video) or synchronous (such as real-time video or audio chatting).

Thus, the possibilities are (at least insofar as I understand them at this point):

                             Face-to-face                        Computer-mediated
    Oral, synchronous        in-person discussion                real-time audio or video chat
    Oral, asynchronous       (rare; see below)                   recorded audio or video
    Written, synchronous     (rare; see below)                   text-based “chat”
    Written, asynchronous    comments on paper, given in person  comments emailed, uploaded, or posted to a board

The situations in two of the boxes are quite rare:

  1.  Oral, asynchronous, face-to-face feedback is probably unlikely to happen very often, since it would involve something like showing a video recording of one’s feedback to another person in person (or doing the same with an audio recording).
  2.  Written, face-to-face, synchronous feedback by itself is likely also rare, since students writing comments on each other’s papers in each other’s presence will probably also be discussing the comments together, in which case the situation would be a blend of written, face-to-face, synchronous and oral, face-to-face, synchronous feedback.

I’m also not really sure about the written, face-to-face, asynchronous box; that situation is only “face-to-face” insofar as the written comments are handed to the peer in person, on paper.

The reason why I’m taking the time to distinguish these various permutations is that the literature I’ve been reading lately falls into various boxes on the table. For example, Reynolds and Russell (2008) (see this post) would fall into the oral, computer-mediated, asynchronous box. Most of the literature that talks about “oral” feedback is talking about oral, face-to-face, synchronous feedback (as I was using the term “oral” before now).

So now I guess I’ll need to come up with a new naming convention, and it will likely be an ugly set of abbreviations, such as:

Oral, face-to-face (assumed synchronous): OFTF
— though maybe written face-to-face is rare enough that this could just be FTF?

Oral, computer mediated, synchronous: OCMS

Oral, computer mediated, asynchronous: OCMA

Etc. Quite ugly.
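To see all eight boxes at once, the full set of permutations (and the mechanical abbreviations) is easy to generate. Here is a minimal Python sketch, with the synchronous/asynchronous letter always spelled out rather than assumed, so OFTF above comes out as OFTFS:

    from itertools import product

    # The three dimensions of peer feedback discussed above.
    modes = ["oral", "written"]
    media = ["face-to-face", "computer-mediated"]
    timings = ["synchronous", "asynchronous"]

    def abbreviate(mode, medium, timing):
        # Mechanical abbreviation: O/W + FTF/CM + S/A.
        return (("O" if mode == "oral" else "W")
                + ("FTF" if medium == "face-to-face" else "CM")
                + ("S" if timing == "synchronous" else "A"))

    for mode, medium, timing in product(modes, media, timings):
        print(f"{abbreviate(mode, medium, timing):<6} {mode}, {medium}, {timing}")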

Over the next few days I’ll summarize and comment on a few more articles that fall into these various boxes, and then see whether I can draw any conclusions about the different types of oral vs. written peer feedback from those summaries and from the ones in the post linked at the beginning, above. Does the research allow for any stable conclusions at this point? I’ll have to see after I think through the various papers I’m about to summarize…

Works cited

Reynolds, J. & Russell, R. (2008) Can You Hear Us Now?: A comparison of peer review quality when students give audio versus written feedback, The WAC Journal 19(1), 29-44. http://wac.colostate.edu/journal/vol19/index.cfm

Problems with grading rubrics for complex assignments

In an earlier post I discussed a paper by D. Royce Sadler on how peer marking could be a means for students to learn to become better assessors of their own and others’ work. This could not only help them become more self-regulated learners, but also prepare them for roles outside the university in which they will need to evaluate the work of others. In that essay Sadler argues against giving students preset marking criteria to use to evaluate their own work or that of other students (when that work is complex, such as an essay), because:

  1. “Quality” is a global concept that can’t easily be captured by a set of criteria, since it often includes things that are difficult to articulate.
  2. As Sadler pointed out in a comment to the post noted above, having a set of criteria in advance predisposes students to look for only those things, and yet in any particular complex work there may be other things that are relevant for judging quality.
  3. Giving students criteria in advance doesn’t prepare them for life beyond their university courses, where they won’t often have such criteria.

I was skeptical about asking students to evaluate each others’ work without any criteria to go on, so I decided to read another one of his articles in which this point is argued for more extensively.

Here I’ll give a summary of Sadler’s book chapter entitled “Transforming Holistic Assessment and Grading into a Vehicle for Complex Learning” (in Assessment, Learning and Judgement in Higher Education, ed. G. Joughin. Dordrecht: Springer, 2009; DOI: 10.1007/978-1-4020-8905-3_4).

[Update April 22, 2013] Since the above is behind a paywall, I am attaching here a short article by Sadler that discusses similar points, and that I’ve gotten permission (from both Sadler and the publisher) to post: Are we short-changing our students? The use of preset criteria in assessment. TLA Interchange 3 (Spring 2009): 1-8. This was a publication from what is now the Institute for Academic Development at the University of Edinburgh, but these newsletters are no longer online.

Note: this is a long post! That’s because it’s a complicated article, and I want to ensure that I’ve got all the arguments down before commenting.
