Category Archives: Assignments

AI & philosophical activity in courses, part 2

Introduction

This is part 2 of my discussion of ways to possibly use AI tools to support philosophical activities in courses. In my part 1 blog post I talked about using AI to support learning about asking philosophical questions, analyzing arguments, and engaging in philosophical discussion. In this post I focus on AI and writing philosophy.

Caveats:

There are a lot of resources out there on AI and writing, and I’m purposefully focusing largely on my own thoughts at the moment, though likely many of those will have been influenced by the many things I’ve read so far. I may include a few links here and there, and use other blog posts to review and discuss some ideas from others on AI and writing that may be relevant for philosophy.

In this post I’m not going to focus on trying to generate AI-proof writing assignments, or on ways to detect AI writing…I think both are very challenging and likely to change quickly over time. My focus here is on whether AI may be helpful for learning through writing, not on AI and academic integrity (though that is also very important!).

Note that by engaging in these reflections I’m not saying that use of generative AI in courses is by any means non-problematic. There are numerous concerns to take into account, some of which are noted in a newly-released set of guidelines on the use of generative AI for teaching and learning that I worked on with numerous other folks at our institution. The point here is just to focus in on whether there might be at least some ways in which AI might support students in doing philosophical work in courses; I may not necessarily adopt any of these, and even if I do there will be numerous other things to consider.

I’m also not saying that writing assignments are the only or best way to do philosophy; it’s just that writing is something that characterizes much of philosophical work. It is of course important to question whether this should be the case, and consider alternative activities that can still show philosophical thinking, and I have done that in some courses in the past. But all of this would take us down a different path than the point of this particular blog post.

Finally I want to note that these are initial thoughts from me, not settled conclusions. I may and likely will change my mind later as I learn and think more. Also, a number of sections below are pretty sketchy ideas, but that’s because this is just meant as a brainstorm.

To begin:

Before asking whether/how AI might support student learning in terms of writing philosophy, I want to interrogate for myself the purposes of why I ask students to write in my philosophy courses, particularly in first-year courses. After all, in my introductory level course, few students are going to go on and continue to write specifically for philosophy contexts; some will go on to other philosophy courses, but many will not, and even fewer will go on to grad school or to do professional philosophy.

Continue reading

Early thoughts on ChatGPT & writing in philosophy courses

Yes, it’s another post on ChatGPT! Who needs another post? I do! Because one of the main reasons I blog is as a reflective space to think through ideas by writing them down, and then I have a record for later. I’m also very happy if my reflections are helpful to others in some way of course!

Like so many others, I’ve been learning a bit about and reflecting on GPT-3 and ChatGPT, and I must start off by saying I know very little so far. I took a full break from all work-related things from around December 20 until earlier this week, and I plan to do some deeper dives to learn more in the coming days and weeks. I should also say that though this is focused on GPT, that’s just because it’s the only one I’ve looked into at this point.

The main reason I’m writing this post is to do some deeper reflection on why I have many writing assignments in my philosophy courses, and what I hope they will do for students. And as I was thinking about this, I started reflecting on the role of writing in philosophy more generally, since philosophy classes teach…philosophy.

Academic philosophy and writing

Okay, a whole book could be written about the role of writing in academic philosophy. Here are just a few anecdotal reflections.

Philosophy as I have been trained in it and practice it in academia is frequently focused on writing. We also speak orally, and that’s really important to the discipline as well. Conversations in hallways, in classes, with visiting speakers, at conferences, etc. are all crucial ways we engage in thinking, discussing, making arguments as well as critiquing and improving them. This may not be agreed upon by all, but I still think writing is more heavily emphasized. Maybe I think that partly because for hiring, tenure, and promotion processes what seems to count most are written works rather than oral presentations, lectures, or workshops. Maybe it’s because most of what we do when we do research in philosophy is read written works by others and then write articles, chapters, or books ourselves.

Oral conversations tend to be places where philosophers test out ideas, brainstorm new ideas, give and receive feedback, iterate, discuss, do Q&A, and communicate (among other purposes). Interestingly, even at philosophy conferences, at least the ones in North America I’ve attended, it’s common to read written works out loud during research presentation sessions. (This is not the case for sessions focused on teaching philosophy, which are often more workshop-like and focused more on interactive activities.) For me it can be very challenging to pay attention for a long time by just listening, and I personally appreciate when there are slides or a handout to help keep one’s thinking on track and following along. Writing again! Oral conversations and presentations are also not accessible to all, of course, and one alternative (in addition to sign language) is writing, either in captions or transcripts.

Writing is also a way that some folks (maybe many?) think their way through philosophical or other arguments and ideas. As noted at the top of this post, this is certainly the case for me. I have to put things into words in order to really piece them together and form more coherent thoughts, and though that can be done orally (say, through a recording device), for me it works better in writing.

From these brief reflections, here are some of the likely many roles of writing in doing philosophy. This is not a comprehensive list by any means! And it’s likely similar for at least some other disciplines.

  • Writing to think and understand: Sometimes summarizing works by others helps one to understand them better (e.g., outlining premises and conclusions from a complicated text, or recording what one thinks are the main claims and overall conclusion of a text). In addition, sometimes writing helps one to understand better one’s own somewhat vague thoughts, to clarify, delineate, group them into categories, think of possible objections, etc. (That’s what I’m doing with this blog post)
  • Writing to communicate: communicating our own ideas and arguments, and taking in communications by others of theirs by reading them (as one means; communication of philosophical ideas and arguments can happen in other ways too!). Communicating the ideas and arguments of others, as often happens in lectures in philosophy classes, or when summarizing someone else’s argument before critiquing it and offering a revised version or something new.
  • Writing as a memory aid: Taking notes when reading texts, or listening to a speaker, or during class. Writing down notes to remind oneself what to say when teaching, or giving a lecture or conference presentation, or facilitating a workshop. Writing one’s thoughts down to be able to return to them later and review, revise, etc. (as in the last point).

The point of these musings is that at least in my experience, a lot of philosophical work, at least in academia, is done in or through writing, even though many of us also engage in non-written discussions and communications. And for me, this is important context to consider when thinking about teaching philosophy and writing, and what it may mean when tools like ChatGPT come onto the scene.

Teaching philosophy and writing

I came to the thoughts above because I was thinking about how it is very common in philosophy courses to have writing assignments–frequently the major assignments are essays in one form or another–and I started to reflect on why that might be. It could be argued that writing is pretty well baked into what it means to do (academic) philosophy, at least in the philosophical traditions I’m familiar with. So it could make sense that teaching students how to do philosophy, and having them do philosophical work in class, means teaching them to write and having them write! (Of course, academic philosophy is not all of what philosophy can be…this is another area on its own, but I think at least some of the focus on writing in philosophy courses may be related to its focus in academic philosophy.)

And like many academic and disciplinary skills, it can be helpful to build up towards philosophical writing skills by practising the kinds of steps that are needed to do it well. So, for example, in philosophy courses we often ask students to review an argument presented by someone else (usually in writing) and summarize it, perhaps by outlining the premises and conclusion. Then maybe in a later step we’ll ask them to offer questions or critiques of the argument, or alternative views or approaches, all of which are important parts of doing philosophy in the traditions in which I’m immersed. In later stages or upper-level courses we’ll ask students to do research where they gather arguments from multiple sources on a particular topic, analyze them, and offer their own original contributions to the philosophical discussion.

All of this is similar to the sort of work professional philosophers do in their own research, and to me just seems like natural ways of doing philosophy given my own experience. It’s just that we do it at different levels and often in a scaffolded way in teaching.

However, mostly I teach introductory-level courses, and the number of students who will go on to do any more philosophy, much less become professional philosophers, is relatively small. So personally, I include writing assignments not just because they are part of what it means to do philosophy (though it’s partly that), but also because I think the skills developed are useful in other contexts. Being able to take in and understand arguments by others (whether textual or otherwise), break them down into component parts to help support both understanding and evaluation, evaluate them, and revise or come up with different ideas if needed, are I think pretty basic and important skills in many, many areas of work and life. I think this (or something like it) may (?) continue to be the case as AI writing tools become more and more ubiquitous, but of course I’m not sure, and that’s a question for further thought.

Process and product

When teaching, it’s the learning and thinking that happens through the process of writing activities that’s important. The essay or parts of an essay that result are not the critical pieces. After all, if I ask 100 or more students to analyze the same argument and produce a set of premises and conclusions (for example), the resulting summary/analysis of the argument isn’t the important piece there, especially when there will be many, many of them. It’s the learning and thinking that’s happening to get to that point. The summary is there as a stand-in for the thinking and learning. And in some cases it’s the same for the critiques, feedback, or alternative ideas that students may offer in response to someone else’s argument–what I may care about more is what they’re learning through doing that thinking rather than the specific replies they produce. Many will be really interesting and thought-provoking. Others will be similar across multiple students. Depending on the level of the course and the learning outcomes, all of these may be fine as results; what I care about is that they are putting in the thought and reflection to hone skills of (to use a too-well-worn term) “critical thinking.”

When I think about it this way, I wonder what is the purpose of the actual essay or paragraph or outline of an argument that I assign in courses. It’s often not the actual end product (though sometimes it is, particularly for upper level or graduate courses). The end product is mostly a vehicle and proxy for me as a teacher to review whether the thinking, reflecting, and learning is taking place.

So, thinking about the several ways writing is used in philosophy noted in the previous section, I think largely I’m assigning writing for the purposes of thinking and understanding, and also communicating–maybe to other students, to me, to TAs, etc. And my assumption, when marking writing, is that the written text is actually communicating the student’s thinking and understanding, that the communication and the thinking are linked.

Teaching writing in philosophy, and ChatGPT

One of the things that the emergence of ChatGPT really emphasizes for me is that that end product isn’t really a good communication vehicle to assess whether the thinking and understanding has taken place. This really hit home for me through a post on Crooked Timber by philosopher Eric Schliesser. Schliesser notes that several professors have said that the essays produced by ChatGPT are decent enough to earn a passing grade, if not higher. “But this means that many students pass through our courses and pass them in virtue of generating passable paragraphs that do not reveal any understanding,” Schliesser points out.

This made me think: the essay may not only not be a reliable communication of the student’s own thinking (which we knew already due to concerns about plagiarism, people paying others to write their essays, etc.), but may not be communicating thinking and understanding at all. The link between the two can be completely severed. (This is assuming, as I think it’s safe to assume at this point, that tools like ChatGPT are not doing any thinking or understanding…I know this is a philosophical question but for the moment I’m going to go with the seemingly-reasonable-at-this-point claim that they’re not.)

In one respect, this is an extension of previous academic integrity concerns: if what we want to be assessing is the student’s own thinking and understanding, then ChatGPT and the like are similar issues in that a student could submit something that does not communicate their own understanding–it’s just that in this case, rather than communicating the understanding someone, somewhere, at some point had, it’s not communicating understanding at all.

But of course, we have academic integrity concerns for a reason, and for me it’s not just that I want to be able to tie the writing to the individual student for the sake of integrity and fairness of assessment (though that is important too), it’s also that I want to engage students in activities that will develop skills that will be useful to them in the future. And it’s seeming more and more the case that the written texts I have used in the past as a vehicle to review whether they have developed those skills are less and less useful for that purpose.

At the moment, I can think of a few options, some of which could be combined for a particular assignment or class:

  1. Continue to try to find ways to connect the writing students do out of class to themselves–an extension of academic integrity approaches we already have. These can include:
    • using plagiarism checkers (which right now I think do not work with tools like ChatGPT)
    • comparing earlier, in-class writing to later, out-of-class writing
    • quizzing students orally on the content of their written work
    • asking students to do multiple steps for writing assignments, some of which could be done in class, and also ask them to explain their reasoning for the choices they are making (this one from Julia Staffel–see more from her below)
  2. Find other ways for students to show their thinking and understanding than assigning written work done outside of class.
    • E.g., Ryan Watkins from George Washington University suggests (among other things) having students create mind maps (which ChatGPT can’t do … yet?) and holding in-class debates where students could show their thinking, understanding, and skills in communicating.
    • Julia Staffel from the University of Colorado Boulder talks in a video posted on Daily Nous about alternative approaches in philosophy courses, such as in-class essays, oral exams, oral presentations (synchronous or recorded), and assignments based on non-textual sources such as podcasts or videos (but that only works until the tools can start using those as source material).
  3. Use ChatGPT or similar in writing assignments

    • Numerous people have also suggested assignments in which students need to work with ChatGPT; if we think of it like a helper tool that can generate some early ideas for us to build on or critique, or that can provide summaries of others’ work that we can evaluate for ourselves, etc., then we could still be supporting students to build some similar kinds of skills as earlier writing assignments.
    • Still, inspired by a blog post by Autumm Caines, I’m wary of doing this until I look more into privacy implications: who has access to what data and how it’s used. Autumm also talks about the ethics of requiring students to provide free labour to companies to train tools like this. And what happens when the tool or ones like it are no longer offered for free?
    • Finally, since ChatGPT can already mark and provide feedback on its own writing (albeit perhaps not the best feedback), it’s not clear to me that having students use the tool to draft something and then comment on it/revise it is going to get around the tie-the-work-to-a-mind issue.

A number of the ideas above have to do with doing things synchronously, in a way that the instructor and/or TAs can witness. Some are alternative approaches to providing evidence of thinking and understanding done outside of class that work for now, just based on what the tech can do at the moment. And maybe those will continue to work for some time, or maybe not. It feels a bit like trying to play catch-up with an ever-changing landscape.

I have many more thoughts, but this blog post is already too long so I’ll save them for later. For now, a takeaway is that maybe one of the things I’ll need to do in the future is spend more time in class on activities that develop, and allow students to communicate, the thinking and understanding I’m hoping to support them in. If I have to assess them (which I do), then I’d like to bring the communication and the thinking parts back together. I want to think through pros and cons of a number of the suggestions noted above, and similar ones, particularly around what they are actually measuring and whether that connects to my learning goals in teaching (which, incidentally, is an important exercise to do for out-of-class writing too of course!).

I also have some ill-formed thoughts about the value of teaching students to write philosophy essays at all, if they can be written so easily by a bot that doesn’t think or understand. But that’s for another day!

 

Grading rubrics in philosophy

This is a quick post designed to collect links to grading rubrics in philosophy, for the sake of putting them together in one place for graduate student TAs in our department to refer to if they want to see some examples.

Here is a recent version of a grading rubric for essays that I use in my courses, including Introduction to Philosophy and an interdisciplinary course called Arts One. I’m including a PDF version and also an MS Word version in case anyone wants to use and edit it (Word is often easier to edit). It is licensed CC BY, which means you can use it and change it if you state that it’s adapted from mine as the original source.

Hendricks Philosophy Paper Rubric (PDF)

Hendricks Philosophy Paper Rubric (MS Word)

 

Daily Nous had a post in May 2017 with what they called “An impressively detailed philosophy paper grading rubric,” by Micah T. Lewin.

 

Mara Harrell of Carnegie Mellon has created this rubric (MS Word) for marking philosophy essays, which is even more detailed than the one above.

 

This paper marking rubric by Melissa Jacquart includes point values for each cell, which is also an option. Giving points for each part of the rubric can make marking quicker, though it can also be somewhat problematic because it’s hard to include every aspect of what makes a good paper in a rubric, and sometimes it’s how things work together that leads to a better essay even if some parts are not as strong as one might like.

 

The Teach Philosophy 101 website has a list of rubrics (including some of the above) that has some not only for grading essays, but also for other kinds of assignments.

 

I’d be happy to hear about other rubrics not on this list!

 

Collaborating with students on objectives & assessments

I just did a quick read of the following article:

Abdelmalak, M. (2016). Faculty-Student Partnerships in Assessment. IJTLHE : International Journal of Teaching and Learning in Higher Education, 28(2), 193–203.

See the TOC for this issue, with link to the open-access PDF of the article, here.

The article reports on a study of a graduate course in Education in which six students collaborated with the professor on developing the course objectives, the assessments to meet those objectives, and the criteria for assessing the work. The students brainstormed ideas, and then the group agreed on objectives, assessments, and criteria based on shared themes in those ideas and on subsequent negotiation. Clearly this process would work best in a small class.

The author found that for the grad students involved,

  • collaborating on these things gave them a sense of control over their learning (unsurprising), which increased their motivation to learn.

However,

  • even though they had agreed to provide peer feedback on a writing assignment, most felt uncomfortable providing deep feedback to their peers due to a sense of lack of knowledge and a reticence to take up a perceived position of power over other students
  • some found the whole process difficult because they were used to an instructor deciding all of these things for them.

 

I have been thinking a lot lately about getting students more involved in creating assignments, though mostly what I teach are first-year courses and I think their lack of knowledge about the subject at that point means it would be best to not have them try to decide all the assignments. Plus, I have over 100 students in some of my first-year courses, and that makes such things difficult.

But I think something like this could work in a 4th year course (my 4th year course is max 25 students). The students still might not be able to come up with objectives that have to do with specific content they have yet to learn, but they might be able to come up with good ones about other aspects of the course; and I like the idea of them deciding on assignments after and grounded in the objectives they hope to achieve. Why write a paper, for example? Just because that’s what we always do in Philosophy, or for some other reason? What are we trying to achieve by writing papers? Are there other ways to achieve those goals?

In my experience, most fourth-year students in Philosophy courses don’t have too much issue with providing peer feedback that is critical and useful, so I don’t think I’d run into that problem. But it might be a bit difficult for them to go through this whole exercise because they’re not used to doing it. I think it would be really useful for them to work through why courses are designed as they are, and re-design them as needed to fit goals that are shared by the class.

I haven’t taught a 4th year course since 2014, and I’m not scheduled to do so next year either, but maybe the next time I do I’ll try something like this. Perhaps not for all the assignments, but for one or two to start with.

Has anyone tried anything like this before? If so, how did it go?


Update later on Aug. 12:

Robin DeRosa responded on Twitter that she had done this sort of thing with a first-year composition class–see the thread of that conversation here.

The syllabus, with student-created objectives and policies for that course, is here.

I had thought this wouldn’t work with first-years, but I can see how it works for a composition course in which students come in with some general knowledge about writing–you can get a sense of that from the objectives they created.

For first-year philosophy students, I think they might have a harder time determining just what they want to get out of a course when many of them don’t even really know what philosophy is yet, or why it is worth taking a course on!

Robin had a good suggestion:

So one could have them collaborate on one or two things that they can bring to the table.

And I love this point:

 

Also, Juliet O’Brien gave some great ideas via Twitter, which I’ll just post here as they are pretty self-explanatory I think!

 

You can see more about Juliet’s courses from this page: https://metametamedieval.com/courses/

And here is a link to a PDF that explains some of what she’s talking about above.

 

Presentation on SoTL research re: peer feedback

In mid-November I gave a presentation at the SoTL Symposium in Banff, Alberta, Canada, sponsored by Mount Royal University.

It’s a little difficult to describe this complex research, so I’ll let my (long) abstract for the presentation tell at least part of the story.


750-word abstract

Title: Tracking a dose-response curve for peer feedback on writing

There is a good deal of research showing that peer feedback can contribute to improvements in student writing (Cho & MacArthur, 2010; Crossman & Kite, 2012). Though intuitively one might think that students would benefit most from receiving peer comments on their written work, several studies have shown that student writing benefits both from comments given and from comments received–indeed, sometimes the former more than the latter (Li, Liu & Steckelberg, 2010; Cho & MacArthur, 2011).

There are, however, some gaps in the literature on the impact of peer feedback on improving student writing. First, most studies published on this topic consider the effect of peer feedback on revisions to a single essay, rather than on whether students use peer comments on one essay when writing another essay. Cho and MacArthur (2011) is an exception: the authors found that students who wrote reviews of writing samples by students in a past course produced better writing on a different topic than those who either only read those samples or who read something else. In addition, there is little research on what one might call a “dose-response” curve for the impact of peer feedback on student writing—how are the “doses” of peer feedback related to the “response” of improvement in writing? It could be that peer feedback is more effective in improving writing after a certain number of feedback sessions, and/or that there are diminishing returns after quite a few sessions.

To address these gaps in the literature, we designed a research study focusing on peer feedback in a first-year, writing intensive course at a large university in North America. In this course students write an essay every two weeks, and they meet every week for a full year in groups of four plus their professor to give comments on each other’s essays (the same group stays together for half or the full year, depending on the instructor). With between 20 and 22 such meetings per year, students get a heavy dose of peer feedback sessions, and this is a good opportunity to measure the dose-response curve mentioned above. We can also test the difference in the dose-response curve for the peer feedback groups that change halfway through the year versus those that remain the same over the year. Further, we can evaluate the degree to which students use comments given by others, as well as comments they give to others, on later essays.

While at times researchers try to gauge improvement in student work on the basis of peer feedback by looking at coarse evaluations of quality before and after peer feedback (e.g., Sullivan & Pratt, 1996; Braine, 2001), because many things besides peer feedback could go into improving the quality of student work, more specific links between what is said in peer feedback and changes in student work are preferable. Thus, we will compare each student’s later essays with comments given to them (and those they gave to others) on previous ones, to see if the comments are reflected in the later essays, using a process similar to that described in Hewett (2000).

During the 2013-2014 academic year we ran a pilot study with just one of those sections (sixteen students, out of whom thirteen agreed to participate), to refine our data collection and analysis methods. For the pilot program we collected ten essays from each of the students who agreed to participate, comments they received from their peers on those essays, as well as comments they gave to their peers. For each essay, students received comments from three other students plus the instructor. We will use the instructor comments to, first, see whether student comments begin to approach instructor comments over time, and to isolate those things that only students commented on (not the instructor) to see if students use those in their essays (or if they mainly focus on those things that the instructor said also).

In this session, the Principal Investigator will report on the results of this pilot study and what we have learned about dealing with such a large data set, whether we can see any patterns from this pilot group of thirteen students, and how we will design a larger study on the basis of these results.


 

It turned out that we were still in the process of coding all the data when I gave the presentation, so we don’t yet have full results. We have coded all the comments on all the essays (10 essays from 13 participants), but are still coding the essays themselves (we had finished 10 essays each from 6 participants, so a total of 60 essays).

I’m not sure the slides themselves tell the whole story very clearly, but I’m happy to answer questions if anyone has any. I’m saving up writing a narrative about the results until we have the full results in (hopefully in a couple of months!).

We’re also putting in a grant proposal to run the study with a larger sample (we didn’t get the grant we applied for last year…we’ll try again this year).

Here are the slides!

Non-disposable assignments in Intro to Philosophy

NoDisposableAssignments

Remixed from two CC0 images on Pixabay: trash can and No symbol

Disposable assignments

In the past couple of years I’ve really been grabbed by the issue of “disposable assignments,” as discussed by David Wiley here:

These are assignments that students complain about doing and faculty complain about grading. They’re assignments that add no value to the world – after a student spends three hours creating it, a teacher spends 30 minutes grading it, and then the student throws it away. Not only do these assignments add no value to the world, they actually suck value out of the world.

A non-disposable assignment, then, is one that adds value to the world beyond simply being something the students have to do to get a grade. A similar idea is expressed by Derek Bruff in a post on the idea of “students as producers”–treating students as producers of knowledge, rather than only as consumers: Bruff talks about students creating work for “authentic audiences,” beyond just the teacher.

Wiley gives an example of a non-disposable assignment: students taking instructional materials in the course (which are openly licensed) and revising/remixing them to create tutorials for future students in the course. Other examples can be found in this growing list of examples of open pedagogy. One that I often hear about is asking students to edit or create Wikipedia articles. Or students could post their work more locally, but still have it be publicly visible, such as what Judy Chan does with her students’ research projects at UBC (click on “team projects” in the various years). Simon Bates has his physics students create learning objects to help their peers (see this story for more).

Students as producers in philosophy courses

I have already started to ask students to do some activities that could add value to the world, whether to their fellow students and/or beyond.

  • In a second-year moral theory course I asked them to sign up for 1-2 days on which to do “reading notes” on the class wiki page: they had to outline one of the main arguments in the text assigned for that day and write down questions for their small group to discuss. You can see those here (organized by group number).
  • In a first-year, introduction to philosophy course I have asked students to:
    • blog about what they think philosophy is, both at the beginning and end of the course–this, I thought, could provide some interesting information to others about what our students think “philosophy” is. I don’t have those blog posts visible anymore because I didn’t ask students if I could keep them posted after the course was finished (d’oh!!).
    • write a blog post describing how they see philosophical activity going on in the world around them, beyond the class–I thought this could be useful to show the range of what can count as philosophical activity. I do still have those posts up (but not for long, because again I forgot to ask for permission to keep the posts up after the course is finished…I will do that this term!): https://blogs.ubc.ca/phil102 (click on “philosophy in the world”)


But now that I’m working on my Intro to Philosophy course for Fall 2015 (see planning doc here), I’m trying to think through some other options for assignments that have authentic audiences and add value to the world. Here are some ideas (not that I’m going to implement all of these; I’m just brainstorming).

  • Editing Wikipedia articles on philosophy
    • This is a big task; it requires that students learn how to do so (not just technologically, but in terms of the rules and practices of Wikipedia), plus determining which articles need editing, etc.
    • I would prefer to start with students creating Wikipedia-style articles on philosophers or texts on the UBC Wiki first. Then other students (in future classes) could edit those, and then maybe eventually we could move to doing something on Wikipedia itself (the content would be good, and maybe students would be motivated to move some of it over to Wikipedia at that point).
  • Creating tutorials or other “learning objects” for their fellow students and for the public
    • As noted above, Simon Bates does this in his Physics 101 course, and I can pretty easily see how one might ask students to do so for basic physics concepts. But why not do so for some basic philosophy concepts too?
      • e.g., find something you find difficult in the course, and once you feel you have a handle on it, create something to help other students
    • could be done in groups (probably best, with a large class like Intro to Phil (150 students))
    • could be text based, but better if also incorporates some other kinds of visual or auditory elements (e.g., a video, or incorporating images, or slides or something)
  • Creating study questions or suggestions of “what to focus on” for the readings
    • students often get lost in reading primary philosophical texts, and I haven’t yet managed to write up study questions or suggestions for what to focus on for each reading. This would definitely be useful to other students.
    • But wouldn’t it be cruel to ask students to do this for later students when I haven’t done it for them myself? and do I have time to do this before the Fall term this year? Unfortunately not.
  • Creating lists of “common problems” or advice for writing, after doing peer review of each other’s work and self-reflecting on their own
    • I do provide quite a lot of writing advice to students, but I wonder if advice coming from students’ direct experience in my courses might be helpful to later students?
  • Creating possible exam questions
    • I ask students to do this informally, in groups, as part of the review for the final exam. But why not formalize this somehow so their suggestions are posted publicly? The course page on the UBC Wiki seems like a good place, at least to start. Then students could see them from year to year.
    • A number of instructors at UBC use PeerWise as a tool for students to ask and answer questions. It seems like an interesting thing, but:
      • It’s not public; but it could be used to generate questions and then the best ones could be made public somewhere
      • It’s limited to multiple choice questions, which I hardly ever use (and never on exams)


Those are my ideas for now. Have any others? Or comments on any of this? Please comment, below!

Rubrics and peer feedback

I’ve been participating in an open, online course called Human MOOC: Humanizing Online Instruction. It’s officially over now, but I’m just completing a couple of final things from it.

One of the sections was on peer review/peer feedback by students of each other’s work. There was a link to a very helpful resource on peer feedback from the teaching and learning centre at Washington University in St. Louis. This page, linked from the previous one, is also very useful: “How to Plan and Guide In-class Peer Review Sessions.” A couple of things struck me about these resources that I wanted to comment on briefly.

What rubric/criteria should students use to do peer review?

On the first resource linked above, the following is stated:

Some instructors ask their students to evaluate their peers’ writing using the same criteria the instructor uses when grading papers (e.g., quality of thesis, adequacy of support, coherence, etc.). Undergraduate students often have an inadequate understanding of these criteria, and as a result, they either ignore or inappropriately apply such criteria during peer-review sessions (Nilson 2003).

The second resource states similarly:

The role of the peer-reviewer should be that of a reader, not an evaluator or grader. Do not replicate the grading criteria when designing these worksheets. Your students will not necessarily be qualified to apply these criteria effectively, and they may feel uncomfortable if they are given the responsibility to pronounce an overall judgment on their peers’ work.

This makes sense, though at the same time it’s troubling: if students can’t understand the rubrics we use to mark their work, then how can they understand why they got the mark they did, or what they need to do to improve? It seems to me the answer here is not to ask students to use a different rubric when doing peer review than the one we use to mark, but to change the marking rubric so that it makes more sense to students (if there are comprehension problems). Now, I haven’t read the work by Nilson cited above, but it would be interesting to look more carefully into what undergraduate students tend to understand or not understand, and why, and then change one’s rubric accordingly.

One way one might do this, perhaps, is to ask them to use one’s marking rubric to evaluate sample essays and then invite feedback on the rubric as/after they are doing this. Then one can maybe catch some of the things students don’t understand before one uses the rubric for marking the essays?

Mock peer review session

The second resource suggests holding a mock session to begin with, which seems an excellent idea. It connects with the importance of training students in peer review before asking them to engage in it on work for the course (as discussed in Sluijsmans et al., 2002).

The idea would be to give them a “fake” essay of a kind similar to what they need to write, give them the peer review worksheet, and ask them to come up with comments on the paper. This can be done individually or in groups. Then, in class, have students give their comments to the whole group and the instructor writes them down on something that can be shown on the screen (or, alternatively, one could have them write the comments on a shared document online so they could be projected easily and the instructor doesn’t have to re-write them!). Then the class can have a discussion on the essay, the comments, and the marking worksheet/rubric, to clear up any confusion or help students improve their comments–e.g., moving from “good introduction” to saying what about the introduction is good, in particular.

This is an excellent idea, and I’m going to incorporate it in my upcoming philosophy class this summer. In Arts One we meet every week to do peer review of essays, in groups of four students plus the prof, so we can help students learn how to do peer review well on an almost one-to-one basis. And, since they do it every week for a year, they get quite good at it after a while (even a very short time, actually!).


Self-assessment

I could have sworn that the resources linked above from Washington University also talked about the value of students doing self-assessment of their own work, but now I can’t find that on those pages. But I was thinking that after they do peer feedback on each other’s work, it would be useful for them to go back to their own work and give feedback on it. It seems to me that after reading and commenting on others’ work, seeing what works and what doesn’t, one could come to one’s own with fresh eyes, having learned from others’ work and also having distanced oneself from one’s own a bit.

I think I’ll try asking students to submit the peer review worksheet on their own essays after doing the peer feedback on others’, when they turn in their drafts post-peer-feedback.


Works cited

Nilson, Linda. (2003). “Improving Student Peer Feedback.” College Teaching, 51(1), 34-38.
Sluijsmans, D. M. A., Brand-Gruwel, S., van Merriënboer, J. J. G., & Bastiaens, T. J. (2002). The training of peer assessment skills to promote the development of reflection skills in teacher education. Studies in Educational Evaluation, 29(1), 23–42. http://doi.org/10.1016/S0191-491X(03)90003-4


Media!

So, this is kind of exciting for me. I’ve been mentioned in some articles and blog posts, and even interviewed!

First, as a result of my presentation at the Open Education Conference 2014 in Washington, DC, I was interviewed by Jenni Hayman of the Open Policy Network about UBC’s Policy 81. You can see a video recording of this interview, which was done via Skype, on the OPN blog.

Most recently, there was a writeup of my research on peer feedback on writing, on the BCcampus website.

And then there was an article about the three Faculty Fellows (including me) with the BCcampus Open Textbook program for 2014-2015.

So it’s not the New York Times or even the CBC, but hey, it’s a start.

Authentic assessment and philosophy

In order to prepare for a meeting of the Scholarship of Teaching and Learning Community of Practice, I recently started reading a few articles on “authentic assessment.” I have considered this idea before (see short blog post here), but I thought I’d write a bit more about just what authentic assessment is and how it might be implemented in philosophy.

Authentic assessment–what

A brief overview of authentic assessment can be found in Svinicki (2004). According to Svinicki, authentic assessment “is based on student activities that replicate real world performances as closely as possible” (23). She also lists several criteria for assessments to be authentic, from Wiggins (1998):

1. The assessment is realistic; it reflects the way the information or skills would be used in the “real world.”

2. The assessment requires judgment and innovation; it is based on solving unstructured problems that could easily have more than one right answer and, as such, requires the learner to make informed choices.

3. The assessment asks the student to “do” the subject, that is, to go through the procedures that are typical to the discipline under study.

4. The assessment is done in situations as similar to the contexts in which the related skills are performed as possible.

5. The assessment requires the student to demonstrate a wide range of skills that are related to the complex problem, including some that involve judgment.

6. The assessment allows for feedback, practice, and second chances to solve the problem being addressed. (23-24)

She points to an example of how one might assign a paper as an authentic assessment. Rather than having students write an essay about law in general (perhaps legal theory?), one might ask them to write an essay arguing for why a particular law should be changed. Or, even better, to write a letter to legislators making that argument (25).

Turns out there are numerous lists of what criteria should be used for authentic assessment, though (not surprising?). I have only looked at a few articles, and only those that are available for easy reading online (i.e., not books, or articles in books, or articles in journals to which our library does not have a digital subscription–I know this is lazy, but I’m not doing a major lit review here!). Here’s what I’ve found.

In Ashford-Rowe et al. (2014), eight questions are given that are said to get to the essential aspects of authentic assessment. These were first developed from a literature review on authentic assessment, then subjected to evaluation and discussion by several experts in educational design and assessment, and then used to redesign a module for a course upon which they gathered student and instructor feedback to determine whether the redesign solved some of the problems faced in the earlier design.

(1) To what extent does the assessment activity challenge the student?

(2) Is a performance, or product, required as a final assessment outcome?

(3) Does the assessment activity require that transfer of learning has occurred, by means of demonstration of skill?

(4) Does the assessment activity require that metacognition is demonstrated?

(5) Does the assessment require a product or performance that could be recognised as authentic by a client or stakeholder? (accuracy)

(6) Is fidelity required in the assessment environment? And the assessment tools (actual or simulated)?

(7) Does the assessment activity require discussion and feedback?

(8) Does the assessment activity require that students collaborate? (219-220)

Regarding number 3, transfer of learning, the authors state: “The authentic assessment activity should support the notion that knowledge and skills learnt in one area can be applied within other, often unrelated, areas” (208). I think the idea here is that the knowledge and skills being assessed should be ones that can transfer to environments beyond the academic setting, which is the whole idea with authentic assessment I think.

Number 4, metacognition, has to do with self-assessment: monitoring one’s own progress and the quality of one’s work, reflecting on what one is doing and how it is useful beyond the classroom, etc.

Number 6, regarding fidelity, has to do with the degree to which the environment in which the assessment takes place, and the tools used, are similar to what will be used and how, outside of the academic setting.

The point of number 8, collaboration, is that, as the authors state, “The ability to collaborate is indispensable in most work environments” (210). So having assessments that involve collaboration would be important to their authenticity for many work environments. [Though not all, perhaps. And not all authentic assessment needs to be tied to the workplace, right? Couldn’t it be that students are developing skills and attitudes that they can use in other aspects of their lives outside of an educational context?]

Gulikers et al. (2004) define authentic assessment as “an assessment requiring students to use the same competencies, or combinations of knowledge, skills, and attitudes, that they need to apply in the criterion situation in professional life” (69). They took a somewhat different approach to determining the nature of authentic assessments than that reflected in the two lists above. They, too, started with a literature review, but from that focused on five dimensions of authentic assessments, each of which can vary in their authenticity:

(a) the assessment task

(b) the physical context

(c) the social context

(d) the assessment result or form

(e) the assessment criteria (70)

Whereas the above two lists look at the kinds of qualities an assessment should have to count as “authentic,” this list looks at several dimensions of assessments and then considers what sorts of qualities in each dimension would make an assessment more or less authentic.

So, for example, an authentic task would be, given their definition of authentic assessment as connected to professional practice, one that students would face in their professional lives. Specifically, they define an authentic task as one that “resembles the criterion task with respect to the integration of knowledge, skills, and attitudes, its complexity, and its ownership” (71), where ownership has to do with who develops the problem and solution, the employee or the employer (I think that’s their point).

The physical context has to do with what sorts of physical objects people will be working on, and also the tools they will generally be using. It makes assessments less authentic if we deprive students of tools in academic settings that they will be allowed to use in professional settings, or give them tools in academic settings that they generally won’t have access to in professional settings. Time constraints for completing the task are also relevant here, for if professionals have days to complete a task, asking students to do it in hours is less authentic.

The social context has to do with whether and how one would be working with others in the professional setting. Specifically, if the task in the professional setting would involve collaboration, then the assessment should too, but not otherwise.

The assessment result or form has to do with the product created through the task. It should be something that students could be asked to do in their professional lives, something that “permits making valid inferences about the underlying competencies,” which may require more than one task, with a variety of “indicators of learning” (75).

Finally, the criteria for the assessment should be similar to those used in a professional setting and connected to professional competencies.


Authentic assessment and philosophy

Though Gulikers et al. (2004) tie authentic assessment pretty closely to professional life, and thus what they say might seem to be most relevant to disciplines where professional practice is directly part of courses (such as medicine, business, architecture, clinical psychology, and more), the overview in Svinicki (2004) suggests that authentic assessments could take place in a wide variety of disciplines. What could it look like in philosophy?

I think this is a somewhat tricky question, because unlike some other fields, where what one studies is quite directly related to a particular kind of activity one might engage in after receiving a degree, philosophy is a field in which we practice skills and develop attitudes that can be used in a wide variety of activities, both within and beyond one’s professional life. What are those skills and attitudes? Well, that’s a whole different issue that could take months to determine (and we’re working on some of that by developing program outcomes for our major in philosophy here at UBC), but for now let’s just stick with the easy, but overly vague answers like: the ability to reason clearly; to analyze problems into their component parts and see interrelationships between these; to consider implications of particular beliefs or actions; to make a strong case for one approach to a problem over another; to identify assumptions lying behind various beliefs, approaches, practices; to locate the fundamental disagreements between two or more “sides” to a debate and thereby possibly find a way forward; to communicate clearly, orally and in writing; to take a charitable attitude towards opponents and focus on their arguments rather than the persons involved; and more.

So what could it mean to do a task in philosophy in a similar way, with similar tools, for example, as what one might encounter in a work environment? Because the skills and attitudes developed in philosophy might be used in many different work environments, which one do we pick? Or, even more broadly, since many of these skills and attitudes can be practiced in everyday life, why restrict ourselves to what one might do in a work environment?

Perhaps, though, this means we have a lot more leeway, which could be a good thing. Maybe authentic assessments in philosophy could be anything that connects to what one might do with philosophical thinking, speaking, and writing skills outside of the educational setting. And if several courses included them during a student’s educational career, students could perhaps see how philosophy can be valuable in many aspects of their lives, having done different sorts of authentic assessments applying those skills to different kinds of activities.

When I came up with a couple of possible authentic assessments in philosophy courses last summer, I believe I was thinking along these lines–something that the students would do that would mirror an activity they might engage in outside of class. One, which I implemented this year in my moral theory course, asked students to apply the moral theories we’re studying to a moral dilemma or issue of some kind. This isn’t exactly like an authentic assessment, though, because I’m not sure that I would expect anyone in their everyday lives to read Kant and Mill and then try to apply them to moral dilemmas they face. Maybe some people do, but I’m not really sure that’s the main value of normative moral theories (I’m still working on what I think that value is, exactly).

Another one of the suggested assignments from that earlier blog post was that students would reflect on how they use philosophical thinking or speaking or writing in their lives outside of the course. That one isn’t asking them to do so, though, so it’s not like mirroring a task they might use outside the class; it’s just asking them to reflect on how they already do so.

So I think I need to consider further just what an authentic assessment in philosophy might look like (the one from Svinicki (2004), above, about writing a letter to legislators to change a law is a good candidate), and how I might include one in a course I teach in the future. Possible ideas off the top of my head:

  • Take a discussion of a moral issue (for example) in the media and clearly lay out the positions on the various “sides” and what arguments underlie those. Evaluate those arguments. (We do this sort of thing all the time in philosophy, but not always by starting with media reports, which would be the sort of thing one might do in one’s everyday life.) Or, identify assumptions in those positions.
  • Write a letter to the editor or an op-ed piece about some particular moral or other issue, laying out clear arguments for your case.
  • Participate in or even facilitate a meeting of a Socrates Cafe, a philosophical discussion held in a public place for anyone who is interested to join.
  • Make a case to the university, or your employer, or someone else for something that you’d like to see changed. Give a clear, logical argument for why it should be changed, and how. Students could collaborate with others on this project.

Okay, this is hard.

And it occurs to me that some of what we already do might be like an authentic activity, even if not an authentic assessment. For example, when we ask students to engage in philosophical discussion in small groups during class, this is the sort of thing they might also do in their lives outside of class (don’t know how many do, but we are giving them practice for improving such activities in the future).

Hmmm…gotta think more on this…


Any ideas are welcome, in the comments below!


Works Cited

Ashford-Rowe, K., Herrington, J. & Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, 39(2), 205-222. DOI: 10.1080/02602938.2013.819566

Gulikers, J. T. M., Bastiaens, T. J., & Kirschner, P. A. (2004). A five-dimensional framework for authentic assessment. Educational Technology Research and Development, 52(3), 67-86. Available on JSTOR, here: http://www.jstor.org/stable/30220391?

Svinicki, M. D. (2004). Authentic assessment: Testing in reality. New Directions for Teaching and Learning, 100, 23-29. Available behind a paywall, here: http://onlinelibrary.wiley.com/doi/10.1002/tl.167/abstract

Wiggins, G. (1998). Educative Assessment: Designing Assessments to Inform and Improve Student Performance. San Francisco: Jossey-Bass.

Closing the feedback loop

I attended the biennial meeting of the American Association of Philosophy Teachers July 30-Aug 2, 2014, and got some fantastic suggestions and ideas for future teaching, as I did the last time I attended this conference. The AAPT workshop/conference is easily one of my favourite conferences: it is so friendly, inviting, and supportive, and there are great people to talk to about teaching philosophy as well as about life in general. I haven’t laughed this much, for so many days in succession, for a long time. It’s too bad this meeting is only held every two years, as these are people I’d sure like to see more often!

I’m going to take a few blog posts to write down some of the (many) things that inspired me at this conference that I’d like to try in my own teaching one way or another. There were many more things than I’m going to write about here—I have pages and pages of notes that I typed out during the conference. But in this and a couple of future posts, I’ll focus on just a few.

Broken feedback loop: when did you not respond well to feedback?

Rebecca Scott from Loyola University Chicago facilitated a session on closing the feedback loop, which started off in a really helpful way: she asked us to consider (among other things) times when we received feedback from someone (whether in the context of our academic lives or other aspects of our lives) and didn’t respond in the way that we now think would be most helpful.

Kawazu Loop Bridge, Flickr photo shared by Tanaka Juuyoh, licensed CC BY 2.0

I won’t give details on either situation, but one of them had to do with feedback I received at the end of a course that utterly shocked and floored me. More than one student said that I did something that was so very far from who I think I am that I just couldn’t believe it was true. All I could think of was: “How could someone think I was doing that? There’s no way I did that! They must be wrong.” I didn’t entertain (at first) the idea that the feedback could be right in some way. It just didn’t fit with who I thought I was.

Remembering this situation helped put me into the mindset of students receiving critical feedback (or, at least, it moved me closer to that mindset, I hope): not believing it, getting angry, indignant, even lashing out. When that happens you are not even allowing yourself to think that the feedback might be true; since it doesn’t fit with who you think you are, or your own evaluation of the quality of your work, the truth must be that whoever said it is simply wrong. I’m reminded of Socrates who, at least in Plato’s texts, would show his interlocutors that they didn’t know what they thought they knew; for some, the reaction was to just assume that Socrates must be wrong and to get angry with him.

Why might feedback not be incorporated into future work?

We came up with numerous reasons during the session, which I wrote down:

  • Getting emotional; taking things too personally; losing sight of the goal of feedback
  • Not caring about the work, just trying to get credit
  • Too motivated by grade, not enough by learning
  • Not believing that the feedback is true; e.g., coming into class with the mindset that one is an A student because one has gotten A’s so far, so not believing the instructor who gives a lower mark
  • Distrust of the instructor, institution, due to larger social issues/context
  • Not thinking that you could do any better, that you’re capable of improving even with feedback; including: getting discouraged at how much they have to change and thinking they can’t
  • Not seeing work as formative process; thinking that when the assignment is done you are done and don’t need to revisit it, to learn from it
  • Professor and students seeing different goals for feedback; students might think that feedback is there to explain why they got the grade they did, but for the prof it might be there to show ways to improve
  • Not understanding the feedback
  • Not connecting feedback from past to future situations
  • Thinking that just reading the comments is enough to improve for later
  • Not having a clear idea of what good work looks like to aim for
  • Too much feedback; overwhelmed; don’t know what to do with it

The one that I find hardest to deal with (though many are quite challenging) is the first: the emotional reaction. It kept me from addressing my situation as well as I could have, and I can see how student emotional reactions could lead them to not want to even look at the feedback again or think about it at all.

A reflective assignment to close the feedback loop

Rebecca shared with us an assignment she gives to students that asks them to reflect on the feedback they have received: it requires them to read it, consider it, and reflect on what they want to change in future work based on it. And the first item on that assignment is a question asking them what their immediate reaction was on receiving the feedback. The idea is that if they have an outlet to write it down, to let you know their emotional reaction, this might help them move past it.

But I think the rest of the assignment might help with that too. Because it goes on to ask students to

  • write down how many comments they got in each of several categories (to help them see which areas they need to work on, and to ensure that they read, or at least skim, the comments),
  • what grade they expected, what grade they got, and what they think explains the difference between these,
  • how much of the feedback they feel they understand,
  • what two things they want to work on for the next assignment, and
  • whether they have any questions or comments about the feedback they received.

How might all of this help with the emotional reaction issue? Besides making them continue to think about the feedback even if they get angry instead of just ignoring it, it also gives them a chance to give feedback on the feedback, to try to figure out what could explain the difference between the grade they expected and the grade they got, which could include thinking about the feedback and how it might suggest that the grade makes at least some sense. Or, if they disagree with the feedback, it gives them an outlet to do so, and the instructor can follow up with them later to discuss the issue.

How I’d like to adapt this assignment, and also address a couple of the other problems above

I like this idea of a reflection on the feedback that students submit to the instructor, but I also want them to have a kind of running record of the feedback they've received: the 2-3 things they want to work on for next time, what they did well and want to keep doing, and so on. In addition, I want to make sure they have to look back at this feedback for the next paper they write.

So, here’s an idea.

1. For the Arts One course I teach, in which students write a paper every 2 weeks (12 over the course of a year), I think I’ll ask them to include on each new essay:

  • a list of at least two things they tried to do better on this one, based on feedback from the last one
  • at least one thing from their previous essay that no one else pointed out, which they either think was good or would like to improve on
    • this is so that they don't just look back at the feedback but also at the previous essay itself, and do some self-assessment of what they think of it

2. I would also like to institute a policy for my own feedback: I will point out one or two instances of a certain type of mistake and ask them to look for more instances (if I saw more in the essay, that is). Then, also on the next essay:

  • Point out at least one other place in the previous essay where one of my comments also applies.
    • This is again so that they need to do some self-assessment of their work, and so I don't need to go through and point out every single mistake. I think this could help with the issue of being overwhelmed by too much feedback.

3. Finally, I think it would be great if they could keep a digital learning log, where for each essay they track: the comments they've received from peers, at least two things from my feedback that they want to work on, and the things they're doing well and want to keep doing. That way they have a running record, and periodically I can ask them to reflect on whether there are any patterns or repeated comments, or whether they're improving because certain sorts of comments no longer appear.

All of this could help with the problem of students not connecting feedback on previous work to later work. But I still have to figure out whether it adds too much work for students, or whether it's pedagogically valuable enough to be worth the extra effort.

Back to when I didn’t respond well

At first, I just shut down, so I can understand when students do the same. I didn't want to think about the feedback; I just wanted to move past it. But I did eventually do something: I emailed all my students and asked them to fill in another feedback form, anonymously, that would go just to me. I asked them to be as specific as possible, since I hadn't gotten quite enough detail the first time. The second round gave me a few more details, which helped me understand some of the concerns expressed and how students may have come to the conclusions they did (and even that I might have been unconsciously doing some of what they thought, though I'm still reluctant to believe that). But not entirely. I think there was some miscommunication somewhere that I just can't rectify now.

All the more reason to give students more chances to give feedback during the course, so problems can be addressed earlier! (I only asked once, during the first term, and not at all during the second: lesson learned!)