AI & philosophical activity in courses, part 2

Introduction

This is part 2 of my discussion of ways to possibly use AI tools to support philosophical activities in courses. In my part 1 blog post I talked about using AI to support learning about asking philosophical questions, analyzing arguments, and engaging in philosophical discussion. In this post I focus on AI and writing philosophy.

Caveats:

There are a lot of resources out there on AI and writing, and I’m purposefully focusing largely on my own thoughts at the moment, though many of those have likely been influenced by the many things I’ve read so far. I may include a few links here and there, and in other blog posts I may review and discuss ideas from others on AI and writing that may be relevant for philosophy.

In this post I’m not going to focus on trying to generate AI-proof writing assignments, or on ways to detect AI writing…I think both are very challenging and likely to change quickly over time. My focus is on whether AI may be helpful for learning to write, not on AI and academic integrity (though the latter is also very important!).

Note that by engaging in these reflections I’m not saying that use of generative AI in courses is by any means non-problematic. There are numerous concerns to take into account, some of which are noted in a newly-released set of guidelines on the use of generative AI for teaching and learning that I worked on with numerous other folks at our institution. The point here is just to focus on whether there might be at least some ways in which AI might support students in doing philosophical work in courses; I may not necessarily adopt any of these, and even if I do there will be numerous other things to consider.

I’m also not saying that writing assignments are the only or best way to do philosophy; it’s just that writing is something that characterizes much of philosophical work. It is of course important to question whether this should be the case, and consider alternative activities that can still show philosophical thinking, and I have done that in some courses in the past. But all of this would take us down a different path than the point of this particular blog post.

Finally I want to note that these are initial thoughts from me, not settled conclusions. I may and likely will change my mind later as I learn and think more. Also, a number of sections below are pretty sketchy ideas, but that’s because this is just meant as a brainstorm.

To begin:

Before asking whether/how AI might support student learning in terms of writing philosophy, I want to interrogate for myself why I ask students to write in my philosophy courses, particularly in first-year courses. After all, in my introductory-level course, few students are going to go on to write specifically for philosophy contexts; some will take other philosophy courses, but many will not, and even fewer will go on to grad school or to do professional philosophy.

Why do I assign writing in philosophy courses?

What do I hope students in my introduction to philosophy courses will get out of writing philosophy essays, or other philosophical writing tasks? And might AI help support their learning in any of these areas? There are numerous ways that AI can support the writing process, from brainstorming to research support to generating outlines to providing feedback, and more. I’m interested in reflecting on how these may support learning in the areas below (or not).

Note: none of this is meant to say that writing is the only way to accomplish the goals below; clearly it’s not! I’m just reflecting on why I have assigned writing and whether AI may help in some way with that.

Learning to write

Learning how to write a well-crafted philosophical passage, blog post, or essay is useful if one is going to go on and do such things later, perhaps in other courses, in graduate school, or in research, though again that’s not many students. I’d like to think that the skills we teach in writing philosophy, with their focus on clarity in argumentation, can transfer to other subjects as well. If so, learning to create a well-crafted piece of philosophical writing could be helpful for writing in other courses, and possibly also in employment contexts.

AI and learning to write

Well, this is a huge topic, and one that could use a fair bit of research I think. AI can certainly do writing, but can it help support students in learning how to write themselves? I haven’t yet looked into the literature to see if there is empirical research on this, though it’s early days yet. In addition, though I don’t know for sure what the future may bring, I believe at least some level of skill with writing with AI is likely to be useful for students in the future.

In terms of learning how to write a philosophy essay or other piece of writing, and also developing writing skills that could be useful in other contexts, one thing I think might be useful is to use an AI application as a kind of peer reviewer that could take a marking rubric and provide feedback on a draft. One would have to think about privacy and student IP, of course, and some students may not be willing to submit their work to an AI platform.
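As a rough sketch of what rubric-based AI feedback might look like, here is a hypothetical prompt-builder. The rubric criteria, function name, and prompt wording are my own inventions for illustration, not any particular course’s rubric or platform’s implementation:

```python
# Sketch: compose a peer-review-style feedback prompt from a marking rubric.
# The rubric below is a made-up example, not an actual course rubric.

RUBRIC = {
    "Thesis": "States a clear, arguable philosophical thesis.",
    "Argument": "Supports the thesis with clearly structured reasoning.",
    "Objections": "Considers at least one serious objection and responds to it.",
    "Clarity": "Writing is concise and key terms are defined where needed.",
}

def build_feedback_prompt(draft: str, rubric: dict) -> str:
    """Return a prompt asking an AI to act as a peer reviewer against the rubric."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in rubric.items())
    return (
        "You are acting as a peer reviewer for a first-year philosophy essay.\n"
        "Give constructive feedback on the draft below, organized by these rubric criteria.\n"
        "Do not rewrite the essay; point out strengths and areas to improve.\n\n"
        f"Rubric:\n{criteria}\n\nDraft:\n{draft}"
    )

prompt = build_feedback_prompt("Socrates argues that...", RUBRIC)
```

The resulting string could then be sent to whichever chat model one has access to; the key design choice is instructing the model to comment on the draft rather than rewrite it, so the revision work stays with the student.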

This does have the benefit that it may be easier to hear constructive criticism from an AI than from a person you will be interacting with in the class, even if the feedback is anonymous. E.g., Chris Mayer argues in “Navigating the New Frontier of Generative AI in Peer Review and Academic Writing” that “The emotional neutrality of a machine-based augmented peer review using ChatGPT could mitigate some of the anxiety-inducing elements of a human-to-human peer review.” Mayer also notes, however, that a downside of AI-assisted feedback is the perpetuation of a focus on Standard Academic English.

Having AI provide feedback instead of peers could mean students don’t gain a learning benefit of providing feedback on others’ work, which I believe can be a useful means of learning (see, e.g., the first in a series of my blog posts on empirical research about the value for students themselves of providing peer feedback). Perhaps this could be done by students providing feedback on an AI-generated piece of writing instead.

What about asking an AI system to rewrite parts of a draft, say to make it more concise and less wordy, or to make it sound more formal for an academic audience, and the like? In some courses the skills to be able to do that are really important for students to learn. For me, I’m more focused on the ideas and less on the presentation or tone. Still, my question here is whether a use of AI may be helpful for learning, and would this be the case?

I suppose it’s possible to learn from example by outsourcing work like this to an AI and reviewing the end product, particularly if students are asked to reflect on the changes and whether/why they think these strengthen the essay. At the same time, having an AI do this work could also lead to little motivation to learn to do it oneself. Though then again, the future may be such that it is common to have AI applications do this work for one!

Writing to learn

Many folks have heard the old adage (and some have also experienced) that you really come to understand something better when you teach or explain it to others. And summarizing what one has read, heard, or watched can, I think, be a good way to get a better grasp on the material. So writing can be a way to support learning about the questions and topics we’re discussing, by clarifying them for oneself.

AI and writing to learn

If part of the idea of asking students to write is so that they can better learn course material by explaining it, then outsourcing that entirely to AI can hinder that goal. Perhaps, though, one could use AI to generate questions to help start some writing that could summarize/explain course topics and materials.

Writing to analyze and synthesize the views of others

After doing research on a topic, writing can be a way to analyze and synthesize results. In philosophy, this frequently looks like summarizing the views of others, comparing and contrasting, engaging in critique, noting gaps in what has been said so far, and the like. In a philosophy essay this is frequently done before one then offers one’s own view.

AI and writing to analyze others’ views

AI can be pretty good at summarizing others’ work (depending on the model), and this could be a time-saver for students, say, for literature reviews or short summaries of views before getting to the focus of one’s own argument. Having an AI do this entirely, though, loses the opportunity to write to learn, as noted in the previous point. And there is still, at least for now, the issue of mistakes coming from the LLMs, and basing one’s own argument on an interpretation of others’ work that is incorrect would be very problematic.

One could also ask an AI to come up with criticisms of a text or other work, though this misses out on the benefit of learning to do that oneself and practicing critical thinking. Perhaps one might come up with some critiques and then ask a chatbot for feedback on those, and for points one may not have thought of.

Note also that copyright and intellectual property come into this one: though this is a fraught area, there is at least the possibility that feeding copyrighted works into an AI tool without permission, particularly if those works are going to be used for training, may run afoul of IP rights.

Writing to think and crystallize our own views

Then there’s the idea that writing can help thinking, as I discussed in my previous post on this topic. That is at least true for me, though I don’t want to assume it helps everyone. Indeed, that’s why I blog…to better clarify my own ideas.

Someone in the AI in Education google group recently pointed to an article by Peter Elbow on teaching thinking through teaching writing: “Teaching Thinking by Teaching Writing,” published in Change magazine in 1983, which talks about how different kinds of writing can support different kinds of thinking. A couple of passages stand out to me, relating to the kind of writing and thinking we often associate with essays in university courses:

By writing down our thoughts we can put them aside and come back to them with renewed critical energy and a fresh point of view. We can better criticize because writing helps us achieve the perennially difficult task of standing outside our own thinking.

Since we are trying for the tricky goal of thinking about our subject and thinking about our thinking about it, putting our thoughts on paper gives us a fighting chance. But notice that what most heightens this critical awareness is not so much the writing down of words in the first place, but the coming back to a text and re-seeing it from the outside (in space) instead of just hearing it from the inside (in time). (p. 38)

This resonates with me, and argues not just for writing a first draft but for revising (which many of us encourage and sometimes require in courses). Writing, then, may be a way to better surface and clarify one’s own views for oneself, so they can then be communicated to others (sometimes in the same piece of writing, sometimes in a different one).

AI and writing to think

This one seems challenging to support with AI, since the whole point is to reflect on and impact one’s own thinking. But I still have a couple of ideas.

A number of folks who discuss using generative AI for writing talk about possibly inviting students to use it for idea generation/brainstorming what they may want to write. This could, I think, be very helpful for those who struggle with even getting started. A worry, though, is that this may take away the opportunity to guide the writing through one’s own thoughts and ideas. This is part of the reason Leon Furze questions the idea of having AI write a first draft (which of course is different than brainstorming ideas, but some similar concerns hold): “When students use AI to generate a first draft, they … [create] something that may well be worth sharing, but which has not in any way helped them form and make concrete their own understanding.” Still, it may be possible to use AI for high-level brainstorming that one then shapes with one’s own thoughts.

In addition, I am excited by a project that some colleagues at UBC have worked on, a WordPress plugin called “Socrates.” This uses a chatbot interface connected to an LLM with instructions to prompt students with questions, to help them crystallize and clarify their own views as a preface to beginning to write an essay or other writing assignment. It is specifically directed to only ask questions, inviting students to dig deeper into a topic and express what they think, rather than the chatbot providing any answers itself. I love this idea and am looking forward to experimenting with the platform to see if I might use it in an upcoming course.
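A minimal sketch of the question-only behaviour such a tool aims for might look like the following; the system prompt and code here are my own guesses at the general approach, not the actual “Socrates” plugin implementation:

```python
# Sketch of a question-only Socratic chat setup, inspired by (but not taken
# from) the UBC "Socrates" plugin described above.

SYSTEM_PROMPT = (
    "You are a Socratic writing assistant for a philosophy course. "
    "Your job is to help the student clarify their own view before they write. "
    "Only ever respond with questions: never state answers, positions, or arguments. "
    "Ask one open question at a time that probes the student's last message."
)

def make_messages(history: list) -> list:
    """Prepend the question-only system prompt to a chat history."""
    return [{"role": "system", "content": SYSTEM_PROMPT}] + history

messages = make_messages([
    {"role": "user", "content": "I think free will is an illusion."}
])
```

The message list could then be passed to a chat model; the point of interest pedagogically is that all of the substantive content still has to come from the student, since the system prompt constrains the model to asking questions.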

Another way AI could help is in refining or expanding one’s ideas. E.g., if one has some fragments of ideas and just isn’t able to put them together in a connected fashion, maybe an AI could add some organization and clarity to the fragments, and then students could judge whether the connections made fit their ideas. This would be similar to the “compiler” from Goblin Tools, which takes a “braindump” and creates a to-do list out of it; only this would take a braindump and attempt to find connections. Alternatively, one could put some high-level ideas into a chatbot and ask for ways these might be expanded, so that one is still starting from one’s own ideas.

Writing to communicate one’s views

In philosophical essays, it’s common to start by summarizing and analyzing arguments from others, and then evaluating those and sometimes presenting alternatives from one’s own thinking. Sometimes we talk to students about including their own, original ideas, and during a session on AI and writing that I attended at UBC recently, the presenters emphasized that students may not know what we mean by “originality.”

This is an excellent point, since if you take it far enough there is little that would be purely original, given our interactions with other people and with writing and other media from which we may glean ideas without even realizing it, much less citing them. Plus, for introductory-level courses, I’m not looking for originality exactly (it doesn’t matter to me whether no one has said it before) but rather that the ideas emerge from your own brain somehow, even if influenced by others (as they always will be). It’s important to explain that to students, because otherwise I may be causing stress by setting a standard that seems impossibly high!

Then again, why does it matter to me that students get practice in expressing their own views rather than just sharing possible views one could hold? After all, another feasible activity would be for students to take up a position and argue for it even if it’s not their own, to practice engaging in philosophical activity. Partly it’s because students can come up with really interesting ideas and arguments that I and others can learn from (though they could do so even if they themselves don’t subscribe to these). Partly it’s because I think it can be enriching to become clearer about one’s own views and how they impact one’s actions, and to come to understand that one has really valuable things to say and share with others.

I vividly remember a professor’s comments on a research essay I did as an undergrad in a philosophy course. Apparently I had done an excellent job of summarizing others’ views that I had researched, which she complimented, but she also said I could add more of my own views in, as that was the really interesting and important part. This was quite validating for me, though I can imagine it might be intimidating for some students, particularly if these views are then going to be evaluated either by peers, a TA, or the professor. So I do have a bit of ambivalence about emphasizing the value of expressing one’s own view, and of course I would never know if students actually did so or not.

AI and writing to communicate one’s own views

Here is one where outsourcing the writing entirely to AI defeats the purpose, if one thinks there is value in having students learn to communicate their own views to others. Some combinations of the ideas above might still work, though: e.g., asking a chatbot to pose questions that help clarify your own view, asking it to help organize your thoughts, or asking it for feedback to refine your writing after you’ve written a draft. All of these have in common that you’re not asking the bot to generate views for you; you are contributing the ideas yourself.

The process of writing

Reflecting on the above list of things I hope students can learn by practicing writing in philosophy, it’s clear that the main things I’m after have to do with the work happening within and for students themselves, which is more tied to the process of writing than the end product. And to learn many of the things I’m hoping students will learn through writing assignments, they need to engage in much of the process of writing.

This is a fairly common theme I’ve heard in discussions of writing and AI, that the product is less important for a number of folks than what students learn during the process. And so some conversations have turned towards trying to have students provide evidence of their process along with the finished product. E.g., they could create a kind of portfolio with drafts, feedback and responses to feedback, conversations with chatbots if relevant, etc.

It seems intuitively right to me that documenting one’s process could help with learning, because one would then be encouraged to reflect more on one’s writing, but I haven’t looked into any research on this. And Tim Fawns recently asked some good questions in a LinkedIn post: “Are we equipped to evaluate the quality of processes of learning?” “Can we extrapolate from learning processes what students have learned or what they can do?” What might it look like to assess the process of learning from, in my case here, writing? I think these are important questions to consider, and I don’t have answers yet.

One last thought to conclude is an area I’d like to look into more: perhaps alternative grading approaches may help with a focus on process, and on students’ metacognition about their own work. This is an area I’ve been meaning to dive into more and haven’t yet!

 
