Category Archives: AI

AI & philosophical activity in courses, part 2

Introduction

This is part 2 of my discussion of possible ways to use AI tools to support philosophical activities in courses. In my part 1 blog post I talked about using AI to support learning about asking philosophical questions, analyzing arguments, and engaging in philosophical discussion. In this post I focus on AI and writing philosophy.

Caveats:

There are a lot of resources out there on AI and writing, and I’m purposefully focusing largely on my own thoughts at the moment, though many of those have likely been influenced by the many things I’ve read so far. I may include a few links here and there, and use other blog posts to review and discuss ideas from others on AI and writing that may be relevant for philosophy.

In this post I’m not going to focus on trying to generate AI-proof writing assignments, or on ways to detect AI writing…I think both are very challenging and likely to change quickly over time. My focus is on whether AI may be helpful for learning through writing, not on AI and academic integrity (though that is also very important!).

Note that by engaging in these reflections I’m not saying that use of generative AI in courses is by any means non-problematic. There are numerous concerns to take into account, some of which are noted in a newly-released set of guidelines on the use of generative AI for teaching and learning that I worked on with numerous other folks at our institution. The point here is just to focus on whether there might be at least some ways in which AI can support students in doing philosophical work in courses; I may not necessarily adopt any of these, and even if I do there will be numerous other things to consider.

I’m also not saying that writing assignments are the only or best way to do philosophy; it’s just that writing is something that characterizes much of philosophical work. It is of course important to question whether this should be the case, and to consider alternative activities that can still show philosophical thinking, and I have done that in some courses in the past. But all of this would take us down a different path from the point of this particular blog post.

Finally I want to note that these are initial thoughts from me, not settled conclusions. I may and likely will change my mind later as I learn and think more. Also, a number of sections below are pretty sketchy ideas, but that’s because this is just meant as a brainstorm.

To begin:

Before asking whether/how AI might support student learning in terms of writing philosophy, I want to interrogate for myself why I ask students to write in my philosophy courses, particularly in first-year courses. After all, in my introductory level course, few students are going to go on to write specifically for philosophy contexts; some will go on to other philosophy courses, but many will not, and even fewer will go on to grad school or to do professional philosophy.


AI & philosophical activity in courses, part 1

I was reading through some resources on the Educause AI … Friend or Foe showcase, specifically the one on AI and inclusive excellence in higher education, and one thing in particular struck me. The resource talks, among other things, about helping students to understand the ways of thinking, speaking, and acting in a particular discipline, about making that clearer, and about whether AI might support this in some way.

This resonates with some ideas that have been bouncing around in my head the past few weeks on whether/how AI might help or hinder some of the activities I ask students to do in my courses, which led me to think about why I even ask them to do those activities in the first place. And thinking about this from a disciplinary perspective might help. What kinds of activities might be philosophical? And I don’t mean just those that professional philosophers engage in, because few students in my courses will go on to be professional philosophers, but I believe all of them will do some kind of philosophical thinking, questioning, and discussing at some point in their lives.

So what might it mean to engage in philosophical activities and can AI help students engage in these better in some way, or not? This is part one of me thinking through this question; there will be at least a part two soon, because I have enough thoughts that I don’t want to write a book-length blog post…

Asking philosophical questions

This is something all philosophers do in one way or another, and something I think can be helpful for many people in various contexts. And yet I find it challenging to define what a philosophical question is, even though I ask them all the time. I don’t teach this directly, but I should probably be more deliberate about it, because I do think it would be helpful for students to be able to engage in this activity more after the class ends.

This reminds me of a post I also read today, this time by Ryan J. Johnson on the American Philosophical Association blog called “How I Got to Questions.” Johnson describes a question-focused pedagogy, in which students spend a lot of their time and effort in a philosophy course formulating and revising questions, only answering them in an assignment towards the end. Part of the point is to help students to better understand over time what makes a question philosophical through such activities.

Johnson credits Stephen Bloch-Schulman in part, from whom I first heard about this approach, and who writes about question-focused pedagogy in another post on the APA blog. Bloch-Schulman did a study showing that, when reading the same text, philosophy faculty used questions more often and in different ways than undergraduates and faculty from other disciplines. I appreciated this point (among others!):

I believe that much of the most important desiderata of inclusive pedagogy is to make visible, for students, these same skills we hide from ourselves as experts, to make the acquisition of these skills as accessible as possible, particularly for those students who are least likely to pick up those skills without that work on our part. Question-skills being high on that list. (Introducing the Question-Focused Pedagogy Series)

One step for me in doing this more in my teaching would be to do more research and reflecting myself on what makes some questions more philosophical than others (Erica Stonestreet’s post called “Where Questions Come From” is one helpful resource, for example).

AI and learning/practicing philosophical questions

But this post is also focused on AI: might AI be used in a way to help support students to learn how to ask philosophical questions?


Values-based tugging

Okay, so the title of this post may seem a little strange but bear with me. Yesterday I listened to a fantastic session by Dave Cormier for the DS106 Radio Summer Camp this week, called “A year of uncertainty – fighting the fight against the RAND corporation.” I wasn’t entirely sure what to expect, as I hadn’t managed to find the abstract/description of this session until after it was over (click on the session link on the schedule for the summer camp), but I knew Dave is amazing, so of course I had to listen! And it was very thought-provoking as I figured it would be.

Problem solving and uncertainty

One of the main points Dave was making was how many aspects of our social, political, educational, and other lives are focused on problem-solving, on addressing well-defined problems that have well-defined answers we just need to work hard to find. This is not necessarily a problem in itself, Dave noted, as such problems do exist and there can be very useful methods for working to address them. The issue is when we focus on those to the point of ignoring the less easily defined problems, the messier issues, the more uncertain situations where a single right answer is not going to be forthcoming no matter what kinds of problem-solving methodologies we throw at them.

Dave mentioned medical students coming out of their education into practice who, when confronted with complex, uncertain, grey areas where a medical solution isn’t immediately forthcoming, tended to blame themselves, as if it were their failure for not finding an answer where none was to be found. He also noted how, at least in English, it is common, when someone asks a question like “what is your view of X,” or “is Y right or wrong,” to feel like you have to answer, even if you aren’t sure, or there isn’t a clear-cut answer. It’s just part of the accepted norms of speaking that you should have an answer.

Both of these resonated with me, and perhaps especially the second; I have sometimes been asked, in various contexts, to provide my view on something that is of a more uncertain nature, or to say if I think it’s right, or to say what I think the future will bring, and I do feel pressured to respond. But maybe because of my background in philosophy I’m actually pretty comfortable with saying that I am not sure, or I’d need to look into it more, because such situations really do require more thought, research, reflection before coming to a conclusion.

There is the danger of jumping in too quickly with an answer, but there is also a danger in spending too much time in the thinking and reflection and not moving past that towards making some kind of decision or other. And sometimes I get stuck in that latter stage when faced with really complex issues–there is so much to consider and so much value in multiple perspectives that it can be hard to “land” somewhere, as it were. It’s tempting to remain up in the air, not being sure which alternative is best (because there are no easy answers).

Landing on values and pulling from there

I really appreciated where Dave landed in his presentation: rather than only feeling stuck, suspended, we can consult our values and make a move based on those; we can tug the rope in a tug of war in the direction of our values and work to move things from there. The focus on values is key here: ask yourself what your values are as they relate to this situation, and make decisions and act based on those, knowing that’s enough in uncertain situations. That doesn’t mean, of course, that you can’t revisit your values and how they apply to the situation if either of those things changes, but it’s a landing place, and it’s solid enough for the moment. He talked about how we can have conversations with students and others about why we would do something in a particular situation, rather than what the right answer is, focusing on the values that are moving us.

To do so requires that we are clear about what our values are, which is in some cases more easily said than done. This is something near and dear to my heart as a philosopher, as trying to distill what is underlying our views and our decisions, what kinds of reasons and values, is part of our bread and butter. But when I reflect on how I’ve taught over the years, I’m not sure I’ve focused as much as I could have on helping students be clear about their values, instead focusing quite a bit on the “content” of the course. The latter has been in the service of helping students understand that when we make ethical choices there are (or should be) reasons behind those, and some options as to what kinds of reasons those could be. I, like many other philosophers, have then also asked students to provide their own arguments related to various ethical and other philosophical questions, which does at times mean providing reasons based on values. But how much have I really spent time supporting students to define and articulate their own values, in addition to applying them through writing arguments? I’m not sure, and this session was really generative for me in thinking about that (as well as being generative in multiple other ways!).

A couple of years ago I wrote a blog post as part of MYFest 2022, talking about how I had a hard time imagining a more just future for education simply because I kept focusing on all of the structural complexities involved in educational systems and how changing one thing would require changing many more interconnected aspects and … it all felt pretty overwhelming. The metaphor I used was of rocks and boulders, which came to me as I was passing multiple rock formations on a walk. Some piles of rocks are fairly easy to move; others are locked into network-like shapes where moving one would require moving all the others, and they are after all very heavy. If I think in these terms then of course it’s hard to imagine change. Things are literally set in stone!


But what if we thought about complex issues and structures more like flexible webs? (Which is an image that reminds me of other work of Dave Cormier’s, such as that on rhizomatic learning.) Then if you tug on one part it can still move, and the other parts will move as well (or break, I suppose, which in some cases may not be a bad thing).

This feels more hopeful to me–it still respects the interconnectedness of structures but also notes there can be some movement, some wiggle room. Perhaps the spider web is too flexible to respect the challenges of moving some of the more entrenched structures, though. Even though spider silk is incredibly strong, it seems a bit too easy to just sweep away with the swoosh of one’s hand.

How about a net:

This feels stronger and, like a spider web, is meant to catch and hold things tight, but it can still be moved, shaped, morphed, or even broken. I like the image above because a piece of the net is fraying, suggesting its fragility amidst the otherwise tight knots.

A line that Dave ended on will stick with me: “Ask yourself what you care about, and then do what you can.” That feels empowering.

Applying to AI

One of the things that feels uncertain to me in this moment is where things are going with AI, what the future holds, and what the best approaches are to using AI (or not!) in education. How might those of us who are educators address the question of whether and/or how to adopt AI in our courses and teaching practices, whether to encourage our students to use it, etc.? Of course, all of this is going to differ according to context, discipline, teaching and learning goals, and more. But I think Dave’s session provides a fruitful way to approach this question. This is a complicated and uncertain situation, but what we can do is consult our values: what do we value, what do we care about, what do we want to promote and avoid?

This may seem fairly elementary in a way–might we already frequently act from our values? Maybe, but there are also times when I know I have done things in teaching because they just seemed like the usual thing to do, what I had experienced, things that just seemed right and “normal,” but when I took a step back to think about my values and what I care about, things changed. For example, I used to get upset when people would leave during the middle of class, until I reflected on how I care about supporting students to learn in the ways that are helpful for them, coupled with learning about how some students need to move around, take breaks from stimulation, or leave for other reasons. It’s still not easy, especially in small courses, but I’m focusing less on how I feel in that situation and more on how being able to take a break may be more helpful for some students than sitting in one place for 50-80 minutes.

Perhaps the key is that previous point: taking some time to reflect on one’s values, what is important, what one cares about, and then applying them to one’s teaching practice. At one point last year I took the time to write out my values in terms of leadership, and that was immensely helpful in focusing my attention on areas I wanted to work on. I may have acted from some of those automatically, but bringing them to the surface helped me not only see what was grounding some of my actions but also where I professed values that my actions could do a better job of supporting.

Now, this process isn’t going to lead to easy answers (there are none for the kinds of issues Dave was talking about), and our values may lead to conflicting viewpoints. For example, I care about allowing students to use technology that will support their learning, and I think that generative AI may be helpful for student learning in some cases–I’ve been looking into how it can support students with various disabilities, for example, and should blog about that later. Then there is the value of equity and how not all students have equal access to generative AI tools, so some may get supports that others don’t. But digging into what one values can help clarify why to go one direction or another, putting one on more solid footing while starting to tug, even if one isn’t entirely sure that is the best direction. It is the best one for this moment while recognizing the complexity that makes it a difficult, but at least grounded, choice.

And if we go with the net metaphor, then tugging in one place can pull other threads, moving things in a local area to start, and maybe in larger areas over time, particularly if more people are tugging in similar directions (organized action, for example). One person can make a difference, but it is more likely that many, working together, can make a larger difference. And we may fray that net to the point of finding ways to morph or break some of the confining structures we find ourselves in.

All of this is also bringing to mind the idea of “entangled pedagogy” from Tim Fawns, which I wrote a blog post about in 2022. Rather than reviewing that blog post, I’ll just say that he offers an aspirational view in which we focus on purposes, values, and contexts as entangled with both pedagogy and technology. Rather than trying to emphasize pedagogy over technology or vice versa, or even focusing on how they are connected to each other, we focus instead on the purposes and values we have in teaching and learning, and the specificity of our contexts, and on how those can shape our choices in both pedagogy and technology (and how they intertwine).

In a quote that resonates with some of what Dave said and what I’ve written here, Tim notes:

Attending to values, purposes and context can help us identify problematic assumptions, such as those embedded in simple solutions to complex problems, reductive characterisations of students (e.g. as ‘digital natives’, see Oliver 2011), or assertions that teachers should conform to modern digital culture and practices (Clegg 2011).

Conclusion

I really appreciated the opportunity to participate in this session with Dave. There was a lot more than what I’ve been able to talk about above, so I highly suggest you listen to the recording when it’s posted on the DS106 Radio Summer Camp recordings page (and check out other fantastic sessions while you’re at it of course!). Big thank you to Dave for a thought-provoking session!

Principles of ethics in Ed Tech & AI (running list)

I’m going to use this post just to note a few resources on ethical principles around educational technology that I haven’t yet discussed in the series I’ve been writing about ethics & ed tech so far. I will at some point get around to writing about these, or at least synthesizing them with others I’ve reviewed so far.

This post will be updated over time. It’s meant as a way for me to keep track of things I want to look into more carefully and/or collate with other principles. Eventually I’d like to map out common ones and pay attention to those that are not commonly included in sets of already-existing principles as well.

I also have a Zotero library on the ethics of educational technology and artificial intelligence that I keep updated.

Ethics in Ed Tech

Ethical Ed Tech Workshop at CUNY

Information and resources for a workshop on Ethical Approaches to Ed Tech, by Laurie Hurson and Talisa Feliciano, as part of a Teach@CUNY 2020 Summer Institute. This web page includes a handout for workshop participants that lists the following categories of questions to ask in regard to ethics & ed tech:

  • Access
  • Control
  • Data
  • Inclusion
  • Intellectual Property & Copyright
  • Privacy
  • Source

See the handout for more details!

UTS Ed Tech Ethics Report

The University of Technology, Sydney, went through a deliberative democracy process in 2021 to address the following question:

What principles should govern UTS use of analytics and artificial intelligence to improve teaching and learning for all, while minimising the possibility of harmful outcomes?

A report on the process and the draft principles was published in 2022. The categories of principles in that report are:

  • Accountability/Transparency
  • Bias/Fairness
  • Equity and Access
  • Safety and Security
  • Human Authority
  • Justifications/Evidence
  • Consent

Again, see the report for more details–the principles are in the Appendix.

Ethics in Artificial Intelligence

EU Ethical Guidelines on AI

In October 2022 the European Commission published a set of Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators.

The categories of these principles are:

  • Human agency and oversight
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Societal and environmental wellbeing
  • Privacy and data governance
  • Technical robustness and safety
  • Accountability

See the PDF version of the report for more detail.

UNESCO Recommendations on the Ethics of AI

In 2022, UNESCO published a report about ethics and AI as well. The main categories of their ethical principles are:

  • Proportionality and do no harm
  • Safety and security
  • Fairness and non-discrimination
  • Sustainability
  • Right to privacy, and data protection
  • Human oversight and determination
  • Transparency and explainability
  • Responsibility and accountability
  • Awareness and literacy
  • Multi-stakeholder and adaptive governance and collaboration

Some ethical considerations in ChatGPT and other LLMs

Like many others, I’ve been thinking about GPT and ChatGPT lately, and I’m particularly interested in diving deeper into ethical considerations and issues related to these kinds of tools. As I start looking into this sort of question, I realize there are a lot of such considerations. And here I’m only going to be able to scratch the surface. But I wanted to pull together for myself some ethical areas that I think may be particularly important for post-secondary students, faculty, and staff to consider.

Notes:

  • This post will focus on ethical issues outside of academic integrity, which is certainly an important issue but not my particular focus here.
  • An area I barely touch on below, but plan to look into more, is AI and Indigenous approaches, protocols, and data sovereignty. One place I will likely start is by digging into a 2020 position paper by an Indigenous protocol and AI working group.
  • This post is quite long! I frequently make long blog posts but this one may be one of the longest. There is a lot to consider.
  • I am focusing here on ethical issues and concerns, and there are quite a few. It may sound like I may be arguing we should not use AI language models like ChatGPT in teaching and learning. That is not my point here; rather, I think it’s important to recognize ethical issues when considering whether or how to use such tools in an educational context, and discuss them with students.

Some of the texts I especially relied on when crafting this post, and that I recommend:

And shortly before publishing I learned of this excellent post by Leon Furze on ethical considerations regarding AI in teaching and learning. It makes many points similar to those below, along with teaching points and example ways to engage students in discussing these issues, focused on different disciplines. It’s very good, and comes complete with an infographic.

My post here has been largely a way for me to think through the issues by writing.


Early thoughts on ChatGPT & writing in philosophy courses

Yes, it’s another post on ChatGPT! Who needs another post? I do! Because one of the main reasons I blog is as a reflective space to think through ideas by writing them down, and then I have a record for later. I’m also very happy if my reflections are helpful to others in some way of course!

Like so many others, I’ve been learning a bit about and reflecting on GPT-3 and ChatGPT, and I must start off by saying I know very little so far. I took a full break from all work-related things from around December 20 until earlier this week, and I plan to do some deeper dives to learn more in the coming days and weeks. I should also say that though this is focused on GPT, that’s just because it’s the only one I’ve looked into at this point.

The main reason I’m writing this post is to do some deeper reflection on why I have many writing assignments in my philosophy courses, and what I hope they will do for students. And as I was thinking about this, I started reflecting on the role of writing in philosophy more generally, since philosophy classes teach…philosophy.

Academic philosophy and writing

Okay, a whole book could be written about the role of writing in academic philosophy. Here are just a few anecdotal reflections.

Philosophy, as I have been trained in it and practice it in academia, is frequently focused on writing. We also speak, of course, and that’s really important to the discipline as well. Conversations in hallways, in classes, with visiting speakers, at conferences, etc. are all crucial ways we engage in thinking, discussing, and making arguments, as well as critiquing and improving them. This may not be agreed upon by all, but I still think writing is more heavily emphasized. Maybe I think that partly because, for hiring, tenure, and promotion processes, what seems to count most are written works rather than oral presentations, lectures, or workshops. Maybe it’s because most of what we do when we do research in philosophy is read written works by others and then write articles, chapters, or books ourselves.

Oral conversations tend to be places where philosophers test out ideas, brainstorm new ideas, give and receive feedback, iterate, discuss, do Q&A, and communicate (among other purposes). Interestingly, even at philosophy conferences, at least the ones in North America I’ve attended, it’s common to read written works out loud during research presentation sessions. (This is not the case for sessions focused on teaching philosophy, which are often more workshop-like and focused more on interactive activities.) For me it can be very challenging to pay attention for a long time by just listening, and I personally appreciate when there are slides or a handout to help keep one’s thinking on track and following along. Writing again! Oral conversations and presentations are also not accessible to all, of course, and one alternative (in addition to sign language) is writing, either in captions or transcripts.

Writing is also a way that some folks (maybe many?) think their way through philosophical or other arguments and ideas. As noted at the top of this post, this is certainly the case for me. I have to put things into words in order to really piece them together and form more coherent thoughts, and though that can be done orally (say, through a recording device), for me it works better in writing.

From these brief reflections, here are some of the likely many roles of writing in doing philosophy. This is not a comprehensive list by any means! And it’s likely similar for at least some other disciplines.

  • Writing to think and understand: Sometimes summarizing works by others helps one to understand them better (e.g., outlining premises and conclusions from a complicated text, or recording what one thinks are the main claims and overall conclusion of a text). In addition, sometimes writing helps one to better understand one’s own somewhat vague thoughts, to clarify them, delineate them, group them into categories, think of possible objections, etc. (That’s what I’m doing with this blog post.)
  • Writing to communicate: communicating our own ideas and arguments, and taking in those of others by reading them (as one means; communication of philosophical ideas and arguments can happen in other ways too!). Also communicating the ideas and arguments of others, as often happens in lectures in philosophy classes, or when summarizing someone else’s argument before critiquing it and offering a revised version or something new.
  • Writing as a memory aid: Taking notes when reading texts, or listening to a speaker, or during class. Writing down notes to remind oneself what to say when teaching, or giving a lecture or conference presentation, or facilitating a workshop. Writing one’s thoughts down to be able to return to them later and review, revise, etc. (as in the last point).

The point of these musings is that at least in my experience, a lot of philosophical work, at least in academia, is done in or through writing, even though many of us also engage in non-written discussions and communications. And for me, this is important context to consider when thinking about teaching philosophy and writing, and what it may mean when tools like ChatGPT come onto the scene.

Teaching philosophy and writing

I came to the thoughts above because I was thinking about how it is very common in philosophy courses to have writing assignments–frequently the major assignments are essays in one form or another–and I started to reflect on why that might be. It could be argued that writing is pretty well baked into what it means to do (academic) philosophy, at least in the philosophical traditions I’m familiar with. So it could make sense that teaching students how to do philosophy, and having them do philosophical work in class, means teaching them to write and having them write! (Of course, academic philosophy is not all of what philosophy can be…this is another area on its own, but I think at least some of the focus on writing in philosophy courses may be related to its focus in academic philosophy.)

And like many academic and disciplinary skills, it can be helpful to build up towards philosophical writing skills by practising the kinds of steps that are needed to do it well. So, for example, in philosophy courses we often ask students to review an argument presented by someone else (usually in writing) and summarize it, perhaps by outlining the premises and conclusion. Then maybe in a later step we’ll ask them to offer questions or critiques of the argument, or alternative views or approaches, all of which are important parts of doing philosophy in the traditions in which I’m immersed. In later stages or upper-level courses we’ll ask students to do research where they gather arguments from multiple sources on a particular topic, analyze them, and offer their own original contributions to the philosophical discussion.

All of this is similar to the sort of work professional philosophers do in their own research, and to me just seems like natural ways of doing philosophy given my own experience. It’s just that we do it at different levels and often in a scaffolded way in teaching.

However, mostly I teach introductory-level courses, and the number of students who will go on to do any more philosophy, much less become professional philosophers, is relatively small. So personally, I include writing assignments not just because they are part of what it means to do philosophy (though it’s partly that), but also because I think the skills developed are useful in other contexts. Being able to take in and understand arguments by others (whether textual or otherwise), break them down into component parts to support both understanding and evaluation, evaluate them, and revise or come up with different ideas if needed, are, I think, pretty basic and important skills in many, many areas of work and life. I think this (or something like it) may (?) continue to be the case as AI writing tools become more and more ubiquitous, but of course I’m not sure, and that’s a question for further thought.

Process and product

When teaching, what matters much more is the learning and thinking that happens through the process of writing activities. The essay or parts of an essay that result are not the critical pieces. After all, if I ask 100 or more students to analyze the same argument and produce a set of premises and conclusions (for example), the resulting summary/analysis of the argument isn’t the important piece there, especially when there will be many, many of them. It’s the learning and thinking that’s happening to get to that point. The summary is there as a stand-in for the thinking and learning. And in some cases it’s the same for the critiques, feedback, or alternative ideas that students may offer in response to someone else’s argument–what I may care about more is what they’re learning through doing that thinking rather than the specific replies they produce. Many will be really interesting and thought-provoking. Others will be similar across multiple students. Depending on the level of the course and the learning outcomes, all of these may be fine as results; what I care about is that they are putting in the thought and reflection to hone skills of (to use a too-well-worn term) “critical thinking.”

When I think about it this way, I wonder what the purpose is of the actual essay or paragraph or outline of an argument that I assign in courses. It’s often not the actual end product (though sometimes it is, particularly for upper-level or graduate courses). The end product is mostly a vehicle and proxy for me as a teacher to review whether the thinking, reflecting, and learning is taking place.

So, thinking about the several ways writing is used in philosophy noted in the previous section, I think largely I’m assigning writing for the purposes of thinking and understanding, and also communicating–maybe to other students, to me, to TAs, etc. And my assumption, when marking writing, is that the written text is actually communicating the student’s thinking and understanding, that the communication and the thinking are linked.

Teaching writing in philosophy, and ChatGPT

One of the things that the emergence of ChatGPT really emphasizes for me is that that end product isn’t really a good communication vehicle for assessing whether the thinking and understanding have taken place. This really hit home for me through a post on Crooked Timber by philosopher Eric Schliesser. Schliesser notes that several professors have said that the essays produced by ChatGPT are decent enough to earn a passing grade, if not higher. “But this means that many students pass through our courses and pass them in virtue of generating passable paragraphs that do not reveal any understanding,” Schliesser points out.

This made me think: the essay may not only not be a reliable communication of the student’s own thinking (which we knew already due to concerns about plagiarism, people paying others to write their essays, etc.), but may not be communicating thinking and understanding at all. The link between the two can be completely severed. (This is assuming, as I think it’s safe to assume at this point, that tools like ChatGPT are not doing any thinking or understanding…I know this is a philosophical question but for the moment I’m going to go with the seemingly-reasonable-at-this-point claim that they’re not.)

In one respect, this is an extension of previous academic integrity concerns: if what we want to be assessing is the student’s own thinking and understanding, then ChatGPT and the like are similar issues in that a student could submit something that does not communicate their own understanding–it’s just that in this case, rather than communicating the understanding someone, somewhere, at some point had, it’s not communicating understanding at all.

But of course, we have academic integrity concerns for a reason, and for me it’s not just that I want to be able to tie the writing to the individual student for the sake of integrity and fairness of assessment (though that is important too), it’s also that I want to engage students in activities that will develop skills that will be useful to them in the future. And it’s seeming more and more the case that the written texts I have used in the past as a vehicle to review whether they have developed those skills are less and less useful for that purpose.

At the moment, I can think of a few options, some of which could be combined for a particular assignment or class:

  1. Continue to try to find ways to connect the writing students do out of class to themselves–an extension of academic integrity approaches we already have. These can include:
    • using plagiarism checkers (which right now I think do not work with tools like ChatGPT)
    • comparing earlier, in-class writing to later, out-of-class writing
    • quizzing students orally on the content of their written work
    • asking students to do multiple steps for writing assignments, some of which could be done in class, and also asking them to explain their reasoning for the choices they are making (this one from Julia Staffel–see more from her below)
  2. Find other ways for students to show their thinking and understanding than assigning written work done outside of class.
    • E.g., Ryan Watkins from George Washington University suggests (among other things) having students create mind maps (which ChatGPT can’t do … yet?) and holding in-class debates where students could show their thinking, understanding, and skills in communicating.
    • Julia Staffel from the University of Colorado Boulder talks in a video posted on Daily Nous about alternative approaches in philosophy courses, such as in-class essays, oral exams, oral presentations (synchronous or recorded), and assignments based on non-textual sources such as podcasts or videos (but that only works until the tools can start using those as source material).
  3. Use ChatGPT or similar in writing assignments
    • Numerous people have also suggested assignments in which students need to work with ChatGPT; if we think of it like a helper tool that can generate some early ideas for us to build on or critique, or that can provide summaries of others’ work that we can evaluate for ourselves, etc., then we could still be supporting students to build some similar kinds of skills as earlier writing assignments.
    • Still, inspired by a blog post by Autumm Caines, I’m wary of doing this until I look more into privacy implications, who has access to what data and how it’s used. Autumm also talks about the ethics of requiring students to provide free labour to companies to train tools like this. And what happens when the tool or ones like it are no longer offered for free?
    • Finally, since ChatGPT can already mark and provide feedback on its own writing (albeit perhaps not the best feedback), it’s not clear to me that having students use the tool to draft something and then comment on it/revise it is necessarily going to get around the tie-the-work-to-a-mind issue.

A number of the ideas above have to do with doing things synchronously, in a way that the instructor and/or TAs can witness. Some are alternative approaches to providing evidence of thinking and understanding done outside of class that work for now, just based on what the tech can do at the moment. And maybe those will continue to work for some time, or maybe not. It feels a bit like trying to play catch-up with an ever-changing landscape.

I have many more thoughts, but this blog post is already too long so I’ll save them for later. For now, a takeaway is that maybe one of the things I’ll need to do in the future is spend more time in class on activities that develop, and allow students to communicate, the thinking and understanding I’m hoping to support them in. If I have to assess them (which I do), then I’d like to bring the communication and the thinking parts back together. I want to think through pros and cons of a number of suggestions noted above, and similar ones, particularly around what they are actually measuring and whether it’s connecting to my learning goals in teaching (which, incidentally, is an important exercise to do for out-of-class writing too, of course!).

I also have some ill-formed thoughts about the value of teaching students to write philosophy essays at all, if they can be written so easily by a bot that doesn’t think or understand. But that’s for another day!