Category Archives: AI

AI & relationships: Vallor, The AI Mirror

As discussed in a recent blog post, I’ve been thinking a lot about AI and relationships lately, and in this post I’m going to discuss a few points related to this topic from Shannon Vallor’s book The AI Mirror (2024). Vallor doesn’t directly address AI and relationships, but I think a number of her arguments do relate to various ways in which humans relate to themselves, each other, and AI.

Mirrors and their distortions

Vallor focuses throughout the book on the metaphor of AI as a mirror, which she uses to make a few different points. First, she talks about how many current AI systems function as mirrors to humanity in the sense that how they operate is based on training data that reflects current and past ideas, beliefs, values, practices, emotions, imagination, and more. They reflect back to humans an image of what many (not all, since this data is partial and reflects dominant perspectives) have already been.

In one sense, there can be some silver lining in this, Vallor notes, as such mirrors can show things in stark relief that might further emphasize the need for action:

AI today makes the scale, ubiquity, and structural acceptance of our racism, sexism, ableism, classism, and other forms of bias against marginalized communities impossible to deny or minimize with a straight face. It is right there in the data, being endlessly spit back in our faces by the very tools we celebrate as the apotheosis of rational achievement. (46)

But of course, these biases showing up in AI outputs are harmful, and she spends much of the book focusing on the downsides of relying too heavily on AI mirrors for decision making and for understanding ourselves and the world around us, given that they, like any mirror, provide only a surface, distorted reflection. For one thing, as noted above, their reflections tend to show only part of humanity’s current and past thoughts, values, and dreams, with outputs that, in the case of LLMs for example, focus on what is most likely given what is most prevalent in the training data.

In addition, AI mirrors can only capture limited aspects of human experience, since they don’t have the capacity for lived experience of the world or being embodied creatures. For example, language models can talk about pleasure, pain, the taste of a strawberry, a sense of injustice, etc., but they do not of course have experiences of such things. This can have profound impacts on humans’ relationships with each other, if those are mediated by AI systems that reduce people to machine-readable data. Vallor illustrates this by pointing to the philosopher Emmanuel Levinas’ account of encountering another person as a person and the call to responsibility and justice that ensues:

As … Emmanuel Levinas wrote in his first major work Totality and Infinity, when I truly meet the gaze of the Other, I do not experience this as a meeting of two visible things. Yet the Other (the term Levinas capitalizes to emphasize the other party’s personhood) is not an object I possess, encapsulated in my own private mental life. The Other is always more than what my consciousness can mirror. This radical difference of perspective that emanates from the Other’s living gaze, if I meet it, pulls me out of the illusion of self-possession, and into responsibility….

In this gaze that holds me at a distance from myself, that gaze of which an AI mirror can see or say nothing, Levinas observes that I am confronted with the original call to justice. When a person is not an abstraction, not a data point or generic “someone,” but a unique, irreplaceable life standing before you and addressing you, there is a feeling, a kind of moral weight in their presence, that is hard to ignore. (60)

The more people treat each other through the lens of data that can be “classified, labeled, counted, coordinated, ranked, distributed, manipulated, or exploited” rather than as “subjects of experience,” the more we may lose that already too-rare encounter (61). This is nothing new, of course; it’s a trend that has been continuing for a long time in many human communities. But it can be made worse by outsourcing decisions to AI systems, such as those related to health care, insurance, jobs, access to educational institutions, predictions of who may be a repeat offender, and more, which can in some cases reduce opportunities for human judgment in the name of efficiency.


Workshop idea on AI ethical decision making

I am thinking about whether/how I might be able to take the very drafty personal AI ethics framework idea from a recent blog post and do something with it during a synchronous workshop for faculty, students, and staff. As I was working on that blog post I started to think that really working through one’s ethical views on AI is very complex and might be best done through something like a set of online modules rather than a short engagement like a workshop. But I’m going to use this blog post to see what might be possible; as I frequently think best by writing, this is a good opportunity to do that!

I’m imagining a 1.5- or 2-hour workshop on this topic, and wondering what might be feasible and, of course, useful for helping participants think carefully about ethical considerations in possible uses of generative AI in teaching and learning. My main worry, as I think about this, is that making ethical decisions is really complicated, and I don’t want to overwhelm people with so many things to consider that some may end up feeling it’s too much to even try. I really want to find a middle ground between a deep ethical analysis of decisions around generative AI (which could be, and has been, done in book-length manuscripts!) and providing little in the way of guidance on how to make ethical decisions in this area. This is challenging, I’m finding, as I think it through.

Below is a draft outline for a workshop, with some early ideas that will need further refinement.

Outline for a workshop

1. Ethical decision making & use cases

Framework

I think it could be helpful to have some kind of ethical decision making framework. What I have in my earlier blog post is not quite there yet; I don’t think it includes everything it needs to, though it’s a start. After doing a quick web search on ethical frameworks and considering my own thoughts, I’ve put together some elements that would be good to include for the purposes of this kind of workshop. I’m numbering them just for ease of referring to them later, but they may not necessarily be in exactly this order.

  1. Identify the question/decision to be made, and what options are available
  2. List various entities involved, including people and also other living and non-living entities as relevant
  3. Identify possible ethical issues involved
  4. Gather information relevant to those issues as best you can; note questions you still have and where you would like to have further information
  5. Evaluate options according to ethical values and principles
  6. Make a decision
  7. Develop and then act on next steps

There are likely more things to consider, such as reviewing the outcome of the decision to consider its positive and negative ethical impacts and learn for the future, but for the current purpose the above is a decent start I think.

This section of a workshop could include a brief introduction to the ethical decision making framework being used in the session, which will guide parts of the session. We won’t be able to do all of the above steps in a short workshop.

Brainstorming use cases

In addition, at this point we could ask participants to brainstorm one or more possible use cases for generative AI in teaching and learning (or in some other context, depending on the audience). This would be step 1 in the framework above. These could be contributed individually, perhaps in a shared Google Doc, to be used later in the session. Time permitting, they could also include information on the people and other entities involved (step 2 in the framework).

For example, one use case could be deciding whether to use generative AI tools to make comments on students’ written work. It would be helpful to consider some further specifics, such as possible tools to be used and the kind of assignment and feedback one is thinking about. Those involved would be students, the instructor, and possibly TAs.


AI and relationships: Indigenous Protocol and AI paper

I’ve been thinking a lot lately about generative AI and relationships. Not just in terms of how people might use platforms to create AI companions for themselves, though that is part of it. I’ve been thinking more broadly about how development and use of generative AI connects with our relationships with other people, with other living things and the environment, and with ourselves. I’ve also been thinking about our relationships as individuals with generative AI tools themselves; for example, how my interactions with them may change me and how what I do may change the tools, directly or indirectly.

For example, the following kinds of questions have been on my mind:

  • Relationships with other people: How do interactions with AI directly or indirectly benefit or harm others? What impacts do various uses of AI have on both individuals and communities?
  • Relationships with oneself: How do interactions with AI change me? How do my uses of it fit with my values?
  • Relationships with the environment: How do development and use of AI affect the natural world and the relationships that individuals and communities have with living and non-living entities?
  • Relationships with AI systems themselves: How might individuals or communities change AI systems and how are they changed by them?
  • Relationships with AI developers: What kinds of relationships might one have/is one having with the organizations that create AI platforms?

More broadly: What is actually happening in the space between human and AI? What is this conjunction/collaboration? What are we creating through this interaction?

These are pretty large questions, and I’m going to focus in this and some other blog posts on some texts I’ve read recently that have guided my interest in thinking further about AI and relationships. Then later I will hopefully have a few clearer ideas to share.

Indigenous Protocol and AI position paper

My interest in this topic was first sparked by reading a position paper on Indigenous Protocol and Artificial Intelligence (2020), produced by participants in the Indigenous Protocol and Artificial Intelligence Working Group, which met in two workshops in 2019. This work is a collection of papers, many of which were written by workshop participants. I found it incredibly thought-provoking and important, and I am only going to barely touch on small portions of it. For the purposes of this post, I want to discuss a few points about AI and relationships from the position paper.


Draft idea for an AI personal ethical decision framework

I recently wrote two blog posts on possible ways that generative AI might be able to support student learning in philosophy courses (part 1, part 2). But through doing so, and also through a thought-provoking comment by Alan Levine on my earlier blog post reflecting on a presentation by Dave Cormier on focusing on values in situations of uncertainty, I’m now starting to think more carefully about my use of AI and how it intersects with my values.

Alan Levine noted in his comment that sometimes people talking about generative AI start by acknowledging problems with it, and then “jump in full speed” to talking about its capabilities and possible benefits while no longer engaging with the original issues. This really struck me, because it’s something I could easily see myself doing too.

I started reflecting a lot on various problems with generative AI tools, as well as potential benefits I can imagine, and on how all of these intersect with my values, in order to make more conscious ethical decisions about whether to use generative AI in various situations. On one hand, one could make philosophical arguments about what should be done “in general,” but even then each individual needs to weigh various considerations and their own values, and make their own decisions as to what they want to do.

I decided, then, to try to come up with a framework of some kind to support folks making those decisions. This is an early brainstorm; it will likely be refined over time, and I welcome feedback! It is something that would take time, effort, and fairly deep reflection to go through, and it may go too far in that direction, especially since I can imagine something like this being used in a workshop (or series of workshops) or a course, and those have time limits. (Of course, there is no requirement that people must work through something like this in a limited time period; they could always go through it on their own later. It’s just that I know myself, and I often intend to return to things like this later and, well, just get busy.) This is one aspect that needs more work.

The general idea is to go through possible benefits and problems with using generative AI tools, connect these to one’s values, and then brainstorm: whether one will use generative AI in a particular context, and if so, how one might address the problems and further support possible benefits.

I think it would be helpful to start with a set of possible uses in one’s particular context and arrange the rest from there, because a number of the possible benefits and problems can differ according to particular use cases. But there are some problems that are more general–e.g., issues with how generative AI tools are developed, trained, and maintained on the “back end,” as it were, which would apply to any downstream uses (such as energy usage, harm to data workers, violations of Indigenous data sovereignty in training, etc.). So I think some of the problems, at least, could be considered regardless of particular context of use.

First draft of framework

Without further ado, here is the very drafty first draft of the kind of thing I’m thinking about. At this point it’s just structured as a worksheet that starts off with brainstorming some possible uses of generative AI in one’s own work (e.g., teaching, learning, research, coding, data analysis, communications, and more). Then folks can pick one or two of those to focus on. The rest is a set of tables to fill out about potential benefits and problems with using generative AI in this way, and then a final one where folks make at least a provisional decision and then brainstorm one or two next steps.

Brainstorm possible uses

Think of a few possible uses of generative AI in your own work or study that you’d like to explore further, or ones you’re already engaged in. Take __ minutes to write down a list. [Providing a few example lists for folks could be helpful]

Then choose 2-3 of these to investigate further in the following steps.

Benefits and problems

Regarding problems with using AI, as noted above, some problems can apply regardless of the particular use case, and I think it’s important for folks to grapple with those even though they may be more challenging for individuals to address. Some background and resources on these would be useful to discuss in a facilitated session, ideally with some pre-reading. A number of the issues are fairly complex and would benefit from time to learn and discuss, so one can’t go through all of them in a limited time period.

The same goes for possible benefits; it would be useful to list a few areas in which there could be benefits from generative AI use, such as supporting student learning, doing repetitive tasks to free people up for more complex or interesting work, and supporting accessibility in some cases. These would necessarily be high level, while participants would brainstorm benefits that are more specific to their use cases.

One could ask folks to brainstorm a few problems and benefits for generative AI in their use cases, including one of the more general problems as well as at least one that is specific to their use case.

Problem or Benefit | Evidence | Impacts | Further info | My view | Value
E.g., climate impacts in both training and use | This could be links | Who is harmed? Who benefits? | What other info would be helpful? | One’s view on the topic at the moment | Related value(s) one holds

This is not very nice looking in a blog post but hopefully you get the idea.
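As an aside, for anyone who might want to collect worksheet responses digitally (for example, aggregating contributions from a shared document during a workshop), the table above could be modeled as a simple data structure. This is just an illustrative sketch in Python; the class and field names are my own invention, not part of the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class WorksheetRow:
    """One row of the benefits/problems worksheet (illustrative field names)."""
    problem_or_benefit: str                 # e.g., climate impacts in training and use
    evidence: list[str] = field(default_factory=list)  # links to sources
    impacts: str = ""                       # who is harmed? who benefits?
    further_info: str = ""                  # what other info would be helpful?
    my_view: str = ""                       # one's view on the topic at the moment
    values: list[str] = field(default_factory=list)    # related value(s) one holds

# A filled-in example row, paralleling the example in the table above
row = WorksheetRow(
    problem_or_benefit="Climate impacts in both training and use",
    evidence=["link to an energy-use report"],
    impacts="Communities affected by energy and water demands",
    further_info="Per-query energy estimates for specific tools",
    my_view="Concerned; want better data before deciding",
    values=["environmental sustainability"],
)
```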

Decisions

Then participants could be encouraged to try to make an initial decision on use of GenAI in a particular use case, even if that might change later.

Use case | Use GenAI? Why? | If yes, how? | Next steps
E.g., feedback on student work | Your choice, and why/why not | How to do so, including how you will address benefits and problems | What one or two next steps will you take? This can include how you would go about getting more information you need to decide.

 

Reflections

The idea here is not necessarily to have people try to weigh the benefits against the problems–that is too complicated and would require that one go through all possible benefits and problems one can think of. Instead, the point is to start to engage in deeper ethical reflection on a particular use case and try to come to some preliminary decision afterwards, even if that decision may change with further information.

One place where I think folks may get hung up is on feeling like they need more information to make decisions. That is completely understandable, and in a limited time frame participants wouldn’t be able to go do a bunch of research on their own. But the framework may at least bring to the surface that ethical issues are complex and that one needs to spend time with them, including finding out more where one doesn’t yet have information, or has only one or two sources and needs more. That’s why I put the column about “more info” into the first table example. It’s also why, under “my view,” I suggested this be one’s view at this time, recognizing that things may change as one investigates further. And one of the next steps could be to investigate some of these things further.

Of course, one reasonable response to this exercise is to decide that some of the general problems are bad enough that one feels one shouldn’t use generative AI tools at all. I mean for this kind of exercise to leave that option open.

The more I think about this, the more I think it would probably be better to do something like this in at least two steps: one where ethical issues and benefits are discussed to the degree feasible in a certain time frame, and then a second where folks go through their own use cases with the tables as noted above. Otherwise it’s likely to be too rushed.

 

This is a rough sketch of an idea at the moment that I will likely refine. I feel like something along these lines could be useful, even if this isn’t quite it. So I’m happy for feedback!

AI & philosophical activity in courses, part 2

Introduction

This is part 2 of my discussion of ways to possibly use AI tools to support philosophical activities in courses. In my part 1 blog post I talked about using AI to support learning about asking philosophical questions, analyzing arguments, and engaging in philosophical discussion. In this post I focus on AI and writing philosophy.

Caveats:

There are a lot of resources out there on AI and writing, and I’m purposefully focusing largely on my own thoughts at the moment, though likely many of those have been influenced by the many things I’ve read so far. I may include a few links here and there, and use other blog posts to review and discuss ideas from others on AI and writing that may be relevant for philosophy.

In this post I’m not going to focus on trying to generate AI-proof writing assignments, or on ways to detect AI writing; I think both are very challenging and likely to change quickly over time. My focus is on whether AI may be helpful for learning in terms of writing; for the purposes of this post I’m setting aside questions of AI and academic integrity (though those are also very important!).

Note that by engaging in these reflections I’m not saying that use of generative AI in courses is by any means non-problematic. There are numerous concerns to take into account, some of which are noted in a newly-released set of guidelines on the use of generative AI for teaching and learning that I worked on with numerous other folks at our institution. The point here is just to focus on whether there might be at least some ways in which AI might support students in doing philosophical work in courses; I may not necessarily adopt any of these, and even if I do there will be numerous other things to consider.

I’m also not saying that writing assignments are the only or best way to do philosophy; it’s just that writing is something that characterizes much of philosophical work. It is of course important to question whether this should be the case, and consider alternative activities that can still show philosophical thinking, and I have done that in some courses in the past. But all of this would take us down a different path than the point of this particular blog post.

Finally, I want to note that these are initial thoughts from me, not settled conclusions. I may, and likely will, change my mind later as I learn and think more. Also, a number of the sections below are pretty sketchy ideas, but that’s because this is just meant as a brainstorm.

To begin:

Before asking whether/how AI might support student learning in terms of writing philosophy, I want to interrogate for myself why I ask students to write in my philosophy courses, particularly in first-year courses. After all, in my introductory level course, few students are going to go on to write specifically in philosophy contexts; some will go on to other philosophy courses, but many will not, and even fewer will go on to grad school or to do professional philosophy.


AI & philosophical activity in courses part 1

I was reading through some resources on the Educause AI … Friend or Foe showcase, specifically the one on AI and inclusive excellence in higher education, and one thing in particular struck me. The resource talks, among other things, about helping students to understand the ways of thinking, speaking, and acting in a particular discipline, about making that clearer and whether AI might support this in some way.

This resonates with some ideas that have been bouncing around in my head the past few weeks on whether/how AI might help or hinder some of the activities I ask students to do in my courses, which led me to think about why I even ask them to do those activities in the first place. Thinking about this from a disciplinary perspective might help: what kinds of activities might be philosophical? I don’t mean just those that professional philosophers engage in, because few students in my courses will go on to be professional philosophers, but all of them, I believe, will do some kind of philosophical thinking, questioning, and discussing at some point in their lives.

So what might it mean to engage in philosophical activities and can AI help students engage in these better in some way, or not? This is part one of me thinking through this question; there will be at least a part two soon, because I have enough thoughts that I don’t want to write a book-length blog post…

Asking philosophical questions

This is something all philosophers do in one way or another, and that I think can be helpful for many people in various contexts. And yet I find it challenging to define what a philosophical question is, even though I do it all the time. I don’t teach this directly, but I should probably be more conscious about it because I do think it would be helpful for students to be able to engage in this activity more after the class ends.

This reminds me of a post I also read today, this time by Ryan J. Johnson on the American Philosophical Association blog called “How I Got to Questions.” Johnson describes a question-focused pedagogy, in which students spend a lot of their time and effort in a philosophy course formulating and revising questions, only answering them in an assignment towards the end. Part of the point is to help students to better understand over time what makes a question philosophical through such activities.

Johnson credits Stephen Bloch-Schulman in part; it was from Bloch-Schulman that I first heard about this approach, and he writes about question-focused pedagogy in another post on the APA blog. Bloch-Schulman did a study showing that, when reading the same text, philosophy faculty used questions more often and in different ways than undergraduates and faculty from other disciplines. I appreciated this point (among others!):

I believe that much of the most important desiderata of inclusive pedagogy is to make visible, for students, these same skills we hide from ourselves as experts, to make the acquisition of these skills as accessible as possible, particularly for those students who are least likely to pick up those skills without that work on our part. Question-skills being high on that list. (Introducing the Question-Focused Pedagogy Series)

One step for me in doing this more in my teaching would be to do more research and reflecting myself on what makes some questions more philosophical than others (Erica Stonestreet’s post called “Where Questions Come From” is one helpful resource, for example).

AI and learning/practicing philosophical questions

But this post is also focused on AI: might AI be used in a way to help support students to learn how to ask philosophical questions?


Values-based tugging

Okay, so the title of this post may seem a little strange but bear with me. Yesterday I listened to a fantastic session by Dave Cormier for the DS106 Radio Summer Camp this week, called “A year of uncertainty – fighting the fight against the RAND corporation.” I wasn’t entirely sure what to expect, as I hadn’t managed to find the abstract/description of this session until after it was over (click on the session link on the schedule for the summer camp), but I knew Dave is amazing, so of course I had to listen! And it was very thought-provoking as I figured it would be.

Problem solving and uncertainty

One of the main points Dave talked about was how many aspects of our social, political, educational, and other lives are focused on problem-solving: on addressing well-defined problems that have well-defined answers we just need to work hard to find. This is not necessarily a problem in itself, Dave noted, as such problems do exist and there can be very useful methods for working to address them. The issue is if we focus on those to the point of ignoring the less easily defined problems, the messier issues, the more uncertain situations where a single right answer is not going to be forthcoming no matter what kinds of problem-solving methodologies we throw at them.

Dave mentioned medical students coming out of their education into practice: when confronted with complex, uncertain, grey areas where a medical solution isn’t immediately forthcoming, they tended to blame themselves, as if it were their failure not to find an answer where none was to be found. He also noted how, at least in English, it is common, when someone asks a question like “what is your view of X?” or “is Y right or wrong?”, to feel like you have to answer, even if you aren’t sure or there isn’t a clear-cut answer. It’s just part of the accepted norms of speaking that you should have an answer.

Both of these resonated with me, and perhaps especially the second; I have sometimes been asked, in various contexts, to provide my view on something that is of a more uncertain nature, or to say if I think it’s right, or to say what I think the future will bring, and I do feel pressured to respond. But maybe because of my background in philosophy I’m actually pretty comfortable with saying that I am not sure, or I’d need to look into it more, because such situations really do require more thought, research, reflection before coming to a conclusion.

There is the danger of jumping in too quickly with an answer, but there is also a danger in spending too much time in the thinking and reflection and not moving past that towards making some kind of decision or other. And sometimes I get stuck in that latter step when faced with really complex issues–there is so much to consider and so much value in multiple perspectives that it can be hard to “land” somewhere, as it were. It’s tempting to remain up in the air while not being sure of which alternatives are best (because there are no easy answers).

Landing on values and pulling from there

I really appreciated where Dave landed in his presentation: rather than only feeling stuck, suspended, we can consult our values and make a move based on those, we can tug the rope in a tug of war in the direction of our values and work to move things from there. The focus on values is key here: ask yourself what are your values as they relate to this situation, and make decisions and act based on those, knowing that’s enough in uncertain situations. Which doesn’t mean, of course, that you can’t revisit your values and how they apply to the situation if either of those things changes, but that it’s a landing place and it’s solid enough for the moment. He talked about how we can have conversations with students and others about why we would do something in a particular situation, rather than what the right answer is, focusing on the values that are moving us.

To do so requires that we be clear about what our values are, which is in some cases more easily said than done. This is something near and dear to my heart as a philosopher: trying to distill what underlies our views and our decisions, what kinds of reasons and values, is part of our bread and butter. But when I reflect on how I’ve taught over the years, I’m not sure I’ve focused as much as I could have on helping students be clear about their values, instead focusing quite a bit on the “content” of the course. The latter has been in the service of helping students understand that when we make ethical choices there are (or should be) reasons behind them, and some options as to what kinds of reasons those could be. I, like many other philosophers, have also asked students to provide their own arguments related to various ethical and other philosophical questions, which does at times mean providing reasons based on values. But how much time have I really spent supporting students in defining and articulating their own values, in addition to applying them through writing arguments? I’m not sure, and this session was really generative for me in thinking about that (as well as in multiple other ways!).

A couple of years ago I wrote a blog post as part of MYFest 2022, talking about how I had a hard time imagining a more just future for education because I kept focusing on all of the structural complexities involved in educational systems, and how changing one thing would require changing many more interconnected aspects, and … it all felt pretty overwhelming. The metaphor I used was of rocks and boulders, which came to me as I was passing multiple rock formations on a walk. Some piles of rocks are fairly easy to move; others are locked into network-like shapes where moving one would require moving all the others, and they are, after all, very heavy. If I think in these terms, then of course it’s hard to imagine change. Things are literally set in stone!


But what if we thought about complex issues and structures more like flexible webs? (This image reminds me of other work by Dave Cormier, such as that on rhizomatic learning.) Then if you tug on one part, it can still move and the other parts will move as well (or break, I suppose, which in some cases may not be a bad thing).

This feels more hopeful to me–it still respects the interconnectedness of structures but also notes there can be some movement, some wiggle room. Perhaps the spider web is too flexible to respect the challenges of moving some of the more entrenched structures, though. Even though spider silk is incredibly strong, it seems a bit too easy to just sweep away with the swoosh of one’s hand.

How about a net:

This feels stronger, and, like a spider web, it is meant to catch and hold things tight, but it can still be moved, shaped, morphed, or even broken. I like the image above because a piece of the net is fraying, hinting at its fragility amidst the otherwise tight knots.

A line that Dave ended on will stick with me: “Ask yourself what you care about, and then do what you can.” That feels empowering.

Applying to AI

One of the things that feels uncertain to me in this moment is where things are going with AI, what the future holds, and what the best approaches are to using AI (or not!) in education. How might those of us who are educators address the question of whether and/or how to adopt AI in our courses, in our teaching practices, to encourage our students to use it, etc.? Of course, all of this is going to differ according to context, discipline, teaching and learning goals, and more. But I think Dave’s session provides a fruitful way to approach this question. This is a complicated and uncertain situation but what we can do is consult our values: what do we value, what do we care about, what do we want to promote and avoid?

This may seem fairly elementary in a way–might we already frequently act from our values? Maybe, but there are also times when I know I have done things in teaching because they just seemed like the usual thing to do, what I had experienced, because they just seemed right and “normal”; but when I took a step back to think about my values and what I care about, things changed. For example, I used to get upset when people would leave during the middle of class, until I reflected on how I care about supporting students to learn in the ways that are helpful for them, coupled with learning about how some students need to move around, or to take breaks from stimulation, or need to leave for other reasons. It’s still not easy, especially in small courses, but I’m focusing less on how I feel in that situation and more on how being able to take a break may be more helpful for some students than sitting in one place for 50-80 minutes.

It’s perhaps that previous point that matters most: taking some time to reflect on one’s values, what is important, what one cares about, and then applying them to one’s teaching practice. At one point last year I took the time to write out my values in terms of leadership, and that was immensely helpful in focusing my attention on areas I wanted to work on. I may have acted from some of those automatically, but bringing them to the surface helped me not only see what was grounding some of my actions but also where I professed values that my actions could do a better job of supporting.

Now, this process isn’t going to lead to easy answers (there are none for the kinds of issues Dave was talking about), and our values may lead to conflicting viewpoints. For example, I care about allowing students to use technology that will support their learning, and I think that generative AI may be helpful for student learning in some cases–I’ve been looking into how it can support students with various disabilities, for example, and should blog about that later. Then there is the value of equity and how not all students have equal access to generative AI tools, so some may get supports that others don’t. But digging into what one values can help clarify why to go one direction or another, putting one on more solid footing while starting to tug, even if one isn’t entirely sure that is the best direction. It is the best one for this moment while recognizing the complexity that makes it a difficult, but at least grounded, choice.

And if we go with the net metaphor, then tugging in one place can pull other threads, moving things in a local area to start, and maybe in larger areas over time. Particularly if more people are tugging in similar directions (organized action, e.g.). One person can make a difference, but it is more likely that many, working together, can make a larger difference. And we may fray that net to the point of finding ways to morph or break some of the confining structures we find ourselves in.

All of this is also bringing to mind the idea of “entangled pedagogy” from Tim Fawns, which I wrote a blog post about in 2022. Rather than reviewing that blog post, I’ll just say that he has an aspirational view of the relationship between technology and pedagogy in which we focus on purposes, values, and contexts in an entangled relationship with technology and pedagogy. Rather than trying to emphasize pedagogy over technology or vice versa, or even how they are connected to each other, we focus instead on the purposes and the values we have in teaching and learning, and the specificity of our contexts, and how those can shape our choices in both pedagogy and technology (and how they intertwine).

In a quote that resonates with some of what Dave said and what I’ve written here, Tim notes:

Attending to values, purposes and context can help us identify problematic assumptions, such as those embedded in simple solutions to complex problems, reductive characterisations of students (e.g. as ‘digital natives’, see Oliver 2011), or assertions that teachers should conform to modern digital culture and practices (Clegg 2011).

Conclusion

I really appreciated the opportunity to participate in this session with Dave. There was a lot more than what I’ve been able to talk about above, so I highly suggest you listen to the recording when it’s posted on the DS106 Radio Summer Camp recordings page (and check out other fantastic sessions while you’re at it of course!). Big thank you to Dave for a thought-provoking session!

Principles of ethics in Ed Tech & AI (running list)

I’m going to use this post just to note a few resources on ethical principles around educational technology that I haven’t yet discussed in the series I’ve been writing about ethics & ed tech so far. I will at some point get around to writing about these, or at least synthesizing them with others I’ve reviewed so far.

This post will be updated over time. It’s meant as a way for me to keep track of things I want to look into more carefully and/or collate with other principles. Eventually I’d like to map out common ones and pay attention to those that are not commonly included in sets of already-existing principles as well.

I also have a Zotero library about ethics of educational technology and artificial intelligence that I update too.

Ethics in Ed Tech

Ethical Ed Tech Workshop at CUNY

Information and resources for a workshop on Ethical Approaches to Ed Tech, by Laurie Hurson and Talisa Feliciano, as part of a Teach@CUNY 2020 Summer Institute. This web page includes a handout for workshop participants that lists the following categories of questions to ask in regard to ethics & ed tech:

  • Access
  • Control
  • Data
  • Inclusion
  • Intellectual Property & Copyright
  • Privacy
  • Source

See the handout for more details!

UTS Ed Tech Ethics Report

The University of Technology, Sydney, went through a deliberative democracy process in 2021 to address the following question:

What principles should govern UTS use of analytics and artificial intelligence to improve teaching and learning for all, while minimising the possibility of harmful outcomes?

A report on the process and the draft principles was published in 2022. The categories of principles in that report are:

  • Accountability/Transparency
  • Bias/Fairness
  • Equity and Access
  • Safety and Security
  • Human Authority
  • Justifications/Evidence
  • Consent

Again, see the report for more details–the principles are in the Appendix.

Ethics in Artificial Intelligence

EU Ethical Guidelines on AI

In October 2022 the European Commission published a set of Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators.

The categories of these principles are:

  • Human agency and oversight
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Societal and environmental wellbeing
  • Privacy and data governance
  • Technical robustness and safety
  • Accountability

See the PDF version of the report for more detail.

UNESCO Recommendations on the Ethics of AI

In 2022, UNESCO published a report about ethics and AI as well. The main categories of their ethical principles are:

  • Proportionality and do no harm
  • Safety and security
  • Fairness and non-discrimination
  • Sustainability
  • Right to privacy, and data protection
  • Human oversight and determination
  • Transparency and explainability
  • Responsibility and accountability
  • Awareness and literacy
  • Multi-stakeholder and adaptive governance and collaboration

Some ethical considerations in ChatGPT and other LLMs

Like many others, I’ve been thinking about GPT and ChatGPT lately, and I’m particularly interested in diving deeper into ethical considerations and issues related to these kinds of tools. As I start looking into this sort of question, I realize there are a lot of such considerations. And here I’m only going to be able to scratch the surface. But I wanted to pull together for myself some ethical areas that I think may be particularly important for post-secondary students, faculty, and staff to consider.

Notes:

  • This post will focus on ethical issues outside of academic integrity, which is certainly an important issue but not my particular focus here.
  • An area I barely touch on below, but plan to look into more, is AI and Indigenous approaches, protocols, and data sovereignty. One place I will likely start is by digging into a 2020 position paper by an Indigenous protocol and AI working group.
  • This post is quite long! I frequently make long blog posts but this one may be one of the longest. There is a lot to consider.
  • I am focusing here on ethical issues and concerns, and there are quite a few. It may sound like I may be arguing we should not use AI language models like ChatGPT in teaching and learning. That is not my point here; rather, I think it’s important to recognize ethical issues when considering whether or how to use such tools in an educational context, and discuss them with students.

Some of the texts I especially relied on when crafting this post, that I recommend:

And shortly before publishing I learned of this excellent post by Leon Furze on ethical considerations regarding AI in teaching and learning. It has many similar points to the below, along with teaching points and example ways to engage students in discussing these issues, focused on different disciplines. It’s very good, and comes complete with an infographic.

My post here has been largely a way for me to think through the issues by writing.

Continue reading

Early thoughts on ChatGPT & writing in philosophy courses

Yes, it’s another post on ChatGPT! Who needs another post? I do! Because one of the main reasons I blog is as a reflective space to think through ideas by writing them down, and then I have a record for later. I’m also very happy if my reflections are helpful to others in some way of course!

Like so many others, I’ve been learning a bit about and reflecting on GPT-3 and ChatGPT, and I must start off by saying I know very little so far. I took a full break from all work-related things from around December 20 until earlier this week, and I plan to do some deeper dives to learn more in the coming days and weeks. I should also say that though this is focused on GPT, that’s just because it’s the only one I’ve looked into at this point.

Mainly why I’m writing this post is to do some deeper reflection on why I have many writing assignments in my philosophy courses, what I hope they will do for students. And as I was thinking about this, I started reflecting on the role of writing in philosophy more generally, since philosophy classes teach…philosophy.

Academic philosophy and writing

Okay, a whole book could be written about the role of writing in academic philosophy. Here are just a few anecdotal reflections.

Philosophy as I have been trained in it and practice it in academia is frequently focused on writing. We also speak orally, and that’s really important to the discipline as well. Conversations in hallways, in classes, with visiting speakers, at conferences, etc. are all crucial ways we engage in thinking, discussing, making arguments as well as critiquing and improving them. This may not be agreed upon by all, but I still think writing is more heavily emphasized. Maybe I think that partly because for hiring, tenure, and promotion processes what seems to count most are written works rather than oral presentations, lectures, or workshops. Maybe it’s because most of what we do when we do research in philosophy is read written works by others and then write articles, chapters, or books ourselves.

Oral conversations tend to be places where philosophers test out ideas, brainstorm new ideas, give and receive feedback, iterate, discuss, do Q&A, and communicate (among other purposes). Interestingly, even at philosophy conferences, at least the ones in North America I’ve attended, it’s common to read written works out loud during research presentation sessions. (This is not the case for sessions focused on teaching philosophy, which are often more workshop-like and focused more on interactive activities.) For me it can be very challenging to pay attention for a long time by just listening, and I personally appreciate when there are slides or a handout to help keep one’s thinking on track and following along. Writing again! Oral conversations and presentations are also not accessible to all, of course, and one alternative (in addition to sign language) is writing, either in captions or transcripts.

Writing is also a way that some folks (maybe many?) think their way through philosophical or other arguments and ideas. As noted at the top of this post, this is certainly the case for me. I have to put things into words in order to really piece them together and form more coherent thoughts, and though that can be done orally (say, through a recording device), for me it works better in writing.

From these brief reflections, here are some of the likely many roles of writing in doing philosophy. This is not a comprehensive list by any means! And it’s likely similar for at least some other disciplines.

  • Writing to think and understand: Sometimes summarizing works by others helps one to understand them better (e.g., outlining premises and conclusions from a complicated text, or recording what one thinks are the main claims and overall conclusion of a text). In addition, sometimes writing helps one to understand better one’s own somewhat vague thoughts, to clarify, delineate, group them into categories, think of possible objections, etc. (That’s what I’m doing with this blog post.)
  • Writing to communicate: communicating our own ideas and arguments, and taking in communications by others of theirs by reading them (as one means; communication of philosophical ideas and arguments can happen in other ways too!). Communicating the ideas and arguments of others, as often happens in lectures in philosophy classes, or when summarizing someone else’s argument before critiquing it and offering a revised version or something new.
  • Writing as a memory aid: Taking notes when reading texts, or listening to a speaker, or during class. Writing down notes to remind oneself what to say when teaching, or giving a lecture or conference presentation, or facilitating a workshop. Writing one’s thoughts down to be able to return to them later and review, revise, etc. (as in the last point).

The point of these musings is that at least in my experience, a lot of philosophical work, at least in academia, is done in or through writing, even though many of us also engage in non-written discussions and communications. And for me, this is important context to consider when thinking about teaching philosophy and writing, and what it may mean when tools like ChatGPT come onto the scene.

Teaching philosophy and writing

I came to the thoughts above because I was thinking about how it is very common in philosophy courses to have writing assignments–frequently the major assignments are essays in one form or another–and I started to reflect on why that might be. It could be argued that writing is pretty well baked into what it means to do (academic) philosophy, at least in the philosophical traditions I’m familiar with. So it could make sense that teaching students how to do philosophy, and having them do philosophical work in class, means teaching them to write and having them write! (Of course, academic philosophy is not all of what philosophy can be…this is another area on its own, but I think at least some of the focus on writing in philosophy courses may be related to its focus in academic philosophy.)

And like many academic and disciplinary skills, it can be helpful to build up towards philosophical writing skills by practising the kinds of steps that are needed to do it well. So, for example, in philosophy courses we often ask students to review an argument presented by someone else (usually in writing) and summarize it, perhaps by outlining the premises and conclusion. Then maybe in a later step we’ll ask them to offer questions or critiques of the argument, or alternative views or approaches, all of which are important parts of doing philosophy in the traditions in which I’m immersed. In later stages or upper-level courses we’ll ask students to do research where they gather arguments from multiple sources on a particular topic, analyze them, and offer their own original contributions to the philosophical discussion.

All of this is similar to the sort of work professional philosophers do in their own research, and to me just seems like natural ways of doing philosophy given my own experience. It’s just that we do it at different levels and often in a scaffolded way in teaching.

However, mostly I teach introductory-level courses, and the number of students who will go on to do any more philosophy, much less become professional philosophers, is relatively small. So personally, I include writing assignments not just because they are part of what it means to do philosophy (though it’s partly that), but also because I think the skills developed are useful in other contexts. Being able to take in and understand arguments by others (whether textual or otherwise), break them down into component parts to help support both understanding and evaluation, evaluate them, and revise or come up with different ideas if needed, are I think pretty basic and important skills in many, many areas of work and life. I think this (or something like it) may (?) continue to be the case as AI writing tools become more and more ubiquitous, but of course I’m not sure, and that’s a question for further thought.

Process and product

When teaching, it’s much more the learning and thinking that happens through the process of writing activities that’s important. The essay or parts of an essay that result are not the critical pieces. After all, if I ask 100 or more students to analyze the same argument and produce a set of premises and conclusions (for example), the resulting summary/analysis of the argument isn’t the important piece there, especially when there will be many, many of them. It’s the learning and thinking that’s happening to get to that point. The summary is there as a stand-in for the thinking and learning. And in some cases it’s the same for the critiques, feedback, or alternative ideas that students may offer in response to someone else’s argument–what I may care about more is what they’re learning through doing that thinking rather than the specific replies they produce. Many will be really interesting and thought-provoking. Others will be similar across multiple students. Depending on the level of the course and the learning outcomes, all of these may be fine as results; what I care about is that they are putting in the thought and reflection to hone skills of (to use a too-well-worn term) “critical thinking.”

When I think about it this way, I wonder what is the purpose of the actual essay or paragraph or outline of an argument that I assign in courses. It’s often not the actual end product (though sometimes it is, particularly for upper level or graduate courses). The end product is mostly a vehicle and proxy for me as a teacher to review whether the thinking, reflecting, and learning is taking place.

So, thinking about the several ways writing is used in philosophy noted in the previous section, I think largely I’m assigning writing for the purposes of thinking and understanding, and also communicating–maybe to other students, to me, to TAs, etc. And my assumption, when marking writing, is that the written text is actually communicating the student’s thinking and understanding, that the communication and the thinking are linked.

Teaching writing in philosophy, and ChatGPT

One of the things that the emergence of ChatGPT really emphasizes for me is that that end product isn’t really a good communication vehicle to assess whether the thinking and understanding has taken place. This really hit home for me through a post on Crooked Timber by philosopher Eric Schliesser. Schliesser notes that several professors have said that the essays produced by ChatGPT are decent enough to earn a passing grade, if not higher. “But this means that many students pass through our courses and pass them in virtue of generating passable paragraphs that do not reveal any understanding,” Schliesser points out.

This made me think: the essay may not only not be a reliable communication of the student’s own thinking (which we knew already due to concerns about plagiarism, people paying others to write their essays, etc.), but may not be communicating thinking and understanding at all. The link between the two can be completely severed. (This is assuming, as I think it’s safe to assume at this point, that tools like ChatGPT are not doing any thinking or understanding…I know this is a philosophical question but for the moment I’m going to go with the seemingly-reasonable-at-this-point claim that they’re not.)

In one respect, this is an extension of previous academic integrity concerns: if what we want to be assessing is the student’s own thinking and understanding, then ChatGPT and the like are similar issues in that a student could submit something that does not communicate their own understanding–it’s just that in this case, rather than communicating the understanding someone, somewhere, at some point had, it’s not communicating understanding at all.

But of course, we have academic integrity concerns for a reason, and for me it’s not just that I want to be able to tie the writing to the individual student for the sake of integrity and fairness of assessment (though that is important too), it’s also that I want to engage students in activities that will develop skills that will be useful to them in the future. And it’s seeming more and more the case that the written texts I have used in the past as a vehicle to review whether they have developed those skills are less and less useful for that purpose.

At the moment, I can think of a few options, some of which could be combined for a particular assignment or class:

  1. Continue to try to find ways to connect the writing students do out of class to themselves–an extension of academic integrity approaches we already have. These can include:
    • using plagiarism checkers (which right now I think do not work with tools like ChatGPT)
    • comparing earlier, in-class writing to later, out-of-class writing
    • quizzing students orally on the content of their written work
    • asking students to do multiple steps for writing assignments, some of which could be done in class, and also ask them to explain their reasoning for the choices they are making (this one from Julia Staffel–see more from her below)
  2. Find other ways for students to show their thinking and understanding than assigning written work done outside of class.
    • E.g., Ryan Watkins from George Washington University suggests (among other things) having students create mind maps (which ChatGPT can’t do … yet?) and holding in-class debates where students could show their thinking, understanding, and skills in communicating.
    • Julia Staffel from the University of Colorado Boulder talks in a video posted on Daily Nous about alternative approaches in philosophy courses, such as in-class essays, oral exams, oral presentations (synchronous or recorded), and assignments based on non-textual sources such as podcasts or videos (but that only works until the tools can start using those as source material).
  3. Use ChatGPT or similar in writing assignments

    • Numerous people have also suggested assignments in which students need to work with ChatGPT; if we think of it like a helper tool that can generate some early ideas for us to build on or critique, or that can provide summaries of others’ work that we can evaluate for ourselves, etc., then we could still be supporting students to build some similar kinds of skills as earlier writing assignments.
    • Still, inspired by a blog post by Autumm Caines, I’m wary of doing this until I look more into privacy implications, who has access to what data and how it’s used. Autumm also talks about the ethics of requiring students to provide free labour to companies to train tools like this. And what happens when the tool or ones like it are no longer offered for free?
    • Finally, since ChatGPT can already mark and provide feedback on its own writing (albeit perhaps not the best feedback), it’s not clear to me that having students use the tool to draft something and then comment on it/revise it is necessarily going to get around the tie-the-work-to-a-mind issue.

A number of the ideas above have to do with doing things synchronously, in a way that the instructor and/or TAs can witness. Some are alternative approaches to providing evidence of thinking and understanding done outside of class that work for now, just based on what the tech can do at the moment. And maybe those will continue to work for some time, or maybe not. It feels a bit like trying to do catch-up with an ever-changing landscape.

I have many more thoughts, but this blog post is already too long so I’ll save them for later. For now, a takeaway is that maybe one of the things that I’ll need to do in the future is spend more time in class on activities that develop and allow students to communicate the thinking and understanding I’m hoping to support them in. If I have to assess them (which I do), then I’d like to bring the communication and the thinking parts back together. I want to think through pros and cons of a number of suggestions noted above, and similar ones, particularly around what they are actually measuring and whether it’s connecting to my learning goals in teaching (which, incidentally, is an important exercise to do for out-of-class writing too of course!).

I also have some ill-formed thoughts about the value of teaching students to write philosophy essays at all, if they can be written so easily by a bot that doesn’t think or understand. But that’s for another day!