Author Archives: chendric

Parker, The Art of Gathering: Part 1

A tree in winter with no leaves, and a lot of crows sitting in its branches.

Vancouver crows near dusk, by Christina Hendricks, licensed CC BY 4.0

I am currently in a Book Circle that is reading Priya Parker’s The Art of Gathering: How We Meet and Why It Matters (2018). We have read and discussed the first four out of eight chapters so far, and I am using this blog post to write about a few things that are really standing out to me as especially useful.

The book is about all kinds of gatherings, from family dinners to birthday parties to fundraising events to team meetings. Personally I am most interested in thinking about Parker’s advice as it relates to meetings, or facilitating workshops or classes, as those are the kinds of things I spend most of my work life doing. I don’t host a lot of other kinds of gatherings. And I have found many excellent things to consider and work to put into practice in relation to meetings, workshops, and classes!

Purpose: the why

The first chapter is called “Decide why you’re really gathering,” and it is focused on what Parker says is the important first step when planning any kind of gathering: having a “sharp, bold, meaningful gathering purpose” (17). Why are people coming together? Is this a purpose that needs a meeting? If so, how should the gathering be designed to meet that purpose?

Without a clear purpose, it’s too easy to fall into the mistake of having a category as a purpose, Parker says. Examples of categories include a regular team meeting, a retirement dinner, a panel discussion, a keynote speaker, a book launch, etc. These tend to have certain forms and it is easy to choose a category and a form without a clear sense of purpose: we’ll have a panel discussion and it looks like this because that’s what panels are like. But what is the purpose of the event, and is a panel discussion the right form? And even if it is, do we need to change the way the panel discussion works in order to fit the purpose better?

Continue reading

Using generative AI for environmental scan

One of the things I did while on administrative leave from the Centre for Teaching, Learning, and Technology was to review mission, vision, and values statements, as well as strategic goals and plans, from other Canadian centres for teaching and learning.

I focused on Canada partly because that’s where I’m situated, and partly because a book was recently published about similar things in U.S. centres for teaching and learning (Wright, Centers for Teaching and Learning). The post-secondary systems in the two countries are fairly similar, so this could be an interesting point of comparison to see if there are any significant overlaps or differences. It would be even more interesting to review similar points for centres in other countries, but that’s for another time.

I thought I’d use this opportunity to test out a few generative AI platforms and tools to see whether they may or may not be helpful in this work. Short answer: either I’m doing it wrong (highly possible!) or things just aren’t quite there yet to be super helpful. I’ll explain more below.

Note: long post follows! I decided to put several different attempts all together into one post, which makes it very long. And also note: I did all this in mid-December 2024, but only got around to finalizing this post in early Jan. 2025. What worked/didn’t work as of that time may change very quickly!

Another note: I revised this post on Jan 5, 2025, to add resources at the end.

Continue reading

AI & Relationships: Mollick, Co-Intelligence

Statue of a centaur playing a bugle.

Le Centaure qui danse (Cité Internationale Universitaire de Paris), photo shared on Flickr by Jean-Pierre Dalbéra, licensed CC BY 2.0

As part of the series of posts I’m writing on AI and relationships, I want to discuss a few points from Ethan Mollick’s book, Co-Intelligence: Living and Working with AI (Penguin, 2024). As with some other works discussed in the series, the book doesn’t necessarily focus directly on the theme of human relationships with AI, with ourselves, with each other, or with other entities as affected by AI, but the overarching theme of working with AI as a “co-intelligence” is certainly relevant, and interesting, in my view.

This book covers a lot of helpful topics about working with AI, including a clear explainer about generative AI; ethical topics related to AI; AI and creativity, education, and work; possible scenarios for the future of AI; and more. But here I’ll just focus on some of the broad ideas about working with AI as a co-intelligence.

Note: I bought an e-book copy of the book and don’t have stable page numbers to cite, so I’ll be citing quotes from chapters instead of pages.

Alien minds

Mollick ends the first chapter of the book, “Creating Alien Minds,” by stating that humans have created “a kind of alien mind” with recent forms of AI, since they can act in many ways like humans. Mollick isn’t saying that AI systems have minds in the ways humans do, nor that they are sentient, though they may seem so. Rather, Mollick suggests that we “treat AI as if it were human, because in many ways it behaves like one” (Chpt. 4). Still, that doesn’t mean believing that these systems replicate human minds in a deep way; the idea is to “remember that AI is not human, but often works in the ways that we would expect humans to act” (Chpt. 4). AI systems are not necessarily intelligent in the same way humans are, but they do act in “ways that we would expect humans to act” sometimes.

As an aside, I recently also finished a book by Luciano Floridi called The Ethics of Artificial Intelligence (Oxford UP, 2023), in which Floridi argues for conceiving of AI in terms of what it can do, not in terms of whether it possesses similar kinds of intelligence to humans:

… ‘for the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving’. This is obviously a counterfactual. It has nothing to do with thinking, but everything to do with behaving: were a human to behave in that way, that behaviour would be called intelligent. (16)

According to Floridi, humans have made great strides in developing artificial systems that can produce behaviour that could be called intelligent, but not so much in producing “the non-biological equivalent of our intelligence, that is, the source of such behaviour” (20).

I’m not sure if Mollick would necessarily agree with Floridi here, and I am not enough of an expert to have an opinion on whether Floridi is right about machine intelligence, but Floridi’s view helps me to consider what it might be like to treat AI as if it were human because of its behaviour, not necessarily thinking it is intelligent or sentient.

Returning to Mollick’s book, the idea of being able to work with an “alien mind” can be useful, Mollick says, “because an alien perspective can be helpful” to address human biases “that come from us being stuck in our own minds” (Chpt. 3).

Now we have another (strange, artificial) co-intelligence we can turn to for help. AI can assist us as a thinking companion to improve our own decision-making, helping us reflect on our own choices (rather than simply relying on the AI to make choices for us). (Chpt. 3)

I think this idea of interacting with an outside, even alien perspective is interesting, but I wonder…is what we can get from AI really very alien? Being trained on human inputs, will it provide a significantly different perspective than one could get from talking with other humans?

Mollick goes on to say that the “diversity of thought and approach” that one can get from an AI could lead to novel ideas “that might never occur to a human mind” (Chpt. 3). Perhaps this is the case; I am not expert enough in what AI can do to be able to really judge well. Mollick argues in Chapter 5 that innovation and novelty can come from combining “distant, seemingly unrelated ideas,” and that LLMs can do this well–they are “combination machines” that also add in a bit of randomness, making them “a powerful tool for innovation.” LLMs can certainly come up with novel ways to combine seemingly random ideas and concepts (Mollick provides an example of asking for business ideas that combine fast food, lava lamps, and 14th century England), and in this sense I can understand the point that you may get new ideas from working with this “alien mind.”

Another thought I have about working with such an alien mind, and one Mollick also talks about in the book (Chapter 2), is that there is bias in AI systems, which persists even with attempts to correct for it. So the AI alien mind one is thinking with may be steering one’s ideas and decisions towards, e.g., the dominant perspectives and approaches in the training data. Of course, talking with individual humans means dealing with bias as well, so this isn’t unique to AI. One way to try to address this when working with humans is to seek out multiple perspectives from people with many different kinds of backgrounds and experiences. A worry is that if we rely too heavily on AI to help us reflect on our own views, develop ideas, and make decisions, we may not take the time to ensure we are getting diverse perspectives and approaches.

Co-intelligence

Mollick is sensitive to overreliance on AI: “it is true that thoughtlessly handing decision-making over to AI could erode our judgment” (Chpt. 3). Similarly, when talking in Chapter 5 about relying on AI to write first drafts of various kinds of work, Mollick states that doing so can contribute to eroding our own creativity and thought processes. It’s easy to just go with the ideas and approaches the AI comes up with for the first draft, even as folks may then revise and edit, meaning they may lose the opportunity to provide their own original thoughts and approaches to some extent. Further, if “we rely on the machine to do the hard work of analysis and synthesis, [then] we don’t engage in critical and reflective thinking ourselves,” and we lose the opportunity to “develop our own style.”

To work with AI as a co-intelligence, as a partner in some sense, humans need to remain strongly in the loop, according to Mollick: “You provide crucial oversight, offering your unique perspective, critical thinking skills, and ethical considerations” (Chpt. 3). The idea is to use AI to help support human thinking and creativity, not replace them. When people work with AI but also keep themselves and their own creativity and criticality in the loop, the results are better than if they just rely on what the AI outputs as responses (Chpt. 6). In addition, when people “let the AI take over instead of using it as a tool, [this] can hurt human learning, skill development, and productivity” (Chpt. 6). Working with AI as a co-intelligence means still developing and practicing human skills but augmenting with AI work where it makes sense.

Further, Mollick states: “Being in the loop helps you maintain and sharpen your skills, as you actively learn from the AI and adapt to new ways of thinking and problem-solving” (Chpt. 3). I find this focus on learning from AI and adapting our ways of thinking interesting…how might working with AI more regularly shape the ways humans tend to think and make decisions? And how can humans remain critical and active partners in this relationship, retaining what is valuable and useful in human ways of thinking and being? And which such ways are those that it is important to retain even as people may adapt somewhat to the ways today’s AI systems work (or tomorrow’s, or next decade’s, or…)?

Centaurs and cyborgs

One of the memorable (for me) aspects of Mollick’s book is his use of the metaphors of centaurs and cyborgs for working with AI:

Centaur work has a clear line between person and machine, like the clear line between the human torso and horse body of the mythical centaur. It depends on a strategic division of labor, switching between AI and human tasks, allocating responsibilities based on the strengths of each entity.

Cyborgs blend machine and person, integrating the two deeply. Cyborgs don’t just delegate tasks; they intertwine their efforts with AI …. (Chpt. 6)

Working with AI as a centaur, from what I understand, would mean having different tasks done by the person vs. the AI. Mollick gives the example of having AI produce graphs from data while the human decides on statistical analysis approaches. A cyborg way of working, by contrast, would be more like weaving human and AI activity together on a task, such as doing some writing, asking the AI for feedback and revising, or asking the AI to help finish a thought or a paragraph in a useful way (and then revising as needed). Mollick suggests that people start off working with AI as a centaur, and then they may gradually start to act as a cyborg; at that point, they will have “found a co-intelligence” (Chpt. 6).

This idea of being a cyborg goes back to the questions at the end of the previous section, around how working closely with AI and adapting to it (and having it adapt to us) may change human ways of thinking, acting, and making decisions. In addition to sharing tasks with AI, both co-intelligences, as it were, are likely to be changed by this relationship, and I find it very interesting to consider who and what humans might become, and what we should hold onto.

One might argue that we are already cyborgs to some extent, as what and how we think, write, and interact are significantly shaped by technologies of many kinds, from handwriting to typesetting to word processing, the internet, and much more. Somehow, to me, what Mollick calls the “alien mind” of AI feels like an even deeper level of connection between human thinking, creativity, and technology, but I haven’t thought about this in enough depth to have a great deal more to say about it yet. I do find it worrying that we may lose something of value in being human by interacting extensively with entities that act like humans in various ways. Or maybe we will find ways to retain what is meaningful in our relationships to ourselves and each other, and even find new forms of such meaning.

Human relationships with each other

So far I’ve been focusing in this discussion of Mollick’s book on humans’ relationships with AI (how they may act as a kind of co-intelligence) and relationships with ourselves (how overreliance on AI may lead to some erosion of human capacities). I’m also very interested in humans’ relationships with each other as AI use increases.

Though talking about human relationships isn’t directly his point in this section, I appreciated Mollick’s discussion in Chapter 5 of using AI to write recommendation letters, performance reviews, speeches, feedback, and other works that are meaningful because they reflect what other humans think and the time and effort they have put into them. Part of the issue, he points out in this section, is that AI systems could actually do a better job in some cases than if a human produced these. But they lose an important sense of meaning, and Mollick argues that “we are going to need to reconstruct meaning, in art and in the rituals of creative work” (Chpt. 5).

This leads me to wonder: what degree of AI use may still preserve the important meaning, the connection (though not a direct one) to another person that can come from such works, and what degree of use may erode that meaning too much? This is likely an empirical question, one that could be explored by asking people about their perceptions of AI-written or AI-augmented work in domains where that work is partly made meaningful because it was produced by a particular human and is meant to express their own views.

For example, I’m reminded of a research study I heard about this week in which the researchers sought student perceptions on feedback produced by humans vs. AI: “AI or Human? Evaluating Student Feedback Perceptions in Higher Education” (Nazaretsky et al., 2024; open access preprint here). Interestingly, the authors report that students in the study tended to revise their opinion of various aspects of the AI feedback after learning it came from AI, including measures of “genuineness,” “objectivity,” and “usefulness” (Section 5, p. 294). Among other conclusions, they note that their study “reveals a strong preference for human guidance over AI-generated suggestions … , indicating a fundamental human inclination for personal interaction and judgment” (p. 295). There is much more that could be said about this paper, and I may discuss it (and related studies) in more detail in another blog post.

Conclusion

One topic that comes up in both Mollick’s book and Vallor’s The AI Mirror, discussed in an earlier blog post, is the danger of outsourcing too much of our own capacity for critical and creative thinking and decision-making to AI. Mollick’s book is, though, I think, much more positive about the potential value of humans working with AI for various purposes (creativity/art, teaching and learning, work, and more), and it provides many practical ideas for doing so. I have tended to focus on some of the more critical aspects of Mollick’s book, which reflects my own interests and sense of caution. I am very interested in working with others to figure out just what kind of cyborgs we might become, but I am also likely to be a voice of critique, as I fear there may be a fair bit to lose in our relationships with ourselves and others. I look forward to also figuring out what we might gain, though!

 

AI & relationships: Vallor, The AI Mirror

As discussed in a recent blog post, I’ve been thinking a lot about AI and relationships recently, and in this post I’m going to discuss a few points related to this topic from a book by Shannon Vallor called The AI Mirror (2024). Vallor doesn’t directly address AI and relationships, but I think a number of her arguments do relate to various ways in which humans relate to themselves, each other, and AI.

Mirrors and their distortions

Vallor focuses throughout the book on the metaphor of AI as a mirror, which she uses to make a few different points. First, she talks about how many current AI systems function as mirrors to humanity in the sense that how they operate is based on training data that reflects current and past ideas, beliefs, values, practices, emotions, imagination, and more. They reflect back to humans an image of what many (not all, since this data is partial and reflects dominant perspectives) have already been.

In one sense, there can be some silver lining in this, Vallor notes, as such mirrors can show things in stark relief that might further emphasize the need for action:

AI today makes the scale, ubiquity, and structural acceptance of our racism, sexism, ableism, classism, and other forms of bias against marginalized communities impossible to deny or minimize with a straight face. It is right there in the data, being endlessly spit back in our faces by the very tools we celebrate as the apotheosis of rational achievement. (46)

But of course, these biases showing up in AI outputs are harmful, and she spends a lot of the book focusing on the downsides of relying too much on AI mirrors for decision making, for understanding ourselves and the world around us, given that they, like any mirror, provide only a surface, distorted reflection. For one thing, as noted above, their reflections tend to show only part of humanity’s current and past thoughts, values, and dreams, with outputs that, in the case of LLMs for example, focus on what is most likely given what is most prevalent in training data.

In addition, AI mirrors can only capture limited aspects of human experience, since they don’t have the capacity for lived experience of the world or being embodied creatures. For example, language models can talk about pleasure, pain, the taste of a strawberry, a sense of injustice, etc., but they do not of course have experiences of such things. This can have profound impacts on humans’ relationships with each other, if those are mediated by AI systems that reduce people to machine-readable data. Vallor illustrates this by pointing to the philosopher Emmanuel Levinas’ account of encountering another person as a person and the call to responsibility and justice that ensues:

As … Emmanuel Levinas wrote in his first major work Totality and Infinity, when I truly meet the gaze of the Other, I do not experience this as a meeting of two visible things. Yet the Other (the term Levinas capitalizes to emphasize the other party’s personhood) is not an object I possess, encapsulated in my own private mental life. The Other is always more than what my consciousness can mirror. This radical difference of perspective that emanates from the Other’s living gaze, if I meet it, pulls me out of the illusion of self-possession, and into responsibility….

In this gaze that holds me at a distance from myself, that gaze of which an AI mirror can see or say nothing, Levinas observes that I am confronted with the original call to justice. When a person is not an abstraction, not a data point or generic “someone,” but a unique, irreplaceable life standing before you and addressing you, there is a feeling, a kind of moral weight in their presence, that is hard to ignore. (60)

The more people treat each other through the lens of data that can be “classified, labeled, counted, coordinated, ranked, distributed, manipulated, or exploited” rather than as “subjects of experience,” the more we may lose that already too-rare encounter (61). This is nothing new, of course; it’s a trend that has been continuing for a long time in many human communities. But it can be made worse by outsourcing decisions to AI, such as those related to health care, insurance, jobs, access to educational institutions, predictions of who may be a repeat offender, and more, which can in some cases reduce opportunities for human judgment in the name of efficiency.

Continue reading

Workshop idea on AI ethical decision making

I am thinking about whether/how I might be able to take the very drafty personal AI ethics framework idea from a recent blog post and do something with it during a synchronous workshop for faculty, students, and staff. As I was working on that blog post I started to think that really working through one’s ethical views on AI is very complex and might be best done through something like a set of online modules rather than a short engagement like a workshop. But I’m going to use this post to see what might be possible; since I frequently think best by writing, this is a chance to do just that!

I’m imagining a 1.5- or 2-hour workshop on this topic, and wondering what might be both feasible and useful for helping participants think carefully about ethical considerations in possible uses of generative AI in teaching and learning. My main worry, as I think about this, is that making ethical decisions is really complicated, and I don’t want to overwhelm people with things to consider to the degree that some may end up feeling like it’s too much to try to do. I really want to find a middle ground between a deep ethical analysis of decisions around generative AI (which could be, and has been, done in book-length manuscripts!) and providing little in the way of guidance on how to make ethical decisions in this area. I’m finding this challenging as I think it through.

Below is a draft outline for a workshop, with some early ideas that will need further refinement.

Outline for a workshop

1. Ethical decision making & use cases

Framework

I think it could be helpful to have some kind of ethical decision-making framework. What I have in my earlier blog post is not quite there yet; I don’t think it includes everything it needs to, though it’s a start. After doing a quick web search on ethical frameworks, and considering my own thoughts, here are some elements it would be good to include for the purposes of this kind of workshop. I’m numbering them just for ease of reference later, but they may not necessarily be in exactly this order.

  1. Identify the question/decision to be made, and what options are available
  2. List various entities involved, including people and also other living and non-living entities as relevant
  3. Identify possible ethical issues involved
  4. Gather information relevant to those issues as best you can; note questions you still have and where you would like to have further information
  5. Evaluate options according to ethical values and principles
  6. Make a decision
  7. Develop and then act on next steps

There are likely more things to consider, such as reviewing the outcome of the decision to consider its positive and negative ethical impacts and learn for the future, but for the current purpose the above is a decent start I think.

This section of the workshop could include a brief introduction to the ethical decision-making framework being used in the session, which will guide later parts of it. We won’t be able to do all of the above steps in a short workshop.

Brainstorming use cases

In addition, at this point we could ask participants to brainstorm one or more possible use cases for generative AI in teaching and learning (or in some other context, depending on audience). This would be step 1 in the framework above. These could perhaps be contributed individually in a shared Google Doc, to be used later in the session. Time permitting, they could also include information on the people and other entities involved (step 2 in the framework).

For example, one use case could be deciding whether to use generative AI tools to make comments on student written work. It would be helpful to consider some further specifics, such as the possible tools to be used and the kind of assignment and feedback one is thinking about. Those involved would be students, the instructor, and possibly TAs.

Continue reading

AI and relationships: Indigenous Protocol and AI paper

I’ve been thinking a lot lately about generative AI and relationships. Not just in terms of how people might use platforms to create AI companions for themselves, though that is part of it. I’ve been thinking more broadly about how development and use of generative AI connects with our relationships with other people, with other living things and the environment, and with ourselves. I’ve also been thinking about our relationships as individuals with generative AI tools themselves; for example, how my interactions with them may change me and how what I do may change the tools, directly or indirectly.

For example, the following kinds of questions have been on my mind:

  • Relationships with other people: How do interactions with AI directly or indirectly benefit or harm others? What impacts do various uses of AI have on both individuals and communities?
  • Relationships with oneself: How do interactions with AI change me? How do my uses of it fit with my values?
  • Relationships with the environment: How do development and use of AI affect the natural world and the relationships that individuals and communities have with living and non-living entities?
  • Relationships with AI systems themselves: How might individuals or communities change AI systems and how are they changed by them?
  • Relationships with AI developers: What kinds of relationships might one have, or already be having, with the organizations that create AI platforms?

More broadly: What is actually happening in the space between human and AI? What is this conjunction/collaboration? What are we creating through this interaction?

These are pretty large questions, and I’m going to focus in this and some other blog posts on some texts I’ve read recently that have guided my interest in thinking further about AI and relationships. Then later I will hopefully have a few clearer ideas to share.

Indigenous Protocol and AI position paper

My interest in this topic was first sparked by reading a position paper on Indigenous Protocol and Artificial Intelligence (2020), produced by participants in the Indigenous Protocol and Artificial Intelligence Working Group, who took part in two workshops in 2019. This work is a collection of papers, many of which were written by workshop participants. I found it incredibly thought-provoking and important, and I am only going to barely touch on small portions of it. For the purposes of this post, I want to discuss a few points about AI and relationships from the position paper.

Continue reading

Draft idea for an AI personal ethical decision framework

I recently wrote two blog posts on possible ways that generative AI might be able to support student learning in philosophy courses (part 1, part 2). But through doing so, and also through a thought-provoking comment by Alan Levine on my earlier blog post reflecting on a presentation by Dave Cormier on focusing on values in situations of uncertainty, I’m now starting to think more carefully about my use of AI and how it intersects with my values.

Alan Levine noted in his comment that sometimes people talking about generative AI start by acknowledging problems with it, and then “jump in full speed” to talking about its capabilities and possible benefits while no longer engaging with the original issues. This really struck me, because it’s something I could easily see myself doing too.

I started reflecting a lot on various problems with generative AI tools, as well as potential benefits I can imagine, and how all of these intersect with my values, in order to make more conscious ethical decisions about using generative AI in various situations, or not. On one hand, one could make philosophical arguments about what should be done “in general,” but even then each individual needs to weigh various considerations and their own values, and make their own decisions as to what they want to do.

I decided, then, to try to come up with a framework of some kind to support folks making those decisions. This is an early brainstorm; it will likely be refined over time and I welcome feedback! It is something that would take time, effort, and fairly deep reflection to go through, and it may go too far in that direction, especially since I can imagine something like this being used in a workshop (or series of workshops) or a course, which have time limits. (Of course, there is no requirement that people must work through something like this in a limited time period; they could always go through it on their own later. It’s just that I know myself, and I often intend to return to things like this later and, well, just get busy.) This is one aspect that needs more work.

The general idea is to go through possible benefits and problems with using generative AI tools, connect these to one’s values, and then brainstorm: whether one will use generative AI in a particular context, and if so, how one might address the problems and further support possible benefits.

I think it would be helpful to start with a set of possible uses in one’s particular context and arrange the rest from there, because a number of the possible benefits and problems can differ according to particular use cases. But there are some problems that are more general–e.g., issues with how generative AI tools are developed, trained, and maintained on the “back end,” as it were, which would apply to any downstream uses (such as energy usage, harm to data workers, violations of Indigenous data sovereignty in training, etc.). So I think some of the problems, at least, could be considered regardless of particular context of use.

First draft of framework

Without further ado, here is the very drafty first draft of the kind of thing I’m thinking about. At this point it’s just structured as a worksheet that starts off with brainstorming some possible uses of generative AI in one’s own work (e.g., teaching, learning, research, coding, data analysis, communications, and more). Then folks can pick one or two of those to focus on. The rest is a set of tables to fill out about potential benefits and problems with using generative AI in this way, and then a final one where folks make at least a provisional decision and then brainstorm one or two next steps.

Brainstorm possible uses

Think of a few possible uses of generative AI in your own work or study that you’d like to explore further, or ones you’re already engaged in. Take __ minutes to write down a list. [Providing a few example lists for folks could be helpful]

Then choose 2-3 of these to investigate further in the following steps.

Benefits and problems

Regarding problems with using AI, as noted above, some problems can apply regardless of the particular use case, and I think it’s important for folks to grapple with those even though they may be more challenging for individuals to address. Some background and resources on these would be useful to discuss in a facilitated session, ideally with some pre-reading. A number of the issues are fairly complex and would benefit from time to learn and discuss, so one can’t go through all of them in a limited time period.

The same goes for possible benefits; it would be useful to list a few possible areas in which there could be benefits from generative AI use, such as supporting student learning, doing repetitive tasks to free people up for more complex or interesting tasks, and supporting accessibility in some cases. These would necessarily be high level, while participants would brainstorm benefits that are more specific to their use cases.

One could ask folks to brainstorm a few problems and benefits for generative AI in their use cases, including one of the more general problems as well as at least one that is specific to their use case.

Problem or Benefit | Evidence | Impacts | Further info | My view | Value
E.g., climate impacts in both training and use | This could be links | Who is harmed? Who benefits? | What other info would be helpful? | One’s view on the topic at the moment | Related value(s) one holds

This is not very nice looking in a blog post but hopefully you get the idea.

Decisions

Then participants could be encouraged to try to make an initial decision on use of GenAI in a particular use case, even if that might change later.

Use case | Use GenAI? Why? | If yes, how? | Next steps
E.g., feedback on student work | Your choice, and why/why not | How to do so, including how you will address benefits and problems | What one or two next steps will you take? This can include how you would go about getting more information you need to decide.

 

Reflections

The idea here is not necessarily to have people try to weigh the benefits against the problems–that is too complicated and would require that one go through all possible benefits and problems one can think of. Instead, the point is to start to engage in deeper ethical reflection on a particular use case and try to come to some preliminary decision afterwards, even if that decision may change with further information.

One place where I think folks may get hung up is on feeling like they need more information to make decisions. That is completely understandable, and in a limited time frame participants wouldn’t be able to go do a bunch of research on their own. But the framework may at least be able to bring to the surface that ethical issues are complex and one needs to spend time with them, including finding out more where one doesn’t yet have information, or has only one or two sources and needs more. That’s why I put the “further info” column into the first table example. It’s also why, under “my view,” I suggested this be one’s view at this time, recognizing that things may change as one investigates further. And one of the next steps could be to investigate some of these things further.

Of course, one reasonable response to this exercise is to decide that some of the general problems are bad enough that one feels one shouldn’t use generative AI tools at all. I mean for this kind of exercise to leave that option open.

The more I think about this, the more I think it would probably be better to do something like this in at least two steps: one where ethical issues and benefits are discussed to the degree feasible in a certain time frame, and a second where folks go through their own use cases with the tables as noted above. Otherwise it’s likely to be too rushed.

 

This is a rough sketch of an idea at the moment that I will likely refine. I feel like something along these lines could be useful, even if this isn’t quite it. So I’m happy for feedback!

AI & philosophical activity in courses, part 2

Introduction

This is part 2 of my discussion of ways to possibly use AI tools to support philosophical activities in courses. In my part 1 blog post I talked about using AI to support learning about asking philosophical questions, analyzing arguments, and engaging in philosophical discussion. In this post I focus on AI and writing philosophy.

Caveats:

There are a lot of resources out there on AI and writing, and I’m purposefully focusing largely on my own thoughts at the moment, though many of those have likely been influenced by the things I’ve read so far. I may include a few links here and there, and use other blog posts to review and talk about ideas from others on AI and writing that may be relevant for philosophy.

In this post I’m not going to focus on trying to generate AI-proof writing assignments, or on ways to detect AI writing; I think both are very challenging and likely to change quickly over time. For the purposes of this post, my focus is on whether AI may be helpful for learning in terms of writing, not so much on AI and academic integrity (though that is also very important!).

Note that by engaging in these reflections I’m not saying that use of generative AI in courses is by any means non-problematic. There are numerous concerns to take into account, some of which are noted in a newly released set of guidelines on the use of generative AI for teaching and learning that I worked on with numerous other folks at our institution. The point here is just to focus on whether there might be at least some ways in which AI might support students in doing philosophical work in courses; I may not necessarily adopt any of these, and even if I do there will be numerous other things to consider.

I’m also not saying that writing assignments are the only or best way to do philosophy; it’s just that writing is something that characterizes much of philosophical work. It is of course important to question whether this should be the case, and consider alternative activities that can still show philosophical thinking, and I have done that in some courses in the past. But all of this would take us down a different path than the point of this particular blog post.

Finally I want to note that these are initial thoughts from me, not settled conclusions. I may and likely will change my mind later as I learn and think more. Also, a number of sections below are pretty sketchy ideas, but that’s because this is just meant as a brainstorm.

To begin:

Before asking whether/how AI might support student learning in terms of writing philosophy, I want to interrogate for myself the purposes of why I ask students to write in my philosophy courses, particularly in first-year courses. After all, in my introductory level course, few students are going to go on and continue to write specifically for philosophy contexts; some will go on to other philosophy courses, but many will not, and even fewer will go on to grad school or to do professional philosophy.

Continue reading

AI & philosophical activity in courses part 1

I was reading through some resources in the Educause AI … Friend or Foe showcase, specifically the one on AI and inclusive excellence in higher education, and one thing in particular struck me. The resource talks, among other things, about helping students understand the ways of thinking, speaking, and acting in a particular discipline, about making those clearer, and about whether AI might support this in some way.

This resonates with some ideas that have been bouncing around in my head the past few weeks about whether/how AI might help or hinder some of the activities I ask students to do in my courses, which led me to think about why I even ask them to do those activities in the first place. Thinking about this from a disciplinary perspective might help: what kinds of activities might be philosophical? I don’t mean just those that professional philosophers engage in, because few students in my courses will go on to be professional philosophers, but I believe all of them will do some kind of philosophical thinking, questioning, and discussing at some point in their lives.

So what might it mean to engage in philosophical activities and can AI help students engage in these better in some way, or not? This is part one of me thinking through this question; there will be at least a part two soon, because I have enough thoughts that I don’t want to write a book-length blog post…

Asking philosophical questions

This is something all philosophers do in one way or another, and that I think can be helpful for many people in various contexts. And yet I find it challenging to define what a philosophical question is, even though I do it all the time. I don’t teach this directly, but I should probably be more conscious about it because I do think it would be helpful for students to be able to engage in this activity more after the class ends.

This reminds me of a post I also read today, this time by Ryan J. Johnson on the American Philosophical Association blog called “How I Got to Questions.” Johnson describes a question-focused pedagogy, in which students spend a lot of their time and effort in a philosophy course formulating and revising questions, only answering them in an assignment towards the end. Part of the point is to help students to better understand over time what makes a question philosophical through such activities.

Johnson credits Stephen Bloch-Schulman in part, from whom I first heard about this approach, and who writes about question-focused pedagogy in another post on the APA blog. Bloch-Schulman did a study showing that philosophy faculty used questions more often, and in different ways, than undergraduates and faculty from other fields when reading the same text. I appreciated this point (among others!):

I believe that much of the most important desiderata of inclusive pedagogy is to make visible, for students, these same skills we hide from ourselves as experts, to make the acquisition of these skills as accessible as possible, particularly for those students who are least likely to pick up those skills without that work on our part. Question-skills being high on that list. (Introducing the Question-Focused Pedagogy Series)

One step for me in doing this more in my teaching would be to do more research and reflecting myself on what makes some questions more philosophical than others (Erica Stonestreet’s post called “Where Questions Come From” is one helpful resource, for example).

AI and learning/practicing philosophical questions

But this post is also focused on AI: might AI be used in a way to help support students to learn how to ask philosophical questions?

Continue reading

Blogging on blogging again: more meta!

Screen shot of the title of this blog, You're the Teacher, set against an image of misty mountains with a tree in the foreground.

Metapic

I’m joining the DS106 Radio Summer Camp this week, and Jim Groom put out an invitation to all of us to join in a session today about blogging called “Blog or Die!” Why does blogging rule all media, as Jim asked? I thought I’d blog a few notes about blogging as prep for joining this session.

I seem incapable of writing blog posts under 2000 words, but for this one I’m really gonna try!

Benefits of blogging myself

I started blogging in 2006, after learning about WordPress and blogs from the amazing Brian Lamb (who was at the University of British Columbia at the time, but who is now doing fantastic work over at Thompson Rivers University). Funny enough, one of my first posts was called “Why blog?”. Coming around to the same theme I guess!

In reading over that post I find it still resonates with me eighteen years later. Benefits of blogging I wrote back then:

  • Reflecting on teaching and learning so as to improve
  • Sharing back with others, since I have learned so much from those who have shared their reflections
  • Connecting with a community
  • Thinking things out for oneself and being able to find those reflections fairly quickly later

Continue reading