Author Archives: chendric

AI & relationships: Vallor, The AI Mirror

As discussed in a recent blog post, I’ve been thinking a lot about AI and relationships recently, and in this post I’m going to discuss a few points related to this topic from a book by Shannon Vallor called The AI Mirror (2024). Vallor doesn’t directly address AI and relationships, but I think a number of her arguments do bear on the various ways in which humans relate to themselves, each other, and AI.

Mirrors and their distortions

Vallor focuses throughout the book on the metaphor of AI as a mirror, which she uses to make a few different points. First, she talks about how many current AI systems function as mirrors to humanity in the sense that how they operate is based on training data that reflects current and past ideas, beliefs, values, practices, emotions, imagination, and more. They reflect back to humans an image of what many (not all, since this data is partial and reflects dominant perspectives) have already been.

In one sense, there can be some silver lining in this, Vallor notes, as such mirrors can show things in stark relief that might further emphasize the need for action:

AI today makes the scale, ubiquity, and structural acceptance of our racism, sexism, ableism, classism, and other forms of bias against marginalized communities impossible to deny or minimize with a straight face. It is right there in the data, being endlessly spit back in our faces by the very tools we celebrate as the apotheosis of rational achievement. (46)

But of course, these biases showing up in AI outputs are harmful, and she spends much of the book focusing on the downsides of relying too heavily on AI mirrors for decision making and for understanding ourselves and the world around us, given that they, like any mirror, provide only a surface, distorted reflection. For one thing, as noted above, their reflections tend to show only part of humanity’s current and past thoughts, values, and dreams, with outputs that, in the case of LLMs for example, focus on what is most likely given what is most prevalent in the training data.

In addition, AI mirrors can only capture limited aspects of human experience, since they don’t have the capacity for lived experience of the world or being embodied creatures. For example, language models can talk about pleasure, pain, the taste of a strawberry, a sense of injustice, etc., but they do not of course have experiences of such things. This can have profound impacts on humans’ relationships with each other, if those are mediated by AI systems that reduce people to machine-readable data. Vallor illustrates this by pointing to the philosopher Emmanuel Levinas’ account of encountering another person as a person and the call to responsibility and justice that ensues:

As … Emmanuel Levinas wrote in his first major work Totality and Infinity, when I truly meet the gaze of the Other, I do not experience this as a meeting of two visible things. Yet the Other (the term Levinas capitalizes to emphasize the other party’s personhood) is not an object I possess, encapsulated in my own private mental life. The Other is always more than what my consciousness can mirror. This radical difference of perspective that emanates from the Other’s living gaze, if I meet it, pulls me out of the illusion of self-possession, and into responsibility….

In this gaze that holds me at a distance from myself, that gaze of which an AI mirror can see or say nothing, Levinas observes that I am confronted with the original call to justice. When a person is not an abstraction, not a data point or generic “someone,” but a unique, irreplaceable life standing before you and addressing you, there is a feeling, a kind of moral weight in their presence, that is hard to ignore. (60)

The more people treat each other through the lens of data that can be “classified, labeled, counted, coordinated, ranked, distributed, manipulated, or exploited” rather than as “subjects of experience,” the more we may lose that already too-rare encounter (61). This is nothing new, of course; it’s a trend that has been under way for a long time in many human communities. But it can be made worse by outsourcing decisions to AI systems, such as those related to health care, insurance, jobs, access to educational institutions, predictions of who may be a repeat offender, and more, which can in some cases reduce opportunities for human judgment in the name of efficiency.


Workshop idea on AI ethical decision making

I am thinking about whether/how I might be able to take the very drafty personal AI ethics framework idea from a recent blog post and do something with it during a synchronous workshop for faculty, students, and staff. As I was working on that earlier post, I started to think that really working through one’s ethical views on AI is very complex and might be best done through something like a set of online modules rather than a short engagement like a workshop. But since I frequently think best by writing, I’m going to use this post to try to see what might be possible!

I’m imagining a 1.5- or 2-hour workshop on this topic, and wondering what might be both feasible and useful for helping participants think carefully about ethical considerations in possible uses of generative AI in teaching and learning. My main worry, as I think about this, is that making ethical decisions is really complicated, and I don’t want to overwhelm people with so many things to consider that some may end up feeling like it’s too much to attempt. I really want to find a middle ground between a deep ethical analysis of decisions around generative AI (which could be, and has been, done in book-length manuscripts!) and providing little in the way of guidance on how to make ethical decisions in this area. This is challenging, I’m finding, as I think it through.

Below is a draft outline for a workshop, with some early ideas that will need further refinement.

Outline for a workshop

1. Ethical decision making & use cases

Framework

I think it could be helpful to have some kind of ethical decision making framework. What I have in my earlier blog post is not quite there yet; I don’t think it includes everything it needs to, though it’s a start. After doing a quick web search on ethical frameworks, and considering my own thoughts, here are some elements that it would be good to include for the purposes of this kind of workshop. I’m numbering them just for ease of referring to them later, but they may not necessarily be in exactly this order.

  1. Identify the question/decision to be made, and what options are available
  2. List various entities involved, including people and also other living and non-living entities as relevant
  3. Identify possible ethical issues involved
  4. Gather information relevant to those issues as best you can; note questions you still have and where you would like to have further information
  5. Evaluate options according to ethical values and principles
  6. Make a decision
  7. Develop and then act on next steps

There are likely more things to consider, such as reviewing the outcome of the decision to consider its positive and negative ethical impacts and learn for the future, but for the current purpose the above is a decent start I think.

This section of the workshop could include a brief introduction to the ethical decision making framework being used in the session, which will then guide parts of what follows. We won’t be able to do all of the above steps in a short workshop.

Brainstorming use cases

In addition, at this point we could ask participants to brainstorm one or more possible use cases for generative AI in teaching and learning (or in some other context, depending on audience). This would be step 1 in the framework above. These could be contributed individually, perhaps in a shared Google Doc, to be used later in the session. Time permitting, they could also include information on the people and other entities involved (step 2 in the framework).

For example, one use case could be deciding whether to use generative AI tools to make comments on student written work. It would be helpful to consider some further specifics, such as possible tools to be used and the kind of assignment and feedback one is thinking about. Those involved would be students, the instructor, and possibly TAs.


AI and relationships: Indigenous Protocol and AI paper

I’ve been thinking a lot lately about generative AI and relationships. Not just in terms of how people might use platforms to create AI companions for themselves, though that is part of it. I’ve been thinking more broadly about how development and use of generative AI connects with our relationships with other people, with other living things and the environment, and with ourselves. I’ve also been thinking about our relationships as individuals with generative AI tools themselves; for example, how my interactions with them may change me and how what I do may change the tools, directly or indirectly.

For example, the following kinds of questions have been on my mind:

  • Relationships with other people: How do interactions with AI directly or indirectly benefit or harm others? What impacts do various uses of AI have on both individuals and communities?
  • Relationships with oneself: How do interactions with AI change me? How do my uses of it fit with my values?
  • Relationships with the environment: How do development and use of AI affect the natural world and the relationships that individuals and communities have with living and non-living entities?
  • Relationships with AI systems themselves: How might individuals or communities change AI systems and how are they changed by them?
  • Relationships with AI developers: What kinds of relationships might one have/is one having with the organizations that create AI platforms?

More broadly: What is actually happening in the space between human and AI? What is this conjunction/collaboration? What are we creating through this interaction?

These are pretty large questions, and I’m going to focus in this and some other blog posts on some texts I’ve read recently that have guided my interest in thinking further about AI and relationships. Then later I will hopefully have a few clearer ideas to share.

Indigenous Protocol and AI position paper

My interest in this topic was first sparked by reading a position paper on Indigenous Protocol and Artificial Intelligence (2020), produced by participants in the Indigenous Protocol and Artificial Intelligence Working Group, which held two workshops in 2019. This work is a collection of papers, many of which were written by workshop participants. I found it incredibly thought-provoking and important, and I am only going to barely touch on small portions of it. For the purposes of this post, I want to discuss a few points about AI and relationships from the position paper.


Draft idea for an AI personal ethical decision framework

I recently wrote two blog posts on possible ways that generative AI might be able to support student learning in philosophy courses (part 1, part 2). But through doing so, and also through a thought-provoking comment by Alan Levine on my earlier blog post reflecting on a presentation by Dave Cormier on focusing on values in situations of uncertainty, I’m now starting to think more carefully about my use of AI and how it intersects with my values.

Alan Levine noted in his comment that sometimes people talking about generative AI start by acknowledging problems with it, and then “jump in full speed” to talking about its capabilities and possible benefits while no longer engaging with the original issues. This really struck me, because it’s something I could easily see myself doing too.

I started reflecting a lot on various problems with generative AI tools, as well as potential benefits I can imagine, and on how all of these intersect with my values, in order to make more conscious ethical decisions about whether to use generative AI in various situations. On one hand, one could make philosophical arguments about what should be done “in general,” but even then each individual needs to weigh various considerations and their own values, and make their own decisions as to what they want to do.

I decided, then, to try to come up with a framework of some kind to support folks making those decisions. This is an early brainstorm; it will likely be refined over time, and I welcome feedback! It is something that would take time, effort, and fairly deep reflection to go through, and it may go too far in that direction, especially since I can imagine something like this being used in a workshop (or series of workshops) or a course, and those have time limits. (Of course, there is no requirement that people work through something like this in a limited time period; they could always go through it on their own later. It’s just that I know myself, and I often intend to return to things like this later and, well, just get busy.) This is one aspect that needs more work.

The general idea is to go through possible benefits and problems with using generative AI tools, connect these to one’s values, and then brainstorm whether one will use generative AI in a particular context and, if so, how one might address the problems and further support the possible benefits.

I think it would be helpful to start with a set of possible uses in one’s particular context and arrange the rest from there, because a number of the possible benefits and problems can differ according to particular use cases. But there are some problems that are more general–e.g., issues with how generative AI tools are developed, trained, and maintained on the “back end,” as it were, which would apply to any downstream uses (such as energy usage, harm to data workers, violations of Indigenous data sovereignty in training, etc.). So I think some of the problems, at least, could be considered regardless of particular context of use.

First draft of framework

Without further ado, here is the very drafty first draft of the kind of thing I’m thinking about. At this point it’s just structured as a worksheet that starts off with brainstorming some possible uses of generative AI in one’s own work (e.g., teaching, learning, research, coding, data analysis, communications, and more). Then folks can pick one or two of those to focus on. The rest is a set of tables to fill out about potential benefits and problems with using generative AI in this way, and then a final one where folks make at least a provisional decision and then brainstorm one or two next steps.

Brainstorm possible uses

Think of a few possible uses of generative AI in your own work or study that you’d like to explore further, or ones you’re already engaged in. Take __ minutes to write down a list. [Providing a few example lists for folks could be helpful]

Then choose 2-3 of these to investigate further in the following steps.

Benefits and problems

Regarding problems with using AI: as noted above, some apply regardless of the particular use case, and I think it’s important for folks to grapple with those even though they may be more challenging for individuals to address. Some background and resources on these would be useful to discuss in a facilitated session, ideally with some pre-reading. A number of the issues are fairly complex and would benefit from time to learn and discuss, so one can’t go through all of them in a limited time period.

The same goes for possible benefits; it would be useful to list a few areas in which generative AI use could be beneficial, such as supporting student learning, doing repetitive tasks to free people up for more complex or interesting work, or supporting accessibility in some cases. These will necessarily be high level, while participants would brainstorm benefits that are more specific to their use cases.

One could ask folks to brainstorm a few problems and benefits for generative AI in their use cases, including one of the more general problems as well as at least one that is specific to their use case.

Problem or benefit | Evidence | Impacts | Further info | My view | Value
E.g., climate impacts in both training and use | This could be links | Who is harmed? Who benefits? | What other info would be helpful? | One’s view on the topic at the moment | Related value(s) one holds

This is not very nice looking in a blog post but hopefully you get the idea.

Decisions

Then participants could be encouraged to try to make an initial decision on use of GenAI in a particular use case, even if that might change later.

Use case | Use GenAI? Why? | If yes, how? | Next steps
E.g., feedback on student work | Your choice, and why/why not | How to do so, including how you will address benefits and problems | What one or two next steps will you take? This can include how you would go about getting more information you need to decide.

 

Reflections

The idea here is not necessarily to have people try to weigh the benefits against the problems–that is too complicated and would require that one go through all possible benefits and problems one can think of. Instead, the point is to start to engage in deeper ethical reflection on a particular use case and try to come to some preliminary decision afterwards, even if that decision may change with further information.

One place where I think folks may get hung up is on feeling like they need more information to make decisions. That is completely understandable, and in a limited time frame participants wouldn’t be able to go do a bunch of research on their own. But the framework can at least bring to the surface that ethical issues are complex, and one needs to spend time with them, including finding out more where one doesn’t yet have information, or has only one or two sources and needs more. That’s why I put the “further info” column into the first example table. It’s also why, under “my view,” I suggested this be one’s view at this time, recognizing that things may change as one investigates further. And one of the next steps could be to investigate some of these things further.

Of course, one reasonable response to this exercise is to decide that some of the general problems are bad enough that one feels one shouldn’t use generative AI tools at all. I mean for this kind of exercise to leave that option open.

The more I think about this, the more I think it would probably be better to do something like this in at least two sessions: one where ethical issues and benefits are discussed to the degree feasible in a certain time frame, and a second where folks go through their own use cases with the tables as noted above. Otherwise it’s likely to be too rushed.

 

This is a rough sketch of an idea at the moment that I will likely refine. I feel like something along these lines could be useful, even if this isn’t quite it. So I’m happy for feedback!

AI & philosophical activity in courses, part 2

Introduction

This is part 2 of my discussion of ways to possibly use AI tools to support philosophical activities in courses. In my part 1 blog post I talked about using AI to support learning about asking philosophical questions, analyzing arguments, and engaging in philosophical discussion. In this post I focus on AI and writing philosophy.

Caveats:

There are a lot of resources out there on AI and writing, and I’m purposefully focusing largely on my own thoughts at the moment, though likely many of those have been influenced by the many things I’ve read so far. I may include a few links here and there, and use other blog posts to review and discuss some ideas from others on AI and writing that may be relevant for philosophy.

In this post I’m not going to focus on trying to generate AI-proof writing assignments, or on ways to detect AI writing; I think both are very challenging and likely to change quickly over time. My focus is on whether AI may be helpful for learning in terms of writing, not so much, for the purposes of this post, on AI and academic integrity (though that is also very important!).

Note that by engaging in these reflections I’m not saying that use of generative AI in courses is by any means non-problematic. There are numerous concerns to take into account, some of which are noted in a newly-released set of guidelines on the use of generative AI for teaching and learning that I worked on with numerous other folks at our institution. The point here is just to focus on whether there might be at least some ways in which AI might support students in doing philosophical work in courses; I may not necessarily adopt any of these, and even if I do there will be numerous other things to consider.

I’m also not saying that writing assignments are the only or best way to do philosophy; it’s just that writing is something that characterizes much of philosophical work. It is of course important to question whether this should be the case, and consider alternative activities that can still show philosophical thinking, and I have done that in some courses in the past. But all of this would take us down a different path than the point of this particular blog post.

Finally I want to note that these are initial thoughts from me, not settled conclusions. I may and likely will change my mind later as I learn and think more. Also, a number of sections below are pretty sketchy ideas, but that’s because this is just meant as a brainstorm.

To begin:

Before asking whether/how AI might support student learning in terms of writing philosophy, I want to interrogate for myself why I ask students to write in my philosophy courses, particularly in first-year courses. After all, in my introductory level course, few students are going to continue to write specifically for philosophy contexts; some will go on to other philosophy courses, but many will not, and even fewer will go on to grad school or to do professional philosophy.


AI & philosophical activity in courses part 1

I was reading through some resources on the Educause AI … Friend or Foe showcase, specifically the one on AI and inclusive excellence in higher education, and one thing in particular struck me. The resource talks, among other things, about helping students understand the ways of thinking, speaking, and acting in a particular discipline, about making those ways clearer, and about whether AI might support this in some way.

This resonates with some ideas that have been bouncing around in my head the past few weeks on whether/how AI might help or hinder some of the activities I ask students to do in my courses, which led me to think about why I even ask them to do those activities in the first place. Thinking about this from a disciplinary perspective might help: what kinds of activities might be philosophical? I don’t mean just those that professional philosophers engage in, because few students in my courses will go on to be professional philosophers, but all of them, I believe, will do some kind of philosophical thinking, questioning, discussing, etc. at some point in their lives.

So what might it mean to engage in philosophical activities and can AI help students engage in these better in some way, or not? This is part one of me thinking through this question; there will be at least a part two soon, because I have enough thoughts that I don’t want to write a book-length blog post…

Asking philosophical questions

This is something all philosophers do in one way or another, and something I think can be helpful for many people in various contexts. And yet I find it challenging to define what a philosophical question is, even though I ask them all the time. I don’t teach this directly, but I should probably be more conscious about it, because I do think it would be helpful for students to be able to engage in this activity more after the class ends.

This reminds me of a post I also read today, this time by Ryan J. Johnson on the American Philosophical Association blog called “How I Got to Questions.” Johnson describes a question-focused pedagogy, in which students spend a lot of their time and effort in a philosophy course formulating and revising questions, only answering them in an assignment towards the end. Part of the point is to help students to better understand over time what makes a question philosophical through such activities.

Johnson credits Stephen Bloch-Schulman in part, from whom I first heard about this approach, and who writes about question-focused pedagogy in another post on the APA blog. Bloch-Schulman did a study showing that, when reading the same text, philosophy faculty used questions more often, and in different ways, than undergraduates and faculty in other fields. I appreciated this point (among others!):

I believe that much of the most important desiderata of inclusive pedagogy is to make visible, for students, these same skills we hide from ourselves as experts, to make the acquisition of these skills as accessible as possible, particularly for those students who are least likely to pick up those skills without that work on our part. Question-skills being high on that list. (Introducing the Question-Focused Pedagogy Series)

One step for me in doing this more in my teaching would be to do more research and reflecting myself on what makes some questions more philosophical than others (Erica Stonestreet’s post called “Where Questions Come From” is one helpful resource, for example).

AI and learning/practicing philosophical questions

But this post is also focused on AI: might AI be used in a way to help support students to learn how to ask philosophical questions?


Blogging on blogging again: more meta!

[Image: screen shot of the title of this blog, You’re the Teacher, set against an image of misty mountains with a tree in the foreground. Caption: Metapic]

I’m joining the DS106 Radio Summer Camp this week, and Jim Groom put out an invitation to all of us to join in a session today about blogging called “Blog or Die!” Why does blogging rule all media, as Jim asked? I thought I’d blog a few notes about blogging as prep for joining this session.

I seem incapable of writing blog posts under 2000 words, but for this one I’m really gonna try!

Benefits of blogging myself

I started blogging in 2006, after learning about WordPress and blogs from the amazing Brian Lamb (who was at the University of British Columbia at the time, but who is now doing fantastic work over at Thompson Rivers University). Funny enough, one of my first posts was called “Why blog?”. Coming around to the same theme I guess!

In reading over that post I find it still resonates with me eighteen years later. Among the benefits of blogging I listed back then:

  • Reflecting on teaching and learning so as to improve
  • Sharing back with others, since I have learned so much from those who have shared their reflections
  • Connecting with a community
  • Thinking things out for oneself and being able to find those reflections fairly quickly later


Values-based tugging

Okay, so the title of this post may seem a little strange but bear with me. Yesterday I listened to a fantastic session by Dave Cormier for the DS106 Radio Summer Camp this week, called “A year of uncertainty – fighting the fight against the RAND corporation.” I wasn’t entirely sure what to expect, as I hadn’t managed to find the abstract/description of this session until after it was over (click on the session link on the schedule for the summer camp), but I knew Dave is amazing, so of course I had to listen! And it was very thought-provoking as I figured it would be.

Problem solving and uncertainty

One of the main points Dave made was about how many aspects of our social, political, educational, and other lives are focused on problem-solving: on addressing well-defined problems that can have well-defined answers we just need to work hard to find. This is not necessarily a problem in itself, Dave noted, as such problems do exist and there can be very useful methods for working to address them. The issue is if we focus on those to the point of ignoring the less easily defined problems, the messier issues, the more uncertain situations where a single right answer is not going to be forthcoming no matter what kinds of problem-solving methodologies we throw at them.

Dave mentioned medical students coming out of their education into practice: when confronted with complex, uncertain, grey areas where a medical solution isn’t immediately forthcoming, they tended to blame themselves, as if it were their failure for not finding an answer where none was to be found. He also noted how, at least in English, when someone asks a question like “what is your view of X?” or “is Y right or wrong?”, it is common to feel like you have to answer, even if you aren’t sure or there isn’t a clear-cut answer. It’s just part of the accepted norms of speaking that you should have an answer.

Both of these resonated with me, and perhaps especially the second; I have sometimes been asked, in various contexts, to provide my view on something that is of a more uncertain nature, or to say if I think it’s right, or to say what I think the future will bring, and I do feel pressured to respond. But maybe because of my background in philosophy I’m actually pretty comfortable with saying that I am not sure, or I’d need to look into it more, because such situations really do require more thought, research, reflection before coming to a conclusion.

There is the danger of jumping in too quickly with an answer, but there is also a danger in spending too much time in the thinking and reflection and not moving past that towards making some kind of decision or other. And sometimes I get stuck in that latter step when faced with really complex issues–there is so much to consider and so much value in multiple perspectives that it can be hard to “land” somewhere, as it were. It’s tempting to remain up in the air while not being sure of which alternatives are best (because there are no easy answers).

Landing on values and pulling from there

I really appreciated where Dave landed in his presentation: rather than only feeling stuck, suspended, we can consult our values and make a move based on those, we can tug the rope in a tug of war in the direction of our values and work to move things from there. The focus on values is key here: ask yourself what are your values as they relate to this situation, and make decisions and act based on those, knowing that’s enough in uncertain situations. Which doesn’t mean, of course, that you can’t revisit your values and how they apply to the situation if either of those things changes, but that it’s a landing place and it’s solid enough for the moment. He talked about how we can have conversations with students and others about why we would do something in a particular situation, rather than what the right answer is, focusing on the values that are moving us.

To do so requires that we are clear about what our values are, which is in some cases more easily said than done. This is something near and dear to my heart as a philosopher, since trying to distill what underlies our views and decisions, what kinds of reasons and values, is part of our bread and butter. But when I reflect on how I’ve taught over the years, I’m not sure I’ve focused as much as I could have on helping students be clear about their values, instead focusing quite a bit on the “content” of the course. The latter has been in the service of helping students understand that when we make ethical choices there are (or should be) reasons behind those, and some options as to what kinds of reasons those could be. I, like many other philosophers, have then also asked students to provide their own arguments related to various ethical and other philosophical questions, which does at times mean providing reasons based on values. But how much have I really spent time supporting students to define and articulate their own values, in addition to applying them through writing arguments? I’m not sure, and this session was really generative for me in thinking about that (as well as being generative in multiple other ways!).

A couple of years ago I wrote a blog post as part of MYFest 2022, talking about how I had a hard time imagining a more just future for education just because I kept focusing on all of the structural complexities involved in educational systems and how changing one thing would require changing many more interconnected aspects and … it all felt pretty overwhelming. The metaphor I used was of rocks and boulders, which came to me as I was passing multiple rock formations on a walk. Some piles of rocks are fairly easy to move; others are locked into network-like shapes where to move one would require moving all the others, and they are after all very heavy. If I think in these terms then of course it's hard to imagine change. Things are literally set in stone!


But what if we thought about complex issues and structures more like flexible webs? (Which is an image that reminds me of other work of Dave Cormier's, such as that on rhizomatic learning.) So that if you tug on one part it can still move and the other parts will move as well (or break I suppose, which in some cases may not be a bad thing).

This feels more hopeful to me–it still respects the interconnectedness of structures but also notes there can be some movement, some wiggle room. Perhaps the spider web is too flexible to respect the challenges of moving some of the more entrenched structures, though. Even though spider silk is incredibly strong, it seems a bit too easy to just sweep away with the swoosh of one’s hand.

How about a net:

This feels stronger, and like a spider web, meant to catch and hold things tight, but which can still be moved, shaped, morphed, or even broken. I like the image above because a piece of the net is fraying, noting its fragility amidst the otherwise tight knots.

A line that Dave ended on will stick with me: “Ask yourself what you care about, and then do what you can.” That feels empowering.

Applying to AI

One of the things that feels uncertain to me in this moment is where things are going with AI, what the future holds, and what the best approaches are to using AI (or not!) in education. How might those of us who are educators address the question of whether and/or how to adopt AI in our courses, in our teaching practices, to encourage our students to use it, etc.? Of course, all of this is going to differ according to context, discipline, teaching and learning goals, and more. But I think Dave’s session provides a fruitful way to approach this question. This is a complicated and uncertain situation but what we can do is consult our values: what do we value, what do we care about, what do we want to promote and avoid?

This may seem fairly elementary in a way–might we already frequently act from our values? Maybe, but there are also times when I know I have done things in teaching because they just seemed like the usual thing to do, what I had experienced, that they just seemed right and "normal," but when I took a step back to think about my values and what I care about then things changed. For example, I used to get upset when people would leave during the middle of class, until I reflected on how I care about supporting students to learn in the ways that are helpful for them, coupled with learning about how some students need to move around, or to take breaks from stimulation, or need to leave for other reasons. It's still not easy, especially in small courses, but I'm focusing less on how I feel in that situation and more on how being able to take a break may be more helpful for some students than sitting in one place for 50-80 minutes.

Perhaps the key is that previous point: taking some time to reflect on one's values, what is important, what one cares about, and then applying them to one's teaching practice. At one point last year I took the time to write out my values in terms of leadership, and that was immensely helpful in focusing my attention on areas I wanted to work on. I may have acted from some of those automatically, but bringing them to the surface helped me not only see what was grounding some of my actions but also where I professed values that my actions could do a better job of supporting.

Now, this process isn’t going to lead to easy answers (there are none for the kinds of issues Dave was talking about), and our values may lead to conflicting viewpoints. For example, I care about allowing students to use technology that will support their learning, and I think that generative AI may be helpful for student learning in some cases–I’ve been looking into how it can support students with various disabilities, for example, and should blog about that later. Then there is the value of equity and how not all students have equal access to generative AI tools, so some may get supports that others don’t. But digging into what one values can help clarify why to go one direction or another, putting one on more solid footing while starting to tug, even if one isn’t entirely sure that is the best direction. It is the best one for this moment while recognizing the complexity that makes it a difficult, but at least grounded, choice.

And if we go with the net metaphor, then tugging in one place can pull other threads, moving things in a local area to start, and maybe in larger areas over time. Particularly if more people are tugging in similar directions (organized action, e.g.). One person can make a difference, but it is more likely that many, working together, can make a larger difference. And we may fray that net to the point of finding ways to morph or break some of the confining structures we find ourselves in.

All of this is also bringing to mind the idea of “entangled pedagogy” from Tim Fawns, which I wrote a blog post about in 2022. Rather than reviewing that blog post, I’ll just say that he has an aspirational view of the relationship between technology and pedagogy in which we focus on purposes, values, and contexts in an entangled relationship with technology and pedagogy. Rather than trying to emphasize pedagogy over technology or vice versa, or even how they are connected to each other, we focus instead on the purposes and the values we have in teaching and learning, and the specificity of our contexts, and how those can shape our choices in both pedagogy and technology (and how they intertwine).

In a quote that resonates with some of what Dave said and what I’ve written here, Tim notes:

Attending to values, purposes and context can help us identify problematic assumptions, such as those embedded in simple solutions to complex problems, reductive characterisations of students (e.g. as ‘digital natives’, see Oliver 2011), or assertions that teachers should conform to modern digital culture and practices (Clegg 2011).

Conclusion

I really appreciated the opportunity to participate in this session with Dave. There was a lot more than what I’ve been able to talk about above, so I highly suggest you listen to the recording when it’s posted on the DS106 Radio Summer Camp recordings page (and check out other fantastic sessions while you’re at it of course!). Big thank you to Dave for a thought-provoking session!

Imagination by Benjamin, Part 3

stacks of books in boxes with a sign above them that says "Libros Libres" (free books).

Libros Libres by Alan Levine on Flickr, licensed CC0.

 

In the last two posts I have been talking about Ruha Benjamin’s book Imagination: A Manifesto, which I’m reading as part of a book club for MYFest 2024. Here is part 1 on chapters 1 and 2, and here is part 2 on chapters 3 and 4. In this last post I’ll discuss chapters 5 and 6.

Among other things, chapter 5 focuses on how art and stories are crucial for changing imaginations and providing new visions. Benjamin quotes Angela Y. Davis:

… if we believe that revolutions are possible, then we have to be able to imagine different modes of being, different ways of existing in society, different social relations. In this sense art is crucial. Art is at the forefront of social change. Art often allows us to grasp what we cannot yet understand. (98-99)

For example, Benjamin points to artistic and imaginative dreams of what border areas between nations could be, rather than policed, surveilled, violent structures. One idea is a "binational library on the Mexico-US border" that would make the border "nothing more than a bookshelf allowing for 'transnational exchanges of books, ideas, and knowledge'" (quoting Ronald Rael; 95). In a twist, Benjamin points out that such a library existed between the US and Canada in the early 20th century, and while that may not seem so far-fetched, if a library on the Mexico-US border does, what does that tell us about the differences? About ourselves? Instead of the harsh break between ourselves and others, us and them, Benjamin states, "We must populate our imaginations with images and stories of our shared humanity, of our interconnectedness, of our solidarity as people. A poetics of welcome, not walls" (p. 102).

Chapter 5 includes multiple examples of organizations dedicated to imagining the future, telling new stories about interconnection, collaboration, and interdependence, and working towards implementing them. Benjamin also dedicates space to discussing Afrofuturism and Indigenous futurism, as imaginations that counter a prevailing trend in which “Indigenous and racialized peoples, who know all too well what it means to live in a dystopian present, get suspended in time, never imagined among those peopling the future” (112).

I had heard these terms before but Benjamin’s short discussion helped me grasp them better. I have experienced that it is fairly common to think about Indigenous communities and cultures in terms of their past traditions, whereas Indigenous futurism, according to Grace Dillon, folds the past into the present, “which is folded into the future–a philosophical wormhole that renders the very definitions of time and space fluid in the imagination” (113). Benjamin points to the Initiative for Indigenous Futures at Concordia University here in Canada that, among other activities, teaches Indigenous youth how to “adapt stories from their community into experimental digital media,” to “envision themselves in the future while drawing from their heritage” (114). This is another form of breaking down walls, those between times as well as spaces, weaving the past into the present into the future and back.

My own visions of the future will of course draw from my past and present experiences, which will be limited (as any individual’s would be) and based in my privileged position. One thing I’m taking from these chapters is that imagining new worlds should be a collaborative activity with people involved who bring many different epistemologies, experiences, and identities.

This is a good segue into chapter 6, which is a short, practical chapter that provides sample activities to expand and strengthen imaginations. While Benjamin notes these can be done through individual reflections, she encourages readers to engage in these activities with others, through collective imagination: “Like mushrooms, the kind of imagination that can potentially transform toxic environments into habitable ones relies on a vast network of underground connections–with people, organizations, and histories” (p. 122). The appendix includes discussion- and activity-based prompts for individuals or groups that are short, but no less inviting and open-ended.

I am very grateful for the opportunity in MYFest to join a reading circle about this book, and not only reflect on imagination through reading the book, but also practice it in our group meetings. One activity I found particularly engaging (among many!) during the group meetings I was able to attend was a collaborative story-building activity. We started with a scenario, and then one person would have to brainstorm a challenge or obstacle, and another person would come up with an idea for how to address that, and then there would be another challenge, and so on, until we brought the story to a conclusion. These were short exercises, just a few back-and-forths of a challenge plus a way to address it, but it was incredibly powerful to have the chance to imagine both how things can go well and also the reality that there will be complexity and obstacles, and then be nudged to really think hard about how to address those. This was a very hopeful exercise, in that we didn't get stuck with the obstacles, but moved through and beyond them to something new.

There have been a few sessions in MYFest on imagination and speculative futures, including one that happened today on Imagination as a Liberatory Practice with Jasmine Roberts-Crews. During this session participants were encouraged to reflect on their practices of dreaming and play, and if those were challenging, to then reflect on why and what the obstacles are. I found myself thinking that I don’t have much of a dreaming practice, partly because I’ve been so influenced by the idea that such things aren’t “productive,” and it’s better to spend one’s time doing work that is more traditionally considered so. Of course, if I were someone involved in more creative pursuits I might feel differently!

Through reading Benjamin’s book and discussing it in the reading circle meetings, as well as attending Jasmine’s session (and others noted below), I’m realizing the deep importance of dreaming, daydreaming, imagination, and play to ideating and working towards necessary social change. Otherwise it’s too easy to get caught in how things are, adhering to existing practices and values and their notion of what counts as important and “productive.”

There have been several sessions so far (and more to come!) on imagination and speculative ideation, and I haven't been able to attend them all. Thankfully, many have been recorded. You can find recordings of MYFest sessions on the MYFest website. For example, I attended the first session by Dani Dilkes on Speculative Futures for Education, and am looking forward to reviewing the recording for the second (Session 1 and Session 2). Though I will not be in town and so not able to attend, there is one coming up July 31 on Future Dreaming Part 2: Storytelling for Liberatory Education. The Wisdom of Fools Storytelling Activities and Readers Theatre are also wonderful examples of tending to our imaginations.

I am so grateful for the MYFest facilitators and the community generally for many things we’re learning together, including focusing on the importance of imagination, dreaming, and play in our lives and in our work as educators!

Imagination by Benjamin, Part 2

A wooden bench in a park, with arm rests placed along the length of it so that you can't lie down.

Image by Nhung Tran from Pixabay

In the last post, I wrote down notes on chapters 1 and 2 in the book Imagination: A Manifesto by Ruha Benjamin. In this post I continue with chapters 3 and 4.

In these chapters, Benjamin talks about multiple imaginaries that support and perpetuate inequalities, and ways to imagine otherwise. Chapter 3 begins with a point that particularly struck me: in discussing how some who are working on virtual reality (VR) technologies note that they can help people to experience better living spaces that they don’t have in reality, Benjamin asks:

But how about a reality where everyone has sufficient resources? Instead of imagining a world where gross extremes between the wealthy and poor are ended, a growing industry fueled by the imagination of the uber-rich is working overtime to create virtual escapes from inequality and sell us on their dreams. (47)

While escape can be very important sometimes, I found her point compelling: the effort and expense going into some applications of VR could instead be spent on imagining a world where VR isn't needed to accommodate vast differences in wealth.

And yet, VR and AR (augmented reality) can be very important as learning tools too; I can think of some applications at my institution that help students better learn about the brain, that support nurse practitioners to practice their skills with virtual patients, and that provide opportunities for students to practice new language skills with a virtual avatar (to name a few).

Benjamin herself points to a valuable use of augmented reality in Chapter 4, where she discusses an app called Breonna’s Garden, dedicated to celebrating the life of Breonna Taylor. So it’s not the technology that’s the problem, it’s the story, the imagination, that the technology is used to support.

Chapter 3 focuses on a “eugenics imagination,” in which “some lives are deemed desirable and others disposable” (49). Benjamin discusses some topics that many may more easily associate with eugenics, such as forced sterilization programs in women’s prisons, but also prisons themselves as “eugenic institution[s], snatching up and discarding those society deems human detritus” (58). She also points to structures that make play either difficult or downright dangerous for Black and Brown children as supporting an imaginary where some people’s lives are more disposable than others’. This is due to issues such as “under-resourced or understaffed daycare centers,” design decisions in cities and in marginalized neighbourhoods that “make it hard to play freely,” and police “disrupt[ing] Black leisure with stop-and-frisk, targeted harassment, and violence” such as being killed while at play (e.g., Tamir Rice and Raymond Chaluisant) (61-62).

Benjamin returns to design at the end of chapter 3, with another discussion that really stood out to me. She starts out talking about how park benches are often made to keep people from lying down and sleeping, including a spiked bench where the spikes would only retract if you paid money (67). She then points to a creative project called Archisuits by Sarah Ross, which are suits that have foam appendages that allow one to get around such architectural designs, such as putting the foam between the arm rests on benches so that you can lie down on top of the foam. This really struck me:

The bench is a great metaphor for the spikes built into our institutions, while the foam-lined suit epitomizes how individuals are made responsible for being smarter, fitter, more suitable, to avoid harm. (69)

Connecting this to education, I think of the various ways that students with disabilities are made to adjust to the spikes in our post-secondary educational institutions; it is often their responsibility to get a diagnosis, advocate for themselves, sign up with the disability support centre, figure out what to do if faculty don't provide the accommodation requested, and more. The burden of work is put onto them to find a way to continue to learn in a system that was not designed for them. Their foam suits are expensive and exhausting, and not within reach of all students who might use them, and who therefore either never go to university or drop out. How can we imagine and create instead post-secondary environments that have fewer spikes to begin with (or, dreaming big, no spikes at all)?

A quote from chapter 4 that also struck me is relevant to this question. Benjamin talks about a few organizations dedicated to imagining justice, more just and equitable communities, cities, and institutions, and notes in relation to some of them:

This is part of a radical tradition of people who have no interest in being “included” inside a burning house. Instead, they are sounding the alarm about the treacherous blaze while, at the same time, laying down the bricks for more habitable social structures. (87)

By asking students with disabilities to find ways to be included in existing structures, we're putting the burden on them rather than changing the structures. This is not to criticize disability support centres, which often do excellent work and provide a lifeline to many, many students. Such lifelines may always be needed, as it would be very difficult for any institution to serve every individual. But perhaps we can adjust the environment so that the house is more welcoming to more people and fewer foam pads, or lifelines, are needed.

Another quote that struck me from chapter 4 comes from Robin D. G. Kelley:

Without new visions we don’t know what to build, only what to knock down. We not only end up confused, rudderless, and cynical, but we forget that making a revolution is not a series of clever maneuvers and tactics but a process that can and must transform us. (86)

It is not just the house that needs transforming, but we who are changing it, who are building something better, need to be changed in the process or we will likely continue to build rickety structures.

See also part 3 of this series.