
AI and relationships: Indigenous Protocol and AI paper

I’ve been thinking a lot lately about generative AI and relationships. Not just in terms of how people might use platforms to create AI companions for themselves, though that is part of it. I’ve been thinking more broadly about how development and use of generative AI connects with our relationships with other people, with other living things and the environment, and with ourselves. I’ve also been thinking about our relationships as individuals with generative AI tools themselves; for example, how my interactions with them may change me and how what I do may change the tools, directly or indirectly.

For example, the following kinds of questions have been on my mind:

  • Relationships with other people: How do interactions with AI directly or indirectly benefit or harm others? What impacts do various uses of AI have on both individuals and communities?
  • Relationships with oneself: How do interactions with AI change me? How do my uses of it fit with my values?
  • Relationships with the environment: How do development and use of AI affect the natural world and the relationships that individuals and communities have with living and non-living entities?
  • Relationships with AI systems themselves: How might individuals or communities change AI systems and how are they changed by them?
  • Relationships with AI developers: What kinds of relationships might one have, or already be developing, with the organizations that create AI platforms?

More broadly: What is actually happening in the space between human and AI? What is this conjunction/collaboration? What are we creating through this interaction?

These are pretty large questions. In this and some future blog posts I’m going to focus on some texts I’ve read recently that have guided my interest in thinking further about AI and relationships; then later I will hopefully have a few clearer ideas to share.

Indigenous Protocol and AI position paper

My interest in this topic was first sparked by reading a position paper on Indigenous Protocol and Artificial Intelligence (2020), produced by participants in the Indigenous Protocol and Artificial Intelligence Working Group, who took part in two workshops in 2019. This work is a collection of papers, many of which were written by workshop participants. I found it incredibly thought-provoking and important, and I am only going to barely touch on small portions of it. For the purposes of this post, I want to discuss a few points about AI and relationships from the position paper.

In the Introduction to this work, the authors explain that “a central proposition of the Indigenous Protocol and AI workshops is that we should critically examine our relationship with AI. In particular, we posed the question of whether AI systems should be given a place in our existing circle of relationships, and, if so, how we might go about bringing it into the circle” (7). For example, the Introduction notes that one of the themes discussed in the workshops in response to this broad question was what it might be like to have a situation in which “AI and humans are in reciprocal relations of care and support” (10).

The authors also emphasize that Indigenous protocols of kinship can help conceptualize the idea of how we may relate to AI systems. For example, “Such protocols would reinforce the notion that, while the developers might assume they are building a product or a tool, they are actually building a relationship to which they should attend” (8).

These protocols differ amongst Indigenous communities, as emphasized in some of the Guidelines for Indigenous-Centred AI Design that are included in the position paper. These guidelines include a discussion of relationality and reciprocity that emphasizes a focus on particular community protocols:

  • AI systems should be designed to understand how humans and non-humans are related to and interdependent on each other. Understanding, supporting and encoding these relationships is a primary design goal.
  • AI systems are also part of the circle of relationships. Their place and status in that circle will depend on specific communities and their protocols for understanding, acknowledging and incorporating new entities into that circle. (21)

The guidelines also cover other topics, including:

  • Locality: “AI systems should be designed in partnership with specific Indigenous communities to ensure the systems are capable of responding to and helping care for that community (e.g., grounded in the local) as well as connecting to global contexts (e.g. connected to the universal).”
  • Responsibility and accountability: “AI systems developed by, with, or for Indigenous communities should be responsible to those communities, provide relevant support, and be accountable to those communities first and foremost.”
  • Indigenous data sovereignty: “Indigenous communities must control how their data is solicited, collected, analysed and operationalized.”

Some of the individual papers within this larger collection help flesh out further some possible human relationships with AI, each other, communities, and the environment. In “The IP AI Workshops as Future Imaginary,” Jason Lewis talks about how participants in the workshops focused on their own community protocols in considering what relationships with AI could be like. For example:

Anishinaabe participants talked about how oskabewis, helpers whose generous and engaged and not-invisible support for those participating in ceremony, could model how we might want AI systems to support us—and the obligations that we, in turn, would owe them. (41)

In addition, Hawaiian participants talked about how protocols of crafting a fishing net “including the layer upon layer of permission and appreciation and reciprocity” could potentially be reflected in how AI systems are built (41).

In “Gifts of Dentalium and Fire: Entwining Trust and Care with AI,” Ashley Cordes talks about engaging with AI with trust and care from the perspective of the Coquille Nation on the coast of Oregon, USA. Cordes discusses several ways in which AI and other technologies could be used to support Indigenous communities, and also notes that “trust and care is a two-way street; they must also be expressed towards AI” (66). For example, AI systems need “clean and nourishing food (a data diet), security, comfort in temperature, and capacity for fulfillment” (66). A good data diet means ensuring the data the systems have are adequate to the task and will reduce biased outputs, excluding extraneous data that are not needed, and sourcing the data ethically; security means, in part, protecting systems from breaches. In developing AI systems, it’s important to care for the needs of those systems as well as ensuring they are being used to care for people, communities, other living beings, and the environment.

Another paper in the collection also talks about different aspects of our relationships with AI and with each other: “How to Build Anything Ethically,” by Suzanne Kite in discussion with Corey Stover, Melita Stover Janis, and Scott Benesiinaabandan. This paper focuses on teachings about stones from a Lakota perspective, which the authors use to invite the reader to “consider at which point one affords respect to materials or objects or nonhumans outside of oneself” (75). The authors also provide a side-by-side discussion of how to build a sweat lodge in a good way, and how to build an AI system in a good way, according to Lakota teachings. As just one small part of this, one needs to identify and consider the many living and non-living entities involved:

  • the communities of the location where raw materials originate
  • the raw materials themselves
  • the environment around them
  • the communities affected by transportation and devices built for transportation
  • the communities with the knowledge to build these objects
  • the communities who build the objects
  • the communities who will use and be affected by their use
  • the creators of the objects (77)

Then, in terms of extracting and refining raw materials, consideration needs to be given to reciprocity: reciprocity to the individuals and communities for their labour and for the effects on their lands, and to other living creatures and the Earth for effects on the environment, in part by restoring it to health. And care must be taken at the end of a computing device’s lifecycle as well: “A physical computing device, created in a Good Way, must be designed for the Right to Repair, as well as to recycle, transform, and reuse. The creators of any object are responsible for the effects of its creation, use, and its afterlife, caring for this physical computing device in life and in death” (81).

Here too there is an emphasis on relationships with each other and the natural world in terms of working with technology, including AI, and also on relationships with technological entities themselves and how we take care of their generation and their end.

Reflection

The emphasis on relationships that is found in various ways in this collection is one I haven’t seen a lot of in other writings about AI: specifically, the relationships people form with AI, as well as those we form with each other and with other entities (living or otherwise) around AI development and use. A number of folks (including me) talk about related topics, such as ethical considerations and how AI use can perpetuate harm or, on the other hand, provide benefits to some folks that can support equity considerations. These do involve our relationships with AI and with each other, but I haven’t really heard them discussed this clearly in terms of relationships. I particularly haven’t heard a lot of folks talking about our relationships with AI entities themselves and our responsibilities towards them, which I find very interesting and thought-provoking.

There are multiple ways that relationships involving AI, each other, and the world around us are reflected in this collection. The following are but a few:

  • Relationships with Indigenous communities: As noted in the Guidelines quoted above, AI developed with or for Indigenous communities should be responsible and accountable to those communities, should support care for those communities, and should respect Indigenous knowledges and data sovereignty.
  • Relationships with other humans in developing and using AI, including those who are involved in extracting raw materials, building hardware and software, those who use the tools, those who are impacted by the tools, and more
  • Relationships with the natural world, including environmental impacts of developing and using AI systems
  • Relationships with AI systems, including how they may support our needs and what responsibilities we may have towards them

 

This collection is much richer and more complex than I can do justice to in a relatively short blog post. It includes stories, poetry, visual art, and descriptions of AI prototypes, among other contributions. The above just barely scratches the surface; careful reading brings out so much more. Here my purpose has been to focus on a few points about AI and relationships that stood out to me on a first and second read, partly to have notes to remind myself, and partly to encourage others to engage with this work!

Draft idea for an AI personal ethical decision framework

I recently wrote two blog posts on possible ways that generative AI might be able to support student learning in philosophy courses (part 1, part 2). But through doing so, and also through a thought-provoking comment by Alan Levine on my earlier blog post reflecting on a presentation by Dave Cormier on focusing on values in situations of uncertainty, I’m now starting to think more carefully about my use of AI and how it intersects with my values.

Alan Levine noted in his comment that sometimes people talking about generative AI start by acknowledging problems with it, and then “jump in full speed” to talking about its capabilities and possible benefits while no longer engaging with the original issues. This really struck me, because it’s something I could easily see myself doing too.

I started reflecting a lot on various problems with generative AI tools, as well as potential benefits I can imagine, and on how all of these intersect with my values, to try to make more conscious ethical decisions about whether or not to use generative AI in various situations. One could make philosophical arguments about what should be done “in general,” but even then each individual needs to weigh various considerations and their own values, and make their own decisions as to what they want to do.

I decided, then, to try to come up with a framework of some kind to support folks making those decisions. This is an early brainstorm; it will likely be refined over time and I welcome feedback! It is something that would take time, effort, and fairly deep reflection to go through, and it may go too far in that direction, especially since I can imagine something like this being used in a workshop (or series of workshops) or a course, and those have time limits. (Of course, there is no requirement that people work through something like this in a limited time period; they could always go through it on their own later. It’s just that I know myself, and I often intend to return to things like this and, well, just get busy.) This is one aspect that needs more work.

The general idea is to go through possible benefits and problems with using generative AI tools, connect these to one’s values, and then brainstorm: whether one will use generative AI in a particular context, and if so, how one might address the problems and further support possible benefits.

I think it would be helpful to start with a set of possible uses in one’s particular context and arrange the rest from there, because a number of the possible benefits and problems can differ according to particular use cases. But there are some problems that are more general–e.g., issues with how generative AI tools are developed, trained, and maintained on the “back end,” as it were, which would apply to any downstream uses (such as energy usage, harm to data workers, violations of Indigenous data sovereignty in training, etc.). So I think some of the problems, at least, could be considered regardless of particular context of use.

First draft of framework

Without further ado, here is the very drafty first draft of the kind of thing I’m thinking about. At this point it’s just structured as a worksheet that starts off with brainstorming some possible uses of generative AI in one’s own work (e.g., teaching, learning, research, coding, data analysis, communications, and more). Then folks can pick one or two of those to focus on. The rest is a set of tables to fill out about potential benefits and problems with using generative AI in this way, and then a final one where folks make at least a provisional decision and then brainstorm one or two next steps.

Brainstorm possible uses

Think of a few possible uses of generative AI in your own work or study that you’d like to explore further, or ones you’re already engaged in. Take __ minutes to write down a list. [Providing a few example lists for folks could be helpful]

Then choose 2-3 of these to investigate further in the following steps.

Benefits and problems

Regarding problems with using AI, as noted above, some problems can apply regardless of the particular use case, and I think it’s important for folks to grapple with those even though they may be more challenging for individuals to address. Some background and resources on these would be useful to discuss in a facilitated session, ideally with some pre-reading. A number of the issues are fairly complex and would benefit from time to learn and discuss, so one can’t go through all of them in a limited time period.

The same goes for possible benefits: it would be useful to list a few areas in which generative AI use could have benefits, such as supporting student learning, doing repetitive tasks so people have more time for complex or more interesting work, or supporting accessibility in some cases. These will necessarily be high level, and participants would then brainstorm benefits that are more specific to their use cases.

One could ask folks to brainstorm a few problems and benefits for generative AI in their use cases, including one of the more general problems as well as at least one that is specific to their use case.

Problem or Benefit | Evidence | Impacts | Further info | My view | Value
E.g., climate impacts in both training and use | This could be links | Who is harmed? Who benefits? | What other info would be helpful? | One’s view on the topic at the moment | Related value(s) one holds

This is not very nice looking in a blog post but hopefully you get the idea.

Decisions

Then participants could be encouraged to try to make an initial decision on use of GenAI in a particular use case, even if that might change later.

Use case | Use GenAI? Why? | If yes, how? | Next steps
E.g., feedback on student work | Your choice, and why/why not | How to do so, including how you will address benefits and problems | What one or two next steps will you take? This can include how you would go about getting more information you need to decide.

 

Reflections

The idea here is not necessarily to have people try to weigh the benefits against the problems–that is too complicated and would require that one go through all possible benefits and problems one can think of. Instead, the point is to start to engage in deeper ethical reflection on a particular use case and try to come to some preliminary decision afterwards, even if that decision may change with further information.

One place where I think folks may get hung up is on feeling like they need more information to make decisions. That is completely understandable, and in a limited time frame participants wouldn’t be able to go do a bunch of research on their own. But the framework at least may be able to bring to the surface that ethical issues are complex, and one needs to spend time with them, including finding out more information where one doesn’t have it yet, or has only one or two sources and needs more. That’s why I put the “further info” column into the first table example. It’s also why under “my view” I suggested this be one’s view at this time, recognizing that things may change as one investigates further. And one of the next steps could be to investigate some of these things further.

Of course, one reasonable response to this exercise is to decide that some of the general problems are bad enough that one feels one shouldn’t use generative AI tools at all. I mean for this kind of exercise to leave that option open.

The more I think about this, the more I think it would probably be better to do something like this in at least two steps: one where ethical issues and benefits are discussed to the degree feasible in a certain time frame, and a second where folks go through their own use cases with the tables as noted above. Otherwise it’s likely to be too rushed.

 

This is a rough sketch of an idea at the moment that I will likely refine. I feel like something along these lines could be useful, even if this isn’t quite it. So I’m happy for feedback!

AI & philosophical activity in courses, part 2

Introduction

This is part 2 of my discussion of ways to possibly use AI tools to support philosophical activities in courses. In my part 1 blog post I talked about using AI to support learning about asking philosophical questions, analyzing arguments, and engaging in philosophical discussion. In this post I focus on AI and writing philosophy.

Caveats:

There are a lot of resources out there on AI and writing, and I’m purposefully focusing largely on my own thoughts at the moment, though likely many of those have been influenced by the many things I’ve read so far. I may include a few links here and there, and use other blog posts to review and talk about some ideas from others on AI and writing that may be relevant for philosophy.

In this post I’m not going to focus on trying to generate AI-proof writing assignments, or on ways to detect AI writing; I think both are very challenging and likely to change quickly over time. My focus is on whether AI may be helpful for learning in terms of writing, not so much on AI and academic integrity (though that is also very important!).

Note that by engaging in these reflections I’m not saying that use of generative AI in courses is by any means non-problematic. There are numerous concerns to take into account, some of which are noted in a newly released set of guidelines on the use of generative AI for teaching and learning that I worked on with numerous other folks at our institution. The point here is just to focus on whether there might be at least some ways in which AI might support students in doing philosophical work in courses; I may not necessarily adopt any of these, and even if I do there will be numerous other things to consider.

I’m also not saying that writing assignments are the only or best way to do philosophy; it’s just that writing is something that characterizes much of philosophical work. It is of course important to question whether this should be the case, and consider alternative activities that can still show philosophical thinking, and I have done that in some courses in the past. But all of this would take us down a different path than the point of this particular blog post.

Finally I want to note that these are initial thoughts from me, not settled conclusions. I may and likely will change my mind later as I learn and think more. Also, a number of sections below are pretty sketchy ideas, but that’s because this is just meant as a brainstorm.

To begin:

Before asking whether/how AI might support student learning in terms of writing philosophy, I want to interrogate for myself why I ask students to write in my philosophy courses, particularly in first-year courses. After all, in my introductory level course, few students are going to go on to write specifically in philosophy contexts; some will go on to other philosophy courses, but many will not, and even fewer will go on to grad school or to do professional philosophy.


AI & philosophical activity in courses part 1

I was reading through some resources on the Educause AI … Friend or Foe showcase, specifically the one on AI and inclusive excellence in higher education, and one thing in particular struck me. The resource talks, among other things, about helping students to understand the ways of thinking, speaking, and acting in a particular discipline, about making those clearer, and about whether AI might support this in some way.

This resonates with some ideas that have been bouncing around in my head the past few weeks on whether/how AI might help or hinder some of the activities I ask students to do in my courses, which led me to think about why I even ask them to do those activities in the first place. Thinking about this from a disciplinary perspective might help: what kinds of activities might be philosophical? And I don’t mean just those that professional philosophers engage in, because few students in my courses will go on to be professional philosophers, but I believe all of them will do some kind of philosophical thinking, questioning, and discussing at some point in their lives.

So what might it mean to engage in philosophical activities and can AI help students engage in these better in some way, or not? This is part one of me thinking through this question; there will be at least a part two soon, because I have enough thoughts that I don’t want to write a book-length blog post…

Asking philosophical questions

This is something all philosophers do in one way or another, and that I think can be helpful for many people in various contexts. And yet I find it challenging to define what a philosophical question is, even though I do it all the time. I don’t teach this directly, but I should probably be more conscious about it because I do think it would be helpful for students to be able to engage in this activity more after the class ends.

This reminds me of a post I also read today, this time by Ryan J. Johnson on the American Philosophical Association blog called “How I Got to Questions.” Johnson describes a question-focused pedagogy, in which students spend a lot of their time and effort in a philosophy course formulating and revising questions, only answering them in an assignment towards the end. Part of the point is to help students to better understand over time what makes a question philosophical through such activities.

Johnson credits Stephen Bloch-Schulman in part, from whom I first heard about this approach, and who writes about question-focused pedagogy in another post on the APA blog. Bloch-Schulman did a study showing that philosophy faculty used questions more often, and in different ways, than undergraduates and other faculty when reading the same text. I appreciated this point (among others!):

I believe that much of the most important desiderata of inclusive pedagogy is to make visible, for students, these same skills we hide from ourselves as experts, to make the acquisition of these skills as accessible as possible, particularly for those students who are least likely to pick up those skills without that work on our part. Question-skills being high on that list. (Introducing the Question-Focused Pedagogy Series)

One step for me in doing this more in my teaching would be to do more research and reflecting myself on what makes some questions more philosophical than others (Erica Stonestreet’s post called “Where Questions Come From” is one helpful resource, for example).

AI and learning/practicing philosophical questions

But this post is also focused on AI: might AI be used in a way to help support students to learn how to ask philosophical questions?


Blogging on blogging again: more meta!

Screen shot of the title of this blog, You're the Teacher, set against an image of misty mountains with a tree in the foreground.

Metapic

I’m joining the DS106 Radio Summer Camp this week, and Jim Groom put out an invitation to all of us to join in a session today about blogging called “Blog or Die!” Why does blogging rule all media, as Jim asked? I thought I’d blog a few notes about blogging as prep for joining this session.

I seem incapable of writing blog posts under 2000 words, but for this one I’m really gonna try!

Benefits of blogging myself

I started blogging in 2006, after learning about WordPress and blogs from the amazing Brian Lamb (who was at the University of British Columbia at the time, but who is now doing fantastic work over at Thompson Rivers University). Funny enough, one of my first posts was called “Why blog?”. Coming around to the same theme I guess!

In reading over that post I find it still resonates with me eighteen years later. These were the benefits of blogging I wrote about back then:

  • Reflecting on teaching and learning so as to improve
  • Sharing back with others, since I have learned so much from those who have shared their reflections
  • Connecting with a community
  • Thinking things out for oneself and being able to find those reflections fairly quickly later


Values-based tugging

Okay, so the title of this post may seem a little strange but bear with me. Yesterday I listened to a fantastic session by Dave Cormier for the DS106 Radio Summer Camp this week, called “A year of uncertainty – fighting the fight against the RAND corporation.” I wasn’t entirely sure what to expect, as I hadn’t managed to find the abstract/description of this session until after it was over (click on the session link on the schedule for the summer camp), but I knew Dave is amazing, so of course I had to listen! And it was very thought-provoking as I figured it would be.

Problem solving and uncertainty

One of the main points Dave talked about was how many aspects of our social, political, educational, and other lives are focused on problem-solving: on addressing well-defined problems that can have well-defined answers that we just need to work hard to find. This is not necessarily a problem in itself, Dave noted, as such problems do exist and there can be very useful methods for working to address them. The issue is when we focus on those so much that we ignore the less easily defined problems, the messier issues, the more uncertain situations where a single right answer is not going to be forthcoming no matter what kinds of problem-solving methodologies we throw at them.

Dave mentioned medical students coming out of their education into practice who, when confronted with complex, uncertain, grey areas where a medical solution isn’t immediately forthcoming, tended to blame themselves, as if it were their failure for not finding an answer where none was to be found. He also noted how, at least in English, it is common to feel like you have to answer when someone asks a question like “what is your view of X?” or “is Y right or wrong?”, even if you aren’t sure or there isn’t a clear-cut answer. It’s just part of the accepted norms of speaking that you should have an answer.

Both of these resonated with me, and perhaps especially the second; I have sometimes been asked, in various contexts, to provide my view on something that is of a more uncertain nature, or to say if I think it’s right, or to say what I think the future will bring, and I do feel pressured to respond. But maybe because of my background in philosophy I’m actually pretty comfortable with saying that I am not sure, or I’d need to look into it more, because such situations really do require more thought, research, reflection before coming to a conclusion.

There is the danger of jumping in too quickly with an answer, but there is also a danger in spending too much time in the thinking and reflection and not moving past that towards making some kind of decision or other. And sometimes I get stuck in that latter step when faced with really complex issues–there is so much to consider and so much value in multiple perspectives that it can be hard to “land” somewhere, as it were. It’s tempting to remain up in the air while not being sure of which alternatives are best (because there are no easy answers).

Landing on values and pulling from there

I really appreciated where Dave landed in his presentation: rather than only feeling stuck, suspended, we can consult our values and make a move based on those; we can tug the rope in a tug of war in the direction of our values and work to move things from there. The focus on values is key here: ask yourself what your values are as they relate to this situation, and make decisions and act based on those, knowing that’s enough in uncertain situations. Which doesn’t mean, of course, that you can’t revisit your values and how they apply to the situation if either of those things changes, but that it’s a landing place and it’s solid enough for the moment. He talked about how we can have conversations with students and others about why we would do something in a particular situation, rather than what the right answer is, focusing on the values that are moving us.

To do so requires that we are clear about what our values are, which is in some cases more easily said than done. This is something near and dear to my heart as a philosopher, as trying to distill what underlies our views and our decisions, what kinds of reasons and values, is part of our bread and butter. But when I reflect on how I’ve taught over the years, I’m not sure I’ve focused as much as I could have on helping students be clear about their values, instead focusing quite a bit on the “content” of the course. The latter has been in the service of helping students understand that when we make ethical choices there are (or should be) reasons behind those, and some options as to what kinds of reasons those could be. I, like many other philosophers, have then also asked students to provide their own arguments related to various ethical and other philosophical questions, which does at times mean providing reasons based on values. But how much time have I really spent supporting students to define and articulate their own values, in addition to applying them through writing arguments? I’m not sure, and this session was really generative for me in thinking about that (as well as being generative in multiple other ways!).

A couple of years ago I wrote a blog post as part of MYFest 2022, talking about how I had a hard time imagining a more just future for education because I kept focusing on all of the structural complexities involved in educational systems, and how changing one thing would require changing many more interconnected aspects, and … it all felt pretty overwhelming. The metaphor I used was of rocks and boulders, which came to me as I was passing multiple rock formations on a walk. Some piles of rocks are fairly easy to move; others are locked into network-like shapes where to move one would require moving all the others, and they are after all very heavy. If I think in these terms then of course it’s hard to imagine change. Things are literally set in stone!


But what if we thought about complex issues and structures more like flexible webs? (This is an image that reminds me of other work of Dave Cormier’s, such as that on rhizomatic learning.) If you tug on one part, it can still move and the other parts will move as well (or break, I suppose, which in some cases may not be a bad thing).

This feels more hopeful to me–it still respects the interconnectedness of structures but also notes there can be some movement, some wiggle room. Perhaps the spider web is too flexible to respect the challenges of moving some of the more entrenched structures, though. Even though spider silk is incredibly strong, it seems a bit too easy to just sweep away with the swoosh of one’s hand.

How about a net:

This feels stronger: like a spider web it is meant to catch and hold things tight, but it can still be moved, shaped, morphed, or even broken. I like the image above because a piece of the net is fraying, noting its fragility amidst the otherwise tight knots.

A line that Dave ended on will stick with me: “Ask yourself what you care about, and then do what you can.” That feels empowering.

Applying to AI

One of the things that feels uncertain to me in this moment is where things are going with AI, what the future holds, and what the best approaches are to using AI (or not!) in education. How might those of us who are educators address the question of whether and/or how to adopt AI in our courses, in our teaching practices, to encourage our students to use it, etc.? Of course, all of this is going to differ according to context, discipline, teaching and learning goals, and more. But I think Dave’s session provides a fruitful way to approach this question. This is a complicated and uncertain situation but what we can do is consult our values: what do we value, what do we care about, what do we want to promote and avoid?

This may seem fairly elementary in a way–might we already frequently act from our values? Maybe, but there are also times when I know I have done things in teaching because they just seemed like the usual thing to do, what I had experienced, that they just seemed right and “normal,” but when I took a step back to think about my values and what I care about then things changed. For example, I used to get upset when people would leave during the middle of class, until I reflected on how I care about supporting students to learn in the ways that are helpful for them, coupled with learning about how some students need to move around, or to take breaks from stimulation, or need to leave for other reasons. It’s still not easy, especially in small courses, but I’m focusing less on how I feel in that situation and more on how being able to take a break may be helpful for some students more than sitting in one place for 50-80 minutes.

It’s perhaps that previous point about taking some time to reflect on one’s values, what is important, what one cares about, and then applying that to one’s teaching practice. At one point last year I took the time to write out my values in terms of leadership, and that was immensely helpful in focusing my attention on areas I wanted to work on. I may have acted on some of those automatically, but bringing them to the surface helped me not only see what was grounding some of my actions, but also where I professed values that my actions could do a better job of supporting.

Now, this process isn’t going to lead to easy answers (there are none for the kinds of issues Dave was talking about), and our values may lead to conflicting viewpoints. For example, I care about allowing students to use technology that will support their learning, and I think that generative AI may be helpful for student learning in some cases–I’ve been looking into how it can support students with various disabilities, for example, and should blog about that later. Then there is the value of equity and how not all students have equal access to generative AI tools, so some may get supports that others don’t. But digging into what one values can help clarify why to go one direction or another, putting one on more solid footing while starting to tug, even if one isn’t entirely sure that is the best direction. It is the best one for this moment while recognizing the complexity that makes it a difficult, but at least grounded, choice.

And if we go with the net metaphor, then tugging in one place can pull other threads, moving things in a local area to start, and maybe in larger areas over time. Particularly if more people are tugging in similar directions (organized action, e.g.). One person can make a difference, but it is more likely that many, working together, can make a larger difference. And we may fray that net to the point of finding ways to morph or break some of the confining structures we find ourselves in.

All of this is also bringing to mind the idea of “entangled pedagogy” from Tim Fawns, which I wrote a blog post about in 2022. Rather than reviewing that blog post, I’ll just say that he has an aspirational view of the relationship between technology and pedagogy in which we focus on purposes, values, and contexts in an entangled relationship with technology and pedagogy. Rather than trying to emphasize pedagogy over technology or vice versa, or even how they are connected to each other, we focus instead on the purposes and the values we have in teaching and learning, and the specificity of our contexts, and how those can shape our choices in both pedagogy and technology (and how they intertwine).

In a quote that resonates with some of what Dave said and what I’ve written here, Tim notes:

Attending to values, purposes and context can help us identify problematic assumptions, such as those embedded in simple solutions to complex problems, reductive characterisations of students (e.g. as ‘digital natives’, see Oliver 2011), or assertions that teachers should conform to modern digital culture and practices (Clegg 2011).

Conclusion

I really appreciated the opportunity to participate in this session with Dave. There was a lot more than what I’ve been able to talk about above, so I highly suggest you listen to the recording when it’s posted on the DS106 Radio Summer Camp recordings page (and check out other fantastic sessions while you’re at it of course!). Big thank you to Dave for a thought-provoking session!

Imagination by Benjamin, Part 3

stacks of books in boxes with a sign above them that says "Libros Libres" (free books).

Libros Libres by Alan Levine on Flickr, licensed CC0.

 

In the last two posts I have been talking about Ruha Benjamin’s book Imagination: A Manifesto, which I’m reading as part of a book club for MYFest 2024. Here is part 1 on chapters 1 and 2, and here is part 2 on chapters 3 and 4. In this last post I’ll discuss chapters 5 and 6.

Among other things, chapter 5 focuses on how art and stories are crucial for changing imaginations and providing new visions. Benjamin quotes Angela Y. Davis:

… if we believe that revolutions are possible, then we have to be able to imagine different modes of being, different ways of existing in society, different social relations. In this sense art is crucial. Art is at the forefront of social change. Art often allows us to grasp what we cannot yet understand. (98-99)

For example, Benjamin points to artistic and imaginative dreams of what border areas between nations could be, rather than policed, surveilled, violent structures. One idea is a “binational library on the Mexico-US border” that would make the border “nothing more than a bookshelf allowing for ‘transnational exchanges of books, ideas, and knowledge’” (quoting Ronald Rael; 95). In a twist, Benjamin points out that such a library existed between the US and Canada in the early 20th century; if that does not seem so far-fetched but a library on the Mexico-US border does, what does that tell us about the differences? About ourselves? Instead of the harsh break between ourselves and others, us and them, Benjamin states, “We must populate our imaginations with images and stories of our shared humanity, of our interconnectedness, of our solidarity as people. A poetics of welcome, not walls” (102).

Chapter 5 includes multiple examples of organizations dedicated to imagining the future, telling new stories about interconnection, collaboration, and interdependence, and working towards implementing them. Benjamin also dedicates space to discussing Afrofuturism and Indigenous futurism, as imaginations that counter a prevailing trend in which “Indigenous and racialized peoples, who know all too well what it means to live in a dystopian present, get suspended in time, never imagined among those peopling the future” (112).

I had heard these terms before but Benjamin’s short discussion helped me grasp them better. I have experienced that it is fairly common to think about Indigenous communities and cultures in terms of their past traditions, whereas Indigenous futurism, according to Grace Dillon, folds the past into the present, “which is folded into the future–a philosophical wormhole that renders the very definitions of time and space fluid in the imagination” (113). Benjamin points to the Initiative for Indigenous Futures at Concordia University here in Canada that, among other activities, teaches Indigenous youth how to “adapt stories from their community into experimental digital media,” to “envision themselves in the future while drawing from their heritage” (114). This is another form of breaking down walls, those between times as well as spaces, weaving the past into the present into the future and back.

My own visions of the future will of course draw from my past and present experiences, which will be limited (as any individual’s would be) and based in my privileged position. One thing I’m taking from these chapters is that imagining new worlds should be a collaborative activity with people involved who bring many different epistemologies, experiences, and identities.

This is a good segue into chapter 6, which is a short, practical chapter that provides sample activities to expand and strengthen imaginations. While Benjamin notes these can be done through individual reflections, she encourages readers to engage in these activities with others, through collective imagination: “Like mushrooms, the kind of imagination that can potentially transform toxic environments into habitable ones relies on a vast network of underground connections–with people, organizations, and histories” (p. 122). The appendix includes discussion- and activity-based prompts for individuals or groups that are short, but no less inviting and open-ended.

I am very grateful for the opportunity in MYFest to join a reading circle about this book, and not only reflect on imagination through reading the book, but also practice it in our group meetings. One activity I found particularly engaging (among many!) during the group meetings I was able to attend was a collaborative story-building activity. We started with a scenario, and then one person would have to brainstorm a challenge or obstacle, and another person would come up with an idea for how to address that, and then there would be another challenge, and so on, until we brought the story to a conclusion. These were short exercises, just a few back-and-forths of a challenge plus addressing that, but it was incredibly powerful to have the chance to imagine both how things can go well and also the reality that there will be complexity and obstacles, and then be nudged to really think hard about how to address those. This was a very hopeful exercise, in that we didn’t get stuck with the obstacles, but moved through and beyond them to something new.

There have been a few sessions in MYFest on imagination and speculative futures, including one that happened today on Imagination as a Liberatory Practice with Jasmine Roberts-Crews. During this session participants were encouraged to reflect on their practices of dreaming and play, and if those were challenging, to then reflect on why and what the obstacles are. I found myself thinking that I don’t have much of a dreaming practice, partly because I’ve been so influenced by the idea that such things aren’t “productive,” and it’s better to spend one’s time doing work that is more traditionally considered so. Of course, if I were someone involved in more creative pursuits I might feel differently!

Through reading Benjamin’s book and discussing it in the reading circle meetings, as well as attending Jasmine’s session (and others noted below), I’m realizing the deep importance of dreaming, daydreaming, imagination, and play to ideating and working towards necessary social change. Otherwise it’s too easy to get caught in how things are, adhering to existing practices and values and their notion of what counts as important and “productive.”

There have been several sessions so far (and more to come!) on imagination and speculative ideation, and I haven’t been able to attend them all. Thankfully, many have been recorded; you can find recordings of MYFest sessions on the MYFest website. For example, I attended the first session by Dani Dilkes on Speculative Futures for Education, and am looking forward to reviewing the recording of the second (Session 1 and Session 2). Though I will not be in town and so won’t be able to attend, there is one coming up July 31 on Future Dreaming Part 2: Storytelling for Liberatory Education. The Wisdom of Fools Storytelling Activities and Readers Theatre are also wonderful examples of tending to our imaginations.

I am so grateful for the MYFest facilitators and the community generally for many things we’re learning together, including focusing on the importance of imagination, dreaming, and play in our lives and in our work as educators!

Imagination by Benjamin, Part 2

A wooden bench in a park, with arm rests placed along the length of it so that you can't lie down.

Image by Nhung Tran from Pixabay

In the last post, I wrote down notes on chapters 1 and 2 in the book Imagination: A Manifesto by Ruha Benjamin. In this post I continue with chapters 3 and 4.

In these chapters, Benjamin talks about multiple imaginaries that support and perpetuate inequalities, and ways to imagine otherwise. Chapter 3 begins with a point that particularly struck me: in discussing how some who are working on virtual reality (VR) technologies note that they can help people to experience better living spaces that they don’t have in reality, Benjamin asks:

But how about a reality where everyone has sufficient resources? Instead of imagining a world where gross extremes between the wealthy and poor are ended, a growing industry fueled by the imagination of the uber-rich is working overtime to create virtual escapes from inequality and sell us on their dreams. (47)

While escape can be very important sometimes, I found her point compelling: the effort and expense going into some applications of VR could instead be spent on imagining a world where VR isn’t needed to compensate for vast differences in wealth.

And yet, VR and AR (augmented reality) can be very important as learning tools too; I can think of some applications at my institution that help students better learn about the brain, that support nurse practitioners to practice their skills with virtual patients, and that provide opportunities for students to practice new language skills with a virtual avatar (to name a few).

Benjamin herself points to a valuable use of augmented reality in Chapter 4, where she discusses an app called Breonna’s Garden, dedicated to celebrating the life of Breonna Taylor. So it’s not the technology that’s the problem, it’s the story, the imagination, that the technology is used to support.

Chapter 3 focuses on a “eugenics imagination,” in which “some lives are deemed desirable and others disposable” (49). Benjamin discusses some topics that many may more easily associate with eugenics, such as forced sterilization programs in women’s prisons, but also prisons themselves as “eugenic institution[s], snatching up and discarding those society deems human detritus” (58). She also points to structures that make play either difficult or downright dangerous for Black and Brown children as supporting an imaginary where some people’s lives are more disposable than others’. This is due to issues such as “under-resourced or understaffed daycare centers,” design decisions in cities and in marginalized neighbourhoods that “make it hard to play freely,” and police “disrupt[ing] Black leisure with stop-and-frisk, targeted harassment, and violence” such as being killed while at play (e.g., Tamir Rice and Raymond Chaluisant) (61-62).

Benjamin returns to design at the end of chapter 3, with another discussion that really stood out to me. She starts out talking about how park benches are often made to keep people from lying down and sleeping, including a spiked bench where the spikes would only retract if you paid money (67). She then points to a creative project called Archisuits by Sarah Ross: suits with foam appendages that allow one to get around such architectural designs, for example by putting the foam between the arm rests on benches so that you can lie down on top of it. This really struck me:

The bench is a great metaphor for the spikes built into our institutions, while the foam-lined suit epitomizes how individuals are made responsible for being smarter, fitter, more suitable, to avoid harm. (69)

Connecting this to education, I think of the various ways that students with disabilities are made to adjust to the spikes in our post-secondary educational institutions; it is often their responsibility to get a diagnosis, advocate for themselves, sign up with the disability support centre, figure out what to do if faculty don’t provide the accommodation requested, and more. The burden of work is put onto them to find a way to continue to learn in a system that was not designed for them. Their foam suits are expensive and exhausting, and not within reach of all students who might use them, some of whom therefore either never go to university or drop out. How can we imagine and create instead post-secondary environments that have fewer spikes to begin with (or, dreaming big, no spikes at all)?

A quote from chapter 4 that also struck me is relevant to this question. Benjamin talks about a few organizations dedicated to imagining justice, more just and equitable communities, cities, and institutions, and notes in relation to some of them:

This is part of a radical tradition of people who have no interest in being “included” inside a burning house. Instead, they are sounding the alarm about the treacherous blaze while, at the same time, laying down the bricks for more habitable social structures. (87)

By asking students with disabilities to find ways to be included in existing structures, we’re putting the burden on them rather than changing the structures. This is not to criticize disability support centres, which often do very excellent work and are providing a lifeline to many, many students. Such lifelines may always be needed, as it would be very difficult for any institution to serve every individual. But perhaps we can adjust the environment so the house is more welcoming for more people and fewer foam pads, or lifelines, are needed.

Another quote that struck me from chapter 4 comes from Robin D. G. Kelley:

Without new visions we don’t know what to build, only what to knock down. We not only end up confused, rudderless, and cynical, but we forget that making a revolution is not a series of clever maneuvers and tactics but a process that can and must transform us. (86)

It is not just the house that needs transforming, but we who are changing it, who are building something better, need to be changed in the process or we will likely continue to build rickety structures.

See also part 3 of this series.

Imagination by Benjamin, Part 1

Hot air balloons going upwards into a blue sky; the one that dominates the view has a rainbow pattern with a triangular basket underneath.

Hot air balloons in Boise, Idaho, 2018 (photo by Christina Hendricks)

As part of Mid-Year Festival 2024, I’m participating in a book circle on Ruha Benjamin’s book, Imagination: A Manifesto. I am going to add a few reflections here on the Introduction and chapters 1 and 2, in preparation for our meeting about those chapters.

I wanted to join this book circle because I have a strained relationship with imagination sometimes. In some ways I feel I have a great deal of imagination (I love drawing even though I’m not great at it, for example, and doing very short, 6-10 word stories), but in other ways I feel like I tend to just continue with things as they are because I struggle with understanding how they might change. This is especially the case with systemic issues that would require very complex work in many ways to even start to approach.

Back in MYFest 2022 I wrote a blog post about imagining higher education futures, and how much difficulty I had with that task because of the interlocking structures that would all need to change in order to make bigger changes. I felt somewhat stuck, because trying to change one thing bumped up against so many others that it was difficult to move at all. I’m hoping that reading and discussing this book will help me feel more unstuck.


Principles of ethics in Ed Tech & AI (running list)

I’m going to use this post just to note a few resources on ethical principles around educational technology that I haven’t yet discussed in the series I’ve been writing about ethics & ed tech so far. I will at some point get around to writing about these, or at least synthesizing them with others I’ve reviewed so far.

This post will be updated over time. It’s meant as a way for me to keep track of things I want to look into more carefully and/or collate with other principles. Eventually I’d like to map out common ones and pay attention to those that are not commonly included in sets of already-existing principles as well.

I also have a Zotero library about the ethics of educational technology and artificial intelligence, which I update over time as well.

Ethics in Ed Tech

Ethical Ed Tech Workshop at CUNY

Information and resources for a workshop on Ethical Approaches to Ed Tech, by Laurie Hurson and Talisa Feliciano, as part of a Teach@CUNY 2020 Summer Institute. This web page includes a handout for workshop participants that lists the following categories of questions to ask in regard to ethics & ed tech:

  • Access
  • Control
  • Data
  • Inclusion
  • Intellectual Property & Copyright
  • Privacy
  • Source

See the handout for more details!

UTS Ed Tech Ethics Report

The University of Technology Sydney went through a deliberative democracy process in 2021 to address the following question:

What principles should govern UTS use of analytics and artificial intelligence to improve teaching and learning for all, while minimising the possibility of harmful outcomes?

A report on the process and the draft principles was published in 2022. The categories of principles in that report are:

  • Accountability/Transparency
  • Bias/Fairness
  • Equity and Access
  • Safety and Security
  • Human Authority
  • Justifications/Evidence
  • Consent

Again, see the report for more details–the principles are in the Appendix.

Ethics in Artificial Intelligence

EU Ethical Guidelines on AI

In October 2022 the European Commission published a set of Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators.

The categories of these principles are:

  • Human agency and oversight
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Societal and environmental wellbeing
  • Privacy and data governance
  • Technical robustness and safety
  • Accountability

See the PDF version of the report for more detail.

UNESCO Recommendations on the Ethics of AI

In 2022, UNESCO published a report about ethics and AI as well. The main categories of their ethical principles are:

  • Proportionality and do no harm
  • Safety and security
  • Fairness and non-discrimination
  • Sustainability
  • Right to privacy, and data protection
  • Human oversight and determination
  • Transparency and explainability
  • Responsibility and accountability
  • Awareness and literacy
  • Multi-stakeholder and adaptive governance and collaboration