Draft idea for an AI personal ethical decision framework

I recently wrote two blog posts on possible ways that generative AI might support student learning in philosophy courses (part 1, part 2). Through doing so, and also through a thought-provoking comment by Alan Levine on my earlier blog post (which reflected on a presentation by Dave Cormier about focusing on values in situations of uncertainty), I'm now starting to think more carefully about my use of AI and how it intersects with my values.

Alan Levine noted in his comment that sometimes people talking about generative AI start by acknowledging problems with it, and then “jump in full speed” to talking about its capabilities and possible benefits while no longer engaging with the original issues. This really struck me, because it’s something I could easily see myself doing too.

I started reflecting a lot on various problems with generative AI tools, as well as potential benefits I can imagine, and how all of these intersect with my values, in order to make more conscious ethical decisions about whether or not to use generative AI in various situations. On one hand, one could make philosophical arguments about what should be done "in general," but even then, each individual needs to weigh the various considerations against their own values and make their own decision about what they want to do.

I decided, then, to try to come up with a framework of some kind to support folks in making those decisions. This is an early brainstorm; it will likely be refined over time, and I welcome feedback! It is something that would take time, effort, and fairly deep reflection to go through, and it may go too far in that direction, especially since I can imagine something like this being used in a workshop (or series of workshops) or a course, and those have time limits. Of course, there is no requirement that people work through something like this in a limited time period; they could always go through it on their own later. It's just that I know myself, and I often intend to return to things like this later and, well, just get busy. This is one aspect that needs more work.

The general idea is to go through possible benefits and problems with using generative AI tools, connect these to one's values, and then brainstorm whether one will use generative AI in a particular context and, if so, how one might address the problems and further support the possible benefits.

I think it would be helpful to start with a set of possible uses in one's particular context and build the rest from there, because a number of the possible benefits and problems differ according to the particular use case. But some problems are more general, e.g., issues with how generative AI tools are developed, trained, and maintained on the "back end," as it were, which apply to any downstream uses (such as energy usage, harm to data workers, violations of Indigenous data sovereignty in training, etc.). So some of the problems, at least, could be considered regardless of the particular context of use.

First draft of framework

Without further ado, here is the very drafty first draft of the kind of thing I'm thinking about. At this point it's just structured as a worksheet that starts off with brainstorming some possible uses of generative AI in one's own work (e.g., teaching, learning, research, coding, data analysis, communications, and more). Then folks can pick two or three of those to focus on. The rest is a set of tables to fill out about potential benefits and problems with using generative AI in those ways, and then a final one where folks make at least a provisional decision and brainstorm one or two next steps.

Brainstorm possible uses

Think of a few possible uses of generative AI in your own work or study that you’d like to explore further, or ones you’re already engaged in. Take __ minutes to write down a list. [Providing a few example lists for folks could be helpful]

Then choose two or three of these to investigate further in the following steps.

Benefits and problems

Regarding problems with using AI: as noted above, some apply regardless of the particular use case, and I think it's important for folks to grapple with those even though they may be more challenging for individuals to address. Some background and resources on these would be useful to discuss in a facilitated session, ideally with some pre-reading. A number of the issues are fairly complex and would benefit from time to learn about and discuss, so one can't go through all of them in a limited time period.

The same goes for possible benefits: it would be useful to list a few areas in which generative AI use could be beneficial, such as supporting student learning, doing repetitive tasks to free people up for more complex or more interesting work, and supporting accessibility in some cases. These will necessarily be high level, while participants would brainstorm benefits that are more specific to their use cases.

One could ask folks to brainstorm a few problems and benefits of generative AI for their use cases, including at least one of the more general problems as well as at least one that is specific to their use case.

| Problem or Benefit | Evidence | Impacts | Further info | My view | Value |
| --- | --- | --- | --- | --- | --- |
| E.g., climate impacts in both training and use | This could be links | Who is harmed? Who benefits? | What other info would be helpful? | One's view on the topic at the moment | Related value(s) one holds |

This is not very nice looking in a blog post, but hopefully you get the idea.

Decisions

Then participants could be encouraged to make an initial decision about using GenAI in a particular use case, even if that decision might change later.

| Use case | Use GenAI? Why? | If yes, how? | Next steps |
| --- | --- | --- | --- |
| E.g., feedback on student work | Your choice, and why or why not | How to do so, including how you will address the problems and support the benefits | What one or two next steps will you take? This can include how you would go about getting more information you need to decide. |


Reflections

The idea here is not necessarily to have people weigh the benefits against the problems; that is too complicated and would require going through all the possible benefits and problems one can think of. Instead, the point is to start engaging in deeper ethical reflection on a particular use case and to come to some preliminary decision afterwards, even if that decision may change with further information.

One place where I think folks may get hung up is on feeling like they need more information to make decisions. That is completely understandable, and in a limited time frame participants wouldn't be able to go do a bunch of research on their own. But the framework may at least bring to the surface that ethical issues are complex, and one needs to spend time with them, including finding out more information where one doesn't have it yet, or where one has only one or two sources and needs more. That's why I put the "Further info" column into the first table example. It's also why, under "My view," I suggested this be one's view at this time, recognizing that it may change as one investigates further. And one of the next steps could be to investigate some of these things further.

Of course, one reasonable response to this exercise is to decide that some of the general problems are bad enough that one feels one shouldn’t use generative AI tools at all. I mean for this kind of exercise to leave that option open.

The more I think about this, the more I think it would probably be better to do something like this in at least two sessions: one where ethical issues and benefits are discussed to the degree feasible in a certain time frame, and a second where folks go through their own use cases with the tables as noted above. Otherwise it's likely to be too rushed.


This is a rough sketch of an idea at the moment that I will likely refine. I feel like something along these lines could be useful, even if this isn’t quite it. So I’m happy for feedback!

2 comments

  1. I hope you understand my comment was not a criticism, nor have I found an ideal way to swim with values in the face of the wave. I feel stuck.

    But some of this is familiar from the earlier ride of EdTech: much excitement and tool fascination (surely this does not happen any more, ha) rather than a pedagogical breakdown like yours here. We never really abandon values, but we are not machines either. I recently read an IHE piece from that era that described institutions operating under "Innovation Theater" with the promise of the early web, MOOCs, et al. (my wife and I were chatting this morning about the heady era of smart boards in K-12).

    This is the biggest show on earth of innovation theater. We are buying into the "snake oil" of it magically managing our mundane tasks and chores so we can frolic at the top of Maslow's pyramid.

    Keep on fleshing this out, Christina! I’m reading.

    1. Thanks so much for engaging, Alan! I do understand that this is hard stuff, and I feel stuck too. I have a very long draft blog post of me trying to work through some of the benefits and problems with GenAI tools and linking these to my values, and I haven't published it and possibly won't, because, well, it's just really complicated and I feel I need more information on so many things to come to a good conclusion. I also feel like these are things that individuals need to work through for themselves; thus the focus here on some ideas for a way to encourage folks to do that, rather than on my own views.

      I have thought of creating a custom chatbot that helps people work through the kinds of questions discussed here, but then there's the tension of using a tool while going through the process of possibly deciding not to use such a tool after all. Using a tool while also questioning the ethics… seems questionable!

    I definitely feel the pull of the excitement and tool fascination of innovation theatre, and am trying to find a balance. And it is challenging!
