Workshop idea on AI ethical decision making

I am thinking about whether, and how, I might take the very drafty personal AI ethics framework idea from a recent blog post and do something with it during a synchronous workshop for faculty, students, and staff. As I was working on that post I started to think that really working through one’s ethical views on AI is very complex and might be best done through something like a set of online modules rather than a short engagement like a workshop. But since I frequently think best by writing, I’m going to use this post to see what might be possible in a workshop format!

I’m imagining a 1.5- or 2-hour workshop on this topic, and wondering what might be both feasible and, of course, useful for helping participants think carefully about ethical considerations in possible uses of generative AI in teaching and learning. My main worry, as I think about this, is that making ethical decisions is really complicated, and I don’t want to overwhelm people with so many things to consider that some end up feeling it’s too much to attempt. I really want to find a middle ground between a deep ethical analysis of decisions around generative AI (which could be, and has been, done at book length!) and providing little in the way of guidance on how to make ethical decisions in this area. This is challenging, I’m finding, as I think it through.

Below is a draft outline for a workshop, with some early ideas that will need further refinement.

Outline for a workshop

1. Ethical decision making & use cases

Framework

I think it could be helpful to have some kind of ethical decision making framework. What I have in my earlier blog post is not quite there yet; I don’t think it includes everything it needs to, though it’s a start. After doing a quick web search on ethical frameworks, and considering my own thoughts, I’ve come up with some elements that it would be good to include for the purposes of this kind of workshop. I’m numbering them just for ease of referring to them later; they may not necessarily be in exactly this order.

  1. Identify the question/decision to be made, and what options are available
  2. List various entities involved, including people and also other living and non-living entities as relevant
  3. Identify possible ethical issues involved
  4. Gather information relevant to those issues as best you can; note questions you still have and where you would like to have further information
  5. Evaluate options according to ethical values and principles
  6. Make a decision
  7. Develop and then act on next steps

There are likely more things to consider, such as reviewing the outcome of the decision to consider its positive and negative ethical impacts and learn for the future, but for the current purpose the above is a decent start I think.

This section of a workshop could include a brief introduction to the ethical decision making framework being used in the session, which will guide parts of the session. We won’t be able to do all of the above steps in a short workshop.

Brainstorming use cases

In addition, at this point we could ask participants to brainstorm one or more possible use cases for generative AI in teaching and learning (or in some other context, depending on the audience). This would be step 1 in the framework above. These could be contributed individually, perhaps on a shared Google Doc, to be used later in the session. Time permitting, participants could also include information on the people and other entities involved (step 2 in the framework).

For example, one use case could be deciding whether to use generative AI tools to make comments on student written work. It would be helpful to consider some further specifics, such as the tools that might be used and the kind of assignment and feedback one has in mind. Those involved would be students, the instructor, and possibly TAs.

2. Generative AI and ethics

Here we could review some ethical concerns that have already been raised by many folks. This would address step 3 in the framework above. We might choose a few ethical issues and devote one slide to each, with further resources for review and reading on a handout/worksheet.

Also, for a UBC session we could review the UBC guidelines on generative AI, both for administrative use and for use in Teaching and Learning. These have some ethical considerations, and the teaching and learning guidelines also have a section on Indigenous data sovereignty and knowledge protocols.

It would be important to open the floor to other ethical considerations that haven’t yet been mentioned, as well.

The point of this section would be to provide information on some of the ethical issues involved in decisions around whether and how to use generative AI tools. This can only be very limited information, of course, in a short workshop. It’s very easy to end up overwhelming folks with information, so it’s probably better to provide a limited amount in the session and include links to further information that can be reviewed at another time.

3. Evaluating options

Questions re: ethics & values

When it comes time to evaluate one or more options according to ethical values and principles (step 5 in the ethical decision making framework above), this is where things get complex. As a philosopher who has taught ethics multiple times, I know just how deep into the details of ethical theories and approaches one can go. I would need to temper that tendency from my background with what is feasible and useful for a short engagement with a broad audience.

The Markkula Center for Applied Ethics at Santa Clara University in California, USA, simplifies some of this complexity in their framework for ethical decision making. They point out that there are multiple theoretical approaches to ethics and ask decision makers to consider each of them, including approaches focused on rights, justice, utilitarianism, care, and more. They provide a list of questions, one for each approach, to consider when making ethical decisions. I’m trying to take a similar approach here, while attempting to simplify even further than the list of approaches they provide.

Here are some draft ideas on what folks might be able to consider in a limited period of time (this could be on a slide and a worksheet):

  • Potential benefits: What might be some beneficial impacts to using a generative AI tool in this way, and to whom?
    • Be sure to also consider impacts to relationships with others, human or otherwise.
    • Might there be inequities in how these benefits are distributed?
    • Is there anything that could be done to better support the likelihood of those impacts?
  • Potential harms: What are some possible harms that could result, and to whom?
    • Be sure to also consider impacts to relationships with others, human or otherwise.
    • Might there be inequities in how these harms are distributed?
    • Are there some potential harms that are so weighty that they can’t be outweighed by other possible benefits?
  • What rights are involved that must be respected?
  • What are some ethical values that relate to this situation and how might these help guide a decision?
    • [for this one there would probably need to be a list of some sample values to help people better answer the question, such as decolonization, equity, accessibility, autonomy, transparency, privacy, fairness, sustainability, and more]

I think this covers a lot of the bases, though it may still be too much. I really want to include an emphasis on relationships, because I think that is particularly important when considering the use of generative AI, and I haven’t seen as much written about that as about numerous other topics related to generative AI and ethics. I have one blog post on AI and relationships so far here, and I am planning to write others.

Questions and further information

In addition, really addressing the above questions thoroughly would require a good deal of information; e.g., to consider impacts on the environment one would need specific information about energy use for training and running particular models, much of which simply isn’t fully available. And even information that is available won’t be something participants can find or review in a short workshop.

There needs to be some way to acknowledge that it’s okay, and necessary, to make decisions even if we don’t have or can’t get full information. We can always try to find more information, but it’s often the case that we can’t wait to have full information before making decisions around ethics; we have to do the best we can and recognize that our decisions may need to change later.

So other questions that would be important for participants to consider are along the lines of:

  • What questions do you still have related to making an ethical decision about this use case for AI? What further information would it be helpful to have?

Idea for how to run this section

There could be an individual part where people go through the brainstormed use cases (generated earlier in the session on a shared doc), pick one, and start filling in a few ideas in answer to the above questions.

Then perhaps participants could get together in groups, choose one use case to focus on, and discuss what has been typed in the document already and what else could be added in response to the above questions. They could also discuss what questions they have about the use case and ethical considerations, and what further information it would be helpful to have.

Sharing with the larger group

Each group could be asked to share reflections on the activity, what questions are still left over, whether they have gained any new insights from this process, whether the questions were helpful for reflection or perhaps something was missing, etc.

4. Wrap up

What we won’t have time to get to in this plan are steps 6 and 7 in the framework at the beginning of this outline: make a decision and consider next steps. So that could be something participants are prompted to consider outside the workshop.

The workshop time could be used to start filling in considerations for various use cases, and if individuals would like to pursue thinking about any of those further they can do so on their own and make their own decisions. At least, they can make decisions for now: the “for now” is important here, given the point above about not always having full information, and because folks will not have time to go through all of the above questions about ethics in depth.

A last part of the handout/worksheet could have a table something like what I included in an earlier blog post, and participants could be encouraged to review that later.

Use case | Use GenAI? Why? | If yes, how? | Next steps
E.g., feedback on student work | Yes/no, and why/why not | Any specifics on how you might implement, including to address ethical issues | What one or two next steps will you take? This can include how you would go about getting more information you need to decide.

 

Reflection on the above

I had earlier drafts of this blog post with a different design: I had thought something like this could be focused on individuals coming up with their own use cases and working on answering the questions related to ethics and values on their own. But this would have meant a lot of time in a workshop spent on individual work (section 3 above would have been mostly individual work, and I can see it taking upwards of 30 minutes), and I’m not sure that’s the best use of time together in a (physical or virtual) room. Plus, it would mean everyone would have to try to answer all the questions on their own, so they wouldn’t get the benefit of learning from and with others and gaining new ideas they might not have on their own. It could even mean that two or more folks end up working on pretty much the same use case and doing the same things separately instead of together.

So I changed it to the format above: instead of everyone coming up with their own use case and making their own decisions about it (the earlier version did include time to reflect on one’s own decision, which in this version is left until after the workshop), there is some individual work and then groups work together on answering the ethics & values questions as well. This way we could start building up a repository of ideas for answering the questions in section 3 for various use cases, which could be re-used later so there isn’t duplication of effort from two or more people working separately on similar use cases.

Of course, the downside of this approach is that ultimately, ethical decision making around using AI for particular purposes needs to be an individual decision, and our reasons for making those decisions may differ from what others think, including which values we hold and how we interpret them. So I’m still thinking about which design is best.

I also still think this whole approach may be too complicated. It’s a lot to ask people to consider in order to make a decision around using an AI tool. I mean, I think it is all important to consider, but is it realistic that folks would actually address all of these kinds of questions, given the effort and complexity involved? Will it help surface new ideas and insights? Perhaps hard to tell until we try….

And when I do some rough calculations of possible timing for each of these sections, it might be doable in two hours, though that’s probably too long overall for a short workshop. Hmmmm.

(I also feel like I should probably do some research into the literature on ethical decision frameworks and what is helpful/less helpful, what considerations need to be addressed, etc. Right now I’m making this up without a lot of expertise.)

As always, happy to hear any thoughts or comments!
