I’ve been thinking a lot lately about generative AI and relationships. Not just in terms of how people might use platforms to create AI companions for themselves, though that is part of it. I’ve been thinking more broadly about how the development and use of generative AI connect with our relationships with other people, with other living things and the environment, and with ourselves. I’ve also been thinking about our relationships as individuals with generative AI tools themselves; for example, how my interactions with them may change me, and how what I do may change the tools, directly or indirectly.
For example, the following kinds of questions have been on my mind:
- Relationships with other people: How do interactions with AI directly or indirectly benefit or harm others? What impacts do various uses of AI have on both individuals and communities?
- Relationships with oneself: How do interactions with AI change me? How do my uses of it fit with my values?
- Relationships with the environment: How do development and use of AI affect the natural world and the relationships that individuals and communities have with living and non-living entities?
- Relationships with AI systems themselves: How might individuals or communities change AI systems and how are they changed by them?
- Relationships with AI developers: What kinds of relationships might one have, or already be having, with the organizations that create AI platforms?
More broadly: What is actually happening in the space between human and AI? What is this conjunction/collaboration? What are we creating through this interaction?
These are pretty large questions, and in this and some other blog posts I’m going to focus on some texts I’ve read recently that have guided my interest in thinking further about AI and relationships. Then, later, I will hopefully have a few clearer ideas to share.
Indigenous Protocol and AI position paper
My interest in this topic was first sparked by reading a position paper on Indigenous Protocol and Artificial Intelligence (2020), produced by participants in the Indigenous Protocols and Artificial Intelligence Working Group, which held two workshops in 2019. This work is a collection of papers, many of which were written by workshop participants. I found it incredibly thought-provoking and important, and I will only barely touch on small portions of it here. For the purposes of this post, I want to discuss a few points about AI and relationships from the position paper.
In the Introduction to this work, the authors explain that “a central proposition of the Indigenous Protocol and AI workshops is that we should critically examine our relationship with AI. In particular, we posed the question of whether AI systems should be given a place in our existing circle of relationships, and, if so, how we might go about bringing it into the circle” (7). For example, the Introduction notes that one of the themes discussed in the workshops in response to this broad question was what it might be like to have a situation in which “AI and humans are in reciprocal relations of care and support” (10).
The authors also emphasize that Indigenous protocols of kinship can help us conceptualize how we may relate to AI systems. For example, “Such protocols would reinforce the notion that, while the developers might assume they are building a product or a tool, they are actually building a relationship to which they should attend” (8).
These protocols differ amongst Indigenous communities, as emphasized in some of the Guidelines for Indigenous-Centred AI Design that are included in the position paper. These guidelines include a discussion of relationality and reciprocity that emphasizes a focus on particular community protocols:
- AI systems should be designed to understand how humans and non-humans are related to and interdependent on each other. Understanding, supporting and encoding these relationships is a primary design goal.
- AI systems are also part of the circle of relationships. Their place and status in that circle will depend on specific communities and their protocols for understanding, acknowledging and incorporating new entities into that circle. (21)
The guidelines also cover other topics, including:
- Locality: “AI systems should be designed in partnership with specific Indigenous communities to ensure the systems are capable of responding to and helping care for that community (e.g., grounded in the local) as well as connecting to global contexts (e.g. connected to the universal).”
- Responsibility and accountability: “AI systems developed by, with, or for Indigenous communities should be responsible to those communities, provide relevant support, and be accountable to those communities first and foremost.”
- Indigenous data sovereignty: “Indigenous communities must control how their data is solicited, collected, analysed and operationalized.”
Some of the individual papers within this larger collection help flesh out further some possible human relationships with AI, each other, communities, and the environment. In “The IP AI Workshops as Future Imaginary,” Jason Lewis talks about how participants in the workshops focused on their own community protocols in considering what relationships with AI could be like. For example:
Anishinaabe participants talked about how oskabewis, helpers whose generous and engaged and not-invisible support for those participating in ceremony, could model how we might want AI systems to support us—and the obligations that we, in turn, would owe them. (41)
In addition, Hawaiian participants talked about how protocols of crafting a fishing net “including the layer upon layer of permission and appreciation and reciprocity” could potentially be reflected in how AI systems are built (41).
In “Gifts of Dentalium and Fire: Entwining Trust and Care with AI,” Ashley Cordes talks about engaging with AI with trust and care from the perspective of the Coquille Nation on the coast of Oregon, USA. Cordes discusses several ways in which AI and other technologies could be used to support Indigenous communities, and also notes that “trust and care is a two-way street; they must also be expressed towards AI” (66). For example, AI systems need “clean and nourishing food (a data diet), security, comfort in temperature, and capacity for fulfillment” (66), where a good data diet means ensuring the data a system is given are adequate to the task and will reduce biased outputs, excluding extraneous data that are not needed, and sourcing the data ethically. Security means, in part, protecting systems from breaches. In developing AI systems, it’s important to care for the needs of those systems as well as ensuring they are being used to care for people, communities, other living beings, and the environment.
Another paper in the collection also talks about different aspects of our relationships with AI and with each other: “How to Build Anything Ethically,” by Suzanne Kite in discussion with Corey Stover, Melita Stover Janis, and Scott Benesiinaabandan. This paper is focused on teachings about stones from a Lakota perspective, which the authors use to invite the reader to “consider at which point one affords respect to materials or objects or nonhumans outside of oneself” (75). The authors also provide a side-by-side discussion of how to build a sweat lodge in a good way and how to build an AI system in a good way, according to Lakota teachings. As just one small part of this, one needs to identify and consider the many living and non-living entities involved:
- the communities of the location where raw materials originate
- the raw materials themselves
- the environment around them
- the communities affected by transportation and devices built for transportation
- the communities with the knowledge to build these objects
- the communities who build the objects
- the communities who will use and be affected by their use
- the creators of the objects (77)
Then, in terms of extracting and refining raw materials, consideration needs to be given to reciprocity: reciprocity towards individuals and communities for their labour and for the effects on their lands, and towards other living creatures and the Earth for effects on the environment, including restoring it back to health. And care must be taken at the end of a computing device’s lifecycle as well: “A physical computing device, created in a Good Way, must be designed for the Right to Repair, as well as to recycle, transform, and reuse. The creators of any object are responsible for the effects of its creation, use, and its afterlife, caring for this physical computing device in life and in death” (81).
Here too there is an emphasis on relationships with each other and with the natural world when working with technology, including AI, and also on relationships with technological entities themselves and how we take care of their generation and their end.
Reflection
The emphasis on relationships that is found in various ways in this collection is one I haven’t seen a lot in other writings about AI: specifically, the relationships people form with AI, as well as those we form with each other and with other entities (living or otherwise) around AI development and use. A number of folks (including me) talk about related topics, such as ethical considerations and how AI use can perpetuate harm or, on the other hand, provide benefits to some folks in ways that can support equity; these do involve our relationships with AI and with each other, but I haven’t really heard them discussed so clearly in terms of relationships. I particularly haven’t heard a lot of folks talking about our relationships with AI entities themselves and our responsibilities towards them, which I find very interesting and thought-provoking.
There are multiple ways that relationships involving AI, each other, and the world around us are reflected in this collection. The following are but a few:
- Relationships with Indigenous communities: As noted in the Guidelines quoted above, AI developed by, with, or for Indigenous communities should be responsible and accountable to those communities; such systems should support care for communities and respect Indigenous knowledges and data sovereignty.
- Relationships with other humans in developing and using AI, including those involved in extracting raw materials and building hardware and software, those who use the tools, those who are impacted by the tools, and more
- Relationships with the natural world, including environmental impacts of developing and using AI systems
- Relationships with AI systems, including how they may support our needs and what responsibilities we may have towards them
This collection is much richer and more complex than I can do justice to in a relatively short blog post. It includes stories, poetry, visual art, and descriptions of AI prototypes, among other contributions. The above just barely scratches the surface; careful reading brings out so much more. Here my purpose has been to focus on a few points about AI and relationships that stood out to me on a first and second read, partly to have notes to remind myself, and partly to encourage others to engage with this work!