
AI & Relationships: Mollick, Co-Intelligence


Le Centaure qui danse (Cité Internationale Universitaire de Paris), photo shared on Flickr by Jean-Pierre Dalbéra, licensed CC BY 2.0

As part of the series of posts I’m writing on AI and relationships, I want to discuss a few points from Ethan Mollick’s book, Co-Intelligence: Living and Working with AI (Penguin, 2024). As with some other works discussed in the series, the book doesn’t focus directly on the theme of how AI affects human relationships with AI, with ourselves, with each other, or with other entities, but the overarching theme of working with AI as a “co-intelligence” is certainly relevant and, in my view, interesting.

This book covers a lot of helpful topics about working with AI, including a clear explainer about generative AI; ethical topics related to AI; AI and creativity, education, and work; possible scenarios for the future of AI; and more. But here I’ll just focus on some of the broad ideas about working with AI as a co-intelligence.

Note: I bought an e-book copy of the book and don’t have stable page numbers to cite, so I’ll be citing quotes from chapters instead of pages.

Alien minds

Mollick ends the first chapter of the book, “Creating Alien Minds,” by stating that humans have created “a kind of alien mind” with recent forms of AI, since they can act in many ways like humans. Mollick isn’t saying that AI systems have minds in the ways humans do, nor that they are sentient, though they may seem so. Rather, Mollick suggests that we “treat AI as if it were human, because in many ways it behaves like one” (Chpt. 4). Still, that doesn’t mean believing that these systems replicate human minds in a deep way; the idea is to “remember that AI is not human, but often works in the ways that we would expect humans to act” (Chpt. 4). AI systems are not necessarily intelligent in the same way humans are, but they do act in “ways that we would expect humans to act” sometimes.

As an aside, I recently also finished a book by Luciano Floridi called The Ethics of Artificial Intelligence (Oxford UP, 2023), in which Floridi argues for conceiving of AI in terms of what it can do, not in terms of whether it possesses similar kinds of intelligence to humans:

… ‘for the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving’. This is obviously a counterfactual. It has nothing to do with thinking, but everything to do with behaving: were a human to behave in that way, that behaviour would be called intelligent. (16)

According to Floridi, humans have made great strides in developing artificial systems that can produce behaviour that could be called intelligent, but not so much in producing “the non-biological equivalent of our intelligence, that is, the source of such behaviour” (20).

I’m not sure whether Mollick would agree with Floridi here, and I am not enough of an expert to have an opinion on whether Floridi is right about machine intelligence. But Floridi’s view helps me consider what it might be like to treat AI as if it were human because of its behaviour, without necessarily thinking that it is intelligent or sentient.

Returning to Mollick’s book, the idea of being able to work with an “alien mind” can be useful, Mollick says, “because an alien perspective can be helpful” to address human biases “that come from us being stuck in our own minds” (Chpt. 3).

Now we have another (strange, artificial) co-intelligence we can turn to for help. AI can assist us as a thinking companion to improve our own decision-making, helping us reflect on our own choices (rather than simply relying on the AI to make choices for us). (Chpt. 3)

I think this idea of interacting with an outside, even alien, perspective is interesting, but I wonder…is what we can get from AI really so alien? Since it is trained on human inputs, will it provide a significantly different perspective than one could get from talking with other humans?

Mollick goes on to say that the “diversity of thought and approach” one can get from an AI could lead to novel ideas “that might never occur to a human mind” (Chpt. 3). Perhaps this is the case; I am not expert enough in what AI can do to judge well. Mollick argues in Chapter 5 that innovation and novelty can come from combining “distant, seemingly unrelated ideas,” and that LLMs do this well: they are “combination machines” that also add in a bit of randomness, making them “a powerful tool for innovation.” LLMs can certainly come up with novel ways to combine seemingly random ideas and concepts (Mollick provides an example of asking for business ideas that combine fast food, lava lamps, and 14th century England), and in this sense I can understand the point that you may get new ideas from working with this “alien mind.”

Another thought I have about working with such an alien mind, and one that Mollick also discusses in the book (Chapter 2), is that there is bias in AI systems, which persists even with attempts to correct for it. So the AI alien mind one is thinking with may be steering one’s ideas and decisions towards, e.g., dominant perspectives and approaches in the training data. Of course, talking with individual humans means dealing with bias as well, so this isn’t unique to AI. One way to address this when working with humans is to seek out multiple perspectives from people with many different kinds of backgrounds and experiences. A worry is that if we rely too heavily on AI to help us reflect on our own views, develop ideas, and make decisions, we may not take the time to ensure we are getting diverse perspectives and approaches.

Co-intelligence

Mollick is sensitive to the risk of overreliance on AI: “it is true that thoughtlessly handing decision-making over to AI could erode our judgment” (Chpt. 3). Similarly, when discussing in Chapter 5 the practice of relying on AI to write first drafts of various kinds of work, Mollick states that doing so can erode our own creativity and thought processes. It’s easy to just go with the ideas and approaches the AI comes up with in the first draft, and even if people then revise and edit, they may lose the opportunity to contribute their own original thoughts and approaches. Further, if “we rely on the machine to do the hard work of analysis and synthesis, [then] we don’t engage in critical and reflective thinking ourselves,” and we lose the opportunity to “develop our own style.”

To work with AI as a co-intelligence, as a partner in some sense, humans need to remain strongly in the loop, according to Mollick: “You provide crucial oversight, offering your unique perspective, critical thinking skills, and ethical considerations” (Chpt. 3). The idea is to use AI to help support human thinking and creativity, not replace them. When people work with AI but keep their own creativity and criticality in the loop, the results are better than if they simply rely on whatever the AI outputs (Chpt. 6). In addition, when people “let the AI take over instead of using it as a tool, [this] can hurt human learning, skill development, and productivity” (Chpt. 6). Working with AI as a co-intelligence means still developing and practicing human skills, but augmenting them with AI where it makes sense.

Further, Mollick states: “Being in the loop helps you maintain and sharpen your skills, as you actively learn from the AI and adapt to new ways of thinking and problem-solving” (Chpt. 3). I find this focus on learning from AI and adapting our ways of thinking interesting…how might working with AI more regularly shape the ways humans tend to think and make decisions? How can humans remain critical and active partners in this relationship, retaining what is valuable and useful in human ways of thinking and being? And which of those ways is it important to retain, even as people adapt somewhat to how today’s AI systems work (or tomorrow’s, or next decade’s, or…)?

Centaurs and cyborgs

One of the memorable (for me) aspects of Mollick’s book is his use of the metaphors of centaurs and cyborgs for working with AI:

Centaur work has a clear line between person and machine, like the clear line between the human torso and horse body of the mythical centaur. It depends on a strategic division of labor, switching between AI and human tasks, allocating responsibilities based on the strengths of each entity.

Cyborgs blend machine and person, integrating the two deeply. Cyborgs don’t just delegate tasks; they intertwine their efforts with AI …. (Chpt. 6)

Working with AI as a centaur, as I understand it, would mean dividing tasks between the person and the AI. Mollick gives the example of having the AI produce graphs from data while the human decides on statistical analysis approaches. A cyborg way of working, by contrast, would be more like weaving human and AI activity together on a single task: doing some writing, asking the AI for feedback and revising, or asking the AI to help finish a thought or a paragraph in a useful way (and then revising as needed). Mollick suggests that people start off working with AI as a centaur, and that they may gradually start to act as a cyborg; at that point, they will have “found a co-intelligence” (Chpt. 6).

This idea of being a cyborg goes back to the questions at the end of the previous section, about how working closely with AI and adapting to it (and having it adapt to us) may change human ways of thinking, acting, and making decisions. Beyond simply sharing tasks, both co-intelligences, as it were, are likely to be changed by this relationship, and I find it very interesting to consider who and what humans might become, and what we should hold onto.

One might argue that we are already cyborgs to some extent, since what and how we think, write, and interact are significantly shaped by technologies of many kinds, from handwriting to typesetting to word processing, the internet, and much more. Somehow, to me, what Mollick calls the “alien mind” of AI feels like an even deeper level of connection between human thinking, creativity, and technology, though I haven’t thought about this in enough depth to have a great deal more to say about it yet. I do find it more worrying: we may lose more of what is valuable in being human by interacting so extensively with entities that act like humans in various ways. Or maybe we will find ways to retain what is meaningful in our relationships to ourselves and each other, and even find new forms of such meaning.

Human relationships with each other

So far in this discussion of Mollick’s book I’ve been focusing on humans’ relationships with AI (how AI systems may act as a kind of co-intelligence) and our relationships with ourselves (how overreliance on AI may erode some human capacities). I’m also very interested in humans’ relationships with each other as AI use increases.

Though human relationships aren’t directly his point in this section, I appreciated Mollick’s discussion in Chapter 5 of using AI to write recommendation letters, performance reviews, speeches, feedback, and other works that are meaningful because they reflect what other humans think and the time and effort they have put into them. Part of the issue, he points out, is that AI systems could actually do a better job in some cases than a human would. But such works then lose an important sense of meaning, and Mollick argues that “we are going to need to reconstruct meaning, in art and in the rituals of creative work” (Chpt. 5).

This leads me to wonder: what degree of AI use may still preserve the important meaning, the (indirect) connection to another person that can come from such works, and what degree of use may erode that meaning too much? This is likely an empirical question, one that could be explored by asking people about their perceptions of AI-written or AI-augmented work in domains where that work is meaningful partly because it was produced by a particular human and is meant to express their own views.

For example, I’m reminded of a research study I heard about this week in which the researchers sought student perceptions of feedback produced by humans vs. AI: “AI or Human? Evaluating Student Feedback Perceptions in Higher Education” (Nazaretsky et al., 2024; open access preprint here). Interestingly, the authors report that students in the study tended to revise their opinions of various aspects of the AI feedback after learning it came from AI, including on measures of “genuineness,” “objectivity,” and “usefulness” (Section 5, p. 294). Among other conclusions, they note that their study “reveals a strong preference for human guidance over AI-generated suggestions … , indicating a fundamental human inclination for personal interaction and judgment” (p. 295). There is much more that could be said about this paper, and I may discuss it (and related studies) in more detail in another blog post.

Conclusion

One topic shared by Mollick’s book and Vallor’s The AI Mirror, which I discussed in an earlier blog post, is the danger of outsourcing too much of our own capacity for critical and creative thinking and decision-making to AI. Mollick’s book is, though, I think, much more positive about the potential value of humans working with AI for various purposes (creativity/art, teaching and learning, work, and more), and it provides many practical ideas for doing so. I have tended to focus on some of the more critical aspects of Mollick’s book, which reflects my own interests and sense of caution. I am very interested in working with others to figure out just what kind of cyborgs we might become, but I am also likely to be a voice of critique, as I fear there may be a fair bit to lose in our relationships with ourselves and others. I look forward to also figuring out what we might gain, though!

 

AI & relationships: Vallor, The AI Mirror

As discussed in a recent blog post, I’ve been thinking a lot about AI and relationships recently, and in this post I’m going to discuss a few points related to this topic from a book by Shannon Vallor called The AI Mirror (2024). Vallor doesn’t directly address AI and relationships, but I think a number of her arguments do relate to various ways in which humans relate to themselves, each other, and AI.

Mirrors and their distortions

Vallor focuses throughout the book on the metaphor of AI as a mirror, which she uses to make a few different points. First, she talks about how many current AI systems function as mirrors to humanity in the sense that how they operate is based on training data that reflects current and past ideas, beliefs, values, practices, emotions, imagination, and more. They reflect back to humans an image of what many (not all, since this data is partial and reflects dominant perspectives) have already been.

In one sense, there can be some silver lining in this, Vallor notes, as such mirrors can show things in stark relief that might further emphasize the need for action:

AI today makes the scale, ubiquity, and structural acceptance of our racism, sexism, ableism, classism, and other forms of bias against marginalized communities impossible to deny or minimize with a straight face. It is right there in the data, being endlessly spit back in our faces by the very tools we celebrate as the apotheosis of rational achievement. (46)

But of course, these biases showing up in AI outputs are harmful, and she spends much of the book on the downsides of relying too heavily on AI mirrors for decision-making and for understanding ourselves and the world around us, given that they, like any mirror, provide only a surface, distorted reflection. For one thing, as noted above, their reflections tend to show only part of humanity’s current and past thoughts, values, and dreams; outputs from LLMs, for example, focus on what is most likely given what is most prevalent in the training data.

In addition, AI mirrors can capture only limited aspects of human experience, since they lack the capacity for lived, embodied experience of the world. For example, language models can talk about pleasure, pain, the taste of a strawberry, or a sense of injustice, but they do not, of course, have experiences of such things. This can have profound impacts on humans’ relationships with each other if those relationships are mediated by AI systems that reduce people to machine-readable data. Vallor illustrates this by pointing to the philosopher Emmanuel Levinas’ account of encountering another person as a person, and the call to responsibility and justice that ensues:

As … Emmanuel Levinas wrote in his first major work Totality and Infinity, when I truly meet the gaze of the Other, I do not experience this as a meeting of two visible things. Yet the Other (the term Levinas capitalizes to emphasize the other party’s personhood) is not an object I possess, encapsulated in my own private mental life. The Other is always more than what my consciousness can mirror. This radical difference of perspective that emanates from the Other’s living gaze, if I meet it, pulls me out of the illusion of self-possession, and into responsibility….

In this gaze that holds me at a distance from myself, that gaze of which an AI mirror can see or say nothing, Levinas observes that I am confronted with the original call to justice. When a person is not an abstraction, not a data point or generic “someone,” but a unique, irreplaceable life standing before you and addressing you, there is a feeling, a kind of moral weight in their presence, that is hard to ignore. (60)

The more people treat each other through the lens of data that can be “classified, labeled, counted, coordinated, ranked, distributed, manipulated, or exploited” rather than as “subjects of experience,” the more we may lose that already too-rare encounter (61). This is nothing new, of course; it’s a trend that has been continuing for a long time in many human communities. But it can be made worse by outsourcing to AI decisions such as those related to health care, insurance, jobs, access to educational institutions, predictions of who may be a repeat offender, and more, which can in some cases reduce opportunities for human judgment in the name of efficiency.


AI and relationships: Indigenous Protocol and AI paper

I’ve been thinking a lot lately about generative AI and relationships. Not just in terms of how people might use platforms to create AI companions for themselves, though that is part of it. I’ve been thinking more broadly about how development and use of generative AI connects with our relationships with other people, with other living things and the environment, and with ourselves. I’ve also been thinking about our relationships as individuals with generative AI tools themselves; for example, how my interactions with them may change me and how what I do may change the tools, directly or indirectly.

For example, the following kinds of questions have been on my mind:

  • Relationships with other people: How do interactions with AI directly or indirectly benefit or harm others? What impacts do various uses of AI have on both individuals and communities?
  • Relationships with oneself: How do interactions with AI change me? How do my uses of it fit with my values?
  • Relationships with the environment: How do development and use of AI affect the natural world and the relationships that individuals and communities have with living and non-living entities?
  • Relationships with AI systems themselves: How might individuals or communities change AI systems and how are they changed by them?
  • Relationships with AI developers: What kinds of relationships might one have (or is one already having) with the organizations that create AI platforms?

More broadly: What is actually happening in the space between human and AI? What is this conjunction/collaboration? What are we creating through this interaction?

These are pretty large questions, and I’m going to focus in this and some other blog posts on some texts I’ve read recently that have guided my interest in thinking further about AI and relationships. Then later I will hopefully have a few clearer ideas to share.

Indigenous Protocol and AI position paper

My interest in this topic was first sparked by reading a position paper on Indigenous Protocol and Artificial Intelligence (2020), produced by participants in the Indigenous Protocol and Artificial Intelligence Working Group, who took part in two workshops in 2019. The work is a collection of papers, many of which were written by workshop participants. I found it incredibly thought-provoking and important, and I will only touch on small portions of it here. For the purposes of this post, I want to discuss a few points about AI and relationships from the position paper.
