As part of the series of posts I’m writing on AI and relationships, I want to discuss a few points from Ethan Mollick’s book, Co-Intelligence: Living and Working with AI (Penguin, 2024). As with some other works discussed in the series, the book doesn’t focus directly on the theme of human relationships with AI, with ourselves, with each other, or with other entities as affected by AI, but its overarching idea of working with AI as a “co-intelligence” is certainly relevant, and interesting, in my view.
This book covers a lot of helpful topics about working with AI, including a clear explainer about generative AI; ethical topics related to AI; AI and creativity, education, and work; possible scenarios for the future of AI; and more. But here I’ll just focus on some of the broad ideas about working with AI as a co-intelligence.
Note: I bought an e-book copy of the book and don’t have stable page numbers to cite, so I’ll be citing quotes from chapters instead of pages.
Alien minds
Mollick ends the first chapter of the book, “Creating Alien Minds,” by stating that humans have created “a kind of alien mind” with recent forms of AI, since these systems can act in many ways like humans. Mollick isn’t saying that AI systems have minds in the ways humans do, nor that they are sentient, though they may seem so. Rather, Mollick suggests that we “treat AI as if it were human, because in many ways it behaves like one” (Chpt. 4). Still, that doesn’t mean believing that these systems replicate human minds in a deep way; the idea is to “remember that AI is not human, but often works in the ways that we would expect humans to act” (Chpt. 4). AI systems are not necessarily intelligent in the same way humans are, but they do sometimes act in “ways that we would expect humans to act.”
As an aside, I recently also finished a book by Luciano Floridi called The Ethics of Artificial Intelligence (Oxford UP, 2023), in which Floridi argues for conceiving of AI in terms of what it can do, not in terms of whether it possesses similar kinds of intelligence to humans:
… ‘for the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving’. This is obviously a counterfactual. It has nothing to do with thinking, but everything to do with behaving: were a human to behave in that way, that behaviour would be called intelligent. (16)
According to Floridi, humans have made great strides in developing artificial systems that can produce behaviour that could be called intelligent, but not so much in producing “the non-biological equivalent of our intelligence, that is, the source of such behaviour” (20).
I’m not sure whether Mollick would agree with Floridi here, and I am not enough of an expert to have an opinion on whether Floridi is right about machine intelligence. But Floridi’s view helps me to consider what it might be like to treat AI as if it were human on the basis of its behaviour, without necessarily thinking it is intelligent or sentient.
Returning to Mollick’s book, the idea of being able to work with an “alien mind” can be useful, Mollick says, “because an alien perspective can be helpful” to address human biases “that come from us being stuck in our own minds” (Chpt. 3).
Now we have another (strange, artificial) co-intelligence we can turn to for help. AI can assist us as a thinking companion to improve our own decision-making, helping us reflect on our own choices (rather than simply relying on the AI to make choices for us). (Chpt. 3)
I think this idea of interacting with an outside, even alien, perspective is interesting, but I wonder… is what we can get from AI really so alien? Given that it is trained on human inputs, will it provide a significantly different perspective than one could get from talking with other humans?
Mollick goes on to say that the “diversity of thought and approach” that one can get from an AI could lead to novel ideas “that might never occur to a human mind” (Chpt. 3). Perhaps this is the case; I am not expert enough in AI’s capabilities to judge well. Mollick argues in Chapter 5 that innovation and novelty can come from combining “distant, seemingly unrelated ideas,” and that LLMs can do this well: they are “combination machines” that also add in a bit of randomness, making them “a powerful tool for innovation.” LLMs can certainly come up with novel ways to combine seemingly random ideas and concepts (Mollick provides an example of asking for business ideas that combine fast food, lava lamps, and 14th century England), and in this sense I can understand the point that you may get new ideas from working with this “alien mind.”
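For readers curious to try something like Mollick’s combination prompt themselves, here is a minimal sketch of how one might do so programmatically. This is my own illustration, not code from the book; it assumes the OpenAI Python SDK, and the model name and temperature value are arbitrary placeholder choices. The temperature parameter is one concrete place where the “bit of randomness” Mollick mentions shows up: higher values make the model’s sampling less deterministic, so repeated runs produce more varied combinations.

```python
# A hypothetical re-creation of Mollick's combination prompt (not from the book).
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; model name and temperature are placeholder choices.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "Propose three business ideas that combine fast food, "
                "lava lamps, and 14th century England."
            ),
        }
    ],
    # Higher temperature means more randomness in sampling, i.e. the
    # "bit of randomness" that makes the combinations less predictable.
    temperature=1.2,
)

print(response.choices[0].message.content)
```

Running this a few times at different temperatures gives a rough feel for how the same “combination machine” can produce either conservative or stranger pairings of ideas.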
Another thought I have about working with such an alien mind, and one that Mollick also discusses in the book (Chapter 2), is that there is bias in AI systems, which persists even with attempts to correct for it. So the AI alien mind one is thinking with may be steering one’s ideas and decisions towards, e.g., dominant perspectives and approaches in the training data. Of course, talking with individual humans means dealing with bias as well, so this isn’t unique to AI. One way to address this when working with humans is to seek out multiple perspectives from people with many different kinds of backgrounds and experiences. A worry is that if we rely too much on AI to help us reflect on our own views, develop ideas, and make decisions, we may not take the time to ensure we are getting diverse perspectives and approaches.
Co-intelligence
Mollick is sensitive to the risk of overreliance on AI: “it is true that thoughtlessly handing decision-making over to AI could erode our judgment” (Chpt. 3). Similarly, when talking in Chapter 5 about relying on AI to write first drafts of various kinds of work, Mollick states that doing so can erode our own creativity and thought processes. It’s easy to just go with the ideas and approaches the AI comes up with for the first draft, and even if people then revise and edit, they may lose the opportunity to contribute their own original thoughts and approaches to some extent. Further, if “we rely on the machine to do the hard work of analysis and synthesis, [then] we don’t engage in critical and reflective thinking ourselves,” and we lose the opportunity to “develop our own style.”
To work with AI as a co-intelligence, as a partner in some sense, humans need to remain strongly in the loop, according to Mollick: “You provide crucial oversight, offering your unique perspective, critical thinking skills, and ethical considerations” (Chpt. 3). The idea is to use AI to support human thinking and creativity, not replace them. When people work with AI but keep their own creativity and criticality in the loop, the results are better than if they just rely on what the AI outputs (Chpt. 6). In addition, when people “let the AI take over instead of using it as a tool, [this] can hurt human learning, skill development, and productivity” (Chpt. 6). Working with AI as a co-intelligence means continuing to develop and practice human skills while augmenting them with AI where it makes sense.
Further, Mollick states: “Being in the loop helps you maintain and sharpen your skills, as you actively learn from the AI and adapt to new ways of thinking and problem-solving” (Chpt. 3). I find this focus on learning from AI and adapting our ways of thinking interesting… how might working with AI more regularly shape the ways humans tend to think and make decisions? How can humans remain critical and active partners in this relationship, retaining what is valuable and useful in human ways of thinking and being? And which of those ways is it important to retain, even as people adapt somewhat to how today’s AI systems work (or tomorrow’s, or next decade’s, or…)?
Centaurs and cyborgs
One of the memorable (for me) aspects of Mollick’s book is his use of the metaphors of centaurs and cyborgs for working with AI:
Centaur work has a clear line between person and machine, like the clear line between the human torso and horse body of the mythical centaur. It depends on a strategic division of labor, switching between AI and human tasks, allocating responsibilities based on the strengths of each entity.
Cyborgs blend machine and person, integrating the two deeply. Cyborgs don’t just delegate tasks; they intertwine their efforts with AI …. (Chpt. 6)
Working with AI as a centaur, from what I understand, would mean dividing tasks between the person and the AI. Mollick gives the example of having AI produce graphs from data while the human decides on statistical analysis approaches. A cyborg way of working, by contrast, would mean weaving human and AI activity together on a task, such as doing some writing, asking the AI for feedback, and revising, or asking the AI to help finish a thought or a paragraph in a useful way (and then revising as needed). Mollick suggests that people start off working with AI as a centaur and may then gradually start to act as a cyborg; at that point, they will have “found a co-intelligence” (Chpt. 6).
This idea of being a cyborg goes back to the questions at the end of the previous section, around how working closely with AI and adapting to it (and having it adapt to us) may change human ways of thinking, acting, and making decisions. Beyond sharing tasks, both co-intelligences, as it were, are likely to be changed by this relationship, and I find it very interesting to consider who and what humans might become, and what we should hold onto.
One might argue that we are already cyborgs to some extent, as what and how we think, write, and interact are significantly shaped by technologies of many kinds, from handwriting to typesetting to word processing, the internet, and much more. Somehow, to me, what Mollick calls the “alien mind” of AI feels like an even deeper level of connection between human thinking, creativity, and technology, though I haven’t thought about this deeply enough to have much more to say about it yet. I also find it more worrying: we may lose more of what is valuable in being human by interacting extensively with entities that act like humans in various ways. Or maybe we will find ways to retain what is meaningful in our relationships to ourselves and each other, and even find new forms of such meaning.
Human relationships with each other
So far in this discussion of Mollick’s book, I’ve been focusing on humans’ relationships with AI (how AI systems may act as a kind of co-intelligence) and on our relationships with ourselves (how overreliance on AI may erode human capacities). I’m also very interested in humans’ relationships with each other as AI use increases.
Though talking about human relationships isn’t directly his point in this section, I appreciated Mollick’s discussion in Chapter 5 of using AI to write recommendation letters, performance reviews, speeches, feedback, and other works that are meaningful because they reflect what other humans think and the time and effort they have put into them. Part of the issue, he points out, is that AI systems could in some cases actually do a better job than a human would. But the results lose an important sense of meaning, and Mollick argues that “we are going to need to reconstruct meaning, in art and in the rituals of creative work” (Chpt. 5).
This leads me to wonder: what degree of AI use may still preserve the important meaning, the (indirect) connection to another person that can come from such works, and what degree of use may erode that meaning too much? This is likely an empirical question, one that could be studied by asking people about their perceptions of AI-written or AI-augmented work in domains where that work is made meaningful partly because it was produced by a particular human and is meant to express their own views.
For example, I’m reminded of a research study I heard about this week in which the researchers examined student perceptions of feedback produced by humans vs. AI: “AI or Human? Evaluating Student Feedback Perceptions in Higher Education” (Nazaretsky et al., 2024; open access preprint here). Interestingly, the authors report that students in the study tended to revise their opinions of various aspects of the AI feedback after learning it came from AI, including on measures of “genuineness,” “objectivity,” and “usefulness” (Section 5, p. 294). Among other conclusions, they note that their study “reveals a strong preference for human guidance over AI-generated suggestions … , indicating a fundamental human inclination for personal interaction and judgment” (p. 295). There is much more that could be said about this paper, and I may discuss it (and related studies) in more detail in another blog post.
Conclusion
A topic common to both Mollick’s book and Vallor’s The AI Mirror, discussed in an earlier blog post, is the danger of outsourcing too much of our own capacities for critical and creative thinking and decision-making to AI. Mollick’s book, though, is I think much more positive about the potential value of humans working with AI for various purposes (creativity and art, teaching and learning, work, and more), and it provides many practical ideas for doing so. I have tended to focus on some of the more critical aspects of Mollick’s book, which reflects my own interests and sense of caution. I am very interested to work with others to figure out just what kind of cyborgs we might become, but I am also likely to be a voice of critique, as I fear there may be a fair bit to lose in our relationships with ourselves and others. I look forward to also figuring out what we might gain, though!