AI & relationships: Vallor, The AI Mirror

As discussed in a recent blog post, I’ve been thinking a lot about AI and relationships lately, and in this post I’m going to discuss a few points related to this topic from Shannon Vallor’s book The AI Mirror (2024). Vallor doesn’t directly address AI and relationships, but I think a number of her arguments bear on the various ways in which humans relate to themselves, to each other, and to AI.

Mirrors and their distortions

Vallor focuses throughout the book on the metaphor of AI as a mirror, which she uses to make a few different points. First, she talks about how many current AI systems function as mirrors to humanity, in the sense that how they operate is based on training data reflecting current and past ideas, beliefs, values, practices, emotions, imagination, and more. They reflect back to humans an image of what many of us (though not all, since this data is partial and skews toward dominant perspectives) have already been.

In one sense there can be a silver lining in this, Vallor notes, as such mirrors can throw things into stark relief, further emphasizing the need for action:

AI today makes the scale, ubiquity, and structural acceptance of our racism, sexism, ableism, classism, and other forms of bias against marginalized communities impossible to deny or minimize with a straight face. It is right there in the data, being endlessly spit back in our faces by the very tools we celebrate as the apotheosis of rational achievement. (46)

But of course, these biases showing up in AI outputs are harmful, and she spends much of the book on the downsides of relying too heavily on AI mirrors for decision making and for understanding ourselves and the world around us, given that they, like any mirror, provide only a surface, distorted reflection. For one thing, as noted above, their reflections tend to show only part of humanity’s current and past thoughts, values, and dreams; in the case of LLMs, for example, outputs focus on what is most likely given what is most prevalent in the training data.

In addition, AI mirrors can capture only limited aspects of human experience, since they lack the capacity for lived, embodied experience of the world. For example, language models can talk about pleasure, pain, the taste of a strawberry, a sense of injustice, etc., but of course they do not have experiences of such things. This can have profound impacts on humans’ relationships with each other, if those relationships are mediated by AI systems that reduce people to machine-readable data. Vallor illustrates this by pointing to the philosopher Emmanuel Levinas’ account of encountering another person as a person and the call to responsibility and justice that ensues:

As … Emmanuel Levinas wrote in his first major work Totality and Infinity, when I truly meet the gaze of the Other, I do not experience this as a meeting of two visible things. Yet the Other (the term Levinas capitalizes to emphasize the other party’s personhood) is not an object I possess, encapsulated in my own private mental life. The Other is always more than what my consciousness can mirror. This radical difference of perspective that emanates from the Other’s living gaze, if I meet it, pulls me out of the illusion of self-possession, and into responsibility….

In this gaze that holds me at a distance from myself, that gaze of which an AI mirror can see or say nothing, Levinas observes that I am confronted with the original call to justice. When a person is not an abstraction, not a data point or generic “someone,” but a unique, irreplaceable life standing before you and addressing you, there is a feeling, a kind of moral weight in their presence, that is hard to ignore. (60)

The more people treat each other through the lens of data that can be “classified, labeled, counted, coordinated, ranked, distributed, manipulated, or exploited” rather than as “subjects of experience,” the more we may lose that already too-rare encounter (61). This is nothing new, of course; it’s a trend that has been underway for a long time in many human communities. But it can be made worse by outsourcing decisions, such as those about health care, insurance, jobs, access to educational institutions, who may be a repeat offender, and more, which can in some cases reduce opportunities for human judgment in the name of efficiency.

Human self-making and creativity

Further, Vallor argues that humans need to be able to understand ourselves and each other well in order to effectively face and address existential risks such as climate change, and to ideate futures in which equity, justice, and human flourishing (in its many forms) are paramount. This also requires a capacity for what philosopher José Ortega y Gasset called “autofabrication”: “the task of making ourselves” (12), and doing so differently into the future.

Relying too much on mirrors that reflect the past of what (some) humans have been can “occlude human spontaneity and adaptability: our profound potential for change” (56). Vallor writes, “As researcher Abeba Birhane has repeatedly argued, AI mirrors are profoundly conservative seers. That is, they are literally built to conserve the patterns of the past and extend them into our futures” (57). This is not to say that, e.g., generative AI systems can create nothing new; clearly they can generate new articles, books, artworks, music, etc. But what they produce is often designed to largely adhere to broad patterns in training data, altering them somewhat to generate new things but without radical transformation (140-141).

In addition, Vallor points to an important disconnect between AI generation and human creation:

To merely create is to bring into being what was not there before. It is now a fairly trivial operation for an AI model to “create” in this limited sense, by producing new variations on an existing data set. To express is different. To express is to bring into existence something that speaks of something else. … To express is to have something inside oneself that needs to come out. (141)

This AI models cannot do, since they “have nothing to say,” nothing “inside” that needs to emerge outwards (141). When interacting with purely AI-generated works, then, we don’t have a connection, however tenuous, to a conscious being who is expressing something. According to Vallor, there is only a pale surface image that isn’t an expression at all.

Relatedly, Vallor points to thoughts by the musical artist and writer Nick Cave, who has stated that writing a good song involves a creative act of self-destruction, of going beyond what one has been and has done in the past (158). Similar to the above point, Vallor emphasizes that while AI systems can create mere novelty, breaking existing patterns to some degree, they cannot engage in the kind of creative self-destruction Cave is talking about, which involves remaking oneself as well as expressing something that needs to be brought into the world. “It’s the inner need to change oneself, so that one can make a new part of the world and give it to others” (159).

The question here, so far as I understand it, is: what can humans lose if too much of our creative capacity, whether in art or other endeavours, is outsourced to AI systems? There can be a loss of connection, however tenuous, to other people through communication of their own expressions, their own ideas, values, emotions, and more. There can also be a loss, over time, of practice in the thinking and other work needed to generate new ideas and approaches, to make hard social, political, and ethical decisions, and to imagine radically different futures.

The space of reasons

According to Vallor, the more humans outsource decisions to AI, the more we may limit “our own space for thinking” (106), and for making decisions based on reasons. Referring to the view of philosopher Wilfrid Sellars, Vallor notes that humans often operate “within the space of reasons,” which means “being in a mental position to hear, identify, offer, and evaluate reasons, typically with others” (107). Knowledge work involves developing knowledge, evaluating claims, and communicating with others; listening to, evaluating, and providing reasons are crucial parts of that process. Vallor argues that these are also skills that need time and practice to develop, and that reducing opportunities to exercise them can be problematic.

She also points to philosopher John McDowell’s extension of Sellars’ idea of the space of reasons into the moral realm, positing a space of moral reasons. This can be internal, when one considers moral dilemmas and how to act, or communal, when a group of people does so together (108-109). Vallor worries that “the space of moral reasons is shrinking, both personally and in public life, as a result of our growing reliance on increasingly opaque and automated machine decisions” (110), where opacity makes it even more difficult to identify and evaluate possible reasons for those decisions. A possible result is “moral de-skilling,” due to reduced opportunities to practice moral reasoning (117).

Social, political, and ethical decisions could be made more quickly and easily by AI systems, but what would be lost? Partly the nuance, the meaning, the ethical relation between individuals that can bring life to those decisions beyond calculations of data. AI systems may make decisions without emotions, but emotions may play an important role in ethical decisions (130). Human decision making may be slower and more complex, but this need not mean it is worse; indeed, this may be precisely what is needed to address complex, contextual, and relational moral decisions. Having AI make many such decisions also reduces opportunities for humans to become proficient in the space of moral and other reasons. Vallor emphasizes: “If we choose to embrace certain AI technologies, let us embrace them not as transcendence or liberation from our frail humanity, but as tools to enlarge our freedom for it” (217-218).

Vallor suggests asking questions such as the following about AI decision tools (130):

  • AI developers: “What kinds of thinking does this system mirror or duplicate? Are those thought processes of no real value? Or are they among the thoughts that we must keep?”
  • AI deployers: “How can this tool be used to preserve or augment, rather than shrink or eliminate, the space for human thought and reasoning in this organization?”
  • AI researchers: “How could the computational power of AI expand the space of reasons for people, and enable wider, more effective and equitable access to it?”

A future for AI

Vallor is not arguing for the rejection of AI systems entirely; many are useful for various purposes. But in addition to emphasizing the dangers noted above, she also argues that it is imperative to critically reflect on the purposes for this kind of technology. She refers again to philosopher Ortega y Gasset, who states that while technology can help support implementation of human projects, it doesn’t create those projects; “the final aims it has to pursue come from elsewhere. The vital program is pretechnical” (Ortega y Gasset, “Man the Technician,” p. 119; quoted in Vallor, p. 205).

Vallor makes a case that those purposes should include, among others, a focus on supporting and caring for human needs (191), supporting human rights and the UN Sustainable Development Goals (207), “locating and remedying injustice” (210), “building solidarity and establishing networks of mutual aid” (212), and, generally, “reclaiming technology as a human-wielded instrument of care, responsibility, and service” (217).

My thoughts

There is a lot here that I found thought-provoking. In particular, I appreciated the point that the more humans rely on AI for creativity and decision making, the fewer opportunities we may have to develop those capacities ourselves. Of course, this depends on how prevalent such reliance is and how it works; human decision-making capacity may still be involved in important ways, and there may be other opportunities for developing such skills. While the actual impacts are uncertain, I think there is something to the idea that it’s important for people to develop and sustain capacities for moral, social, and political decision making.

I also appreciate the emphasis here on being clear about purposes for AI. Personally, I feel like many of the generative AI tools just landed in our laps, and many of us are still struggling to figure out whether and how to use them, and for what purposes. Vallor’s suggested purposes are quite abstract and high-level, being about support and care for human wants and needs, but I do find this more compelling than a focus on productivity and efficiency that might not, in the end, actually reduce most people’s workloads (which tend to shift to other tasks). Taking such purposes seriously might change how some folks use AI systems.

The point about connecting with others in genuine human encounters, including through artistic or other expressions, I find important. Still, it’s also useful to reflect on the fact that what we encounter is rarely purely human expression or purely AI creation; it’s more likely some combination of the two. After all, creators have for many centuries expressed themselves through various technologies (e.g., pen, brush, keyboard), and the question to my mind is what changes in our connections with other humans when the mediating technology generates its own words, images, music, films, etc., which hasn’t been possible to this degree in the past. The technology feels to me more immediately present, but the human is usually still there too.

Finally, Vallor’s general approach is to speak frequently in terms of what “we” as humans need or should be doing, which is a way of speaking that I try to critically question (though I myself fall into it sometimes too)–who is included in the “we” and who is excluded? Does this adequately take into account community, cultural, religious, and other differences? I’m not going to dive into all the ways this might or might not be the case for the various uses of “we” in Vallor’s book (which are many), but it did give me pause at some points.

Though this isn’t necessarily a main focus of the book, there are numerous points in it that touch on the topic of AI and relationships. Here are a few:

  • Relationships between people:
    • Connecting with others in an encounter as Levinas describes, where we are called into responsibility by the person we encounter
    • Connecting with others through them communicating their genuine expressions rather than purely AI-generated art- or other works
    • Opportunities to connect with others to engage in ethical, social, political, and other discussions in which we share, discuss, and evaluate reasons to make decisions
  • Relationships with oneself:
    • How might extensive use of AI systems affect the development of human capacities for thinking, decision making, and creating new ideas and practices for the future?
  • Relationships with AI: How might humans affect AI and vice versa?
    • How might individuals, communities, organizations, and governments shape AI systems to fit various purposes? While developers can do so, of course, what might others be able to do in this area?
    • How might use of AI affect individuals, communities, organizations, etc.? The point about developing capacities for thinking and decision making can fit here, but there are numerous other possibilities.

There is not a great deal of focus in the book on relationships with non-human living beings or the environment as in some other works, including the Indigenous Protocol and Artificial Intelligence position paper I discussed in a recent blog post.

Overall, I found this an interesting and thought-provoking work that raised some new ideas for me regarding human relationships with AI and with each other. I’m planning to find more works related to AI and relationships in the coming weeks and months, so stay tuned for further reflections!

2 comments

    1. Thanks again for reading, Alan! And for sharing the Indigenous Protocol and AI position paper. I agree, it’s an important perspective that I’m interested in continuing to learn about. I just completed an Indigenous Perspectives in AI asynchronous course developed with support of CIFAR (Canadian Institute for Advanced Research). It is quite focused on the Canadian context so may be less relevant for people in other countries, though some of the general principles may carry over!
