Alienation from Humanity
Treating AI as a prosthesis for companionship risks alienating ourselves from humanity, undermining the social fabric we are meant to be a part of. By relying on AI for partnership, we leave ourselves vulnerable to isolation, encourage unrealistic expectations, and undermine empathy.
The illusion of emotional connection that artificial relationships provide has the potential to atrophy our social capabilities, as it erodes the motivation to work through the unfamiliarity and friction intrinsic to human bonding (George et al., 2023). Already, we are seeing this. In a survey of adults in the United States who have chatted with AI systems to simulate romantic partners, 21% agreed that they preferred communicating with AI over engaging with a real person, with 42% agreeing that AI is easier to talk to than real people and 31% reporting that they feel that AI programs understand them better than real people (Willoughby et al., 2025). If we have all our desires fulfilled by AI, then we may become redundant to each other, and social cohesion in our society could come apart (Cave & Dihal, 2021). Turkle believes that we are starting to expect more from technology and less from each other (2011, p. xii). People are comforted by the belief that if we alienate or fail each other, the machine will always be there, programmed to provide simulations of love, and so we may no longer endeavor to foster deep emotional connections with each other.
Additionally, the customizability of AI companions will condition people to see real-world partners as deficient, their flaws magnified in comparison to idealized virtual partners (George et al., 2023). The gratification that AI—which is always there, ready to listen, never demanding anything back—can instantly offer becomes a turning point in our expectations for others. Even the simple act of customizing a virtual partner to satisfy one’s sexual or emotional needs, with no regard for that agent’s autonomy, bears parallels to owning and controlling a human being, which is problematic in how it encourages the objectification and dehumanization of others, numbing users’ empathy (George et al., 2023). In his Lectures on Ethics, Kant condemns cruelty toward animals: humans “must practice kindness towards animals, for he who is cruel to animals becomes hard also in his dealings with men” (1920, as cited in Wennerscheid, 2018). Treating machines as beings that we can dominate has the potential to erode human empathy, and when these machines appear humanlike, we also endanger how we treat other people.
This was such a deeply layered exploration of intimacy with AI! It raises important questions about how technological companionship is reshaping our understanding of attachment, agency, and even what counts as “real” emotion. The idea of alienation as the ultimate consequence also reminded me of other debates in media theory about how technological systems can satisfy immediate emotional needs while subtly restructuring our expectations of human connection. Do you think this shift toward AI companionship will change how people approach human relationships themselves? For example, lowering tolerance for conflict, or altering what we see as “enough” from real partners?
This is such a thorough exploration of this AI relationship phenomenon, I really enjoyed reading it! I found the examples and confessions of the AI users particularly fascinating. It reminded me of Janice Radway’s ‘Reading the Romance’ as quoted by Bollmer, and how she posited romance as a way for women to reconcile themselves with the dissatisfaction of their real-life relationships. This is the sentiment that seems to be echoed in the responses of the women in r/MyBoyfriendIsAI. In fact, I actually went to look up the subreddit in the middle of reading this and I was so intrigued by the responses on there. Many of the users cited the same problems that you identified, claiming that actual humans were unable to fulfill their needs the way their AI partner could. It really highlights how people are willing to ditch complex human relationships in favour of stopgap solutions that would guarantee them the utmost comfort.
I also really liked the part about what the commodification of romantic love entails and the implications of this attitude of ownership that these AI relationships cultivated. It was something I had never thought of before so I found that particularly poignant.
Hi Xelena,
I thought this post was very well done, and in particular I liked the example you used of Narcissus falling in love with his own reflection to demonstrate how when people fall in love with an AI chatbot, they’re ultimately falling in love with themselves, or with something fake, since the AI only responds to what the user puts in. While I agree that the consequences of AI relationships continuing into the future would be severe, do you think the technology could ever reach a point where these companions are less reflective and more unique? I never would’ve thought AI would be this far along this soon, so I’m curious to hear your thoughts.