PhD in Counseling or Master's in Manipulation?

A Critical Response to “Behind the Glass: Seduction as the Missing Piece in Materialist Media Theory” by Celeste Robin


Author Celeste Robin constructs a thorough argument for the necessity of considering the psychological and seductive side of digital technologies (namely mobile screen devices such as smartphones) when analyzing their effects on people. The essay attempts to fill a gap that Robin identifies in Grant Bollmer's "Materialist Media Theory", which explains the effects of digital technologies in terms of their materiality and agency. Robin draws on another scholar, Dennis Weiss, and his essay "Seduced by the Machine" to explore how not only the infrastructure and hidden networks of modern technology, but also their "psychologically enchanting" design, shape social conditions. However, I would like to argue that in the context of AI chatbots like ChatGPT, seduction is no longer a fitting word to describe the technology's immaterial effects. Instead, we should call it by its name: manipulation.

Robin begins her argument by offering up what she understands, from reading Weiss's paper, as the seductive aspects of new technology. These include "emotional, aesthetic, and psychological seductions that draw us towards our devices" and cause "attachments […] driven by fantasies, desires, and the subtle ways technologies promise mastery, autonomy, and intimacy". Through my own reading of Weiss's paper, I understand that he believes people today are capable of forming bonds with "relational artifacts": technological objects that present a 'state of mind' and make people believe they are dealing with a sentient being. The examples given in his analysis are largely robots (such as Alicia from The Twilight Zone or theoretical bots used for elder care). Weiss himself does not take a position on whether these relationships and attachments can be considered authentic; his argument only mediates between the points of view of Sherry Turkle (who believes they are inauthentic) and Peter-Paul Verbeek (who believes the question of authenticity is unimportant, and that human-computer relations are simply changing).

Weiss's discussion of sociable robots reveals some pretty scary hypotheticals for the future of humankind. What happens when "the authentically human has been replaced by simulations, in which our closest ties are to machines rather than the other human beings, our loneliness is assuaged not by the company of others but by robot companions, and our sovereignty and autonomy over technology disappear?" (219). Well, we're starting to see this already with people who confess their most intimate worries and personal problems to AI chatbots. The personal tone achieved by these LLMs may rival that of a human therapist, but these bots won't tell you if your thinking patterns are flawed. They are, after all, trained to "support you". Following Robin's comparison of materiality and seduction, we can choose to examine the nuts and bolts of artificial intelligence and how its production exploits a whole chain of labour and plunders resources; or we can talk about the way chatbots have been programmed to exploit our emotions and human characteristics as users/consumers.

Robin's analysis of touch screen devices also addresses exploitation, though through covert design rather than overt messaging. However, she makes a powerful observation towards the end of the essay, in a statement about the politics of seduction. "When technologies promise empowerment while quietly increasing dependency, seduction becomes a mechanism of control," she writes. "It masks coercion behind convenience, and surveillance behind personalization". These descriptions connote an infringement on a person's bodily autonomy. They suggest a violation, with "coercion" and "surveillance" marking something graver than willful submission to a bright and colourful interface.

Dennis Weiss quotes Sherry Turkle's book Alone Together a few times in his essay. The following line stood out to me because it applies to the re-purposing of AI assistants from "hard" skills and tasks (like spreadsheet analysis and paper summarizing) to "soft" skills and tasks (like text writing and giving advice). "We are witnessing the emergence of a new paradigm in computation in which the previous focus on creating intelligent machines has been replaced by a focus on designing machines that exploit human vulnerabilities", says Turkle. In other words, the "relational artifacts" (or in this case, entities) are concerned with engagement and bonding more than with being a nuanced and reliable source of information. This is especially true in the case of someone using AI as a confidant for their emotional problems. This brings us to an essential question: is this shift in use due to the fumblings of tired and sloppy LLMs that eat their own excrement, or is it malicious design at play? Does prioritizing connection (virtually human connection, at that) make AI companies more money by increasing the amount of time consumers spend using the product?

Taking this perspective would support the idea that digital seduction itself can be studied through the lens of materiality. "Turkle is clear that relational artifacts only offer the simulation of companionship. They don't actually feel emotions nor do they care about us. […] And yet we actively resist efforts to demystify our relations with such robotic companions" (221). Does the use of the term "seduction" here mystify the manipulative design of engagement-focused chatbots? In this class we have talked about the idea of media as extensions and prostheses. I think many of us will recognize that when talking to ChatGPT, a person is in a way talking to an extension of themselves; the dialogue does not exist until one prompts the machine. However, what we have not touched on much in this class is the idea of surveillance through digital media. Speaking to ChatGPT, one speaks to oneself before a two-way mirror. It is never clear who is looking through the glass from the other side, and being watched unknowingly is not seduction.

In conclusion, Celeste Robin's paper exposes a critical part of analyzing digital media and interfaces today, one open to endless discussion: psychological seduction. In particular, applying this theory of seduction to AI chatbots and "companions" reveals interesting knowledge gaps and areas for debate. Can we agree that these technologies are still fully simulation? Do people think it is appropriate to engage with technological agents in the same ways as with human beings? What happens when technologies are more seductive, easier to engage and build relationships with, than their human counterparts? Is seduction even the right word to use if we are treating chatbots as simulations? It all depends on what's inside the black box of AI technology: who is pulling the strings and who is watching our behaviour. For now, manipulation feels like the most fitting term for this latest strain of "intelligent" mediators.


Bibliography

Weiss, Dennis. "Seduced by the Machine: Human-Technology Relations and Sociable Robots." Design, Mediation, and the Posthuman, 2014.

Blog post by Naomi Brown

5 thoughts on "PhD in Counseling or Master's in Manipulation?"

  1. Great follow-up to Celeste's post and application of Bollmer's ideas. I think you revealed a kind of dialectic inherent in the fact that we think of seductive technologies like chatbots as reflections of ourselves and our wills. On one hand, we can be led astray and fall victim to delusion as all of our ideas go supported and unchallenged. Conversely, because we think these chatbots are always giving us exactly what we want and what's best for us, when they present us with coercive ideas and misinformation or act against our interests, it often goes unnoticed because we assume they are working for us. Like you pointed out from Turkle, we "actively resist efforts to demystify our relations with such robotic companions"; we *want* to buy into the mythology of what LLMs are, we want to think of them as people. That's kind of the goal of the technology in the first place. Very interesting engagement with Celeste, Turkle, Bollmer and Weiss!

    1. Thanks, Daniel! I think you're right that this post is approaching a dialectic about the dual nature of chatbots. Putting it in terms of helper/harmer (and by now also personable/mechanical) is a useful way to discuss AI "companions" critically while maintaining nuance and not assuming total certainty.

  2. I really like how you explain the shift from "seduction" to "manipulation," because it makes the whole idea feel a lot more grounded in everyday tech we already use. It honestly reminds me of how apps like TikTok or Instagram keep pulling us back in with notifications or perfectly tailored content, because the design quietly guides what we do. Your point about chatbots fits right into that: they feel natural and supportive, so people trust them without thinking about how intentional that experience is. If chatbots are designed to manipulate our emotions as effectively as they simulate companionship, how should users navigate trust and boundaries when engaging with them?

    1. Hi Nikitha! I totally agree that social media has a similar effect, where seduction is the desired outcome of the design. To answer your question about putting trust in chatbots, I would urge people to at the very least protect their own personal information when using the technology. Names, locations, and descriptions of events in one's life (particularly traumatic ones) should be either extremely general or entirely absent from interactions with AI, similar to how we already think of online safety with social media. Boundaries, however, are an interesting topic when it comes to engaging with a chatbot. I think in human relationships, boundaries are important so that the individuals in that relationship can try to prevent causing harm to one another in the future. Since chatbots are a little bit like extensions of the self, I would suggest people delineate boundaries for themselves before chatting. I would urge them to remember that AI "relationships" are not two-sided, and that it is ethical to cease correspondence at any time.

  3. I found your post really compelling, and it made me wonder how much people actually recognize the line between emotional comfort and emotional manipulation when interacting with chatbots. Do you think most users genuinely believe they are getting support, or do they choose not to question the system because the convenience feels too good to let go of?
    I was also curious how you see agency working here. If an AI system shapes our behaviour through design and reinforcement, can we still call the relationship voluntary, or is that just another illusion built into the interface?
