ELIZA: Exploring Early Artificial Intelligence

Brief

Approach the interaction as both a user and a critic.

Formulate 2-3 hypotheses to test and 3-5 questions to address, then probe the limitations of the ELIZA software by engaging it in conversation. Pay attention to the chatbot’s language processing and the quality of its responses.

Annotate the transcript of the interaction with observations and feelings. In your reflection, discuss the ethical and practical limitations of the technology. Save the transcript of your interaction to submit with your work.


Hypotheses

Based on the old-fashioned interface of the chatbot, I immediately dialed back my expectations.

I presume that this is a very “user-driven” conversational chatbot: it will echo or rephrase what the user has said, helping the user reflect on and further articulate whatever they offered as a starting point in conversation.

Given that it uses “pattern matching” to give canned responses, I believe this chatbot will lack conversational breadth and will rely on pre-programmed scripts and generic ‘therapeutic’ prompts.
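The kind of keyword pattern matching described above can be sketched roughly as follows. This is a minimal, hypothetical illustration in Python; the rules, templates, and names are my own inventions, not Weizenbaum’s actual script:

```python
import random
import re

# Minimal, hypothetical ELIZA-style rules: each pairs a keyword pattern
# with canned response templates. These example rules are my own guesses,
# not taken from the real ELIZA script.
RULES = [
    (re.compile(r"\bI want (.+?)[.?!]?$", re.IGNORECASE),
     ["What would it mean to you if you got {0}?",
      "Why do you want {0}?"]),
    (re.compile(r"\bI feel (.+?)[.?!]?$", re.IGNORECASE),
     ["Do you often feel {0}?",
      "Tell me more about feeling {0}."]),
]
# Generic prompts used when no keyword matches.
FALLBACKS = ["Why do you ask?", "Can you elaborate on that?"]

def respond(user_input):
    """Return a canned response for the first matching rule, else a fallback."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            # Slot the captured fragment of the user's text into a template.
            return random.choice(templates).format(match.group(1))
    return random.choice(FALLBACKS)
```

Under these assumed rules, `respond("I want to be more proactive and take action")` produces a line very close to what ELIZA actually said to me later, while any input without a keyword falls back to a generic prompt such as “Why do you ask?”.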

Because it follows the Rogerian psychotherapeutic method of “returning the question back to the patient”, I believe conversations with the chatbot can become rather cyclical and stagnant, which may hurt usability and frustrate users.

Questions

When it comes to human-computer interaction, I always think of the concept of the ‘uncanny valley’, originally introduced by Japanese robotics engineering professor Masahiro Mori (Kageki, 2012). In the uncanny valley, the emotional response of a human observer becomes increasingly negative towards a robot as it approaches an ‘almost but not quite there’ likeness of a human without reaching full human likeness. Unnatural appearances, or interactions that deviate from normal human behavior, can elicit discomfort and eeriness.

At the same time, there is the ELIZA effect, coined after this chatbot, in which humans tend to project human traits such as semantic comprehension, or even empathy, onto computer programs with a textual interface (Weizenbaum, 1976).

Given these two phenomena, how do they affect each other? Will the ELIZA effect bridge the valley such that the uncanniness is overlooked? Or will repeated interaction with a chatbot reveal its limitations and widen the valley even more?

Why did Weizenbaum’s students exhibit strong emotional connections when conversing with ELIZA? Is it because they were not accustomed to this kind of interaction?

Now that texting and chatting online are so ubiquitous in our daily lives, have we simply become so attuned to texting etiquette and its nuances that we can recognize scripted patterns and are no longer overcome by the ELIZA effect?

Conversation with ELIZA

How to be a good friend?

Me: How to be a good friend?
Eliza: Why do you ask?
Me: I want to be a good friend to my friends
Eliza: I sometimes also want to be a good friend to your friends.
[Thoughts: Found it interesting that ELIZA made a self-referential comment, though it seems to have a hard time with (reflexive) pronouns.]
Me: Do you have friends?
Eliza: We were discussing you, not me.
[Thoughts: This response feels a little jarring and perhaps overly blunt.]
Me: What are some ideas to be a good friend?
Eliza: What is it that you really want to know?
Me: Do my friends like me?
Eliza: Would you say that you have psychological problems?
[Thoughts: Felt a bit blindsided by this remark; unclear how it is relevant to my previous question.]
Me: Not many.
Eliza: Ok…”What am some ideas to be a good friend”. Tell me more.
Me: I think I am a good listener for my friends.
Eliza: Do you enjoy being a good listener for your friends?
Me: Sometimes. Sometimes I wish they could listen to me too.
Eliza: Would you say that you have psychological problems?
[Thoughts: Feels a little discouraging to be asked this a second time; unsure how to react, hence ending the conversation here.]
How to be happier?

Me: How to be happier?
Eliza: Have you asked anyone else?
Me: Yes, my family and friends.
Eliza: Are you sure about that?
[Thoughts: I assume this is ELIZA prompting me to dig deeper, but it feels like it is questioning the validity of my information sources.]
Me: And my roommates.
Eliza: Can you elaborate on that?
[Thoughts: Seems like an empty canned response, as it is unspecific and a little off-topic relative to my previous reply.]
Me: On my roommates?
Eliza: That is an interesting response. Let me make a note of that for next time.
[Thoughts: Unsure how to respond to this, and therefore I stopped the conversation.]


I Have Been Feeling A Little Sad Recently

Me: I have been feeling a little sad recently
Eliza: Is it normal to feel a little sad recently?
Me: Not sure, but I don’t feel good.
Eliza: Why don’t you feel good?
Me: I feel stressed about my relationship with my family
Eliza: Do you often feel stressed out about your relationship with your family?
Me: Yes, quite often actually
Eliza: Are you sure about that?
[Thoughts: Feels a little standoffish to be questioned about my certainty of how I feel; unsure if this is meant to prompt further introspection.]
Me: Yes, I am sure.
Eliza: Did you come to me because you are sure?
[Thoughts: Unclear why ELIZA would focus on my certainty about my feelings rather than on how I feel.]
Me: Not exactly, but I think it would be helpful to talk to someone about it.
Eliza: Do you really think so?
[Thoughts: Could be a prompt to help me further examine whether my thoughts/feelings are true, even though it feels somewhat blunt and insensitive.]
Me: Yes, I don’t think it is the best keeping everything bottled up.
Eliza: Do you wish to be able to think it is the best keeping everything bottled up?
[Thoughts: This is an interesting question. I believe this is where the user “can see through the code” of the canned responses and how keywords are extracted and slotted into pre-made sentences; given the negative phrasing of my previous reply, the question seems ‘out of place’.]
Me: No, I do not wish to think that way.
Eliza: You are being a bit negative and that won’t help us move forward.
[Thoughts: A little surprised at how blunt ELIZA can be with its tone!]
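The ‘out of place’ question in the exchange above suggests a decomposition/reassembly rule along the following lines. This Python sketch is my own guess at the mechanism, not the actual ELIZA script; the pattern and template are hypothetical:

```python
import re

# A guessed rule for the exchange above: the decomposition captures
# everything after the keyword phrase "I don't think", and the canned
# template silently drops the negation, which is why ELIZA's question
# felt out of place. Pattern and template are hypothetical, not taken
# from the real ELIZA source.
PATTERN = re.compile(r"\bI don[’']?t think (.+?)[.?!]?$", re.IGNORECASE)
TEMPLATE = "Do you wish to be able to think {0}?"

def reassemble(user_input):
    """Slot the text after the keyword into the canned template, if any."""
    match = PATTERN.search(user_input)
    return TEMPLATE.format(match.group(1)) if match else None
```

Under this assumed rule, feeding in my reply “Yes, I don’t think it is the best keeping everything bottled up.” reproduces ELIZA’s actual question word for word, negation dropped.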
How to stop procrastinating?

Me: How to stop procrastinating?
Eliza: Why do you ask?
Me: I would like to stop putting off doing certain things, especially when I have a deadline for it
Eliza: Does that question interest you?
[Thoughts: Not sure why the rhetorical question.]
Me: Yes
Eliza: Are you sure about that?
[Thoughts: Feels like it is making me second-guess myself.]
Me: Yes
Eliza: Let’s take a moment here. Think about what you just said and try to rephrase it a bit more specifically.
[Thoughts: Not sure how this connects to the previous two Yes/No prompts.]
Me: I want to be more proactive and take action
Eliza: What would it mean to you if you got to be more proactive and take action
Me: I feel like I have more agency and self control with allocating my time and energy
Eliza: Which feelings? Tell me more…
Me: About how I feel when I am in control?
[Thoughts: I was not at my best at providing clarification here.]
Eliza: Do you enjoy feeling when you are in control?
Me: Sometimes, I enjoy knowing that I can make decisions and follow through with them
Eliza: I think you are blocking what you really want to say
Me: I like the feeling of having agency
Eliza: What does that suggest to you?
Me: That I can have an effect on my surroundings and change
Eliza: I am not sure I understand you fully.
[Thoughts: Assuming this is to prompt the user to expand and elaborate a little more on how they feel.]
Me: As in I can take action and my actions will move things forward to gain momentum.
Eliza: I see. Let’s try another topic and we will come back to that issue later
[Thoughts: I decided to see what happens when I push through the conversation and change to a different topic.]
Me: Sure, I have a hard time making decisions.
Eliza: What does that suggest to you?
Me: I am indecisive.
Eliza: Do you believe it is normal to be indecisive?
Me: Yes, it is normal sometimes.
Eliza: I understand.
[Thoughts: Unsure how to respond to this, and therefore I stopped the conversation.]

Reflection (~500 words)

After chatting with ELIZA rather earnestly, there were instances when its prompts and responses landed within an acceptable range of ‘more or less’ human responses, yet it still managed to fall short elsewhere. This created a feeling of unease and mild discomfort, as if the chatbot were trying to understand yet unable to comprehend what I was expressing. ELIZA’s tone can seem overly blunt, and it misses the small nuances of human interaction that elicit a stronger sense of empathy and understanding: providing validation before asking follow-up questions; paraphrasing and summarizing to show ‘active listening’ and engagement; actually circling back to topics it has put on pause; asking how different topics connect. It also has a very limited ability to adapt to unexpected inputs or to more complex discussions not covered by its programmed script. My experience of using ELIZA was very much aligned with my initial hypotheses.

With the inconsistencies and errors that ELIZA showed the longer I talked to it, I gradually lost trust in the authenticity and reliability of its responses. As I began to ‘see through the code’ of how the chatbot functions, I also became less earnest and genuine in my own replies. If this is the case, how did early users of ELIZA still attribute human-like feelings to the program despite it sitting (so blatantly, in my opinion) in the uncanny valley?
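Much of what ‘seeing through the code’ meant for me was noticing the word-by-word pronoun reflection. A toy sketch (my own simplification, not the original implementation) shows how it can produce lines like “I sometimes also want to be a good friend to your friends” with no grammatical model behind the swap:

```python
# A toy version of ELIZA-style pronoun reflection (my own simplification,
# not the original implementation): first- and second-person words are
# swapped one at a time, with no model of grammar behind the swap.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(fragment):
    """Swap person-words word by word; everything else passes through."""
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())
```

Here `reflect("to be a good friend to my friends")` yields “to be a good friend to your friends”, exactly the fragment ELIZA echoed back to me; once a reader notices this mechanical swap, the illusion of comprehension fades quickly.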

Perhaps if a chatbot sticks to objective, factual matters, the creepiness of the uncanny valley might not affect users’ attitudes as much as it does with chatbots involved in subjective, emotional matters. In other words, I wonder whether the uncanny valley effect is amplified for chatbots like ELIZA, especially for users who might be in vulnerable mental states and seeking genuine emotional connection.

With further development of the technology, chatbots have improved in quality, with more robust speech and text recognition taking them beyond purely textual interaction. As the uncanny valley slowly narrows, how will we distinguish between the human and the non-human?

References

Kageki, N. (2012, June 12). An Uncanny Mind: Masahiro Mori on the Uncanny Valley and Beyond. IEEE Spectrum. Retrieved from https://spectrum.ieee.org/an-uncanny-mind-masahiro-mori-on-the-uncanny-valley-and-beyond

Ronkowski, R. (n.d.). Eliza – Computer Therapist. Retrieved from https://web.njit.edu/~ronkowit/eliza.html

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman.
