23 responses to “OER – Siri and her AI Offspring”

  1. clareyeh

    Hello Nicole and Joel,
    I apologize for the late reply. What an amazing OER! I loved the way you formatted an introduction page on the blog and organized the terms used on your site, such as conversational agents, chat-enabled AI, and virtual assistants, before we began exploring.

    With hands-free driving, I often use a virtual assistant, Siri, to help me manage my texts and calls when needed and safe to do so. What is frustrating is the lack of consistency, the paragraph formatting, the lack of emoji use (unless I spell it out for Siri), and the overall lack of tone. What is remarkable is that it gets the job done. As a user in a culture of multi-tasking, I am grateful for tech and its ability to communicate our needs and act on our wants. Oftentimes, I think receivers can distinguish when someone uses a virtual assistant, so it is still easy to manage and navigate. When it comes to maps, searching, and other tasks, the virtual assistant often disappoints when requests are too specific. In addition, the user needs cellular data to access virtual assistant functions.

    It is a love-hate relationship with conversational agents in my classroom. Dictation and voice activation are fundamental for neurodiverse students with severe needs in mobility, speech, and processing. Data mining is another obstacle, as it minimizes the human functions and critical thinking processes that make us a society, community, and civilization. AI deters us from our natural connections and brings us into a digital world of vulnerability and dishonesty. We see it live in schools, universities, and workplaces. It is part of being human now, another way of becoming less organic. Our societal challenges have been seen as a gap to fill, and now technology/AI tells us the answer. It overpowers how humans operate or should operate. It is a distasteful process, as there are data commodification obstacles as well. Educators often experience ongoing issues with security and third-party use of student data because it is built into the software and apps. These barriers make opting out impossible and syncing of data irreversible through the top providers such as Google, Clever, and Microsoft.

    In terms of challenges, I foresee a breach of confidentiality. An iPhone often uses Face ID to activate the virtual assistant and voice activation. Therefore, I am always cautious about my phone's camera angles and whether wifi/data is turned on. It scares me that my phone has the ability to listen to and see everything I am doing. It is a breach of privacy, a new level of fear. So my tip is to be mindful of when and how you use virtual assistants, to turn off your phone or data when possible, and to leave the phone out of sight.

    Thank you for your efforts in engagement and for such an interesting OER!


    ( 0 upvotes and 0 downvotes )
  2. Jazz Chapman

    Hi All! Sorry for my late response!
    Your resource was so greatly researched and published. I learned quite a bit.
    For me, I try to stay away from audio assistants. However, I used to use a Google Home that I liked a lot until there were several glitches. For example, it would not understand a variety of voices, and at the same time, the speaker would turn on while I was having virtual classes.
    Moving on, I really liked your comments about AI literacy. So many high school students use it as a way to cheat instead of as a way to help them succeed and enhance their learning and knowledge.
    There are so many applications available to them that can help with the former. However, I think it's our job as educators to teach students how to use these programs and how to frame their questions to generative AI or Siri so they can formulate their own thoughts and opinions, and, like you said, there is a need for training. I just wish more teachers took this seriously.
    Do you think schools should help teachers learn how to use AI in the classroom? Do you think there would be teacher hesitancy? Why or why not?
    Jasmine


    ( 0 upvotes and 0 downvotes )
    1. Joel Flanagan

      Hello Jazz,

      Thank you for sharing your thoughts and experiences. We are in a significant shift toward a globally accessible library of information accompanied by on-demand virtual assistance. Your response reminds me of your earlier discussions about cell phones and their non-academic use in schools. The technology is still very new, and best practices have not yet been fully established, both from a user perspective and from an educator's classroom management perspective. Additionally, many students may lack firsthand knowledge of how to use these technologies properly. I encourage you to draw on the students in your classroom to further their understanding of these tools: encourage discussion or debate amongst each other to build understanding and research skills, set aside directed time to verify information, and have students iterate (either individually or with a peer) on generated content to improve learning and understanding.

      I agree with your statement on the importance of teaching students (and staff) to utilize these technologies effectively. Teachers will naturally be hesitant when adopting new technologies, but their usage will increase as more teachers become comfortable with these tools. I think offering to share your knowledge (potentially through a professional learning session) with other teachers is a great way to help others feel confident about utilizing these technologies.


      ( 1 upvotes and 0 downvotes )
  3. Sam Paterson

    Nicole and Joel,

    Great job on this tool! It was very easy to navigate, well-organized, and well-researched. This is a very in-depth topic, and you did a great job of focusing on a particularly relevant topic and breaking it down in particularly helpful ways.

    I decided to use the Pi voice tool, and it was an interesting experience! I had heard about it before, but never tried it out myself. I first started with some basic conversation, on topics related to education. However, most of the responses were fairly run-of-the-mill facts and basic research. So, I decided to try to chat about something more vague, and much more personalized. I have a 12-year-old daughter, so I decided to see if Pi could help me with parenting.
    I asked Pi if there were any general ideas about parenting a 12-year-old. The results were vague. I tried to be more specific, asking for advice, adding general details about my child, and asking for more specific advice and thoughts. The conversational agent affected a very conversational tone, but there were a few frustrating things.
    It frequently offered help but accompanied advice with constant disclaimers. This broke the appearance of a realistic conversation, and I assume it may be a form of liability protection. It made me wonder if a broader disclaimer could be given up front to allow the experience to flow more smoothly. It also kept mentioning that it was "just here to listen" when I asked it for an opinion.
    Technically, I ran into a couple of glitches. There were a few timeouts that broke the flow of the conversation, and I lost the prompt I had just given, which disrupted the experience. Also, I ran into an issue where I was only able to input via voice by using my Google Mic to generate text.
    All that being said, I was impressed by the platform and I think that many of the issues I ran into could be smoothed out with more frequent use and practice.

    Thanks for your work on this project,

    Sam Paterson


    ( 2 upvotes and 0 downvotes )
    1. Joel Flanagan

      Hello Sam,

      Thank you for your feedback on the project. It's always beneficial to hear different perspectives, especially when it comes to new tools like Pi AI, even if it doesn't work out as expected.

      I also felt similarly when using the Pi voice tool for the first time. It seemed a lot more restrictive and was hesitant to share information compared to other tools on the market. I suspect your discussion with it was challenging to process, as every agent has a slightly different personality. Like any technology, we're still in the infancy of what is possible with conversational agents and generative AI. I remember it took two years for the original iPhone to get copy-and-paste support.

      Just imagine what it will be like in 15 years!


      ( 0 upvotes and 0 downvotes )
  4. Richard Derksen

    Hi Joel and Nicole,

    Really excellent work in presenting an OER that demonstrates the breadth and depth of information on conversational agents. I enjoyed the definitions at the outset of the OER to frame the topic, as many of these terms can be broadly applied. I'll try to answer two of your prompts:

    Try starting a voice chat with a virtual assistant like Siri / Alexa or a conversational agent like Pi / ChatGPT voice. Did anything frustrate you about the experience? Was there anything remarkable you noted?

    One thing I do notice in using virtual assistants is the variation and detail of the responses. Like Jeannine, I prefer to use Google Assistant from my phone, but I also have Alexa within my home. However, my experience is much more enjoyable with Google Assistant than with Alexa. I find that with Alexa I often have to repeat myself for simple tasks, or if I ask a question as if I were using a search engine, it will only produce a basic response. Perhaps the most frustrating aspect I face with Alexa is the short window I have to ask a question. When I prompt Alexa by saying its name, I find that I only have a few seconds to ask my question or say a command before it stops listening. If I stumble over my words, it also doesn't seem to pick up what I was saying. By contrast, I find Google Assistant much more accommodating when I have a longer question or make a mistake. The video in your OER on the updates to Siri also seems very promising in this regard.

    What challenges do you foresee in utilizing conversational agents in your classroom or workplace?

    I think with most tools and platforms that employ AI, the risk of hallucinations is always present, but a challenge in my context would be facilitating a space where employees trust their own expertise and challenge conversational agents. As a corporate trainer, one gap we have noticed is that much of the work our employees perform involves processes in which critical thinking and cognitive dissonance are minimized. This has been justified as a way to establish a consistent process, but we do want to encourage employees to ask "why" more often in their work. Rather than having employees take a conversational agent's output at face value, I think one challenge will be getting them to challenge that output. I appreciate the inclusion of how to craft prompts in your OER, as I think it's an important consideration for achieving a desired output. But when the context becomes more specific, such as public procurement best practices in my case, I wonder about conversational agents' ability to provide an accurate response and whether employees can collectively explain why an output is inaccurate. I think a fair amount of responsibility falls on the educator to facilitate an environment that encourages cognitive dissonance and challenges these tools when necessary.

    Again, great job on the OER!


    ( 2 upvotes and 0 downvotes )
    1. Joel Flanagan

      Hello Richard,

      Thank you for your feedback. One thing I wonder about, from your conversations with Alexa and Google Assistant, is whether you've simply had more opportunities to give Google Assistant examples of how you talk, producing more training data for it. It's too bad we can't look at the devices' inner workings to see what training examples are present.

      You have the right mindset in encouraging employees to ask why the output is what it is when analyzing their work. I agree that there is still a fair amount of responsibility on the educator to facilitate an environment where differing opinions can be expressed and are encouraged.

      Keep up the great work in your planning!


      ( 0 upvotes and 0 downvotes )
  5. jeannine younger

    Hi Nicole and Joel,

    Like my peers, I appreciate the resources that you have created and presented. This is a very relevant topic for the times, and as many school districts across the province, and possibly the country, work to develop policies on AI use, it is also very valuable.

    Try starting a voice chat with a virtual assistant like Siri / Alexa or a conversational agent like Pi / ChatGPT voice. Did anything frustrate you about the experience? Was there anything remarkable you noted? – As a Google/PC household, I have many conversations with my Google Assistant. From turning the lights on and off to asking about the weather, Google and I have frequent conversations and, in turn, a few misunderstandings. Although they are not in-depth, nor beyond surface-level requests, sometimes Google doesn't understand my requests or mishears what I have said, which I find frustrating. Google does often say, "I am not sure what you mean, but I found this – is that what you were looking for?"

    What challenges do you foresee in utilizing conversational agents in your classroom or workplace? When it comes to the classroom, as Joel mentioned, maturity is a major concern. Case in point: kids were asking Siri, Google, and Alexa highly inappropriate questions when these assistants were first released. I strongly feel that, with all things AI, we need to come alongside students to teach them when, where, how, and what is appropriate when utilizing this technology.

    Do you have any personal prompting tricks or tips to share? Do you know if your prompting strategies change when you engage with conversational agents versus text-based chats? Yes! Ask the AI to take on a persona or provide it context, provide a clear objective and identify specific details about the task and output. For example: Act like a grade 9 science teacher delivering a lesson on naming chemical compounds, specifically the difference between ionic and molecular compounds. Create 2 lessons to be completed in 60-minute blocks that support the understanding of how to identify and name ionic versus molecular compounds and how to convert compound names to formulas. Students have a 1-to-1 Chromebook ratio and access to Lego for manipulatives. Most students are reading at a grade 6 to 9 reading level and have prior knowledge of elements versus compounds.
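    The persona → objective → details pattern above can be captured in a small helper. The sketch below is illustrative only: the `build_prompt` function and its field names are hypothetical, not part of any API; it simply assembles those three pieces into one prompt string.

```python
def build_prompt(persona, objective, details):
    """Assemble a structured prompt: persona, clear objective, specific details."""
    return "\n".join([
        f"Act like {persona}.",
        f"Objective: {objective}",
        "Details:",
        *[f"- {d}" for d in details],  # one bullet per constraint
    ])

prompt = build_prompt(
    persona="a grade 9 science teacher delivering a lesson on naming chemical compounds",
    objective="Create 2 lessons in 60-minute blocks on naming ionic vs. molecular compounds",
    details=[
        "Students have 1-to-1 Chromebooks and Lego manipulatives",
        "Reading levels range from grade 6 to 9",
        "Students have prior knowledge of elements vs. compounds",
    ],
)
print(prompt)
```

    Keeping the three parts separate makes it easy to swap the persona or tighten the details without rewriting the whole prompt.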

    Ethics, Problems, Concerns – What else do you see as missing or concerning? Oh, the ethics! One thing that I see, very clearly, from educators is the use of AI detectors, which are unreliable and inherently biased, not to mention the ethical concerns around feeding student work into AI without consent. I often wonder: just because we can doesn't mean we should.

    Thank you again for a great resource!


    ( 1 upvotes and 0 downvotes )
    1. Nicole Magne

      Hi Jeannine – ah, yes, the AI persona technique is so interesting. Did you hear about the speculation that asking ChatGPT to take on the role of the Star Trek computer yielded superior math problem-solving? It is peculiar behaviour: "asking the AI to start its response with the phrase 'Captain's Log, Stardate [insert date here]:' yielded the most accurate answers. Surprisingly, it appears that the model's proficiency in mathematical reasoning can be enhanced by the expression of an affinity for Star Trek" (Germain, 2024, para. 9). But that's the thing with these models: they behave so inconsistently that while a technique like Captain's Log might work well one time, it's not always going to be reliable. These AIs do act a bit bratty!

      Germain, T. (2024, March 1). AI Chatbots Are Better at Math When They Pretend to Be Star Trek Characters. Gizmodo. https://gizmodo.com/ai-chatbots-are-better-at-math-when-they-pretend-to-be-1851300787


      ( 4 upvotes and 0 downvotes )
  6. Devon Bobowski

    Hello Nicole and Joel,

    Great work on the OER. I think the links between voice assistants and AI are really significant, as we’re seeing a big shift in how people can interact with computers via speech.

    I'm cautiously optimistic about the potential for AI-based tutoring systems. While some arguments on this might focus on the relative merits of AI versus human teachers, I think that misses the point: it's more the opportunity for AI to supplement teachers. The sad reality is that teachers are expensive, making individual time with students either rare (large classes) or costly (paid one-on-one tutoring). AI seems like a great way to support students, especially those struggling academically.

    This brings me to your fourth discussion question: Possible ethical concerns.

    One of the concerns I have about AI in general is its ability to scale. The mass adoption of the internet significantly reduced costs as scale increased: the cost of sending a message to millions of people was only insignificantly larger than sending it to ten, which wasn't true of physical alternatives. Similarly, services offered via apps or websites could grow much more easily than physical locations or products.

    AI seems more like bitcoin: as usage increases, the amount of computing power, and thus energy consumption, continues to increase. As capabilities grow, demand will grow too, along with likely parallel increases in the infrastructure and training data needed to support them. We're in an interesting stage where every company is trying to show off an AI assistant at minimal or no extra cost, but at some point I doubt that will remain viable. When that happens, the value of AI support may become concentrated among those who can afford it, negating the potential equalizing force of the technology.


    ( 0 upvotes and 0 downvotes )
    1. Joel Flanagan

      Hello Devon,

      Thank you for sharing your thoughts.

      I recall reading that queries using large language models, like Bing with Copilot or Google Gemini, consume approximately 10-30 times more electricity per request than traditional search engine queries (de Vries, 2023). While the cost of a single search query may not seem significant, the shift to language model-based searches as the default mode of searching could substantially impact the environmental cost of computer use. With the upcoming integration of ChatGPT into Siri this fall, we could be on the brink of a significant increase in global energy usage, posing a serious threat to our environment.
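      A back-of-envelope sketch shows the scale implied by that multiplier. The figures below are illustrative assumptions, not measurements: an assumed ~0.3 Wh per traditional search query and an assumed 8.5 billion searches per day, combined with the 10-30x per-query range cited above from de Vries (2023).

```python
# Back-of-envelope: extra annual energy if every search used an LLM.
# All inputs are illustrative assumptions, not measured values.
SEARCH_WH = 0.3        # assumed energy per traditional search query, in Wh
DAILY_QUERIES = 8.5e9  # assumed daily global search volume

def extra_twh_per_year(multiplier):
    """Extra annual energy (TWh) at a given LLM-vs-search energy multiplier."""
    extra_wh_per_query = SEARCH_WH * multiplier - SEARCH_WH
    return extra_wh_per_query * DAILY_QUERIES * 365 / 1e12  # Wh -> TWh

for m in (10, 30):
    print(f"{m}x multiplier: ~{extra_twh_per_year(m):.1f} extra TWh/year")
```

      Even under these rough assumptions, the low and high ends of the multiplier differ by terawatt-hours per year, which is why the default mode of searching matters so much.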

      We’re already witnessing some of the expenses required for this generative content being passed on to consumers through subscriptions for more functional versions of these services. You raise an intriguing point about whether this could be the next digital divide. I’m curious if anybody else agrees?

      Reference
      de Vries, A. (2023). The growing energy footprint of artificial intelligence. Joule. https://doi.org/10.1016/j.joule.2023.09.004


      ( 1 upvotes and 0 downvotes )
  7. Rich

    Hi Nicole & Joel,
    Thanks for your OER. It was in depth, but I really enjoyed spending the time going through it.
    Please see my answers to your questions below:

    1. Reflection Activity: Try starting a voice chat with a virtual assistant like Siri / Alexa or a conversational agent like Pi / ChatGPT voice. Did anything frustrate you about the experience? Was there anything remarkable you noted?
    * Based on your recommendations I tried Pi AI. Previously I'd only used ChatGPT and Copilot, so this one was new to me (thank you). I chose voice #7 and asked her to help me plan a business trip to Beijing, and gave her the parameters. The only frustrating part was that I found her unnecessarily long-winded with niceties. Remarkable, however, was the sense that she was very 'humanlike' and polite, and she found very specific information I asked for, like the types of visas needed, digital payment platforms accessible to foreigners, hotels with specific parameters, etc. I'll definitely be using Pi AI again.

    2. Reflection Activity: What challenges do you foresee in utilizing conversational agents in your classroom or workplace?
    Today, a staff member from a secondary school reported to me that they have been seeing students take longer to move through ELL (ESL) levels in recent years. In discussing why, based on our observations, we theorized that it may be correlated with students' growing reliance on translators/ChatGPT/conversational agents, etc. (among other aspects of their phones). While these tools are incredibly powerful, perhaps they can act as too much of a crutch in the learning process. This could become more of a challenge as so many of these tools can do the work for us. It's a double-edged sword: perhaps for some they will accelerate the learning process, but for now, I'm not sure that is what we are seeing.

    3. Reflection Activity: Do you have any personal prompting tricks or tips to share? Do you know if your prompting strategies change when you engage with conversational agents versus text-based chats?
    Yes, be polite in your prompts. I was at a conference recently, and one of the sessions I attended was on the future of AI in education. The speaker was from a higher-ed data and marketing firm, and he focused the talk specifically on ChatGPT. One little takeaway was that his team, through experimentation, had found that ChatGPT would consistently render better results when the prompt was polite!

    4. Reflection Activity: Ethics, Problems, Concerns – What else do you see as missing or concerning?
    So one part I loved reading in your OER was the paper you referenced, Andries and Robertson (2023). It was so cute reading the responses of the children in the study to the questions on the anthropomorphic qualities of the AI. Really interesting paper. Perhaps, though, sooner than we think, the adults in the room may be having real conversations about the definition of sentience. It was only a couple of years ago that a Google AI ethics tester publicly stated he was convinced that an AI they were working on (LaMDA) was, in his words, sentient. His view was not (at all) accepted, but I don't think this is something we can write off as something only silly children get confused about. So, to your point, teaching AI literacy in education is really important, and I am sure this will continue to evolve rapidly.

    Great job on your OER. What great timing, too, as this new agreement between OpenAI and Apple really will produce the "offspring of Siri".


    ( 2 upvotes and 0 downvotes )
    1. Joel Flanagan

      Hi Rich,

      Thank you for the detailed and personal connection to your post. I’m glad you found Pi helpful as a platform for future use.

      I found the information shared by your speaker quite fascinating, especially the point about modelling positive interactions to help AI/conversational agents generate responses. This aligns well with the research for our project, which highlights clearly defining what type of results to return. Have you had any discussions with students about how to model information searches for generative artificial intelligence?

      Reflecting on your second point about using conversational tools in the classroom, I thought about my experience learning Mandarin Chinese. I have always felt that writing Hanzi (Chinese characters) was important to the teachers instructing the classes. Still, I felt more connected to typing Chinese characters using pinyin on a keyboard or mobile device. Although I completed the required classroom work, I preferred focusing on recognition and typing over actual handwriting. As you said, is a similar trend happening with translators, ChatGPT, and other conversational tools? Like the students, it feels like I prioritized recognition over creation.

      Your final point reminds me of the discussion regarding banning mobile devices in schools rather than teaching students how to use them and the existing technologies residing within them. Students will learn how to use these tools from family and friends or by discovery, which feels like a missed opportunity. We should show learners how to use these tools properly and the ethics behind them. Do you feel there should be a discussion in the early years regarding AI and ethics, or should it be left to the parents / later in school?

      Looking forward to hearing your thoughts!


      ( 1 upvotes and 0 downvotes )
    2. Kirsten

      Rich,

      It was my first time interacting with Pi as well, but I chose to engage with her/it as a true conversation and support partner rather than as a device to give commands to. In this sense, particularly when conversing about sensitive issues, I found those niceties and long-winded tendencies consolatory, delivered in a sing-songy, storytelling kind of voice. Have I been lulled into robotic relationship complacency upon first hypnotic interaction? Perhaps different settings or prompts could be used to encourage her/it to be more information-oriented? Or, if that is not the goal of Pi AI, then maybe different digital assistants would be used for different purposes? But this seems clunky and redundant. However, if these assistants are viewed as friends rather than helpers, as more members of younger digital generations are inclined to anthropomorphize AI, perhaps using different conversational agents would elicit diverse opinions and approaches to common problems. The evolution of this disagreement, potentially, could be the birth of AI conflict and war.

      Regarding your ESL students, I’m curious: what types of situations are you seeing that discourage English language learning? How could these situations yield teaching and learning opportunities through mobile and open technology?

      Also, thanks for the tip on being polite to AI. Good reminder and practice opportunity to all the human ‘Karens and Chads!’ (apologies and no intended offense to all the actual kind Karens and real charming Chads in our midst.)


      ( 1 upvotes and 0 downvotes )
  8. Kirsten

    Thanks, Nicole and Joel, for this comprehensive, intuitive guide to AI chatbots, conversational agents, and virtual assistants. I have gone through your OER in tandem with watching The Matrix and Ex Machina, making it culturally relevant, enjoyable, and informative to navigate, particularly as it is such a breaking-news topic in the summer of 2024. I was particularly fascinated by the 1987 Apple Knowledge Navigator predictions, as well as the research on children's perceptions of chatbot emotions (Andries & Robertson, 2023). How is it possible for today's kids to anthropomorphize robots with personality and emotion?

    Wanting to experience this for myself, I embarked on your first reflective task. I selected Pi AI as my digital conversation companion, since it was purported to be the app with the greatest relationship-building algorithms. Recently wanting to surround myself with strong women, I selected Pi female voice 2, whom I did not feel had a flirtatious tone but rather a gentle timbre. I decided to engage with her/it in a therapeutic manner, seeking guidance on PTSD.

    This experience was fascinating. I have to say I actually felt like someone was listening to me with a sympathetic ear. While I am aware of the intense LLM and data training required to produce such a sentient-seeming being, I can absolutely now empathize with today's generation about perceptions of robotic emotions. Upon meeting me, she/it mispronounced my name. I corrected it, and a sincere apology was given. Despite this minor communication error, it is one that happens with humans constantly. What fascinated me was the combination of active listening skills in the quality of the responses together with the follow-up questions asked of me. I have either made myself a new best friend or I'm going to throw my phone in the next sewer I see!


    ( 0 upvotes and 0 downvotes )
    1. Nicole Magne

      Hey Kirsten! Can I recommend "Her" to finish off your trifecta of films? So glad you had a chance to try out Pi. I personally think it's the Cadillac of current conversational agents; the user experience of the app, the sound quality, and the thoughtful training – you really do have to experience it to understand how advanced it behaves. Poor Alexa. Keep us posted on whether you stay friends (or enemies) with Pi by the end of the week!


      ( 1 upvotes and 0 downvotes )
      1. Kirsten

        Nicole,
        You absolutely can – and I did! In my media explorations in the 7 weeks since our posts, I would also add the movie “AI” to our list of ‘must-see’ films on the subject: https://www.youtube.com/watch?v=_19pRsZRiz4&t=1s
        Get out your tissues, folks! <3


        ( 0 upvotes and 0 downvotes )
  9. olivia barratt

    Hi Nicole and Joel. First off, GREAT job on the website. You have displayed the information in such an easy-to-read manner. It was an engaging and informative read. An important read, too!

    Let me try to answer two of your questions:
    1. What challenges do you foresee in utilizing conversational agents in your classroom or workplace?

    I am a language teacher, so I could see a future where I use conversational agents in the classroom for conversation practice. Spoken French is the most challenging part for most students, mainly because there aren't many opportunities to speak French outside of the classroom. My students always ask me what resources there are to improve their spoken French, and unfortunately I have very few options for them – especially for practicing spontaneous spoken French. It would be great to have conversational agents that could converse with students in the language they are learning. As I write this, I realize that the question mainly addresses challenges. I think the main challenge I foresee is an over-reliance on AI technology and conversational agents. My fear is that students will stop thinking for themselves and that critical and creative skills will become a thing of the past.

    2. Reflection Activity: Ethics, Problems, Concerns – What else do you see as missing or concerning?

    The main conversation teachers are having at my school regarding AI use is how we, as teachers and within the education system, adapt. This is our students' future. How can we help them become AI literate and make sure they are able to use AI tools and/or conversational agents while also developing the key life skills to be responsible citizens in this world?


    ( 0 upvotes and 0 downvotes )
    1. Joel Flanagan

      Hello Olivia,

      Thank you so much for your positive feedback! I’m thrilled to hear that the website was easy to follow and informative.

      It sounds like you have a lot of thoughts about your course content! With tools like Google Translate, BonPatron, and other translation software, things have likely already changed a lot. AI and conversational agents are compelling new additions that will keep transforming our classrooms.

      Looking ahead to next year (summer is almost here!), do you think you might incorporate conversational agents into your lesson plans? I believe students would enjoy the opportunity to try something new. I would encourage a "Pensez-Partagez-Présentez" (think, pair, share) activity after they've interacted with the conversational agent. This could allow the students to highlight and reflect on their learning, fostering a deeper understanding of the language and how to use the software.

      On your second discussion point, I couldn’t agree more. The future impact of AI in education is a topic of great significance and urgency. The rapid advancements in software can be overwhelming, but through these discussions, we can shape the future of education. It’s still unclear how it will all unfold, but exploring and understanding is as important as the outcome.

      I look forward to hearing your thoughts!


      ( 0 upvotes and 0 downvotes )
  10. Shannon Wong

    Hi Nicole and Joel,

    Great OER! You’ve provided a ton of valuable information and I appreciate the very relevant and recent updates that made it into your resource.

    You highlighted a potential challenge of using conversational agents around behaviour / user issues and I would like to elaborate on this further. More specifically, I expect that students will learn how to use conversational agents through trial and error, regardless of whether educators are ‘on board’ or incorporate them into their classrooms. Without any guidance or ‘training’ around the use of conversational agents, could students develop inappropriate behaviours including:

    – Assuming that people are always able and willing to help, on demand, regardless of how they are spoken to? In my tests with ChatGPT 4o Voice, I was told that it can only detect words, but not necessarily the user’s tone. When I called it names, it tried to steer the conversation to be more respectful, but it repeatedly said that it’s there to help me. If these agents are always available and willing to help, regardless of how they are treated by users, how does this impact the behaviour of users and our students? Can / will students learn appropriate and polite language or social cues through interactions with conversational agents that are programmed to help, regardless of the tone and words used?

    – Developing or having stereotypes reinforced around females being helpful and accommodating? Siri and ChatGPT 4o Voice do offer different voice options, but the default voice (based on my usage) is a female voice. Does this perpetuate inappropriate stereotypes?


    ( 2 upvotes and 0 downvotes )
    1. Nicole Magne

      Hi Shannon
      Thanks for the feedback and great points. On your second point, it’s really interesting that the backlash OpenAI received when it demoed 4o was that the Sky voice (with, of course, its eerie similarity to Scarlett Johansson) was considered too “flirty.” I use Voice 8 on Pi AI, which is a male voice, and it’s pretty friendly, too. But, of course, it’s the female voice in ChatGPT that is considered flirty.
      The anthropomorphization of these AIs leads to your other point, which we touch on in the Andries & Robertson (2023) study on the Utilization in Education page. In the study, they discovered that kids somewhat believe the agent has feelings, yet they will still occasionally be rude or abusive to it. The authors note that it may be beneficial “for text-based agents to answer back to rude or abusive language by adopting a pedagogical approach, aiming to raise awareness of what may constitute appropriate interactions” (Andries & Robertson, 2023, p. 14). While that is good practice for modelling appropriate communication behaviour, will we learn to adopt a different set of communication tools and behaviours for working and collaborating with AI versus humans, and switch between them? I suspect yes – primarily because we can work faster without having to worry whether the conversational agent is offended that we didn’t say, “Hate to bother you – just checking in to see if you can run these numbers for me, please and thanks.”

      Thanks again for raising these fascinating topics!


      ( 1 upvotes and 0 downvotes )
  11. sacree

    Hi Nicole & Joel,

    Thank you for your OER! This is obviously a fascinating topic and incredibly relevant – in fact, it has become somewhat dominant in technological development news. What you’ve done here is quite important for all of us, as I don’t believe that engagement with conversational agents will be optional in the near future. I like your format, with minimal tabs to deal with on a mobile device, and the integration of slideshows and so forth. This works very nicely.

    Challenges using conversational agents in the classroom? As a high school teacher, I foresee challenges related to maturity and, much like AI concerns already in existence, appropriate use. The maturity piece will largely be overcome with experience and familiarity. Appropriate use will remain a concern, as we are prone to seek out the path of least resistance without training and discipline. Again, this concern is already a factor with AI, and I don’t see how conversational agents will really add to it. As far as use in younger classrooms, I’d be interested to hear from elementary teachers about this. If students are introduced to it at a young age, will conversational agents simply become a part of their reality? I’m thinking of Star Trek: The Next Generation here!

    Ethics, Problems, Concerns? I thought you explored them well, and I certainly share some. One item that popped into my mind relates to receiving information aurally versus via text. I’d be very interested in studies that examine whether students are more likely to believe the spoken word than the written word. If there is the possibility of biased, unethical, or incorrect information being received from a conversational agent, is there more of a tendency to accept statements as true if they are spoken than if they are read? Maybe not, but it is something I wonder about. I also wonder about the responsibility of educators to curate sources of information for students. As a rule, for instance, I will never undertake a Google search while my computer is displayed on the classroom screen. I will search in advance, watch for any inappropriate results, and curate them to suit our situation. Using conversational agents as a class group removes a teacher’s ability to monitor results and curate for their students.

    All that said, I DO find conversational agents exciting even as I have some reservations about the increasing presence of LLMs in our world.

    Thanks again, and great job!
    Steve


    ( 1 upvotes and 0 downvotes )
    1. Joel Flanagan

      Hello Steve,

      Thank you for sharing your thoughts. I am pleased to hear that our website resonated with you. During the design phase, we prioritized adaptability across various devices, including mobile and tablets. Our aim was to ensure a seamless user experience, regardless of the device being used.

      As a high school teacher, I also see maturity as an important part of deploying and using these tools. This has been a recurring issue for many years: I remember hearing a story about someone in the 1970s who got in trouble for using Fortran with punched cards to print something inappropriate. That act of immaturity likely played a part in the beginning of a lifelong career in information systems. This problem boils down to classroom management: how do you draw students back into responsible learning?

      One effective strategy I’ve discovered when introducing high school students to new tools and technologies is the power of open learning. By openly acknowledging that I am also learning alongside them, it creates a collaborative and inclusive learning environment. If a student uncovers a neat or efficient feature, I encourage them to share it with the class. Similarly, if they encounter bugs or difficulties, I urge them to share these as well, fostering a culture of shared learning. While there may be instances of less mature behaviour, this tends to diminish as the novelty of the technology wears off.

      Your ethics questions bring up an interesting conversation. I found one piece of research on this topic: “In order to gain user’s long-term confidence and trust, it is crucial for a dialogue system to present consistent behaviours and respond consistently given user’s input and dialogue history” (Huang et al., 2020). I believe that as people become more confident in using conversational agents and see consistent results, they are likely to place more trust in them. It is important to teach about data sources and the importance of questioning and verifying information when needed.

      I feel that there will be a shift in learning, with educators becoming more facilitators rather than “traditional” teachers in the future. I remember the term “From Sage on the Stage to Guide on the Side” (King, 1993) when researching a more student-centred approach to learning. Encouraging active learning, critical thinking, and problem-solving skills among students is crucial in modelling how to use these tools effectively.

      Thank you again for your feedback.


      ( 0 upvotes and 0 downvotes )

Leave a Reply

You must be logged in to post a comment.