
Snapchat AI: Bots and Teens’ Safety

Posted in Mobile Culture

As Artificial Intelligence becomes more widespread and accessible, apps like Instagram and Snapchat have begun embedding AI features into their platforms. For an app like Snapchat, however, whose user base is primarily teens, we must ask what the negative impacts of integrating AI may be.

Adolescents rely heavily on Snapchat to connect with peers, share videos and photos, and engage in conversations in a space they perceive as temporary. The app’s “disappearing” features create an illusion of impermanence (with exceptions such as the “save in chat” option), leading teens to believe their digital actions are momentary, unlike on iMessage or other platforms where content is permanent. This perception creates a false sense of security, leaving young users more prone to risky behaviour and emotional vulnerability on the platform.

The introduction of an AI bot that automatically functions as a “friend” on Snapchat raises serious safety and privacy concerns. Teens can interact with the bot in ways that mimic real human behaviour: it can send and receive messages, respond to any topic, analyze images users send, send images back, and even maintain snap streaks. For vulnerable adolescents, this can lead to emotional attachment, social withdrawal, and reliance on a digital entity rather than real friends. Teens may also become overly dependent on it for advice, validation, or companionship, particularly in situations that call for personal judgment or emotional guidance. For example, a teen could consult the AI mid-conversation with a peer to figure out how to respond, leaning on the bot rather than developing their own social and emotional skills. This is a problem because it can stunt the development of critical thinking and emotional regulation and create an environment where youth engage with AI in dangerous ways.

Privacy is another major concern, as the AI analyzes personal data, including photos and messages, to generate responses. Teens may not fully understand who or what has access to their information or how it is used, putting them at risk of exploitation. In its “Staying Safe with My AI” support documentation, Snapchat itself acknowledges that safeguards can fail, stating, “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit, or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.” If a teen sends sensitive images or messages, there is therefore a risk that the data could be stored, misused, or exposed, leading to both immediate and long-term dangers. This danger is compounded by the fact that Snapchat’s AI is not optional: it is automatically added to users’ friend lists and cannot be deleted.

In my own experience teaching Grade 9 students, I have seen them seek relationship advice from the AI, ask it to answer homework questions, and interact with it as if it were a real friend. By blurring the line between reality and digital interaction, this AI feature can undermine emotional development, social skills, safety, and digital literacy at a critical stage of adolescent growth. As users, therefore, we must question the design choices and motivations behind embedding AI into apps so that we can advocate for safer digital environments.

References:
Snap Inc. (n.d.). Staying Safe with My AI. Snapchat Support. https://help.snapchat.com/hc/en-us/sections/21446373975572-Chatting-with-My-AI



2 Comments

  1. Kyle Gaudreau

    This is a great post. Since I have been mostly off social media for a while now and never quite understood the draw of Snapchat, I sometimes forget just how ubiquitous AI has become.

    I am still not sure what happened, but a friend of mine admitted to using ChatGPT as their relationship guide and occasional therapist over the winter. Given that they are a practicing lawyer, I found this really odd. I admit that my line of questioning may have got me cut off from this person, since I may have been a bit sarcastic, but I sometimes wonder if AI made the call.

    With the recent lawsuits directed at ChatGPT over the platform’s role in teen suicides, I do think ‘AI as therapist’ is an interesting concept, and considering the historical role that programs like ELIZA have played in the history of AI, this issue is clearly not going away anytime soon. Part of me is curious whether we will ever see a scandal in which a social media AI is discovered to be curating content to manipulate relationships between people. While I feel that manipulation of the individual by these programs is assumed by now, it seems that AI may provide the means to manipulate multiple people simultaneously.


    September 20, 2025
  2. Rie

    Hi mmeshi, thank you for bringing this up. I feel this is a very important topic to discuss. One phrase that struck me when reading this post is “illusion of impermanence.”

    A lot of software, including social media apps, messaging apps, learning management systems, and AI tools, is presented as “private,” but it is often unclear how private this data really is. Even if an app promotes itself as “temporary” or “anonymous,” there is no guarantee that it will be completely private. Recently, Forbes reported that passwords and logins were leaked from Apple, Facebook, and Snapchat: https://www.forbes.com/sites/daveywinder/2025/05/23/184162718-passwords-and-logins-leaked—apple-facebook-snapchat/


    September 16, 2025
