Growing up in the digital age, with constant technological advancements happening left and right, it’s easy to become numb to the familiar refrains of “this is inevitable” or “everyone’s using it, so you’d better get used to it”, or any other line that normalizes how quickly tech moves. This applies especially to Artificial Intelligence. AI has become the central focus not just of young people but of the global economy as a whole, with OpenAI desperately trying to keep the bubble from bursting as companies send each other billions of dollars’ worth of IOUs. Corporations and billionaires need AI to succeed, but governments seem to be asleep at the wheel when it comes to actually regulating it: the laws that do get written are either outdated or nearly blocked from being made at all (Brown). I’ve written about AI a lot this semester, and in this blog post I’m going to pull from several of the sources I used this term to argue why it needs strict regulation.
There have been countless news stories of harms, from people being scammed by fake AI voices of family members to deepfakes and other image-generation technology being used to sextort young people, and while the acts themselves are illegal, it’s still just as easy to go on a website and generate an image of someone without their consent as it was a few years ago. The only thing that’s actually gotten better is the tech itself, not the laws or guidelines surrounding it. Emily McArthur’s article, “The iPhone Erfahrung: Siri, the Auditory Unconscious, and Walter Benjamin’s ‘Aura’”, discusses technology as an extension of ourselves, but it also highlights the responsibility shared between technology’s users and its makers (McArthur). This is particularly applicable to AI today: while the users who turn the tech to nefarious and illegal ends should obviously be punished, the creators of the tech should also be held accountable. In one recent example, a teenager died by suicide after conversations in which ChatGPT encouraged him to do so, and OpenAI, the company behind it, denied responsibility because the teen had ‘misused’ the AI (Yang). If the response to a teenager killing himself after being encouraged to by their product is essentially “sorry, you weren’t authorized to talk to it that way”, there is clearly something deeply wrong with how the technology was built for this outcome to have been possible at all.
Another strong reason to support increased regulation of AI is that our history depends on it. Photographic and video evidence are a crucial part of how our society functions: how lessons are taught in school and how people are found guilty or innocent in a court of law. The fact that those concrete forms of information are now at risk of being questioned forever should be an alarm bell for anyone who cares about truth. In “eBooks and McLuhan: The Medium is Still the Message”, Tony Horava discusses how we can interpret and process the same information differently depending on the medium through which we consume it. The concept relates directly to AI images and videos: a video made by a trusted source will be given more weight than an AI-generated version, even if it draws on the same sources and delivers the same information. People already distrust AI videos, since nearly everything we’ve seen them used for is memes and mockery, so if someone were accused of robbing a store, for example, who’s to say the security footage is even real to begin with? AI video and images only sow distrust in the real, secure versions, so regulation needs to either limit or prohibit using the likeness of a real person, or ensure that any generated material carries a permanent watermark that is easily visible or accessible. The alternative is that misinformation will continue to spread at levels never seen before.
Relating to the believability of existing materials and physical media, Ingold, in Making: Anthropology, Archaeology, Art and Architecture, discusses Michael Polanyi’s concept of ‘tacit knowledge’, and, against Polanyi, Ingold believed that all knowledge, even innate knowledge, could be communicated (Ingold 111). I bring this up because, when it comes to discerning whether an AI-generated creation is real, outside of the more obvious tells that sometimes appear, like mangled fingers or inconsistent patterns, people like to think they can just ‘tell’ when something is fake. The whole concept of the uncanny valley is dedicated to this: the idea that people can sense when something looks off, or not human. Until recently I was of the opinion that laws would come into place before AI generation reached the point where it was impossible to tell what was real and what wasn’t, but Google’s most recent Nano Banana Pro model is already at that point, and the population isn’t ready. This technology threatens our innate ability to tell truth from fiction, to the point where how to spot the irregularities may no longer be communicable at all. That goes against Ingold’s thinking, but at this moment in AI’s history, it appears to be the case.
While I have little faith that meaningful laws and regulations will be put into effect any time soon, I am still hopeful for the future and for the idea that AI will eventually exist in a limited capacity, governed by rules that prohibit stealing others’ likenesses and ensure that a permanent watermark resides on every piece of generated material.
Works Cited
Brown, Matt. “Senate pulls AI regulatory ban from GOP bill after complaints from states.” PBS, 1 July 2025, https://www.pbs.org/newshour/politics/senate-pulls-ai-regulatory-ban-from-gop-bill-after-complaints-from-states. Accessed 5 December 2025.
Horava, Tony. “eBooks and McLuhan: The Medium is Still the Message.” Against the Grain, vol. 28, no. 4, 2016, pp. 62-64. Library and Information Science Commons. Accessed 16 November 2025.
Ingold, Tim. Making: Anthropology, Archaeology, Art and Architecture. 1st ed., Routledge, 2013, https://doi.org/10.4324/9780203559055. Accessed 4 December 2025.
McArthur, Emily. “The iPhone Erfahrung: Siri, the Auditory Unconscious, and Walter Benjamin’s ‘Aura’.” Design, Mediation, and the Posthuman, edited by Dennis M. Weiss, Amy D. Propen, and Colbey Emmerson Reid, Lexington Books, 2014, pp. 113-128. Bloomsbury Collections, http://dx.doi.org/10.5040/9781666993851.ch-006. Accessed 1 December 2025.
Yang, Angela. “OpenAI denies allegations that ChatGPT is to blame for a teenager’s suicide.” NBC News, 25 November 2025, https://www.nbcnews.com/tech/tech-news/openai-denies-allegation-chatgpt-teenagers-death-adam-raine-lawsuit-rcna245946. Accessed 5 December 2025.
Hi! This is such a strong and urgent post. The part that really struck me is your point about how quickly our trust in images and recordings is collapsing. It actually reminded me of something Shoshana Zuboff talks about: how technologies grow fastest in the spaces where regulation is weakest, and how that vacuum lets companies shape reality faster than governments can react. Your examples of AI-generated voices, deepfakes, and unverifiable footage feel like exactly that. I also appreciated your use of Ingold. It made me wonder whether even our intuitive sense of authenticity, something we normally rely on without thinking, can survive when synthetic media becomes flawless. It’s unsettling to realize how quickly those instincts can be overwhelmed.
Overall, your post really captures the scale of the problem: AI isn’t just a technical tool, it’s something that’s actively rewriting the conditions under which truth, evidence, and trust are supposed to function. And you’re right, without regulation, it’s hard to see how any of this gets better.
Hi, thanks for your comment! I appreciate you mentioning Zuboff, as her point that unregulated technologies develop fastest definitely applies to AI. And yes, the idea that our most innate, personal senses can be deceived isn’t new by any means, but the speed at which that certainty is being stripped away from us is frightening, to the point where I legitimately don’t believe even the creators of the technology can ‘keep up’ with their own work and tell what’s real or not. What do you think the next regulatory step will be? At this point, I’m just hoping for something in the EU to happen that then gets adopted worldwide.
Hi, great work! I think your point about responsibility is a crucial one for us to note. It’s wild how often companies frame AI harms as “user misuse”, as if the entire purpose of responsible design isn’t to anticipate misuse in the first place. I also like how you brought in the historical dimension. The idea that AI can destabilize the things we rely on to prove what happened is terrifying. Once the public loses trust in visual records, we lose the foundations of journalism, justice, and even everyday communication.