
AI Isn’t Being Regulated and I’m Sick of It

Growing up in the digital age, with constant technological advancements happening left and right, it’s easy to become numb to the familiar refrains of “this is inevitable” or “everyone’s using it so you’d better get used to it”, or anything else that normalizes how quickly tech moves. This particularly applies to Artificial Intelligence, as AI has become the central focus not just of young people but of the global economy as a whole, with OpenAI desperately trying to keep the bubble from bursting as companies send each other billions of dollars worth of “IOUs”. Corporations and billionaires need AI to succeed, but governments seem to be asleep at the wheel when it comes to actually regulating it, with the laws that do get written either outdated or nearly blocked from being made at all (Brown). I’ve written about AI a lot this semester, and in this blog post I am going to pull from various sources I used this term to make the argument for why it needs strict regulation.

There have been countless news stories of people being scammed by fake AI voices of family members, and of deepfakes and other image-generation technology being used to sextort young people, and while those acts themselves are illegal, it’s still just as easy to go on a website and generate an image of someone without their consent as it was a few years ago. The only thing that’s actually gotten better is the tech itself, not the laws or guidelines surrounding it. Emily McArthur’s article, The iPhone Erfahrung: Siri, the Auditory Unconscious, and Walter Benjamin’s “Aura”, discusses technology as extension, but it also highlights the responsibility shared between the users and the makers of technology (McArthur). This is particularly applicable to AI today: while the users who turn the tech to nefarious and illegal ends should obviously be punished, the creators of the tech should also be held accountable. There was a recent example of a teenager who died by suicide after conversations in which ChatGPT encouraged him to, and the parent company, OpenAI, denied responsibility because the teen had ‘misused’ the AI (Yang). If their response to a teenager killing himself after being encouraged to by their product is essentially “sorry, you weren’t authorized to talk to it that way”, there is clearly something deeply wrong with how the technology was created for this outcome to even have been possible.

Another strong reason to support increased regulation of AI is that our history depends on it. Photographic and video evidence are a crucial part of our society and how we function as a people, from how lessons are taught in school to how people are found guilty or innocent in a court of law. The fact that those concrete forms of information are now at risk of being questioned forever should be an alarm bell for anyone who cares about truth. In his article eBooks and McLuhan: The Medium is Still the Message, Tony Horava discusses how we interpret and process the same information differently depending on the medium in which we consume it. The concept directly relates to AI images and videos, since a video made by a trusted source on a subject will be given more weight than an AI-generated version, even if it draws on the same sources and delivers the same information. People already distrust AI videos, since all we’ve seen them used for is memes and making fun of others, so if someone were accused of robbing a store, for example, who’s to say the security footage is even real to begin with? AI video and images only create distrust in the real thing, so regulation needs to be in place to either limit or prohibit using the likeness of a real person, or to ensure that any generated material carries a permanent watermark that is easily visible or accessible. The alternative is that misinformation will continue to spread at levels never seen before.

Relating to the believability of existing materials and physical media, Ingold in Making: Anthropology, Archaeology, Art and Architecture discusses Michael Polanyi’s concept of ‘tacit knowledge’, and takes the view that all knowledge, even innate knowledge, can be communicated (Ingold 111). I bring this up because when it comes to discerning whether an AI-generated creation is real, outside of the more obvious tells that sometimes appear, like messed-up fingers or inconsistent patterns, people like to think that they can just ‘tell’ when something isn’t real. The whole concept of the uncanny valley is dedicated to this: the idea that people can sense when something looks off, or not human. Up until recently I was of the opinion that laws would be in place before AI generation got to the point where it was impossible to tell what was real and what wasn’t, but Google’s most recent Nano Banana Pro model is already at that point, and the population isn’t ready. This technology threatens to erode our innate ability to tell truth from fiction, to the point where the irregularities we sense may no longer be communicable, which goes against Ingold’s thinking, but as of this moment in AI history, that is what appears to be the case.

While I have little faith that meaningful laws and regulations will be put into effect any time soon, I am still hopeful for the future and for the idea that AI will eventually exist in a limited capacity, governed by rules that prohibit stealing others’ likenesses and that ensure a permanent watermark resides on every piece of generated material.

Works Cited

Brown, Matt. “Senate pulls AI regulatory ban from GOP bill after complaints from states.” PBS, 1 July 2025, https://www.pbs.org/newshour/politics/senate-pulls-ai-regulatory-ban-from-gop-bill-after-complaints-from-states. Accessed 5 December 2025.

Horava, Tony. “eBooks and McLuhan: The Medium is Still the Message.” Against the Grain, vol. 28, no. 4, 2016, pp. 62-64. Library and Information Science Commons. Accessed 16 November 2025.

Ingold, Tim. Making: Anthropology, Archaeology, Art and Architecture. 1st ed., Routledge, 2013, https://doi.org/10.4324/9780203559055. Accessed 4 December 2025.

McArthur, Emily. “The iPhone Erfahrung: Siri, the Auditory Unconscious, and Walter Benjamin’s ‘Aura’.” Design, Mediation, and the Posthuman, edited by Dennis M. Weiss, Amy D. Propen, and Colbey Emmerson Reid, Lexington Books, 2014, pp. 113–128. Postphenomenology and the Philosophy of Technology. Bloomsbury Collections, http://dx.doi.org/10.5040/9781666993851.ch-006. Accessed 1 December 2025.

Yang, Angela. “OpenAI denies allegations that ChatGPT is to blame for a teenager’s suicide.” NBC News, 25 November 2025, https://www.nbcnews.com/tech/tech-news/openai-denies-allegation-chatgpt-teenagers-death-adam-raine-lawsuit-rcna245946. Accessed 5 December 2025.

Podcast Episode: Is AI Killing Creativity? Or Making It Better?

In this podcast, Siming, Eira, and Aubrey explore whether Gen AI should be considered a creative medium and whether it suppresses or improves creativity. Through examples from video editing, 3D modeling, and design, we examine what AI mediates and reflect on how these technologies reshape both creativity and authorship in contemporary media.

Citations 

Adobe. (n.d.). Automatic UV Unwrapping | Substance 3D Painter. https://helpx.adobe.com/substance-3d-painter/features/automatic-uv-unwrapping.html

Bollmer, G. (2019). Materialist media theory: An introduction.

Maisie, K. (2025). Why AI Action Figures Are Taking Over Your Feed. Preview. https://www.preview.ph/culture/ai-action-figures-dolls-a5158-20250416-dyn

Ingold, T. (2013). Making: Anthropology, Archaeology, Art and Architecture. Routledge.

Salters, C. (2024). The New Premiere Pro AI Tools I’ll Definitely Be Using. Frame.io Insider. https://blog.frame.io/2024/04/22/new-premiere-pro-generative-ai-tools-video-editing/

Schwartz, E. (2023). Adobe Brings Firefly Generative AI Tools to Photoshop. Voicebot.ai. https://voicebot.ai/2023/05/23/adobe-brings-firefly-generative-ai-tools-to-photoshop/

Faribault Mill. (n.d.). The Spinning Jenny: A Woolen Revolution. https://www.faribaultmill.com/pages/spinning-jenny

Van Den Eede, Y. (2014). Extending “Extension”: A Reappraisal of the Technology-as-Extension Idea through the Case of Self-Tracking Technologies. In D. M. Weiss, A. D. Propen, & C. E. Reid (Eds.), Design, Mediation, and the Posthuman (pp. 151–172). Lexington Books.

UX Pilot. (n.d.). UX Pilot: AI UI Generator & AI Wireframe Generator. https://www.figma.com/community/plugin/1257688030051249633/ux-pilot-ai-ui-generator-ai-wireframe-generator

Lovable. (n.d.). Learn about Lovable and how to get started. https://docs.lovable.dev/introduction/welcome

Analyzing Extension through the Modern Lens of AI

The two texts that I will be critically comparing are The iPhone Erfahrung by Emily McArthur and Extending “Extension” by Yoni Van Den Eede, both found in the book Design, Mediation, and the Posthuman. Both discuss extension and the evolution of technology in relation to the human experience, which makes them natural companions for comparison.

The iPhone Erfahrung Summary

McArthur’s article focuses on Siri, which was still a fairly new piece of technology when the article was written in 2014. Siri is described as an extension of the human (McArthur), since any thought that enters someone’s mind can be put to Siri almost instantly. While Siri is primarily used as a faster Google, or an answering machine, the way individuals speak to their phone and receive a response from a voice was anything but normal ten years ago. The article talks a lot about Walter Benjamin’s concept of ‘aura’, and how Siri carries aura due to its magical nature and its place in the social hierarchy (McArthur); that is, it can be considered an authority for truth (like a faster Google). Despite Siri’s magical appearance, though, all it really does in terms of looking back at the user is make a guess based on what it has learned, rather than come up with something on its own (McArthur). The article also discusses how this applies to other algorithms and modern systems, like online shopping or digital newspapers recommending articles based on your recent reads. All in all, McArthur’s article focuses on the aura of Siri, the way sound can penetrate the unconscious, and the limits of Siri’s capabilities.

Extending Extension Summary

Van Den Eede’s article briefly recaps the idea of extension throughout history and McLuhan’s perspective on it, before narrowing its focus to self-tracking software and applications, like Fitbits and other technologies that we essentially feed our data into, arguing with McLuhan’s help that they are unique extensions of the body (Van Den Eede). From surveillance issues to the notion that self-tracking apps are solving a “problem”, the way this article discusses technology clearly relates to McArthur’s article, as both provide interesting perspectives on how humans interact with technology.

How the Texts can be Used Together

When reading through both of the articles, one topic in particular immediately came to mind, as it tends to: artificial intelligence. When considering software like Siri, algorithms that predict behaviour, and technology used as an extension of the self, there are few subjects more applicable than AI. The texts relate in numerous ways, but because they were written over a decade ago, the technological references they use and predict are naturally outdated. Reading them through the lens of AI highlights their similarities and makes it clearer not only how much AI affects us now, but also how it will continue to in the future.

McArthur’s article explains how Siri doesn’t necessarily know exactly what you say, but uses its language processing to essentially make a guess at what you are saying. This applies more when speaking aloud, but it can also apply to text, since a lot of the meaning that can be inferred between two humans speaking is lost when it is typed out. In today’s world, AI does very much the same thing, particularly in image and video generation. All it does is read what the user types in and make the best guess it can at what the user has in mind. This also applies to students who use AI to sort and organize their notes, since even if the student emphasizes a certain way they’d like their information to be presented, only they truly know what that looks like, not the AI.

All of this culminates in a couple of outcomes: ease of use, and extending one’s self. Both articles talk about how technology makes things easier, whether it is using Siri as an instant-answer machine or using a self-tracking app to count one’s calories instead of keeping a log and doing the calculations on one’s own. People use these apps because it is easier than doing the activity themselves, and that is how these companies make the money they do: they promise an easier lifestyle. At the same time, this technology is an extension of the self. Using AI to sort through your notes, or to generate an opening paragraph that ‘sounds like your writing’, is in essence an extension of one’s self. However, this is not to say that what the AI generates is ‘yours’, or even creative. There is a lot of contention around passing off AI-generated art, video, or content in general as one’s own, and that is not what is being advocated here. Despite the lack of authorship, though, if someone puts their notes or writing into an LLM and asks it to generate something, the product that emerges is still an extension of them, because they asked the AI to generate it to begin with. It is an extension that highlights the user’s creativity (or lack thereof).

Van Den Eede’s article also brings up McLuhan’s use of the medical concepts of irritant and counter-irritant: the idea that many extensions are created in response to a problem in order to solve it (Van Den Eede). However, there is always a cost; any time a counter-irritant is used to enhance something, or a body part, it also weakens something else, almost like an exchange. This thinking can be applied to McArthur’s article, since using AI to do your thinking for you is a perfect example. The problem may be that someone doesn’t know how best to plan a friend’s 30th birthday (the irritant); asking the AI for a plan after feeding it all of the birthday person’s interests is the counter-irritant; and the trade-off is that part of their brain will inevitably suffer as they rely more and more on AI and outside help for idea generation and problem solving instead of exercising their own mental muscles. Another interesting comparison is that McLuhan argues people are aware of technology as an obvious ‘other’ (Van Den Eede), but as more and more people get fooled by AI scams, and as McArthur’s article showed with the way Siri’s sound penetrates the mind, the lines get blurrier and blurrier.

Takeaways and Conclusion

In conclusion, McArthur’s text and Van Den Eede’s text both discuss extension in relation to technology, and viewed through the more modern perspective of AI and its impact on people, the two articles serve as a helpful guide to how greatly AI (and technology in general) impacts us all, while also offering interesting ways to talk about it, like the irritant and counter-irritant theory McLuhan raises in Van Den Eede’s article. All of this is important for people my age to know, as being able to discuss these processes and theories is more important than ever. As more and more people grow accustomed to AI being embedded in daily activities, whether apps or transactions or anything else, the times just a few years ago when that was not the case will slowly be forgotten. Being able to articulate these processes isn’t about wishing for a return to the way things were, as that is nigh impossible at this point, but it is still critical so that we can stay ahead of the technology as best we can and stay informed through it all.

Works Cited


McArthur, Emily. “The iPhone Erfahrung: Siri, the Auditory Unconscious, and Walter Benjamin’s ‘Aura’.” Design, Mediation, and the Posthuman, edited by Dennis M. Weiss, Amy D. Propen, and Colbey Emmerson Reid, Lexington Books, 2014, pp. 113–128. Postphenomenology and the Philosophy of Technology. Bloomsbury Collections, http://dx.doi.org/10.5040/9781666993851.ch-006. Accessed 1 December 2025.

Van Den Eede, Yoni. “Extending ‘Extension’: A Reappraisal of the Technology-as-Extension Idea through the Case of Self-Tracking Technologies.” Design, Mediation, and the Posthuman, edited by Dennis M. Weiss, Amy D. Propen, and Colbey Emmerson Reid, Lexington Books, 2014, pp. 151–172. Postphenomenology and the Philosophy of Technology. Bloomsbury Collections, http://dx.doi.org/10.5040/9781666993851.ch-008. Accessed 1 December 2025.