{"id":1151,"date":"2025-12-05T23:02:38","date_gmt":"2025-12-06T06:02:38","guid":{"rendered":"https:\/\/blogs.ubc.ca\/mdia300\/?p=1151"},"modified":"2025-12-05T23:03:13","modified_gmt":"2025-12-06T06:03:13","slug":"ai-isnt-being-regulated-and-im-sick-of-it","status":"publish","type":"post","link":"https:\/\/blogs.ubc.ca\/mdia300\/archives\/1151","title":{"rendered":"AI Isn&#8217;t Being Regulated and I&#8217;m Sick of It"},"content":{"rendered":"\n<p>Growing up in the digital age and with constant technological advancements happening left and right, it\u2019s easy to become numb to the frequent sayings of \u201cthis is inevitable\u201d or \u201ceveryone\u2019s using it so you better get used to it\u201d, or anything related to normalizing the rapid progress that tech receives. This particularly applies to Artificial Intelligence, as AI has become the central focus of not just young people, but the global economy as a whole, with OpenAI desperately trying to keep the bubble from bursting as companies send each other billions of dollars worth of \u201cIOU\u2019s\u201d. Corporations and billionaires need AI to succeed, but governments seem to be sleeping at the wheel when it comes to actually regulating it, with the laws written either being outdated or nearly prevented from being made outright (Brown). I\u2019ve written about AI a lot this semester, and in this blog post I am going to pull from various sources I used from this term to make the argument for why it needs strict regulation.<\/p>\n\n\n\n<p>There have been countless news stories of people being scammed via fake AI voices of family members, to deepfakes and other image-generation technology used to sextort young individuals, and while the acts themselves are illegal, it\u2019s still just as easy to go on a website and generate an image of someone without their consent as it was a few years ago. The only thing that\u2019s actually gotten better is the tech itself, not the laws or guidelines surrounding it. 
Emily McArthur\u2019s article, <em>The iPhone Erfahrung: Siri, the Auditory Unconscious, and Walter Benjamin\u2019s \u201cAura\u201d<\/em>, discusses technology as an extension of ourselves, but it also highlights the responsibility shared between technology\u2019s users and its makers (McArthur). This is particularly applicable to AI today: while the users who put the tech to nefarious and illegal ends should obviously be punished, the creators of the tech itself should also be held accountable. In a recent example, a teenager died by suicide after conversations with ChatGPT encouraged him to, and the parent company, OpenAI, denied responsibility because the teen had \u2018misused\u2019 the AI (Yang). If their response to a teenager killing himself after being encouraged by their product is \u201csorry, you weren\u2019t authorized to talk to it that way\u201d, there is clearly something deeply wrong with how the technology was built for this outcome to have been possible at all.<\/p>\n\n\n\n<p>Another strong reason to support increased regulation of AI is that our history depends on it. Photographic and video evidence are a crucial part of our society and how we function as a people: they shape how lessons are taught in school and how people are found guilty or innocent in a court of law. The fact that those concrete forms of information are now at risk of being questioned forever should be an alarm bell for anyone who cares about truth. In his article <em>eBooks and McLuhan: The Medium is Still the Message<\/em>, Tony Horava talks about how we can interpret and process the same information differently depending on the medium in which we consume it. 
This concept directly relates to AI images and videos: a video made by a trusted source on a subject will be given more weight than an AI-generated version, even if it draws on the same sources and delivers the same information. People already distrust AI videos, since almost all we\u2019ve seen them used for is memes and making fun of others, so if someone were accused of robbing a store, for example, who\u2019s to say the security footage is even real to begin with? AI video and images only create distrust in the real, authentic versions, so regulation needs to be in place to either limit or prohibit the use of a real person\u2019s likeness, or to ensure that any generated material carries a permanent watermark that is easily visible or accessible. The alternative is that misinformation will continue to spread at levels never seen before.<\/p>\n\n\n\n<p>Relating to the believability of existing materials and physical media, Ingold, in <em>Making: Anthropology, Archaeology, Art and Architecture<\/em>, discusses Michael Polanyi\u2019s concept of \u2018tacit knowledge\u2019; Ingold believed that all knowledge, even innate knowledge, could be communicated (Ingold 111). I bring this up because, when it comes to discerning whether an AI-generated creation is real, outside of the more obvious tells that sometimes appear, like mangled fingers or inconsistent patterns, people like to think that they can simply \u2018tell\u2019. The whole concept of the uncanny valley is dedicated to this: the idea that people can sense when something looks off, or not human. Until recently I was of the opinion that laws would be in place before AI generation got to the point where it was impossible to tell what was real and what wasn\u2019t, but Google\u2019s most recent Nano Banana Pro model is already at that point, and the population isn\u2019t ready. 
This technology threatens to erode our innate ability to tell truth from fiction, to the point where the knack for spotting irregularities may no longer be something that can be communicated at all. That runs against Ingold\u2019s thinking, but at this moment in AI history, it appears to be the case.<\/p>\n\n\n\n<p>While I have little faith that meaningful laws and regulations will be put into effect any time soon, I am still hopeful for a future in which AI exists in a limited capacity, governed by rules that prohibit stealing others\u2019 likenesses and ensure that a permanent watermark resides on every piece of generated material.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Works Cited<\/h2>\n\n\n\n<p>Brown, Matt. \u201cSenate pulls AI regulatory ban from GOP bill after complaints from states.\u201d <em>PBS<\/em>, 1 July 2025, https:\/\/www.pbs.org\/newshour\/politics\/senate-pulls-ai-regulatory-ban-from-gop-bill-after-complaints-from-states. Accessed 5 December 2025.<\/p>\n\n\n\n<p>Horava, Tony. \u201ceBooks and McLuhan: The Medium is Still the Message.\u201d <em>Against the Grain<\/em>, vol. 28, no. 4, 2016, pp. 62-64. <em>Library and Information Science Commons<\/em>. Accessed 16 November 2025.<\/p>\n\n\n\n<p>Ingold, Tim. <em>Making: Anthropology, Archaeology, Art and Architecture<\/em>. 1st ed., Routledge, 2013, https:\/\/doi.org\/10.4324\/9780203559055. Accessed 4 December 2025.<\/p>\n\n\n\n<p>McArthur, Emily. \u201cThe iPhone Erfahrung: Siri, the Auditory Unconscious, and Walter Benjamin\u2019s \u2018Aura\u2019.\u201d <em>Design, Mediation, and the Posthuman<\/em>, edited by Dennis M. Weiss, Amy D. Propen, and Colbey Emmerson Reid, Lexington Books, 2014, pp. 113\u2013128. <em>Bloomsbury Collections<\/em>, <a href=\"http:\/\/dx.doi.org\/10.5040\/9781666993851.ch-006\">http:\/\/dx.doi.org\/10.5040\/9781666993851.ch-006<\/a>. Accessed 1 December 2025.<\/p>\n\n\n\n<p>Yang, Angela. 
\u201cOpenAI denies allegations that ChatGPT is to blame for a teenager&#8217;s suicide.\u201d <em>NBC News<\/em>, 25 November 2025, https:\/\/www.nbcnews.com\/tech\/tech-news\/openai-denies-allegation-chatgpt-teenagers-death-adam-raine-lawsuit-rcna245946. Accessed 5 December 2025.<\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Growing up in the digital age and with constant technological advancements happening left and right, it\u2019s easy to become numb to the frequent sayings of \u201cthis is inevitable\u201d or \u201ceveryone\u2019s using it so you better get used to it\u201d, or anything related to normalizing the rapid progress that tech receives. This particularly applies to Artificial &hellip; <a href=\"https:\/\/blogs.ubc.ca\/mdia300\/archives\/1151\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">AI Isn&#8217;t Being Regulated and I&#8217;m Sick of It<\/span> <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":103704,"featured_media":1152,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[204,218,88,8,171],"class_list":["post-1151","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-general-media-theory","tag-ai","tag-emily-mcarthur","tag-ingold","tag-media-theory","tag-tony-horava"],"_links":{"self":[{"href":"https:\/\/blogs.ubc.ca\/mdia300\/wp-json\/wp\/v2\/posts\/1151","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.ubc.ca\/mdia300\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.ubc.ca\/mdia300\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.ubc.ca\/mdia300\/wp-json\/wp\/v2\/users\/103704"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.ubc.ca\/mdia300\/wp-json\/wp\/v2\/comments?post=1151"}],"version-history":[{"count":2,"href":"https:\/\/blogs.ubc.ca\/mdia300\/wp
-json\/wp\/v2\/posts\/1151\/revisions"}],"predecessor-version":[{"id":1154,"href":"https:\/\/blogs.ubc.ca\/mdia300\/wp-json\/wp\/v2\/posts\/1151\/revisions\/1154"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.ubc.ca\/mdia300\/wp-json\/wp\/v2\/media\/1152"}],"wp:attachment":[{"href":"https:\/\/blogs.ubc.ca\/mdia300\/wp-json\/wp\/v2\/media?parent=1151"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.ubc.ca\/mdia300\/wp-json\/wp\/v2\/categories?post=1151"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.ubc.ca\/mdia300\/wp-json\/wp\/v2\/tags?post=1151"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}