
AI Isn’t Being Regulated and I’m Sick of It

Growing up in the digital age, with constant technological advancements happening left and right, it’s easy to become numb to refrains like “this is inevitable” or “everyone’s using it, so you’d better get used to it”, or anything else that normalizes the rapid progress of tech. This applies particularly to Artificial Intelligence, which has become the central focus not just of young people but of the global economy as a whole, with OpenAI desperately trying to keep the bubble from bursting as companies send each other billions of dollars’ worth of “IOUs”. Corporations and billionaires need AI to succeed, but governments seem to be asleep at the wheel when it comes to actually regulating it: the laws being written are either outdated or nearly prevented from being made outright (Brown). I’ve written about AI a lot this semester, and in this blog post I am going to pull from various sources I used this term to make the argument for why it needs strict regulation.

There have been countless news stories of people being scammed by fake AI voices of family members, and of deepfakes and other image-generation technology being used to sextort young individuals, and while the acts themselves are illegal, it’s still just as easy to go on a website and generate an image of someone without their consent as it was a few years ago. The only thing that has actually gotten better is the tech itself, not the laws or guidelines surrounding it. Emily McArthur’s article, “The iPhone Erfahrung: Siri, the Auditory Unconscious, and Walter Benjamin’s ‘Aura’”, discusses technology as an extension of ourselves, but it also highlights the responsibility that is shared between technology’s users and its makers (McArthur). This is particularly applicable to AI today: while the users who employ the tech for nefarious and illegal purposes should obviously be punished, the creators of the tech itself should also be held accountable. In a recent example, a teenager committed suicide after conversations in which ChatGPT encouraged him to, and the parent company, OpenAI, denied responsibility because the teen had ‘misused’ the AI (Yang). If their response to a teenager killing himself after being encouraged by their product is “sorry, you weren’t authorized to talk to it that way”, there is clearly something extremely wrong with how the technology was created for this outcome to even be possible.

Another strong reason to support increased regulation of AI is that our history depends on it. Photographic and video evidence are crucial to how our society functions: how lessons are taught in school and how people are found guilty or innocent in a court of law. The fact that these concrete forms of information are now at risk of being questioned forever should be an alarm bell for anyone who cares about truth. In his article “eBooks and McLuhan: The Medium is Still the Message”, Tony Horava discusses how we can interpret and process the same information differently depending on the medium through which we consume it. This concept directly relates to AI images and videos, since a video made by a trusted source on a subject will be given more weight than an AI-generated version, even if it draws upon the same sources and delivers the same information. People already distrust AI videos, since all we’ve seen them used for is memes and mocking others, so if someone were accused of robbing a store, for example, who’s to say the security footage is even real to begin with? AI video and images only create distrust in the real, secure versions, so regulation needs to either limit or prohibit using the likeness of a real person, or ensure that any generated material carries a permanent watermark that is easily visible or accessible. The alternative is that misinformation will continue to spread at levels never seen before.

Relating to the believability of existing materials and physical media: in Making: Anthropology, Archaeology, Art and Architecture, Ingold discusses Michael Polanyi’s concept of ‘tacit knowledge’, arguing that all knowledge, even innate knowledge, can be communicated (Ingold 111). I bring this up because, when it comes to discerning whether an AI-generated creation is real, outside of the more obvious tells that sometimes appear, like messed-up fingers or inconsistent patterns, people like to think that they can simply ‘tell’ when something is real or not. The whole concept of the uncanny valley is dedicated to this: the idea that people can sense when something looks off, or not human. Until recently I was of the opinion that laws would be put in place before AI generation reached the point where it was impossible to tell what was real and what wasn’t, but Google’s most recent Nano Banana Pro model is already at that point, and the population isn’t ready. This technology threatens to erode our innate ability to distinguish truth from fiction, to the point where the knack for spotting irregularities may no longer be communicable at all, which would go against Ingold’s thinking, but as of this moment in AI history, that appears to be the case.

While I have little faith that meaningful laws and regulations will be put into effect any time soon, I am still hopeful for a future in which AI exists in a limited capacity, governed by rules that prohibit stealing others’ likenesses and ensure that a permanent watermark resides on every piece of generated material.

Works Cited

Brown, Matt. “Senate pulls AI regulatory ban from GOP bill after complaints from states.” PBS, 1 July 2025, https://www.pbs.org/newshour/politics/senate-pulls-ai-regulatory-ban-from-gop-bill-after-complaints-from-states. Accessed 5 December 2025.

Horava, Tony. “eBooks and McLuhan: The Medium is Still the Message.” Against the Grain, vol. 28, no. 4, 2016, pp. 62-64. Library and Information Science Commons. Accessed 16 November 2025.

Ingold, Tim. Making: Anthropology, Archaeology, Art and Architecture. 1st ed., Routledge, 2013, https://doi.org/10.4324/9780203559055. Accessed 4 December 2025.

McArthur, Emily. “The iPhone Erfahrung: Siri, the Auditory Unconscious, and Walter Benjamin’s ‘Aura’.” Design, Mediation, and the Posthuman, edited by Dennis M. Weiss, Amy D. Propen, and Colbey Emmerson Reid, Lexington Books, 2014, pp. 113–128. Bloomsbury Collections, http://dx.doi.org/10.5040/9781666993851.ch-006. Accessed 1 December 2025.

Yang, Angela. “OpenAI denies allegations that ChatGPT is to blame for a teenager’s suicide.” NBC News, 25 November 2025, https://www.nbcnews.com/tech/tech-news/openai-denies-allegation-chatgpt-teenagers-death-adam-raine-lawsuit-rcna245946. Accessed 5 December 2025.

Critical Response Post to “Morality and Materiality in Digital Technology and Cognition”: How Tony Horava’s Takeaway on ‘the Medium’ Will Always Affect Us

Introduction

In this critical response post, I will be building on ideas discussed in Molly Kingsley and Aminata Chipembere’s post, “Morality and Materiality in Digital Technology and Cognition”. In their blog post, they discussed Bollmer’s and Verbeek’s ideas on materiality and how they relate to digital technology, noting the similarities in the two perspectives while highlighting two important points: digital tech can be material even if it appears immaterial, and technology can influence humans and their decision-making. This critical response will focus on the latter idea, and will incorporate the added perspective of Tony Horava on the ways in which the medium of something, technological or not, still affects us.

Original Post Overview

Kingsley and Chipembere discuss the notion that technology, despite being largely considered an ‘immaterial’ presence, still affects our decision-making, how we feel, and how we may act in the future. I believe this idea is very important in today’s culture, as the development of technology rapidly outpaces our capacity to fully understand it and its effects. The purpose of this critique is to bring in some added perspectives on how exactly technology impacts how we feel and act, as doing so is not only interesting but necessary.

Horava’s Perspective

In his journal article “eBooks and McLuhan: The Medium is Still the Message”, Tony Horava discusses McLuhan’s original phrase and how it applies to modern technology. For example, the way one interacts with a physical copy of a book differs from the way one interacts with a digital copy, despite the content being the same (Horava 62). The way our hands turn a page versus swipe a tablet, or the smell of paper versus the smell of a screen, all culminate in a unique reading experience that is unmistakably informed by the medium through which the content is consumed. Using this lens, I want to look at some of the examples that Kingsley and Chipembere raise in their original blog post.

In their post, the authors discuss several ways in which technologies can impact human behaviour, such as the ways in which doctors consult medical devices, as well as hermeneutic media, which provide a representation of reality that requires interpretation (Kingsley and Chipembere). The medical example in particular is one I found especially interesting, as I believe Horava’s perspective can shed light on how doctors use medical machinery. When a doctor uses technology to fetch results, analyze a sample, or conduct any sort of medical test, the doctor is inherently placing their faith in that technology to work.

Contrast the technology available now with that of fifty years ago, and the attitudes would be much different. Doctors would still have faith in their machines, but presumably far less so than their modern-day counterparts, and verifying results would demand a different kind of mental effort and reflection on their work. More would have to be done to ensure the results were accurate, or that the readings meant what they thought they did. In short, Horava’s idea that the medium affects the message applies to doctors’ reliance on technology over the years. Even if the message were the same, say, on a simple medical device used years ago that is still relevant now, the fact that we live in the modern era, with information at our fingertips and hospitals equipped with the latest advancements, adds a level of confidence that prior generations wouldn’t have had. This will only continue into the future as tech evolves and early detection systems (hopefully) reduce the incidence of deadlier conditions.

Conclusion

This extra level of perspective on Kingsley and Chipembere’s post is not meant as a criticism; I thought their writing was very well done and presented dense ideas in a clear and digestible way. The purpose of this post is also to bring in a relevant newer course reading through Horava and add his perspective to the concepts discussed by Bollmer and Verbeek, as I believe them to be related. We often talk in this class about whether technology influences us, and even how it influences us, but Horava’s article has stuck with me in its ability to articulate the differences between an eBook and a physical book, and I thought its main takeaway was worth bringing up again and applying to my peers’ work. I strongly believe that the medium of digital technology itself does impact us, and as it continues to evolve, so will its impact. What we feel now due to social media and the like will be far different just a few years from now, and being able to properly communicate that effect is important.

Works Cited

Horava, Tony. “eBooks and McLuhan: The Medium is Still the Message.” Against the Grain, vol. 28, no. 4, 2016, pp. 62-64. Library and Information Science Commons. Accessed 16 November 2025.

Kingsley, Molly, and Aminata Chipembere. “Morality and Materiality in Digital Technology and Cognition.” 14 November 2025. Blog post. Accessed 16 November 2025.

Image Credit: https://mitsloan.mit.edu/sites/default/files/2022-07/MIT-Healthcare-Technology-01_0.jpg