Introduction
In this critical response post, I will be building on ideas discussed in Molly Kingsley and Aminata Chipembere’s post, “Morality and Materiality in Digital Technology and Cognition”. In their blog post, they discussed Bollmer and Verbeek’s ideas on materiality and how they relate to digital technology, noting the similarities in the two perspectives while highlighting a couple of important points: digital tech can be material even if it appears immaterial, and technology can influence humans and their decision making. This critical response will focus on the latter idea, and will incorporate the added perspective of Tony Horava on the ways in which a medium, technological or not, still affects us.
Original Post Overview
Kingsley and Chipembere discuss the notion that technology, despite being largely considered an ‘immaterial’ presence, still affects our decision making, how we feel, and how we may act in the future. I believe this idea is especially important in today’s culture, as the development of technology rapidly outpaces our capacity to wholly understand it and its effects. The purpose of this critique is to bring in some added perspectives on how exactly technology shapes the way we feel and act, a question that is not only interesting to think about but also necessary to ask.
Horava’s Perspective
In his journal article “eBooks and McLuhan: The Medium is Still the Message”, Tony Horava discusses McLuhan’s original phrase and how it applies to modern technology. For example, the way one interacts with a physical copy of a book is different from the way one interacts with a digital copy, even though the text itself is the same (Horava 62). The way our hands turn a page versus swipe a tablet, or the smell of paper versus the glow of a screen, all combine to create a unique reading experience, one that is very much shaped by the medium through which the content is delivered. Using this lens, I want to take a look at some of the examples that Kingsley and Chipembere discuss in their original blog post.
In their post, the authors discuss several ways in which technologies can impact human behaviour, such as the ways doctors consult medical devices, as well as hermeneutic media, which provide representations of reality that require interpretation (Kingsley and Chipembere).

The medical example in particular is one I found especially interesting, as I believe Horava’s perspective can play a role in how doctors use various medical machinery. When a doctor uses technology to fetch results, analyze a sample, or conduct any sort of medical test, the doctor is inherently placing their faith in that technology to work. Contrast the technology available now with what was available fifty years ago, and the attitudes would be much different. Doctors then would still have had faith in their machines, but presumably far less than their modern-day counterparts, and that lower trust would have taken a different mental toll and demanded more reflection on their work. More would have had to be done to ensure the results were accurate, or that the readings meant what they appeared to mean: in short, Horava’s idea that the medium affects the message applies to doctors’ reliance on technology over the years. Even if the message were the same, for example on a simpler medical device that was used years ago and is still relevant now, the simple fact that we live in the modern era, with information at our fingertips and hospitals equipped with the latest advancements, adds a level of confidence that prior generations would not have had. This will only continue into the future, as technology keeps evolving and early-detection systems (hopefully) reduce the incidence of deadlier conditions.
Conclusion
This extra level of perspective on Kingsley and Chipembere’s post is not meant as a negative; I thought their writing was very well done and presented dense ideas in a clear and digestible way. The purpose of this post is also to bring in a relevant newer course reading, Horava’s article, and add his perspective on the concepts discussed by Bollmer and Verbeek, as I believe them to be related. We often talk in this class about whether technology influences us, and even how it influences us, but Horava’s article has stuck with me for its ability to articulate the differences between an eBook and a physical book, and I thought its main takeaway was worth bringing up again and applying to my peers’ work. I strongly believe that the medium of digital technology itself does impact us, and as it continues to evolve, so will its impact. What we feel now due to social media and the like will be far different just a few years in the future, and being able to properly communicate that effect is important.
Works Cited
Horava, Tony. “eBooks and McLuhan: The Medium is Still the Message.” Against the Grain, vol. 28, no. 4, 2016, pp. 62-64. Library and Information Science Commons. Accessed 16 November 2025.
Kingsley, Molly, and Aminata Chipembere. “Morality and Materiality in Digital Technology and Cognition.” 14 November 2025. Blog post. Accessed 16 November 2025.
Image Credit: https://mitsloan.mit.edu/sites/default/files/2022-07/MIT-Healthcare-Technology-01_0.jpg
I really liked how you brought Horava into this discussion. The connection between “the medium is the message” and how doctors place trust in medical tech is such a clear and original example. It really shows how material form can shape not just how we use technology but how we feel and make decisions through it. It also made me think about how that trust develops over time, like how the medium almost trains us to see and think differently, rather than just delivering information. You explained that idea in a way that felt really grounded and easy to relate to. Great post!
Hi Meha,
Thank you for your kind words, and I really appreciate your point about trust developing over time; while I sort of wrote around that idea, I didn’t really consider what it meant until reading your reply. Even though the technology has improved over time and allowed us (not just doctors) to rely on it more and more, if the tech ever makes a mistake or slips up, how are we expected to notice? The vigilance of the past, while still present in parts of today’s world, such as when we suspect AI-generated material, has mostly given way to over-reliance.
Hello! Your elaboration on how rapidly evolving technology impacts human life has given me much to reflect upon. It also led me to consider a related argument from the perspectives of Materiality and Making: even when an e-book and a physical book contain the exact same text, the difference in medium fundamentally alters the human experience of reading.
I recall a class discussion on Horava’s text, where a group member shared a similar experience: many of us feel that, compared to digital devices, it’s easier to maintain focus when engaging with print media. If we connect this with another post I recently read, “The Material Life of the Smartphone: a Critical Dialogue between Bollmer and Rosenberg”, a plausible explanation emerges: digital media have conditioned us to process the same information differently than we would in print. We’ve become highly skilled at capturing flickering information on a screen, but at the cost of more fragmented attention. As you said, this change has effects on both sides, and humans will have to learn to navigate this new age shaped by media.
Hi Betty,
I agree with your conclusion that digital media has conditioned us to process similar information differently than we would with physical media, and I think a large part of that has to do with how much cell phones have evolved. Computers, for example, have always been able to send messages or display articles, but cell phones went from being solely calling/answering (and sometimes texting) machines to mini-supercomputers that simultaneously allow us to talk to our friends and anyone online, on top of playing games and watching a streaming show. Because all of this takes place on the same device, a part of our brain naturally treats the information encoding of watching an episode of Stranger Things a bit similarly to that of reading an article, because the hub is identical.
Hi! I really appreciated your post—your application of Horava’s argument about the medium to medical technology was really clever. I liked how you highlighted that even when the “message” stays the same, the medium changes our perception, confidence, and interaction. I’m curious: do you think there are cases where the medium could distort the message or create overreliance in ways that are ethically concerning, especially in high-stakes fields like medicine?
Hii Christina, I thought your question was interesting and so was this post, Owen! I also found the point about the medium shaping perception really interesting, especially in the medical examples. I agree with you that there’s definitely potential for the medium to distort the message in ethically concerning ways. It’s kind of scary how easily newer or more “polished” technologies can appear more accurate just because of how they’re designed or presented.
I think the over-reliance issue you mentioned is already showing up with things like AI tools. Even when two devices offer the same information, the one with a slicker interface or more confident metrics can subtly push people into trusting it more than they actually should. So in that sense, yeah, the medium can absolutely warp the message or create a false sense of certainty.
Curious what you think: do you feel like the solution is better design, or more human oversight, or something else entirely?
Hi Lea,
We both replied to Christina’s question/point, so I’ll respond to yours: I think the solution to a lot of these problems is more human oversight, or just the human touch in general, up to and including doing away with AI entirely when it comes to medicine. There are some exceptions to this, like the recent study showing that AI was able to detect signs of breast cancer five years in advance, but outside of replacing tedious tasks that humans would have done to begin with (or couldn’t do due to the sheer volume of data), I feel it’s only a matter of time before something goes wrong and a massive lawsuit emerges from AI mishandling something in the medical field.
Hi Christina,
I agree with Lea’s point that AI tools are already creating a certain amount of over-reliance based on how the information is presented. I also think it depends on what medical system we’re talking about: for example, I could easily see a future where a lower-cost healthcare alternative for Americans is to consult with an AI doctor rather than pay the hefty price to see an actual one. Furthermore, while they will happen less often, there will always be medical anomalies, and if AI can only rely on the precedent that’s fed into it, it will have no possible way to react to anything unique.