Task 7: Hypermediafication of what’s in my bag?

 

The contents of a small black bag are laid out on the floor. The contents include a wallet, a medication organizer, a passport, two pens, two small cords, a pack of gum, a cloth medical mask, a Kindle, and a small flashlight.

I initially struggled quite a bit with this task. How could I mode-bend a photo of my bag’s contents into something else? My first thought was to record myself telling the story of why each object is in my bag, but this didn’t feel like a sufficient “bend” in mode. It was only after reading Dobson & Willinsky (2009) that my direction became clear. In particular, their discussion of hypermedia and the term’s continual reassessment through technological advancement provided the lens through which I opted to pursue this task.

I very much self-identify as a digital native, having spent my formative years participating in an explosively expanding online landscape. Hypermedia is thus the medium I am most familiar and comfortable consuming. The internet as a participatory space for sharing written text interspersed with multimedia and inter-linking is, from my subjective perspective, the internet at its best. Mabrito & Medley (2008) describe this textual environment as “…frequently multimodal, integrating words, graphics, sound, and video”.

My love for the web led me to dig deeper into its architecture, first through continual tinkering with web development and coding, and eventually through a degree in Computer Science. Code is the foundation of hypermedia. Without code, digital text is much more in line with Gelb’s (1952, as cited in Dobson & Willinsky, 2009) definition of writing as “markings on objects or any solid material”. Code is what facilitates the transformation of static text into hypermedia. Code creates links, it embeds images, it directs traffic, it handles interaction. Web software and its associated code enable the “hyper” of hypertext.

It is with this perspective in mind that I opted to leverage code to hypertextualize my original “what’s in your bag” task. Using a combination of HTML, CSS, and JavaScript, I added clickable hotspots to my original image. Each hotspot triggers a brief piece of audio associated with the object. This new layer gives the image an added richness and inter-linking that simply wasn’t present in its original static presentation. I would not say that a hotspot image is always an optimal user experience for consuming media or information, which is likely why they aren’t commonly encountered on the broader web. As with text and hypermedia, my hotspot image simply aims to take something static and turn it into a linked and participatory artifact, and in doing so, imbue it with something new.
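For readers curious about what this looks like under the hood, here is a minimal sketch of the general approach rather than my exact implementation: the coordinates, class names, and audio file names (e.g. `wallet.mp3`) are placeholders. Each hotspot is an absolutely positioned button layered over the image, wired to a small script that plays the associated clip on click.

```html
<!-- Minimal sketch of an image-hotspot layout; positions and audio
     file names are placeholders, not the actual project assets. -->
<div style="position: relative; display: inline-block;">
  <img src="bag-contents.jpg" alt="The contents of my bag laid out on the floor." />

  <!-- One hotspot per object, positioned as a percentage of the image size -->
  <button class="hotspot" data-audio="wallet.mp3"   style="left: 12%; top: 30%;"></button>
  <button class="hotspot" data-audio="passport.mp3" style="left: 55%; top: 42%;"></button>
</div>

<style>
  .hotspot {
    position: absolute;
    width: 2rem;
    height: 2rem;
    border-radius: 50%;
    border: 2px solid #fff;
    background: rgba(0, 0, 0, 0.4);
    cursor: pointer;
  }
</style>

<script>
  // Play the audio clip named in each hotspot's data-audio attribute on click.
  document.querySelectorAll(".hotspot").forEach((hotspot) => {
    hotspot.addEventListener("click", () => {
      new Audio(hotspot.dataset.audio).play();
    });
  });
</script>
```

Keeping the object-to-sound mapping in `data-audio` attributes means adding another item is just one more line of markup rather than more script.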

References

Dobson, T., & Willinsky, J. (2009). Digital literacy. In D. R. Olson & N. Torrance (Eds.), The Cambridge handbook of literacy (pp. 286-312). Cambridge University Press.

Gelb, I. J. (1952). A study of writing: The foundations of grammatology. University of Chicago Press.

Mabrito, M., & Medley, R. (2008). Why Professor Johnny can’t read: Understanding the net generation’s texts. Innovate: Journal of Online Education, 4(6).



3 responses to “Task 7: Hypermediafication of what’s in my bag?”

  1. Joti Singh

    Duncan, thanks for sharing your process in redesigning this task.

    Like you, I initially struggled to conceptualize how to transform a static photo of my bag’s contents into something more dynamic, and ultimately settled on adding an audio component. I was particularly inspired by The New London Group’s advocacy for multimodal literacy and the use of various modes of communication to convey meaning.

    It seems we both drew inspiration from Dobson and Willinsky’s (2009) insights on digital literacy and hypermedia. Their discussion on the evolving nature of hypermedia provided a framework for understanding how different media forms can be combined to create richer, more interactive content. I think that by integrating clickable audio recordings alongside visuals, we enhanced the narrative, making the content more engaging and personal. I particularly enjoyed hearing what your items sounded like. (Your commentary on the wallet made me laugh.)

    While our end goals were similar, the paths we took to achieve them were actually different. Your background in computer science allowed you to leverage HTML, CSS, and JavaScript to create clickable hotspots on your image. I think that this approach reflects your deep understanding of the web’s architecture and your technical proficiency. By embedding interactive elements directly into the image, you created a seamless and sophisticated user experience.

    In contrast, since I am not as fluent in the technical aspects, I opted to use Genially. Without needing to code, I uploaded the photo of my bag’s contents and added interactive hotspots that played audio clues, transforming the task into a participatory guessing game. This method allowed me to focus on the storytelling aspect and the cultural significance of each item, while you were able to focus on the technical implementation.

    Finally, your reflection on identifying as a digital native resonated with me. Like you, I spent my younger years immersed in an ever-expanding digital landscape. This familiarity with digital tools and online environments has shaped how I approach tasks and problem-solving. While my journey didn’t lead to coding, my comfort with digital media and interactive platforms mirrors your experience.

  2. jonathan tromsness

    Hey Duncan, I initially played with the task in an auditory way as well. I wanted to try to find the commercial jingles or memetic songs that represented each object. That proved to be a little difficult, so I opted for tag-lines or corporate/product slogans. I love that you were able to incorporate audio into your mode-bend. I find that we are bombarded with audio, chimes, jingles, and songs in pretty much every aspect of our online existence. What I appreciated with yours was the sensory element of the sound. This was quite refreshing and unique. I imagine if I was blindfolded, I could probably guess most of the items before the description. It really personalizes the experience and exemplifies the New London Group’s concept of designing and redesigning – the re-contextualization and re-presentation of meaning (1996).

    Cazden, C., Cope, B., Fairclough, N., Gee, J., Kalantzis, M., Kress, G., … & Nakata, M. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60-92.

  3. chanmi33

    Hi Duncan,

    I loved reading about how you approached the mode-bending task! Your background in web development really shines through in the way you transformed a simple image into something interactive and engaging. The idea of using code to create clickable hotspots that play sounds is such a creative way to bring the “What’s in Your Bag” task to life.

    For my own project, I also wanted to make the task more dynamic, but I took a different path in presentation. I initially tried to use coding as well, but I quickly realized it was a bit beyond my current skill level. So, I ended up using Canva to create a multimodal presentation where clicking on each item plays the sound it makes. It was a fun way to mix visual and auditory elements, even though it wasn’t quite as tech-heavy as your approach.

    I think you did a wonderful job of turning something static into a more interactive experience. Your use of hypermedia and coding to add depth and interactivity is really inspiring. It’s made me think about revisiting some of the coding aspects in my future projects, probably with the help of coding AI!
