Introduction
Writing today is increasingly transmodal – a process where different media such as text, sound, and image interact to create new forms of meaning (Murphy, 1969, as cited in New London Group, 1996).
This idea aligns with the New London Group’s (1996) concept of multiliteracies, which argues that literacy education must move beyond traditional reading and writing to include multiple modes of communication – visual, auditory, spatial, and linguistic.
In a world shaped by cultural diversity and digital media, learners must be able to interpret and design meaning across these different modes. Multiliteracies position educators as learning designers and students as active creators of meaning who shape their own social futures (New London Group, 1996).
Mode-Bending
For this task, I changed the semiotic mode of the “What’s in My Bag” activity from a visual photograph to an aural-textual experience. Using the online tool Genially, I built an interactive experience around my original image, adding audio clips, animations, and interactive buttons. To complement the static image, I paired brief written descriptions with interactive sound clips that tell a story about the significance of each item.
An alarm clock rings as a new day begins, a hair dryer runs as I get ready in the morning, a car engine starts before I drive my kids to school, a dog barks to represent my canine responsibilities, a referee whistle signals the volleyball teams I coach, a pen scratches across paper to represent my MET studies, and the background noise of a grocery store symbolizes my daily life and responsibilities. These auditory cues revealed emotional and functional meanings, including the independence of mobility and the routine of my daily habits, that a single photograph could not express.
Reflection
This redesign highlighted both benefits and challenges. The aural mode allowed me to create a more immersive, personal, and emotionally resonant experience, emphasizing sound as a powerful meaning-maker. However, it also required careful technical and creative choices to ensure clarity and coherence. Overall, this process deepened my understanding of how changing modes transforms not only how messages are conveyed, but also how they are felt and understood within a multiliteracies framework.
AI usage: ChatGPT was used to refine my writing and summarize the main points of the New London Group reading. All ideas and final edits are my own.
References
OpenAI. (2025). ChatGPT [Large language model]. https://chat.openai.com/
Pixabay. (2025). Stunning royalty-free images & royalty-free stock. https://pixabay.com
The Media Insider. (2019, May 2). Semiotics analysis for beginners! | How to read signs in film | Roland Barthes Media Theory [Video]. YouTube. https://www.youtube.com/watch?v=SlpOaY-_HMk
The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92. http://newarcproject.pbworks.com/f/Pedagogy%2Bof%2BMultiliteracies_New%2BLondon%2BGroup.pdf