Redesigning the contents of my bag into a new mode of meaning was quite challenging. While it was fun taking sound bites of each of the objects from my original image, thinking about how to put them together in a way that might contain meaning was difficult. Creating the audio file required my own multimodal literacies to navigate unfamiliar software, social context to sequence the sound bites, and a technological understanding to move between hardware and software. I found that I still relied on a little bit of language to separate the more obscure tapping sounds of my fumbling with headphones from (what I felt were) the more recognizable sounds of pens and highlighters on paper. Although I did record plugging my headphones into a jack, it wasn’t until I listened to the playback without the gestural context that I realized the click was indistinguishable from the highlighter cap. The headphone click had a bit more of a double click to it, so I ended up using it anyway.
Once I had the separate sound bites, I had to figure out how to sequence these random sounds that, on their own, have no context. Since I was not familiar with audio file editing, this was an additional challenge. Getting the overlaying audio track in place was a challenge in itself, let alone controlling its volume while I overlaid other sound effects. As I sequenced each audio clip, I thought about my typical morning to create a narrative structure. This, I think, gives the audio file its overarching context. Of course, in order to get the overarching context, a listener would also need to be familiar with the social and technological contexts of the sounds. The loud sipping and “aaahh” is typically associated with drinking, usually a hot drink. Truthfully, I used a cold can of diet cola; it felt strange to slurp it. The typing and USB noises are only familiar to those with existing knowledge of those sounds. Even the USB chime is distinct to Windows; a Mac user may not be familiar with what that sound was supposed to be. Understanding their meaning as technology being used requires specific existing social and technological awareness.
I do question what happens to an individual’s understanding if they are taught to be literate in a multimodal environment and one of those modes is taken away. Are they still able to derive meaning? I think my auditory mode of communication for this task expresses far less than my visual linguistic one does. It relies on fewer modes of literacy, but as such requires more social and technological awareness on the part of the listener to derive meaning. As the creator, I am left unsure whether, or how, my audio mode will make any sense to the listener. At least with visual language, I have a reference for where communication broke down. Even I wonder what I meant when I go back and proofread.