In this linking assignment, I'm going to make connections and analyze differences between the reflections that my classmate, Christina Hidalgo, and I did for Task 7 – Mode Bending.
I think Christina’s mode bend is a good example of something mentioned often throughout our course materials: experiences are always multimodal. The video scribe she produced includes visual, textual, auditory, and gestural representations. It was interesting to compare Christina’s approach with mine for many reasons. My mode bend had been more literal, in the sense that I took the objects and did my best to represent them in auditory form (for example, cellphone sounds represented the cellphone image) or to evoke them through gestures (for example, moving my hands as if reading a book represented the Bible). During the first couple of seconds of Christina’s video, I felt there wasn’t much mode bending going on, as it seemed primarily visual and even included the pictures from the original post. However, I soon began to realize I was incorrect as I became more attuned to her narration and understood how the representation was largely auditory. Her voice also carried a gestural quality, in the sense that emotions were transmitted as she expressed ideas and concepts. When this became evident, I closed my eyes and realized that the auditory representation was sufficient on its own, as it was very detailed and full of content.

In the analysis of my own mode bending task, I had observed that my auditory representations were not as clear as the images in the original post because I used sound effects instead of verbal descriptions. Christina’s mode bend supports this argument well, as it shows how using sound to produce words is highly descriptive and precise. This again points to the power of language as one of the most efficient tools for describing experience. I could even argue that words through sound (speaking) carry a higher degree of content than words through images (writing), as they can transmit emotion with more precision. This is probably why, in texting, we often use emoticons or other symbols to convey emotion.
I also found it interesting how Christina was able to expand the visual dimension of her representation through the use of video. While she described an object, many new visual objects would emerge to support the description. This was not necessarily a bend of mode, since it is still a visual representation; however, it seems that the auditory representation gave birth to new visual representations, creating a whole new multi-dimensionality and richness in her descriptions. This also shows interesting dynamics in how information is translated from one form of representation to another (visual to audio), and how this creates new possibilities for re-representing the original source (visual to audio to visual). In effect, it seems from my experience that the different senses are constantly informing one another, creating more complex forms of human experience (for example, I can hear a song that brings an image to my mind, which I then express in speech, which in turn makes me feel particular emotions that influence my gestures, and so on).