Link 1
https://sites.google.com/view/etec540-tomskinner/assignments-and-activities
In this task, Tom describes his experience using a voice-to-text tool to transcribe a story. Tom used Google Voice Typing, and I used the default Windows 11 tool for my own. The difference between these tools was notable: Google's tool attempted to add punctuation and capitalization, whereas the Windows tool did not add any of those features. When starting this task, I had little experience with audio transcribing tools, so I deliberately chose a tool I had never used before.
My goal was to tell the story as naturally as possible without awareness of the limits of the tool, and this contrasted with Tom’s experience, as he deliberately included “text” elements to test the transcribing abilities of his tool (e.g., saying “quote” and “end quote” to mark reported speech in his story). Despite this, both Tom and I made modifications to our speaking due to the recorded nature of the exercise, and pointed out that we would have told it differently to someone in-person.
These similarities relate directly to the shift from orality to literacy described by Haas (2013) and the impact of technology on written communication. Both of our storytelling experiences were influenced by our literacy and by the expectation that the final words would exist in written form.
Reference
Haas, C. (2013). The technology question. In Writing technology: Studies on the materiality of literacy (pp. 3-23). Routledge.
Link 2
https://blogs.ubc.ca/justine540/2025/02/03/task-1/
The first noticeable difference between my attempt at this task and Justine's is the graphical representation of our bags' contents. In my task, I simply included a picture of the contents and bag, and then proceeded to describe everything in text below. Justine included lines and text labels in the image itself. I think this was effective because it allows the reader to more quickly and easily identify the contents and their purpose. Though some items are easy to identify, others may not be, and this combination was a more effective way to present the contents than my more simplistic image.
Justine made an interesting point that many of these items can represent “a snapshot of a transitional period—one where analog and digital literacies coexist.” This does not only apply from the perspective of a future archaeologist looking back but also in the present. A variety of text technologies exist that serve similar functions, and could be swapped if needed. For instance, the book in Justine’s bag could be replaced with an e-book reader that would serve the same purpose. The function of the pens and planner could easily be simulated with apps on the smart phone. Yet, there are some technologies that can’t be so easily replaced.
Both Justine and I had a band-aid in our bags. The function of a band-aid cannot be substituted with another digital or text technology, and so it is not in a transitional period between analog and digital like many of the others are. Likewise, the food container in my bag serves an important function based on real-world physics and biology that cannot be replaced with a digital technology. It's also hard to imagine it being replaced any time in the future without venturing heavily into the realms of science fiction.
Reviewing Justine’s task helped clarify in my mind the distinction between text technologies and others. Text technologies facilitate symbolic or language communication across space and are designed for that as a primary purpose. Meanwhile, other technologies serve more “practical” real-world functions disconnected from language and communication. It doesn’t mean these technologies can’t be used to make inferences about the owner or used in symbolic ways, but that is not their primary intended purpose.
Link 3
https://blogs.ubc.ca/etec540dj/2025/02/16/an-emoji-story/
The Emoji Story by David shows significant similarities to my own. We both talked about using emojis as different categories of representation (e.g., some emojis represent characters, emotions, or actions). We both made use of sequential ordering to indicate causal or connective relationships between different characters. Additionally, we used a 'line-by-line' sequence similar to written text, suggesting that we both view these emojis as replacements for a textual narrative that we are attempting to convert into symbols.
One of the noticeable differences was the number of emojis in each line. My lines were quite short, as I attempted to convey a single subject and action in each line, similar to a complete written sentence. Meanwhile, David seems to be representing a series of actions in each line, punctuated often by an emotional reaction at the end. As a reader, this causes me to try to interpret each line individually before moving on to the next one. Yet, because there is no convention for this type of storytelling, I don’t think short lines or long lines are necessarily better in the attempt to communicate.
However, our Emoji Stories both make use of convention and novelty to communicate ideas. For example, we both use heart emojis to represent love or romance, as these are cultural symbols the reader will be familiar with. Other emojis are used to represent new ideas, such as the combination of gas tank and scuba mask in David's story or the cassette tape plus loudspeaker symbols in my own. This hints at the true challenge of this task: relying on assumptions about the reader's knowledge in some ways while manipulating the limited emoji pool to represent abstract or complex concepts in other ways, in the hope that the story will be interpreted according to the writer's intent.
The challenge I just mentioned shows how critical it is to rely on convention and shared symbolic understanding in graphical representations, as without it the reader would be completely lost on meaning. This explains why so many software apps, websites, and other digital graphics use similar layouts and symbols to communicate.
Link 4
https://blogs.ubc.ca/twong540/task-7-mode-bending/
Tristan’s reshaping of Task 1 is significantly different from my own. We both started with a similar structure in the initial task – an image of the items in our respective bags followed by a written description and discussion. Tristan’s reshaping used a different sensory mode (aural vs. visual) and a similar semiotic mode (language) whereas my reshaping used a different semiotic mode (language vs. spatial/gestural) and a similar sensory mode (visual).
The New London Group (1996) explained that redesigning is a product of the unique human experience and tool proficiency of the designer and necessarily creates new meaning. They also described how meaning making allows the creator to construct their own identities. This can be seen in Tristan's reshaping, where the switch to an audio track allowed him to frame the task as an "unboxing". The presentation wouldn't have worked the same in text because the pauses, tone of voice, and more "casual" description communicate information to the listener that would be very difficult to express in written form. A similar change can be seen in my own task, where I was able to communicate different information than was in the original task through spatial and gestural design.
In this task, new information was gained, but some was lost as well. We both left out information that wouldn't make sense or would be difficult to communicate in the new modes, revealing a clear tradeoff in choosing between different design elements.
Looking at how others approached this task gave me a new appreciation for how semiotic and sensory modes can impact meaning making. I initially placed a strong importance on designs that shifted from a temporal focus to a spatial one. Going from a design that requires the “reader” to interact chronologically to one that allows them to jump in anywhere seemed significant. Yet, Tristan’s redesign showed a significant difference in how meaning was communicated, even though the written and audio versions both require a chronological “reading”. I think this shows the power of different design modes to influence meaning-making and communication in a hypermedia era.
Reference
The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60-92.