Task 7 – Mode Bending

This task is a redesign of the original "what's in your bag" task. When I think about the purpose of the original task, I think about it from the perspective of identity texts: what do the things we carry say about who we are? I had discussed the concept of "mom pockets," the game I play with friends about the things we carry in our pockets that are specific to our roles as parents. I have since realized that much of my identity exists in relationship to others, and that this cannot be fully conveyed through a written discussion of items. Dobson and Willinsky (2009) make the point that writing is formal and monologic, whereas speech is informal, interpersonal, and dialogic. Likewise, The New London Group (1996) draws our attention to the idea that different modes of communication draw forth different languages. For this redesign, I decided to play with my understanding of modes of communication, identity, and texts by a) using social media to play "mom pockets" and summon a shared identity with other moms, and b) recording a conversation with one of my closest friends about "mom pockets" and identity. To me, it is a much richer discussion to treat the things we carry as relational items and as texts situated in a shared identity. The story that the 'texts' I carry tell is what The New London Group (1996) might describe as the representation of a shared cultural context.

When you listen to the recorded conversation between my friend and me, you not only hear us speak to a shared identity, you can literally hear that shared identity in our conversation patterns, our conventions of speech, and the language we use to make meaning. A theme that has come up in multiple readings (Dobson & Willinsky, 2009; Kress, 2005) is the loss of immediacy in written work. The audio recording affords an immediacy that my original written assignment lacks, and you can hear the give and take of a shared identity. If we consider the claim that Dobson and Willinsky (2009) make about writing (formal, monologic) versus speech (informal, dialogic), we can see how they frame the two in opposition. In this sense, my first version of the task and this version may be considered opposite modalities. I have also included a social media component, which can be framed as a hybrid of the two: it is written, but it aims to mimic the informal nature of face-to-face communication.

 

This audio clip represents the first half of my discussion with my friend about "mom pockets." The discussion covers the items in our pockets, our identities as parents, whether or not dad pockets exist, and a little discourse on the way women, specifically, are socialized to parent. I've excluded the second half of our discussion as we digress into pandemic parenting.

Below is an image gallery of screenshots showing how I use social media and "mom pockets" to summon a common identity.

 

(Future Deirdre here: I discovered that I could pull up very old stories on Instagram through the Highlights option, so I'm able to include past, pre-pandemic "mom pockets," including the infamous avocado pocket mentioned in my audio recording.)

References

Dobson, T., & Willinsky, J. (2009). Digital Literacy. In D. Olson & N. Torrance (Eds.), The Cambridge Handbook of Literacy (Cambridge Handbooks in Psychology, pp. 286-312). Cambridge: Cambridge University Press.

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22. https://doi.org/10.1016/j.compcom.2004.12.004

The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60-92.


Text, Images, Hypermedia, and Telestrations

Given the class's ongoing discussions of the affordances of hypertext and the goal of creating a web of interconnection among the students of ETEC 540, a few of us got together virtually to play a game of Telestrations. For those unfamiliar with the game, it's a mashup of Pictionary and broken telephone in which players take turns drawing and guessing phrases, objects, or actions. Telestrations was a great opportunity to play with text, image, and hyperlinks.

You can follow the hyperlinks from our respective blogs to see how the game unfolded. It works best if you follow the hyperlinks linearly, but nothing is stopping you from jumping around.

Six of us played. I started the game off with the drawing pictured below. Can you decode what phrase, object, or action the image is trying to illustrate? Before continuing, feel free to put your guess in the comments; otherwise, click on the image to see Sandra's interpretation of my drawing. Click here if you want to see the original phrase that prompted the drawing.

If you'd rather just jump around from blog to blog, follow the links to see:

It’s interesting to think about the ways in which computer-mediated communication and hypertext support a game like this and what is lost when we remove the immediacy of playing this game in person.


Task 6 – An Emoji Story

Emoji representation of a popular film. Numbering has been used for referencing.

I recently had to watch the movie that I've depicted in emoji for another class, so it was still fairly fresh in my mind. Beginning with the title, I chose emoji that best reflected the most salient plot points in the movie. The emoji used to represent the title, for instance, reflect what I think are the most iconic or memorable parts of the film. In telling the plot through emoji, I focused on individual scenes represented on separate lines.

Interestingly, Kress (2005) discusses how different modes of representation each have their own qualities that define their use. In the written mode, for example, the sequence of words is very important for meaning making, and readers depend on the specific order an author has laid out. With images, by contrast, the elements chosen for representation are presented simultaneously (Kress, 2005). What is to be said of the affordances of emoji? In my emoji story, I've taken advantage of the affordances of the written word, in that the emoji need to be viewed sequentially, at least line by line. However, this isn't as clear-cut if we look at each individual line of my story. For instance, I need the reader to view the second line of my story sequentially, whereas for the ninth line I need the reader to take in the images simultaneously. What about the title? It doesn't require sequential reading, nor does it really require viewing the images simultaneously. For the most part I could have put the emoji in any order; however, I do need the reader to group some of the emoji together as one item: the red circle, the pill, and the blue circle need to be tethered together as one image rather than three separate images.

When trying to depict narrative with emoji, I also encountered a clash between the conventional sequencing of words with respect to objects and actions and the fixed directionality of particular emoji. Take a look at line ten of my emoji story and consider the meaning that you make. Who is shooting whom in that sentence? The toy gun emoji is in a fixed position. Contextually, we know the direction a gun fires, and those at the barrel end are on the receiving end of a bullet. However, as Kress (2005) notes, the first position in a written sequence carries specific meaning: it can signal that the person placed first is causing, or responsible for, the action. With respect to this emoji story, is the first person in line ten being shot at, or are they the one doing the shooting? In other words, does the direction of the image (the toy gun) take precedence for meaning making, or does the linear sequence afforded by sentence structure take precedence?

In Bolter's (2001) discussion of picture writing, he makes an important point about its lack of narrative structure: "the picture elements extend over a broad range of verbal meanings: each element means too much rather than too little" (p. 59). Conversely, Kress (2005) describes words as vague and empty of meaning without a reader to interpret them. If both text and pictorial depictions are vague, then perhaps the meaning-making magic happens when they are combined. Personally, I can relate to using emoji to enhance my written word in casual text conversations. Because I have been told I have a blunt style of written communication (I just don't see the need for a lot of formality and exclamation marks), my tone is often misinterpreted. If I throw in an emoji or two, I can quickly convey that I mean no trouble. Likewise, I have been known to clarify the tone of a text by recording myself reading it with the intended tone. Consider this very real text exchange between me and my younger sister (for context, my sisters and I were trying to meet up virtually but kept having scheduling conflicts).

Me: You live your life. Join us whenever the scheduling works out for you! 

Sister: you sound sarcastic AF

In her defense, I can see how my message changes significantly with tone. My intended tone was one of support and flexibility, and I ended up sending a voice recording in order to speak the text as intended. In text messaging we augment our writing with emoji, GIFs, voice recordings, and the Tapback feature on individual text bubbles on iPhones. I'm reminded of Kress (2005, p. 17):

As one effect of the social and the representational changes, practices of writing and reading have changed and are changing. In a multimodal text, writing may be central, or it may not; on screens writing may not feature in multimodal texts that use sound-effect and the soundtrack of a musical score, use speech, moving and still images of various kinds. Reading has to be rethought given that the commonsense of what reading is was developed in the era of the unquestioned dominance of writing, in constellation with the unquestioned dominance of the medium of the book.

It makes me wonder about the representational changes to reading and writing under the dominance of the medium of the smartphone.

References

Bolter, J. D. (2001). The breakout of the visual. In Writing space: Computers, hypertext, and the remediation of print (pp. 47-76). Routledge.

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22. https://doi.org/10.1016/j.compcom.2004.12.004


Task 5 – Twine

It took more hours than I care to reveal, but I've created my very first Twine! I don't mind that it took me a long time; it was pleasantly frustrating, and the process was as enjoyable as the product. In this game, called Get out the door, you've slept through your alarm and are now running late. Your job is to get through the game before 8 AM. There are a few paths to victory and several dead ends. I hope you enjoy the game, and I'd be happy to hear what you think.

Get out the door (2)

I had never used Twine before, and there was indeed a learning curve. I think one needs a decent level of frustration tolerance before embarking on a Twine project, and I imagine that the more I used the program, the more efficient I would become with it. For me personally, though, part of the joy of making the game was the problem solving, not just in coming up with the idea but in actually employing the program. Kafai (2006) discusses the value not of playing games for learning, but of making games for learning. I can see how Twine would certainly support the idea of 'making' as learning. For instance, my peer Ying used Twine to tell an informative procedural story. Having students use Twine to demonstrate concepts would be an excellent way to make games for learning. Twine could also help students illustrate and make explicit their thinking and understanding of topics, especially students who are challenged by showing their thoughts linearly. As Bush (1945) notes, the mind works by association and connects ideas through a web of trails, so why not capitalize on that associative experience by letting our students demonstrate their thinking that way too? Again, Ying has a really great discussion of this on her page.

My process for creating my game in Twine evolved organically and was not premeditated. What ended up happening was that any time a fork in the story was created, I would continue fully down one path before returning to the fork. In that sense, there was a linear process embedded in making a game with parallel narratives. Bolter (2001) writes that "all writers have had the experience of being overwhelmed with ideas as they write" (p. 32) and discusses the idea that, before the digital age and before printing, there was a sense that writers were overwhelmed from within. I think that being able to use Twine to write parallel stories could ease that sense of being overwhelmed: it was almost as if every fork I created served as a bookmark of an idea to come back to. This got me thinking about all the ways I bookmark ideas in my own life. One habit I have is keeping multiple tabs open on my computer for related ideas I can come back to. In the last few months I have been using concept maps. I have multiple notes in my notes apps to come back to. And sometimes I just store ideas in my head, hoping I'll remember to return to them. I appreciate hypertext in this sense, because I can use it to follow streams of consciousness (I assume we can all relate to going down a Wikipedia rabbit hole). I can see how something like hypertext could help ease the sense of being overwhelmed both by ideas from within and by all the information available for us to assimilate from the outside in (Bolter, 2001).
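For anyone curious what one of those forks looks like under the hood, here is a minimal sketch in Twee notation, the plain-text format Twine can import and export. The passages and wording below are invented for illustration rather than copied from my game, but the double-bracket links are what actually create each branch:

:: Wake Up
It's 7:40 AM and your alarm never went off.
[[Jump in the shower->Shower]]
[[Skip the shower and get dressed->Get dressed]]

:: Shower
The hot water is tempting, but the clock is ticking.
[[Get dressed]]

:: Get dressed
You grab whatever is on top of the laundry pile.
[[Head for the door->Front door]]

:: Front door
You make it out with minutes to spare.

Any link pointing to a passage I hadn't written yet showed up as a loose end in Twine's story map, which is exactly why the forks worked so well as bookmarks.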

After reading about Nelson's (1999) Xanadu, I was surprised to see how much my Twine resembled his preliminary ideas on parallel documents. Nelson discusses the idea of parallelism between documents and the need to show, visually, the connections between them in order to compare them. I had never heard of Xanadu before, and as far as I can tell it has never come to fruition, but I can certainly understand the desire, like that of Bush (1945), to find a better system for organizing all the information available in the world. I can also understand the desire to find a system that breaks free of the limitations of the material world.

The back end of a game created in Twine may resemble the preliminary drawings of Theodore Nelson’s (1999) Xanadu.

 

I'm curious: how do you keep track of your thoughts? Do you feel like you think more linearly or more associatively? How do you 'bookmark' ideas in your day-to-day life?

 

References

Bolter, J. D. (2001). Hypertext and the remediation of print. In Writing space: Computers, hypertext, and the remediation of print (pp. 27-46). Routledge.

Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101-108. https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/

Kafai, Y. (2006). Playing and making games for learning: Instructionist and constructionist perspectives. Games and Culture, 1(1), 36-40.

Nelson, T. (1999). Xanalogical structure, needed now more than ever: Parallel documents, deep links to content, deep versioning and deep re-use. ACM Computing Surveys, 31(4).


Task 4 – Manual Scripts

A picture I took of my writing while channeling my best inner millennial influencer

As an older millennial, I had computers as part of my school experience, but we mostly did our work by hand. I remember having a very prominent writer's bump on my right hand that I was self-conscious about. Over time, without my even noticing, the writer's bump disappeared, signalling a shift from primarily manual writing to typing. I found the exercise of manually writing 500 words physically straining: where my writer's bump used to be, a purple, fleshy indent formed and became very tender. I can't remember the last time I wrote 500 words by hand in one sitting.

Compared to typing, this was by far a more time-consuming activity. When I made a mistake, I either corrected it by writing directly on top of the error (e.g., changing a lowercase 't' to an uppercase 'T') or crossed out the word with a line and continued writing. In the case of a missing word, I wrote the word above the line with a little arrow indicating where it should be inserted. I used a pencil to complete this task, and since I didn't use an eraser, I believe I would have edited my work the same way with a pen.

It's hard to say which form of writing, manual or typing, I prefer, and I am reminded of something host Brad Harris said in an episode of How it began: A history of the modern world: "the value of a printed book is its content, but the value of a handwritten book was mostly the object itself" (Harris, 2018, 1:58). There is a charm to writing by hand and something to be said for its aesthetic appeal. Doing this exercise has made me nostalgic for my late grandmother's penmanship and the notes my mum used to write me in my packed lunch. I enjoy manual writing when I'm doing something stylistic (e.g., crayligraphy or brush lettering) or personal, like writing a letter to a friend. I also write manually when I know it will take less time to jot something down than to open a computer (e.g., a grocery list). On the other hand, I prefer typing for assignments, lesson planning, daily correspondence, and anything that requires professional or formal writing. In my opinion, the benefits of typing are speed, uniformity, ease of editing, and ease of sharing documents.

 

 

References

Harris, B. (Host). (2018, February 5). The printed book: Opening the floodgates of knowledge [Audio podcast episode]. In How it began: A history of the modern world. https://howitbegan.com/episodes/the-printed-book/
