Link #6 – Dana C – Task 6 – An Emoji Story

Dana’s post is here.

Hi Dana,

I was so intrigued by your write-up that I just had to cheat and find out what the movie was via a Google search. Honestly, without the synopsis, I would never have figured it out. I loved how you approached this task – the simplicity of your emoji interpretation juxtaposed with the depth of your ideas and connections with other readings, as well as your personal work experiences, really brings the idea of changing media literacies home.

Like you, I tried to keep things straightforward, especially with the title, but I also attempted to depict a summarized plot through the same process. Your final comment struck me: “It is a peek into my lifeworld while also allowing an understanding of my fellow classmates’ lifeworld.” I think this idea resonates with my reflection (in my post) that emojis are not as ‘universal’ as people think but rather “steeped in the culture and history of its speakers” (Leonardi, 2022).

For example, my first experience of online ‘chat’ or texting was via mIRC in the 90s, an early instant-messaging chat client that used emotes – text indicating that an action is taking place, and a precursor to emojis. So, the first part of my post to you would read as follows:

I was so intrigued by your write-up that I just had to cheat and find out what the movie was via a Google search *blushes with shame.* Honestly, without the synopsis, I would never have figured it out *breathes a sigh of relief.* I love how you approached this task *makes a star-struck smiling face* – the simplicity of your emoji interpretation juxtaposed with the depth of your ideas and connections with other readings, as well as your personal work experiences, really brings the idea of changing media literacies home *gives you a big thumbs-up.*

Of course, because of my now-frequent use of emojis, I can’t help but unconsciously emote with them; however, it is interesting to think about Kress’s ‘gains and losses’ when comparing emoting to emojis. Emojis are definitely the faster draw, but as we experienced in this task, they have a limited vocabulary for abstract and complex concepts.

Thank you, I really enjoyed your post! 

*smiles*


References:

Leonardi, V. (2022, October 18). Are Emojis Really a Lingua Franca? De Gruyter Conversations. https://blog.degruyter.com/are-emojis-really-a-lingua-franca/

Wikipedia contributors. (2023a, January 11). Emote. Wikipedia. https://en.wikipedia.org/wiki/Emote

Wikipedia contributors. (2023b, April 7). Internet Relay Chat. Wikipedia. https://en.wikipedia.org/wiki/Internet_Relay_Chat

Task 6: An emoji story

Made using emojikeyboard.io

The last movie I watched was Ant-Man and the Wasp, so I started by attempting this title; however, unable to find a wasp emoji, I gave up fairly quickly. So my next strategy was to try out titles of my favourite movies/TV shows. After several dire attempts (Everything Everywhere All at Once!), the compromises I would have to make started sinking in, and I finally settled on a title.

Although my initial approach was quite literal (with the title), I gave it up relatively fast. Instead, I focused on capturing the bigger picture or synopsis of the show, so there was definitely no consideration of syllables. Rather, I tried to combine depicting some key characteristics of the protagonist with the ‘big idea’ of the narrative. I don’t think anyone unfamiliar with the show’s plot could guess it based on my emojis alone!

The task was more complicated than I had initially anticipated, and my version ended up being briefer than I expected. However, in my view, emojis are not entirely ‘universal.’ They can be misinterpreted more often than read accurately (especially if the sender and receiver do not know each other well), so I opted for the ‘less is more’ approach.

Despite the challenges, this was a beneficial task to perform. I found it an intriguing way to connect to the Kress reading, where “the dominant modes of representation of speech and writing are being pushed to the margins of representation and replaced at the centre by the mode of image and by others” (pg. 17, 2005), the ‘others’ here being emojis. However, when reflecting on the “question of gains and losses, in the move from one mode and its arrangements to another mode and its different arrangements” (pg. 16), as well as the “aptness of fit between mode and audience” (pg. 19), I did feel the use of emojis for this task resulted in a loss of richness and depth of narrative for the plot. Then again, this might be because I am not a good ‘designer’ of emoji communication (just as someone might not be a great musician despite knowing how to play music) and could not effectively transform the intended meaning into the new semiotic landscape.

Reference:

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22.

Link #5 – Phiviet Vo – Task 11 – Text-to-Image

Phiviet’s post is here.

Hi Phiviet,

The linking assignment requires, or at least implies, that we should connect to tasks we have completed ourselves (to link, compare and critically reflect on); however, I just had to respond to your post after seeing your quick experiment with Craiyon AI for Task 11, even though (like you) I completed the Detain/Release simulation for that task.

To experience it firsthand, I tried the same prompts (rich people, poor people) in DALL-E and got results similar to yours (the images are below) – rich people are all white and dressed similarly, except for one black man who needed wads of cash in his hands to signal his ‘richness.’ Poor people are all South Asian women and children!

Despite everything we have read and heard about human bias being integrated and amplified in AI, it still shocks me when I am confronted with it directly.

Yes, South Asia has a high level of poverty, especially compared to North America and certain parts of Europe. However, what stood out was the homogeneity and the implication of dress, culture, gender, age and race in this depiction of poverty. 

DALL-E creates the link between textual semantics and their visual representations by training on 650 million images and text captions (Johnson, 2022). These images (I assume) come primarily from the (uncurated) Internet, i.e., most online pictures captioned ‘poor’ must be of South Asian women and children. So I did a quick Google Image search for poor people, and the results matched. Below is a screenshot.
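As an aside, anyone wanting to repeat the DALL-E side of this experiment programmatically rather than through the web interface could do so with a minimal sketch like the one below, which uses the OpenAI Python SDK’s image endpoint; the model name, image count and size are my assumptions, not details from either of our posts.

```python
# Minimal sketch of re-running the prompt experiment via the OpenAI Python SDK.
# Model name, image count and size are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

for prompt in ("rich people", "poor people"):
    result = client.images.generate(
        model="dall-e-2",   # assumed model name
        prompt=prompt,
        n=4,                # four images per prompt
        size="512x512",
    )
    # Print the URLs of the generated images for manual comparison
    print(prompt, [image.url for image in result.data])
```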

The difference that stood out for me between the two sets of images was that of context. Because the Google images are from the real world, they have an integrated backdrop that may provide more information relevant to reaching the judgement of ‘poor.’ On the other hand, DALL-E only extracts some salient features (race, gender, clothing style) from the data and then generates new images based on these, minus the context, thus reinforcing existing stereotypes.

This also leads us back to Dr. Shannon Vallor’s idea that “the kind of AI we have today and the kind we’re going to keep seeing is always a reflection of human-generated data and design principles. Every AI is a mirror of society, although often with strange distortions and magnifications that can surprise and disturb us” (Santa Clara University, 2018, 11:51).

Thank you for triggering an intriguing app exploration.

References:

Johnson, K. (2022, May 5). DALL-E 2 Creates Incredible Images—and Biased Ones You Don’t See. WIRED. https://www.wired.com/story/dall-e-2-ai-text-image-bias-social-media/

Santa Clara University. (2018, November 6). Lessons from the AI Mirror | Shannon Vallor [Video]. YouTube.


Link #4 – Jamie Husereau – Task 9 – Network Assignment

Jamie’s post is here.

Hi Jamie,

I enjoyed reading your post on this task for several reasons: first, because it was so different from my approach, and second, because I learnt a lot from your interpretation of the quiz data. I found the Palladio program confusing when first assigned this task and couldn’t get very far in deciphering it, so it was helpful to see how you understood it. After reading your post, I played with it again, and it made much more sense! From the perspective you set up, it seems my choices were not as ‘mainstream’ as everyone else’s.

Though I now have a better handle on reading the data thanks to you, my original reflection about understanding the human reasoning behind it remains the same: that is hard to gauge from the data alone. I can see from your post’s ‘Political Implications’ section that you are of a similar mind.

Does the fact that my selection is not as mainstream as the rest of the group’s imply that I have little in common with the group? Not at all. Does it indicate I have a vastly different musical taste from the rest of the group? Not necessarily. You have stated aptly that “misinterpreting data can lead to misrepresenting people.”

Your final comments on the original record tracks, and how and why they were selected, take me back to my original post on this task, as well as to the map from Module 8.1 that visualizes documents available through the Internet Archive and demonstrates an uneven and limited representation of certain parts of the world over others, in almost every mainstream realm, digital or otherwise.

Link #3 – Amy Stiff – Task 8 – Golden Record Curation

Amy’s post is here.

Hi Amy,

Like you, I chose to use geography and cultural diversity as curating factors for the Golden Record, so I thought it would be interesting to compare our final choices. Here’s a quick visual I sketched to understand the selections:

70% of our choices were the same; however, we diverged on three track selections. The secondary curation criteria we each used were entirely dissimilar, which could presumably explain the differences: you cited the range of human emotions expressed in music as a decisive factor, while I claimed a preference for tribal sounds over classical ones.

Of course, the Venn diagram I sketched is a ‘logic diagram’ used to explain the logical relationships between sets of things. In retrospect, and speaking only for myself, I picked the tracks I liked first and then applied my stated criteria to them, so a Venn diagram is perhaps not the most useful tool for understanding why I like certain music over other music.
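Incidentally, the 70% figure itself is simple set arithmetic; a minimal sketch below shows how it falls out of two ten-track lists that share seven entries (the track names are placeholders, not our actual Golden Record selections).

```python
# Overlap behind the Venn diagram: two ten-track lists sharing seven entries.
# Track names are placeholders, not our actual Golden Record selections.
my_picks = {"Track 1", "Track 2", "Track 3", "Track 4", "Track 5",
            "Track 6", "Track 7", "Track 8", "Track 9", "Track 10"}
amy_picks = {"Track 1", "Track 2", "Track 3", "Track 4", "Track 5",
             "Track 6", "Track 7", "Track A", "Track B", "Track C"}

shared = my_picks & amy_picks          # tracks we both chose (7)
diverging = my_picks ^ amy_picks       # tracks only one of us chose (6)
overlap = len(shared) / len(my_picks)  # 7 / 10 = 0.7

print(f"Shared: {len(shared)} tracks ({overlap:.0%})")
print(f"Diverging picks (both lists combined): {len(diverging)}")
```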

The entire exercise also takes me back to the first statement made at the start of this Module (8.1 Why digitize? Digitize what?): “Texts are promoted through time for many reasons that may have little to do with any inherent quality.” So often I think I am ‘neutral,’ but in reality I am far from it.

Link #2 – Chris Rugo – Task 4 – Manual Printing

Chris’s post is here.

Hi Chris,

When it came to this task, I didn’t think twice and instinctively opted for handwriting, a communication medium I have loved since childhood. However, your post on the manual process of producing text via potato stamping was eye-opening.

As you walked us through your robust process of creating the simple text of your name, it reinforced Bolter’s idea that the value of electronic writing systems lies in making “structure a permanent feature of the text. The writer can think globally about the text” (pg. 30, 2001), a trait not afforded by this stamping process (but still possible in a limited way via handwritten text).

Writing is a way of thinking for me, but even with a complete set of 26 potato letters ready at hand, I would not be able to express or organize my thoughts adequately via this technology, except perhaps as a final (tedious) representation medium. As Bolter mentions, “the writer is thinking and writing in terms of verbal units or topics, whose meaning transcends their constituent words” (pg. 29, 2001), let alone individual letters, as is the case here.

Like you, I have always considered block printing more of an artistic or textile printing endeavour, so seeing its role and fit in the jigsaw puzzle of developing text technologies was interesting. In addition, your comparative analysis of manual versus mechanized printing processes was informative, and I agree with you on the far-reaching consequences of the latter.

Most of all, I appreciate the parallel you drew between your process and that of students in maker spaces, reinforcing Bolter’s (and somewhat McLuhan’s) idea of remediation: “a process of cultural competition between or among technologies” (pg. 23, 2001). Clearly, 3D printers win over hand-carved potato text blocks any day, but it was an interesting reminder to think of our maker tools as communication technologies as well.

Reference:

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print (2nd ed.). Lawrence Erlbaum Associates.

Link #1 – Jessie Young – Task 7 – Mode-Bending

Jessie’s post is here.

Hi Jessie, 

I enjoyed your rendition of this task tremendously! Reinventing this task as an ‘unboxing’ vlog is pure genius, and your execution of the video makes you look like a pro 😉

I have to admit, I think of video as a primarily visual medium, which is why I shied away from it for this task, but the conversational style of your vlog made me realize that the visual was secondary and that the orality of your piece was fundamental to it (reminiscent of Ong’s claim of a return to a secondary orality).

I found your ‘unboxing’ performance authentic, fun, engaging and informative, as well as checking off many other boxes (entertaining, thoughtful). There was a point when I forgot this was graduate work/study, and consumer-me took over and started clicking on product links, wandering off on tangents. For me, that marks the integrity of your work: its ability to elicit a variety of spontaneous responses different from those perhaps intended. It also embodies the New London Group’s idea of multiliteracies, calling attention to the dynamic nature of “language and other modes of meaning,” which are “constantly being remade by their users as they work to achieve their various cultural purposes” (Dobson & Willinsky, 2009).

A final note: I also greatly appreciate your explanation/breakdown of how you executed the task, because it motivates me to experiment with the media/interfaces you used.

References:

Dobson, T., & Willinsky, J. (2009). Digital literacy. In D. R. Olson & N. Torrance (Eds.), The Cambridge handbook of literacy (pp. 286-312). Cambridge University Press. 

Task 12: Speculative Futures

Image credit: Teslasuit

Prompt: Describe or narrate a scenario about a piece of clothing found a generation into a future in which society as we know it has come apart. Your description should address issues related to citizenship and elicit feelings of excitement.

In the Almost-Now, machines and AI manufacture everything with perfect cold precision. As a result, VR is the norm, and every body is encased in the perfected ‘Teslasuit.’ These skintight suits use integrated EMS (electro-muscular stimulation) to allow wearers to feel and experience any desirable motion or sensation. A second skin, so to speak, and necessary to communicate with the synced-in smart environment surrounding us whether at home, work or play, these suits are durable, hygienic and indestructible, with self-repairing capabilities, so that one will suffice for a lifetime. Waterproof, tear-proof, and with built-in features to take care of waste excretion and other physical needs, they need never come off. With the advanced version of ZOZO’s built-in 3D scanning tech, the suit grows with the individual, and when the encased body expires, this vital casing uses Infinity Mushrooms tech to biodegrade the corpse safely into the environment.

Image credit: Sciencealert.com

In the Almost-Now, democratic socialism has taken over and class privilege has ceased to exist. The state issues these unisex suits to every child at birth, and with their choice of standard black, white or grey coating, they play an important role in unifying the masses.

But recent events have disturbed the status quo. The discovery of an ancient relic has instigated unrest and awakened primal desires thought long-forgotten. An obsolete cloth – hand-marked, hand-woven, rough to the touch, physical, torn, coloured, weathered and frail. An ode to a reality bygone, weighted with stories untold of girl and boy, man and beast, sacred and profane, belonging, longing and separation.

Image credit: Textile Museum of Canada

Advanced AI has partially decoded this crude tapestry of an outdated reality, and by popular demand, it will soon be available for upload into new and existing Teslasuits for trial runs. Users will now be able to vir/partially experience the culture and emotions of the primitives who created this simple object and speculate on the rudimentary nature of their existence.


References:

Kapfunde, M. (n.d.). Muchaneta Kapfunde. https://fashnerd.com/2017/11/zozosuit-hitech-fashion-tech-wearable/

MacDonald, F. (2016, February 16). This Mushroom Suit Digests Your Body After You Die. ScienceAlert. https://www.sciencealert.com/this-mushroom-suit-digests-your-body-after-you-die

Teslasuit. (2022, December 2). Full Body VR Haptic Suit with Motion Capture | TESLASUIT. https://teslasuit.io/products/teslasuit-4/

V2_ Lab for the Unstable Media. (n.d.). The Mushroom Death Suit. https://v2.nl/works/the-mushroom-death-suit

Reflection:

I loved the reading and the assigned task this week, probably because of my background in architecture and design. Many of the references in the assigned reading were familiar to me, and the purpose of speculative design resonated firmly once placed in the historical timeline of the design industry’s development via the Beyond Speculative Design: Past, Present – Future reading (Mitrović et al., 2021).

I primarily used the diagram from Chapter 2 (Mitrović et al., pg. 27, 2021) to structure my methodology for this task, as it made a lot of sense. So I tried to root the scenario firmly in existing technologies but then extrapolated “to create a modified version of the world” (pg. 28) while still trying to maintain “plausibility.” I also introduced social and cultural issues relevant to me but left these open-ended, and finally tried to come full circle, ending right back at the start… with the speculated now speculating.

Reference:

Mitrović, I., Auger, J., Hanna, J., & Helgason, I. (2021). Beyond Speculative Design: Past, Present – Future.

Task 11: Detain / Release

Algorithms are nothing more than opinions embedded in code.
(Cathy O’Neil in Talks at Google, 2016)

A machine-learning algorithm can’t tell the difference between morally good, neutral or unjust forms of bias, so that’s something humans have to be much more careful about.
(Shannon Vallor in McRaney, 2018)

The Detain/Release simulation was a fascinating yet unsettling task that revealed much about how flawed the judicial system seems to be.

While going through the simulation, I had many questions and needed more information on individual cases. For example, if the crime was theft, then theft of what exactly? What was the nature of the robbery? Not all offences are of the same rank, as the case of the 18-year-old who tried to ride a six-year-old’s bicycle to school demonstrates in the McRaney (2018) podcast. How was the level of expected violence or crime of the defendants determined? What factors determined the recommendations of the prosecution? So much critical information related to the context of these human lives was missing. I also became more acutely aware of my own biases, releasing women (perhaps they were mothers?) and instinctively assuming they were ‘less dangerous’ than men.

Each click to Detain or Release was burdened by the idea that these decisions would have far-reaching effects on real human lives. 

The process raised the question: can machine code integrate the human context and compassion critical to these decision-making processes? The answer is no, so these algorithms must be used cautiously, with wisdom and ethically guided human supervision.

Much stood out from the podcasts I listened to this week. Now that algorithms are becoming pervasive in virtually every sphere of life, from banking, shopping, policing and transportation to education, data ethics (a super-critical domain today) must be prioritized. Transparency must be demanded.

Before we start talking about machine morality, we have to think about human morality, that is, the morality of the people designing the machines.
(Shannon Vallor in McRaney, 2018)

References:

McRaney, D. (Host). (2018, November 21). Machine Bias (rebroadcast) (no. 140). [Audio podcast episode]. In You Are Not so Smart. SoundCloud. 

Talks at Google. (2016, November 2). Weapons of math destruction | Cathy O’Neil | Talks at Google. [Video]. YouTube.

Task 10: Attention Economy

This was an incredibly frustrating exercise.

Having read and watched the week’s assigned content, I thought I would be up to the task and knew intuitively what was to come; despite this, I was amazed at how infuriated the poorly designed user interface made me feel, and at the chafing reminder of how much time is wasted navigating such things, and the manipulative techniques embedded within them, in real-life scenarios.

I attempted the game several times during the week and never finished it, despite trying my level best. At first, I thought that was deliberate and the game was designed to be a never-ending loop of madness and folly upon which to reflect… but hats off to Amy Stiff! When I checked her post for this task, lo and behold, she had finished it! Perhaps others have too (I stopped checking after Amy’s post verified my inadequacy :). This led me firmly to the conviction that I must not be human, because that was the stage of the game/task I could not get past – selecting pictures to prove my human-ness.

Completing the form was annoying and frustrating as intended, but also an informative, practical exercise in the dark patterns described by Brignull (2011). I could not help but compare this experience to other online interfaces with similar tactics I had been susceptible to in the past, especially those involving financial transactions.

Misleading buttons, incorrect highlights, dubious language, and hard-to-fill (if not impossible) forms gave no indication or feedback regarding how to proceed or what exactly the problem was.

I found Brignull’s article particularly intriguing because of his references to ethical design. I studied architecture and design in the 90s when design ethics were inseparable from design education and taught integrally within it, never considered outside of it. The default (unchallenged) premise was that designers have a moral responsibility to ensure benefit and not cause harm to their end-users. Tristan Harris (featured this week for his TED talk) worked as a design ethicist at Google (Wikipedia, 2022), and Pickering (2021) writes that a design ethicist is “someone who evaluates the moral implications of design decisions and takes responsibility for the effect those decisions have on the world at large.”

There seems to be a shift in where the weight or responsibility of design ethics lies, and I wonder how and in what ways design education and recent developments within it may have contributed to this transformation.


References:

Brignull, H. (2011). Dark patterns: Deception vs. honesty in UI design. A List Apart, 338.

Pickering, M. (2021, December 29). How to be a design ethicist at any company – UX Collective. Medium. https://uxdesign.cc/how-to-be-a-design-ethicist-at-any-company-f166b2f34ecd

Wikipedia contributors. (2022, December 4). Tristan Harris. Wikipedia. https://en.wikipedia.org/wiki/Tristan_Harris
