LINK 6 – COMMON SPECULATIONS IN OUR VISIONS OF THE FUTURE

The last week of ETEC540 proved to be one of the more creative weeks in the course, and as some of us round out the final tasks of our MET journey, the light at the end of the tunnel draws increasingly close. The speculative futures task challenged us to creatively formulate a vision of the future, with specific focus on the relationship human beings will have with technology, education, media, and various types of text. It was interesting to see most of my colleagues visualize this relationship as following a similar trajectory, appealing to common concepts and technologies, and transforming the world socially, politically, and culturally.

I endeavoured to consider AI in the distant dystopian future, attempting to warn of the potential rise of authoritarian-type societies. The basis for my short story was Harari's idea of the 'useless class', magnifying what that may truly look like in a neo-Marxist future. In this speculative future, the rise of AI algorithms has automated most middle-class jobs, leaving two parties: the 'haves' and the 'have-nots'. Essentially a new-age proletariat vs. bourgeoisie story, the narrative reflects on the thematic role of text, technology, and education within this future. Education has become reserved for those deemed 'worthy', and those not considered to be in that category are left to fend for themselves. In this cultural shift, the fundamentals of education have changed significantly, harkening back to more primitive and naturalistic forms of knowledge (e.g., foraging, hunting, farming), whereas the more privileged technology users obtain occupations 'behind the AI scenes': programming, coding, and the like. The divide created by algorithms and AI was immense and immeasurable.

At the heart of the story is the imperative that the algorithms embedded within AI technology demand deep and intentional ethical consideration from their human creators, and need to be utilized for the right reasons, by the right people.

Similarly, I found that some of my colleagues appealed to analogous future circumstances. For example, Megan's vision of the AI-enabled future was home to an app-based survey meant for middle-class workers who had suffered job loss as a result of the increasing automation in society. The AI analyses the user-inputted information, runs it through an algorithm, and generates a prediction of the likelihood of success in a new industry. In both our speculative futures, we've envisioned an AI making important decisions for human beings, essentially dividing and sorting them into industries or factions based on certain personal factors, with deep cultural and societal implications.
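Neither of our stories spells out the mechanics of that sorting, but the shared premise can be sketched in a few lines of Python. Everything below (the survey fields, weights, and industry labels) is invented purely for illustration; the point is how a handful of self-reported factors collapse into one opaque score that does the dividing.

```python
# A minimal, hypothetical sketch of the kind of sorting algorithm both
# speculative futures imagine: survey answers in, industry assignment out.
# All factors, weights, and labels here are invented for illustration.

SURVEY_WEIGHTS = {
    "years_experience": 0.2,
    "retraining_willingness": 0.5,  # self-reported, 0-10
    "technical_aptitude": 0.3,      # self-reported, 0-10
}

# Ordered from 'have-not' occupations to privileged behind-the-scenes work.
INDUSTRIES = ["agriculture", "care work", "AI maintenance"]

def predict_success(responses: dict) -> float:
    """Collapse a survey into a single 'success likelihood' score (0-1)."""
    score = sum(SURVEY_WEIGHTS[k] * responses[k] for k in SURVEY_WEIGHTS)
    return min(score / 10, 1.0)

def assign_industry(responses: dict) -> tuple[str, float]:
    """Sort a worker into an industry based on one opaque number."""
    likelihood = predict_success(responses)
    index = min(int(likelihood * len(INDUSTRIES)), len(INDUSTRIES) - 1)
    return INDUSTRIES[index], likelihood

worker = {"years_experience": 8, "retraining_willingness": 4, "technical_aptitude": 6}
print(assign_industry(worker))  # e.g. ('care work', ~0.54)
```

The unsettling part, and the thematic heart of both stories, is that the weights are authored by someone: whoever sets them decides who lands on which side of the divide.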

Alternatively, Megan and I differ when it comes to the factors involved in making these decisions. I propose that genetic predispositions and relevant biomarkers will play an important part in the analysis of information, enabling AI to reach more rational, sound, and less discriminatory decisions; a more optimistic view of the improvements that will be made to algorithms, despite the dystopian setting. Comparatively, Megan claimed that racist and sexist discrimination will be perpetuated to a higher degree within future algorithms, despite 'race' not being included in the work reassignment survey. This prompts me to question how these modes of discrimination could be perpetuated in the first place. It's my presumption that this information was meant to be inferred from each user's name, but a quick Google image search for "Justin Scott" would produce contradictory results.

Likewise, James produced a vision of the future that commented on middle-class occupations becoming overwhelmingly influenced by automation. He also engaged with the idea that most available jobs would be 'behind the scenes', as people would have to learn how to code, program, and/or have influence in directing the ethics around AI-enabled technology. I appreciated James' characterization of the workforce as completely on edge, where workers have secured limited positions on a short-term basis and continued overwork only potentially yields success.

Of course, when we begin dealing with the concept of people programming, coding, and managing the direction of AI algorithms, we must be vigilant in assessing the inherent biases. We've frequently seen the often unconscious prejudices built into AI technologies, and we need to be extremely careful to ensure that these are corrected as AI continues to take hold of the future, especially when we are dealing with language and culture.

There is utility in discrimination, and it's exceptionally important to balance the levels of distinction we bring with us into the future. Discrimination is the recognition and understanding of the difference between two things; this is not a negative concept. We discriminate against all other potential partners when we choose an individual to take as our significant other, for example. We discriminate against all other animals, or all other breeds, when we choose a specific breed of dog as our pet. Discrimination becomes a problem when it turns into prejudice: the unjust treatment that follows from the aforementioned recognition. This we must leave in the past.

Regardless, it was interesting to recognize that my colleagues utilized similar ideas presented in Yuval Noah Harari's article Reboot for the AI Revolution. We've all touched on the potential for the 'useless class', a faction of people who've been booted from their occupations due to automation and AI-enabled technology. Our differences resided in the factors embedded within the AI algorithms and the ways in which they arrive at their decisions.

 

Harari, Y. N. (2017). Reboot for the AI revolution. Nature, 550(7676), 324-327. Retrieved from https://www.nature.com/news/polopoly_fs/1.22826!/menu/main/topColumns/topLeftColumn/pdf/550324a.pdf

LINK 4 – PERSONAL DATA AS THE CURRENCY OF THE ATTENTION ECONOMY

Without question, the most infuriating exercise of ETEC540 was the User Inyerface 'game' from Week 10. The interface, created by Bagaar, was developed to illustrate the various dark patterns many internet users may experience when navigating the digital world. The task also demonstrated a variety of key considerations web developers need to appraise when building web interfaces and, alternatively, highlighted what many internet users may take for granted in the fundamental processes of traversing the digital realm. The game is completely counterintuitive to the ways in which we've been (un)consciously programmed to utilize the fundamental design conventions of the internet, and it strives to waste as much of the user's time as possible.

Of course, many of us recognized the plethora of dark patterns utilized in the game: the overall poor design, the double negatives strewn across the password creation page, the ambiguous words and images on the CAPTCHA page, the misdirection created with a selection of eye-catching buttons, and of course, the hidden information embedded within the Terms and Conditions link. It seems, however, that only a few of us had deep concerns about the privacy aspects of the User Inyerface game.

Personally, I did not use any of my real information in this game. I immediately questioned the degree of information privacy I was afforded and chose not to type my name or use any legitimate password, username, or email. The poor design of this interface instantly raised a red flag for me; it made the site feel like one plastered with fake ads, where my computer could be threatened by pernicious software or, worse, my personal data stolen. I quickly realized it was simply an intentionally poorly designed game meant to challenge, frustrate, and obstruct users by demonstrating a number of dark patterns.

James encapsulates our privacy concerns

Similarly, James conceded that he did not read the Terms and Conditions, yet still had questions regarding what was being done with the information he was submitting. He had concerns specifically about the image he was asked to upload. Like him, I did not upload an image of myself, and instead used a stock image from the internet. Comparatively, Selina knew that this was only a game, and the knowledge that she was not threatened by the possibility of downloading malicious software emboldened her to become more adventurous with her clicks. Ultimately, it was Meipsy's characterization of data and information collection that prompted me to think: perhaps data privacy is the true currency within the attention economy.

In his TED Talk, Tristan Harris suggests that social media, advertising companies, and digital marketing strategies are vying for one thing: our attention, and the best way to capture it is to understand how our minds work. From autoplay functions to algorithms that determine what and when we will view content, the internet and the forces behind it have fashioned a digital infrastructure dictated by our habits, behaviours, and, in some cases, our personal information. Moreover, Harry Brignull suggests that the levels of deception used to gather these details are often very subtle, appealing to the user's negligence, unawareness, or naivety.

Harris also asserts that the internet does not evolve on a whim; rather, it is calculated in the way it strives to understand its users' patterns. The User Inyerface game illustrates how those subtle deceptions can gather information about us while also shielding us from any threat. After all, it is simply a game, and the gathered information goes nowhere (or does it?). Ultimately, if we were to apply these patterns to other, more malicious web spaces, it becomes quite clear how such programs go to great lengths to assemble information about the products we buy and are partial to, the forms our interactions with other users and online information take, and the subjects we are most prone to engaging with in an online space. This information is the equivalent of gold to the social media, advertising, and marketing industries; it allows them not only to pinpoint specific populations to target with marketing campaigns, but also to strategically deploy products and services conditional on a seemingly infinite number of factors (e.g., age, sex, location, profession, to name a basic few). Of course, this can be done honestly as well.
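Harris and Brignull describe this machinery in the abstract, but the targeting step itself is mundane. The sketch below is a toy illustration, with every profile field, value, and campaign condition invented, of how a few harvested factors are enough to pinpoint a population segment.

```python
# An illustrative sketch (invented fields and values) of how a handful of
# profile factors let an advertiser carve out an audience segment.
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    location: str
    profession: str
    interests: set[str]

users = [
    Profile(34, "Vancouver", "teacher", {"running", "podcasts"}),
    Profile(29, "Toronto", "developer", {"gaming", "coffee"}),
    Profile(41, "Vancouver", "teacher", {"coffee", "gardening"}),
]

def matches_campaign(u: Profile) -> bool:
    """Conditions a hypothetical advertiser might set for one campaign."""
    return 30 <= u.age <= 45 and u.location == "Vancouver" and "coffee" in u.interests

targets = [u for u in users if matches_campaign(u)]
print(targets)  # only the 41-year-old Vancouver teacher matches
```

The filter is trivial; what gives it power is the volume and intimacy of the data flowing into those profiles in the first place.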

Further, we are approaching a point where these algorithms are evolving to increasingly attempt to match our online and offline behaviours. Meipsy closes her reflection with an interesting thought: 

As we learn more about how information is gathered and how we are manipulated, hopefully we will also become more adept at understanding these persuasions and take control and push back against the way these companies manipulate us for their own end game and purposes.

Although I tend to give perhaps more credit to the newer generation of the internet community with respect to spotting these manipulative designs, I can foresee these dark persuasions evolving alongside our increasing awareness. Regardless, the more we understand our personal information as the currency with which these entities construct the infrastructure of the attention economy, the more effectively and willfully we will be able to participate in a more equitable redesign of the internet's fundamental conventions. If we treat data privacy as being as valuable as its monetary counterpart, less manipulation is bound to occur in the digital realm.

 

Brignull, H. (2011). Dark patterns: Deception vs. honesty in UI design. A List Apart, 338.

Harris, T. (2017). How a handful of tech companies control billions of minds every day. Retrieved from https://www.ted.com/talks/tristan_harris_the_manipulative_tricks_tech_companies_use_to_capture_your_attention?language=en

Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads. Retrieved from  https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads?language=en

 

LINK 3 – DEVIATIONS IN CONVENTIONS: VOICE-TO-TEXT AND THE ACCENT

In our third week of ETEC540, we were tasked with relating an unscripted narrative to a chosen voice-to-text application, recording the outcome, and analyzing the degree to which the resulting text deviated from English language conventions. We were also instructed to observe what we believed to be 'right' and 'wrong' within the recorded text, and to make an intentional link between the distinctions of oral and written storytelling.

I had fun with this experiment, and employed the voice-to-text program (https://speechnotes.co/) in a number of scenarios. I recorded myself narrating a portion of my lesson on The Alchemist to my class, I documented a phone conversation between myself and my partner to observe the degree of accuracy voice-to-text could produce by hearing speech through a separate technology, and I chronicled a conversation I had with a colleague at work.

There are some surface-level connections between myself and many of my colleagues: Manize and I both used SpeechNotes, while Olga utilized the Dictation tool on her Windows computer. We all recognized that literally saying the punctuation mark aloud to the program would have drastically changed the resulting text, but conceded that this should not be a necessary step.

Regardless, one of the most commonly agreed-upon 'mistakes' in the voice-to-text scenario was the absence of grammatical and structural conventions. These typographical signs manifest most frequently as basic punctuation like commas, periods, and capitalization, and the lack of these orthographic protocols gives credence to the assertion that voice-to-text technology cannot yet adequately discern those written symbolic gestures from oral speech. Both Olga Kanapelka and Manize Nayani reflected on this idea, and went on to suggest that many structural components of writing were also nonexistent within the text. For example, one of the more difficult aspects of comprehending the voice-to-text block of writing is that ideas are not organized or structured through the use of sentences or paragraphs. In comparing our voice-to-text products, it's clear that no matter which voice-to-text tool is used, the scarcity of grammatical and structural concordances remains. The lack of these literary principles, coupled with the inability to punctuate, makes it increasingly difficult to effectively interpret the true narrative essence of the text.
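That spoken-punctuation workaround is worth making concrete. The toy post-processor below is emphatically not how SpeechNotes or Windows Dictation is actually implemented; it is a minimal sketch of the substitution such tools perform when a user dictates the marks aloud. Note that capitalization still goes unrepaired, just as we observed in our transcripts.

```python
import re

# Map of spoken punctuation commands to the symbols they stand for.
# A hypothetical, simplified subset for illustration only.
SPOKEN_MARKS = {
    "comma": ",",
    "period": ".",
    "question mark": "?",
    "new paragraph": "\n\n",
}

def apply_spoken_punctuation(transcript: str) -> str:
    """Replace dictated punctuation words with the marks themselves."""
    for word, mark in SPOKEN_MARKS.items():
        # Also absorb the space before the mark: "sheep comma" -> "sheep,"
        transcript = re.sub(rf"\s+{word}\b", mark, transcript, flags=re.IGNORECASE)
    return transcript

raw = "i told the class about santiago period he sells his sheep comma then travels on"
print(apply_spoken_punctuation(raw))
# i told the class about santiago. he sells his sheep, then travels on
```

The sketch also shows why we all agreed this should not be a necessary step: the burden of marking structure is shifted onto the speaker, which is precisely what fluent oral storytelling does not do.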

There are, however, some deeper connections between myself, Olga, and Manize: each of our voice-to-text bodies of writing was created under the influence of an accent. Olga, Manize, and I reflected on the adequacy of spelling and the level of comprehension within our bodies of text. We all touched on the degree to which accents played a role in meaning-making within speech-to-text outputs, both in the sense of the program understanding what was spoken, and in the sense of ensuring the written product was intelligible.

Manize revealed that English is her second language, as she moved to Vancouver from Mumbai, India some years ago. She implies that many of the words picked up incorrectly were a result of her accent. She also posits that having a scripted story would have permitted her to speak with more clarity, decreasing the number of spelling mistakes. Similarly, Olga discloses that English is also her second language and specifies that English vowels are most difficult for her to pronounce. When prompted to think about how the written output would differ if influenced by a script, Olga suggested the same idea as Manize: that the script would have aided in clarity and cohesion, ultimately resulting in a more readable text.

Olga provides a clear example of how her accent directly affects the voice-to-text transcription program:

Olga was clear and intentional about how her accent could be misconstrued by the program. This was interesting to me, and indicated that voice-to-text technologies do not listen for context; they simply listen for sound. In other words, the program listens, but it does not hear. On a separate but related note, I find it ironic that many of our chosen AI voices (think GPS units) can be manipulated to reflect a plethora of accented voices from across the world, yet struggle to decipher accented spoken words. I wonder, for example, whether the Australian GPS voice could effectively transcribe a true Australian accent.

Although English is my primary language and I do not speak with an accent (although some here in Vancouver think I speak with an Ontario or 'Toronto' accent), I recorded a conversation with a colleague of mine who speaks with a very thick English accent. The results were astounding in comparison to my original spoken narrative. Perhaps it was the fact that this was a conversation, with more than one person talking, or that my colleague's accent made it difficult for the voice-to-text program to discern what was truly being said, but the entirety of the text is blatantly incoherent. It was a stark contrast to the texts of my two colleagues, which, despite scattered errors in spelling and coherence, were predominantly intelligible.

Ultimately, it seems we all agree there is a certain level of flexibility when it comes to oral storytelling. Despite the mnemonic element required in reiterating a narrative, the story does not necessarily follow a strict sequential structure. Verbal strategies like emphasis, energy, intonation, volume, and pace can all contribute to the (in)effectiveness of orality, while in written narratives these elements are much more limited. I would even go as far as to say the accented influence of a narrative bestows it with more character and authenticity. Perhaps these elements appear in writing, but in a fundamentally distinct way (punctuation?). Moreover, there is a certain level of grammatical forgiveness in orality; audiences are much more lenient when it comes to the variety of 'mistakes'. There is no deleting an oral story, but there can be correction.

 

Bauman, R., & Sherzer, J. (Eds.). (1989). Explorations in the Ethnography of Speaking (2nd ed., Studies in the Social and Cultural Foundations of Language). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511611810

Gnanadesikan, A. E. (2011).“The First IT Revolution.” In The writing revolution: Cuneiform to the internet. (Vol. 25). John Wiley & Sons (pp. 1-10).

LINK 2 – THE ARCHITECTURE OF AN EMOJI STORY

In week six, we were tasked with exploring the 'breakout of the visual'. The semiotician Gunther Kress laid our foundation by suggesting that visual elements are more than simple decorative pieces; they are true modes of representation and meaning that influence symbolic messaging (Kress, 2005). So much so that these visually discernible features could define what we understand as a new type of contemporary literacy.

What, then, do we make of those little emotion icons we know as emojis? What (grammatical? written?) conventions are we to use if we create a narrative using only emojis? Jay David Bolter, in "The Breakout of the Visual", a chapter of his book Writing Space, asserts that picture writing simply lacks narrative power; that a visual plainly means too much rather than too little (Bolter, 2001). As a result, it can become increasingly difficult to write a narrative using visuals alone; it's easy to convolute the communication of character relationships and development, the sequencing of plot points, the passage of time, or the overall narrative flow.

Consequently, it proved interesting to peruse my colleagues' emoji stories and analyze the ways in which they decided to construct the narrative form. Ultimately, I felt that Judy Tai's transcription of Ratatouille held a multitude of similarities, with respect to the architecture of an emoji story, to my arrangement of A Life on Our Planet: My Witness Statement and Vision for the Future. While some participants chose the horizontal familiarity that comes from reading books, many others, like Judy and myself, chose a vertical approach to projecting some approximation of narrative continuity. Across the blog posts, the most frequently mentioned factor was the difficulty of transcribing singular words into emojis; rather, authors needed to conceptualize a group of words or meanings and represent it with a chosen emoji image. Oftentimes, even this strategy proved difficult, and some people simply had to revert to searching for images that offered readers expansive interpretations.

Carlo’s story on the left, Judy’s story on the right

The first and perhaps most obvious link was that both Judy and I took a vertical arrangement approach to conveying the central notions of our stories. It seems both Judy and I instinctively appealed to some semblance of linearity and order, just as traditional writing commands readers to follow a strict order of comprehension (Kress, 2005), when we began our synopses with a signal of the medium and a corresponding title. When comparing this structure with other colleagues', it became evident that this approach was the most common manifestation of emoji-story architecture. As far as I am aware, there are no formal conventions on how to construct a narrative consisting solely of visual elements like emojis. It therefore seems interesting to me that the default pattern of assembly was vertical; perhaps more fascinating is the deep contrast between content and form in writing with visuals. Although there are many similarities between our emoji stories, Judy's images are much more spaced out than mine. In comparing them, I feel my story attempts to jam far more information into each line, while Judy is more delicate with the chosen information. Despite these electronic hieroglyphs representing an extremely new medium of communication in human history, our automatic reaction was to revert to the style of the scroll.

Comparatively, only a few participants in the emoji story task utilized a horizontal approach to arranging their synopses. For example, Anne Emberline's story took a linear form, similar to that of traditional writing structures. Anne was unique in that she opted to relay her story image after image, attempting to build meaning using the fundamental processes of reading and writing we currently use. Interestingly, one of the things I commented on Anne's posting was that although I totally comprehended what her narrative meant, I had no idea what exactly the movie, game, book, or show was. Consequently, this puts Kress' assertion that "that which I can depict, I depict" (Kress, 2005) at odds with our interpretations, as I have only negotiated an insubstantial meaning specific to me, while others could infer something completely different or, alternatively, nothing at all.

Anne’s Emoji Story

What exactly prompts this style of organization? Why was it that most used line breaks to separate ideas, while others simply rattled off emoji after emoji in the hope of creating meaning? I believe there is something to be said about our semiotic abilities to discern direction and instruction from punctuation. Writing is a marriage of words and symbolic markings, both of which direct meaning-making within our minds as we decipher information through written words. With respect to the emoji stories, it is my interpretation that each line break indicated a new idea, new sentence, or new concept. I had a more difficult time deciphering Anne's story than I did Judy's.

Finally, Judy makes a compelling argument regarding the addition of images to text (in the form of graphic novels) becoming a driving factor in the increased interest levels of young readers. She posits an interesting connection between our human ability to read emotion and facial expressions and our means of inferring more deeply about a particular story. While I agree with her assertions, I can't help but think of some of the defining principles of Jean Piaget's cognitive development model: I believe that at a certain point, our human minds crave a new challenge as they become able to formally operate within deeper texts, and the image/word relationship begins to become commonplace. Moreover, our processing of both text and image pertains strictly to the visual sense. While Bolter makes this word/image relationship case with respect to internet models of publication, I can foresee a bit of a harkening back to the age of orality, where some of our future texts will be truly multimodal, demanding our aural, visual, tactile, and perhaps even gustatory or olfactory senses.

 

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print. Mahwah, NJ: Lawrence Erlbaum Associates.

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22.

LINK 1 – GOLDEN RECORD CURATION: SELECTION CRITERIA

Among the abundance of compelling tasks we completed throughout ETEC540, a small collection stood out as most intriguing; one being the Voyager Golden Record and the process of curating a sample of 10 tracks. As simple as this venture sounds, it challenges participants to address, as Abby Smith Rumsey suggests, what we can afford to lose.

It is a challenging question because, as Smith Rumsey asserts, it's difficult to determine what has future value, particularly due to our ineptitude at predicting what contexts or events could eventually lend meaning. It's not feasible to truly know the value of anything until far in the future, when certain events and contexts provide meaning to seemingly 'useless' artifacts (Smith Rumsey, 2017). It then follows that the best way we can form present value, at least in the context of submitting ten songs from Earth to our extraterrestrial brothers and sisters, is to formulate some semblance of criteria to follow.

In foraging through my colleagues' webspaces, I attempted to explore the criteria that others used to ascertain which tracks best belonged on their curated Golden Record. The network analytics I performed on the Golden Record curation task revealed that Marwa and I chose 70% of the same songs, while Sarah H and I shared only 20%. Thus, I decided to investigate the criteria they used for content selection.
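The arithmetic behind those percentages is simple set overlap. In the sketch below the track lists are placeholders (pulled from the Golden Record, but not our actual selections); it just shows how a figure like 70% falls out of comparing two curated sets of ten.

```python
# A small sketch of the overlap calculation behind the network analytics.
# The lists are illustrative stand-ins, not our real curated selections.
def overlap(mine: set[str], theirs: set[str]) -> float:
    """Percentage of my ten tracks that also appear in a colleague's ten."""
    return len(mine & theirs) / len(mine) * 100

carlo = {"Johnny B. Goode", "Dark Was the Night", "Flowing Streams",
         "Melancholy Blues", "Jaat Kahan Ho", "El Cascabel", "Tchakrulo",
         "Morning Star Devil Bird", "Izlel je Delyo Hagdutin", "Fifth Symphony"}

# Swap three of the ten for different tracks to model a 70% match.
marwa = (carlo - {"El Cascabel", "Fifth Symphony", "Tchakrulo"}) | \
        {"Night Chant", "Rite of Spring", "Queen of the Night"}

print(f"{overlap(carlo, marwa):.0f}% shared")  # 70% shared
```

Of course, the interesting question is not the percentage itself but the criteria that produced it, which is what the comparisons below dig into.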

Firstly, let's review the selection criteria I adopted. I chose a specific tenet from Abby Smith Rumsey's article Why Digitize? as the foundation of my criteria:

Creation of a "virtual collection" through the flexible integration and synthesis of a variety of formats, or of related materials scattered among many locations (Smith, 1999).

In essence, I creatively applied Smith Rumsey's principles for valuable digital captures to the Golden Record curation exercise. It's worth noting that this record is meant for potential alien life elsewhere in our universe. Thus, I intentionally attempted to eliminate any specific cultural, ethnic, or social significance from the music included, partly because any intelligent life that stumbled upon these sounds would presumably be incognizant of those underlying factors. It then follows that the basis of my selection was informed by a synthesis and variety of formats (or genres), and a diversity of locations on planet Earth.

Comparatively, Marwa used an analogous barometer for curating her chosen ten; however, she chose to include a gender metric to aid in selection. With this metric, it seems we may be at risk of entering the territory of equality of outcome. While I agree with her assertion that there is an overrepresentation of classical music and that the entirety of the record is constrained to certain tonal and historical periods, I don't entirely understand how the idea of 'conforming to male gender-norms and conventions' plays into the overall choices. What does this mean exactly? Does it pertain more to the depiction of males within these songs? Or is it more generally about the overrepresentation of males as the artists of these pieces? Are there any suitable alternatives to these selections? How are we to counteract this? Are we to travel down to the Congo to educate the Mbuti of the Ituri Rainforest about gender normativity? Mozart is one of the most prolific and celebrated classical composers in human history, but I'm not sure how much of that he owes to his gender rather than his competence in a certain field. How do we reconcile the idea of the Golden Record conforming to these sorts of conventions with the inclusion of Chuck Berry as the only African American rock n' roll artist? Further, the Golden Record seems awfully ableist in including only one blind artist!

It simply seems to me that if we are going to include metrics pertaining to gender or an artist's/composer's individual characteristics, the slope becomes very slippery with respect to having to include a number of other related individual metrics.

Ultimately, the fact is that the Voyager Golden Record was launched in 1977, and it's perhaps reasonable to estimate that its curators were not as perceptive of or sensitive to these types of conventions as we are in 2021. Moreover, and perhaps most importantly, I'm not entirely sure that the intelligent extraterrestrial life forms that may happen upon our curated Golden Record will be overtly aware or remotely conscious of the gender norms we seem to have developed on planet Earth. Regardless, it serves as an interesting distinction: Marwa and I selected 70% of the same songs, proving that the data network cannot illustrate the pathways taken, only the destination; we arrived at largely the same place despite travelling different routes.

In contrast, Sarah's determining criteria followed a slightly different vein of thought. She chose to select songs based on 1) a representation of diverse cultures on Earth, 2) a variety of styles inclusive of instruments and lyrics, and 3) an encapsulation of 'joyful life' on Earth, in contrast to the 'gloom' of the current pandemic. Again, we see a tertiary metric that involves extra-musical factors. This is interesting to note because all three of us (Marwa, Carlo, and Sarah) shared two common criteria, diversity of location and variety of style, but varied in the third metric. With respect to epitomizing songs as joyful, it's difficult to discern how to represent joyfulness in the first place. To what degree is the Navajo Night Chant joyful? Tough to say. Try listening to the Men's House Song on repeat for more than five minutes and let's have a conversation about how joyful we feel! Interestingly, El Cascabel, a Mexican mariachi piece typically played at joyous and celebratory occasions, did not make the cut!

It certainly was difficult not to inject personally subjective measurements into the curation of 10 tracks from an incredibly diverse Golden Record. I think it's important to remember the purpose of the Golden Record, and to entertain the idea of extraterrestrial life as completely devoid of any understanding of earthly customs and conventions in direct relation to our subjective experiences. Thus, a strict focus on the musical aspects, and on the diversity of locations those songs represent, seems to yield the most efficient results in terms of degrees of connectivity in curation.

 

Smith Rumsey, A. (1999, February). Why Digitize? Retrieved June 15, 2019, from Council on Library and Information Resources: https://www.clir.org/pubs/reports/pub80-smith/pub80-2/

Smith Rumsey, A. (2017). Digital memory: What can we afford to lose?
