LINK 6 – COMMON SPECULATIONS IN OUR VISIONS OF THE FUTURE

The last week of ETEC540 proved to be one of the more creative weeks in the course, and as some of us round out the final tasks of our MET journey, the light at the end of the tunnel looks closer than ever. The speculative futures task challenged us to creatively formulate a vision of the future, with a specific focus on the relationship human beings will have with technology, education, media, and various types of text. It was interesting to see most of my colleagues visualize this relationship on a similar trajectory, appealing to common concepts and technologies and transforming the world socially, politically, and culturally.

I endeavoured to consider AI in the distant dystopian future, attempting to warn of the potential rise of authoritarian societies. The basis for my short story was Harari’s idea of the ‘useless class’, magnifying what that may truly look like in a neo-Marxist future. In this speculative future, the rise of AI algorithms has automated most middle-class jobs, leaving two parties: the ‘haves’ and the ‘have-nots’. Essentially a new-age proletariat-versus-bourgeoisie story, the narrative reflects on the thematic role of text, technology, and education within this future. Education has become reserved for those deemed ‘worthy’, and those not considered to be in that category are left to fend for themselves. In this cultural shift, the fundamentals of education have changed significantly, harkening back to more primitive and naturalistic forms of knowledge (i.e., foraging, hunting, farming), whereas the more privileged technology users obtain occupations ‘behind the AI scenes’ in programming, coding, and the like. The divide created by algorithms and AI was immense and immeasurable.

At the heart of the story is the imperative that the human capacity to create the algorithms embedded within AI technology requires deep and intentional ethical consideration, and that it be utilized for the right reasons, by the right people.

Similarly, I found that some of my colleagues appealed to analogous future circumstances. For example, Megan’s vision of the AI-enabled future featured an app-based survey for middle-class workers who had suffered job loss as a result of increasing automation in society. The AI analyses the user-inputted information, runs it through an algorithm, and generates a prediction of the likelihood of success in a new industry. In both our speculative futures, we’ve envisioned an AI making important decisions for human beings, essentially dividing and sorting them into industries or factions based on certain personal factors, with deep cultural and societal implications.

Alternatively, Megan and I differ when it comes to the factors involved in making these decisions. I propose that genetic predispositions and relevant biomarkers will play an important part in the analysis of information, enabling AI to make more rational, sound, and less discriminatory decisions; a more optimistic view of the improvements that will be made to algorithms, despite the dystopian setting. Comparatively, Megan claimed that racist and sexist discrimination will be perpetuated to a higher degree within future algorithms, despite ‘race’ not being included in the work-reassignment survey. This prompts me to question how these modes of discrimination could be perpetuated in the first place. My presumption is that this information was meant to be inferred from each user’s name, but a quick Google image search for “Justin Scott” would produce contradictory results.

Likewise, James produced a vision of the future that commented on middle-class occupations becoming overwhelmingly influenced by automation. He also engaged with the idea that most available jobs would be ‘behind the scenes’, as people would have to learn to code, program, and/or help direct the ethics around AI-enabled technology. I appreciated James’ characterization of the workforce as completely on edge, where workers have secured limited positions on a short-term basis and their continued overworking may only potentially yield success.

Of course, when we begin dealing with the concept of people programming, coding, and managing the direction of AI algorithms, we must be vigilant in assessing the inherent biases. We’ve frequently seen the often unconscious prejudices built into AI technologies, and we need to be extremely careful to ensure these are corrected as AI continues to take hold of the future, especially when we are dealing with language and culture.

There is utility in discrimination, and it’s exceptionally important to balance the levels of distinction we bring with us into the future. Discrimination, in its neutral sense, is the recognition and understanding of the difference between two things; this is not inherently a negative concept. We discriminate against all other potential partners when we choose an individual as our significant other, for example. We discriminate against all other animals, or all other breeds, when we choose a specific breed of dog as our pet. Discrimination becomes a problem when it turns into prejudice: the unjust treatment that follows from the aforementioned recognition. This we must leave in the past.

Regardless, it was interesting to recognize that my colleagues utilized some of the ideas presented in Yuval Noah Harari’s article ‘Reboot for the AI Revolution’. We’ve all touched on the potential for a ‘useless class’, a faction of people who’ve been pushed out of their occupations by automation and AI-enabled technology. Our differences resided in the factors embedded within the AI algorithms and the ways in which they make decisions.

 

Harari, Y. N. (2017). Reboot for the AI revolution. Nature, 550(7676), 324–327. Retrieved from https://www.nature.com/news/polopoly_fs/1.22826!/menu/main/topColumns/topLeftColumn/pdf/550324a.pdf

Task 12 – Speculative Futures

OVERTURE

Being an English teacher, I jumped at the opportunity to write a narrative. Typically, I am the one teaching the narrative elements to my students, but I never truly have the opportunity to write creatively myself. I’ve also been an avid follower of Yuval Noah Harari’s writing throughout the years, and was thrilled to use his article Reboot for the AI Revolution as the basis of inspiration for my narrative. 

Set in the distant future, my narrative offers a dystopian warning of the authoritarian experiences AI could produce. It takes Harari’s idea of the “useless class” and magnifies what that might truly look like in a neo-Marxist future. In this speculative future, the rise of AI algorithms and automation has essentially eliminated the middle class, leaving only the ‘haves’ and the ‘have-nots’. While one group struggles to accumulate the basic necessities for survival, the other grapples with determining what living truly means: a new-age proletariat-versus-bourgeoisie story, so to speak. You may notice the influence of a number of course elements thoughtfully integrated into the story.

There are also thematic reflections on the role of text, technology, and education throughout the piece. I was intentional in highlighting the shift in the fundamentals of education of the time, ironically contrasting the return to, and renewed importance of, naturalistic and ‘primitive’ types of knowledge (i.e., foraging, hunting, farming) despite the setting of a hyper-technologized world. I also attempted to make clear that the use of ‘high technology’ was dominated by the ‘high class’, and that the divide between the two was immense and immeasurable. At the center of the narrative is the idea that the human capacity to create algorithms in AI technology needs to continue to develop on an ethical level, but more importantly, to be utilized for the right reasons and by the right people. I also went to some lengths to draw attention to the damage done to the environment by the lack of action on climate change; an area to which I truly think we need to turn our attention, especially when it comes to enabling technology to solve problems. Enjoy…

_________________________________________________________________________

PART 1

Centered within a ramshackle skyline, three tall towers rose above all else.

Their peaks brushed the ceiling of the sky while the bases of the buildings dispersed into multiple purple and silver tubers planted solidly within the cold, hard ground. Most of The People referred to them as Roots. The Roots ascended to great heights, at least ten stories, Ava had heard people say. They reminded her of the large mangrove trees she’d learned about while watching a video on her friend’s stolen internet connection; trees that were long extinct now. “Imagine a coastline,” the video began. It was difficult for her to conjure up accurate images. Ava had never seen a coastline. “The mangroves straddled two worlds… not only do they adapt, they create a sanctuary for an extraordinary range of creatures.” Ava couldn’t help feeling that the haven these Roots purported to protect felt more like a prison, an immensely twisted metallic jail cell.

Click The Mangrove to see the video Ava Carlton watched on her stolen internet

At a certain point, the Roots culminated in a gnarled wire tangle where a thick plateau of steel rested, serving as the base for the countless stories above. Each tower seemed to disappear into the clouds, and each was made of the highest quality SmartSpecs, TechnoSteel, and IntelliFibres. All three towers were fenced off with cable fencing, every corded wire measuring at least three feet in diameter and supercharged with 7,000 volts of electricity. The sound of buzzing electricity was constant. A series of enormous metal spheres sat at regular intervals as Ava’s eyes climbed the towers, serving as the only common structure that kept them all connected.

A sketch taken from Ava Carlton’s journal

Nobody had ever seen the Paragons come in or out of any of the towers, but everyone knew they were in there.

Ava Carlton had been walking home from Dr. Howard’s schoolhouse, close to the towers, when she was jolted by the sight of a small pack of coyotes digging through heaps of garbage. With the surrounding region becoming so environmentally bare, the animals that once lived wild and free were forced into the city’s encampments to find food, and in some cases, to prey on people. Avian species were among the organisms driven completely extinct, and in their place, ironically, small drones roved through the skies, watching. The forests that once stood at the edge of the city had been cut down long ago, and large metallic cylinders emerged from the ground, branching off high into the sky. Some said they were the same tubules that made up the Roots, connected in an underground maze meant to harvest various energy sources for the Paragons’ usage. Some conspired to dig, but nobody had ever found anything.

The ocean ice had long melted and sea levels had risen year after year. Summers were extraordinarily hot, and those who didn’t die of hunger, thirst, disease, or rabid animals, or succumb to the hands of nomadic bandit tribes, died of heat exhaustion or dehydration. Salinity levels fell to an all-time low, and if the chemical breakdown of plastic within ocean water didn’t kill the aquatic life, it was the fact that sea creatures simply couldn’t survive in water that diluted. There was talk every year that the ‘big storm’ was coming; a rain so intense and so long that it would flood the earth as in Biblical times.

The pandemics had wiped out roughly 60% of The People, and those who did survive found that the food supply would quickly run out. The Paragons, on the other hand, were largely unaffected, having been injected far in advance with innovative biotechnology nicknamed BloodBots; nano-robots designed to fight off disease, discomfort, and all sorts of pain. How can you be human if you can’t feel any pain, Ava thought to herself.

The People did not have the means to afford this technology shortly after The Divide. They were relegated to “the old ways”, using the land in some capacity to survive: planting crops in arid land, hunting and foraging in barren forests in hopes of some semblance of a decent harvest. An ironic full-circle approach for a world once filled with the promise of technological opportunity for the underprivileged. Those who chose not to adhere to the old ways resorted to thievery, destructive violence, and generally reckless nihilism. Danger lurked in every corner. Ava picked up her pace and weaved her way through a series of narrow alleys until she bolted safely through the front door of a dilapidated apartment building, clicking closed the four padlocks her father had installed.

Devin sat nervously at the tiny wooden table in the kitchen. He had been reading the Daily Bulletin on his tablet, trying to make out the words through the fractured and splintered screen. Most books had been used in the early days to fuel fires. The textbooks went first. Devin put down his tablet and walked painfully over to embrace Ava, limping heavily with every step. He hadn’t eaten properly for months and had been nursing leg wounds sustained while hunting. He insisted that most of the food he was able to scavenge went to Ava, and there were no conversations to be had about it. His hunting and foraging skills were a far cry from his civil engineering job of years ago. He had been pushed out of the industry by the rise of AI-powered algorithms that could produce higher quality projects at a faster rate. This was a common occurrence for many when The Divide happened.

Their neighbour, Don, was a military man who was discharged after the ComBats took over as the main vehicle for wartime combat. Marianne lived across the hall within the commune. Once a doctor, she lost her job after CyberMed AI Systems replaced many of the medical professionals at hospitals, walk-ins, and private clinics. It was common to find ruined apartment buildings housing groups of ‘useless’ people working together just to survive. Once Ava arrived home, both Don and Marianne made their way over to Devin’s unit, where Ava taught them what she had learned throughout the day. They’d been working on this for a year now, and there wasn’t much time left to complete it.

She knew that doing so put her in grave danger.

___________________________________________________________________________

PART 2

Dr. Howard sighed heavily as his legs walked him into his domicile. He stepped out of his exo-skeleton frame and dropped heavily into his anti-gravitational chair. All Paragons wore their OssoXO suits throughout the day as a way of fully embodying their fundamental belief that ‘technology was made to serve us’, a belief that Dr. Howard despised. I’m certainly capable of using my legs myself, he thought.

Because basic human movement had been ceded to their technological counterparts, the muscles of all Paragons had atrophied over the years. Despite the loss of muscular vigor, Paragons were in peak physical and intellectual health, primarily due to the infusion of nanotechnology within their bodies. BloodBots collaborated with blood cells, helping to fight off novel diseases and block pain receptors from reacting in the brain. CereSynap chips were implanted in their brains, allowing new information to be uploaded to their memory as if one were uploading a photo to a device. Some Paragons injected DNChain technology into their bodies, allowing them to modify their DNA in various ways, while others replaced limbs with robotic prosthetics. All Paragons believed that this was the next step on the evolutionary ladder: human and technological integration.

A faint buzzing sound began emanating nearby. A drone-like object, no bigger than a human hand, appeared quickly and flooded Dr. Howard’s face, torso, and legs with yellow beams of light. EARL, an acronym for Electronic Algorithm and Response Lexicon, was a standard-issue companion drone meant to monitor and serve each Paragon user.

“Good evening, Doctor.” The drone spoke as if it were human. AI voice had come a long way since its inception. “Our research algorithms suggest there are multiple disease variants on the horizon. It is advised that you upgrade your bio-protection system with the following nano-bots.”

The drone produced a tiny vial filled with a clear liquid and a small syringe. What fun it is to be human when you don’t feel any pain, thought Dr. Howard sarcastically as he jabbed himself with the needle, pressing hard into his tough skin.

The drone spoke again: “Secondarily, there are indications that the global deluge is on pace to arrive at our current location no later than Friday next. All systems in the Towers are operational, and our data suggests we should have no problem withstanding the projected damage.” There was a slight pause. “My algorithms are indicating a higher than normal sense of stress, Doctor. Your blood pressure is high, and your brain waves indicate that you are on high alert. Is there something the matter?”

 

An algorithmic visual rendering of the aforementioned scene

“Funny, EARL. I didn’t notice,” Dr. Howard said dismissively. But he did notice. Over the past year, Dr. Howard had set up a small, inconspicuous schoolhouse in what most Paragons called the Filth, the area surrounding the three spires. It was causing him deep anxiety. None of the other Paragons knew of Dr. Howard’s endeavours, for they went against Paragon beliefs, and any Paragon who violated their hallowed customs paid a lethal price.

“You cannot lie to me, Doctor. My biotech algorithms are flawless,” chided the machine.

“EARL, do you know why they called it The Divide?” Dr. Howard turned to face the floating drone.

“My global database suggests that in the year…” EARL began sputtering out information.

“I figured. You cannot know. You can only regurgitate the data that’s been provided to you. You claim that your algorithms are flawless. How, then, do you account for the population of people living down there?” Dr. Howard pointed out the SmartSpec glass window, which automatically untinted itself to provide a clearer picture for the viewer.

“No algorithm is flawless,” muttered Dr. Howard under his breath.

Most people weren’t aware of why everyone referred to that time in history as The Divide. It was assumed that it simply denoted an alarming divide between two major sects of society: the Paragons and The People. Although true, this was not what The Divide was meant to convey. In the late second millennium, Dr. Howard had developed a new algorithmic technology designed to assess the future potential of any individual on earth. It factored in a multitude of characteristics such as DNA, intelligence quotient, previous and potential life experience, and geographic location, among hundreds of other facets. The technology showed promise in identifying individuals who could be the next Einstein, Mozart, or Shakespeare. It could be used to ‘harvest’ these individuals and allow them to make meaningful and lasting change for the entirety of planet earth; to put the right people in the right positions. This is not how the story went.

Greed took over. The technology fell into the wrong governmental hands, and rather than use the algorithm to determine the people who could meaningfully impact the world, the Paragons were formed: a sect of society identified by the AI as inherently more valuable than the other half. The Paragons formed their own society, with their own beliefs, customs, and rules, harnessing the AI technology to perennially solidify their seat in the social hierarchy. Resources, food, energy, and protection from the elements, animals, and disaster all went to the Paragons. The Divide did not only create two societal factions; it quite literally, algorithmically, divided the worthy from the unworthy, the living from the living dead.

In a personal act of penance for his grave misdealings, Dr. Howard had taken it upon himself to secretly rework the algorithm and use it to identify those in the lesser population who had the potential to comprehend Paragon knowledge and the skills necessary to construct and distribute technology to The People. He had been privately and secretly tutoring a small group of children and young adults, providing them with the knowledge they would need to introduce various life-saving technologies to the people below. Ava was one of his brightest. He hoped that after a year’s work, she would be able to produce something relevant before next Friday; before the deluge destroyed every last one of The People.

A knock came at the door of Dr. Howard’s unit. A tall, black haired woman slowly paced into his room.

“Dr. Yael, to what do I owe this pleasure!” Dr. Howard said with delight.

Yael was the chief medical engineer within the Towers. She had the important job of programming, engineering, and managing the manufacturing of all medically related AI technology within the Towers. With AI having taken over the medical industry, the only vocations left belonged to those who created the machines. There was a grim expression on her face. She held a flat metallic remote in her hand. Dr. Howard knew what was about to happen.

“Doctor, I’m sorry. There has been speculation about your whereabouts recently. It’s given rise to an internal investigation. We know you’ve been associating with… them.” Dr. Yael moved closer to Dr. Howard, as if to ensure he couldn’t flee.

“Is that so,” murmured Dr. Howard.

“I’m sorry, Doctor, but I know you are aware of the protocols. We must ensure what we’ve established here lives on forever. We can’t afford to change our…”

“Algorithms,” Dr. Howard finished her sentence.

Dr. Yael bowed her head and took on a somber tone. “We owe a lot to you, Doctor. I’m sorry.”

With that, she thrust the remote into Dr. Howard’s exposed neck. A quick flash. An electric buzz. A jolt of the body. Dr. Howard remained conscious but wore a puzzled face. His eyes went from their vivid emerald green to a silken black. His facial muscles relaxed. You could watch his memories being drained from his brain, as if a computer hard drive had been wiped of its storage files.

—————–

Anytime new members were inducted into the Paragons, typically through procreation, a new EARL drone was manufactured for their purposes. On rare occasions, Dr. Howard’s old algorithm was used on The People to determine whether any were worthy of Paragon life. In these cases, previously used EARLs were given new assignments.

“EARL Series 3 Model xx91C, you’ve been assigned to a new Paragon,” a Paragon engineer directed the drone. “Please report to domicile 932. You’ve been allocated to Dr. Carlton.”

An algorithmic visual rendering of the aforementioned scene.

 

 

Dunne, A., & Raby, F. (2013). A methodological playground: Fictional worlds and thought experiments. In Speculative Everything: Design, Fiction, and Social Dreaming. Cambridge: The MIT Press. Retrieved March 18th, 2021, from Project MUSE database.

Harari, Y. N. (2017). Reboot for the AI revolution. Nature, 550(7676), 324–327. Retrieved from https://www.nature.com/news/polopoly_fs/1.22826!/menu/main/topColumns/topLeftColumn/pdf/550324a.pdf

Price, L. (2019). Books won’t die. The Paris Review. Retrieved from https://www.theparisreview.org/blog/2019/09/17/books-wont-die/

 

LINK 5 – A GENERATIONAL DIVIDE IN MANUAL SCRIPTS

Week 4 prompted some ETEC540 students to write a diary-style entry or reflection of approximately 500 words. Typically, this is no sweat for a graduate student, but the challenge this week was that the writing had to be done manually. It’s interesting to me that my colleagues and I seemed to share much the same experiences despite a seemingly large generational gap between us.

When it comes to manual script writing, there seemed to be a consensus from a majority of my colleagues on a number of aspects:

1) Most, if not all, of our professional writing is computer-generated and typed using a device

It seems clear that the most effective and ‘professional’ way to communicate in our various occupations is through computer-generated text. Greg Patton articulated that the writing he does for work, which was characterized as ‘formal’, is done overwhelmingly on the computer. Deirdre preferred typing when dealing with assignments, lessons, or “anything that requires professional or formal” types of communication. Ying, interestingly enough, characterized manual writing as “reserved for the most special recipients, those truly worth our time… only done on heartfelt messages like love letters and thank you notes”. I will certainly be rethinking the next time I need to send a text to my partner, or perhaps my boss: “Maybe I should hand write this and mail it?”. 

Our current culture has dictated that the most suitable means of communication is some form of digital text. Why? I think Deirdre summed it up pretty well: typing affords users unparalleled speed in comparison to manual scripts, furnishes them with the ability to edit and correct text, leaving it unblemished, and bestows writers with a simple mechanism for sharing and sending information. Although Ying’s criterion is well-intentioned, it positions those who regularly receive non-manual scripts as ‘unimportant’ and creates a false dichotomy between individuals who inhabit a hierarchy of ‘importance’ and those whom an individual considers “worth their time”. Consequently, I found it shocking to hear that “career excellence is impossible with a child” and that “women who place career over their children are ostracized by society”. First, I think the portrayal of career excellence here is exceptionally vague: what does excellence look like? I’m not sure this is a completely objective term. Is this a concept that applies to both sexes, or only to women? As for the repudiation of career-driven women from society, I’d be interested in hearing exactly what benefits are stripped from individuals who find themselves in this position. Would this ostracization occur if the career were the sole means of facilitating a suitable life for a child? Do men also find themselves excluded from societal advantages if they are exceptionally career-driven? Is this truly a gendered issue, or is it simply about the choices we make with respect to careers and families?

2) There are aesthetic beauty, ugliness, and physical limitations to handwritten material

Perhaps most frequently mentioned was the inherent aesthetic of handwritten material. Ying alluded to the idea that handwritten material should be reserved only for those special to you, which also implies there is a certain charm and desirability to it. Perhaps Deirdre elucidates a more responsible criterion: handwritten material is simply reserved for something or someone more personal. With that said, the intrinsic beauty embedded within manual writing can turn unattractive quickly when a manual script is marred with corrective edits, scratches, or misspelt words. Ying does a great job of expressing how these textual imperfections make the product look tarnished, draw unnecessary attention, and potentially leave evidence that reflects lower intelligence levels (although I’m not entirely convinced of that last one; should we consider students who have severe dyslexia to have lower intelligence?).

Using a pencil with an eraser, although helpful in eliminating these aesthetic blemishes, will not produce a completely flawless product, as scars are still left behind, albeit minute. Fundamentally, there is no room for major error when manufacturing a beautiful piece of writing. Moreover, it needs to be legible! The physical limitation manifests itself specifically in hand muscle cramps or corporeal damage like the writer’s bump Deirdre mentions. These are factors of manual script writing that simply slow us down.

3) Written texts are mnemonic in nature

Both Deirdre and Greg conceded that written texts are most helpful when the desired effect is to remember something. Greg describes his habit of leaving numerous post-it notes on and around his desk to remind him of certain tasks or items. Likewise, both Deirdre and I claimed to write out our grocery lists; we do this in shorthand or point form as a means of saving time and mental/physical energy. Research suggests that there are mnemonic elements related to the tactility of the written word:

“Writing is a process requiring the integration of visual, proprioceptive (i.e., haptic/kinesthetic), and tactile information… There is evidence that writing movements are involved in letter memorization… that is, we write in order to remember something” (Mangen, 2015). 

Despite all these similarities, there was a stark contrast I noticed when analyzing the generational divide between us. Greg, for example, conveyed that he is consistently able to write faster than he can type. He concedes that his typing skills are lacking and that he uses “2-3 fingers” to type words on a keyboard. Along the aforementioned lines of writing and tactility, Greg reiterates that he appreciates the “feel of holding a finished copy in his hand… [he] thinks this is because [he] is an older guy and hasn’t embraced technology to some of the younger generation’s degree”. Comparatively, both Deirdre and I find ourselves on the far side of the millennial spectrum, and we can both remember working with pen and paper as well as a computer in school. The manual script exercise made us feel nostalgic, while Greg simply felt more comfortable, even in a position to excel. We certainly did not feel that we could write faster than we could type, as we both spoke about the rate at which we could produce typed text.

To me, it’s evident that there is a tangible generational difference in perception, ability, and comfort when it comes to manual writing. Writers from an older generation are forced to embrace new media and harness novel tools in order to survive, at least in a professional context. Millennial writers, comparatively, are unique in the sense that they were born into a transitional age in which these tools were no longer novel but rather sensible, commonplace, and used in conjunction with their predecessors. It’s no wonder the written word is thought to be reserved for the personal; the newer generation was not around when the written word was the commonplace medium for textual communication.

 

Bolter, Jay David. (2001). Writing space: Computers, hypertext, and the remediation of print [2nd edition]. Mahwah, NJ: Lawrence Erlbaum.

Mangen, A., Anda, L., Oxborough, G., & Brønnick, K. (2015). Handwriting versus keyboard writing: Effect on word recall. Journal of Writing Research, 7(2), 227–247. doi: 10.17239/jowr-2015.07.02.1

Task 11 – Algorithms & Predictive Text

I think it first serves us well to understand that algorithms are rooted in nature and in collective organisms, not in computers. It is unwise to understand algorithms as applying exclusively to computers, robots, or code.

In its most basic form, an algorithm is simply a methodical set of steps that can be utilized to make calculations, reach a determination, and/or make decisions. More often than not, algorithms are perceived as code embedded within the language of computers, but similar to McRaney’s assertion that prejudices are inherent in the way human beings make decisions, algorithms are intrinsic to the way we survive. At a neuroscientific level, what are emotions other than biochemical algorithms vital for the survival of all mammals? What is the process of photosynthesis other than mother nature’s algorithm for plant growth? Artificial intelligence (AI) simply mimics the most basic human configuration for decision making; all we have done is project our humanistic operations and behaviours into an artificial medium (Vallor, 2018).
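The claim that an algorithm is simply a methodical set of steps is easy to ground in a pre-computer example. Euclid’s method for finding the greatest common divisor of two numbers, written down more than two thousand years before the first computer, is exactly such a recipe; a minimal sketch in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a methodical set of steps for finding the
    greatest common divisor, devised long before computers existed."""
    while b != 0:
        # Replace the pair (a, b) with (b, a mod b) until the remainder is zero.
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

The same step-by-step logic could just as easily be carried out with pebbles or pen and paper, which is the point: the algorithm exists independently of the machine that runs it.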

With that said, I do believe we are currently sitting at a significant crossroads where we may be implementing technologies, specifically with respect to A.I., without recognizing the potential unintended consequences. Cathy O’Neil speaks about this concept at length and focuses her line of thought on judiciary matters, educational administration, and fundamental hiring practices. It seems only recently have we begun to recognize the implicit biases A.I. technologies seem to have inherited from their creators. Examples are endless: legal analysts are rapidly being replaced by A.I., meaning that successful prosecutions or defences can rely almost wholly on precedents reconfigured as algorithms, which can even predict future criminals based on certain human factors (see: Machine Bias Against African Americans). The job market increasingly relies on A.I. tech to filter CVs. Most human eyes will never fall upon a prospective employee’s resume again, effectively placing people’s livelihoods at the mercy of machines (see: Amazon’s AI hiring tool biased against women). Ultimately, these algorithms are caricatures of our own human imprints.

So when I think about the predictive text feature on my phone, and the sentences it generates from prompts, I can’t help but feel that there is a piece of me in there somewhere. I have a Google Pixel phone and used the predictive text feature in the messaging app. I find the feature excellent when I need to correct a spelling error or want a suggestion for the next word while I am texting, but I did not find it helpful at all for this exercise. When given the freedom to produce its own sentences, it failed to construct anything coherent. For the record, I do not think any of these predictive text iterations sound remotely like me.

My instincts tell me that the predictive text feature analyzes the words and phrases used most within my texting app and generates the next most likely option. I found small successes when formulating two to three word phrases, but outside of that, there was much left to the imagination. Take this example: “Everytime I think about our future together with any of these documents, I have been in the future of fashion technology and services”. ‘Future’ appears twice in this sentence, and I can at least understand its relation to ‘technology’ and ‘services’, for example. Alternatively, I haven’t the slightest clue where it got ‘fashion’ from.
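My guess at the mechanism can be sketched as a simple bigram frequency model: count which word most often follows each word in a message history, and suggest the top candidate. This is purely my assumption (Google’s actual predictive text is far more sophisticated), but the sketch captures the ‘piece of me in there’ intuition:

```python
from collections import Counter, defaultdict

def train_bigrams(history):
    """For each word, count which words follow it in the texting history."""
    counts = defaultdict(Counter)
    words = history.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Suggest the historically most frequent follower of `word`."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# A tiny stand-in for a real message history:
model = train_bigrams("see you soon . see you tomorrow . see you soon")
print(predict_next(model, "you"))  # → soon ("soon" follows "you" 2 of 3 times)
```

On this toy history, ‘you’ is most often followed by ‘soon’, so that is what gets offered; for a word with no history at all, nothing coherent can be offered, which may explain why free-running generation falls apart.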

This second example makes a little more grammatical sense and is slightly more eloquent in its delivery, but the fact remains that I simply do not text like this. There is a high degree of formality in this rendering, as if I were speaking to a workplace superior. I found it interesting that both examples incorporated elements of documents and attachments. Perhaps a reflection that I’m working too much… Moreover, these predictive texts are fairly good at sensing when there truly is a link available (often when a link is sent, a mini-preview will be provided), but of course, there was no link sent.

Perhaps the most interesting example to me was the following predictive text that was typed but not sent. I wanted to provide an alternative perspective and make available a sort of ‘behind the scenes’ image to illustrate what predictive aspects were offered to me:

The most striking feature in this image is the predictive emoji being offered: the smiley with a cowboy hat. Not only do I question the emoji’s particular relevance within this predictive body of text, but I can confidently say, without a shadow of a doubt in my mind, that I have never once used the cowboy hat emoji in any context whatsoever. I am dumbfounded by what algorithm decided to offer me the cowboy hat emoji as an option here. 

I struggled to discern these types of predictive patterns in academic articles, novels, or anything of the like (perhaps I’m just being naive in that sense), however, I did seem to recognize similarly structured sentences in social media infrastructure, and online ads. For example:

Perusing Facebook allowed me to identify some potential predictive text within a specifically targeted advertisement. I don’t spend that much time on Facebook, truthfully, but I know that since this was a sponsored ad, I was obviously the target of a number of specific algorithms designed to place it in front of me. The text in the ad strikes me as predictive as well: “Classic men’s clothing Built For the Long Haul and the modern man.” Something about it just doesn’t seem human. Why are there capitals in the middle of the sentence? Why does the ‘modern man’ portion seem tacked on at the end? Perhaps this is where my predictive text got ‘fashion’ from…

Conversely, I am aware of automated journalism as a concept gaining much traction. I think it’s important to echo one of O’Neil’s sentiments about the rise of A.I powered machines; that we shouldn’t attempt to employ A.I as a means to eliminate human enterprise, but rather as a tool to empower it. In reading the aforementioned A.I generated news column, I do find it to be extremely ‘bare-bones’ in the sense that it is only relaying specific facts, rather than injecting a creative or original tone into the story. Perhaps this is a mode reserved more effectively for sports or finance news stories. 

One of the ethical dilemmas we tend to find in this particular arena is simply: what is truth? We are inclined to think that journalists are held to high standards and are bound to their journalistic commitment to spreading what is true. But it’s no secret that in recent years, we’ve seen a decline in ethical journalism and the overall journalistic standards in the industry. Is this a journalist’s fault? Can we blame A.I for this? It’s a difficult area, but they both seem to have a hand in the rise of fake news, and the fall of ethics within journalistic standards. 

 

McRaney, D. (n.d.). Machine Bias (rebroadcast). In You Are Not so Smart. Retrieved from https://soundcloud.com/youarenotsosmart/140-machine-bias-rebroadcast

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (First edition). New York: Crown.

O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Observer. Retrieved from https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies

Santa Clara University. (2018). Lessons from the AI mirror, Shannon Vallor [Video]. YouTube. https://www.youtube.com/watch?v=40UbpSoYN4k&t=1043s

LINK 4- PERSONAL DATA AS THE CURRENCY OF THE ATTENTION ECONOMY

Without question, the most infuriating exercise of ETEC540 was the User Inyerface ‘game’ done in Week 10. The interface, created by Bagaar, was developed as a means of illustrating various dark patterns many internet users may experience when navigating the digital world. The task also demonstrated a variety of key considerations web developers need to appraise when building web interfaces and, alternatively, highlighted what many internet users may take for granted in the fundamental processes of traversing the digital realm. The game is completely counterintuitive to the ways in which we’ve been (un)consciously programmed to utilize the fundamental design conventions of the internet, and it strives to waste as much of the user’s time as possible.

Of course, many of us recognized the plethora of dark patterns utilized in the game: the overall poor design, the double negatives strewn across the password creation page, the ambiguous words and images on the CAPTCHA page, the misdirection created by a selection of eye-catching buttons, and of course, the hidden information embedded within the Terms and Conditions link. It seems, however, that only a few of us had deep concerns about the privacy aspects of the User Inyerface game.

Personally, I did not use any of my real information in this game. I immediately questioned the degree of information privacy I was afforded and chose not to type my name or use any form of legitimate password, username, or email. The poor design of this interface instantly raised a red flag for me; it made me feel like a site plastered with fake ads, where my computer could be threatened by pernicious software or, worse, my personal data stolen. I quickly realized it was simply an intentionally poorly designed game meant to challenge, frustrate, and obstruct users by demonstrating a number of dark patterns.

James encapsulates our privacy concerns

Similarly, James conceded that he did not read the Terms and Conditions, yet still had questions regarding what was being done with the information he was submitting. He had concerns specifically about the image he was asked to upload. Like him, I did not upload an image of myself, and rather used some stock image from the internet. Comparatively, Selina knew that this was only a game, and the knowledge that she was not threatened by the possibility of downloading malicious software emboldened her to become more adventurous with her clicks. Ultimately, it was Meipsy’s characterization of data and information collection that prompted me to think: perhaps data privacy is the true currency within the attention economy.

In his TED Talk, Tristan Harris suggests that social media, advertising companies, and digital marketing strategies are vying for one thing: our attention, and the best way to capture it is to understand how our minds work. From autoplay functions to algorithms that determine what and when we will view content, the internet and the forces behind it have fashioned a digital infrastructure predicated on our habits, behaviours, and in some cases, our personal information. Moreover, Harry Brignull suggests that the levels of deception used to gather these details are often very subtle, appealing to the user’s negligence, unawareness, or naivety.

Harris also asserts that the internet does not evolve on a whim; rather, it is calculated in the way it strives to understand its users’ patterns. The User Inyerface game illustrates how those subtle deceptions can gather information about us while also shielding us from any threat. After all, it is simply a game and the gathered information goes nowhere (or does it?). Ultimately, if we were to apply these patterns to other, more malicious web spaces, it becomes quite clear how these programs go to great lengths to assemble information about the products we buy and are partial to, the forms our interactions with other users and online information take, and the subjects we are most prone to engaging with in an online space. This information is the equivalent of gold to the social media, advertising, and marketing industries; it allows them not only to pinpoint specific populations to target with marketing campaigns, but also to strategically deploy products and services conditional on a seemingly infinite number of factors (i.e., age, sex, location, and profession, to name a basic few). Of course, this can be done honestly as well.

Further, we are approaching a point where these algorithms are evolving to increasingly attempt to match our online and offline behaviours. Meipsy closes her reflection with an interesting thought: 

As we learn more about how information is gathered and how we are manipulated, hopefully we will also become more adept at understanding these persuasions and take control and push back against the way these companies manipulate us for their own end game and purposes.

Although I tend to give perhaps more credit to the newer generations of the internet community with respect to spotting these manipulative designs, I can foresee these dark persuasions evolving alongside our increasing awareness. Regardless, the more we understand our personal information as the currency by which these entities construct the infrastructure of the attention economy, the more we will be able to effectively and willfully participate in a more equitable redesign of the internet’s fundamental conventions. If we treat data privacy as being as valuable as its monetary counterpart, less manipulation is bound to occur in the digital realm.

 

Brignull, H. (2011). Dark Patterns: Deception vs. Honesty in UI Design. Interaction Design, Usability, 338.

Harris, T. (2017). How a handful of tech companies control billions of minds every day. Retrieved from https://www.ted.com/talks/tristan_harris_the_manipulative_tricks_tech_companies_use_to_capture_your_attention?language=en

Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads. Retrieved from  https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads?language=en

 

LINK 3 – DEVIATIONS IN CONVENTIONS: VOICE-TO-TEXT AND THE ACCENT

In our third week of ETEC540, we were tasked with relaying an unscripted narrative into a chosen voice-to-text application, recording the outcome, and analyzing the degree to which the resulting text deviated from English language conventions. We were also instructed to observe what we believed to be ‘right’ and ‘wrong’ within the recorded text, and to make an intentional link between the distinctions of oral and written storytelling.

I had fun with this experiment, and employed the voice-to-text program (https://speechnotes.co/) in a number of scenarios. I recorded myself narrating a portion of my lesson on The Alchemist to my class, I documented a phone conversation between myself and my partner to observe the degree of accuracy voice-to-text could produce by hearing speech through a separate technology, and I chronicled a conversation I had with a colleague at work.

There are some surface level connections between myself and many of my colleagues: Manize and I both used SpeechNotes, while Olga utilized the Dictation tool on her Windows computer. We all recognized that literally saying the punctuation mark aloud to the program would have drastically changed the meaning of the text, but conceded that this should not be a necessary step. Regardless, one of the most commonly agreed upon ‘mistakes’ in the voice-to-text scenario was the absence of grammatical and structural conventions. These typographical signs manifest most frequently as basic punctuation like commas, periods, and capitalization, and the lack of these protocols gives credence to the assertion that voice-to-text technology cannot yet adequately discern those written symbolic gestures from oral speech. Both Olga Kanapelka and Manize Nayani reflected on this idea and went on to suggest that many structural components of writing were also nonexistent within the text. For example, one of the more difficult aspects of comprehending the voice-to-text block of writing is that ideas are not organized or structured through sentences or paragraphs. Comparing our voice-to-text products makes it clear that no matter which tool is used, the scarcity of grammatical and structural conventions remains. The lack of these literary principles, coupled with the inability to punctuate, makes it increasingly difficult to effectively interpret the true narrative essence of the text.
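The ‘say the punctuation aloud’ workaround we all flagged can be imagined as a simple post-processing pass over the transcript. The sketch below is entirely hypothetical (real dictation engines handle this internally and far more robustly), but it illustrates why spoken punctuation cues are easy to map to symbols while unspoken ones are simply lost:

```python
import re

# Hypothetical spoken-cue vocabulary; a real engine supports many more.
SPOKEN_MARKS = {"period": ".", "comma": ",", "question mark": "?"}

def punctuate(transcript):
    """Replace spoken punctuation cues with symbols and recapitalize."""
    text = transcript
    for spoken, mark in SPOKEN_MARKS.items():
        # Attach the mark to the preceding word (" comma" becomes ",").
        text = re.sub(r"\s*\b" + spoken + r"\b", mark, text)
    # Capitalize the first letter of each resulting sentence.
    sentences = re.split(r"(?<=[.?])\s+", text)
    return " ".join(s[:1].upper() + s[1:] for s in sentences if s)

print(punctuate("hello there comma how are you question mark"))
# → Hello there, how are you?
```

A transcript with no spoken cues passes through this sketch untouched, which mirrors our shared experience: the program captured the words but none of the structure.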

There are, however, some deeper connections between myself, Olga, and Manize: our voice-to-text bodies of writing were created under the influence of an accent. All three of us reflected on the adequacy of spelling and level of comprehension within our bodies of text. We all touched on the degree to which accents played a role in meaning-making within speech-to-text outputs, both in the sense of the program understanding what had been spoken, and in the sense of ensuring the written product was intelligible.

Manize revealed that English is her second language, as she moved to Vancouver from Mumbai, India some years ago. She implies that many of the words picked up incorrectly were a result of her accent. She also posits that having a scripted story would have permitted her to speak with more clarity, and that the number of spelling mistakes would have decreased. Likewise, Olga discloses that English is also her second language and specifies that English vowels are the most difficult for her to pronounce. When prompted to consider how the written output would differ if it were guided by a script, Olga suggested the same idea as Manize: that a script would have aided clarity and cohesion, ultimately resulting in a more readable text.

Olga provides a clear example of how her accent directly affects the voice-to-text transcription program:

Olga was clear and intentional about how her accent could be misconstrued by the program. This was interesting to me, and indicated that voice-to-text technologies do not listen for context; they simply listen for sound. In other words, the program listens, but it does not hear. On a separate but related note, I find it ironic that many of our chosen A.I. voices (think GPS units) can be manipulated to reflect a plethora of accented voices from across the world, yet struggle to decipher accented spoken words. I wonder if the Australian GPS voice could effectively transcribe a true Australian accent, for example.

Although English is my primary language and I do not speak with an accent (although some here in Vancouver think I speak with an Ontario or ‘Toronto’ accent), I recorded a conversation with a colleague of mine who speaks with a very thick English accent. The results were astounding in comparison to my original spoken narrative. Perhaps it was the fact that this was a conversation with more than one person talking, or that my colleague’s accent made it difficult for the voice-to-text program to discern what was truly being said, but the entirety of the text is blatantly incoherent. It was a stark contrast to my two colleagues’ texts which, despite scattered errors in spelling and coherence, were predominantly intelligible.

Ultimately, it seems as if we all agree there is a certain level of flexibility when it comes to oral storytelling. Despite the mnemonic element required in reiterating a narrative, the story does not necessarily follow a strict sequential structure. Verbal strategies like emphasis, energy, intonation, volume, and pace can all contribute to the (in)effectiveness of orality while in written narratives, these elements are much more limited. I would even go as far as saying the accented influence of a narrative bestows it with more character and authenticity. Perhaps these elements appear, but in a fundamentally distinct way (punctuation?). Moreover, there is a certain level of grammatical forgiveness in orality – audiences are much more lenient when it comes to the variety of ‘mistakes’. There is no deleting an oral story, but there can be correction.

 

Bauman, R., & Sherzer, J. (Eds.). (1989). Explorations in the Ethnography of Speaking (2nd ed., Studies in the Social and Cultural Foundations of Language). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511611810

Gnanadesikan, A. E. (2011). The first IT revolution. In The writing revolution: Cuneiform to the internet (Vol. 25, pp. 1-10). John Wiley & Sons.

LINK 2 – THE ARCHITECTURE OF AN EMOJI STORY

In week six, we were tasked with exploring the ‘breakout of the visual’. Gunther Kress, the Australian semiotician, laid our foundation by suggesting that visual elements are more than simple decorative pieces but rather true modes of representation and meaning that influence symbolic messaging (Kress, 2005). So much so, that these visually discernible features could define what we understand as a type of new contemporary literacy.

What then do we make of those little emotion icons we know as emojis? What (grammatical? written?) conventions are we to use if we create a narrative using only emojis? Jay David Bolter, in the ‘Breakout of the Visual’ chapter of his book Writing Space, asserts that picture writing simply lacks narrative power; that a visual plainly means too much rather than too little (Bolter, 2001). As a result, it can become increasingly difficult to write a narrative using visuals alone; it’s easy to convolute the communication of character relationships and development, the sequencing of plot points, the passage of time, or the overall narrative flow.

Consequently, it proved interesting to peruse my colleagues’ emoji stories and analyze the ways in which they decided to construct the narrative form. Ultimately, I felt that Judy Tai’s transcription of Ratatouille held a multitude of similarities, with respect to the architecture of an emoji story, to my arrangement of A Life on Our Planet: My Witness Statement and Vision for the Future. While some participants chose the classic horizontal familiarity that comes from reading books, many others, like Judy and myself, chose a vertical approach to projecting some approximation of narrative continuity. Among the blog posts, the most frequently mentioned factor was the difficulty of transcribing singular words into emojis; rather, authors needed to conceptualize a group of words or meanings and represent it with a chosen emoji image. Oftentimes, even this strategy proved difficult, and some people simply had to revert to searching for images that offered readers expansive interpretations.

Carlo’s story on the left, Judy’s story on the right

The first and perhaps most obvious link was that both Judy and I took a vertical arrangement approach to conveying the central notions of our stories. It seems both of us instinctively appealed to some semblance of linearity and order, just as traditional writing commands readers to follow a strict order of comprehension (Kress, 2005), when we began our synopses with a signal of the medium and a corresponding title. When comparing this structure with other colleagues’ work, it became evident that this approach was the most common architecture for an emoji story. As far as I am aware, there are no formal conventions on how to construct a narrative consisting solely of visual elements like emojis. Therefore, it seems interesting to me that the default pattern of assembly was vertical; perhaps more fascinating is the deep contrast between content and form in writing with visuals. Although there are many similarities between our emoji stories, Judy’s images are much more spaced out than mine. In comparing them, I feel like my story attempts to jam more information into each line, while Judy is more delicate with her chosen information. Despite these electronic hieroglyphs representing an extremely new medium of communication in human history, our automatic reaction was to revert to the style of the scroll.

Comparatively, only a few participants in the emoji story task utilized a horizontal approach to arranging their synopses. For example, Anne Emberline’s story took a linear form, similar to that of traditional writing structures. Anne was unique in that she opted to relay her story image after image, attempting to build meaning using the fundamental processes of reading and writing we currently use. Interestingly, one of the things I commented on Anne’s posting was that although I comprehended what her narrative meant, I had no idea what exactly the movie, game, book, or show was. Consequently, this put Kress’ assertion of “that which I can depict, I depict” (Kress, 2005) at odds with our interpretations, as I have only negotiated an insubstantial meaning specific to me, while others could infer something completely different or, alternatively, nothing at all.

Anne’s Emoji Story

What exactly prompts this style of organization? Why was it that most used line breaks to separate ideas, while others simply rattled off emoji after emoji with the hope of creating meaning? I believe there is something to be said about our semiotic abilities to discern direction and instruction from punctuation. Writing is a marriage of words and symbolic markings, both of which direct meaning making within our minds as we decipher information through written words. With respect to the emoji stories, my interpretation was that each line break indicated a new idea, new sentence, or new concept. I had a more difficult time deciphering Anne’s story than I did Judy’s.

Finally, Judy makes a compelling argument regarding the addition of images to text (in the form of graphic novels) becoming a driving factor in the increased interest levels of young readers. She posits an interesting connection between our human ability to read emotion and facial expressions and our means of inferring more deeply about a particular story. While I agree with her assertions, I can’t help but think of some of the defining principles of Jean Piaget’s cognitive development model; I believe that at a certain point, our minds crave a new challenge as they become able to formally operate within deeper texts, and the image/word relationship begins to become commonplace. Moreover, our processing of both text and image pertains strictly to the visual sense. While Bolter makes this word/image case with respect to internet models of publication, I can foresee a harkening back to the age of orality, where some of our future texts will be truly multimodal, demanding aural, visual, and tactile engagement, and perhaps even our gustatory or olfactory senses.

 

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print. Mahwah, NJ: Lawrence Erlbaum Associates.

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22.

LINK 1 – GOLDEN RECORD CURATION: SELECTION CRITERIA

Among the abundance of compelling tasks we were meant to complete throughout ETEC540, there remains a small collection that stood out as most intriguing; one being the Voyager Golden Record and the process of curating a sample of 10 tracks. As simple as this venture sounds, it challenges participants to address, as Abby Smith Rumsey suggests, what can we afford to lose?

It is a challenging question because, as Smith Rumsey asserts, it’s difficult to determine what has future value, particularly due to our ineptitude at predicting what contexts or events could eventually lend meaning. It is not feasible to truly know the value of anything until far in the future, when certain events and contexts provide meaning to seemingly ‘useless’ artifacts (Smith Rumsey, 2017). It then stands to reason that the best way we can establish present value, at least in the context of submitting ten songs from Earth to our extraterrestrial brothers and sisters, is to formulate some semblance of criteria to follow.

In foraging through my colleagues’ webspaces, I attempted to explore the criteria that others used to ascertain what tracks best belonged on their curated Golden Record. The network analytics I did on the Golden Record Curation Task revealed that Marwa and I chose 70% of the same songs, while Sarah H and I shared only 20% of the same songs. Thus, I decided to investigate the criteria they used for content selection.

Firstly, let’s review the selection criteria I adopted. I chose to use a specific tenet from Abby Smith Rumsey’s article Why Digitize as the foundation of my criteria:

Creation of a ‘virtual collection’ through the flexible integration and synthesis of a variety of formats, or of related materials scattered among many locations (Smith, 1999).

In essence, I creatively applied Smith Rumsey’s principles for valuable digital captures to the Golden Record curation exercise. It’s worth noting that this record is meant for potential alien life elsewhere in our universe. Thus, I intentionally attempted to eliminate any specific cultural, ethnic, or social significance from the music included, partly because if any intelligent life were to stumble upon these sounds, they would presumably be incognizant of those underlying factors. It then follows that the basis of my selection was informed by a synthesis and variety of formats (or genres) and a diversity of locations on planet Earth.

Comparatively, Marwa used an analogous barometer for curating her chosen ten; however, she chose to include a gender metric to aid in selection. With this metric, it seems we may be at risk of entering the territory of equality of outcome. While I agree with her assertion that there is an overrepresentation of classical music and that the entirety of the record is constrained to certain tonal and historical periods, I don’t entirely understand how the idea of ‘conforming to male gender-norms and conventions’ plays into the overall choices. What does this mean exactly? Does this pertain more to the depiction of males within these songs? Or is it more generally about the overrepresentation of males as the artists of these pieces? Are there any suitable alternatives to these selections? How are we to counteract this? Are we to travel down to the Congo to educate the Mbuti of the Ituri Rainforest about gender normativity? Mozart is one of the most prolific and celebrated classical composers in human history, but I’m not sure how much of that he owes to his gender rather than his competence in a certain field. How do we reconcile the idea of the Golden Record conforming to these sorts of conventions with the inclusion of Chuck Berry as the only African American rock n’ roll artist? Further, the Golden Record seems awfully ableist by including only one blind artist!

It simply seems to me, that if we are going to include metrics pertaining to gender or an artist’s/composer’s individual characteristics, the slope continues to become very slippery with respect to having to include a number of other related individual metrics.

Ultimately, the fact is that the Voyager Golden Record was launched in 1977, and perhaps it’s reasonable to estimate that its curators were not as perceptive of or sensitive to these types of conventions as we are in 2021. Moreover, and perhaps most importantly, I’m not entirely sure that any intelligent extraterrestrial life forms that happen upon our curated Golden Record will be overtly aware or remotely conscious of the gender norms we seem to have developed on planet Earth. Regardless, it serves as an interesting distinction: Marwa and I selected 70% of the same songs, showing that the data network alone does not illustrate how different pathways can arrive at the same destination.

In contrast, Sarah’s determining criteria followed a slightly different vein of thought. She chose to select songs based on 1) a representation of diverse cultures on Earth, 2) a variety of styles inclusive of instruments and lyrics, and 3) encapsulating ‘joyful life’ on Earth in contrast to the ‘gloom’ of the current pandemic. Again, we see a tertiary metric that involves extra-musical factors. This is interesting to note because all three of us (Marwa, Carlo, and Sarah) shared two common criteria, diversity in location and variety of style, but varied in a third metric. With respect to epitomizing songs as joyful, it’s difficult to discern how to represent joyfulness in the first place. To what degree is the Navajo Night Chant joyful? Tough to say. Try listening to the Men’s House Song on repeat for more than five minutes and let’s have a conversation about how joyful we feel! Interestingly, El Cascabel, a Mexican mariachi style piece typically played at joyous and celebratory occasions, did not make the cut!

It certainly was difficult not to inject personally subjective measurements into the curation of 10 tracks from an incredibly diverse Golden Record. I think it’s important to remember the purpose of the Golden Record, and to entertain the idea of extraterrestrial life as completely void of any understanding of earthly planetary customs and conventions in direct relation to our subjective experiences. Thus, a strict focus on the musical aspects and the diversity of locations those songs represent seem to yield the most efficient results in terms of degrees of connectivity in curation.

 


Task 10 – Attention Economy & Interface Design

It took me roughly 7 minutes to complete the User Inyerface game, and about 4 to 5 of those minutes were spent trying to fulfill the supposed password requirements. I had to conduct a quick search on the meaning of Cyrillic characters, and questioned the degree of information privacy I was being afforded. Initially, I thought this was just an exceptionally poorly designed website, which raised a red flag for me in terms of what information I should truly divulge. As I progressed through the game, I came to realize it was a cleverly designed interface meant to challenge and frustrate internet users by highlighting a multitude of dark patterns.

Ultimately, the User Inyerface game by Bagaar is completely counterintuitive to the ways in which we’ve been (un)consciously trained to understand and use the fundamental design conventions of the Internet. Basic patterns of recognition (directive buttons that draw the eye, hyperlinked words, checkbox and form-filling functions, and webpage symbols and images) have all been reworked in ways that do not reflect the current affordances of internet functionality. It almost felt as if I was learning a new internet language while playing this game. Perhaps more significantly, I was forced to recognize the depth of online marketing strategies and the considerations developers must contemplate when building web spaces and internet interfaces. I certainly won’t be taking the auto-delete function for granted anymore when I need to fill out an online form. With that said, the User Inyerface was not attempting to sell anything or genuinely deploy dark patterns; rather, it ventures to illustrate the degree to which dark patterns can appear within internet interfaces, to challenge users to recognize them, and to raise questions about their place within marketing strategies that influence social and cultural behaviours (Tufekci, 2017).

The most obvious deception is the giant green button on the first page; it draws your attention away from other page elements, tempting you to click it despite it clearly reading “NO”, as if satirically indicating it is not the right pathway. The button expands when you hover your cursor over it, implying that it actually does something. The fine print below the button is written in a different colour font and reads “Please click HERE to GO to the next page”. I could spend my entire post reflecting on just these nine words.

An internet user reads these words, recognizes the misdirection, and immediately becomes confused as to where exactly to click. My first instinct was to click on the underlined “click”, particularly because I have come to understand hyperlinked words as typically underlined or shaded in a different colour to distinguish them from the other text in the line. Of course, that goes nowhere. My next attempt was to click on the “next page” text because it was written in a light blue hue. Nothing. Consequently, we are left with the capitalized words “HERE” and “GO”. Both words indicate a direction, but neither makes it clear where the next pathway leads; it turns out we literally need to click HERE. None of these design features adhere to the traditional understanding of effectively navigating the internet.

Perhaps the most frustrating aspect for me was the password creation page. Similar to the examples Brignull provides, the Inyerface UI utilizes a number of double negatives meant to confuse users (Brignull, 2011). Phrases like “I do not accept the Terms and Conditions” and “Your password is not unsafe”, plastered in red text, give rise to concern for some users. I noticed this and began continually reworking my password until I somehow met the requirements (it took me a few minutes to decipher that my password being ’not unsafe’ was actually a good thing). I absolutely despised the fact that when you click into the form-filling boxes, the placeholder text doesn’t automatically disappear, and again, the button below labelled ‘cancel’ plays on the user’s traditional assumption that it is the correct button for proceeding to the next page.

The user’s interest form is another example of an interface designed to play on the design customs embedded within the traditional understanding of internet navigation. First, the blue button labelled “download image” draws the user’s eyes and implores them to click, but any critical thinker will recognize that we do not need to download an image, but rather upload one, to proceed. Of course, that option is uncharacteristically buried in a hyperlinked text above. Second, the instructions prompt users to select three interests from a list of questionable options. The twist, of course, is that all the selections are already checked off, including two options that allow users to select or deselect all choices at once. I couldn’t help but remember a quotation from Brignull here:

Those who ignore both the checkboxes will unknowingly give some marketing permissions, while those who zealously tick both checkboxes will also end up giving some marketing permissions (Brignull, 2011).

The last step of the game is the CAPTCHA stage, where the interface asks users to select all images pertaining to an incredibly ambiguous word in order to discern whether we are human. I was almost offended at this point: you want to confirm I’M human when you can’t even design a simple form properly? You can’t adhere to the basic conventions of internet navigation, and you are challenging my humanity?

Regardless, “choose all the images of a bow” could be interpreted as a bow tie, a bow and arrow, or perhaps the act of bowing. “Choose all the images of checks” could manifest as check marks, a monetary cheque, or the act of putting your opponent in check while playing chess. All of these turn out to be correct, and in fact, all of them need to be selected to succeed in this final task.

Consequently, this game is designed to infuriate the user on purpose. It highlights the fundamental design conventions we rely on when navigating the internet at the most basic level, challenges users to creatively problem-solve when those patterns are not adhered to, and provokes participants to begin recognizing dark patterns in online marketing strategies. It made me start questioning: How did I learn to use the internet effectively? Who taught me how to navigate? To what degree are design conventions responsible for facilitating my unconscious understanding of online navigation?

 

Brignull, H. (2011). Dark patterns: Deception vs. honesty in UI design. A List Apart, 338.

Harris, T. (2017). How a handful of tech companies control billions of minds every day. Retrieved from https://www.ted.com/talks/tristan_harris_the_manipulative_tricks_tech_companies_use_to_capture_your_attention?language=en

Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads. Retrieved from  https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads?language=en

Task 9 – Network Analysis: The Curated Golden Record

It took a while to understand how to effectively utilize the Palladio network visualization, and to harness its filtering tools to reveal exactly what I was looking for, but once I did, it was clear how much information was available for network analysis. Evoking some of the language of graph theory, this exercise produced a simple graph of nodes and links rather than a multigraph (and certainly not a multiplex network). It is also clear that the graph was undirected and unweighted, so we can rely only on the total number of links a node has to determine its degree of connectivity.
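To make the idea of degree concrete: in an undirected, unweighted graph like this one, a node’s degree is just the count of edges touching it. Here is a minimal sketch using only the Python standard library; the participant names and track choices are invented for illustration, not taken from the actual class data.

```python
from collections import Counter

# Hypothetical edge list of (participant, track) pairs, mimicking the
# structure of the Palladio curation data (names and picks invented).
edges = [
    ("Carlo", "Percussion (Senegal)"),
    ("Carlo", "Flowing Streams"),
    ("Marwa", "Percussion (Senegal)"),
    ("Marwa", "Flowing Streams"),
    ("Sarah", "Percussion (Senegal)"),
]

# Degree = number of incident edges; in an undirected graph each
# edge contributes one to the degree of BOTH endpoints.
degree = Counter()
for participant, track in edges:
    degree[participant] += 1
    degree[track] += 1

print(degree["Percussion (Senegal)"])  # 3 — the most connected node here
```

Because the graph is unweighted and undirected, this single count is the only connectivity signal available, which is exactly why the data says nothing about *why* two people’s nodes ended up linked to the same songs.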

Some statistics and analysis:

There were 27 tracks on the Golden Record, and 21 participants in this curation exercise.

  • I shared an average of 4.65 common songs with my peers.
  • Marwa and I chose 70% of the same songs (the highest rate of commonality), while Sarah and I shared only 20% (the lowest). Among the three of us, it seems we could agree on only a single song: “Percussion (Senegal)”.

  • The song with the highest degree of connectivity was “Percussion (Senegal)” at 76% of participants.
    • There are fourteen songs categorized as “Folk” or “Cultural”, which make up roughly 52% of the total music on the Golden Record. Every participant picked at least one Folk style song.
      • Of them, Percussion Senegal was chosen an average of 10.64 times (76%), Flowing Stream 9.24 times (66%), Crane’s Nest 7.98 times (57%), Tchakrulo, Types of Flowers, and Panpipes & Drums all 5.88 times (42%), Wedding Song 5.32 times (38%), Azerbaijan Bagpipes, Night Chant, Morning Star, and Izlel de Delyo all 4.62 times (33%), Solomon Panpipes and Pygmy Girl’s Initiation both 3.22 times (23%), and finally Men’s House Song 2.66 times (19%).
  • The song with the ‘lowest degree of connectivity’ was “String Quartet No. 3 in B Flat” at 9% of participants.
    • There are seven Baroque/Classical style tracks on the Golden Record, which make up 26% of the total music on the Golden Record. Nineteen participants (91%) picked at least one Classical style song.
      • Of them, Symphony No. 5 was selected at an average of 3.29 times (47%), Fairie Round 2.94 times (42%), Well Tempered Clavier 2.52 times (36%), Magic Flute 1.82 times (26%), both the Brandenburg and Gavotte 1.47 times (21% each), and String Quartet 0.7 times (10%).
  • Of the Top 10 songs with the highest degrees of connectivity, 50% represent music from the continent of Asia, 30% from North America, 10% from Europe, and 10% from Africa.
    • Of these 10 most commonly selected songs, I chose 80% in my curated Golden Record. Does this simply reflect my superb music taste? (Definitely not.) Is it more a reflection of my ability to predict what others will choose? Perhaps it indicates something about the criteria I used to select these songs?
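The pairwise commonality rates above are simple set overlaps: shared songs divided by the 10-song list length. A minimal sketch with invented selections (only the resulting 70% ratio is chosen to mirror the Marwa/Carlo figure; the actual song lists here are hypothetical):

```python
# Seven invented songs two participants happen to share...
shared_core = {
    "Percussion (Senegal)", "Flowing Streams", "Crane's Nest",
    "Tchakrulo", "Wedding Song", "Night Chant", "Morning Star",
}
# ...plus three picks unique to each, for 10 songs apiece.
carlo = shared_core | {"Symphony No. 5", "Fairie Round", "Magic Flute"}
marwa = shared_core | {"El Cascabel", "Johnny B. Goode", "Melancholy Blues"}

# Commonality = size of the intersection over the fixed list length of 10.
shared = carlo & marwa
commonality = len(shared) / 10 * 100
print(f"{commonality:.0f}% of songs in common")  # prints "70% of songs in common"
```

Note that this metric is symmetric and ignores *why* each song was picked, which is precisely the blind spot discussed below: identical overlap scores can arise from entirely different selection criteria.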

That final statistic prompted me to think further: what criteria did my peers use when selecting songs for their own curated Golden Records? I felt it pertinent to review my own criteria. To me, this is the most glaring piece of information the data does not divulge. Despite the commonalities and differences between my peers and me, there is no indication of why our choices are similar or dissimilar. This is a significant factor to consider: although we may have selected (or not selected) the same song, we may have made that decision based on completely different reasons and criteria. Thus, even though we may have been grouped into certain communities based on our song selections, we may have been grouped there for entirely the wrong reasons.

For example, let’s take the rock n’ roll classic Johnny B. Goode by Chuck Berry. I chose criteria that set aside the societal significance certain songs hold, presumably because those factors would be arbitrary to any extraterrestrial intelligence that happened upon my curated Golden Record. Instead, I focused on a thoughtful variety of songs demonstrating unique and distinct genres, a plethora of instruments, and a certain diversity of location on planet Earth. Alternatively, someone may have included Johnny B. Goode because of the immense cultural and societal value it holds; Chuck Berry was one of the first African-American rock n’ rollers, a revolutionary in his own right, and often considered the father of this particular musical genre. Although we may have come to the same selection, our pathways to that destination were entirely different.

Moreover, because there was a 10-song limit on our Golden Record selections, there are underlying implications regarding excluded songs. I suppose the ‘null’ choice is reflected in the data only indirectly, in comparison to the participants who did select a given song. By not including a song, you are essentially barred from associating with a given ‘community’. With only 10 songs to pick, I simply had to be ruthless about which tracks to include. There were also criteria behind my excluded tracks: songs that sounded similar or used the same instruments, songs of the same genre, songs from the same region of the globe, and so on. I didn’t pick Panpipes and Drums or Night Chant, for example, but I would have if I had had 11 songs to choose!

The community membership of the top ten most commonly selected songs.

The act of including or omitting is itself inherently a political act. When I peruse Twitter between bouts of work, I often notice staunch supporters of the idea that “teachers should never reveal their political positions or ‘indoctrinate’ youth with various political ideologies”. Truthfully, I believe the act of teaching itself is often political. For example, we teach from a mandated curriculum created by a governing body, one that differs from other curricula around the globe. Should we not then develop a universal curriculum? It’s often best to tackle these types of issues head on, discuss them, and formulate opinions on them, rather than hide from adversity and sweep unwanted conversations under the rug!

 

