[6] Reflection on Megan’s Site Design and Speculative Futures

Access Link: Task 12 | Speculative Futures – Megan Stewart

[A] Platform, Design, and Literacies

Like Olga and Allison, Megan built her website on UBC Blogs using a WordPress template. The visitor needs to scroll through all of the blog posts: the latest post appears first, and Task 1 is the last. Her site has no menus, only a “Home” link that navigates the visitor back to the latest post. As on Allison’s site, clicking the title “Megan Stewart ETEC 540” takes you to all posts, and there is a search bar at the top of the page. She does, however, have widgets enabled for navigating to recent posts and comments.

As on my site, writing is dominant in her posts, though her writing is more concise and efficient than mine. Having realized throughout this learning journey that multimodal representation is more beneficial, I worked toward interlacing visual and linguistic literacies in my later productions. I have integrated multiple representations of content (text, video, images) more often than Megan, whose content primarily privileges textual literacy over visual literacy. In the same spirit, for Task 12 I composed narrative and video productions to make meaning. In some respects, Megan’s surveys can also be perceived as a multimodal presentation (visual text); in my opinion, they integrate more abstract symbols than my productions and invite more complex interpretation at contextual, metaphoric, and philosophical levels.

[B] Speculative Futures

Megan’s artifacts share several commonalities with my first production, “My Brain Chip”; I sensed that they carry the same message, although the context and characterization differ. Both encapsulate the failure of digital ethics: the tendency to trust computer algorithms and the belief that algorithms are based on—and produce—indisputable facts (O’Neil, 2016). Both also consider how computer companies ignore recent allegations that their AI products were released with subtle discrimination, racism, and flawed data, dismissing them as “glitches” or one-off errors. Finally, both show the consequences of overreliance on automation.

Both of us speculated that significant advances in Artificial General Intelligence (AGI) will happen in the coming few decades, to the point that AI could successfully perform any intellectual task that a human being can. Robots and AI would then take over many human jobs, and everything would be under their control. These worrying ideas have been raised by Stephen Hawking and other leading scientists, including Stuart Russell, Max Tegmark, and Frank Wilczek, who have warned us about the potential downsides of AI becoming too clever. The same ideas have fed Hollywood films for decades, such as “2001: A Space Odyssey,” the “Terminator” series, and “Transcendence,” all of which depict one version or another of a dystopian world dominated by out-of-control AI.

In her scenario, Megan speculates that after the automation of many human jobs, humans will report for “Re-education and Work Assignment” to receive five years of educational training so they can work in a new industry. The AI-run software asks users to fill in a questionnaire and assigns each candidate the training industry that supposedly fits best. The results show that the white man (Justin Scott, a mechanic in the robotics industry) was assigned training in the healthcare industry, while the woman of color (Chenille Jackson, a coder in the technology sector) was given a training post in the hospitality industry. The decision made by the AI software was based on gender, race, and color rather than on the candidates’ professional experience or qualifications. She concluded that “[These] results imply that individuals who fought hard against systemic barriers to work in technology industries are set back again due to automation” (para. 1). Similarly, in my narration, the AI-run Faception test rejected the request of Ali (a 13-year-old child in the story) for a brain chip implant because the system predicted a 10% chance that the child might become a future terrorist. Indeed, these results come as no surprise to anyone monitoring the perpetuation of discriminatory technological systems. Machine-learning algorithms, after all, get their test cases from human datasets or from their programmers (predominantly white men). So it is rational to expect that future AI systems “[would have picked] up on biases in the way a child mimics the bad behavior of his [or her] parents” (Metz, 2019, para. 2) and would carry all sorts of the idiosyncratic foibles of our humanity.

Megan’s artifacts may be seen as more authoritarian versions of AI technologies currently marketed as decision-making helpers. For example, the “Predictim” app lets individuals scan the web footprint and social media of a prospective babysitter to determine how risky a choice that person might be (O’Brien, 2020). Similarly, HireVue claims that its AI algorithm analyzes candidates’ movements, and whether or not they smiled in their video recordings, to inform hiring decisions. The dilemma with such technology is that no one has answers to the “why” or “how” questions: Why are those who smile a lot or gesture a lot considered better candidates than those who do not? How are the candidates’ movements, gestures, and expressions analyzed? It seems that “in our world of big data, we are losing sight of the bigger wisdom and the qualities that don’t come with measures” (Toyama, 2015, p. 52). In all cases, these applications are open invitations for personal inputs and possible biases; and unfortunately, the developers do not seem to be pausing to consider ethics (O’Brien, 2020).

Utopian futurists commonly imagine futures where the power of artificial intelligence will help us make better decisions, augment our capabilities, and enhance our lives. This vision has been encapsulated in the Star Trek future, which depicts technologies as heroes alongside humans, saving the earth from war and illness (Toyama, 2015). But this vision is very hard to believe when we know that the technology field discriminates against others: its producers regard whiteness as “normal, universal, and appealing” (Benjamin, 2019, p. 29), and AI systems are founded on “flawed” assumptions that are neither explainable nor transparent (O’Neil, 2016; O’Brien, 2020). On the other hand, dystopian futurists believe that technology will be destructive to humanity. This vision was demonstrated decades ago by Aldous Huxley, the author of Brave New World, where society is engineered so that everyone plays a role according to their presumed abilities. Megan’s artifacts and my “My Brain Chip” story belong to the second camp; however, my second scenario, “A Day in the Infinite Class,” encapsulates the first. The reason for this is my belief that the future will probably be a mixture of both visions (Toyama, 2015), following Kranzberg’s first law: “technology is neither good nor bad; nor is it neutral” (Schatzberg & Vinsel, 2018, para. 1).

As a final remark, our scenarios present ethical questions rather than answers: If AGI is possible, is it ethically and morally acceptable to build “artificial intelligences”—that is, software (“softbots”) or hardware (robots) that can think like humans? Would this put us in the position of being a Dr. Frankenstein? Would they take responsibility, since they make decisions on our behalf? Might they be eroding our thinking abilities rather than augmenting our capabilities? The consequences of failing to take these ethical questions seriously at this early stage of developing new AI systems would be drastic and difficult to reverse (O’Brien, 2020). Thinking through the answers now will enable us to keep moving forward, taking advantage of revolutionary new technology while remaining assured that we are on the right track (O’Brien, 2020).

