Category Archives: Tasks

Linking Assignment

Link 1 – Sebastoam’s Task 1 – What’s in Your Bag?

Comment made on May 21st but did not go through moderation:

Hi Sebastian,

Looking forward to learning alongside you. Wow, 4 MET courses in one semester. Best of luck! And congrats on the upcoming retirement as well!

25 years ago my bag would definitely have held a Sony Discman as well (I think they were rebranded as Walkmans after the original cassette Walkman was discontinued), and I also carried a 25-disc CD wallet so that I could switch out CDs depending on my mood. Did you do something similar, or did you just have the one CD at a time?

If memory serves, 25 years ago CD burners weren’t popular yet, so I just listened to the CD tracks in the order that the artist designed. I think this is something that was gradually lost over time: as MP3 players and online music streaming services became more popular, it became rarer for me to listen to a CD in its entirety rather than jumping around to the tracks I enjoyed. If we go with the “weaving” and “creating” definition of text, then we went from large “texts” created by the artist (the CD with its specific track order) to small chunks by different authors, or to a “text” created by an algorithm such as Spotify’s recommended playlists. What are your thoughts on that transition?

Cheers,
Matt

 

This link demonstrates that as early as week 1, my preexisting notion of “text” had changed from something with written words to something that is woven and created. Seeing Sebastoam mention that they would have had a Sony Walkman in their bag 25 years ago reminded me of my own experiences with Discmans and Walkmans, of the type of text that is created when one makes a mixtape or a mix CD, and of the various deliberations that artists and support staff go through when they decide the track order of an album. Sebastoam’s post, in conjunction with the module one readings, allowed me to incorporate more items into my mental schema of what text is.

Link 2 – Kris’s Task 2 – Does Language Shape the Way We Think?

Thank you for sharing your perspectives. Your point about Borditsky’s [sic] points at 24:00 reminds me of this TED Talk (https://youtu.be/PWCtoVt1CJM?t=234) by Gebru. Artificial intelligence perpetuates certain gender stereotypes and associations, following the trends of the data that trains it.

 

Kris’s point about gendered nouns and the prevalence of gender stereotypes in society reminded me of Gebru’s TED Talk, which I encountered in previous MET courses. Little did I know at the time that this would become a major topic several months later in module 11, when we talked about algorithms perpetuating preexisting bias. This linkage shows how the various aspects of this course are interconnected, much like how hypertext shifts our thinking from linear paths to networks. Boroditsky’s TED Talk links to algorithms, and, to a far weaker extent, my thoughts on how the arrangement of tracks on an album could be text link to the Voyager Golden Record task.

Link 3 – Sebastoam’s Task 2: Does Language Shape the Way We Think 

Comment made on May 26th but did not go through moderation:

I just wanted to chime in on your 1:46 point about the abundance of languages. In a TED Talk (https://youtu.be/RKK7wGAYP6k?t=762) that’s a condensed version of this video, Boroditsky talks about how we’re losing one language a week, and how we could potentially lose half of our languages in the next century. Colonization, globalization, and social values that view certain languages as superior to others all lead to the continued loss of languages around the world.

That said, there certainly are efforts to combat this. As part of Truth and Reconciliation efforts, the BC Ministry of Education has begun to offer Indigenous language classes in secondary school. One of the videos in 2.2 came from Wikitongues, an attempt to archive languages and dialects near extinction around the world. Hopefully more places around the world will also begin to implement policies to help preserve dying languages.

 

Sebastoam’s point about languages being at risk of extinction is one that I emphasized in my Task 2 as well by talking about my mother tongue, Taiwanese. Even outside of this course, whether teaching an introductory epistemology course or musing on various subreddits, I get quite passionate about dying languages, so I jumped at the chance to reply to Sebastoam, who made that point. Sebastoam’s post also allowed me to think about and elaborate on ongoing efforts to preserve dying languages, something I didn’t have a chance to discuss in my own task.

Link 4 – Joti’s Task 3 – Voice to Speech

I really enjoyed reading your Digging Deeper section. Reading Gnanadesikan gave me a few thoughts about Taiwanese, Chinese, and literacy rates, and reading your Aha Moment section led to further reflections.

My initial response to the readings this module was skepticism. Many of the readings seem to suggest a “writing envy” in speakers of languages without writing, and that without writing a language is fated to die out (Gnanadesikan 2011) or its speakers unable to perform complex mental processes (Ong 2002; Schmandt-Besserat & Erard 2007). Schmandt-Besserat & Erard (2007) also argue that the English/Roman alphabet will eventually find its way into various languages. My initial reading response focused on Indigenous languages and how transcribing them using the English/Roman alphabet may lead to further westernization of Indigenous peoples and the loss of what is unique about an oral-only Indigenous language. That said, as the latter readings of the module demonstrate, Romanizing Indigenous languages is a practice that Indigenous communities are actively engaging in, so perhaps my fears are unfounded (Hadley 2019, Anishinaabemodaa n.d.).

Your reflections led to some further thoughts about how Taiwanese applies to the arguments in this module. On the one hand, Taiwanese did have a Romanization system called Peh Oe Ji (POJ), which Western missionaries introduced to transcribe Taiwanese sounds into written text using the Roman alphabet, but this system has been mostly unused since the Japanese colonial era (Kloter 2017). The fact that Taiwanese doesn’t have a writing system (Mair 2003) and has fewer speakers every year supports Gnanadesikan’s (2011) arguments, but I also wish to point out that it’s partly because of writing that regional languages/dialects are eliminated in favour of a main national language. For example, in China there is a negative correlation between literacy rates and the usage of regional languages/dialects such as Shanghainese (Wellman 2013), and module 4 readings such as Innis (2007) highlight how the printing press allows for far faster dissemination of knowledge through writing than oral communication. Writing may help preserve languages, but it’s also a driving force behind languages becoming endangered in the first place.

 

Joti’s Task 3 led to a lot of opportunities for reflection and elaboration. While going through the readings, especially Schmandt-Besserat & Erard, one claim I was quite uncomfortable with was that the Romanization of other languages is inevitable, and I thought of how the local Indigenous languages are Romanized in various ways. On the one hand, Romanization helps with the preservation of a language, but on the other hand, it furthers the westernization/colonization of Indigenous culture. This was a topic on which I wish I could have consulted various local Indigenous people and heard their thoughts, though later course materials such as Hadley (2019) and Anishinaabemodaa (n.d.) did alleviate my concerns somewhat by emphasizing language preservation.

I was able to further elaborate on Taiwanese, and now at the end of the course I’m reminded of how Joti’s Task 3 allowed me to make the link between the printing press discussed in module 4 and how writing both preserves and endangers languages.

Link 5 – Brie’s Task 5 – Twine

Hi Brie,

Thanks for the fun Twine. My first time through I managed to make most of the “moving forward” decisions rather than the decisions that looped back, but my second time around I decided to check out the other options. I think my experience with your Twine shows a potential limitation of hypertext (and Choose Your Own Adventure stories): with so many potential paths, not everyone will explore every path.

This reflects a current conflict in pedagogy. On the one hand, as a teacher I want to provide opportunities for students to explore whatever paths they are interested in to further develop their skills and knowledge, and I think mediums that use hypertext are perfect for this. On the other hand, I’m limited by time and curricular constraints, so if I want my students to learn how to balance chemical reactions, rather than a non-linear medium such as hypertext where students can “wander off the path,” I prefer paper/linear texts that provide scaffolding for students to gradually develop skills and competencies.

Would you also say that there are some things that non-linear mediums are better for, while for other things linear text is the way to go? Or is that an anachronistic way of thinking, and we should just fully embrace non-linear text?

 

Brie’s wonderfully made Twine allowed me to think about a few aspects of hypertext that I didn’t have a chance to discuss in my own task. The first is that with many possible branches and endings, not everyone will explore every path. While one may argue that for entertainment purposes such as Choose Your Own Adventure stories this may not be a huge issue, I believe that by not exploring all the paths presented we’re doing a disservice to the creator and the work they put into every path. Visual novels, a genre of video games akin to Choose Your Own Adventure stories, sometimes get around this problem by unlocking the “true story” once all the major paths of a story are explored.

Brie’s task also allowed me to reflect on hypertext and education, and how under the current BC Ministry of Education policies there’s a conflict between wanting students to choose and explore various topics to develop skills and knowledge, and being constrained by a curriculum to focus on teaching certain topics, not to mention that various pedagogical concepts such as scaffolding may be far more difficult to implement through the hypertext medium.

Link 6 – Steph’s Task 9 – Network Assignment 

Hi Steph,

Thank you for your analysis of the data. Like you, I quickly noticed that we all selected El Cascabel and most of us selected Wedding Song, but that only led to more questions: what about the other seven people who selected El Cascabel but didn’t end up in our group, or the other five people who selected Wedding Song? I started looking at our non-selections and noticed that all five of us passed over Brandenburg Concerto 2, Sacrificial Dance, and Flowing Streams. While this could be a potential reason, as you’ve mentioned, it’s impossible to infer accurately.

Another grouping method I thought about was seeing how many tracks I shared with each participant. I shared five tracks with you and Jonathan, while I only shared three tracks with Carlo and four tracks with Carol. There were three others with whom I shared five tracks who didn’t end up in my group. If I had the motivation and time, this could be done for all participants, and that could perhaps give more data on how the algorithm grouped us.

Your point that the one track you truly liked wasn’t well connected in our group also highlights the deficiencies in our data. If we had ranked our selections and produced a weighted graph (which would have made both this task and task 7 far more difficult), we may have been able to obtain a visualization and grouping that better reflects our personal connections to these tracks.

Finally, like you, I had the same thought about not belonging to this group. While most of the group had diversity as their top selection criterion, I went with having vocals for mine. This further supports your point about the arbitrary nature of algorithmic groupings; my main takeaway is that we should be vigilant about trying to understand the processes behind these algorithms rather than allowing them to remain a “black box.”

 

Both Steph and I had the same reservations about whether we truly belonged in the group we were placed in. Even though we used different methods for our analyses, we looked at similar things, such as how all of us selected El Cascabel and most of us selected Wedding Song. I feel that with the adjacency matrix I constructed, it was easier to organize the data to see who else selected El Cascabel and Wedding Song, as well as to see our non-selections. Moving slightly away from the course (although still talking pedagogy), this is a prime example of constructivism: different people seeing the same thing (our Golden Record data) will look at it from different angles, highlight and analyze different things, and reach different conclusions. At the same time, this also gives more weight to the consensus truth test: if we approach a question from different perspectives and reach the same conclusion (such as that algorithmic groupings feel arbitrary because they do not consider the reasons why we selected these tracks), then there is more validity to the conclusion we reached through different paths.

Final Assignment: Automatic Writing Assessors

Introduction

Text has been evolving alongside technology throughout human history. As humanity developed and utilized the scroll, the codex, the printing press, hypertext, and algorithmic predictive text, aspects of text and human thought changed. In an age of gamification, instant feedback, and advancements in machine learning, one possible next step for text-related technology would be automatic writing assessors (AWAs). Unlike modern tools such as spellcheck, autocorrect, and predictive text, which provide feedback as one writes, AWAs evaluate and provide feedback on an entire piece of writing rather than individual words and sentences, allowing the user to act on that feedback and refine their writing.

Various computer algorithms already exist that associate variables of a piece of writing with its quality. For instance, the free online scientific journal repository mentioned in module 1 of this course, arXiv, utilizes an algorithm, updated through machine learning, that detects the presence of key phrases to determine whether an article is scientific and should therefore be added to arXiv (Becker 2016). AWAs can function in the same manner, though instead of an output of ‘scientific, therefore upload to arXiv’ or ‘not scientific, therefore do not upload to arXiv,’ AWAs can assign a numerical score to a piece of writing and/or provide feedback on it.

In this article I will discuss my pedagogical context, the history of AWAs, how a machine learning technique, the random forest, can train AWAs, how AWAs can be used in my context, and concerns that may arise from the development of AWAs.

Personal Pedagogical Context

I began my education career as a science teacher, and a few years ago I began to teach an introductory epistemology class, Theory of Knowledge (TOK), for the International Baccalaureate (IB) Diploma Program. Unlike assessments for science classes, which typically contain questions requiring students to follow various steps to solve, letting me trace students’ thinking from the work shown, TOK assessments are pieces of writing that I submit to the IB organization for moderation. For the TOK Exhibition, students have to select three objects and relate them to one of the prompts provided by the IB in a piece of writing of at most 950 words. The TOK Exhibition is a third of a student’s overall TOK mark, and after I grade all of my students’ exhibitions, the IB randomly selects certain students’ work to ensure that I’m neither too hard nor too easy with my marking. In my first years teaching TOK, the IB determined that I was too generous with the students I rated highly, while for students in the mid to low range I matched their assessment practices. IB TOK rules also stipulate that, to ensure the exhibitions are the students’ own work, as a teacher I’m only allowed to read and provide feedback on one draft. All these factors make me wish for a tool that a) allows students to check and refine their own work without teacher input, and b) allows me to better grade these exhibitions to match IB standards. This is where AWAs could be of use.

Evolution of AWAs

According to Shermis & Burstein’s Handbook of Automated Essay Evaluation: Current Applications and New Directions (2013), the concept of AWAs stemmed from Ellis Page’s 1966 article, The Imminence of… Grading Essays by Computer. In 1993, Wresch looked back at Page’s article and concluded that the concept was still being explored, with nothing momentous achieved yet. Fast forward to 2012, when the Hewlett Foundation organized a contest to evaluate AWA tools and found that the quadratic weighted kappa (QWK), a statistical metric used to determine how well two sets of scores agree, ranged from 0.75 to 0.97 for human scorers on the samples provided, while the winning AWA tool had a QWK of 0.77 (Shermis & Wilson 2024). In the past decade, AWAs have been developed further, and in 2021-2022 the US’s National Assessment of Educational Progress (NAEP) program ran a contest on AWAs and found that the human QWK was 0.91 while the winners had QWKs of 0.89, 0.88, and 0.87 (Shermis & Wilson 2024). AWAs are being widely adopted, with several US states such as “South Dakota, Utah, North Carolina, Louisiana, Ohio, and West Virginia” using AWAs to provide formative feedback to students (Shermis & Wilson 2024).
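To make the QWK metric concrete, here is a minimal sketch of how such an agreement score could be computed, using scikit-learn’s kappa implementation; the two score lists are made up for illustration and are not data from either contest:

```python
# Minimal sketch: quadratic weighted kappa (QWK) between two scorers.
# The scores below are invented for illustration only.
from sklearn.metrics import cohen_kappa_score

human_scores = [7, 5, 6, 8, 4, 7, 6, 9]  # hypothetical human ratings
awa_scores = [7, 5, 7, 8, 4, 6, 6, 9]    # hypothetical AWA ratings

# weights="quadratic" penalizes large disagreements more than small ones.
qwk = cohen_kappa_score(human_scores, awa_scores, weights="quadratic")
print(f"QWK: {qwk:.2f}")  # 1.0 = perfect agreement, 0 = chance-level agreement
```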

Various AWAs are currently on the market, such as Pearson’s Intelligent Essay Assessor (IEA), which “uses machine learning and natural language processing to score essays and short answers in the same way human scorers do” (Pearson n.d.); Page’s Project Essay Grade (PEG), where “students who received PEG feedback alongside traditional writing instruction demonstrated a 22% stronger improvement in their writing skills compared with those who did not” (ERB Team 2023); and Vantage Labs’ IntelliMetric, where a Californian school district “was able to evaluate student writing and their students were able to use the instantaneous feedback to drastically improve their writing. The majority of teachers found IntelliMetric to benefit their classrooms as an instructional tool and found that students were more motivated to write” (n.d.).

Rather than analyzing and evaluating these various tools on the market, the goal of this article is to provide my own proof of concept as to how an AWA can be developed using machine learning algorithms and how it could be used in my practice.

Machine Learning: Random Forest

As discussed in module 11 of this course, language-based machine learning algorithms are trained on a corpus of texts. A machine learning algorithm can statistically analyze its corpus to determine the likelihood of which words follow which, allowing language-based artificial intelligence to produce output that passes the Turing test (Hall n.d.).
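As a concrete illustration of that statistical idea, here is a minimal sketch (in Python, with a made-up toy corpus of my own) of a bigram model that counts which word most often follows each word:

```python
# Minimal sketch: count word-following likelihoods in a tiny toy corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# For each word, count the words observed immediately after it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

# The most likely continuation of "the", by observed frequency.
print(follows["the"].most_common(1))  # [('cat', 2)]
```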

There are numerous types of machine learning algorithms, such as neural networks, linear regression, logistic regression, clustering, decision trees, and random forests (IBM Cloud Education n.d.). Of these, the random forest caught my eye, as it is more transparent than the “black box of the neural network” (IBM Cloud Education n.d.) and allows for internal evaluations of accuracy.

A random forest uses numerous distinct decision trees (IBM Cloud Education n.d.) to overcome issues with a single decision tree, such as overfitting (Yee & Chu 2015). A decision tree uses a series of machine-generated variables to split the data into smaller groups in order to predict something (Yee & Chu 2015). For instance, variables to determine the strength of a piece of writing could include the number of key phrases used, the word count, the number of commas and/or periods, and the average word length. A decision tree for an AWA could first look at the word count and determine that anything above the maximum or below the minimum won’t make the cut. Then, it could look at the presence of key phrases to predict whether the piece is on topic, as well as count the number of words between periods to flag run-on sentences. Sometimes overfitting occurs, where variables selected for a decision tree are irrelevant but exist in the tree because they happen to match the training data (Yee & Chu 2015). The random forest overcomes the overfitting issue through the generation and evaluation of numerous decision trees.

Starmer (2018) provides an introductory glance at how random forests work. To apply the concept to AWAs, imagine that we are given a sample of 100 TOK Exhibitions, all already assessed by the IB. The first decision tree randomly selects 72 of these exhibitions and uses variables such as the presence of key phrases, average word length, and word count to match each of the exhibitions to their scores. The second decision tree selects 68 exhibitions and uses variables such as the number of commas, the absence of words associated with a bad exhibition, and the word count. This process repeats until there are numerous distinct decision trees that make up a random forest. For each tree, the unused samples can be used to determine that tree’s accuracy. For the second tree, if it scores the 32 unused samples of the training data inaccurately, it can be pruned from the forest (perhaps the number of commas was an overfitting variable).

Once the forest is generated from the training data, it can then be used to evaluate new data. If 30 decision trees are generated and 27 of them think an exhibition should receive 7/10, while two evaluate it at an 8 and the final one at a 6, that exhibition should receive 7/10.
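The sketch below ties these pieces together using scikit-learn’s random forest. The four surface features and the randomly generated stand-in data are my own assumptions for illustration; a real AWA would need far richer features and genuine IB-scored exhibitions as training data:

```python
# Minimal sketch of a random-forest AWA, assuming 100 pre-scored
# exhibitions and four surface features per exhibition (e.g. word
# count, average word length, comma count, key-phrase count).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((100, 4))           # stand-in feature values, scaled 0-1
y = rng.integers(1, 11, size=100)  # stand-in IB scores from 1 to 10

# Each of the 30 trees trains on a bootstrap sample of the exhibitions;
# oob_score=True evaluates trees on the samples they did not see,
# mirroring the "unused samples" accuracy check described above.
forest = RandomForestClassifier(n_estimators=30, oob_score=True, random_state=0)
forest.fit(X, y)
print(f"Out-of-bag accuracy: {forest.oob_score_:.2f}")

# Scoring a new exhibition: the prediction is the majority vote across
# the 30 trees, as in the 27-votes-for-7 example above.
new_exhibition = rng.random((1, 4))
print(f"Predicted score: {forest.predict(new_exhibition)[0]}")
```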

Pedagogical Considerations

Note that the variables listed above are only examples and may not correlate strongly with the quality of a piece of writing. As the arXiv and NAEP contest examples show, AWA tools can currently be used to gauge whether a piece of writing belongs in a certain category (whether it is scientific, or whether it merits a certain score). While current language-based algorithmic models operate on statistical likelihoods rather than an understanding of rubric criteria, they can still provide an outcome similar to a human’s. In the context of the IB TOK Exhibition, where teachers can only provide feedback on one draft, this gives students another potential tool to evaluate and refine their work. It could also give instructors another guide in scoring TOK Exhibitions, provided that IB scores (rather than those of a specific teacher such as myself) were used as the training data.

In a way, a random-forest AWA for the IB TOK Exhibition already resembles how I introduce the exhibition to my students. One activity I do is to have students go over the rubric, then move around various stations to read sample exhibitions provided by the IB and rate them. After we compare the scores they gave the exhibitions with the actual IB scores, students return to each exhibition to study the features of the samples that led to their scores. While students hopefully don’t latch onto something as frivolous as the number of commas used, strong students were able to gain an understanding of what a highly scored exhibition requires. After generating a piece of writing, a student could then cycle through having it evaluated by an accurate AWA and refining it, which can level the playing field and help all students understand the aspects and requirements of a strong exhibition, especially in this context where continuous teacher feedback is forbidden.

Concerns

AWAs can spark various concerns that parallel those we discussed in module 11 of this course around training data for language-based algorithms: the privacy concerns exemplified by the Enron emails mentioned by Herman (2019), and how various algorithm-driven tools perpetuate preexisting biases (Talks at Google 2016). Recently there have also been concerns about machine learning using copyrighted material as training data, exemplified by the New York Times’ lawsuit against OpenAI mentioned by Reed (2024). These issues could be minimized with a mindful approach to AWAs. The privacy and copyright issues around training data can be mitigated by asking students for consent to use their work as training data and by removing any personal information from that work beforehand. While it’s true that AWA use could perpetuate the preexisting biases of the original examiner(s) who assessed the training data, the assessment process that produces the training data and the random-forest method of machine learning are transparent, allowing policy makers at educational institutions such as the IB to audit them for any unfair bias.

Conclusion

Recent advancements in text technology such as spellcheck, predictive text, and even language-model artificial intelligence such as ChatGPT have reduced the production of text; instead, these technologies encourage the selection of machine-generated suggestions. AWAs would allow a shift back towards text production, providing instantaneous feedback on a piece of writing created by the user.

References

Becker, K. (2016, October 13). What Counts as Science. Nautilus. https://nautil.us/what-counts-as-science-236150/?_sp=eec044e8-5f5a-44bb-af98-6d4cab8e3a2b.1722903148511

ERB Team. (2023, August 10). How the AI-Driven PEG Scoring Algorithm Can Improve Student Writing. ERB. https://www.erblearn.org/blog/peg-scoring-algorithm/

Foltz, P. W., Streeter, L. A., Lochbaum, K. E., & Landauer, T. K. (2013). Implementation and applications of the intelligent essay assessor. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation (1st ed., pp. 68-88). Routledge. https://doi.org/10.4324/9780203122761-5

Hall, D. (n.d.). 99% Invisible (No. 382). Retrieved December 12, 2022, from https://99percentinvisible.org/

Herman, C. (Host). (2019, June 5). You’ve got Enron mail! (no. 35). [Audio podcast episode]. In Brought to You By. Business Insider.

IBM Cloud Education. (n.d.). What is machine learning (ML). IBM. https://www.ibm.com/cloud/learn/machine-learning

Page, E. B. (1966). The imminence of… grading essays by computer. Phi Delta Kappan, 47(5), 238-243.

Pearson. (n.d.). Automated Scoring. Pearson Assessments. https://www.pearsonassessments.com/large-scale-assessments/k-12-large-scale-assessments/automated-scoring.html

Shermis, M. D. & Burstein, J. (2013). Handbook of automated essay evaluation: Current applications and new directions (1st ed.). Routledge. https://doi.org/10.4324/9780203122761

Shermis, M. D. & Wilson, J. (2024). The Routledge international handbook of automated essay evaluation. Routledge.

Starmer, J. (2018, February 5). StatQuest: Random Forests Part 1 – Building, Using and Evaluating [Video]. YouTube. https://www.youtube.com/watch?v=J4Wdy0Wc_xQ&ab_channel=StatQuestwithJoshStarmer

Talks at Google. (2016, November 2). Weapons of math destruction | Cathy O’Neil | Talks at Google [Video]. YouTube.

Vantage Labs. (n.d.). Intellimetric. https://www.intellimetric.com/

Wresch, W. (1993). The imminence of grading essays by computer—25 years later. Computers and Composition, 10(2), 45-58. https://doi.org/10.1016/S8755-4615(05)80058-1

Yee, S. & Chu, T. (2015, July 27). A visual introduction to machine learning. r2d3. http://www.r2d3.us/visual-intro-to-machine-learning-part-1/

Task 12: Speculative Futures

Prompt

Describe or narrate a scenario about an advertisement found a generation into a future in which “progress” has continued. Your description should address issues related to communication and elicit feelings of amusement.

Ad

Ad transcript:

Want to reach a bigger audience with your ads? Looking for an ad space where viewers literally can’t look away? Get your ads on Googgle™ now! In case you’ve been living under a rock, Googgle is the new smart goggles from Google. Googgle used to have a free model that attracted almost 38 million Canadians with its various features, such as real-life captioning/translations, recording first-person Snap-Tok videos, and virtual reality gaming, all without interruption. As we transition to a subscription model where free users will be subjected to ad breaks (temporarily disabled if we detect the user is driving), and with users signing away their privacy rights through our 328-page EULA, this is your chance to reach users with our precisely targeted ads! Want to become the next major car company such as Honda and Toyota? Anytime a user spends 3 seconds looking at a car dealership, or if our microphones pick up key phrases such as “new car,” we’ll bombard them with ads for your vehicles! Your company specializes in groceries? Anytime our GPS detects that a user is at a supermarket and we hear key phrases such as “what’s for dinner” or “I’m hungry,” we’ll hit them with ads for your products! Better yet, using our machine learning algorithms, we’ll detect what kinds of circumstances make users most susceptible to your ads and begin focusing your ads on everyone in those circumstances! Interested? Contact Google today to inquire about ads on the Googgle!

Reflection

My ideas for this prompt were first inspired by seeing a digital billboard displaying “your ad here,” which led me to think about an ad aimed at advertisers. Thinking about the future, my mind immediately went to dystopian scenarios where people have retina implants that play ads, but because that tech would be more than a generation away, I went with the lower-tech alternative, “smart glasses.”

Although Google Glass was developed a decade ago and was considered a failure (Gvora 2023), technologies associated with smart glasses are still being developed. For instance, as an evolution of the voice-to-text technology discussed in module three, Google has been working on augmented reality (AR) smart glasses that provide “subtitles for real life,” which could be translated in real time to break down language barriers (Google 2022).

I imagined a world where smart glasses make a comeback in the near future alongside advancements in augmented and virtual reality technology, and I applied Google’s current practices with YouTube. While YouTube was originally ad-free, Google eventually monetized it and came up with a subscription model, YouTube Premium, where paid users do not receive ads while free users are bombarded with targeted ads. I imagined the Googgle, a pun on Google and goggle, following a similar trajectory: initially released ad-free to get millions of users on board, then switched to an ad model to increase revenue. The targeted ads and the correlation of circumstances with ad susceptibility are inspired by module 10, where sources such as Tufekci (2017) discuss how companies use machine learning algorithms for ads.

References

Google. (2022, May 11). Breaking down language barriers with augmented reality | Google [Video]. YouTube. https://www.youtube.com/watch?v=lj0bFX9HXeE

Gvora, J. (2023, April 30). Google Glass: What Happened To The Futuristic Smart Glasses? Screen Rant. https://screenrant.com/google-glass-smart-glasses-what-happened-explained/

Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads. [Video]. TED.

 

Task 11: Detain/Release

It’s scary how, despite all the warnings about the issues with algorithms in the various podcasts and videos for this module, I ended up relying heavily on just three colour-coded lines, “fail to appear,” “commit a crime,” and “violence,” to make decisions in Porcaro’s 2019 simulation Detain/Release. Rather than mulling over all the aspects of each case, such as what the prosecutor recommends or the defendant’s age and statement, I gravitated towards the green/yellow/red words that highlighted the risks of each defendant and, without thinking about the accuracy behind them, used those to inform my choices. This personal experience with Detain/Release reminds me of the concept of “thin-slicing,” and in this reflection I will focus on how thin-slicing creates a positive feedback loop when we throw algorithms into the mix.

Almost two decades ago I was taking an undergraduate cognitive psychology class, and Gladwell’s 2005 book, Blink, was the assigned reading. In it, Gladwell argues that with life experience, rather than processing all the information available, people develop subconscious heuristics that rely on a few key pieces of information to make decisions and judgements. Gladwell calls this “thin-slicing” and gives numerous examples, such as how, rather than agonizing over all the aspects of a property, people buying a new home need only a few moments with it to realize it’s “the one,” or how a psychologist with decades of experience working with married couples can predict with 90% accuracy whether a couple will stay married 15 years down the road after spending 15 minutes with them. Gladwell also points out that thin-slicing is often affected by prejudices or biases and does not always lead to correct decisions, giving the example of blind vs. face-to-face orchestra auditions that was also given by O’Neil in the 2016 Talks at Google video.

Two decades later, I believe that thin-slicing is more prevalent. With easy access to far more data, one could spend hours gathering information to make a choice, or one could selectively place their attention on a few key pieces of information. This attention selection can be either deliberate or primed subconsciously through UI design elements such as colour, movement, and sound, discussed in last week’s module. For example, in an assignment for ETEC 511 a year and a half ago, I reflected on how I purposely considered only price, rating, images of the room, and location when selecting hotels for my vacation, whereas in Detain/Release, I suspect the colour-coding heavily influenced me to pay more attention to that information. The prevalence of video games could also contribute to more thin-slicing; Green & Bavelier (2012) was one of the readings in the aforementioned ETEC 511 assignment, and in it they discuss how video games “enhance attentional and executive control. By facilitating the identification of task relevant information and the suppression of irrelevant, potentially distracting sources of information, improvements in attentional control could enable individuals to more swiftly adapt to new environments or to more quickly learn new skills.”

Alongside this week’s discussions of issues with algorithms, such as data being used as a metric rather than as a tool to inform decisions, privacy issues regarding training data, and the creation of epistemic bubbles, positive feedback loops and self-fulfilling prophecies were two themes I noted throughout. In Vogt’s The Crime Machine podcast episodes, CompStat was shown to perpetuate the summonses issued in a particular area: crime is reported -> police log the crime -> the algorithm now shows the area having more crime and sends more police there -> police now need to continue issuing summonses to meet their quota. O’Neil, in the 2016 Talks at Google video, provides another example with recidivism algorithms: a biased judge makes a ruling -> the ruling is used to train algorithms -> the algorithms inform other judges’ rulings. My personal experience with Detain/Release supports O’Neil’s claims while highlighting how factors such as selective attention and thin-slicing help reinforce this positive feedback loop. By distilling all the information into colour-coded words about someone’s supposed likelihood of running away, recommitting a crime, or committing violence, the key pieces of data informing decisions are no longer obtained through observation through a lens shaped by personal experience, but obscured through a lens provided by an algorithm.

References

Task 10: Attention Economy

The most memorable experience I had with a design trying to grab my attention wasn’t on a website, but in a video game. In one section of the PC game Pony Island (the game is nothing like it sounds) on Steam, the player is supposed to be paying attention to the center of the screen, yet at a crucial moment the game shows a pop-up in the corner, along with a sound notification mimicking the Steam user interface, to trick the player into thinking that one of their Steam friends is messaging them. I found myself distracted by this fake message from a friend and missed the information at the center of the screen. A quick Google search led to a 2016 video from a streamer, DrDroo, who experienced the same thing I did: https://www.youtube.com/watch?v=2J55kfbR0bk

A combination of sound notifications and visual indicators, associated with an instant messaging application, is the most distracting to me. Pony Island mimicked Steam friend messages, and I find myself distracted whenever I receive a WhatsApp message (a sound plus the web tab showing a number to indicate new messages) or a Discord message (a sound plus a number appearing next to the Discord icon in the taskbar). Below are two screenshots that demonstrate the changes on my screen whenever I receive messages.

In comparison, User Inyerface was less about things that grabbed my attention and more about deceptive and/or inefficient design choices that affected how long it took me to complete the website. Also, having experienced the various dark design practices described in Brignull’s 2011 article, especially since the EU law changes that led most websites in the world to provide options to accept or reject cookies, I’ve been far more careful about reading websites before clicking things. I found that I managed to evade most of User Inyerface’s deceptive dark designs, but it was the inefficient design choices that slowed me down.

For instance, after the first screen with the deceptive colourful “No” button that I ignored, unlike on most websites, one cannot jump to the next text box using the “tab” key, a shortcut that has saved me a bit of time whenever I fill out forms on the internet. In addition, unlike on most websites, clicking on a textbox does not automatically erase its descriptor text (e.g. “choose a password”), prompting me to ctrl+a and backspace to get rid of all the text before engaging with the task.

On the next page, I quickly unselected a few of the boxes with interests until I was met with the “hurry up time is ticking!” pop-up, which took me a while to deal with. Knowing what the arrows icon (maximize) and the “lock” represent, I tried to ignore the pop-up and return to the task (I couldn’t), then to drag it out of the way (I couldn’t), until I got desperate and tried the two red herring buttons hoping they’d lead to more options (they didn’t). Eventually, I noticed the “Close” written with a copyright symbol for the C and clicked it. Interestingly, while trying to get rid of the pop-up, I noticed the “unselect all” option (which, unusually, is itself another checkbox) and used it; had the pop-up not appeared, I probably would have unselected all the interest checkboxes one by one before noticing it.

The personal details page had similar text input issues to those I described earlier, with additional complications such as using flags for the country (luckily Canada is fairly early in the alphabet, so I didn’t need to scroll much), or the months of the year being in alphabetical rather than chronological order. I chuckled at the age slider that could go up to 200, but I luckily managed to slide to the right age without having to slowly click around to fine-tune it. The colour of the gender selection did get me: I’m accustomed to thinking that the button matching the background colour is the unselected option while the differently coloured button is the selected one, but this was a quick fix.

The human verification is what took the most time for me. I assumed the verification was being pedantic, so “panes of glass” wouldn’t qualify as “glasses,” nor would “checkmates” qualify as “checks,” and as a Canadian I don’t consider “checks” a correct spelling of cheques. Likewise, bowties are not the same thing as a bow, and several images showed more than one person bowing rather than “a bow.” During the selection process, I also noticed that the images correspond to the checkbox above each image, rather than the typical convention of below, so as I was carefully selecting images, I sometimes realized I had made the wrong selection and had to scroll around and correct myself. The verification must have taken me more than five minutes and was the most frustrating part before I did a “hail mary” and clicked all the boxes.

User Inyerface was a frustrating yet fun exercise that highlights some of the ways websites use dark patterns (the main one that got me was the gender selection colours indicating which choice was made), but otherwise it featured inefficient/frustrating rather than purposefully deceptive website design. In terms of designs that actually manage to rip my attention away from a current task, a combination of audio/visual notifications like those from instant messaging applications was far more effective. It’s because of this that I’ve started to keep my phone permanently on mute, and why, if I’m busy working on a task, I put myself on “do not disturb” mode in all the instant messaging applications I use.

References

Bagaar. (2019). User Inyerface. [web game].

Brignull, H. (2011). Dark patterns: Deception vs. honesty in UI design. A List Apart, 338.

DrDroo. (2016, February 3). (PONY ISLAND SPOILER)I get a message on Steam [Video]. YouTube. https://www.youtube.com/watch?v=2J55kfbR0bk

Task 9: Network Assignment Using Golden Record Curation Quiz Data

Looking at the Palladio visualization of our Golden Record Curation quiz data, I was first reminded of a similar visualization in another MET course, ETEC 543: Understanding Data Analytics. In it, we were given access to Threadz, “a learning analytics tool that allows you to visualize and better quantify the student discussions happening in Canvas discussion boards” (University of British Columbia Learning Technology Hub 2023).

Threadz works better than the visualization of the Golden Record quiz data. Threadz’s nodes are merely participants, rather than the two kinds of nodes in the Golden Record visualization (participants and tracks), and connections in Threadz visually demonstrate who’s responding to whom, giving a real sense of tangible networks in both meanings of the word (edges between nodes, as well as connections between classmates). The Golden Record visualization, on the other hand, is visually overwhelming due to the degree of connectivity each node may have. Each participant node has edges to the ten track nodes they selected, and each track node could have anywhere from zero to 23 edges, depending on how many participants selected it (though from later analysis, the nodes with the highest degree of connectivity were Johnny B. Goode and Melancholy Blues, each with 16 connections, and each track was picked at least once).

Because of this, and due to my familiarity with Excel, I decided to represent the data in another way to better analyze it. Unknown to me at the time (I looked at and analyzed the data before watching the videos for this module), I was building an adjacency matrix, though I used the word “Yes” instead of “1.”

While building this adjacency matrix, I relied on Google searches to find Excel formulas that would automate the process instead of doing it manually. While my first search did not return helpful results, my refined search of “search within a cell” had the top result providing the information I needed. This second search also supports the point made in Code.org’s 2017 video: rather than interpreting “cell” as, for example, a biological cell containing organelles such as mitochondria, the results, perhaps due to my previous search, all provided information relating to Excel cells.

With the adjacency matrix completed, I first analyzed it by using the COUNTIF formula in Excel to see how many times each track had been selected, and Excel’s built-in sort function to place the tracks in order (this version of the adjacency matrix is not shown, as I did further analysis afterwards that rearranged the tracks). For tracks with low selection counts, I could have gleaned the same amount of information from the Palladio visualization as from the adjacency matrix: for example, Track 22 was selected by only one participant, while Track 27 was selected by two. Yet the adjacency matrix was far better at giving the numerical degree of connectivity of the more popular track nodes, whereas I would have had to manually count the degree of connectivity for each node in the Palladio visualization.

Comparing the facet grouping to an adjacency matrix also yielded interesting results. I was placed in group two with Stephanie, Jonathan, Carol, and Carlo (highlighted light green on my adjacency matrix). Looking at the Palladio visualization for this group first, I noticed that there was only one piece that all of us selected, Track 6: El Cascabel, and there were several tracks that most of us selected, such as Track 11: The Queen of the Night Aria (three of us selected this one) and Track 23: Wedding Song (four of us selected this one).

Curious as to how these groupings were made, I rearranged the tracks to show only the ones I had selected and, using the aforementioned “search within a cell” formula, quickly built another column to count the number of tracks I had in common with each participant. Of the 22 participants (excluding myself) and 27 tracks, I had between two and five tracks in common with each participant, and the following people had five tracks in common with me: Stephen, Stephanie, Lachelle, Kristjana, and Jonathan. I immediately noticed that only two of these (Stephanie and Jonathan) were members of my group created by Palladio. If I had wanted to spend more time on this, I could have done the same for each participant and built a new table of how many tracks every participant has in common with every other, to see if the algorithm grouped us on that basis.
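For readers who prefer code to spreadsheets, here is a minimal pandas sketch of the same kind of analysis; the matrix below is a tiny made-up stand-in, not our actual class data:

```python
# Minimal sketch: an adjacency matrix of participants x tracks,
# per-track selection counts (Excel's COUNTIF step), and pairwise
# shared-track counts. The data is invented for illustration.
import pandas as pd

# Rows: participants; columns: tracks; 1 = selected (my "Yes" cells).
matrix = pd.DataFrame(
    {
        "El Cascabel": [1, 1, 1, 1],
        "Wedding Song": [1, 1, 1, 0],
        "Johnny B Goode": [1, 0, 1, 1],
    },
    index=["Matt", "Stephanie", "Jonathan", "Carol"],
)

# How many times each track was selected (COUNTIF per column).
print(matrix.sum().sort_values(ascending=False))

# Tracks in common between every pair of participants: the matrix
# times its transpose counts shared 1s (the diagonal is each
# participant's own total).
shared = matrix.dot(matrix.T)
print(shared.loc["Matt"])
```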

Yet I went with another approach to dig deeper without building a new table: looking at the specific tracks we had in common. This analysis was far easier with the adjacency matrix due to the ability to rearrange and/or sort nodes, while it would have been quite difficult on the Palladio visualization, trying to follow specific edges among so many of them. I moved the five of us to the top of my adjacency matrix, and it turns out that the only track all five of us selected was El Cascabel. Yet what about the other seven participants who also selected El Cascabel? Why weren’t they in our group? The only track selected by four of us was Wedding Song, but there were also others, such as Stephen, Lachelle, and Kristjana, who selected El Cascabel and Wedding Song but weren’t part of our group. Perhaps it was our non-selections? The only three tracks not selected by any of the five of us were Brandenburg Concerto 2, Sacrificial Dance, and Flowing Streams. Stephen and Lachelle both selected Flowing Streams, while Kristjana selected Sacrificial Dance. Was that the requirement the grouping algorithm set for our group: selecting El Cascabel without selecting Brandenburg Concerto 2, Sacrificial Dance, or Flowing Streams? This is only a hypothesis, and I lack the time as well as the additional data to determine whether this was how the algorithm grouped us.

That said, if this is indeed how the algorithm grouped us, then it is quite an arbitrary grouping system that highlights a key point about machine learning and algorithms I’ve encountered in other MET courses as well as in my personal life: algorithms are programmed only to link data and find correlations, without much consideration of the reasons behind the correlations. This lack of consideration of causation is demonstrated in these algorithmic groupings: I selected my tracks with the rationale of wanting to demonstrate human vocal cord capabilities to any potential alien species that discovers the Golden Record. Other members of my group focused on diversity of some sort, whether geographical, cultural, and/or of instruments used. Their primary reasons were quite different from mine, but the algorithm placed me in the same group based solely on the results rather than the rationale.

Also, my various analyses and hypotheses about how we were grouped highlight another key point about algorithms: while programmers designed algorithms and machine learning methods to find correlations in data, the processes that determine the output are starting to become incomprehensible even to the programmers themselves. This is outlined by Rudin & Radin (2019), who highlight numerous negative consequences of the “black box model” of machine learning algorithms, such as not knowing about deficiencies in the data that trained the algorithm (the reasons behind our track selections), or people incorrectly hypothesizing about the processes the algorithm used to output the groupings (my hypotheses may very well be rejected with more data/analysis). In addition, relying on machine groupings without understanding the reasons behind them may lead to negative consequences such as perpetuating existing biases in the training data, or leaning on grouping criteria that turn out to be inconsequential, such as whether participants selected El Cascabel without selecting Brandenburg Concerto 2, Sacrificial Dance, or Flowing Streams.

References

Code.org. (2017, June 13). The Internet: How search works. [Video]. YouTube.

Rudin, C. & Radin, J. (2019). Why are we using black box models in AI when we don’t need to? A lesson from an explainable AI competition. Harvard Data Science Review, 1(2). https://doi.org/10.1162/99608f92.5a8a3a3d

University of British Columbia Learning Technology Hub. (2023, October). Threadz instructor guide. Learning Technology Hub. https://lthub.ubc.ca/guides/threadz-instructor-guide/

Task 8: Golden Record Curation Assignment

A YouTube playlist (ordered in a way that I felt best connects them) of the ten pieces I selected from the Voyager Golden Record can be found here: https://www.youtube.com/playlist?list=PLwcY6qJPmEMFq9yH1e-aPFd-cDwb1qal5

  1. Wedding Song – Peru
  2. Kinds of Flowers
  3. Jaat Kahan Ho – India – Surshri
  4. Morning Star and Devil Bird – Australia
  5. Izlel je Delyo Hagdutin – Bulgaria
  6. Pygmy Girls Initiation Song Zaire
  7. String Quartet No. 13 in B Flat, Opus 130, Cavatina
  8. Mozart – Queen of the Night – Eda Moser
  9. Johnny B Goode – Chuck Berry
  10. El Cascabel – Lorenzo Barcelata & the Mari

My approach began with thinking about the purpose of the Voyager (and the Golden Record): giving any alien species that comes across the Voyager and its contents an introduction to Earth and its inhabitants. For this purpose I selected mostly pieces that have vocals, so that these aliens would have a better understanding of the capabilities of human physiology, though four vocal pieces, Melancholy Blues, Dark Was the Night, Navajo Night Chant, and Tchakrulo, were cut from my list for various reasons, such as overlapping too much with other pieces (rock music, represented by Johnny B Goode, evolved from the blues), my not personally enjoying the piece (Dark Was the Night), and being a song about war, which may send the wrong message (Tchakrulo). The Navajo Night Chant was the most difficult for me to justify cutting, but in the end I decided to replace it with String Quartet No. 13 as the only representative of purely instrumental music. The latter was selected out of personal enjoyment (this was the first time I had heard it), and it beat out the other pieces of Western classical music, including the most iconic piece of music on the Voyager Golden Record: Beethoven’s Fifth (by the same composer as String Quartet No. 13), which I understood to have first been written in dedication to Napoleon, a dedication later rescinded due to Napoleon’s tyranny, again making it not the ideal piece to use in a message of peace for aliens. My personal knowledge of Beethoven’s Fifth resulting in it not being selected (while I suspect it’ll be selected in many others’ lists), as well as the fact that I focused on vocal pieces, illustrates Smith Rumsey’s points in both the 1999 article and the YouTube video about how everyone, including scholars, makes selections about what should be preserved based on their upbringing, education, background information, and thought processes.

References

Brown University. (2017, July 11). Abby Smith Rumsey: “Digital memory: What can we afford to lose?” [Video]. YouTube.

Smith Rumsey, A. (1999, February). Why digitize? Council on Library and Information Resources.

Task 7: Mode-bending

When I first looked at the task description, I immediately thought about various ways to change the semiotic mode of the first task: stringing together videos of me using the various items, an audio-only option, a Twine or some sort of webpage with hyperlinks to explore each of the different items so that the user can decide the order, and so on.

Yet, as I further reflected on the New London Group’s (1996) discussion of multiliteracies, especially around design, meaning-making, and “the game (purpose),” I revisited the first task to interpret its main purpose. Brown’s BAG Project “explores the duality between the way people characterize themselves in public and the private contents of their handbags” (Brown n.d.). I decided that I needed a semiotic mode that shows (not exclusively visually, as per the requirements) both my public image and the “story of the objects themselves” (Brown n.d.); previously considered options such as audio-only wouldn’t have been as effective in portraying a public image versus the private contents of a bag. I eventually opted for a video format of me talking to my audience through a camera, with one part of the video portraying my public self and the other telling a story about the private contents of my bag. Starting from this purpose of contrasting public and private then made the choices about other aspects of the redesign process, such as the genre, the discourse, and the grammars, obvious.

For the public image section, I made various design choices to support the main purpose of portraying a public self. I used a neutral background similar to those used by Brown (n.d.), dressed in my work clothes, kept an upright posture, and adjusted the lighting in the room. In addition, though in other video assignments I would typically work from a full pre-written script, I opted against it because my public image is not one that carefully crafts every utterance; instead I went with some pre-written notes, reflecting my tendency as a teacher to work from a lesson plan while remaining flexible enough to explore students’ questions and ideas in group discussions.

I made completely different design choices for the private section. I placed the camera on the table where I spend most of my time, with my storage closet sometimes visible behind me, laid back in my chair, dressed how I typically would at home, and did not bother with lighting. My talk was also a script-free story (to better reflect my tendencies in private) about an instance where I used several of the objects from my original task. As a result of these choices, I was far more relaxed in this section, and one thing that arose from this increased relaxation was greater use of hand gestures as I talked.

After creating these two sections, I noticed that various modes of meaning and their elements discussed by the New London Group (1996) were apparent. The aforementioned body posture and hand movements are part of gestural design, while background choices and lighting are elements of spatial and visual design. For linguistic design, elements such as delivery, modality, and transitivity were visible in the different ways I spoke: during the public section I paused more often to think of the next appropriate word to say, whereas in the private section I spoke on auto-pilot and then corrected previous word choices. These unintended differences arose from the intended design choice of having pre-written notes (but not a full script) for the public section and nothing at all for the private section.

My biggest takeaway from this task, aided by my interpretation of the New London Group’s article, is that starting by thinking about the purpose, in other words, backwards design, may be the best approach to the redesign process, because it then becomes easier to make design choices that support that purpose. This could apply to my own pedagogical practice as I redesign lesson plans and activities, putting more thought into the purpose and, subsequently, the various design elements. Prior to this task, I thought of backwards design as just another professional development buzzword; it wasn’t until reflecting on the New London Group that I began to appreciate the myriad design choices that can go into the backwards design process.

References

Brown, E. (n.d.). BAG. Ellie Brown Photography and Artworks. https://www.elliebrown.com/#/bag/

The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60-92. https://doi.org/10.17763/haer.66.1.17370n67v22j160u

 

Task 6: An emoji story (Optional task)

Title:
⚔️⭐????

Synopsis:
????‍♂️????????
????????????‍♂️
????????????
????‍♂️➕????????☮️
????????????‍♀️
????‍♀️????????⁉️
????‍♀️????????????‍♂️
????‍♀️????????????
????‍♂️????????????
????‍♀️➕????????????????

I’m currently watching numerous series, but when I considered how many of them would be widely known, I settled on this particular one. For the title, I relied on translating words or parts of words into single symbols, while for the synopsis I used a combination of words and ideas.

To connect this task with Kress (2005): perhaps due to the translational nature of this assignment, I found that I followed the temporal conventions of written English text. Events are ordered in rows going left to right, and later events are placed below earlier ones. In this task, the emojis function more as “text” than as an image, where one could use the affordances of space to convey different messages.

As I went through this task, I found myself contrasting alphabets as discussed in Gnanadesikan (2011) with logograms as discussed in Bolter (2001). Unlike alphabet-based languages, logographs such as emojis (and, as I’ll discuss later, Chinese) lack a base unit. While English words are built from the letters of the Roman alphabet, which can be arranged into numerous combinations whose order hints at the sound of the word, emojis cannot be broken down further. The availability of a limited number of base units (too many, and a keyboard becomes unwieldy) contributes to the speed at which one sends information: with English, by memorizing the locations of all the base units on a keyboard, I can quickly combine them into words and transmit those to others. With emojis, which lack base units and are numerous, I have to constantly scroll through all the listed emojis to select the correct one, as seen below in my screenshot of the emoji library of Signal, one of the instant messaging apps I use. As Gnanadesikan (2011) puts it, “to make a truly different symbol for each word of a language would result in far too many symbols” (p. 6).
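To make that contrast concrete, here is a minimal Python sketch of why a small set of reusable base units scales better than one symbol per meaning. The emoji count and the library layout are made-up assumptions for illustration, not real figures:

```python
# Toy comparison: a small reusable alphabet vs. one unique symbol per idea.
# The emoji library size below is an assumption, not an official count.

ALPHABET_SIZE = 26          # base units recombine into any English word
EMOJI_LIBRARY_SIZE = 3600   # assumed scale of a modern emoji set

def keystrokes_for_word(word: str) -> int:
    # With an alphabet, typing cost grows with word length,
    # not with the size of the vocabulary.
    return len(word)

def scans_to_find_emoji(target: str, library: list) -> int:
    # Without base units, selection cost grows with the symbol set:
    # in the worst case, the whole library is scanned for one emoji.
    return library.index(target) + 1

library = [f"emoji_{i}" for i in range(EMOJI_LIBRARY_SIZE)]
print(keystrokes_for_word("mountain"))             # 8
print(scans_to_find_emoji("emoji_3599", library))  # 3600
```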

I found a similarity between this process and the pinyin Chinese input method, where one types the sound using Roman letters, looks at the characters that come up, and selects the correct one. For example, typing “yi” produces the following list of homonyms in the Microsoft Windows Traditional Chinese input method:

Like emojis, a list is given, each entry with a different meaning, and I need to select the correct one. Similar to the points brought up in Gnanadesikan (2011), I believe the root cause of this similarity is the lack of an alphabet combined with a wide variety of available symbols, resulting in an extra step of selecting from a list instead of simply typing (although the Chinese input method speeds this up slightly, as the nine most commonly used characters can be selected with a press of the corresponding number key).
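A minimal sketch of that selection step might look like the following; the candidate table is an illustrative subset I typed up, not the actual Microsoft IME data:

```python
# Toy pinyin-style input: type a sound, receive numbered homonym
# candidates, and pick one with a single number key (1-9).

CANDIDATES = {
    # A few of the many characters pronounced "yi" (illustrative subset)
    "yi": ["一", "以", "已", "意", "易", "衣", "醫", "義", "藝"],
}

def suggest(sound: str) -> list:
    # Show at most nine candidates, one per number key, like the IME bar.
    return CANDIDATES.get(sound, [])[:9]

def pick(sound: str, number_key: int) -> str:
    # Selecting candidate n costs one keypress, slightly speeding up
    # the extra "choose from a list" step described above.
    return suggest(sound)[number_key - 1]

print(suggest("yi"))  # the numbered candidate row
print(pick("yi", 1))  # 一
```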

One other connection I made between emojis and Chinese, thanks to Bolter (2001), is that both can be considered logograms. In Chinese, the characters for one, two, and three are essentially numbers of strokes, somewhat similar to Roman numerals (一,二,三). Field is 田, showing sectioned-off rice paddies, while the character for mountain, 山, showcases peaks. Door/gate, 門, looks like old saloon doors, and river, 川, resembles flowing streams.

If this is the case, then one implication is that people typing in Chinese would use emojis less, since there is no need for an image to replace a character: the character itself takes up as much space as an image and would take around the same amount of time to input.

Indeed, I find that in my conversations with family members and friends, “stickers,” rather than emojis, are used instead. Stickers are popular stock images that one can send in, for example, Line, the top instant messaging app in Taiwan. Unlike emojis, which depict one object, stickers typically depict a situation. For example, as shown in the screenshot below, one of the default stickers in Line is a bear holding a mug of beer and emitting a music note, with a burger and some fries on a plate in front of it. Translating this sticker would take approximately four emojis. As per arguments made in Kress (2005), stickers are images that can utilize space, where all elements are simultaneously present, as opposed to the emojis in this exercise, which are more “text” with a logic to their order.

This task also led to a lot of comparison between English and logographs such as emojis and Chinese. As mentioned, English is made up of base units and emojis are not. Although Chinese has visual base units called radicals (for example, 忍, to endure, is a combination of 刃, blade, and 心, heart), they are not widely used as an input method, and I suspect the main reason is visual (text) versus aural (speech). The radical-based Chinese input method is unpopular because radicals are based strictly on written text, while the popular Chinese input methods, pinyin in China and bopomofo in Taiwan, are based on sound. Pinyin uses the Roman alphabet for the sounds of Chinese characters, while Taiwan’s bopomofo uses its own set of symbols for the sound of a character.
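The difference between the two lookup strategies can be sketched as two tables, one keyed by shape and one by sound; the entries below are illustrative examples only, not any real input method’s data:

```python
# Toy contrast between shape-based (radical) and sound-based (pinyin)
# lookup. Entries are illustrative examples only.

BY_SHAPE = {("刃", "心"): "忍"}          # to endure = blade + heart
BY_SOUND = {"ren": ["忍", "人", "任"]}   # homonyms of the spoken syllable

def type_by_shape(parts: tuple) -> str:
    # Requires recalling the written components of the character.
    return BY_SHAPE[parts]

def type_by_sound(syllable: str) -> list:
    # Requires only knowing how the word is said, which is why
    # sound-based methods like pinyin and bopomofo dominate.
    return BY_SOUND[syllable]

print(type_by_shape(("刃", "心")))  # 忍
print(type_by_sound("ren"))          # ['忍', '人', '任']
```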

References

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print (2nd ed.). Lawrence Erlbaum Associates.

Gnanadesikan, A. E. (2011). The first IT revolution. In The writing revolution: Cuneiform to the Internet (pp. 1-12). John Wiley & Sons.

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22.

 

Task 5: Twine task

Huang Task 5 – Twine

Having used Twine before throughout my ETEC journey, as I went through the module I found an uncanny resemblance between the Memex demonstrated in Flowers (2016) and Twine. Like the Memex, Twine allows users to create and modify trails, the pathways that Twine stories can take. In both the Memex and Twine, trails need not be linear, allowing for diverging paths as well as options to go back to the previous page or to the start.
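As a rough illustration of that resemblance, a Twine story can be modelled as a graph of passages and links. This is a Python sketch with invented passage names, not actual Twine markup:

```python
# Toy model of a Twine story as a passage graph: trails can diverge
# and loop back to the start, much like Memex trails.

story = {
    "Start":  {"text": "What's in my bag?", "links": ["Laptop", "Keys"]},
    "Laptop": {"text": "My work machine.",  "links": ["Keys", "Start"]},
    "Keys":   {"text": "House and office.", "links": ["Start"]},
}

def walk(trail: list) -> None:
    # Check that each step a reader takes follows an authored link,
    # then print the (possibly non-linear) path they traced.
    for here, there in zip(trail, trail[1:]):
        assert there in story[here]["links"], f"no link {here} -> {there}"
        print(f"{here} -> {there}")

walk(["Start", "Laptop", "Keys", "Start"])  # diverge, then loop back home
```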

For this task, I wanted to do more than just make a story with various trails, so I included two sections. One was prompted by a point in Wesch (2007) about how technology has allowed us to move beyond “shelves.” While this is true, as demonstrated by the various hypertexts in my Twine not being categorized, it could also be problematic. Without shelves, rather than browsing topics to find content one is interested in, algorithms such as YouTube’s recommend content based on browsing history and physical location, which can lead users down rabbit holes. I have attached a previous Twine I built in another course, ETEC 511, that attempts to demonstrate how such content works: after the initial selection of categories, my algorithm prioritizes various content on the “front page.”
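A toy version of that kind of prioritization (assumed scoring; neither YouTube’s real algorithm nor my Twine’s exact logic) can be written in a few lines:

```python
# Toy "shelf-less" front page: rank items by how often their topic
# appears in the viewer's history, so heavy browsing narrows the feed.

from collections import Counter

history = ["gaming", "gaming", "music", "gaming"]  # past clicks by topic

catalog = [
    ("speedrun video", "gaming"),
    ("album review", "music"),
    ("news recap", "news"),
]

def front_page(catalog: list, history: list) -> list:
    weights = Counter(history)
    # Items whose topic dominates the history float to the top;
    # topics never clicked sink, which is how rabbit holes form.
    return sorted(catalog, key=lambda item: weights[item[1]], reverse=True)

for title, topic in front_page(catalog, history):
    print(title, "->", topic)
```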

As for the section of my Twine built for this task, I felt that a reading-review quiz as a medium allowed me to demonstrate the various points made in Nelson (1999) and Bolter (2001). While not an exact replica of Xanadu, linking out to the videos, podcast, and readings used in this module whenever the user answers a question incorrectly follows the spirit of Xanadu and the dissemination of information and knowledge through hypertext discussed in Bolter (2001).

References

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print. Lawrence Erlbaum Associates.

Flowers, T. (2016, June 19). Memex #001 demo [Video]. YouTube.

Nelson, T. (1999). Xanalogical structure, needed now more than ever: Parallel documents, deep links to content, deep versioning and deep re-use. ACM Computing Surveys, 31(4).

Wesch, M. (2007, October 12). Information r/evolution [Video]. YouTube.