Final Project: Describing Communications Technology

For the final project, I chose to examine the communications technology of voice software, pertaining to both text-to-speech and speech-to-text. Please enjoy the podcast below as my final submission for ETEC 540.

Listen on Spotify

Listen on Anchor.fm

 

References

Eddy, S. R. (2004). What is a hidden Markov model? Nature Biotechnology, 22(10), 1315–1316.

Edwards, K. (2008). Examining the impact of phonics intervention on secondary students’ reading improvement. Educational Action Research, 16, 545–555. doi:10.1080/09650790802445726

Fisher, T. (2019, October 9). How AI and voice technology will transform healthcare. TED. https://www.youtube.com/watch?v=GU8-2bvxCKg&t=4s

Fine, S., Singer, Y., & Tishby, N. (1998). The hierarchical hidden Markov model: Analysis and applications. Machine Learning, 32(1), 41–62.

Gruner, S., Ostberg, P., & Hedenius, M. (2018). The compensatory effect of text-to-speech technology on reading comprehension and reading rate in Swedish schoolchildren with reading disability: The moderating effect of inattention and hyperactivity symptoms differs by grade groups. Journal of Special Education, 33(2), 98–110. https://doi.org/10.1177/0162643417742898

Huang, Y. M., Liu, C. J., Shadiev, R., Shen, M. H., & Hwang, W. Y. (2014). Investigating an application of speech-to-text recognition: A study on visual attention and learning behaviour. Journal of Computer Assisted Learning, 31(1), 529–545. https://doi.org/10.1111/jcal.12093

Joshi, S., Kumari, A., Pai, P., Sangaonkar, S., & D’Souza, M. (2017). Voice recognition system. Journal for Research, 3(1), 6–9.

Lowerre, B. T. (1976). The HARPY speech recognition system (Publication No. 15213) [Doctoral dissertation, Carnegie-Mellon University]. https://stacks.stanford.edu/file/druid:rq916rn6924/rq916rn6924.pdf

Pieraccini, R. (2012). From AUDREY to Siri: Is speech recognition a solved problem? [PowerPoint slides]. International Computer Science Institute. https://www.icsi.berkeley.edu/pubs/speech/audreytosiri12.pdf

Robertson, B. (2016). How does speech-recognition software work? Science and Children, 54(3), 64–68.

Schalkwyk, J., Beeferman, D., Beaufays, F., Byrne, B., Chelba, C., Cohen, M., Garret, M., & Strope, B. (2010). “Your word is my command”: Google search by voice: A case study. In Advances in speech recognition (pp. 61–90).

Shaywitz, S. E., Shaywitz, B. A., Fletcher, J. M., & Escobar, M. D. (1990). Prevalence of reading disability in boys and girls: Results of the Connecticut longitudinal study. Journal of the American Medical Association, 264, 998–1002. https://doi.org/10.1001/jama.1990.03450080084036

Stinson, M. S., Elliot, L. B., Kelly, R. R., & Liu, Y. (2009). Deaf and hard-of-hearing students’ memory of lectures with speech-to-text and interpreting/note-taking services. The Journal of Special Education, 43(1), 52–64. https://doi.org/10.1177/0022466907313453

Strauss, S., & Xiang, X. (2006). The writing conference as a locus of emergent agency. Written Communication, 23(4), 355–396.

Yaco, S. (2007). The potential for use of voice recognition software in appraisal and transcription of oral history tapes. ARSC Journal, 38(2), 214–225.

Young, C., & Stover, K. (2013). “Look what I did”: Student conferences with text-to-speech software. The Reading Teacher, 67(4), 269–272. https://doi.org/10.1002/trtr.1196

Speculative Futures

A Dystopian Speculative Future

Minanfotos. (2021). City landscape color [Photograph]. Pixabay. https://pixabay.com/photos/city-landscape-color-nature-3542248/

In the not-so-distant future, the world is a very different looking place. There has always existed a gap between those who have the technology and those who do not, but over time that gap has become a dark chasm.

It all started in the early 2030s, when paper products were deemed a major environmental hazard. A halt was put on all forestry and processing of paper products, from printer paper and books to lumber and furniture. A culture of reduce, reuse, and recycle became the norm. It seemed a sound environmental strategy, but those who relied on the printed word for education and school support rapidly fell behind as their textbooks became outdated and the supplies they needed for learning dwindled. People became concerned for the wellbeing and mental development of their children, while areas that relied on primary industries for their economy began to degrade and break down.

When this started to happen, those who could afford to began to migrate in search of a better life. Families streamed from great distances toward the places where technology reigned supreme and where, after the paper ban, life not only continued as normal but thrived. Where cities such as Tokyo, Silicon Valley, Vancouver, Mumbai, and Paris once stood, there emerged a new type of city: a city-state governed by technology. Education in these states took on a drastically different form; all learning centered around the maintenance and advancement of technology. Texts existed purely in digital form, while students took lessons on the language of code alongside learning how to read. Authorship of new writings became less common as predictive algorithms voraciously consumed the contents of the pre-existing libraries of knowledge and became the primary means of information production. Humankind’s relationship with text became one-sided, focused only on consuming.

With the influx of residents and the resulting economic upturn, these areas began to grow. They became shining metropolises of advancement and technology. Needing space and infrastructure, local governments overturned the environmental laws that had decimated so many areas around the world. The harvesting of the natural world was harsh. It allowed these cities to grow at alarming rates while devastating the resources around them. The cities grew into gleaming jewels of human progress, surrounded by the murky stain of human consumption. While life prospered in these technology-driven areas, abandonment by residents meant the places they left behind became desolate and decrepit. The change in law also had an unintended side effect: it angered those who still resided outside of the cities.

This was the beginning of the conflicts. People living outside the walls of the tech-cities began to raid them in search of resources to help them survive. Those who lived within the walls began to demand their governments protect them. Technology leaders developed algorithms to find those best suited to guard the cities. The algorithms pored through all available data, from Facebook posts to surveillance footage to medical information. Those selected were elevated to the position of state military police, tasked with protecting the secrets of the technology. That was still not enough for the citizens; protests filled the streets each night and anger filled the online sphere. Politicians were goaded into using the same algorithms to analyze their own citizens, identifying potential troublemakers and defectors and taking them into custody before they could cause problems.

Outrage grew both inside and outside the cities. Conflicts grew in scale and violence until finally those who governed the states felt there was no alternative: war. And war always ends the same way.

 

A Utopian Speculative Future

Lee, C. (2021). The spruce [Photograph]. The Spruce. https://www.thespruce.com/home-office-organization-ideas-4586995

Adam awoke gently to the sound of his Thursday morning alarm. The pressure sensor in his bed had been monitoring his sleep cycle, choosing to wake him softly during his shallowest sleep within his programmed acceptable wake window. It made for a pleasant start to the day. Adam stretched his arms, yawned, and sauntered to the bathroom to complete his daily routine.

After exiting the bathroom, Adam walked to his closet, where the touch screen on the wall warmly chimed to life. “Good morning, Adam,” the soothing voice came from unseen speakers. “I have taken the liberty of selecting three possible outfits based on today’s weather, your scheduled activities, and your previous Thursday outfit selections.” A faint mechanical whirring came from inside the closet, and when Adam slid open the doors, his clothes were hanging neatly. He tapped his preferred option on the touch screen and the closet rod slid forward for him to grab his clothes.

After getting dressed, Adam came downstairs and went to his fridge. The possibilities of what he could make flowed across the screen on its front, drawn from the internet and the history of his last shopping trip. A smoothie seemed appealing, so Adam opened the doors, gathered the ingredients, and set about making breakfast. The digital assistant chimed again from the depths of the house. “Adam, according to traffic patterns, you should depart for the office in the next 10 minutes.” Grabbing his keys, Adam left the house, the security system locking itself as the hidden cameras watched the owner depart the premises.

The biometric scanner built into the front door of the office beeped quietly as Adam grasped the handle and admitted him to the building. Adam took off his coat, made his way to his corner office, and took his seat. The coffee maker whirred to life as he sat down, and Adam gestured his passcode into the camera above the computer, the motion completing the security protocols. Adam ran through his day in his head; he had some notes to complete before starting. He typed the first word on his keyboard, allowing the predictive algorithm to complete his thought. He waved his hand mindlessly, using motion gestures to approve or override the suggestions made by his text software. An alarm chirped from his computer, alerting him that his next task was about to start.

Adam started up the camera app, checked his hair, and launched the broadcast software. He smiled widely as he connected to his classroom and waved to the camera. Two dozen black rectangles greeted him, each emblazoned with the initials of his pupils. Too cool, as usual. Adam typed into the computer, allowing the computer software to enunciate his words for him. He had long given up on trying to talk with his students, each of them preferring to complete their group tasks with the AI software prompting the conversation or using text as their primary method of communication. Remote learning was supposed to be temporary, but with the option to travel anywhere while learning, many families had permanently made the change.

Adam clicked his mouse, distributing the copies of the novel that they would be working with today. He missed the days of feeling the weight of the books in his hands and seeing the well-worn pages that he lovingly repaired, but the practicality of trying to ensure students carried materials with them was not worth the effort. He assigned their reading pages, clicked off the camera, and waited patiently for any questions. His co-teacher AI filtered any basic queries, forwarding only those questions that it was not equipped to answer. That happened less and less these days.

Adam sighed, leaned back, and dragged the PDF of the novel into his lesson generation program. He let the program work, sipped his coffee, then curated the suggested lessons for his upcoming classes. The texts he approved were instantly uploaded to his file storage and classroom website for future reference. Another alarm chirped on the screen. Adam flipped over to his second classroom and repeated the same process. Technology had made his job so efficient that he could now do the work of two teachers without setting foot in a school.

He glanced out the window of his office; in the room across the hall, and in many like it, another teacher was doing the same job. Adam didn’t even know her name. Adam loved teaching and had become incredibly tech literate to keep doing it. He was well respected in the learning community and had even published some articles on AI-assisted remote teaching. But Adam missed talking to people.

Algorithms of Predictive Text

Education is not about the fact that we are going through this process of being together. I don’t think you need to be there in person to see other benefits. There are some people who don’t have the right idea of what to do. I don’t know what the name of the game is but the only thing that matters is that we don’t go backwards.

I chose to create a micro-blog on the prompt, “Education is not about…” and used the Apple iPad textual algorithm to complete it. Since the algorithm makes predictions based on the weighted strength of connections fostered through use, I think it is important to note which activities I primarily use this device for. I do very little academic writing on it, instead using it primarily for personal reading, texting friends and family, and quick online queries. These activities would definitely shape the words suggested by the algorithm.
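The usage-weighted prediction described above can be sketched as a toy bigram model: count how often each word follows another in what the user has typed, and suggest the heaviest-weighted continuations. This is a deliberate simplification — Apple’s actual engine is proprietary and far more sophisticated — and the tiny training corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows another; usage weights the links."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, word, k=3):
    """Return the k most heavily weighted continuations of `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# Invented usage history standing in for my texting/browsing habits.
corpus = ("education is not about grades "
          "education is about growth "
          "education is not about memorizing")
model = train_bigrams(corpus)
print(suggest(model, "is"))  # 'not' has the heavier link, so it comes first
```

In the same way, the suggestions my iPad surfaces reflect whichever word pairings my casual texting has reinforced most often.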

The statement generated by my predictive algorithm resembles one you might see in a textual product such as a blog. The use of a personal voice distinguishes it from the other type of writing I primarily participate in, which is academic in nature. The produced text also shows similarities with novels told from the first-person point of view, although the subject matter and writing level would be simpler than those I read as an adult.

This generated statement differs from the opinion I would normally express on the topic. While it conveys a similar ideal of education as a collaborative movement forward, I would not have specifically mentioned other people, since I believe education is primarily a unique, individual experience that can be enriched through collaboration with others. This is an example of how the predictive algorithm influenced the output of the text and shaped the expressed opinion.

Furthermore, I tend to adopt an academic or formal tone when writing, owing to my occupation as a teacher and continued involvement in formal education. Had I been given this prompt, I would not have used the first-person voice. The algorithm twice uses “I” to begin a sentence. This prediction would originate from the personal text messages I occasionally send on the device, where I use a much more casual voice, often with personal pronouns. The “voice” the algorithm created for me sounds like a blend of the two “tones” I use in daily life, but it is not quite able to replicate what I would actually respond with.

In the educational setting, I often see students using the predictive text feature built into Microsoft Word to complete writing assignments. This often makes their writing unintelligible, and the students’ comprehension of what they have produced is often compromised. As algorithms enter the educational setting, it is easy to see how they could support students but also detract from the authentic learning experiences students gain from making mistakes and maintaining a growth mindset. I would imagine that a more sophisticated algorithm would also make it easier to plagiarize, since it could use other academic sources as a basis of knowledge. We can also observe algorithms acting as academic policing software, such as Turnitin. Such software can make it easier to maintain academic integrity, but it can also reduce the role of the teacher and lower their understanding of their students’ abilities. Balancing progress with supportive use of technological advances will be a constant struggle for education moving forward.

Attention Economy

As someone who uses the internet daily in both my professional and personal life, I set out to complete the User Inyerface task thinking it would be fairly easy and quick. I quickly found that to be untrue: it took me 16:29 to navigate my way through the pages.

The most challenging aspect for me was completing the verification to confirm that I was human and finish the registration. The checkboxes and images presented were all homographs, making it impossible to distinguish which ones should be selected to complete the verification. I spent several minutes trying different combinations, such as choosing all of the drinking glasses for the “Select all glasses” task, then all glasses that assist with vision, and finally glasses found in windows. Finally, in frustration, I tried all of the checkboxes, and that was when I realized the selection boxes were actually above each image rather than underneath, as I had assumed. You also needed to scroll upwards to view all the available checkboxes, in addition to selecting all the images.

This checkbox debacle was, for me, representative of the dark pattern of bait and switch. If this had been not a verification but a selection of goods or services, it would be very easy to accidentally select something undesired, which could be profitable for the hosting company. I began this experience dismissing the webpage as simply poorly designed, but after reflecting on the Dark Patterns site, it was clear that a page can be poorly designed for the user while still serving the interests of the host.

Another dark pattern evident in the User Inyerface experience was “Privacy Zuckering,” which Dark Patterns defines as getting users to share more information than they intended. When I sign up for web pages I will not be using regularly, I often use a fake age or birthdate, mis-select my gender, or use a throwaway email. The web design used several strategies to make it incredibly challenging to misrepresent myself, such as forcing the selection of a password that shares a letter with the email, or rejecting submitted information that did not line up. To navigate further, I found myself needing to think carefully to beat the system, but I can see how people would become frustrated and use their true information to complete the transaction. Completing this reflection, I also realized that I never saw any terms and conditions, meaning the webpage could potentially be free to sell my data to others for profit.

Another dark pattern that irritated me greatly was misdirection. This was evident in the help chat box and the lock screen. What should have been simple actions, such as closing the chat box or accessing help, were intentionally made difficult with symbols that resembled one another or sat in close proximity. While User Inyerface was not trying to sell anything, it is easy to see how other web pages could use this pattern to take advantage of distracted users.

References

Brignull, H. (2021). Types of dark patterns. Dark Patterns. https://www.darkpatterns.org/types-of-dark-pattern

Golden Record Networking

Is the visualization able to capture the reasons behind the choices?

The visualization collects participants into communities based on the similarity of their track selections, which, since it is based on numerical assignment, is a quantitative data set. A limitation of this visualization is that it does not allow for consideration of the qualitative motivations that went into curating the Golden Record. When examining my own placement, I was grouped with three of my colleagues, but when reading the reflections on their webspaces, our motivations differed greatly.

For example, R. Lalani (2021) described a motivation similar to my own, trying “to have some measure of geographic and racial diversity represented” in part of the curation, so it makes logical sense, both quantitatively and qualitatively, for us to share a community. In contrast, N. Peach (2021) tried to “use ‘math’ to match the songs from the record to Voyager 1 and 2’s most important events while they were within our solar system.” We did not share nearly the same motivations, yet the quantitative count of edges and nodes groups us together.

The visualization of these communities assumes that we share similar mindsets and motivations, which is simply not true. The algorithm is not able to consider the qualitative aspect of the community relationship. This emphasized for me why companies such as Google rely so heavily on user input and interaction measurements to weight certain online nodes: it is an attempt to let the algorithm accommodate the more “human,” qualitative nature of the data.
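The grouping described above can be sketched in a few lines: the algorithm sees only an edge weight (the count of tracks two curators both picked), never the motivations behind the picks. The curator names and track numbers below are invented stand-ins for the class data.

```python
from itertools import combinations

# Hypothetical ten-track selections for three curators (track numbers only).
picks = {
    "curator_a": {1, 3, 5, 7, 9, 12, 14, 18, 20, 24},
    "curator_b": {1, 3, 5, 7, 9, 13, 15, 18, 21, 25},
    "curator_c": {2, 4, 6, 8, 10, 11, 16, 17, 22, 26},
}

def edge_weight(a, b):
    """Edge weight = how many tracks both curators selected.
    The community grouping sees only this number, never *why*."""
    return len(picks[a] & picks[b])

for a, b in combinations(sorted(picks), 2):
    print(a, b, edge_weight(a, b))
```

Curators A and B end up in one community purely because six of their picks overlap; whether they chose those tracks for diversity, for “math,” or at random is invisible to the calculation.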

Reflect on the political implications of such groupings considering what data is missing, assumed, or misinterpreted.

An implication of groupings that ignore qualitative data is that groups of people with potentially minimal overlap in ideals and motivations can be brought together. There is also null data that easily slips through the cracks. I chose to focus on the community I was grouped into. While examining the degree of connections and the centrality, I noticed there were not enough nodes for the number of curators and Golden Record tracks. For example, since no curator in my community selected Track 2 or Track 4, that data was completely disregarded as irrelevant by the algorithm.

If weight is assigned by interactions in many online algorithms, nodes that are not interacted with frequently are pushed down the hierarchy of search results. That means the more privileged populations, who have easy Internet access, see their priorities and preferences reflected in the online space, whereas those already at a disadvantage may have potentially beneficial nodes pushed from ‘view’ by the massive size of the network. This null data can serve to reproduce and perpetuate inequalities that already exist.
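A toy model makes the "null data" problem above concrete: if rank is just interaction count, a node nobody has reached never rises into view, no matter how useful it might be. The node names and counts here are invented for illustration.

```python
# Interaction-weighted ranking in miniature: clicks raise a node's weight,
# and nodes with zero interactions sink out of view entirely.
interactions = {
    "popular_resource": 120,
    "niche_resource": 45,
    "unvisited_resource": 0,   # the null data that slips through the cracks
}

# Sort nodes by interaction count, highest first.
ranked = sorted(interactions, key=interactions.get, reverse=True)

# Only nodes that have ever been interacted with remain visible.
visible = [node for node in ranked if interactions[node] > 0]
print(visible)
```

The unvisited node is not ranked last; it simply disappears — which is the self-reinforcing loop I worry about, since a node that is never shown can never accumulate the interactions that would surface it.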

 

References

Lalani, R. (2021, October 31). Task 8: Golden record curation. UBC. https://blogs.ubc.ca/rlalani540/2021/10/31/task-8-golden-record-curation/

Peach, N. (2021, October 30). Task 8: Curating the golden record. UBC. https://blogs.ubc.ca/peach524/2021/10/30/task-8-curating-the-golden-record/

Golden Record Curation

For this task, I listened to a playlist of the Golden Record music while reading the quote by Apple (1985) about how the knowledge considered legitimate by those in power becomes the taught curriculum, which can lead to power imbalance. With that idea in mind, I decided to try to curate this collection in a manner that did not reflect my personal preference, instead creating a simple collection of diverse musical pieces.

Most modern music pieces run approximately 2:30–4:00, so I decided to start narrowing down the collection based on a similar length. I selected 3:30 as the dividing line and used a random number generator to decide whether to keep the tracks longer or shorter than that time; I ended up keeping songs under 3:30. I had previously decided to look for the greatest geographic diversity, so I again used the random number generator to eliminate any tracks from the same location. I continued this process, attempting to select one track from each region of the world, using the random number generator to eliminate tracks from similar regions.
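The two-step procedure above — a length cutoff, then random elimination within each region — can be sketched as follows. The track list is a small hypothetical slice of the Golden Record (titles and regions chosen for illustration, durations approximate).

```python
import random

# Hypothetical slice of the Golden Record track list: (title, region, seconds).
tracks = [
    ("Johnny B. Goode", "North America", 158),
    ("Melancholy Blues", "North America", 185),
    ("Senegal percussion", "West Africa", 128),
    ("Tchakrulo", "Caucasus", 138),
    ("Fifth Symphony, first movement", "Europe", 443),
]

CUTOFF = 210  # the 3:30 dividing line, in seconds

# Step 1: keep only tracks under the cutoff.
short_tracks = [t for t in tracks if t[2] <= CUTOFF]

# Step 2: group by region, then randomly eliminate duplicates so that
# exactly one track survives per region.
by_region = {}
for track in short_tracks:
    by_region.setdefault(track[1], []).append(track)
playlist = [random.choice(group) for group in by_region.values()]
```

In this toy run the long European piece is filtered out at step 1, and the random choice decides which of the two North American tracks survives step 2 — mirroring how my own preference was kept out of the final cut.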

My Curated Golden Record Playlist

  1. Senegal, percussion, recorded by Charles Duvelle. 2:08
  2. “Johnny B. Goode,” written and performed by Chuck Berry. 2:38
  3. New Guinea, men’s house song, recorded by Robert MacLennan. 1:20
  4. Mozart, The Magic Flute, Queen of the Night aria, no. 14. Edda Moser, soprano. Bavarian State Opera, Munich, Wolfgang Sawallisch, conductor. 2:55
  5. Georgian S.S.R., chorus, “Tchakrulo,” collected by Radio Moscow. 2:18
  6. Peru, panpipes and drum, collected by Casa de la Cultura, Lima. 0:52
  7. “Melancholy Blues,” performed by Louis Armstrong and his Hot Seven. 3:05
  8. Azerbaijan S.S.R., bagpipes, recorded by Radio Moscow. 2:30
  9. Solomon Islands, panpipes, collected by the Solomon Islands Broadcasting Service. 1:12
  10. India, raga, “Jaat Kahan Ho,” sung by Surshri Kesar Bai Kerkar. 3:30

 

References

Apple, M. W. (1985). Teaching and. Teachers College Record, 86(3), 455–473.

 

Linking Assignment

#1: Maurice Broschart’s Twine Task - https://blogs.ubc.ca/etec540texttech/task-five/

Reflecting on this selected link, I believe that I was drawn to Maurice’s post due to my own experiences working with Google Docs and linking in the same manner that he shared. In my professional life, I often use links within documents to share multiple sources of information or contextualize learning for my students. What I never considered was the fact that this is actually creating a hypertext that my students and I interact within without a second thought. It became second nature for those individuals who were raised as digital citizens and practice digital literacy on a regular basis.

These linked documents may seem to be one continuous text, but they really demonstrate the concept of parallelism discussed in Nelson’s (1999) writing. With the ability to rapidly access information from multiple authors without necessarily changing our mindset between readings, the reader may unintentionally ignore the intention of a text, making objective comparison impossible. With the expansion of the Internet network and the complexity of node connections, it is easy to ignore the intentions of online texts in hypertext and use them to serve our own purposes. I think of writing academic papers in my undergraduate degree as an example. I would often use the references section to find related journal articles, but use only a sentence or two to reaffirm my position rather than considering the full context of the research. The hypertext network made it easy to jump from text to text without changing the perspective lens through which I critically examined each text. I can fully understand and appreciate Nelson’s (1999) view that intertext relationships need to be visually seen in order to explain, comment, or disagree in an informed manner, even if this is incredibly challenging with today’s largest hypertext, the Internet.

 

Nelson, T. (1999). Xanalogical structure, needed now more than ever: Parallel documents, deep links to content, deep versioning and deep re-use. Online.

 

#2. Johanna Bolduc’s Potato Stamp Task- https://blogs.ubc.ca/boldjo/2021/10/04/assignment-4/

The reason I selected Johanna’s post for my linking reflection was the difference in the way we chose to approach the task. For my print, I chose to cut in relief into a single potato, whereas Johanna imprinted each letter into its own respective potato. Looking at the Clement (1997) reading on the evolution of printing and the eventual mechanization that led to mechanically printed texts, I felt that I had taken a much different perspective on the manual printing process.

When approaching this task, I never considered carving the letters into multiple potatoes or sections, only that I would be doing a single word on a single stamp. I believe this assumption about the best method stems from my relationship with text, in the sense that it is simple for me to generate and print text using technology. While typing on a computer or smartphone creates the same text as the use of matrixes and pressing, there is a dissociation between the labour and the product. Additionally, word processing software and correction algorithms often treat a combination of letters as a single word rather than as individual elements, something I have also incorporated into my way of thinking about the creation of text.

My approach to this task was in contrast to Johanna’s. She took an approach that echoed the development of the Gutenberg printing press, which used imprinted type blocks to impose the text onto the paper. Clement (1997) notes that having enough type to set a full book would be incredibly expensive, so the type needed to be reused to make the process practical (p. 14). My revelation in reflecting on the post was that my “potato type” would be incredibly ineffective in a printing press due to the singular nature of my selected word, whereas Johanna’s approach would be much more practical due to the multiplicity and fluid nature of the individual letter types. We both reflected on the time-consuming and patience-requiring nature of the process, but my approach would actually require significantly more time and effort. This realization made me examine my relationship with text and how I take the production of text for granted through the use of technology.

 

Clement, R. W. (1997). Medieval and Renaissance book production. Library Faculty & Staff Publications, Paper 10. https://digitalcommons.usu.edu/lib_pubs/10

 

#3. James Martin’s Emoji Story- https://blogs.ubc.ca/etec540jamesmartin/2021/10/17/task-6-4-emoji-story/

I chose James’ entry partly because it depicted one of my favourite movies, but also because of the manner in which he used the arrangement of the emoticons to pictographically represent the scene, which fascinated me. I had always used emoticons primarily in informal communications, for short ideas or meanings or even single words; I had never considered using them in this manner. I wrote my emoji story in a manner that echoed the sequence of traditional written text, that is, left to right and listed in the intended sequence for my reader to interpret. James created a spatial organization in which all the elements are viewed at once and the arrangement is what gives the reader meaning, an idea echoed by Kress’ (2005) work. I felt that James used the affordances of the visual image more effectively than I did in this task.

Another reason I chose to reflect on this post was that it led me to realize I had assumed the act of interpretation is different for words than for emoticons. Kress (2005) notes that images can be used to depict anything and provide meaning through that depiction (p. 15). I had assumed this was the case for emoticons: their arrangement and selection are what provide meaning. However, as James points out, these emoticons are all assigned meanings by their creators and thus have a finite number of possible elements, making emoticons closer in nature to words than to image-based depictions. Another element emoticons share with words is that they are always general and vague, meaning the reader needs to provide the meaning (Kress, 2005, p. 15) unless they know how to access the hidden assigned meanings, such as by using the device’s read-aloud software, as James did. I noted that emoticons may have a multiplicity of meanings depending on the cultural context of the reader, something James echoes in his response to my comment. He gives the example of a firecracker, which he intended as an explosion but which our peer interpreted as representing Chinese New Year (Martin, 2021). This post really forced me to examine my assumptions about the nature of emoticons and their relationships to images and words.

 

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5–22.

Martin, J. [james martin]. (2021, October 27). Emoji story [Online forum post]. WordPress. https://blogs.ubc.ca/etec540jamesmartin/2021/10/17/task-6-4-emoji-story/

 

#4. DeeDee Perrott’s Golden Record Network - https://blogs.ubc.ca/ddperrott/2021/11/04/task-9-golden-record-network/

The reason I selected DeeDee’s post to reflect on was that she shared my experience of trying to decipher the data and struggling to understand the quantitative element without paying equal attention to the qualitative component. As a strong proponent of using algorithms to enhance our daily lives, I had not considered what information they might omit, focusing instead on the benefits, such as bringing information of use or interest into focus. Seeing the edges and nodes of the Golden Record responses visualized the complexity of the network for me, especially one as large as Google’s, and highlighted how easily some nodes become entirely disconnected or hidden from all but the closest scrutiny.

During this module, we also learned that some algorithms take the number of interactions into account to add weight to the degree of the relationships between nodes. This was interesting to me, as I had not previously considered how search engines create the hierarchy of results. As I noted in my comment on DeeDee’s post, I was struck by the potential for the recreation of inequity related to ease of access to information. If weight is determined by the number of interactions, then those with access to interact with nodes that interest them, or align with their personal beliefs, have more influence on the weight and resulting hierarchy used by some search algorithms. This may lead to inequity of access being reproduced in the digital network: those in positions of privilege determine what information is deemed “more important” simply by having the privilege to access the online network space. These potential problems with the relationships between algorithms and networks were not something I had considered previously, and I will continue to be mindful of them moving forward.
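To make the idea concrete for myself, here is a toy sketch of how interaction counts might weight a ranking. The data and node names are purely hypothetical, and this is not any real search engine’s algorithm; it only illustrates the principle that whoever interacts most shapes the hierarchy.

```python
from collections import defaultdict

# Hypothetical interaction log: (user, page) pairs.
# Repeated pairs add weight to that edge.
interactions = [
    ("amy", "curio-track"), ("amy", "curio-track"),
    ("ben", "curio-track"), ("ben", "deep-cut"),
    ("cal", "deep-cut"),
]

# Edge weight = number of interactions between a user and a page.
weights = defaultdict(int)
for user, page in interactions:
    weights[(user, page)] += 1

# A page's "importance" here is simply its total weighted degree.
score = defaultdict(int)
for (user, page), w in weights.items():
    score[page] += w

ranking = sorted(score, key=score.get, reverse=True)
print(ranking)  # pages with more interactions rank higher
```

In this sketch, the page that privileged, well-connected users touch most often rises to the top, regardless of its intrinsic value, which is exactly the reproduction of inequity described above.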

 

#5. Chris Howey’s Attention Economy - https://blogs.ubc.ca/chowey/2021/11/08/attention-economy/

My reason for selecting Chris’ post to engage with and reflect on was that he had a different experience with the User Inyerface activity than I did. My frustration came mostly from feeling that the website was needlessly trying to obtain my personal information, whereas Chris felt similar frustration with the design elements of the webpage itself. I found it interesting that two people with experience working in the online sphere could have such different experiences with the same web artifact.

I also picked Chris’ post because I really enjoyed the quote he pulled from the TED talk by Tristan Harris regarding outrage being a good way of getting attention. This prompted me to go back and re-watch the video. The main message I took away was the need for accountability in algorithms and the need to maintain ethics and boundaries in the online sphere (Harris, 2017). We learned a great deal about the technological advances aimed at perfecting the online experience and gathering information about users, but it remains to be seen how consumers can protect themselves and their personal information. There may be laws that protect online users, but as this task clearly shows, there are deceptive means to an end. This experience reinforced for me the need to teach and learn responsible digital citizenship, how to evaluate online spaces, and how to use critical thinking to make informed decisions online. I will strive to continue practicing these ideals, and teaching them to my students, going forward.

 

Harris, T. (2017). How a handful of tech companies control billions of minds every day [Video]. TED. https://www.ted.com/talks/tristan_harris_the_manipulative_tricks_tech_companies_use_to_capture_your_attention?language=en

 

#6. Amy Jazienicki’s Algorithms of Predictive Text - https://blogs.ubc.ca/etec540ajazieni/2021/11/21/task-11-algorithms-of-predictive-text/

I chose to reflect on Amy’s post because the quote she pulled from McRaney’s podcast correlated well with the article by O’Neil (2017) on the problem with algorithms. O’Neil lists four layers of complexity when it comes to “bad” algorithms: unintentional problems that reflect cultural biases, neglect, the nasty but legal, and those that are intentionally nefarious. Those four categories, combined with McRaney’s point about the unintentional sexism of algorithms, reinforced a concern I had identified in the networking task regarding the replication of inequities in the online sphere.

This raises the question of what responsibility the agencies that maintain these algorithms, such as a company like Google, have to protect consumers from those algorithms and the injustices and inequities they may unintentionally reproduce. Additionally, should it be up to the individual to combat these effects online, or does there need to be government legislation to protect those rights? This is a problem that I don’t believe currently has an answer, and it needs further consideration as technology evolves at an ever-increasing rate.

 

McRaney, D. (n.d.). Machine Bias (rebroadcast). In You Are Not so Smart. Retrieved from https://soundcloud.com/youarenotsosmart/140-machine-bias-rebroadcast

O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Observer. Retrieved from https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies

 

Reflecting on all 6 Links

A common message I found myself coming back to throughout the selection of these six blog posts and the subsequent reflection was how rapidly changing text technologies can unintentionally perpetuate societal inequities. I have always been a proponent of the benefits of technology, often amazed at the speed of new developments, as well as how easily I find myself becoming reliant on digital technology for the majority of the ways I create and consume text. I had never taken the time to reflect on the potential limitations of the way I interact with text and the need to critically examine my assumptions about texts.

My classmates and I all experienced the various tasks using text in very different ways, which is completely understandable given our different backgrounds, worldviews, cultures, experiences, and beliefs. However, I had assumed that because we share a similar level of education and interest in technology (assumed because of our enrollment in the Master of Educational Technology program), we would share more commonalities when engaging with different texts. Instead, it was a common theme in all the online artifacts I examined that our experiences were unique. This allowed me to examine some of my own assumptions about how I generate texts for a selected audience, such as my students, and about fully allowing open exploration of texts.

In terms of the web authoring tool, all the authors I looked at used the UBC version of WordPress to present their tasks and reflections. Where there was a difference was in the way the information was conveyed. Some authors relied heavily on images to represent ideas; others used primarily written text. Less common were other media, such as videos or audio recordings, although I did see these in the posts of classmates I did not choose for this activity. Throughout the course, we discussed the benefits of using texts in ways beyond writing, with Kress (2005), for example, noting that images can be used to depict anything and provide meaning through that depiction (pg. 15), yet many classmates still held true to traditional written text. I think this is representative of how many people interact with text. Even though how we create text may have changed (such as typing on a computer), we still adhere to traditional rules and formatting because they are comfortable. There is a need for critical thinking and digital literacy regarding how we interact with texts in the new online sphere, since traditional methods of thinking don’t always apply to these new mediums.

Task 7-Mode-Bending: What’s In Your Bag

When completing the “What’s In Your Bag” task the first time around, I defaulted to my most commonly used form of text communication: full, complete sentences relying heavily on adjectives and nouns, combined with photographs. This fell primarily into the linguistic design, as described by the New London Group (1996). My choices in completing the task reflected my most common textual experiences, my occupation being teaching and my having extensive experience in an academic setting. With these factors in mind, I set out to transform my most commonly used elements by turning the photograph into an abstract piece of art and assigning a sound and a verb to each object.

Reimagined What’s In Your Bag

Please see the above link for the audio recordings.

The written description provided in the first iteration of this task clearly lacked audio design, and that became the focus of the semiotic reimagining. By assigning a sound to each object, the linguistic language typically correlated with the object is transformed into a sound that can form that association instead. The audience is then free to come up with their own interpretation, allowing for the integration of pre-existing knowledge and personal cultural interpretations that may have been limited previously. The reading of the words adds another layer of audio design, as the New London Group (1996) acknowledges that spoken language is as much a part of the audio design as it is of the linguistic design (pg. 29).

Another aspect I wanted to transform was the visual representation. In the first version, the available design was a photograph, whereas in the redesign I wanted to mix the available designs, pairing the audio with a visual design that focused more on perspective and colours. While I kept many of the colours consistent, I chose to rearrange some object representations and tried to communicate personal significance through which representations I placed in the foreground and the sizing of each. I found that a strength of the redesign process was allowing the audience to bring in more of their personal interpretation, potentially opening up discussions about meaning and personal identity.

A challenge I faced was redesigning the task into something that was still meaningful to me. Since the first iteration used my preferred method of communication, I felt confident putting that text out there, believing my meaning would be clear and refined. With this new redesign, I feel my intended meaning may be a bit diluted, which limited my confidence in creating this version. However, the redesign process may make my initial work easier for someone else to understand, and it acknowledges the multiplicity of design.

 

References

The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60-92.

 

Emoji Story

The process of creating the emoji story was similar to creating the earlier Twine task in the sense that it was very organic and unplanned. When starting the task, I did begin with the title. I chose this particular work because I could immediately visualize the emoticons for the title in my head, partly due to my familiarity with the emoticons I selected. I also began with the title because my primary interaction with texts has been through traditional forms of media, such as books and journal articles, which always begin with the title at the top of the page. I even found myself trying to find a way to underline the emoticons that constitute the title of my chosen work, to match the formatting I commonly use when producing texts.

An immediate challenge I faced was that I had chosen my work on the presumption that an emoticon existed to represent part of the title; however, I was mistaken. I then had to substitute a similar emoticon, though I hesitated out of concern that the reader could misconstrue the symbol’s meaning. I connected this to the idea of depiction discussed in Kress’ (2005) article, where he says that anything can be depicted with great specificity, which is a strength of depiction versus writing (pg. 15). While emoticons serve as a sort of digital depiction, they are still limited in their communicative potential by the fact that they are a curated alphabet. When a company such as Apple creates this alphabet library, it is the one selecting the important words and ideas to be represented, reflecting popular culture. There is no chance to add more detail, as with a drawn depiction; instead, one must rely on a combination of symbols that individual people may interpret differently.

To create my emoji story, I relied primarily on words and ideas, while also utilizing a few syllables. The majority of the emoticons available are designed to represent specific words, which I made use of. It is interesting to note that while emoticons typically represent words, some have taken on different meanings of cultural significance. For example, the flame emoticon would normally refer to the word “fire”, but that was not the context I used it in; rather, I used it to refer to an emotion. This allows for a multiplicity of meaning for emoticons. I would be curious to compare emoticon popularity between cultures to see whether the emoticons that have taken on alternate cultural meanings show the same trends in popularity.

References

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22.

 

 

Twine Task

For this Twine task, I took inspiration from the choose-your-own-adventure books I grew up reading. I wanted to create a pathway through my hypertext that led to the ideal possible outcome, which in my story was being hired for your dream job at a very peculiar company. The ability to interact with pieces of writing in a variety of reading orders, as mentioned by Bolter (2001), really reinforced for me a strength of hypertext. The reader is able to have a higher level of interactivity with the text, which I found more engaging than simply reading through a passage. Typically, higher levels of engagement and enjoyment can be linked to higher levels of reading comprehension.

The generation of the text was very organic, with no prewriting or thought to structure, as opposed to the steps I would take when writing a traditional narrative text. A strength of the Twine platform was the malleability of the passages; I could manipulate them as I wrote rather than designating the organization beforehand. While I am very familiar with conventional prose writing, I had minimal experience working with this type of coding language. The data structures provided within the Twine program really allowed me to visualize the hypertext and the interconnections between the passages.

The ease of creating and visualizing this small hypertext Twine story really opened my eyes to the sheer size and accomplishment of the texts being produced as part of the Internet. We seem to place significantly different value on traditionally printed text than on text found within the online sphere, when the main difference lies solely in the manner of interaction. Bolter’s (2001) point about needing to negotiate the remediation of printed text and electronic writing really resonated with my experience, since I needed to rely upon traditional print-writing skills to complete this story, but apply them in a new, code-based digital environment.

ETEC.html

References

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print (Chapter 3). New York, NY: Routledge.