Linking Assignment

Linking Assignment 6: Sonia – Speculative Futures 

I was drawn to Sonia’s speculative futures task because she was prompted to think of a health device that encouraged a sense of optimism and promise for a future society. She created the HealthLink app, which “allows the algorithm to locate available medical professionals as well as store and house safely patient medical information so patients always have access to their own important data regardless of which medical professional they see.” This is a really promising medical application that allows users to stay on top of their health and directly convey their health data to healthcare professionals. There is no doubt that a tool like this could improve the lives of many, particularly individuals with chronic or serious health conditions who need to consult several healthcare practitioners at once. 

I think Sonia essentially created the utopian version of my dystopian speculative future. I refer to a health app that uses biological monitoring to allow users to keep track of their data, until the data becomes public and is analyzed by complex algorithms – but not for the better. The key difference is that in her future, the data is assumed to remain private and treated with care and caution, whereas in my narrative, the data becomes publicly available. 

This makes me think about the degree to which we trust large tech conglomerates to make ethical decisions with our personal data and information. The question my narrative raises is: what would the world look like if we had no privacy? What if the most intimate details of our bodily functions were available to others? If our health data is available to our doctors, then they can make educated decisions about our health and support our needs. But if that data is provided elsewhere, to insurance companies for instance, then it can be used against us. The comparison of our two narratives demonstrates how important it is to ensure that our data and information are seen and used by the right people. 


Linking Assignment 5: Kristine – Detain/Release AI Simulator

I think Kristine has done a really great job narrating and reflecting on her experience using the Detain/Release simulator. She included a play-by-play of her actions as well as some reflection questions, right at the start, to get the reader thinking. As Kristine notes, relying on an algorithm in isolation is challenging, especially when dealing with such sensitive information in a highly consequential situation. She elaborates that trading accuracy for efficiency is not always the right approach, especially given the narrow, data-driven approach of algorithms. 

One of the key points Kristine outlines is, “For each final decision I made, I wondered if it might result in a bigger consequence such as homicide, rape, loss of a home or job, or even death by suicide.” One of the first things that came to my mind was the possibility of sending a violent man back into the world. Why? Because I too was concerned about rape, sexual violence, and domestic abuse – clearly indicating that I had a bias toward men as I sorted through the cases. As a woman, I move through the world, on some level, afraid of men. Covering my drink, taking out my headphones, looking over my shoulder, walking away from the road: these are all things that change my life because I am afraid a dangerous man will put my well-being at risk. Throughout this experiment, that sentiment lingered in the back of my mind, as I subconsciously wondered how likely it is that a woman is going to go home and abuse her spouse. But with such limited information, making an honest and accurate assessment is challenging – even with the right resources, we can’t always know what’s going on behind closed doors. This simulation certainly forced me, and likely many of its players, to think about our underlying biases. 


Linking Assignment 4: Chris – Attention Economy 

While I think that the majority of us who tried to navigate the User Inyerface faced quite a bit of frustration, both Chris and I defaulted to watching another person walk through the game. He watched a YouTuber, Kittrz, do a speedrun of the game to see where players usually go wrong and how to successfully bypass obstacles to reach the end of the game. I think we both leaned towards watching somebody else navigate the remainder of the game rather than devoting more time to troubleshooting. 

Evidently, the purpose of the game was to prevent users from moving intuitively through the process, making them struggle at nearly every step – regardless of how small the step may be. Chris’s analysis through a disability lens is a perspective that I didn’t think about deeply enough, and his point prompted me to think about user design in a broader sense. The game really demonstrates how frustrating design can be for individuals with a disability, who struggle when user interfaces are not accommodating. That frustration with design extends well beyond web pages and front-facing user platforms to our physical tools and technologies.  

This line of thinking reminded me of a TikTok I watched (I desperately tried to find it for this reflection, but it’s unfortunately lost in the abyss). A young woman in a wheelchair explained that her school was able to offer her some really “fancy” assistive technology. This support was offered through applications and platforms, such as advanced audio recording and transcription services. Which is great and all… BUT most of the doors in the buildings she needed to access for her university classes were not wheelchair accessible! Accessibility features like door-opening buttons and elevators were the designs she needed to succeed in her learning. That is to say, accessibility and design are not one-size-fits-all. I took a UX/UI design course through Google, and the number one thing they stressed was that user design needs to be made for the universal user: “universal usability refers to the design of information and communications products and services that are usable for every citizen” (Wikimedia Foundation, 2022). Ultimately, this exercise shows that a keen eye for detail and a high level of user consideration are needed to create user interfaces. 

References

Wikimedia Foundation. (2022, July 24). Universal usability. Wikipedia. Retrieved March 29, 2023, from https://en.wikipedia.org/wiki/Universal_usability 


Linking Assignment 3: Deborah – Emoji Story 

Deborah’s initial remarks were, “the biggest challenge for this task was finding the appropriate emojis”. I also faced this issue! I am pretty familiar with the emoji keyboard offered by Apple; however, as Deborah said, I found it challenging to copy those emojis over. So I opted instead for the emoji keyboard recommended in the assignment instructions. Unfortunately, it offers significantly fewer emoji options than the Apple keyboard. Clearly, the emoji keyboard needs an update. 

Similarly, I found that I was spending considerable time copying and pasting the emojis over and editing my story to include all the relevant ones. There were SEVERAL instances where I had completed my story (or so I thought) and later realized that a few emojis were missing or something was in the wrong order. But the keyboard does not let you edit the emojis once they are in their line, so on several occasions I had to redo my whole emoji story. The lack of functionality of the emoji keyboard was frustrating, but then I also found out that WordPress does not support emojis and I needed to take a screenshot, adding another element of resistance to this endeavor. Deborah wrote, “Even when I entered into my post, this platform does not want to support them and all I got at the end was lines of question marks. I finally had to screenshot the Word document to create a .png and upload it onto the page as an image.” These two factors, the limited emoji selection and the inability to edit my story, meant that my story ended up relatively short and underdeveloped. 

I think Deborah’s reference to how we are told to present slides (with limited text and supportive images) was a really strong connection to how we interpret the relationship between text and image. It reminded me of how my company uses Slack, with an aggressive amount of custom emojis. I even have a collection of emojis with my facial expressions that I can use to reply to threads, demonstrating how I feel about a situation – without actually having to reply with my words. 


Linking Assignment 2: Lubna – Mode Bending

I think the way that Lubna reimagined the semiotic mode of what’s in her bag was very creative and interactive. I appreciate that she took the time to embed the application into her WordPress website and make the tool accessible to the class (rather than linking it elsewhere). She used H5P, a platform I am not very familiar with, to create an interactive image. The image, which shows the contents of her bag, is in grayscale with a dark outline around the objects. As the reader hovers over the contents of the bag, little purple plus signs pop up, bringing the viewer to a new interaction. As the viewer hovers over each of the objects, an audio clip she curated plays. 

Similarly, I dove into the element of audio. We had both recorded audio clips of the objects in our bags being used. She mimicked the sound of sneezing to demonstrate that the object in her bag was tissues, much in the same way that I shook a pill bottle. We both asked our audience to listen closely and guess what the object or meaning might be based on the sound. 

Each of the videos uses a unique sound, which might include her speaking, a child speaking, or perhaps environmental sound. When I hover over the keys, I hear her say “how much do you love me, tell me again” with a response from a young boy that says “like every mini second it adds by infinity and beyond.” The capture of this very intimate moment is revealing of her personal identity and space, much like a bag would be. 

Her commentary below the interactive piece reveals that she seldom approaches academic spaces in her native language, oftentimes defaulting to English. This piece stood out to me because of her integration of cultural and personal elements. She says, “I cut a short audio clip from an interesting YouTube video of a Pakistani man living in New York who ‘unboxes’ this quintessential Pakistani flavor and unexpectedly finds fifty Pakistani rupees inside. This would be worth about 19 cents in American currency, but his joy at the discovery is priceless.” She opted for this video rather than the cheesy Blue Ribbon commercial jingle, and I would agree that the humanistic element of the genuinely happy man adds a special kind of warmth. Her use of intertextuality here is particularly crafty and thoughtful. 

I think this was such a creative and multi-layered piece. Even though it’s so accessible and easy to maneuver, there is considerable thought towards what sounds are chosen and the interpretation of the audience. I also really appreciate how vulnerable this medium can be, with the sharing of intimate conversations; however, it was an effective way of engaging the audience and asking them to think critically about the why and what.


Linking Assignment 1: Dana – Voice-to-Text Task

Dana used the voice-to-text feature on Google Docs, stating that she picked the platform because it’s the technology her students use in class. Her initial observations with the Google Docs voice-to-text feature were extremely similar to mine; she says, “At first glance, it’s obvious how the text deviates, there are no indents, paragraphs, and few punctuation markings”. The largest challenge Dana encountered was poor grammar and formatting. In my experience, I also found punctuation to be the largest concern; in my paragraph there were essentially no periods. However, I used the feature as if I had no experience with the tool, which is not actually the case, as I regularly use voice-to-text to rough-draft my papers. 

Further, we both take note of the mediation that occurs with writing that does not occur with oral storytelling. Writing can be reformatted, re-written, and re-structured, changing every detail to sound exactly the way the author intends. A person telling a story can easily jump between ideas or instances, whereas in writing, weaving through time requires craft and consideration. When we share our stories we are not confined by the rules of grammar, as Dana notes, “oral language forgoes many of the rules and standards that written language can represent”. 

I think it’s interesting that Dana included her own experience of her students struggling with literacy and second-language learning, stating that they use this application as an assistive technology. Prior to reading her reflection, I had not considered that learners of a second language could be excluded from using this tool. Even as a native English speaker, the speech-to-text function was not always correct with my wording, occasionally missing some of my pronouns or prepositions. I can see how this function would be especially frustrating for individuals who struggle with pronunciation. I would be interested to do some digging on ESL apps that use speech/voice recording technology to assist with pronunciation, as I would think they use similar language processing to Google’s speech-to-text (though I could be very wrong).

Tasks

Task 12: Speculative Futures

“Describe or narrate a scenario about a corporation found a decade into a future in which society as we know it has come apart. Your description should address issues related to citizenship and elicit feelings of anxiety.” 

The government funneled money into “Sync,” a company that focused on bringing people together and creating safer communities. When it first launched, the program was only for criminals: only people who were considered dangerous or threats to the community needed to have the procedure done. In the trial phase, a small pool of individuals who were out on parole with major convictions qualified. The monitor measured all of a user’s biometric data – heart rate, movement, body temperature, etc. – and cross-referenced that information with GPS tracking. The devices were intended to monitor and surveil those on parole to ensure they were not partaking in illegal activities or putting others at risk.

The biometric measurements were wildly accurate and even helped catch some health issues at the very early stages. So, Sync released a commercial version, supported by the government, that allowed individuals with medical conditions to get the chips under the pretense that they were medically assistive devices. Within the first year, Sync was endorsed by the government, celebrities, and serious athletes as the next step towards a longer, healthier life. The general population followed suit, and the model was refined to require only one embedded sensor near the heart. Sync partnered with governments in dozens of other countries and infiltrated global markets; the device became a staple in the lives of billions. Sync was a household name, like Amazon or Google. 

However, as Sync grew, the company expanded the purpose of the monitor – it became more than just a health device. The notification that used to infrequently pop up on people’s smartwatches, warning them that they were in proximity to a convicted felon, turned into detailed profiles. Profiles of all users were created with publicly available biometric data that could be accessed by anybody. At first, it was fun sharing your heart rate with a friend. But API integrations created by Sync and other big tech companies used advanced algorithmic models to predict a person’s mood and mental state. There was even an application that compiled a person’s metadata from their computers, TVs, phones, watches, and vehicles and cross-referenced the information with their biological data. This allowed the app to re-create a user’s life in digital form and make extremely accurate, curated predictions of a person’s thoughts, feelings, and future actions.  

That’s when things went south. 

In real time, people could extrapolate and analyze data from heart rates and body temperature, meaning this information could be used to determine whether people were telling the truth. Unlike devices of the past, like smartwatches, removing the monitor required surgical intervention; the permanent nature of the device meant these features were inescapable. The government took advantage of the scarily accurate lie-detection feature and re-opened nearly all criminal investigations. Headlines started to read “Woman kills husband after using Sync Monitor Lie Detection” and “Another Teen Dies After Trying to Remove Sync Monitor”. Society faced collective panic, living in fear of being watched and constantly exposed. Relationships and lives started to unravel as the Sync monitor forced its users to be completely honest. Secrets that people had hoped to take to their graves were now being judged by friends, family members, and law enforcement. In a matter of a few months, the world had become an unpleasant, isolating, and hostile place, filled with distrust toward loved ones and a burning resentment toward the government. 


Task 11: Detain/Release 

In Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya Noble explores how biased algorithms effectively oppress people of color in a deceitful and manipulative way. In the introduction of the text, she prefaces that these algorithms impact users in very legitimate and pragmatic ways. However, they primarily function without scrutiny from the public and are hidden from everybody except a select few individuals. So, for the most part, the public has a very limited grasp of the machine-learning functions that dictate these highly impactful algorithms (Noble, 2018). It’s also the case that officials using these algorithms to make decisions, like police officers, are unaware of the parameters and biases that shape the algorithm’s computing method and therefore its results. We see this concept in action through the Detain/Release simulation produced by the Berkman Klein Center for Internet and Society. 

Unfortunately, this simulation is a crude example of the ways in which algorithms are used to classify individuals (Porcaro, 2019). My initial impression of the game, that the player needs to assess “criminals”, makes it clear this algorithm is being used on marginalized individuals who likely do not have much credibility to voice their concerns. The algorithm scores each case along three separate categories: flight risk, likelihood to re-offend, and level of violence. Each category is then rated low, medium, or high. No information is provided on how these levels are distinguished or why one individual might pose more risk than another. Despite that, the player of the game needs to navigate each case and determine whether to release the offender or detain them. 

My assumption is that the algorithm uses information like the offender’s previous criminal record, employment status, income, social background, geographical area, etc. to determine these low-to-high rankings. However, these parameters are constructed using overarching generalizations that are likely informed by other biased algorithms or data. As Noble points out, these algorithms feed off one another and perpetuate inaccurate representations of marginalized people (Noble, 2018). The player does not have the opportunity to ask for more contextual information, nor are they encouraged to, as the limited function of the game suggests that the sole purpose of the player is to sort through each file. We could say that the player acts on their own judgment when making decisions, and to some degree that might be true, but the player is determining a risk threshold based on the information provided by the algorithm. So, in many ways, the element of human judgment is almost irrelevant, considering the algorithm has already determined which cases are more likely to be detained. For example, an ethical person would likely not allow a highly violent individual to re-enter society, so any file with a high level of violence is unlikely to be released. 
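To make concrete what I mean by a narrow, data-driven approach, here is a purely hypothetical sketch of the kind of crude scoring such a tool might perform. The features, weights, and thresholds are invented for illustration; they do not come from the actual simulator or any real risk-assessment system.

```python
# Hypothetical sketch of a crude pretrial risk score.
# Every feature, weight, and threshold here is invented for
# illustration only - it does NOT reflect the real simulator.

def risk_level(prior_arrests: int, employed: bool, neighborhood_rate: float) -> str:
    """Map coarse proxies to a low/medium/high label."""
    score = 2 * prior_arrests + (0 if employed else 3) + 5 * neighborhood_rate
    if score < 4:
        return "low"
    elif score < 9:
        return "medium"
    return "high"

# Two people with identical conduct can receive different labels
# purely because of where they live - the bias is baked into the proxy.
print(risk_level(prior_arrests=1, employed=True, neighborhood_rate=0.1))  # low
print(risk_level(prior_arrests=1, employed=True, neighborhood_rate=1.0))  # medium
```

Even this toy version shows the problem: the proxies stand in for the person, and the player shown only “low” or “medium” has no way to interrogate how those labels were produced.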

The player might release an individual they deem low-risk but later find out through a news article that the offender has gotten into trouble. The article reads something along the lines of “Judge allows violent offender back onto the streets”. The headline implies that the player, or decision-maker, is at fault for determining this offender was low-risk; the language places the accountability onto the decision-maker, calling the player’s judgment into question. This accountability allows the inaccuracies of the algorithm to go unchecked. Rather than reassessing the variables that determine the algorithm’s accuracy, the player continues navigating the remaining files with a higher level of scrutiny and apprehension. This comes back to the idea that algorithms are seen as neutral actors, incapable of being swayed by human biases because they are founded on mathematical equations – when that’s simply not the case. 

References

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Porcaro, K. (2019, April 17). Detain/release: Simulating Algorithmic Risk Assessments at pretrial. Medium. Retrieved March 20, 2023, from https://medium.com/berkman-klein-center/detain-release-simulating-algorithmic-risk-assessments-at-pretrial-375270657819

Standard
Tasks

Task 10: Attention Economy

As expected, the User Inyerface game designed by Bagaar was extremely frustrating to navigate and quite time-consuming (Bagaar, n.d.). The game is “designed to induce rage: mislabeled buttons, complicated password rules, nearly impossible to close pop-up windows, slowly scrolling terms and conditions, and annoying CAPTCHA forms” (Gartenberg, 2019). The makers of the game sought to make navigating the site as challenging as possible, including everything that goes against strong user interface design. 

I think most of us know when we are moving through a design that is not user-friendly; oftentimes, we feel frustrated that something is not clear or the navigation is not intuitive. This particular game was infuriating, and frankly, I spent a good 10 minutes getting through the first two pages before I gave up and decided to watch a YouTuber play the game instead. Misleading words, colors, italics, and bolded text were among the design elements that made getting past the first page challenging. The big green button below the instructions reads “NO” and is not linked to anything. Naturally, I clicked the button, wondering if it was only the lettering and not the shape itself that needed to be clicked; however, it was the word “HERE” in the line below that needed to be clicked.

The next page included a form that needed to be filled in with the user’s chosen password, email, and a signed user agreement. Entering information into each part of the form was difficult; everything required several alterations or some degree of troubleshooting. 

As users, we often feel the impacts of a badly designed user interface, but what’s more concerning is when an interface uses its design to trick its users (often consumers). These deceptive methods are what Harry Brignull describes as “dark patterns” (Brignull et al., 2011). Unfortunately, these deceptive methods are not always easily identifiable, allowing certain companies to make additional profits by exploiting users’ ignorance. 

An area of inquiry that sparks my interest is the controversy surrounding social media user interface design. The documentary The Social Dilemma explores how the objective of these social media companies is to produce users who become dependent on ingesting content. These big tech companies use some extremely deceptive, tactical, and manipulative methods to ensure their users depend on the gratification they receive from social media – to the point where the media impacts the user’s mental well-being. The goal of encouraging users to endlessly and mindlessly scroll plays on the human psyche in ways that the majority of users can’t begin to conceptualize (think children). TikTok’s user interface and algorithm model are notorious for keeping young users glued to their phones for, at times, twelve hours a day. This issue was exacerbated by the pandemic, when adolescents were spending most of their time online. However, the problem persisted following the lifting of restrictions, with children and teens still spending an alarming amount of time on the app. Recently, TikTok announced that in the coming months it will impose a default 60-minute daily screen-time limit for all users below the age of 18 (Hart, 2023). Could we say this is TikTok removing dark patterns from its design? 

References

Bagaar. (n.d.). User Inyerface – a worst-practice UI experiment. Retrieved April 9, 2023, from https://userinyerface.com/

Brignull, H., Eagan, C., MacIntyre, J., Clancey, P., Overkamp, L., Brosset, P., & Prater, S. V. (2011, November 1). Dark patterns: Deception vs. honesty in UI design. A List Apart. Retrieved April 9, 2023, from https://alistapart.com/article/dark-patterns-deception-vs-honesty-in-ui-design/

Hart, R. (2023, March 2). TikTok sets default daily screen time limit for under 18s. Forbes. Retrieved April 9, 2023, from https://www.forbes.com/sites/roberthart/2023/03/01/tiktok-sets-default-daily-screen-time-limit-for-under-18s/?sh=7a39f2c01bfa 


Task 9: Network Assignment Using Golden Record Curation Quiz Data

Networking diagrams can be helpful for visualizing the relationships between entities, but oftentimes in these diagrams we lose context that would otherwise provide the viewer with significant value. My initial thought on the diagram is that there are clear indications as to which songs the group preferred. It’s not a surprise to me that some of the American songs are the most picked. Songs like Track 7: Johnny B. Goode and Track 14: Melancholy Blues are likely songs many of us in the class had heard before. These songs likely evoke memories or personal experiences because they are commonly played on the radio and in movies. 

We can see that the less-selected songs outline the perimeter, as they are not acting as nodes with several connections. One of the less-selected songs is Track 19: Izlel e Delyu Haydutin by Valya Balkanska. This surprised me, as it was actually one of my favorite songs on the record, and one I had never heard before. Another less-selected track is Track 12: Chakrulo, with only one person adding it to their list. I think this aggregation of data does a good job of representing the number of connections or preferences our class might have for a certain song; however, it does not capture the reasoning behind why we selected the songs we did.
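As a rough illustration of how a diagram like this is assembled, here is a minimal Python sketch of counting curator–track connections. The curator names and track selections below are invented placeholders, not our actual class data.

```python
# Minimal sketch of how a curator-track network could be tallied.
# Curator names and selections are invented placeholders only.
from collections import Counter

selections = {
    "Curator A": ["Track 7", "Track 14", "Track 19"],
    "Curator B": ["Track 7", "Track 14"],
    "Curator C": ["Track 7", "Track 12"],
}

# Each shared pick becomes an edge between a curator node and a track node;
# a track's degree is simply how many curators selected it.
degree = Counter(track for tracks in selections.values() for track in tracks)

print(degree["Track 7"])   # 3 -> a hub near the centre of the diagram
print(degree["Track 12"])  # 1 -> sits alone on the perimeter
# Note what the tally cannot capture: WHY each curator chose a track.
```

The sketch makes the limitation visible: the graph stores only counts of connections, so the reasoning behind each choice never enters the data at all.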

Interestingly, this type of data reminds me of software we used at my old university called Litmaps. Litmaps is a platform that produces networking maps demonstrating how frequently related texts are referenced. This tool is helpful when performing literature reviews, but also in assessing the literary canon (Litmaps, 2023). 

References

Litmaps. (2023). About Litmaps. Litmaps. Retrieved March 20, 2023, from https://www.litmaps.com/company
