Linking Assignment: Links

Linking assignment to T6 link: https://blogs.ubc.ca/shetec/2021/10/15/linking-assignment-the-many-lives-of-emojis/

Linking assignment to T8 link: https://blogs.ubc.ca/shetec/2021/10/31/linking-assignment-t8-spinning-multiple-records/

Linking assignment to T9 link: https://blogs.ubc.ca/shetec/2021/11/13/linking-assignment-task-9/

Linking assignment to T10 link: https://blogs.ubc.ca/shetec/2021/12/06/linking-assignment-10/

Linking assignment to T11 link: https://blogs.ubc.ca/shetec/2021/12/06/linking-assignment-t10/

Linking assignment to T12 link: https://blogs.ubc.ca/shetec/2021/12/06/linking-assignment-t12/

Final Assignment: The Future of Communication in Education

Click the link to download the podcast script: ETEC540fin-script

References

Dunne, A., & Raby, F. (2013). Speculative everything: Design, fiction, and social dreaming. The MIT Press. Retrieved August 30, 2019, from the Project MUSE database.

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22. https://doi.org/10.1016/j.compcom.2004.12.004

Liu, Y., & Sourina, O. (2014). Real-time subject-dependent EEG-based emotion recognition algorithm. In M. L. Gavrilova, C. J. K. Tan, X. Mao, & L. Hong (Eds.), Transactions on Computational Science XXIII (Lecture Notes in Computer Science, Vol. 8490). Springer. https://doi.org/10.1007/978-3-662-43790-2_11

Vallor, S. (2018, November 6). Lessons from the AI mirror [Video]. YouTube. https://www.youtube.com/watch?v=40UbpSoYN4k&t=872s

Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3-14. https://doi.org/10.1016/j.stueduc.2011.03.001

Linking Assignment: T12

Richard’s post resonated because there have been times I have wished I had extra eyes and ears to monitor my students and prevent bullying. Just the other day there was one pen and four students all claiming it was theirs. Credit Note could save me the trouble of talking to different people to determine who the pen belongs to. Is the solution to ban outside stationery (the school provides students with writing utensils and stationery so students don’t have to bring anything)? What about free choice, being responsible, and learning from mistakes? I feel algorithms don’t have the nuance to allow people to make mistakes and grow from them. I fear these kinds of algorithms teach people to follow a guideline without thinking. How can future generations innovate if they live in fear and are forced to live with black-and-white values?

Linking Assignment: T11


I responded to Anu’s post because of her comment that algorithms need to be transparent and fair. What does fair mean, exactly? Does it mean that everyone should be treated the same? If someone doesn’t show up for their hearing because they need to work to support their family, should they be treated the same as someone who doesn’t show up for their hearing because they were stealing to support their family? I wonder how an algorithm would process this. O’Neil (2017) mentions that the algorithm targets people in poverty. It’s impossible for an algorithm to consider all aspects of a person’s circumstances because, at this time, not all agencies are linked or even online (hospital records, for example). When an algorithm targets people based on their income or skin colour, it overlooks the underlying issues and perpetuates the biases it was created with. I think algorithms can be seen as patterns that seek to repeat themselves, which is why the same people keep getting pulled into this net. Judges in bail court have limited resources and time; if this is all the information you have, is it fair to ignore it? Could algorithms instead be used for preventative measures, so judges can properly listen to cases?

References

O’Neil, C. (2017, April 6). Justice in the age of big data. IDEAS.TED.COM. https://ideas.ted.com/justice-in-the-age-of-big-data/

Linking Assignment: T10

I responded to Clarissa’s post because some of the things she did, such as entering fake personal data, have become part of our daily lives. In the past, child predators and kidnappers would want to know your name so they could pretend to know you and get you into their van or house. Now marketers and companies use your name to create a false sense of intimacy so they can get you into their stores, websites, and checkout counters. The occult could use your birthday and birth time to see your past and future, which is basically what algorithms do. But algorithms seem more insidious because they also use that data to manipulate your future decisions. I suppose the occult could do that as well, but algorithms are everywhere, hard to ignore, and so commonplace that they sometimes go unnoticed.

Task 12: Speculative Futures

“Jay, why did you scream?”

“I don’t know.”

“Think about it.”

“I thought that was your job.”

“What was happening before you screamed?”

“Well, B screamed first so I decided to too.”

“If B jumped off a bridge, would you do that too?”

“You sound like my teacher. This is really annoying; why do you have to go over everything I do?”

“Because you don’t.”

Dunne and Raby (2013) state that conceptual designs are more than just ideas; they’re ideals as well. Moral philosopher Susan Neiman elaborates on this: “Ideals are not measured by whether they conform to reality; reality is judged by whether it lives up to ideals” (as cited in Dunne & Raby, 2013, p. 12). Jay is an only child with busy, career-oriented parents, who have become concerned about Jay’s emotional maturity. They have invested in a SmartWatch app that uses sensors to detect Jay’s emotional state and behaviour and to prompt them to reflect on and regulate their emotions and actions. Dunne and Raby (2013) note that designers’ creations are made with the best intentions, often neglecting people’s worst tendencies. Their solution to this design flaw is to change “our values, beliefs, attitudes, and behavior” (Dunne & Raby, 2013, p. 2). The algorithms the app uses to determine behaviours and emotions are user-dependent, meaning Jay will be prompted by the app to input their emotions and their causes.
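To make “user-dependent” concrete, here is a minimal sketch of how such a prompt-and-label loop might work. Everything in it, from the class name to the sensor readings, is hypothetical; the point is simply that the app’s guesses are trained on the labels Jay supplies, so the model is specific to Jay.

```python
# A toy, subject-dependent emotion guesser. All names, readings, and the
# nearest-centroid approach are invented for illustration; a real wearable
# would be far more sophisticated.
from collections import defaultdict

class EmotionPrompter:
    def __init__(self):
        # Per-user labelled samples: emotion -> list of (heart_rate, skin_conductance)
        self.samples = defaultdict(list)

    def record_label(self, emotion, reading):
        """Called after the app prompts the user to name their emotion and its cause."""
        self.samples[emotion].append(reading)

    def guess(self, reading):
        """Nearest-centroid guess; returns None until the user has supplied labels."""
        best, best_dist = None, float("inf")
        for emotion, readings in self.samples.items():
            cx = sum(r[0] for r in readings) / len(readings)
            cy = sum(r[1] for r in readings) / len(readings)
            dist = (reading[0] - cx) ** 2 + (reading[1] - cy) ** 2
            if dist < best_dist:
                best, best_dist = emotion, dist
        return best

app = EmotionPrompter()
app.record_label("frustrated", (110, 0.8))  # Jay screams; the app asks why
app.record_label("calm", (72, 0.3))
print(app.guess((105, 0.7)))                # -> "frustrated"
```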

Meanwhile, Jay’s teacher is talking to herself, or is she? She’s having a discussion with her teaching app. The app works through the curriculum and student data. From the time students enter the school, their learning progress has been input into a database. This data, plus lesson plans taken from several online resources, is fed into the app, which then uses the information to plan lessons around the teacher’s learning objectives. Initially, teachers were concerned that their jobs and/or pay would be cut, but as Dr. Shannon Vallor (2018) observed, “AI is not ready for solo flight,” so “we [people] are still the responsible agents.” The purpose of this app is to free up time for teachers: non-contact hours for evaluating student work during school hours, focusing on presentation rather than creation, mentoring and coaching new teachers, and of course, inputting students’ learning progress into the database. Teachers are still involved and engaged in the lesson planning process, since the lessons use the data teachers input into the database to determine whether the class should move on to the next learning objectives. The app suggests different activities and lesson formats for teachers to choose from, with the option to make modifications. The role of the teacher is shifting from lesson planner to evaluator and mentor. Dr. Vallor (2018) points out that the “future of human-AI partnership, one that serves and enriches human lives, won’t happen organically; it will need to be a choice we make, to improve our machines by improving ourselves.”
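A minimal sketch of the advancement logic imagined above; the threshold values, data shapes, and function names are all invented, not taken from any real product.

```python
# Hypothetical advancement logic: the class moves on only when enough
# students have demonstrated mastery of the current objective.
def ready_to_advance(class_scores, mastery_threshold=0.8, min_fraction=0.75):
    """True if at least min_fraction of students meet the mastery threshold."""
    mastered = sum(1 for s in class_scores if s >= mastery_threshold)
    return mastered / len(class_scores) >= min_fraction

def suggest_activities(objective, resource_bank):
    """Return candidate lesson formats; the teacher, not the app, makes the
    final choice and can modify any of them."""
    return [a for a in resource_bank if objective in a["objectives"]]

scores = [0.9, 0.85, 0.7, 0.95, 0.88]
print(ready_to_advance(scores))  # True: 4 of 5 students are at or above 0.8
```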

References

Dunne, A., & Raby, F. (2013). Speculative everything: Design, fiction, and social dreaming. The MIT Press. Retrieved August 30, 2019, from the Project MUSE database.

Vallor, S. (2018, November 6). Lessons from the AI mirror [Video]. YouTube. https://www.youtube.com/watch?v=40UbpSoYN4k&t=872s


Task 11: Detain/Release

Everyone comes with their own biases, and algorithms may be formed with the best intentions, but they are undoubtedly embedded with the biases of their creators. The biases I had coming into the Detain/Release module were:

  1. I’ve watched every season of The Good Wife; I think the main character works in bail court in the last season.
  2. I’ve watched most, if not all, of John Oliver’s segments on prisons and detainment.
  3. The song “What’s Your Story?” came to mind while I was listening to the different podcasts.

I don’t think this background makes me an expert, but this viewing history has made me more aware of biases and of an unjust, prejudiced judicial and policing system in which evidence is sometimes “lost” or “found” to get a case off the docket.

The charge is serious and sensitive. The information from the algorithm is confusing and makes me think of To Kill a Mockingbird, where Tom Robinson was wrongly accused of rape. At the same time, there are plenty of reports in the news where rape victims’ claims were dismissed for no reason. I’d want to see the evidence they have against Alexander Dix. Why is he likely to appear and unlikely to be violent, yet likely to commit a crime? Wouldn’t a person likely to appear also be unlikely to commit a crime? Did the algorithm predict he’d commit a crime because of his skin colour? What kind of crime does the algorithm predict he’d commit? Shoplifting? Shoplifting is not something I condone, but why would he shoplift? It sounds like he has a family to support. If he is no longer there to support his family, isn’t it more likely that the people he leaves behind will suffer and perhaps resort to petty crime?

I think the problem with the algorithm is that it overlooks underlying issues that need to be solved, such as over-policing in certain areas and the lack of social support systems such as welfare and childcare, which O’Neil (2017) notes are related and result in the poor being caught up in “digital dragnets”.
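A toy simulation makes this feedback loop visible. Everything below is invented for illustration: two neighbourhoods with identical underlying crime rates, where one starts out over-policed and the “predictive” step simply sends next year’s patrols wherever past arrests were made.

```python
# Invented numbers, purely to illustrate O'Neil's feedback-loop argument.
import random

random.seed(0)
true_crime_rate = {"A": 0.05, "B": 0.05}  # both neighbourhoods are identical
patrols = {"A": 10, "B": 2}               # but A starts out over-policed
arrests = {"A": 0, "B": 0}

for year in range(10):
    for hood in ("A", "B"):
        # Arrests scale with patrols, not with the underlying crime rate.
        arrests[hood] += sum(
            random.random() < true_crime_rate[hood]
            for _ in range(patrols[hood] * 100)
        )
    # The "predictive" step: allocate next year's patrols by past arrests.
    total = arrests["A"] + arrests["B"]
    patrols["A"] = max(1, round(12 * arrests["A"] / total))
    patrols["B"] = max(1, round(12 * arrests["B"] / total))

print(arrests)  # A's count dwarfs B's despite identical crime rates
```

Even though the two neighbourhoods behave identically, A ends up with roughly five times B’s arrests, because the algorithm is really measuring policing, not crime; the pattern repeats itself.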

It’s not just adults who can become entangled in these algorithms. Learning management systems, which are becoming part of more educational systems, can collect data from students that can be used to identify at-risk students. As a teacher, I can use the data to confirm what I already know or to look into cases that surprise me: perhaps a student who did well in class did poorly on a test due to a stomachache or an argument. However, if this data were sent to the head of school or the school board, it would not show the wider picture that the classroom teacher is privy to. A report sent without teacher input could unnecessarily alarm and upset parents and students.
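As a hypothetical sketch of the kind of flag such a system might raise (the rule and the field names are mine, not any real LMS’s):

```python
# A made-up at-risk rule: flag a student whose recent average drops
# sharply below their baseline. The flag is a starting point for the
# teacher, not a verdict; one bad week can trip it.
def flag_at_risk(student, drop_threshold=0.15):
    recent = sum(student["recent_scores"]) / len(student["recent_scores"])
    return student["baseline"] - recent > drop_threshold

student = {"baseline": 0.85, "recent_scores": [0.6, 0.65, 0.7]}
if flag_at_risk(student):
    print("Flagged: check with the classroom teacher before reporting onward.")
```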

Algorithms do have their time and place. They definitely would have made my life easier back when I had 200 students during online classes, but I would have had to follow up with the students’ homeroom teachers to effectively use that data to evaluate and support students.

References

O’Neil, C. (2017, April 6). Justice in the age of big data. IDEAS.TED.COM. https://ideas.ted.com/justice-in-the-age-of-big-data/

Linking Assignment: Task 9

Grant’s comment to my T9 post made me dive more deeply into the question I asked at the end of Task 9: Perhaps it would be easier to achieve diversity through universal feelings?

His comment made me curious to see his T9 post, which I responded to; my response is posted below.


I think we should lean into our differences and try to understand them rather than ignore them.

To illustrate, I’d like to talk about my shopping experiences in China. Sometimes the item I want is available but not in the colour I want. I used to say, “No” when asked if I’d like to purchase it in a different colour. Now I say, “That colour is not quite to my liking,” because I noticed that the sales associate would giggle. At first I was surprised by the laughter; to me it made no sense, because I had not said anything funny and I did not think my pronunciation was bad enough to cause laughter. Then I remembered a tip I had been given during my early days of living in China: people often laugh when they are nervous. With this in mind, I realized that my answer had been too abrupt. I started paying attention to how locals interacted with sales associates and learnt how to say “no” in an inoffensive way. The problem with algorithms is that people often don’t know much about the brains and biases behind them. What biases do their creators have? What data did they input, and how did they interpret it? How I interpret the data an algorithm comes up with will be different from how someone else would.

If my cultural blunders could be visualized as an image of me covered in question marks, would the question marks lessen as I interact with more locals and multiply every time I move to a new region? Algorithms can both dictate and influence people’s decisions, but isn’t that the same as human interactions? The more often I interact with a group of people, the more I understand them and the more I might adjust my behaviour to better interact with them; but unlike with an algorithm, I can find out the exact reasons behind their behaviours.

Dr. Zeynep Tufekci’s (2017) TED Talk discusses how the choices algorithms make can affect our emotions and political beliefs. I wonder if algorithms could be designed so people can explore differences, like the op-ed section of a newspaper. At the same time, I am worried about what I consider to be the darker side of our differences. Do I want to know why some people think the Holocaust is a hoax and why some people are, for example, anti-Semitic? I think it would be helpful to know their reasons so I can better address them, but I don’t want to be exposed to the vitriol that probably lies behind those reasons. Then again, if I don’t see that vitriol, could I be misled and misunderstand the impact of those feelings? If I were to design an algorithm that aims to provide a balanced argument, would I overcompensate for my biases and lean too far to the other side? Can technology distinguish between equality and equity? Perhaps it can sometimes, but not all the time. Technology should be something humans interact with, not something that replaces humans.

References

Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads [Video]. TED. https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads?

Task 10: Attention Economy


Infuriating was my initial thought; I think my reaction was so extreme because of some things that had happened earlier in the day. After completing the “game,” I visited the bagaar.be website, read through their job application page, and realized what a great ice-breaker this would be at their job interviews. Now I think the people behind this idea are creative and could be fun in small doses.

To start the game, I had to read the text under the giant NO button. While HERE was capitalized, “click” was underlined, making it look like a link. I clicked on “click” first because I wanted to check whether it was a link. It wasn’t, so I had to click on HERE. Websites will often make the surrounding words part of the link as well, both to make things easier for users and to catch any near-miss clicks. This process just to start the activity reminded me of the attention and time needed to subscribe to and unsubscribe from paid services.

A pop-up reminding me that time was passing took several tries to close. The option to close the pop-up was written in a small font in a light colour and could easily be missed when scanning, because it looks like a copyright line. I thought this pop-up design effectively exploited the psychological insight from Steve Krug shared in Brignull’s (2011) article: “[w]e don’t read pages. We scan them.” The time pressure can make people skip or rush through the fine print and click buttons without knowing what they’re agreeing to.

Another psychological insight Brignull (2011) mentions comes from Robert Cialdini: “[p]eople will do things that they see other people are doing.” Normally I would not bother completing a “game” like this. Even though the assignment says the “game” does not have to be completed if an explanation is given, I chose to complete the activity. I could have written about my own experiences with dark patterns, but I figured most of my classmates would complete the “game.”

After reading about forced continuity practices (Brignull, 2011) but before completing this task, I remembered a subscription I needed to cancel. To get my cancellation processed, I was asked to confirm it. The Yes option, to cancel, was in red on the left; the No option, to stop the cancellation, was in green on the right. However, I’m not sure I should describe this company as deceptive, because they did send me a reminder email a few days before my free trial ended. They could have arranged the options that way to avoid accidental cancellations, though resubscribing would presumably be easy anyway.

Prior to reading Brignull’s (2011) article, I saw dark-pattern practices as part of everyday life. I think most organizations try to mislead customers, just as stores hike up prices before sales or make cheaper, lower-quality clothes to stock their sales and outlet stores. But just because these practices are common, it doesn’t mean they’re acceptable.

Dark patterns can go beyond generating revenue; they can be used to drown out honest voices. I was looking for free AR apps to use in an educational setting, and it seemed that the completely free AR apps recommended by educational blogs and educators online had gone out of business. Having never used them, I can only speculate about what happened. Perhaps they used honest design in their UI, which resulted in lower revenue and less advertising. If honest UI design leads to not having a voice, should honest companies resort to dark patterns to level the playing field? Brignull’s (2011) Expedia.com example shows that companies can drop dark patterns, but would they have reached the same success if they had started with honest practices?

References

Brignull, H. (2011). Dark patterns: Deception vs. honesty in UI design. A List Apart, 338.

Task 9: Network Assignment

For this assignment, I preferred using the tables generated by the Palladio file. First, I found the community I had been sorted into in the Palladio file. Grant, Emily, Elizabeth, and I shared four common tracks, which have been highlighted in yellow.

I made this table to help me better visualize and make sense of our choices. Both the least and most selected tracks were picked by everyone within my community: The Well-Tempered Clavier and Flowing Streams.
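To give a sense of the overlap behind such groupings, here is a minimal sketch with made-up track choices. However the tool actually computes its communities, the intuition is the same: curators whose track sets overlap heavily end up grouped together.

```python
# Invented choices, for illustration only; the real data came from the
# class survey and was visualized in Palladio.
choices = {
    "Me":    {"Well-Tempered Clavier", "Flowing Streams",
              "Johnny B. Goode", "Melancholy Blues"},
    "Grant": {"Well-Tempered Clavier", "Flowing Streams",
              "Johnny B. Goode", "El Cascabel"},
    "Emily": {"Well-Tempered Clavier", "Flowing Streams",
              "Melancholy Blues", "Tchakrulo"},
}

def jaccard(a, b):
    """Shared tracks over total distinct tracks: 1.0 means identical lists."""
    return len(a & b) / len(a | b)

for name in ("Grant", "Emily"):
    print(name, round(jaccard(choices["Me"], choices[name]), 2))  # 0.6 each
```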

Due to the popularity of Flowing Streams, I wondered if diversity and representation were common themes among everyone’s lists. I read the Task 8 assignments from the people in my community and some random ones from neighbouring communities, and it seems that many people strived for representation of diverse cultures and identities.

During Task 8, I tried to choose tracks with a diversity of cultures and identities in mind, but I was concerned about how I went about this because:

  1. Diverse representation could imply to extra-terrestrials that diversity is widely valued and exists peacefully.
  2. In reality, we live in a world where biases are perpetuated in algorithms, which allows individuals to live with little exposure to different perspectives (Edwards, 2019).
  3. Choosing tracks from different cultures felt like tokenism since it does not address the underlying issues that exist.
  4. Representation by population ignores the minority groups that do not have the population numbers or cultural clout to be represented.
  5. The data input into algorithms are used to identify current and future trends. How can we be sure that the algorithm will interpret this data so it values diversity?
  6. The impossibility of representing all cultures through 10 tracks makes me wonder if a better way to represent all humans is being overlooked by the majority, and thus ignored by the algorithms. The most advanced algorithms use neural networks that are designed to copy the way the human mind works and to identify underlying relationships within a data set (Edwards, 2019), which means diversity of thought will be limited by the data set (see the sketch after this list).
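A toy illustration of point 6, with entirely invented data: a popularity-driven pick can only surface what its data set already over-represents, so minority choices never make the cut no matter how the tally is run.

```python
# Made-up submissions; the skew, not the numbers, is the point.
from collections import Counter

submissions = ["Track A"] * 55 + ["Track B"] * 35 + ["Track C"] * 10

top_picks = Counter(submissions).most_common(2)
print(top_picks)  # [('Track A', 55), ('Track B', 35)] -- Track C never surfaces
```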

I looked at the Task 8 posts completed by people in the community farthest from mine. One thing I noticed was that the people in this group all mentioned auditory aesthetics, which, I feel, were not so prominent in the Task 8 assignments from people in my community. Next, I visited the Task 8 assignments of two people who had not chosen Flowing Streams, Johanna and Danya. Johanna listened to the tracks and used universal human experiences, feelings, to curate her list. Danya’s process involved using research and sound to pick tracks that represent the world through history, geographical representation, and a variety of musical instruments, rhythms, and melodies, then narrowing down her initial choices. Like Johanna, she selected her final ten through listening. A completely different approach from mine, but effective nevertheless. Because this data was not in my immediate community, I doubt I would have seen it in any of my searches or social media feeds. If I were to complete Task 8 again, I would seriously consider approaching the task through human emotions, because feelings are something everyone experiences and can relate to. Perhaps it would be easier to achieve diversity through universal feelings?

References

Edwards, J. (2019, August 19). What is predictive analytics? Transforming data into future insights. CIO. https://www.cio.com/article/3273114/what-is-predictive-analytics-transforming-data-into-future-insights.html
