Linking Assignment: Task 9

Grant’s comment on my T9 post made me dive more deeply into the question I asked at the end of Task 9: could it be easier to achieve diversity through universal feelings?

His comment also made me curious to see his T9 post; I read it and responded, and my response is posted below.


I think we should lean into our differences and try to understand them rather than ignore them.

In more detail, I’d like to talk about my shopping experiences in China. Sometimes the item I want is available, but not in the colour I want. I used to say, “No” when asked if I’d like to purchase it in a different colour. Now I say, “That colour is not quite to my liking,” because I noticed that my old answer made the sales associate giggle. At first the laughter surprised me: I had not said anything funny, and I did not think my pronunciation was bad enough to cause it. Then I remembered a tip I had been given during my early days of living in China: people often laugh when they are nervous. That made me realize my answer had been too abrupt. I started paying attention to how locals interacted with sales associates and learnt how to say “no” in an inoffensive way.

The problem with algorithms is that people often know little about the brains and biases behind them. What biases do their designers hold? What data did they feed in, and how did they interpret it? How I interpret the results an algorithm comes up with will also differ from how someone else would.

If my cultural blunders could be visualized as an image of me covered in question marks, would the question marks dwindle as I interacted with more locals and multiply every time I moved to a new region? Algorithms can both dictate and influence people’s decisions, but isn’t that true of human interactions as well? The more often I interact with a group of people, the better I understand them and the more I might adjust my behaviour to interact with them; unlike with an algorithm, though, I can find out the exact reasons behind their behaviours.

Zeynep Tufekci’s (2017) TED Talk discusses how the choices algorithms make can affect our emotions and political beliefs. I wonder whether algorithms could be designed to let people explore differences, much as the op-ed section of a newspaper does. At the same time, I am worried about what I consider the darker side of our differences. Do I want to know why some people think the Holocaust is a hoax, or why some people are, for example, antisemitic? It would be helpful to know their reasons so that I could better address them, but I don’t want to be exposed to the vitriol that probably lies behind those reasons. Then again, if I never see that vitriol, could I be misled and misunderstand the impact of those feelings? If I were to design an algorithm that aims to provide a balanced argument, would I overcompensate for my biases and lean too far to the other side? Can technology distinguish between equality and equity? Perhaps it can sometimes, but not all the time. Technology should be something humans interact with, not something that replaces humans.
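To make the op-ed idea a little more concrete, here is a minimal, purely hypothetical sketch in Python. Nothing in it comes from Tufekci’s talk: the Article class, the “familiar”/“opposing” stance labels, and the opposing_quota knob are all my own inventions, standing in for whatever a real recommender would use. The sketch simply reserves a fixed share of feed slots for views outside the reader’s usual diet, instead of optimizing only for clicks.

from dataclasses import dataclass

@dataclass
class Article:
    title: str
    stance: str       # hypothetical label: "familiar" to the reader, or "opposing"
    relevance: float  # hypothetical engagement score from an upstream model

def op_ed_feed(articles, opposing_quota=0.3, feed_size=10):
    """Build a feed that reserves a share of slots for opposing views.

    opposing_quota is a made-up tuning knob: 0.3 means roughly three of
    every ten items come from outside the reader's usual viewpoint, the
    way an op-ed page sets space aside for dissenting columns.
    """
    familiar = sorted((a for a in articles if a.stance == "familiar"),
                      key=lambda a: a.relevance, reverse=True)
    opposing = sorted((a for a in articles if a.stance == "opposing"),
                      key=lambda a: a.relevance, reverse=True)

    n_opposing = round(feed_size * opposing_quota)
    feed = familiar[:feed_size - n_opposing] + opposing[:n_opposing]
    # Interleave by relevance so the opposing pieces are not simply
    # dumped at the bottom of the feed where no one scrolls.
    return sorted(feed, key=lambda a: a.relevance, reverse=True)

Even this toy version runs into the equality-versus-equity question above: a fixed quota treats every reader identically (equality), but it says nothing about whose opposing views are chosen, or whether I, as its designer, would set the quota in a way that overcompensates for my own biases.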

References

Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads [Video]. TED. https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads
