Intellectual Production #3: Algorithms – Everyday Life (Option 2)
Neyland (2019) gives voice to what many may already sense: algorithms now play an integral part in shaping the everyday (p. 12). They are pervasive. Some are overt, like personal assistants; many others are invisible, curating what we see and hear, who we interact with, who gets interviewed for jobs, whether we get a mortgage, or who we go on a date with, all fundamental aspects of our everyday life (Neyland, 2019; Noble, 2018). This pervasiveness is troubling in light of algorithmic bias: a misreading of the data can have long-lasting impacts in many different ways (“Algorithmic Bias”, 2022; Noble, 2018).
Algorithms, and the artificial intelligences they govern, need to be trained, retrained, and tested on a regular basis to ensure they are operating as programmed and at peak efficiency, which requires vast amounts of data (Crawford, 2021). In the past, this data was hard to come by, and collecting it required following rigorous ethics rules and procedures, central to which is the concept of informed consent (Crawford, 2021). The expansion of internet usage starting in the late 1990s meant that much greater amounts of data were available to be collected and used for testing and training purposes, while informed consent fell into decline (Crawford, 2021). The advent of social networking meant that millions of people were freely posting data about themselves, much of it captioned and classified, ready for use by algorithms (Crawford, 2021). This represents huge volumes of data about us, beyond what anyone thought possible twenty years ago, being fed into algorithms every day, often without users’ knowledge (Crawford, 2021).
Every action on the internet, every search, click, ‘like’, purchase, post, comment, Gmail message, or Google Doc, now represents consent (Crawford, 2021; Noble, 2018). The concept of fair use has shifted to the point where even the use of our own images by others is not considered legally problematic (“Fair Use”, 2022). Beyond the invasiveness of these practices, and the shockingly large amount of information people are providing, knowingly or not, to algorithms that shape their interactions with the world around them, this has serious implications in schools. Under FIPPA (the Freedom of Information and Protection of Privacy Act) in BC, and similar laws in other parts of the world, potentially identifiable student information is being gathered and used in ways that are not open to accountability (Noble, 2018). How do we protect our students, their privacy, and their identities when we are increasingly required to build internet services that are not neutral into our curriculum and practices (Noble, 2018)?
The concept of the internet as a democratizing space, one that provides neutral and equal access to information and a safe space to express oneself, has become a myth. As Noble (2018) points out, most users are unaware of the commercialization of the sites and services they use each day, or of how those sites generate income from users. Over time these corporations have grown and diversified to the point where they control many of the access points people use to interact with the internet, such as search and email. They are concerned primarily with making a profit, often directing users away from competitors’ sites and services (Noble, 2018).
Most of the programmers and developers of algorithms are white males from North America or Europe, and their inherent biases are encoded into their creations, either at the programming stage or in the data used to train and test their algorithms. Without representative programmers, and without datasets that truly represent the diversity of humanity, we will see a reinforcing of discrimination on the basis of race, gender, and sexuality, from the denial of loans and the failure of facial recognition to the lack of access to appropriate services and the over-policing of minority communities (“Algorithmic Bias”, 2022; Buolamwini, 2019; Crawford, 2021; Hao, 2020; Heilweil, 2020; Noble, 2018).
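The mechanics of this are simple enough to demonstrate. The following is a minimal sketch, not drawn from any of the sources cited above: the groups, features, labelling rules, and sample sizes are all invented for illustration. It shows how a standard classifier trained on data dominated by one group can work well for that group while failing an underrepresented one, without anyone programming it to discriminate.

```python
# Hypothetical illustration of bias from an unrepresentative training set.
# All data here is synthetic; nothing is taken from a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's examples cluster around a different point, and the
    # "correct" decision boundary differs between groups.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; Group B is barely present.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh, equal-sized samples from each group.
for name, shift in [("A (well represented)", 0.0),
                    ("B (underrepresented)", 2.0)]:
    X_test, y_test = make_group(500, shift)
    print(f"Group {name}: accuracy = {model.score(X_test, y_test):.2f}")
```

In this toy setup, the well-represented group scores near-perfect accuracy while the underrepresented group hovers near chance; the skewed data alone produces the discriminatory outcome.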
Algorithms are constantly editing and shaping what we see in every interaction with them, from search results to the thumbnail images for movies on streaming services; no two people receive the same experience or results from their interactions with algorithmic agents (Crawford, 2021; Noble, 2018; Taylor, 2021). Corporations will tout the benefits of curating a search result, or what appears in our social media or news feeds; however, the tailoring of results to match our interests can lead, very quickly, to manipulation, where we see only what the algorithm wants us to see, a decision based on relatively few data points from past actions (Crawford, 2021; Noble, 2018). This has resulted in everything from the influencing of voting patterns, or whether one votes at all, to the propagation of racism, sexism, and other forms of discrimination (Noble, 2018).
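How few data points are needed is striking. The toy sketch below is hypothetical, not a description of any real platform’s ranking system; the topics, click probability, and scoring rule are invented. It shows how a feed that always ranks by learned interest weights can lock onto a single topic after a single click.

```python
# Hypothetical toy feed: each click boosts a topic's weight, and the
# highest-weighted topic is recommended again, reinforcing itself.
import random

random.seed(1)
topics = ["news", "sports", "music", "politics"]
weights = {t: 1.0 for t in topics}  # start with no known preference

def recommend():
    # Always show the currently highest-weighted topic (no exploration).
    return max(weights, key=weights.get)

for step in range(10):
    shown = recommend()
    if random.random() < 0.7:   # the user clicks most of what is shown
        weights[shown] += 1.0   # one click shifts all future rankings
    print(f"step {step}: shown={shown!r}, weights={weights}")
```

After the first click, the same topic is shown at every subsequent step. Real systems are far more sophisticated, but the reinforcing loop between what is shown and what is clicked has the same basic shape.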
The concept of a friend has drastically shifted in the era of social media and streaming platforms, now meaning something closer to a follower than someone you know a great deal about and would do almost anything for, with many of these connections based on the recommendations of algorithms. As we surrender more and more of our agency to algorithms that tell us who would make good friends or partners, there is a much greater chance of people, or programs, gaming the system to seem compatible and so expand their ‘friends’ count, and therefore their sphere of influence (Noble, 2018). The risk, especially for youth and the vulnerable, is that these ‘friend’ suggestions are not who they purport to be and are using the system to exploit them in different ways: influencers seeking economic gain, recruiters for gangs and the sex trade, and the conditioning of youth into harmful and hateful beliefs through the messages of ‘friends’ or the banner ads that appear in their Facebook or Instagram feeds (“Algorithmic Bias”, 2022; Buolamwini, 2019; Hao, 2020; Heilweil, 2020; Noble, 2018).
The pervasive and agentic nature of algorithms makes education about them ever more important: while most people will never understand how algorithms work internally, the average citizen needs to understand the role algorithms play in everyday life so they can give informed consent through their actions (Crawford, 2021; Neyland, 2019; Noble, 2018; Slavin, 2011).
References
Algorithmic bias. (2022, June 10). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Algorithmic_bias&oldid=1092443375
Buolamwini, J. (2019, February 7). Artificial Intelligence Has a Problem with Gender and Racial Bias. Here’s How to Solve It. Time. https://time.com/5520558/artificial-intelligence-racial-gender-bias/
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press. (Introduction & Chapter 3: Data, pp. 1-21 & 87-121). https://doi.org/10.12987/9780300252392
Fair use. (2022, June 8). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Fair_use&oldid=1092180146
Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. Here’s what it says. MIT Technology Review. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru
Heilweil, R. (2020, February 18). Why algorithms can be racist and sexist. A computer can make a decision faster. That doesn’t make it fair. Vox. https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency
Neyland, D. (2019). The everyday life of an algorithm. Springer International Publishing. (Chapter 1, pp. 1-20). https://doi.org/10.1007/978-3-030-00578-8_1
Noble, S. U. (2018). Algorithms of oppression. New York University Press. (Introduction, Chapter 1, Conclusion)
Slavin, K. (2011). How algorithms shape our world [Video]. TEDGlobal. https://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world?language=en#t-896320
Taylor, A. (2021, February). Are streaming algorithms really damaging film? BBC News. https://www.bbc.com/news/entertainment-arts-56085924