Task 11: Algorithms of Predictive Text

Microblog #1

Education is not about the risk of being able to make the necessary money for a non governmental organization. 

Microblog #2

Every time I think about our future and the kids a lot more stuff like this stuff and it’s not going on anymore.

I have played this game of composing messages using only predictive text many times throughout the pandemic with friends and family, as a way to laugh at the words our phones' predictive text chooses. I have also done something similar where you pick a set number of emojis in a row from your recently used list in answer to a question like "What will your 2021 be like?"

The video above shows the emoji version of this game: we laugh at what each month will supposedly bring and when we should be wary. It is a game, but the degree to which an algorithm drives it is a bit frightening. The algorithm counts which emojis I use most and surfaces them as "recently used," which makes choosing them again easier and more likely, feeding the cycle.
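To make that cycle concrete, here is a minimal Python sketch. It assumes the "recently used" tray is ranked by a simple usage counter, which is my guess rather than Apple's documented behaviour; the point is only that whatever is already at the top of the tray is the easiest thing to tap, and tapping it pushes it further up.

```python
from collections import Counter

# Hypothetical sketch: a "recently used" emoji tray ranked by a simple usage counter.
# Apple's real implementation is not public; this only illustrates the feedback loop.
usage = Counter()

def record_use(emoji):
    """Count every emoji the user actually sends."""
    usage[emoji] += 1

def recently_used(n=8):
    """The tray surfaces the n most-used emojis, making them easiest to tap again."""
    return [e for e, _ in usage.most_common(n)]

record_use("😂")            # the first-ever choice seeds the ranking
for _ in range(5):
    tray = recently_used()
    record_use(tray[0])     # the user taps the most convenient suggestion

print(recently_used())      # the early favourite keeps winning
```

Even in this toy version, the first emoji I ever favoured stays on top simply because the tray keeps offering it back to me.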

I tried this again in Gmail to see if the predictive text pattern changed. It did, a little: it offered me more emojis, and the sentence construction seemed oddly less professional. I would have assumed text messaging, not email, to be the home of that language-like "stuff." I was, however, still using my phone, so rather than Google predicting my text, I imagine iOS was still in charge.

 

Working from the premise that iOS predictive text is machine learning from my pattern of input, it is interesting to note that as I listened to the podcast "You Are Not So Smart," I followed along with the text typed in at the beginning. My phone did not offer only "she" for nurse, and it offered "they" or "she" for doctor. This tells me it is not a generic, unlearned algorithm, or my text choices would have matched the host's.

So what has the iOS text prediction tool learned about my speech patterns, and what is it assuming? I use my phone a significant amount for my union vice-president position, which involves messaging about educational issues, and the first microblog entry is close to the words and syntax I would use in that professional context; it is a repetitive conversation on my phone. The second microblog entry, however, was not seeded with a specific keyword like "educational," and so it triggered predictions more in line with my quick back-and-forth texts with family and friends; the language is closer to the abbreviated, sloppy style I use with people close to me. I am actually quite surprised that the machine learning in iOS is this sensitive to a keyword that provides a little context.
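To illustrate what pattern-based prediction of this kind might look like, here is a minimal sketch using a simple bigram (word-pair) frequency model. The iOS engine is proprietary and far more sophisticated, and the sample messages below are invented, but the sketch shows how a keyword like "educational" can pull the next-word suggestions toward one register while "kids" pulls them toward another.

```python
from collections import defaultdict, Counter

# Hypothetical sketch of pattern-based next-word prediction (a simple bigram model).
# The sample "history" stands in for my two registers: union work and family chat.
history = [
    "educational issues need funding support",           # professional messages
    "educational policy needs review before the vote",
    "the kids want pizza tonight lol",                    # family and friends
    "the kids and I are heading out soon",
]

bigrams = defaultdict(Counter)
for message in history:
    words = message.lower().split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1                            # count each word pair

def suggest(prev_word, n=3):
    """Suggest the n words most often typed after prev_word."""
    return [w for w, _ in bigrams[prev_word.lower()].most_common(n)]

print(suggest("educational"))   # pulls from the professional pattern
print(suggest("kids"))          # pulls from the casual pattern
```

The model has no idea what "professional" means; it only counts which words have followed which, so the keyword does all the work of setting the register.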

This context-sensitive choice of words and syntax is potentially a positive feature of the algorithm, since it echoes what Dr. Cathy O'Neil says about the context in which an algorithm is used. My predictive text almost certainly does not use context in an explicit decision-tree analysis for fairness; pattern-based learning is the likelier mechanism. That pattern-based mechanism can produce a pseudo decision-tree outcome, because it supports the idea of two contexts, but it is more likely the same self-propagating cycle found in codified recidivism scores, discussed in many of this week's podcasts. My use of the word "educational" shifts the predictive text into the pattern of language I use when responding to professional messages. It is a false sense of learned fairness.

This kind of surface-level success of predictive text is very similar to the false sense of CompStat's efficacy in measuring crime rates. At first glance it looks as though it is doing its job, but where is it not doing its job? That is why I wrote the second microblog and the email. I gave the text predictor all the data by copying and pasting the previous text for context, and the resulting email still came out in the colloquial language I use with family and friends. The algorithm could not work out which kind of text I was writing; it is only codified to make a best-guess prediction based on what was initially input, likely drawing on corpora such as the Enron emails, plus whatever text patterns I have created. And the amount of texting I do is heavily weighted toward short bursts with family and friends.

Text prediction on iOS is a far cry from sentencing algorithms or the health-prediction algorithms used by insurance agencies; its bearing on my life is an annoyance, not life or death. I can override the algorithm simply by typing the text I actually want. When algorithms are relied upon blindly, as Dr. O'Neil points out, we lose the ability to interject fairness through human decision-making and instead treat the output as a score created by science and therefore free of bias. In reality, societal bias is codified and embedded in algorithms, and it is amplified over time as the self-perpetuating cycle of an ill-used algorithm feeds more skewed data back into the machine learning.
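To show how that amplification can happen, here is a small, entirely hypothetical simulation of the feedback loop: a model trained on skewed records sends most of its attention to the group it already scores highest, the extra attention produces more records about that group, and the skew grows with every round. The numbers are invented purely for illustration.

```python
# Hypothetical, invented numbers: a sketch of the self-perpetuating cycle.
# A model trained on skewed records directs most new attention toward the group
# it already scores highest, which generates more records about that group and
# hardens the skew on the next pass. This is not any real system's data.
counts = {"A": 60, "B": 40}        # assumed initial skew in the historical records

for round_number in range(1, 6):
    flagged = max(counts, key=counts.get)   # the model's "high-risk" pick
    other = "B" if flagged == "A" else "A"
    counts[flagged] += 80                   # attention follows the score...
    counts[other] += 20                     # ...so the flagged group is over-recorded
    share = counts[flagged] / (counts["A"] + counts["B"])
    print(f"round {round_number}: {flagged} now holds {share:.0%} of the records")
```

Starting from a 60/40 split, the flagged group's share of the records climbs round after round, even though nothing about the underlying groups has changed.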

A few years back I had a unique experience at the BC TECH Summit listening to talks on the emerging field of AI ethics. The questions Dr. O'Neil asks were echoed in that research: what needs to be written into law, constitutions, and governance to demystify algorithms and create awareness and transparency about how they are built and what kind of collated data they produce? How do we ethically use algorithms as a powerful tool that is also just? Military, political, health, and judicial decisions are made internationally based on data collated by algorithms, and the widespread problem of racism is amplified in those decisions. Imagine we used an algorithm to predict which group will be the next to commit a terrorist attack. You need only follow the media trail of terrorism coverage to know that an algorithm trained on that published information would predict the next group will be from the Middle East. What if that drove all military action? The racism present in media coverage of terrorism could roll the military machine into war with a Middle Eastern country that is benign.

Algorithms are powerful tools currently being used in the wild-west landscape of modern data-driven society. We need to establish checks and balances that create a fair and just system, one that uses algorithms as a tool with awareness of their bias.

 

References

 

McRaney, D. (n.d.). Machine Bias (rebroadcast). In You Are Not So Smart. Retrieved from https://soundcloud.com/youarenotsosmart/140-machine-bias-rebroadcast

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (First edition). New York: Crown.

O’Neil, C. (2017, April 6). Justice in the age of big data. Retrieved June 18, 2019, from ideas.ted.com website: https://ideas.ted.com/justice-in-the-age-of-big-data/

O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Observer. Retrieved from https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies

Santa Clara University. (2018). Lessons from the AI Mirror – Shannon Vallor.

The Age of the Algorithm. (n.d.). In 99 Percent Invisible. Retrieved from https://99percentinvisible.org/episode/the-age-of-the-algorithm/

TED-Ed. (2013). What’s an algorithm? – David J. Malan.

Vogt, P. (n.d.-a). The Crime Machine, Part I. In Reply All.

Vogt, P. (n.d.-b). The Crime Machine, Part II. In Reply All.
