Task 11 – Algorithms of Predictive Text

For this week’s task, I tested the predictive text algorithm of my phone’s messaging app. In the first attempt, I selected whichever suggested words I felt made the most sense as a coherent message. Since predictive text offers three candidate next words, the second attempt deliberately selected only the ‘left’ suggestion, the third only the ‘middle’ suggestion, and the fourth only the ‘right’ suggestion. For consistency, I used the same prompt for all four microblog attempts:

(1) Selective choice:  

Every time I think about… our future in our lives we will be relying upon the best of all worlds to make sure that our lives are not the same as the last. We have been blessed by our family and friends and we have been very happy with our new home. I am sure that we can find a way to make a difference in the lives of our children and their lives and the future.

(2) Left choice only: 

Every time I think about… you and the family had some great time to talk to me and our kids we were very happy to have tutoring today until 5. We have been in a lot more of our time and we have quiet time to get back to work with our parents. We have been very busy and I have not been able to make our time to meet with the family and friends.

(3) Middle choice only: 

Every time I think about… it I will be there for you and Electra tomorrow. I will see you and Electra tomorrow and I will be there as soon as I can. I will see you and Electra tomorrow and I will be there as soon as I can. I will see you and Electra tomorrow and I will be there as soon as I can.

(4) Right choice only: 

Every time I think about… how you are able I don’t know if there are a few minutes of the Bible study that I am just not a big word. But if it doesn’t have a good night then it would have been good for you both and we are so glad to be working in your prayers with your friends on this.

What is remarkable is the variability between the microblogs: they share similar grammar and vocabulary, yet their meanings and contexts differ vastly. Humorously, the third microblog (where I selected only the middle choice) collapsed into an endless loop of the same sentence, likely drawn from sentence structures I frequently composed while coordinating tutoring with a family. I should add that my own intervention, adding a period to complete each sentence, may have shaped the final product: the concluding punctuation prompted the predictive algorithm to offer sentence-starting pronouns rather than continuations of the sentence before.
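The looping behaviour can be illustrated with a toy model. Phone keyboards use far more sophisticated language models than this, but a minimal bigram (Markov-chain) sketch shows why always accepting the single most likely suggestion settles into a repeating cycle. The sample “message history” below is my own invented stand-in, not actual data:

```python
from collections import Counter, defaultdict

# Toy "message history": invented sentences echoing the tutoring texts
# (illustrative only, not real data).
history = (
    "i will see you and electra tomorrow and i will be there as soon as i can "
    "i will see you and electra tomorrow and i will be there as soon as i can "
    "i will see you and electra tomorrow"
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(history, history[1:]):
    following[current_word][next_word] += 1

def top_suggestion(word):
    """The single most likely next word -- the 'middle' suggestion."""
    return following[word].most_common(1)[0][0]

# Greedily accepting the top suggestion soon settles into a cycle.
word = "i"
message = [word]
for _ in range(15):
    word = top_suggestion(word)
    message.append(word)

print(" ".join(message))
# Ends in a repeating "... and electra tomorrow and electra tomorrow" loop
```

Because the greedy choice is deterministic, as soon as the model revisits a word it has seen before, the chain of “most likely next words” repeats forever, exactly like the third microblog.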

Thankfully, the only context in which I have read statements like the ones I generated is the text messaging platform itself. My preferred alternative is WhatsApp, where my communication is decidedly more casual. As O’Neil (2017) argues, algorithms are not objective; their decision-making is based on subjective data that is continually fed back in an endless loop. Each application likely has its own predictive algorithm, so each will be tailored by different kinds of data: more formal sentences and grammar for text messaging, and more casual sentences and grammar (even emojis!) for WhatsApp.

Although it is much more difficult to demonstrate how email uses predictive text algorithms, I notice that my email compositions do offer ‘auto-complete’ for sentence structures I use frequently in my writing. Unlike the text algorithms on my phone, the email auto-complete waits for me to input a few words (as if determining whether my sentence will match previous emails) before offering to complete the sentence for me. As an educator, many of my emails to parents, colleagues, and students follow a typical professional format, so the auto-completion function does serve a purpose in quickening my writing and saving time. Notably, this predictive algorithm is likely linked to my Google account, so it is widespread enough to follow me whenever I sign in on other devices. Although certainly widespread, it is thankfully neither mysterious nor destructive (O’Neil, 2017).
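That “waiting” behaviour can be sketched in a few lines. This is purely my own illustration with made-up sentences and a made-up threshold, not how Gmail’s Smart Compose actually works internally: the idea is that a completion is only offered once the typed prefix is long enough to match exactly one stored sentence.

```python
# Sketch of the "wait before suggesting" behaviour of email auto-complete.
# The sentence list and min_words threshold are invented for illustration.

frequent_sentences = [
    "thank you for reaching out to me",
    "thank you for your patience",
    "please let me know if you have any questions",
]

def suggest_completion(typed, min_words=3):
    """Offer the remainder of a stored sentence, but only once the
    typed prefix is long enough and matches exactly one candidate."""
    words = typed.lower().split()
    if len(words) < min_words:
        return None  # too early: keep waiting for more input
    prefix = " ".join(words)
    matches = [s for s in frequent_sentences if s.startswith(prefix)]
    if len(matches) == 1:
        return matches[0][len(prefix):].strip()  # unambiguous: suggest
    return None  # no match, or still ambiguous: stay silent

print(suggest_completion("thank you"))           # None (still waiting)
print(suggest_completion("thank you for"))       # None (two candidates match)
print(suggest_completion("thank you for your"))  # patience
```

The design choice mirrors what I observe in my inbox: staying silent while the prefix is short or ambiguous makes the suggestion feel helpful rather than intrusive.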

When I consider the use of algorithms in the public space, I am reminded of the feedback loops they perpetuate: just as policing generated new data that in turn justified more policing (O’Neil, 2017), algorithms have recently acted as an accelerant for conspiracy theories (Vallor, 2018). Algorithms, in their intended purpose, support humans in simple, task-specific functions. Though they are not sentient, their computing power depends on what we, as their users and audience, feed them as data. The creation and implementation of algorithms has likely pushed philosophies to extremes on every side; as we examined in previous weeks, companies may manipulate algorithms for profit by suggesting ever more controversial material, luring users toward escalating forms of extremism. This danger is serious. The more we surrender control to algorithms without first reflecting on ourselves as data sources, the more we risk an amplified expression of our moral failures and deficiencies.

This week’s module has shown me the importance of taking technology slowly and reflecting frequently. The rapid implementation of a technology (and its algorithms) may seem life-saving in the moment, just as CompStat did for police work, but because humans are imperfect, the source data for technology will also be imperfect. We simply will not know how good a technology or algorithm is until a significant amount of data has accumulated over time.

References:

O’Neil, C. (2017, April 6). Justice in the age of big data. ideas.ted.com. Retrieved June 18, 2019, from https://ideas.ted.com/justice-in-the-age-of-big-data/

O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Observer. Retrieved from https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies

Santa Clara University. (2018). Lessons from the AI mirror [Lecture by Shannon Vallor].
