Task 11: Algorithms of Predictive Text

Our task this week was to explore predictive text on our phones using one of the five provided prompts and create a microblog.  

Before I began this activity, I first had to enable predictive text on my iPhone 8s. It was a feature I had disabled because I found it so annoying when texting and rarely used it. Sure, I liked autocorrect, and the suggestions could help me input a word faster when I wanted to send a quick text, but since I never really used them, I had turned the feature off.

After enabling the predictive text feature, I began to play around with the suggested prompts to see what would unfold as I chose the words that sprang up along the way. This reminded me of those fun Facebook prompts people share every so often, where you type in a sentence starter and then only use the middle suggested word. Whatever showed up, you would post and share with your friends, and everyone would have a giggle about how the sentences turned out. For this task, however, I was not just choosing the middle word that popped up but selecting from a few word choices and even some emoji.

To begin, I decided to go with the prompt: This is not my idea of….

Then I also decided to try: My idea of technology is…

As well as: Education is not about…

Constructing these sentences was a bit entertaining; however, none of the predictive-text-generated sentences reflected what I would have actually written. They sounded stilted and lacked substance, or even coherence. If I had not stopped each sentence at a seemingly natural break, I could have continued choosing words at whim until I got tired. The diction choices seem to draw mostly on words, and even emoji, that I have used in other messages. My phone is older, so I would assume it has a larger library of words to draw from, assuming the algorithm is pulling data from my phone. Or perhaps it is simply surfacing words I have used a certain number of times, which is why I see more of the words I commonly use; hence the “predictive” part of predictive text. This brings me to consider this week’s readings and podcasts about predictive text and AI algorithms, in particular how algorithms can break massive amounts of text down to their dominant themes and patterns.
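My guess that the keyboard is surfacing word pairs I have typed often can be sketched in a few lines of code. This is only an illustration of the frequency-counting idea, not how Apple's keyboard actually works (modern systems use far more sophisticated language models); the sample message history is made up for the example.

```python
from collections import Counter, defaultdict

# Toy message history standing in for a user's past texts (invented data).
history = "i am going to the store i am going to school i am tired".split()

# Count which word follows each word in the history.
following = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    following[prev][nxt] += 1

def suggest(word, k=3):
    """Return up to k words most frequently seen after `word`."""
    return [w for w, _ in following[word].most_common(k)]

print(suggest("am"))     # "going" ranks above "tired" because it occurred twice
print(suggest("going"))  # only "to" has ever followed "going" here
```

A predictor like this would naturally echo back your own habits, which matches my sense that the suggestions were drawn from words I commonly use.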

I have used the predictive text feature in Google Docs, and it does seem to give me more appropriate options that “sound” like me when I write. I have used it from time to time when writing up lesson plans and assignments for my students. This reminds me of what both O’Neil and Vallor mention about how AI algorithms can be designed to support human growth and development. Algorithms can be a valuable tool to help us become better in our professions, as they can pick out patterns and help inform our decisions.

When I think of algorithms and politics, I immediately think of Donald Trump’s time in office in the United States. I remember his complaints that Google’s algorithms were biased towards negative news articles about him and that these same algorithms were silencing other conservative voices. In my opinion, those algorithms were working just fine, as most of the news I remember seeing was about the disastrous things he was doing. If this news was playing out on TV, then it seems reasonable that these kinds of articles would show up most often in Google searches for his name. I think these search results also suggest that a majority of Americans and others doing Google searches probably lean more liberal in their views, which would explain why liberal-leaning articles tend to show up more in searches.

 

References:

Bonazzo, J. (2018, August 18). Trump Thinks Google’s Algorithm Is Biased—Here’s Why It’s (Probably) Not. The Observer. Retrieved from https://observer.com/2018/08/google-donald-trump-seo-algorithm/

McRaney, D. (n.d.). Machine bias (rebroadcast). In You Are Not So Smart. Retrieved from https://soundcloud.com/youarenotsosmart/140-machine-bias-rebroadcast

O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Guardian. Retrieved from https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies

Santa Clara University. (2018). Lessons from the AI Mirror, with Shannon Vallor [Video].

The Age of the Algorithm. (n.d.). In 99 Percent Invisible. Retrieved from https://99percentinvisible.org/episode/the-age-of-the-algorithm
