Task 11: Algorithms of Predictive Text

Whether you like it or not, and whether you know it or not, algorithms are an integral part of our lives today. This last week made it quite apparent how destructive they can be when people rely too heavily on an algorithm simply because they trust the math, or because they do not understand it.
This last week we heard from Cathy O’Neil, a data scientist who described several injustices happening today as a direct result of trust in algorithms. In the 99 Percent Invisible podcast (episode 274) she explains that algorithms tend to appear wherever there is a difficult decision or conversation: they let us avoid the hard choices. We also learned how algorithms are failing society, with a central example from the criminal justice system, where humans give computers the power to create a negative feedback loop, one in which the output is tied back to the input in an unexpected or hidden way.

“Algorithms that shit where they eat.”

Another example of a negative feedback loop is teacher assessment in the U.S. under the Value-Added Model, which O’Neil discusses in a Google Talk. Teachers are given points for and against them depending on how their students scored. O’Neil recounts the story of a teacher, a good teacher, who was fired because the students’ previous year’s teacher had cheated on the test, giving the students higher scores than they deserved.
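
To make that hidden coupling of input and output concrete, here is a minimal sketch, with made-up numbers of my own, of a naive value-added calculation; it is an illustration only, not O’Neil’s model or the real Value-Added Model. Because the calculation trusts last year’s scores as honest input, an inflated prior score makes a perfectly good teacher look like they subtracted value:

```python
# Toy value-added calculation: my own illustration, not the real
# Value-Added Model, which is far more complicated.

def value_added(prior_score: float, current_score: float,
                expected_gain: float = 5.0) -> float:
    """Actual score minus the score the model predicts from last year."""
    predicted = prior_score + expected_gain
    return current_score - predicted

# Honest classroom: students really scored 70 last year, 76 this year.
print(value_added(prior_score=70, current_score=76))   #  1.0 -> "good" teacher

# Same students and the same real progress, but last year's teacher
# cheated and reported 85. The model now blames the current teacher.
print(value_added(prior_score=85, current_score=76))   # -14.0 -> "bad" teacher
```

The arithmetic is correct, yet the conclusion is wrong, because the model has no way to know that its input was corrupted.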
This week’s task had us create a microblog using only the predictive text selections. David McRaney opens episode 140 of his podcast You Are Not So Smart by talking about predictive text algorithms and how they rely on word2vec, which builds word representations from an enormous bank of words taken from past documents and data; these word banks can contain some three million words. The machine follows patterns in sequences of words in order to predict what comes next. For my microblog I chose the prompt: As a society, we are…

“As a society, we are going to have to be careful about what you do with your friends and family members who were not in the very first thing you need to do. The first thing that comes to mind that fact that the government has not been able to find a way to make the best of both worlds. The only things I would like to see more of a challenge to be able to get the job done.”

The short phrases and word choices shown in the predictive text are often ones found in magazines, social media posts, online articles, and everyday speech. Phrases like “the first thing,” “best of both worlds,” or “friends and family members” are quite common. I found this task challenging because the predictive text often produced grammatical errors, incomplete ideas, or words that I did not intend to use. As I reread and reflected on the small piece above, I did find that the predictive text shares some similarities with how I use my voice in text at times, but overall the voice does not sound like me.
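
As a rough sketch of the pattern-following McRaney describes (an assumption for illustration; real keyboards use far larger, personalized models, not this exact approach), a toy next-word predictor can be built from bigram counts over a small corpus. The suggestion is always whichever word most often followed the current word in the training text, which is exactly why stock phrases like “best of both worlds” keep resurfacing:

```python
from collections import Counter, defaultdict

# Toy next-word predictor built from bigram counts over a tiny,
# invented corpus. A sketch of the general idea only.

corpus = (
    "the first thing you need to do is make the best of both worlds "
    "friends and family members are the first thing that comes to mind"
)

# Count how often each word follows each other word.
followers = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def suggest(word: str, k: int = 3) -> list:
    """Return the k words that most often followed `word` in the corpus."""
    return [w for w, _ in followers[word].most_common(k)]

print(suggest("the"))    # ['first', 'best']
print(suggest("best"))   # ['of']
```

Feed the top suggestion back in, word after word, and you get text much like the microblog above: locally fluent, globally adrift.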

I think this predictive text exercise is an example of why algorithms are not reliable. They cannot be trusted to convey our thoughts and how we interpret the world around us, simply because a machine is not a mind. Algorithms can only create a mathematical model; they cannot interpret the world around them (ep. 140, Machine Bias, You Are Not So Smart).

When it comes to hot-topic prompts about society, education, or even predicting the future, algorithms can only predict the future based on the past. However, there is hope and potential in algorithms through AI. Dr. Vallor shares that through “narrow AI,” a fixed and finite set of rules, we can achieve machine-augmented cognition: self-driving cars, virtual assistants, decision-support systems, and social robots that fill gaps. These examples could be the future. Bolter (2001) looked at the importance of shifting from one technology to another, noting that a new technology always claims to be better than the one it is remediating. Cathy O’Neil, Dr. Shannon Vallor, and many others share the major benefits of the remediation of AI and the use of algorithms; however, it is essential to remember as we move forward that morality and ethics reside entirely in the human mind, and that “algorithms are nothing more than opinions embedded in code” (Cathy O’Neil, Google Talk).

Resources

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print. New York, NY: Routledge.

McRaney, D. (n.d.). Machine bias (rebroadcast). In You Are Not So Smart. Retrieved from https://soundcloud.com/youarenotsosmart/140-machine-bias-rebroadcast

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (1st ed.). New York, NY: Crown.

O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Observer. Retrieved from https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies

Santa Clara University. (2018). Lessons from the AI mirror [Talk by Shannon Vallor].

The age of the algorithm. (n.d.). In 99 Percent Invisible. Retrieved from https://99percentinvisible.org/episode/the-age-of-the-algorithm/
