Task 11: Algorithms of Predictive Text
“Algorithms that shit where they eat.”
“As a society, we are going to have to be careful about what you do with your friends and family members who were not in the very first thing you need to do. The first thing that comes to mind that fact that the government has not been able to find a way to make the best of both worlds. The only things I would like to see more of a challenge to be able to get the job done.”
The short phrases and word choices the predictive text produced are the kind often found in magazines, social media posts, online articles, and everyday speech. Phrases like “the first thing”, “best of both worlds”, or “friends and family members” are quite common. I found this task challenging because the predictive text often produced grammatical errors and incomplete ideas, or suggested words I did not intend to use. As I reread and reflected on the passage above, I can see moments where the predictive text resembles my own voice, but overall, the voice does not sound like me.
I think this predictive text is an example of why algorithms are not reliable. They cannot be trusted to convey our thoughts and how we interpret the world around us, simply because an algorithm is a machine, not a mind. It can only build a mathematical model; it cannot interpret the world around it (ep. 140, Machine Bias, You Are Not So Smart).
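To make that point concrete, here is a minimal sketch of the statistical logic behind predictive text: count which words have followed which in past writing, then suggest the most frequent continuation. The toy corpus is built from the stock phrases above and is purely my own illustration, not how any particular keyboard actually works. There is no meaning in such a model, only frequencies.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "past text" the model has seen.
# Illustrative only: a real keyboard model trains on vastly more data.
corpus = (
    "the first thing that comes to mind "
    "the best of both worlds "
    "friends and family members "
    "the first thing you need to do"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word):
    """Suggest the word most often seen after `word` in the past text."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))   # -> 'first' (seen twice after 'the')
print(predict("best"))  # -> 'of'
```

The model “predicts” only in the sense of replaying the past: whatever phrasing was most common in the text it was trained on is what it offers next, which is exactly why the generated paragraph above reads as a collage of clichés.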
When it comes to hot-topic prompts about society, education, or even predicting the future, algorithms can only extrapolate from the past. However, there is hope and potential in algorithms through AI. Dr. Vallor explains that through “narrow AI”, systems governed by a fixed and finite set of rules, we can achieve machine-augmented cognition: self-driving cars, virtual assistants, decision support systems, and social robots that fill gaps. These examples could be the future. Bolter (2001) examined the importance of shifting from one technology to another, noting that a new technology always claims to be better than the one it remediates. Cathy O’Neil, Dr. Shannon Vallor, and many others describe the major benefits of this remediation and of the use of algorithms; however, as we move forward it is essential to remember that morality and ethics reside entirely in the human mind, and that “algorithms are nothing more than opinion embedded in code” (Cathy O’Neil, Google Talk).
Resources
Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print. New York, NY: Routledge.
McRaney, D. (n.d.). Machine bias (rebroadcast). In You Are Not So Smart. Retrieved from https://soundcloud.com/youarenotsosmart/140-machine-bias-rebroadcast
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (First edition). New York: Crown.
O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Observer. Retrieved from https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies
Santa Clara University. (2018). Lessons from the AI Mirror [Talk by Shannon Vallor].
The Age of the Algorithm. (n.d.). In 99 Percent Invisible. Retrieved from https://99percentinvisible.org/episode/the-age-of-the-algorithm/