My predictive text and I have something very important to say:
As a society, we are in a good place to make a good point. The only problem is that the game doesn’t play with the other games and I can’t stop playing it.
Why does my predictive text sound like a dude bro in his first year of a philosophy major? I truly hope that outside of my predictive text, I don’t sound like someone who is stereotypically high, waxing poetic but never actually saying anything of substance (that would be mortifying). Is this really how I write? Is predictive text a reflection of who we are? In some ways, the answer is both yes and no. Predictive text uses machine learning to assess which words we tend to use most often and creates a personalized dictionary that scores those words based on the probability that we’ll use them again (NBC News, 2017). The algorithm also uses something called “probabilistic language modelling,” which considers the context of what is being written and how certain words tend to go together (NBC News, 2017). Ultimately, the algorithm behind predictive text is a combination of machine learning (about how I speak) and language probability (how everyone speaks). This leads me to wonder: how much of the predictive text is me, and how much of it is the language model?
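To make that combination a little more concrete, here is a toy sketch of how those two signals might be blended. Everything in it is invented for illustration: the tiny corpora, the 0.3 weighting, the function name. Real keyboards use far more sophisticated models than this, but the basic idea, scoring candidate next words by mixing a general language model with a personal frequency dictionary, looks something like this:

    # Toy illustration only: a general bigram model ("how everyone speaks")
    # blended with a personal word-frequency dictionary ("how I speak").
    from collections import Counter, defaultdict

    GENERAL_CORPUS = "we are in a good place . we are going to be able to make it . we are in a good mood".split()
    MY_TYPING_HISTORY = "society is critical . I am critical of society . society should change".split()

    # Probabilistic language modelling: count which word tends to follow which.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(GENERAL_CORPUS, GENERAL_CORPUS[1:]):
        bigrams[prev][nxt] += 1

    # "Machine learning about me," reduced here to raw personal word counts.
    personal_freq = Counter(MY_TYPING_HISTORY)

    def suggest(prev_word, k=3, personal_weight=0.3):
        """Rank candidate next words by a weighted mix of the two signals."""
        candidates = set(bigrams[prev_word]) | set(personal_freq)
        def score(word):
            general = bigrams[prev_word][word] / (sum(bigrams[prev_word].values()) or 1)
            personal = personal_freq[word] / (sum(personal_freq.values()) or 1)
            return (1 - personal_weight) * general + personal_weight * personal
        return sorted(candidates, key=score, reverse=True)[:k]

    print(suggest("are"))  # ['in', 'going', 'society'] with this toy data

Even in this toy version you can see the tension the rest of this post pokes at: the general model will happily finish “we are…” with “in a good place” unless the personal signal is strong enough to pull the suggestions somewhere else.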
Interestingly, some predictive text, such as Smart Compose in Gmail, is based on a finite corpus of emails from Enron (though I’m unclear whether all predictive text uses this base of emails for developing its algorithm) (Mars, 2020). As Amanda Levendowski points out, it doesn’t take much to imagine how basing predictive text and other AI on a specific set of emails might bias the technology (Mars, 2020). Is it possible that the probabilistic language modelling of my phone’s predictive text algorithm is based on the emails of Enron dude bros embroiled in an early-aughts corporate scandal?
In some ways, using predictive text feels like using a Ouija board with your friends when you’re eleven years old. You may think it’s random, or believe that it’s ghosts, but in reality you and your friends exert force on the planchette to make it spell something coherent (Romano, 2018). Similarly, predictive text is not random, nor is it a ghostwriter; we guide it through the choices we make, and it coughs out a sentence.
I tried the predictive text a few times with a few different prompts, partly because it was so fun, but also to see if I could get a result that I thought sounded like me. Interestingly, I found the predictive text leaned toward a positive affect; for example, the words ‘good’ and ‘great’ came up quite often (as did the word ‘birthday’). Every time I put in the prompt “as a society, we are…” I got sentences like “we are going to be able to make it,” “we are in a good mood,” or “we are in a good place.” I don’t think I would ever finish that prompt that way. I’m actually quite critical of society, but the predictive text never gave me the option!
I also found that the predictive text never really led to saying anything particularly meaningful. Consider this sentence it generated: “I think we should have some more of the things we need to make sure we are doing the next year.” It reminds me of a segment of John Oliver’s Last Week Tonight in which Oliver compares Trump’s actual presidential speech patterns with those of predictive text (Last Week Tonight, 2017, 3:10).
As an extension of this activity, I also tried using predictive text for something that I wanted to say (rather than using a provided prompt and seeing where the predictive text meanders). In the example below, I’ve highlighted the words that the predictive text offered that were indeed predictive of what I was intending to write (I did not include the words that the predictive text offered after I had typed the first few letters of a word):
You should check out the episode “you’ve got mail” from 99% invisible which is about how basically all AI technology, like predictive text, is based on the released emails from Enron.
Certainly, there is a difference between using predictive text to generate content and using it to support content. In a study of predictive text, Arnold et al. (2020) found that using it to caption photographs led people to write shorter, more predictable captions. When I think about predictive text in various contexts, I can see its value in situations that require more formulaic writing or where brevity is valued. Writing work emails is one example of this type of context: there is a specific language and way that people conduct themselves in work emails. Anyone who has had to write or receive a work email will be well acquainted with the phrases “please see the attached document” and “at your earliest convenience,” and the dreaded “as per my last email.” However, in the end, I would make the case that predictive text works best when we use it to support our ideas, not generate them.
References
Arnold, K., Chauncey, K., & Gajos, K. (2020). Predictive text encourages predictable writing. In Proceedings of the 25th International Conference on Intelligent User Interfaces (pp. 128–138). https://doi.org/10.1145/3377325.3377523
Last Week Tonight [LastWeekTonight]. (2017, November 13). The Trump presidency: Last Week Tonight with John Oliver (HBO) [Video]. YouTube. https://www.youtube.com/watch?v=1ZAPwfrtAFY
NBC News. (2017, November 8). Predictive texting: How your phone’s keyboard figures out what you might type next [Video]. YouTube. https://www.youtube.com/watch?v=OfzMkERVFu8
Mars, R. (2020, November 9). You’ve got Enron mail (No. 421) [Audio podcast episode]. In 99% Invisible. Radiotopia. https://99percentinvisible.org/episode/youve-got-enron-mail/
Romano, A. (2018, September 6). How Ouija boards work. (Hint: It’s not ghosts.). Vox. https://www.vox.com/2016/10/29/13301590/how-ouija-boards-work-debunked-ideomotor-effect