Task 11: Predictive Text

For this week, we are asked to develop a microblog using a sentence starter. In exploring predictive text apps online, I discovered https://botnik.org/apps/writer/. This website allows you to generate predictive text based on different text sets; so, for example, you can generate predictive text in the voice of John Keats. You can also upload a text file to generate your own voice. In the spirit of examining predictive texts, I have generated several microblogs to examine the differences.

John Keats “voice” microblog: 

My idea of technology is wither’d from a wayward midnight palely draught with pleasant fans no heat in language.

Bob Marley lyric microblog:

Every time I think about our future takin yoy youths want every chorus be jammin right out.

Beauty advertisement microblog:

Education is not about diapers your skin will feel perfect with vitamin enriched curves

Jeff Bezos quotes microblog:

As a society, we are now choosing creativity more than any other cause on amazon.com

Generated from the text of my task submissions 1-10 (aka my voice):

This is not my idea of the algorithms that most things really need to communicate with our dog

This experiment demonstrates that the text sources predictive text algorithms use matter! As you can see in the microblogs, as the text source changes, so too do the predictive text options. While the John Keats voice is eloquent, moving to the beauty advertisement led to a nonsensical microblog. I will admit that I definitely had fun generating the microblogs…and I may have been attracted to the most ridiculous option in the predictive texts, so they might be slightly skewed. However, this also has to do with the options themselves. Some of the generators were so far from how I think and speak normally that I felt I might as well just pick something random. For me, the beauty advertisement and Bob Marley texts were the most difficult, while the John Keats and Jeff Bezos texts made it easier to find a voice. While I would love to think this is because I am both eloquent and an intelligent business mind, I think the more probable answer is cultural influence, as it seems to be the common thread that binds me to the two figures. Plus, both Keats and Bezos have influenced contemporary culture in the West in different ways, so there is some familiarity with their texts.

Interestingly enough, I do not feel the result of the generator using my own texts is completely my voice. Yes, they are all words I use fairly regularly, but the actual result is not something I would ever say. I played around further with the tool, trying to generate something I would say or write, but it was always off in some way.

Why is it that these generated statements feel awkward, while predictive text on my smartphone is often accurate?

The most obvious answer is the algorithms: they could be different. It is likely that the predictive text on my Samsung is more sophisticated than this free web application. However, I think blaming the algorithm might be too simplistic an answer. After all, an algorithm is just math. Anyone who enjoys the arts, language, or music should hopefully think it is more than just math or sticking different elements together that makes the works great.

One of the major differences is the quality of text. When you are texting and the predictive text seems accurate, you are likely communicating short statements such as “I’m running late”. If you are engaging in a philosophical debate via text, it becomes less accurate (I know, as I do this often). The statements we used to start the predictive text require a deeper engagement with language and ideas, so it is unlikely predictive text will inspire the quality of language needed.

This is very similar to the Crime Story podcasts we listened to this week. The machine directed police officers to target specific activities and people to pad the numbers in a particular way. While the summons and arrest numbers went up, the quality of police activity was suspect. The machine only looked at ‘how many’, not ‘why’. Similarly, predictive text algorithms look for frequent word combinations to present options, but they do not read the content of the text.
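To make that last point concrete, the core idea of "frequent word combinations" can be sketched as a bigram frequency table: count which words follow which, and suggest the most common continuations. This is a deliberately minimal sketch, far simpler than whatever Botnik or Samsung actually use, and the tiny corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus, like the text sets a generator trains on.
corpus = (
    "i am running late i am on my way "
    "i am running a quick errand and running late again"
).split()

# Build the bigram table: for each word, count what follows it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def suggest(word, n=3):
    """Return up to n of the most frequent words seen after `word`."""
    return [w for w, _ in following[word].most_common(n)]

print(suggest("running"))  # → ['late', 'a']
print(suggest("am"))       # → ['running', 'on']
```

Notice the model never "reads" anything: it has no idea what running late means, only that "late" follows "running" more often than "a" does in this corpus. Swap in Keats or beauty advertisements and the same counting produces a different voice.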

This becomes problematic when we start to investigate what algorithms include and exclude. O’Neil and the Age of the Algorithm podcast offer a few great examples of this. I attended a talk by Meredith Broussard this summer. As a data journalist, she investigates the ways artificial intelligence and algorithms might go wrong. One of the examples she gives is how some automated soap dispensers do not work for people with darker skin colour. Her argument is that the lack of diversity in the tech sector creates these blind spots in technology. What’s worse, instead of improving the technology to be more inclusive, inventions are pushed out in the spirit of innovation and few ever circle back to fix the blind spots.
