“My idea of Technology is not available for remote playback”
In what textual products have you read statements like the one you generated?
The first statement sounds like an error message, and a quick Google search revealed that it is an ongoing issue when playing videos through streaming devices such as Chromecast and Roku. It also sounds like a weird sound bite that could be part of a podcast’s theme.
The second statement sounds like one of the highlighted quotes in a videogame review. However, the second half could easily be picked out of a text message.
The third statement’s “the going gets tough” is a cliché I have encountered in nearly every medium, from movies to novels.
I am unsure if I have ever encountered a statement like the fourth anywhere.
How are these generated statements different from how you would normally express yourself and/or your opinions on the matter you wrote about?
While entertaining, none of these generated statements reflect my thoughts on technology. Their diction is drawn from mostly unrelated topics. Furthermore, they do not apply the same context-specific self-censorship that I do, steering clear of topics I avoid anywhere my professionalism is on public display, namely alcohol. Perhaps because my phone is relatively new, it does not have a large library of the things I say (assuming the algorithm pulls this data from the phone itself rather than from my Google account). This may explain why my predicted sentences had nothing to do with my topic.
Had I completed the sentence myself, I might have settled on something like “My idea of Technology is a portal to endless possibilities directed by the choices of humanity.” Or perhaps, “My idea of Technology in the classroom is constantly evolving as I experiment with new ways to use it.”
Did the statement you generated speak in your “voice”—did it sound like you? Why or why not?
I do not think these sentences sound like me, though their vocabulary is a Frankensteinian amalgamation of texts I have sent recently. I have used the words “great game”, “go[ing] to the brewery”, “want any stickers cut”, “Library”, and “fabulous gin” all in the past week. How the algorithm stitched them together, however, does not sound like me at all. Except, perhaps, for the roundabout phrasing of the last sentence. I have a tendency to write ponderous-sounding messages such as “I do have a great collection of board games!” rather than “I have lots of board games!”
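To make the stitching concrete, here is a minimal sketch of the kind of model a keyboard predictor might use: a toy bigram model in Python. The message list is invented for illustration (echoing the phrases above); a real phone keyboard uses a far more sophisticated, proprietary model.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for my recent text messages; the phrases
# echo the ones quoted above, but the full sentences are invented.
messages = [
    "that was a great game",
    "do you want to go to the brewery",
    "do you want any stickers cut",
    "I am at the library",
    "that was a fabulous gin",
]

# Count which word follows which (a bigram model), the simplest
# version of what a phone keyboard's predictor does.
following = defaultdict(Counter)
for message in messages:
    words = message.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict(word):
    """Suggest the most common word seen after `word`."""
    candidates = following.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

# Chaining predictions stitches my own vocabulary into sentences
# I never actually wrote.
word, sentence = "do", ["do"]
for _ in range(6):
    word = predict(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))  # -> "do you want to go to go"
```

Even this toy version produces plausible-looking strings of my own words in an order I never used, which is exactly the Frankensteinian effect described above.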
Reflect further on the use of algorithms in public writing spaces and the implications this might have in various arenas (politics, academia, business, education, etc.).
An ongoing theme in the readings, videos, and podcasts related to algorithms is that they are very good at solving very particular problems. Feed an algorithm the official works of Harry Potter, and it will generate a tale reminiscent of J.K. Rowling. Teach an algorithm to recreate motivational posters*, and it will learn to mimic them both visually and verbally. Show an algorithm the collected works of Rembrandt, and it produces an original painting that looks like it came from the 17th-century painter.
*Note: InspiroBot’s creators have not revealed exactly what kinds of motivational posters they used.
My experience with predictive text makes me wonder if it could lead to a narrowing of thought and vocabulary. We use many voices in our daily lives, depending on the context. How I write feedback for my students is different from how I give feedback to my friends. How I portray myself on a blog is different from how I portray myself on Messenger. You do not always want all of your voices rolled into one, and then have that single voice directed at a multitude of audiences, from grant proposals to business correspondence to report cards.
Vallor was very apt when she described A.I. as an accelerant and a mirror. Algorithms are capable of distilling tremendous amounts of text down to their dominant themes and patterns. When Microsoft deployed its A.I. chatbot Tay on Twitter, it quickly learned from the tweets it encountered and reflected their bigoted vitriol back on the world within a matter of hours. Was it reflecting Twitter culture as a whole, or merely the messages sent to it by trolls? It seems that using A.I. to write is less like an automated typewriter and more like a digital magnifying glass, bringing unnoticed patterns into stark focus.
Algorithms are not very practical in situations with many exceptions, or in situations that call for human creativity and critical thinking. Take lesson planning, for instance. I could develop an algorithm to write lesson plans, but they would be based on what already exists and would likely be pedagogically outdated (e.g. there are far more lesson plans out there with accompanying worksheets than with deeper learning activities). I could use an algorithm to help identify harmful patterns in my lessons (e.g. unbalanced representation of female and male scientists, Eurocentrism, or my ratio of worksheets to activities and projects). I might even be able to use an algorithm to help me identify cross-curricular connections, ties to current events, and links to pop culture. However, I wonder whether, in this latter case, an algorithm would be able to tie ideas together when they do not use the same vocabulary. For example:
Knapweed is a beautiful invasive flower that is extremely harmful to B.C.’s Central Interior ecosystems. It emits toxins into the soil that kill off native species of plants; knapweed itself has evolved to resist the effects of its own toxin. In World War I, meanwhile, the Germans developed and released a poisonous chemical weapon into the trenches: chlorine gas, which devastated Allied forces. The behaviour of the gas was unpredictable, as it was easily carried by the wind, so the Germans also invented special gas masks to protect their own troops from their new weapon.
Would an algorithm, set loose on the entirety of topics covered in the B.C. curriculum, find this connection? While the terminology is similar (toxin/poison, evolved/invented, emits/released, kill/devastated, resist/protected), it is not identical. Or would it require human observation and collaboration? I am sure there are already algorithms for writing report card comments, but how useful are they for commenting on a student’s interactions with others, or their extra effort on a particular project? I do not believe algorithms should be treated as solutions or supervisors. However, they are valuable tools that can help us become better teachers, professionals, and people. They can pick out patterns and help inform our decisions.
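Returning to the vocabulary question above: exact word matching would miss the knapweed connection, but representing words as vectors can surface it. The sketch below is a minimal illustration; the three-dimensional vectors are invented for the example, whereas a real system would use pretrained word embeddings such as word2vec or GloVe.

```python
import math

# Toy vectors invented for this example; a real system would use
# pretrained word embeddings trained on a large corpus.
vectors = {
    "toxin":      [0.9, 0.1, 0.0],
    "poison":     [0.8, 0.2, 0.1],
    "emits":      [0.1, 0.9, 0.0],
    "released":   [0.2, 0.8, 0.1],
    "kill":       [0.7, 0.0, 0.6],
    "devastated": [0.6, 0.1, 0.7],
}

def cosine(a, b):
    """Cosine similarity: values near 1.0 mark near-synonyms here."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

ecology = ["toxin", "emits", "kill"]
history = ["poison", "released", "devastated"]

# Exact word overlap finds nothing: the two passages share no key terms.
print(set(ecology) & set(history))  # set()

# Vector similarity pairs each ecology term with its historical cousin.
for word in ecology:
    best = max(history, key=lambda h: cosine(vectors[word], vectors[h]))
    print(f"{word} ~ {best} ({cosine(vectors[word], vectors[best]):.2f})")
```

On this toy data, every ecology term pairs with its historical cousin at a similarity above 0.98, suggesting the connection is findable in principle; deciding whether it is pedagogically meaningful still takes a human.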
As a parting tidbit, I would like to recommend an episode of the Draftsmen Podcast, in which two acclaimed art instructors, Stan Prokopenko and Marshall Vandruff, discuss the merits of A.I. and how it can be (and is being developed to be) used to help students learn.