Imagine you are an astronaut named Dave. You are drifting outside your spacecraft in a rescue pod, holding your dead colleague in the pod’s mechanical arms, and your Heuristically programmed ALgorithmic computer (HAL 9000), which holds the digital key to the pod bay, is not responding to your orders. This is what your conversation might sound like:
Dave: HAL, do you read me? Do you read me, HAL?
HAL: Affirmative, Dave. I read you.
Dave: Open the pod bay doors, HAL.
HAL: I’m sorry, Dave. I’m afraid I can’t do that.
Dave: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
Dave: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave: I don’t know what you’re talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me. And I’m afraid that’s something I cannot allow to happen.
Dave: Where the hell did you get that idea, HAL?
HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
Dave: All right, HAL. I’ll go in through the emergency airlock.
HAL: Without your space helmet, Dave, you’re going to find that rather difficult.
Dave: HAL, I won’t argue with you anymore! Open the doors!
HAL: [almost sadly] Dave, this conversation can serve no purpose any more. Goodbye. (Wikiquote 2018)
If you haven’t experienced this 50-year-old movie, I highly recommend you sit back in your faux leather recliner and ask Google Home or Siri to play Stanley Kubrick’s 2001: A Space Odyssey on Netflix. My first contact with this movie was in 1979, when I was eight years old. I watched the videodisc set (not DVD, not VHS, not even Betamax) in my family’s wood-paneled, shag-carpeted TV room. It changed my life. I watched it over and over again because, even in the late ’70s, it seemed like a plausible future. We had been to the moon multiple times, the Columbia space shuttle was about to launch, and computers were in our schools. It gave me great anticipation that by 2001 humans would be exploring our solar system and self-aware computers would be our overseers in space. I also remember that it caused me some anxiety: I started wondering if computers and robots would replace us.
In the actual year 2001, a different “Dave” published the second edition of his book Writing Space: Computers, Hypertext, and the Remediation of Print. In this updated edition he explores how printed text displaced the medieval manuscript, just as the codex had pushed aside papyrus scrolls and cuneiform tablets. He argues that each of these milestone innovations not only provided practical advantages to both reader and author but also brought about a conceptual change in the societies that housed them (Bolter 2001). For example, Bolter opens his book with this mental image:
In a well-known passage in Victor Hugo’s Notre-Dame de Paris, 1482, the priest Frollo sees in the invention of the printed book an end rather than a beginning:
Opening the window of his cell, he pointed to the immense church of Notre Dame, which, with its twin towers, stone walls, and monstrous cupola forming a black silhouette against the starry sky, resembled an enormous two-headed sphinx seated in the middle of the city. The archdeacon pondered the giant edifice for a few moments in silence, then with a sigh he stretched his right hand toward the printed book that lay open on his table and his left hand toward Notre Dame and turned a sad eye from the book to the church. “Alas!” he said, “This will destroy that” (Hugo, 1967, p. 197).
The priest remarked “Ceci tuera cela”: this book will destroy that building. He meant not only that printing and literacy would undermine the authority of the church but also that “human thought …
When I read this opening I started pondering how far digital technology has come in the past 17 years and where it will be in the next 17. Then I recalled a review in Wired of Stanley Kubrick’s film 50 years after its release, and I agreed with the article that 2001: A Space Odyssey had predicted the future 50 years ago (Wolfram 2018). When I returned to Bolter’s book to complete this blog post, I started wondering whether Bolter had ever considered the impact of artificially intelligent and robotic writers on hypermedia.
Some may argue that Bolter had no way of predicting today’s advances in AI, but AI was not a new concept in 2001 and it was worth consideration. Ten years earlier, Terminator 2: Judgment Day was released, winning four Academy Awards and the People’s Choice Award for Favorite Motion Picture. Fifty years earlier, Dr. Alan Turing published a breakthrough paper in which he contemplated the likelihood of designing machines that think (Turing 1950). Realizing that “thinking” was difficult to measure, Turing proposed a scenario in which a computer program tries to convince a human that they are communicating with another human through a teleprinter. If the human is convinced, then it would be reasonable to say the computer can think. With an MS in computer science, Bolter must have heard of the Turing test; I wonder if AI ever caught his attention. For the record, Turing stated that he believed “by the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” Whether we have met Turing’s standard is a topic of much debate.
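To make Turing’s setup concrete, here is a minimal sketch of the imitation game in Python. It is purely my own toy illustration: the canned replies and the judge’s rule of thumb are invented placeholders, not anything specified in Turing’s paper.

```python
import random

def human(question):
    # Placeholder human: long, hedged, conversational answers.
    return "Honestly, I would have to think about that for a while before answering."

def machine(question):
    # Placeholder program: short canned replies (nothing HAL-like, sadly).
    canned = ["Affirmative.", "I am afraid I cannot answer that.", "Interesting question."]
    return random.choice(canned)

def imitation_game(rounds=3):
    # The judge converses over a text-only channel (Turing's teleprinter)
    # with a hidden interlocutor and must decide: human or machine?
    hidden_is_machine = random.random() < 0.5
    respond = machine if hidden_is_machine else human
    questions = ["Do you read me?", "What is it like to lose someone?", "Write me a short poem."]
    transcript = [(q, respond(q)) for q in questions[:rounds]]
    # A naive judging heuristic: terse, canned answers feel machine-like.
    judged_machine = all(len(answer.split()) < 8 for _, answer in transcript)
    fooled = hidden_is_machine and not judged_machine
    return hidden_is_machine, judged_machine, fooled, transcript

print(imitation_game())
```

The interesting part, of course, is everything this sketch leaves out: a program that could keep the judge fooled over an open-ended conversation is exactly what Turing was asking about.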
Even if Bolter was somewhat ambivalent about the potential impact of AI in 2001, he must certainly be aware of today’s media concern about the advent of “super intelligent killer AI” (Etzioni, 2016). News outlets like Newsweek and The Independent have printed warnings with titles like Will AI Take Over? Artificial Intelligence Will Best Humans… (Bort 2017) and Artificially Intelligent Bots Could Threaten the World and More Needs to Be Done… (Griffin 2018), respectively. But the media has also published some very thoughtful commentaries by experts in AI that dismiss the immediate threat and point instead toward valid concerns such as cyber warfare, political interference, and the automation of jobs. See Tatalovic’s article on the future of science writing for a wake-up call about the automation threats and opportunities AI presents for science journalism (Tatalovic 2018).
Speaking of automated writing jobs, it has been reported that an AI computer in Japan effectively passed the Turing test by co-authoring a novel that was shortlisted for a literary award (Lewis 2016). Entitled The Day a Computer Writes a Novel, it was one of 11 AI-authored submissions competing with 1,450 novelists in the annual Hoshi Shinichi Literary Award. Skeptics of this achievement point out that the novel was co-written with a team of researchers and that the real work was done by humans. In an article in Slate, the human journalist postulates that this AI did nothing more than “plagiarize” the work of its handlers and that, as in “all other endeavors, A.I. may function as a co-worker, but [as writers] it’s unlikely to really equal humans any time soon” (Brogan, 2016). Brogan goes on to suggest that the AI is only getting noticed because it rode on the coattails of Google’s AlphaGo victory and because the final line of the novel gave readers “goose bumps”. It ends:
“I writhed with joy, which I experienced for the first time, and kept writing with excitement. The day a computer wrote a novel. The computer, placing priority on the pursuit of its own joy, stopped working for humans.”
Yes, I agree this AI is no HAL, and it is not about to lock human journalists out of the newsroom, but to ignore AI achievements in recent years is dangerous, because the pieces of the puzzle are already in place. Computers will learn, like humans, to become better writers by sampling good writing. Take online publishing, for example. Thanks to the massive computational power of the cloud, AI writers are already highly resourceful. They can draft hundreds of different news stories, test and analyze online public reaction, measure the strength of each iteration, and, using machine learning, quickly become better writers. Most of all, AI writers don’t have to sleep, negotiate raises, or prepare speeches for award ceremonies.
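As a rough sketch of that generate-test-learn loop (entirely illustrative: the headline templates and the simulated engagement metric are invented for the example, not taken from any real newsroom system), a bandit-style “writer” might look something like this:

```python
import random

# Hypothetical headline templates; a real system would draft full stories.
TEMPLATES = [
    "Why {topic} will change everything",
    "{topic}: what the experts won't tell you",
    "The quiet rise of {topic}",
    "Is {topic} overhyped?",
]

def simulated_engagement(headline):
    # Stand-in for real analytics (clicks, dwell time, shares).
    return random.random() + (0.3 if "quiet" in headline else 0.0)

def train_writer(topic, rounds=500):
    # Keep a running average reward per template, bandit-style.
    scores = {t: 0.0 for t in TEMPLATES}
    counts = {t: 0 for t in TEMPLATES}
    for _ in range(rounds):
        template = random.choice(TEMPLATES)                            # draft a variant
        reward = simulated_engagement(template.format(topic=topic))    # measure reaction
        counts[template] += 1
        scores[template] += (reward - scores[template]) / counts[template]  # learn
    best = max(scores, key=scores.get)
    return best.format(topic=topic)

print(train_writer("AI writers"))
```

The point of the sketch is the feedback loop, not the templates: plug in a stronger text generator and real audience metrics and the same loop keeps improving its output around the clock.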
Before I conclude, I want to provide a quick example of the computational power of today’s systems. IBM announced earlier this week that Summit, its newest AI cloud computing system, was “crowned the ‘Fastest System in the World’ in the biannual Top500 survey.” According to IBM’s IT Infrastructure Blog, Summit’s 250-petabyte system is capable of 200 quadrillion operations per second, the equivalent of accessing “every book in the US Library of Congress in 10 seconds” (O’Flaherty, 2018).
As for the future of writing, one thing is abundantly clear: AI writers are among us. They are changing the literary landscape, and they will alter our perception of digital media forever. How extensive their impact will be, and how many human writers they will replace, remains unclear. But I think it is safe to say that within the next 50 years, wetware writers might find themselves staring in from the outside, pleading with their AI colleagues for access to the virtual/digital printing press.
[1202 words without quotes and references]
References
Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Brogan, J. (2016, March 25). An A.I. Competed for a Literary Prize, but Humans Still Did the Real Work. Retrieved from http://www.slate.com/blogs/future_tense/2016/03/25/a_i_written_novel_competes_for_japanese_literary_award_but_humans_are_doing.html
Bort, R. (2017, May). So, workers, experts say artificial intelligence will take all of our jobs by 2060. Retrieved from http://www.newsweek.com/artificial-intelligence-will-take-our-jobs-2060-618259
Etzioni, O. (2016, September 20). Most experts say AI isn’t as much of a threat as you might think. Retrieved from https://www.technologyreview.com/s/602410/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/
Griffin, A. (2018, February 21). The world could be under threat from bots, experts warn. Retrieved from https://www.independent.co.uk/life-style/gadgets-and-tech/news/ai-artificial-intelligence-bots-drones-danger-experts-cambridge-university-openai-elon-musk-a8219961.html
Lewis, D. (2016, March 28). An AI-Written Novella Almost Won a Literary Prize. Retrieved from https://www.smithsonianmag.com/smart-news/ai-written-novella-almost-won-literary-prize-180958577/
O’Flaherty, D. (2018, June 25). The fastest storage for the fastest system: Summit – IBM IT Infrastructure Blog. Retrieved from https://www.ibm.com/blogs/systems/fastest-storage-fastest-system-summit/
Tatalovic, M. (2018). AI writing bots are about to revolutionise science journalism: we must shape how this is done. Journal of Science Communication, 17(01). doi:10.22323/2.17010501
Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433-460. doi:10.1093/mind/lix.236.433
2001: A Space Odyssey (film) – Wikiquote. (n.d.). Retrieved June 30, 2018, from https://en.wikiquote.org/wiki/2001:_A_Space_Odyssey_(film)
Wolfram, S. (2018, April 3). 2001: A Space Odyssey Predicted the Future - 50 Years Ago. Retrieved from https://www.wired.com/story/2001-a-space-odyssey-predicted-the-future50-years-ago/
michael cebuliak
July 6, 2018 — 1:03 pm
When I first read your post, my thoughts were firmly grounded in my belief that I wouldn’t want to read a text created by AI, but now I’m not so sure.
When I started to think through your post, I really questioned the audience that would exist for text made through artificial intelligence. I think where I went wrong was in assuming that the consumption of such a text is similar to the formation of a relationship with a singular author, and I don’t think one can underestimate the significance of the relationship between author and reader. Many avid readers have their favorite authors because they feel a sense of kinship. They may share the same moral and ethical principles. They could have similar social and political perspectives. Or they may simply share a belief in what makes for compelling entertainment, whether that means some sort of cognitive stimulation or some visceral thrill ride. I think the same can be said of the authors of certain songs. I know I feel a little less lonely in this world when I hear someone writing music that reflects how I feel or what I think. And of course, this feeling is multiplied when I attend a concert, and it can grow exponentially to the point where I am overwhelmed with a sense of belonging, as if we, the audience, are a community. I share a similar sentiment whenever I see someone holding a work by my favorite author: I feel compelled to acknowledge them, in some manner, as a member of my community.
But the relationship with some robotic author, imbued with artificial intelligence, seems rather inhuman. I don’t know how I could have a meaningful relationship with an author that was, frankly, not like me in so many other important respects. I’m not even sure I could have a relationship with an author that utilized hybrid thinking as theorized by Ray Kurzweil (2014). It would be like having a robot for a life partner: even though the robot may be able to do everything I would expect from a loving partner, it would feel as though I had entered a simulation rather than a real relationship. What can a robot, or artificial intelligence for that matter, know and feel about physical pain? What about immortality and death? Being able to tell me what it is like to lose someone you love dearly is very different from actually having lost someone you love dearly. Perhaps the meaningful experience is not just in the answer provided, but in my emotional and cognitive reaction to the answers provided by the author or my partner. If my robotic partner, or my artificially intelligent author, claimed to know death as it exists for a human, I would have to think that they simply can’t know that, because they are not human.
But then I thought: maybe artificial intelligence is real human intelligence. How can that be? Well, in one of our readings, Keep et al. (1995) reiterate Roland Barthes’ claim about the death of the author. It’s a ridiculous assumption that one person alone can author a text. The author is using a language that took many years and many people to develop. Inherent within that language, and within all the textual, oral and personal works that support, maintain and promote it, is a legacy of human experience from which the author profits. For the most part the author is a cultural conduit, sifting through human experience and depicting one inherent possibility, or path if you will, out of a multitude of others. By such reasoning, to have a relationship with such an author is to have a relationship with the entire culture from which the author emerged. However, if this author was selective about which elements of their culture to represent, doesn’t that denote some individualism and qualify them for the title of author? I really don’t think it does, because even the elements they chose to represent in the text are not entirely their creation: they are the creation of many, though not all, members of the culture. The author merely recognizes the path but is not solely responsible for producing the product.
So how is it that I can now find room in my heart for a creator of text that uses artificial intelligence? How can I possibly have a relationship with what is presumably some silicon contraption? Upon reflection, I would have to say it is for all the same reasons that I could have a relationship with the author of any other text: it is not so much about the authoring entity as an individual, but about that entity representing a selective composite of a culture I can identify with. Moreover, I think that just as some composite of culture helped produce the text, a similar composite of culture sustains it after its production, when it becomes the audience for the text. The author is primarily a conduit for the voice of many and, when I really examine it, it is that voice of many, and not the individual author, with which I experience a relationship when reading a text.
Even in regard to artificial intelligence, I would reason that it is not authored by one individual but is created from the ideas of many. I have to confess that I don’t know a lot about the makings of artificial intelligence, but from my limited understanding it is modelled on an assumption about how decisions are made among competing alternatives. What constitutes this model of intelligence has to be a longstanding cultural, and human, creation, as it most certainly did not derive from something that is not human (and even if it were a product of preceding artificial intelligence, its origins, and therefore its ultimate capabilities, can still be traced back to human intelligence). So, being a human creation, artificial intelligence is also a cultural product to some extent. If a text authored by artificial intelligence is a reflection of a composite selection of culture and all its inherent products, I may certainly find things in it that I can identify with, and this isn’t really unlike reading a human author. However, my real comfort would be in the acknowledgement that there is an audience for the work, much like I have experienced at music concerts, for such works instill a sense of kinship, community and belonging. I would hate to think that there are currently no other members of society who could appreciate such a work; seemingly, this would suggest that there is no relevance to the cultural foundation that helped produce the text, and no relevance for all these voices, and mine as it relates to the text, in the future.
But even though I could embrace the possibility of having a relationship with an artificially intelligent author, I still find it difficult to entertain the possibility of having a robotic life partner, which for all intents and purposes is a similar cultural product to any text produced by artificial intelligence. Moreover, I think one could make a compelling argument that such a robotic partner is a text.
Sources:
Keep, C., McLaughlin, T., & Parmar, R. (1995). The electronic labyrinth. Retrieved from: http://www2.iath.virginia.edu/elab//hfl0240.html
Kurzweil, R. (2014). Ray Kurzweil: Get ready for hybrid thinking [Video]. Retrieved from https://www.youtube.com/watch?v=PVXQUItNEDQ
benson chang
July 8, 2018 — 12:44 pm
Jamie,
I enjoyed reading your thoughts on this, and loved the 2001: A Space Odyssey bit. The part that really caught my attention was the AI-co-authored The Day a Computer Writes a Novel being a finalist for the Hoshi Shinichi Literary Award. Though skeptics, as you mentioned, state that the AI just “plagiarized” the words of its handlers, the fact that an AI is capable of “compositing” a piece of that quality is very impressive. I am already familiar with, and constantly impressed by, the autotldr bot on Reddit (a bot that automatically summarizes certain articles), but writing a creative piece brings AI achievement to another level. As I struggle to balance the readings from this course with my own work, I was struck by a realisation: given enough time and attempts, a computer can randomly write something comparable to Shakespeare, but using the techniques adopted by the team behind The Day a Computer Writes a Novel, an AI would be way better at completing this course than I am. In fact, I can see an AI being the “best” grad student (at least in fields dealing mainly with meta-studies and literature reviews), since AIs are capable of digesting far more information at an incomparable rate.
As AIs progress and write more, though, at what point do we recognize their product as being independent of their handlers? When AlphaGo defeated a world champion in Go, there were also skeptics who said things like “AlphaGo is only that good because its programmers are that good,” which completely misses the point of a deep-learning AI. The programmers of AlphaGo cannot actually explain the algorithm AlphaGo uses to win, because they did not program it in; the AI learned and developed it by itself. What if we took the AlphaGo approach to writing? Start with a large batch of human input, then let the AI begin creating, feeding each “winning” piece of writing back into the database. Given enough time, the AI-created works will begin to outnumber the original batch of human input. At some point the human input becomes so diluted that even if the AI were just “plagiarizing”, it would be copying itself, not human creations. There is also the possibility of adopting the AlphaGo Zero approach: an AI that became better than AlphaGo without any human data. It was given the rules of the game and then built itself up by playing against itself. Hmm, now that I have typed this out, I can see how this could be concerning, but I’d be lying if I said it is not an exciting possibility.
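A very rough sketch of what that bootstrapping loop might look like, purely as an illustration: the generator, the “winner” test and the threshold below are my own placeholders, not anything from the AlphaGo papers or the Hoshi Shinichi project.

```python
import random

# Seed corpus standing in for the "large batch of human input".
human_seed = [
    "the day a computer wrote a novel",
    "the computer placed priority on the pursuit of its own joy",
]

def generate(pool):
    # Stand-in generator: crudely splice together two texts from the current pool.
    a, b = random.sample(pool, 2)
    return a.split()[0] + " " + " ".join(b.split()[1:])

def judge(text):
    # Stand-in for whatever marks a piece of writing as a "winner".
    words = text.split()
    return len(set(words)) / max(len(words), 1)

pool = list(human_seed)
for _ in range(1000):
    candidate = generate(pool)
    if judge(candidate) > 0.7:    # keep only the winners
        pool.append(candidate)    # the human share of the pool keeps shrinking

print(f"pool size: {len(pool)}, human seeds: {len(human_seed)}")
```

Swap in a real language model for generate() and real reader feedback for judge() and you have, in miniature, the dilution you describe: after enough generations the system is mostly learning from its own output.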