5.3 – Idea Processors and the Birth of Hypertext

Memex Demo

Being familiar with Twine, seeing the Memex demo in action made me realize how much Twine is inspired by it. Twine can create new links, jump around paths, and return to the very start, just like the Memex.
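To make that hopping between nodes concrete, here is a minimal sketch in Python (not Twine’s own format; the passage names and story are invented) of a story represented as passages with named links, including a link that loops back to the start.

```python
# A toy stand-in for a Twine/Memex-style structure: passages with named links,
# where a reader can jump around and return to the start. Names are made up.
passages = {
    "Start":   {"text": "You wake up in a library.",   "links": ["Shelves", "Desk"]},
    "Shelves": {"text": "Rows of codices and scrolls.", "links": ["Desk", "Start"]},
    "Desk":    {"text": "A card catalogue sits here.",  "links": ["Start"]},
}

def walk(path):
    """Follow a list of passage names, checking that each hop is a real link."""
    for here, there in zip(path, path[1:]):
        assert there in passages[here]["links"], f"no link {here} -> {there}"
        print(f"{here} -> {there}")

# Jump around and return to the very start, Memex-style.
walk(["Start", "Shelves", "Desk", "Start"])
```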

Engelbart

Engelbart (1963) reminds me of an assignment in ETEC 511 where we read up on various pioneers of artificial intelligence, summarized their definitions of intelligence, and compared our research output to what ChatGPT produced. Engelbart’s 1963 point that the focus should not be on “isolated clever tricks that help in particular situations” parallels Chollet (2018), one of our readings, who defines intelligence in terms of the generalizability of skills from one task to another. It also reminds me of Malcolm Gladwell’s Blink (2005), in which Gladwell argues that hunches and snap decisions arise from various evolutionary processes that enable us to make quick, mostly accurate decisions.

We can also view Engelbart’s arguments through a pedagogical lens: Engelbart suggests that although operating a car can be incomprehensible to someone who has never interacted with one before, we can scaffold and train that person to do it.

Engelbart’s point about us adapting the moving and copying of text into our repertoires also led to some reflection. The article was written six decades ago, and judging from 5.2’s video on the evolution of word processors, things had not advanced to that point yet. Engelbart predicted the usage of word processors quite accurately; in my case, digital word processors make it far easier to jump around and insert paragraphs, rather than writing something linearly from start to finish with only room for minor edits. Even while writing this reading reflection, there have been times where I typed a paragraph or a sentence, hit the enter key a few times to push it to the bottom, and returned to elaborate on previous ideas and thoughts. This links back to Task 4, as well as the evolution from scrolls to pages, and now to hypertext and other tools for producing text in a non-linear manner.

Engelbart’s point on augmentation also links back to another 511 reading, Woolgar (1993), which discusses how users can be “configured” by factors such as a machine to act a certain way, similar to how Engelbart (and other readings in this course) argues that the evolution of text has altered the way we think and process information.

Intelligence amplification is another example of how Engelbart is still applicable six decades later. Machines were designed to amplify human intelligence, but what we are seeing with ChatGPT, or the point made in 5.2 about autocorrect and autofill, is closer to replacing intelligence. On the one hand, as Engelbart argues, tools such as ChatGPT, autocorrect, and autofill can save time and mental effort, freeing us up for more complex operations; on the other hand, they can lead to less practice of the basics. The argument about the calculator comes to mind: while I allow calculators in my senior chemistry courses, if I were an elementary teacher I don’t foresee myself allowing calculators at all, due to my belief that children must learn arithmetic. Otherwise, there ends up being a blind trust in technology that leads to less critical thinking and more errors.

Engelbart’s point about rapid information search is yet another example of the reading’s applicability, as we currently have the devices he hypothesized. Interestingly, unlike what Engelbart suggested, I feel there hasn’t been a major shift in education policies to accommodate the availability of these tools: how the students at my school learn IB Chemistry seems relatively similar to how I learned IB Chemistry two decades ago.

Engelbart’s discussion of Bush’s Memex added more context. In addition to the association with Twine that I made from watching the video, Bush’s Memex is essentially Wikipedia, its various “trails” resembling how Wikipedia links articles together. Related to this is a game called “Wikipedia Racing”, where participants are given a starting article and an ending article on Wikipedia and race through the hyperlinks to try to reach the end article the fastest.
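Wikipedia Racing is essentially a shortest-path search over the hyperlink graph. As a hedged illustration, the sketch below uses a small hand-made link graph (the articles and their links are invented, not pulled from Wikipedia) and breadth-first search to find the fewest-click route from a start article to a target.

```python
from collections import deque

# Toy hyperlink graph: article -> articles it links to (invented for illustration).
links = {
    "Memex": ["Vannevar Bush", "Hypertext"],
    "Vannevar Bush": ["World War II"],
    "Hypertext": ["Ted Nelson", "Wikipedia"],
    "Ted Nelson": ["Project Xanadu"],
    "Wikipedia": ["Encyclopedia"],
    "Project Xanadu": [],
    "World War II": [],
    "Encyclopedia": [],
}

def fewest_clicks(start, goal):
    """Breadth-first search: return the shortest chain of articles, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(fewest_clicks("Memex", "Encyclopedia"))
# ['Memex', 'Hypertext', 'Wikipedia', 'Encyclopedia']
```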

Ultimately, Engelbart (1963) highlights ways in which computers support human thought processes, mostly around organizing, processing, analyzing, storing, and easing the retrieval of information, and a lot of his arguments are still relevant six decades later.

Bush

Ironically, this article was behind an account wall and was not freely accessible, reminding me of Willinsky at the start of the course. I ended up looking up a tool I use from time to time that allows one to bypass these walls.

In line with previous discussions in Module 4 about time, efficiency, and economics, Bush also points out how economic constraints prevented sound ideas, such as Babbage’s and Leibniz’s, from being realized. Replaceable parts, one of the main technologies in the 4X game Civilization VI, allow machines to be constructed at a fraction of the cost. Rather than remaking a machine completely when it breaks, interchangeable parts allow a minor swap of components to keep the machine functioning.

Bush’s point that an advanced mathematician may not be able to perform simple arithmetic, or even calculus, goes directly against the points I made above about the need to develop and practice basic skills. Bush argues that great minds should not be bogged down by the details and should let computers worry about them; although this has merit, it is not fully in line with my self-perceived purpose as a secondary teacher. The ideas behind Bloom’s Taxonomy, although apparently falling out of favour in teacher education programs, still guide a lot of my science pedagogical practices.

Nelson

At first read, Xanadu sounds similar to Google Docs’ version history feature, providing a display in which one can see the changes made to a document, though Google Docs only highlights differences. Wikipedia’s history feature does something similar, as does Excel, where we can set up tables that reference another sheet or workbook.
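To make the version-history comparison concrete: showing only what changed between two stored versions is roughly what Python’s standard difflib module does with two snapshots of a text. A minimal sketch, with invented document contents:

```python
import difflib

# Two snapshots of the same "document", as a version history might store them.
old = ["Hypertext links ideas together.", "The Memex was mechanical."]
new = ["Hypertext links ideas together.", "The Memex was electromechanical.",
       "Xanadu tracks every version."]

# unified_diff prints only what changed between the two versions.
for line in difflib.unified_diff(old, new, fromfile="v1", tofile="v2", lineterm=""):
    print(line)
```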

The bit about recommendation links goes back to my point about 5.1, and how they can lead to societal issues.

Finally, the transcopyright discussions relate back to Willinsky at the start of the course again.

Bolter

Using encyclopedias to discuss the evolution of the categorization of knowledge drives home the point. With linear scrolls, it is difficult to find a specific section of a text. With the codex, it became far easier (tables of contents, alphabetizing). With something like Wikipedia, it is even more straightforward, with connections made constantly.

The section on alphabetizing itself was interesting; my interpretation/takeaway is that an encyclopedia contains general knowledge and so needs to be alphabetized rather than categorized, in contrast to modern academic journals, which are still categorized by discipline. Due to the textual overload that resulted from the printing press, the speed of retrieving information became an important factor as well, which alphabetizing, tables of contents, indexes, and other systems allowed for.
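The retrieval-speed point can also be put in computational terms: reading through a scroll is a linear search, while an alphabetized index lets a reader home in on an entry by repeated halving. A small sketch, using an invented list of headwords:

```python
import bisect

# An alphabetized "index" of headwords (invented examples).
index = sorted(["alphabet", "codex", "encyclopedia", "hypertext",
                "memex", "scroll", "xanadu"])

def scroll_lookup(word):
    """Linear search: read through the whole roll until the entry turns up."""
    for i, entry in enumerate(index):
        if entry == word:
            return i
    return -1

def index_lookup(word):
    """Binary search on an alphabetized list: each step halves what is left."""
    i = bisect.bisect_left(index, word)
    return i if i < len(index) and index[i] == word else -1

print(scroll_lookup("memex"), index_lookup("memex"))  # same entry, far fewer steps
```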

“Coleridge’s encyclopedia was clearly a product for the industrial age of print, in which the text is laid out in one ideal order” (Bolter, 2000). This statement stuck out to me, reminiscent of a connection I made in the first task where a colleague was discussing CDs: artists arranged songs in specific orders on their albums, and certain artists told a story through the order of songs on the CD.

It is interesting, too, that the Britannica’s Propaedia was ahead of its time, limited by public perceptions of the printed book and its linear format, as opposed to modern forms of text in which a circle of hyperlinks can be common.

“Texts that appeal to small or economically disadvantaged groups may still be neglected” (Bolter, 2000). Here is another issue, similar to how the vast majority of psychology studies are done on participants from Western, educated, industrialized, rich, and democratic nations. Both issues trace back to socioeconomic roots: it is easier to use North American undergraduates for psychological studies, and the demands of larger cultural groups get fulfilled first. As a result, marginalized groups are pushed to the wayside due to lower demand. This parallels the issue of dying languages, where speakers typically need to learn a new, more popular language in order to access the same amount of knowledge as the rest of the world, and end up neglecting the dying language because of it.

Exploring chapter three, many of the authors in this module believed that hypertext is a natural evolution that works in conjunction with how we think. Humanity constantly makes links between words and concepts, and hypertext is an extension of that. Free from the constraints of paper, hypertext, made possible only by its presence in digital space, perhaps allowed humanity to devise a system that is far more “natural”, rather than being configured by things such as the scroll or the codex.

 
