The Changing Spaces of Reading and Writing

Hypermedia and Cybernetics: A Phenomenological Study

As with all other technologies, hypermedia technologies are inseparable from what is referred to in phenomenology as “lifeworlds”. The concept of a lifeworld is in part a development of an analysis of existence put forth by Martin Heidegger. Heidegger explains that our everyday experience is one in which we are concerned with the future and in which we encounter objects as parts of an interconnected complex of equipment related to our projects (Heidegger, 1962, p. 91-122). As such, we invariably encounter specific technologies only within a complex of equipment. Giving the example of a bridge, Heidegger notes that it “does not just connect banks that are already there. The banks emerge as banks only as the bridge crosses the stream” (Heidegger, 1993, p. 354). As a consequence of this connection between technologies and lifeworlds, new technologies bring about ecological changes to the lifeworlds, language, and cultural practices with which they are connected (Postman, 1993, p. 18). Hypermedia technologies are no exception.

To examine the kinds of changes brought about by hypermedia technologies it is important to examine the history not only of those technologies themselves but also of the lifeworlds in which they developed. Such a study will reveal that the development of hypermedia technologies involved an unlikely confluence of two subcultures. One of these subcultures belonged to the United States military-industrial-academic complex during World War II and the Cold War, and the other was part of the American counterculture movement of the 1960s.

Many developments in hypermedia can trace their origins back to the work of Norbert Wiener. During World War II, Wiener conducted research for the US military on how to aim anti-aircraft guns. The problem was that modern planes moved so fast that anti-aircraft gunners had to aim not at where the plane was when they fired but at where it would be some time after they fired. Where they needed to aim depended on the speed and course of the plane. In the course of his research into this problem, Wiener decided to treat the gunners and the gun as a single system. This led to his development of a multidisciplinary approach that he called “cybernetics”, which studied self-regulating systems and used the operations of computers as a model for these systems (Turner, 2006, p. 20-21).

This approach was first applied to the development of hypermedia in an article written by one of Norbert Wiener’s former colleagues, Vannevar Bush. Bush had been responsible for instigating and running the National Defense Research Committee (which later became the Office of Scientific Research and Development), an organization responsible for government funding of military research by private contractors. Following his experiences in military research, Bush wrote an article in the Atlantic Monthly addressing the question of how scientists would be able to cope with growing specialization and collate an overwhelming amount of research (Bush, 1945). Bush imagined a device, which he later called the “Memex”, in which information such as books, records, and communications would be stored on microfilm. This information could be projected on screens, and the person using the Memex would be able to create a complex system of “trails” connecting different parts of the stored information. By connecting documents into a non-hierarchical system of information, the Memex would to some extent embody the principles of cybernetics first imagined by Wiener.

Inspired by Bush’s idea of the Memex, researcher Douglas Engelbart believed that such a device could be used to augment the use of “symbolic structures” and thereby accurately represent and manipulate “conceptual structures” (Engelbart, 1962). This led him and his team at the Augmentation Research Center (ARC) to develop the “On-Line System” (NLS), an ancestor of the personal computer which included a screen, QWERTY keyboard, and a mouse. With this system, users could manipulate text and connect elements of text with hyperlinks. While Engelbart envisioned this system as augmenting the intellect of the individual, he conceived of the individual as part of a system, which he referred to as an H-LAM/T system (a trained human with language, artefacts, and methodology) (ibid., p. 11). Drawing upon the ideas of cybernetics, Engelbart saw the NLS itself as a self-regulatory system in which engineers collaborated and, as a consequence, improved the system, a process he called “bootstrapping” (Turner, 2006, p. 108).

The military-industrial-academic complex’s cybernetic research culture also led to the idea of an interconnected network of computers, a move that would be key in the development of the internet and hypermedia. First formulated by J.C.R. Licklider, this idea was later realized by Bob Taylor with the creation of ARPANET (named after the Defense Department’s Advanced Research Projects Agency). As an extension of systems such as the NLS, ARPANET was a self-regulating network for collaboration, likewise inspired by the study of cybernetics.

The late 1960s to the early 1980s saw hypermedia’s development transformed from a project within the US military-industrial-academic complex to a vision animating the American counterculture movement. This may seem remarkable for several reasons. Movements related to the budding counterculture in the early 1960s generally adhered to a view that developments in technology, particularly in computer technology, had a dehumanizing effect and threatened the authentic life of the individual. Such movements were also hostile to the US military-industrial-academic complex that had developed computer technologies, generally opposing American foreign policy and especially American military involvement in Vietnam. Computer technologies were seen as part of the power structure of this complex and were again seen as part of an oppressive dehumanizing force (Turner, 2006, p. 28-29).

This negative view of computer technologies more or less continued to hold in the New Left movements largely centred on the East Coast of the United States. However, a contrasting view began to grow in the counterculture movement developing primarily on the West Coast. Unlike the New Left, the counterculture became disaffected with traditional methods of social change, such as staging protests and organizing unions. It was thought that these methods still belonged to the traditional systems of power and, if anything, compounded the problems caused by those systems. To effect real change, it was believed, a shift in consciousness was necessary (Turner, 2006, p. 35-36).

Rather than seeing technologies as necessarily dehumanizing, some in the counterculture took the view that technology would be part of the means by which people liberated themselves from stultifying traditions. One major influence on this view was Marshall McLuhan, who argued that electronic media would become an extension of the human nervous system and would result in a new form of tribal social organization that he called the “global village” (McLuhan, 1962). Another influence, perhaps even stronger, was Buckminster Fuller, who took the cybernetic view of the world as an information system and coupled it with the belief that technology could be used by designers to live a life of authentic self-sufficiency (Turner, 2006, p. 55-58).

In the late 1960s, many in the counterculture movement sought to effect the change in consciousness and social organization that they wished to see by forming communes (Turner, 2006, p. 32). These communes would embody the view that it was not through political protest but through the expansion of consciousness and the use of technologies (such as Buckminster Fuller’s geodesic domes) that a true revolution would be brought about. To supply members of these communes and other wayfarers in the counterculture with the tools they needed to make these changes, Stewart Brand developed the Whole Earth Catalog (WEC). The WEC provided lists of books, mechanical devices, and outdoor gear that were available through mail order for low prices. Subscribers were also encouraged to provide information on other items that would be listed in subsequent editions. The WEC was not a commercial catalogue in that it wasn’t possible to order items from the catalogue itself. It was rather a publication that listed various sources of information and technology from a variety of contributors. As Fred Turner argues (2006, p. 72-73), it was seen as a forum by means of which people from various communities could collaborate.

Like many others in the counterculture movement, Stewart Brand immersed himself in cybernetics literature. Inspired by the connection he saw between cybernetics and the philosophy of Buckminster Fuller, Brand used the WEC to broker connections between ARC and the then flourishing counterculture (Turner, 2006, p. 109-10). In 1985, Stewart Brand and former commune member Larry Brilliant took the further step of uniting the two cultures and placed the WEC online in one of the first virtual communities, the Whole Earth ‘Lectronic Link, or “WELL”. The WELL included bulletin board forums, email, and web pages, and it grew from a source of tools for counterculture communes into a forum for discussion and collaboration of any kind. The design of the WELL was based on communal principles and cybernetic theory: it was intended to be a self-regulating, non-hierarchical system for collaboration. As Turner notes (2005), “Like the Catalog, the WELL became a forum within which geographically dispersed individuals could build a sense of nonhierarchical, collaborative community around their interactions” (p. 491).

This confluence of military-industrial-academic complex technologies and the countercultural communities who put those technologies to use would form the roots of other hypermedia technologies. The ferment of the two cultures in Silicon Valley would result in the further development of the internet—the early dependence on text being supplanted by the use of text, image, and sound, transforming hypertext into full hypermedia. The idea of a self-regulating, non-hierarchical network would moreover result in the creation of the collaborative, social-networking technologies commonly denoted as “Web 2.0”.

This brief survey of the history of hypermedia technologies has shown that the lifeworld in which these technologies developed was one first imagined in the field of cybernetics. It is a lifeworld characterized by non-hierarchical, self-regulating systems and by the project of collaborating and sharing information. First of all, it is characterized by non-hierarchical organizations of individuals. Even though these technologies first developed within the hierarchical system of the military-industrial-academic complex, they grew within a subculture of collaboration among scientists and engineers (Turner, 2006, p. 18). Rather than being strictly regimented, prominent figures in this subculture – including Wiener, Bush, and Engelbart – voiced concern over the possible authoritarian abuse of these technologies (ibid., p. 23-24).

The lifeworld associated with hypermedia is also characterized by the non-hierarchical dissemination of information. Rather than belonging to traditional institutions consisting of authorities who distribute information to others directly, these technologies involve the spread of information across networks. Such information is modified by individuals within the networks through the use of hyperlinks and collaborative software such as wikis.

The structure of hypermedia itself is also arguably non-hierarchical (Bolter, 2001, p. 27-46). Hypertext, and by extension hypermedia, facilitates an organization of information that admits of many different readings. That is, it is possible for the reader to navigate links and follow what Bush called different “trails” of connected information. Printed text generally restricts reading to one trail or at least very few trails, and lends itself to the organization of information in a hierarchical pattern (volumes divided into books, which are divided into chapters, which are divided into paragraphs, et cetera).
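This contrast between the single (or few) trails of print and the many trails of hypertext can be made concrete with a small sketch. The documents and links below are invented for illustration, not drawn from any of the works cited: modelled as a directed graph, even a tiny hypertext admits several distinct “trails” between the same two documents, whereas a strict hierarchy (a tree) permits exactly one path.

```python
# A hypothetical four-document hypertext modelled as a directed graph:
# each key is a document, each value the documents it links to.
hypertext = {
    "intro": ["history", "theory"],
    "history": ["theory", "conclusion"],
    "theory": ["history", "conclusion"],
    "conclusion": [],
}

def trails(graph, start, end, path=None):
    """Enumerate every acyclic trail of links from start to end."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # do not revisit a document within one trail
            found.extend(trails(graph, nxt, end, path))
    return found

for t in trails(hypertext, "intro", "conclusion"):
    print(" -> ".join(t))
```

On this toy graph the enumeration yields four distinct trails from “intro” to “conclusion”; a tree over the same four documents would yield exactly one, which is the structural sense in which print organizes reading hierarchically and hypertext does not.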

It is clear that the advent of hypermedia has been accompanied by changes in hierarchical organizations in lifeworlds and practices. One obvious example would be the damage that has been sustained by newspapers and the music industry. The phenomenological view of technologies as connected to lifeworlds and practices provides a more sophisticated account of this change than either the technological determinist view that hypermedia itself has brought about changes in society or the instrumentalist view that technologies are value-neutral and that these changes have been brought about by choice alone (Chandler, 2002). It would rather suggest that hypermedia is connected to practices that largely preclude both the hierarchical dissemination of information and the institutions involved in such dissemination. As such, it cannot but threaten institutions such as the music industry and newspapers. As Postman (1993) observes, “When an old technology is assaulted by a new one, institutions are threatened” (p. 18).

Critics of hypermedia technologies, such as Andrew Keen (2007), have generally focussed on this threat to institutions, arguing that it undermines traditions of rational inquiry and the production of quality media. To some degree such criticisms are an extension of a traditional critique of modernity made by authors such as Alan Bloom (1987) and Christopher Lasch (1979). This would suggest that such criticisms are rooted in more perennial issues concerning the place of tradition, culture, and authority in society, and it is not likely that these issues will subside. However, it is also unlikely that there will be a return to the state of affairs before the inception of hypermedia. Even the most strident critics of “Web 2.0” technologies embrace certain aspects of it.

The lifeworld of hypermedia does not necessarily oppose traditional sources of expertise to the extent that the descendants of the fiercely anti-authoritarian counterculture may suggest, though. Advocates of Web 2.0 technologies often appeal to the “wisdom of crowds”, alluding to the work of James Surowiecki (2005). Surowiecki offers the view that, under certain conditions, the aggregation of the choices of independent individuals results in a better decision than one made by a single expert. He is mainly concerned with economic decisions, offering his theory as a defence of free markets. Yet this theory also suggests a general epistemology, one which would contend that the aggregation of the beliefs of many independent individuals will generally be closer to the truth than the view of a single expert. In this sense, it is an epistemology modelled on the cybernetic view of self-regulating systems. If it is correct, knowledge would be the result of a cybernetic network of individuals rather than a hierarchical system in which knowledge is created by experts and filtered down to others.
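The statistical intuition behind Surowiecki’s claim can be sketched in a toy simulation; the numbers here are invented for illustration and are not taken from his book. If many individuals make independent, unbiased guesses at a quantity, the error of their average shrinks as the crowd grows, so a large crowd of individually inaccurate guessers can rival a single, far more accurate expert:

```python
import random

random.seed(0)
TRUTH = 100.0  # the quantity everyone is trying to estimate

# 1,000 laypeople each give a noisy but independent, unbiased estimate;
# a single "expert" estimates with a much smaller individual error.
crowd = [random.gauss(TRUTH, 30) for _ in range(1000)]
expert = random.gauss(TRUTH, 5)

crowd_average = sum(crowd) / len(crowd)
print("crowd error: ", abs(crowd_average - TRUTH))
print("expert error:", abs(expert - TRUTH))
```

Under the conditions Surowiecki emphasizes (independence and diversity of estimates), the crowd’s averaging error is typically well below the expert’s; when the guesses are correlated or systematically biased, the advantage disappears, which is why those conditions matter to his argument.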

The main problem with the “wisdom of crowds” epistemology as it stands is that it does not explain the development of knowledge in the sciences and the humanities. Knowledge of this kind doubtless requires collaboration, but in any domain of inquiry this collaboration still requires the individual mastery of methodologies and bodies of knowledge. It is not the result of mere negotiation among people with radically disparate perspectives. These methodologies and bodies of knowledge may change, of course, but a study of the history of sciences and humanities shows that this generally does not occur through the efforts of those who are generally ignorant of those methodologies and bodies of knowledge sharing their opinions and arriving at a consensus.

As a rule, individuals do not take the position of global skeptics, doubting everything that is not self-evident or that does not follow necessarily from what is self-evident. Even if people would like to think that they are skeptics of this sort, to offer reasons for being skeptical about any belief they will need to draw upon a host of other beliefs that they accept as true, and to do so they will tend to rely on sources of information that they consider authoritative (Wittgenstein, 1969). Examples of the “wisdom of crowds” will also be ones in which individuals each draw upon what they consider to be established knowledge, or at least established methods for obtaining knowledge. Consequently, the wisdom of crowds is parasitic upon other forms of wisdom.

Hypermedia technologies and the practices and lifeworld to which they belong do not necessarily commit us to the crude epistemology based on the “wisdom of crowds”. The culture of collaboration among scientists that first characterized the development of these technologies did not preclude the importance of individual expertise. Nor did it oppose all notions of hierarchy. For example, Engelbart (1962) imagined the H-LAM/T system as one in which there are hierarchies of processes, with higher executive processes governing lower ones.

The lifeworlds and practices associated with hypermedia will evidently continue to pose a challenge to traditional sources of knowledge. Educational institutions have remained somewhat unaffected by the hardships faced by the music industry and newspapers due to their connection with other institutions and practices such as accreditation. If this phenomenological study is correct, however, it is difficult to believe that they will remain unaffected as these technologies take deeper root in our lifeworld and our cultural practices. There will continue to be a need for expertise, though, and the challenge will be to develop methods for recognizing expertise, both in the sense of providing standards for accrediting experts and in the sense of providing remuneration for expertise. As this concerns the structure of lifeworlds and practices themselves, it will require a further examination of those lifeworlds and practices and an investigation of ideas and values surrounding the nature of authority and expertise.


Bloom, A. (1987). The closing of the American mind. New York: Simon & Schuster.

Bolter, J. D. (2001) Writing space: Computers, hypertext, and the remediation of print (2nd ed.). New Jersey: Lawrence Erlbaum Associates.

Bush, V. (1945). As we may think. Atlantic Monthly. Retrieved from

Chandler, D. (2002). Technological or media determinism. Retrieved from

Engelbart, D. (1962) Augmenting human intellect: A conceptual framework. Menlo Park: Stanford Research Institute.

Heidegger, M. (1993). Basic writings. (D.F. Krell, Ed.). San Francisco: Harper Collins.

—–. (1962). Being and time. (J. Macquarrie & E. Robinson, Trans.). San Francisco: Harper Collins.

Keen, A. (2007). The cult of the amateur: How today’s internet is killing our culture. New York: Doubleday.

Lasch, C. (1979). The culture of narcissism: American life in an age of diminishing expectations. New York: W.W. Norton & Company.

McLuhan, M. (1962). The Gutenberg galaxy. Toronto: University of Toronto Press.

Postman, N. (1993). Technopoly: The surrender of culture to technology. New York: Vintage.

Surowiecki, J. (2005). The wisdom of crowds. Toronto: Anchor.

Turner, F. (2006). From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. Chicago: University of Chicago Press.

—–. (2005). Where the counterculture met the new economy: The WELL and the origins of virtual community. Technology and Culture, 46(3), 485–512.

Wittgenstein, L. (1969). On certainty. New York: Harper.

November 29, 2009

Commentary 1: An Observation of How Orality and Literacy Have Changed Interactions Between People

Technology has made significant impacts on oral and written communication and interaction. The differences between oral and literate cultures can be observed through the introduction and evolution of writing technologies. Ong (2002) posits that oral cultures developed mnemonic patterns to aid in the retention of thought, while literacy forces the creation of grammatical rules and structured dialogue. The jump from orality to literacy would have been a challenge for cultures wishing to preserve their traditions and thoughts in writing, and yet the ability to write and record information has enabled many cultures to pass down important knowledge to future generations.

Ong (2002) explains how, despite being a late development in human history, writing is a technology that has shaped and powered intellectual activity, and that written symbols are more than a mere memory aid. As Ong outlines, oral cultures had the challenge of retaining information in a particular manner; when oral speech is written down, its characteristics become evident in certain patterns of speech. Given that oral cultures had the challenge of retaining information, does literacy require orality? Postman (1992) supports Thamus’ belief that “proper instruction and real knowledge must be communicated” and further argues that, despite the prevalence of technology in the classroom, orality still has a place in the space for learning.

As writing technologies evolve, culture and society tend to evolve toward the technology, developing new ways to organize and structure knowledge (Ong, 2002) in order to communicate information, and changing the way interactions take place. The construction of speech and the construction of text change depending on the technology. For instance, the computer permits the individual to delete or backspace over errors and to construct sentences in different ways with the assistance of automatic synonyms, a thesaurus, or a dictionary. Before the computer, errors could not be so easily changed: the typewriter’s ink would remain on the paper until the invention of white-out. Tracking the changes to the original Word document with which this paper was composed would reveal the number of modifications and deletions – a feature of technology that has no counterpart in orality, because listeners may note errors in speech but cannot effectively track where each error was made. In public speech, one can observe the changes in behaviour, the pauses, and the “umms” and “uhhs” of speech. This, too, is a way in which oral interaction differs from written interaction.

With text messaging, the construction of information is often shortened, even more so than with instant messaging. The abbreviated format of text, designed to fit within a limited space, has taught individuals to construct conversations differently, in a manner that would not have been common 15 to 20 years ago. The interaction between individuals has changed, since text messaging requires the recipient to decipher the abbreviated format. In a sense, text messaging uses a form of mnemonics to convey messages from one person to another. This seemingly new form of literacy in some cases requires more abstract thinking and, as Postman (1992) suggests, may require orality to communicate the true message, which may occur in the form of a phone call.

Learning materials presented in shorter formats become more important, particularly for educational technologies like mobile learning, where technologies such as netbooks and mobile phones are utilized for classroom learning. Postman (1992) posits that there is a need for an increased understanding of the efficiency of the computer as a teaching tool and of how it changes the learning process. With mobile technologies, interaction could be limited by abbreviated formats, as seen with text messaging, and may not be an effective form of learning for some students. Despite the invention of newer technologies, orality often helps clarify thought processes, concepts, and information. While a student can absorb knowledge through literacy alone, orality can assist in the retention of information.

The complexity of written communication can be taken down to its most basic level with pictograms – images that can be recognized and deciphered by most individuals. Gelb (in ETEC 540) argues that limited writing systems like international traffic signs avoid language and can nevertheless be deciphered by illiterates or speakers of other languages. Although most traffic signs are clear, some do require translation for their meaning to be understood, whether the translation is made orally or in writing. Ong (2002) supports the notion that codes need a translation that goes beyond pictures, “either in words or in a total human context, humanly understood” (p. 83).

While writing and writing technologies have evolved and changed the way interactions and communication take place, one thing has not changed: the need to find the most basic way to communicate with individuals who cannot read one’s language – something that orality cannot accomplish with individuals who are unfamiliar with the spoken language. Thamus feared that writing would be a burden to society, but its advantages outweigh its disadvantages (in Postman, 1992).


Gelb, I. J. (2009). Module 2: From Orality to Literacy. In ETEC 540 – Text Technologies: The Changing Spaces of Reading and Writing. Retrieved October 4, 2009 from

Ong, W. J. (2002). Orality and Literacy. London: Routledge.

Postman, N. (1992). Technopoly: The Surrender of Culture to Technology. New York: Vintage Books.

October 6, 2009

Cautions and Considerations for Technological Change: A Commentary on Neil Postman’s The Judgment of Thamus


Natalie Giesbrecht
ETEC 540
University of British Columbia
October 4, 2009


Kurzweil (2001) suggests that nowadays there is a common expectation of “continuous technological progress and the social repercussions that follow” (para. 2). In “The Judgment of Thamus”, chapter one of Technopoly, Neil Postman (1992) cautions us about the implications of technological innovation. More specifically, he warns us of the “one-eyed prophets”, or Technophiles, “who see only what new technologies can do and are incapable of imagining what they will undo” (Postman, 1992, p. 5). Postman consciously avoids boasting of the opportunities of new technologies in favour of reminding us of the dangers of blindly accepting them. This skepticism and somewhat alarmist attitude could be construed as what Chandler (2000) calls “pessimistic determinism” – an almost fatalist perception in which we cannot escape the wrath of technology (para. 14). What we are left with is an unbalanced argument whereby Postman assumes his readers are naïve and may well fall prey to the technological imperative. Underlying his negative outlook, though, Postman presents key points to consider when thinking about technological change: 1) costs and benefits; 2) winners and losers; and 3) ecological impact.

Costs and Benefits

Postman (1992) uses Plato’s Phaedrus as a springboard for his discussion on technological change. From this story we learn that it is a mistake to believe that “technological innovation has a one-sided effect” (Postman, 1992, p. 4). Postman (1992) argues that every culture must always be in negotiation with technology as it can “giveth” and “taketh away” (p. 5). This stance asserts that technology is an autonomous force, and as Chandler (2001) explains it, technology becomes “an independent, self-controlling, self-determining, self-generating, self-propelling, self-perpetuating and self-expanding force” (para. 1). Postman briefly attempts to illustrate a more balanced critique of the costs and benefits of technological innovation by citing Freud (1930):

…If I can, as often as I please hear the voice of a child of mine who is living hundreds of miles away…if there had been no railway to conquer distances, my child would never have left his native town (p. 70 as cited in Postman, 1992, p. 6).

Postman might argue here, what has technology undone? He contends that there are unforeseen side-effects of technology and that we can’t predict what is at the end of the road of technological progress – as “our inventions are but improved means to an unimproved end” (Thoreau, 1908, p. 45 as cited in Postman, 1992, p. 8).

Winners and Losers

Innis (1951) discussed the idea of ‘knowledge monopolies’, in which those who control particular technologies gain power and conspire against those “who have no access to the specialized knowledge made available by the technology” (Postman, 1992, p. 9). Postman (1992) implies that the benefits and costs of technology are not equally distributed throughout society and that there are clear winners and losers. A key example he refers to is the blacksmith, who praises the development of the automobile but whose profession is eventually rendered obsolete by it (Postman, 1992). Again, this viewpoint sees technology “as an autonomous force acting on its users” (Chandler, 2008, para. 8).

There is an unspoken expectation that the winners will encourage the losers to be advocates for technology; however, in the end the losers will surrender to those who have specialized technological knowledge (Postman, 1992). Postman (1992) states that in democratic cultures, which are highly receptive and enthusiastic toward new technologies, technological progress will “spread evenly among the entire population” (p. 11). This sweeping statement is what Rose (2003) warns us against. Postman writes off the entire population as passive, mindless victims who have fallen prey to the autonomy of technology. However, he fails to acknowledge that people may “resist the reality of technological impacts and imperatives every day” (Rose, 2003, p. 150).

Ecological Impact

Technological change is ecological, and when new technologies compete with old ones it becomes a battle of world-views (Postman, 1992). For instance, a tug-of-war occurred when print entered the oral space of the classroom. On one side is orality, which “stresses group learning, cooperation, and a sense of social responsibility”, and on the other is print, which fosters “individualized learning, competition, and personal autonomy” (Postman, 1992, p. 17). Each medium eventually found its respective place in changing the environment of learning. Now orality and print wage a new war with computers. Postman (1992) asserts that each time a new technology comes along it “does not add or subtract something. It changes everything” (p. 18). Institutions mirror the world-view endorsed by a technology, and when a new technology enters the scene the institution is threatened – “culture finds itself in crisis” (Postman, 1992, p. 18). With this, Postman gives us a sense that technology is out of control, further evidencing his alarmist view of technological change.

Finally, the ecological impact of technology extends beyond our social, economic, and political world to enter our consciousness. Postman (1992) believes that technology alters what we think about, what we think with, and the environment in which thought is developed. He suggests that the population has a “dull” and “stupid awareness” of the ecological impact of technology (Postman, 1992, p. 20) – indicating that technology may be ‘pulling the wool’ over our eyes.


Rose (2003) warns us against taking extreme stances on technological change – this leads to ideas that “become concretized in absolute terms rather than remaining fluid and open for analysis and debate” (p. 155). Nardi and O’Day (1999) suggest that extreme positions in technology critique should be replaced by a middle ground where we carefully consider the impact of both sides without hastily rejecting one or the other (p. 20). Although it is clear that Postman is biased toward a pessimistic outlook on technological change, he presents several key points that encourage us to think twice before accepting any technology and to “do so with our eyes wide open” (p. 7). In the end, it is difficult to look past Postman’s bias, and thus it remains questionable whether culture has in fact blindly surrendered to technology as he suggests.


Chandler, D. (2000). Techno-evolution as ‘progress’. In Technological or media determinism. Retrieved October 2, 2009, from

Chandler, D. (2001). Technological autonomy. In Technological or media determinism. Retrieved October 2, 2009, from

Chandler, D. (2008). Technology as neutral or non-neutral. In Technological or media determinism. Retrieved October 2, 2009, from

Innis, H. A. (1951). The bias of communication. Toronto, ON: University of Toronto Press.

Kurzweil, R. (2001). The law of accelerating returns. Retrieved October 2, 2009, from

Nardi, B. A., & O’Day, V. L. (1999). Information ecologies: Using technology with heart. Cambridge, MA: MIT Press.

Postman, N. (1992). Technopoly: The surrender of culture to technology. New York: Vintage Books.

Rose, E. (2003). The errors of Thamus: An analysis of technology critique. Bulletin of Science, Technology and Society, 23, 147-156.

Thoreau, H.D. (1908). Walden. London: J.M. Dent & Sons, Ltd.

October 4, 2009   2 Comments

How has the Technology of Writing Changed the Act of Teaching?

17th Century Parish Registry Letters
Graphic taken with permission from

In my opinion…

Teaching today is highly dependent upon technologies such as writing. As members of a literate culture, we have made writing the focal point of almost all of our learning. From the time we are born we are encouraged to learn how to write. Whether in our early years, when we learned to write the alphabet, build words and create sentences, or during adolescence, when we were expected to take notes and write comprehensive papers, writing has always been and will probably always be a focal point in our lives.

It is hard for us to imagine what the written word has done for learning because, as literate people, we haven’t experienced anything else. We have no point of reference other than what we have read or heard about: “… to try to construct a logic of writing without investigation in depth of the orality out of which writing is permanently and ineluctably grounded is to limit one’s understanding…” (Ong, p. 76). Before we can discuss how writing has changed how we teach, it seems logical to first consider how teaching was performed prior to the introduction of the written word.

Language existed long before writing, which meant that verbal communication was the medium through which all cultural knowledge was passed on to the next generation. As language and culture continued to evolve, so did the need for better modes of communication. Early forms of writing date back to pictographs, when people scratched drawings on stone walls depicting important events in the lives of their people, allowing the transfer of more complex information, ideas and concepts using visual clues (Kilmon). From pictographs came ideographs, or graphic symbols, such as those used by the Egyptians (hieroglyphs), the Sumerians (cuneiform) and the Chinese (Chinese characters). Writing is an extension of these and other systems in which agreed-upon simple shapes were used to create a codified system of standard symbols. These systems continued to evolve throughout ancient history. The Canaanites, the Phoenicians, the Greeks, the Romans and the Christians either modified existing systems or simply created their own. These systems were not widely understood and were used by relatively few people. More often than not, it was the clergy who played an important role in the development and maintenance of these systems during this time. It took the invention of the printing press and the printed word before literacy began to have any mass appeal (History of Handwriting).

The development of writing shifted the focus of learning from the oral to the visual, which, in turn, taught us how to interiorize it, thus changing the nature of how we learn. “Writing… is not a mere appendage to speech. Because it moves speech from the oral-aural to a new sensory world, that of vision, it transforms speech and thought as well” (Ong, p. 84). Speaking and writing are two different processes. Speech is universal: everybody acquires it. Writing is not speech written down; it requires systematic instruction followed by practice, and not everyone learns to read and write (Literacy Skills: Speaking vs. Writing). “Clearly, there are fundamental differences between the medium of writing and the medium of speech which constitute ‘constraints’ on the ways in which they may be used” (Chandler). It has taken a considerable amount of time for writing to supersede speech as the primary tool for learning. The transition, while it may have been awkward for some, has succeeded in altering the way in which we now learn.

Learning in today’s literate culture relies more on text and writing and less on the spoken word. We devote more time to teaching students how to read and write, and we expect them to use these newly acquired literacy skills to think and to reason intellectually. As teachers, we tend to measure success with either a letter grade or a number grade. “If it makes sense to us, that is because our minds have been conditioned by the technology of numbers so that we see the world differently than they did” (Postman, p. 13). For teachers, marking or grading is synonymous with learning. Our indoctrination into the literate culture is so complete that it is difficult, if not impossible, to separate the two. Our dependence on the written word has taken us head on into a new era known as the information age.

The onslaught of the digital age has raised many new issues within education, and with existing teaching models in particular. The field of information technology has grown so rapidly that it is impossible to keep up with the pace. Technical gadgetry continues to astound even the most computer-savvy individual, and while these technologies may have heightened our awareness of the digital world we now live in, they may also have dulled our sensitivity to the dominance that world has over our literate one. “… embedded in every tool is an ideological bias, a predisposition to construct the world as one thing rather than another, to value one thing over another, to amplify one sense or skill or attitude more loudly than another” (Postman, p. 13). Technology – in this case the computer and the Internet, which have given us instant access to knowledge in the public domain – has challenged the way we view teaching and learning much the same way that writing did long ago.

In conclusion, while I may not necessarily agree with Ong’s statements about the dichotomy of oral and literate cultures, I do believe there is some merit to his separation of the two. If nothing else, it helps explain how previous learning practices may have been altered and how current teaching practices will be shaped. I have similar doubts about Postman – about his technopoly taxonomy and his position on computer technologies – yet he does get us to think about how these technologies can alter our conception of learning. It seems apparent that, for the most part, we are in uncharted waters. As teaching practitioners, we have no choice but to take all of this into consideration as we go about constructing teaching strategies designed to promote practical learning and abstract thinking.


Chandler, D. (2000). Biases of the ear and eye. In Great divide theories. Retrieved September 25, 2009, from

History of Handwriting. The development of handwriting and the modern alphabet. Retrieved September 24, 2009, from

Kilmon, J. (1997). The Scriptorium: The history of writing. Retrieved September 25, 2009, from

Ong, W. J. (2002). Orality and literacy. London, England: Routledge/Taylor & Francis Group.

Postman, N. (1992). Technopoly: The Surrender of Culture to Technology. New York: Vintage Books.

Todd, J. (2001). Examples of letters of the 17th century found in parish registers. Retrieved September 17, 2009, from

University of Westminster, Learning Skills site. Literacy skills: Speaking vs. writing. Retrieved September 23, 2009, from

October 2, 2009   1 Comment

Closing the gap or re-wiring our brains? Maybe both!

Ong states that “the electronic transformation of verbal expression has both deepened the commitment of the word to space initiated by writing and intensified by print and has brought consciousness to a new age of secondary orality” (p. 133). Secondary orality is the way in which technology has transformed the medium through which we send and receive information. Ong includes examples such as the telephone, radio, television, various kinds of sound tape, and other electronic technology (Ong, p. 132).

Ong discusses Lowry’s argument that the printing press, in creating the ability to mass-produce books, makes people less studious (Ong, p. 79). Lowry continues: “it destroys memory and enfeebles the mind by relieving it of too much work (the pocket-computer complaint once more), downgrading the wise man and wise woman in favor of the pocket compendium. Of course, others saw print as a welcome leveler: everyone becomes a wise man or woman (Lowry 1979, pp. 31-2)” (Ong, p. 79).

The World Wide Web has opened up an entirely new sense of “secondary orality”. Prior to the WWW, texts were written primarily by one author or a small group of authors and were read by a specific audience. Today, with the advent of Web 2.0, the underlying tenets of oral cultures and literate cultures are coming closer together. Even within ETEC540 we communicate primarily by text, but we are not entering our own private reading worlds; we are entering a text-based medium through which we can read and respond to each other’s blog posts (such as this one). In addition, we will contribute to a class wiki where the information is dynamic and constantly changing. How, then, is the WWW changing the way we interpret, digest, and process information?

The Internet has brought about a new revolution in the distribution of text. Google’s vision of one library that contains all of the world’s literature demonstrates that “one significant change generates total change” (Postman, p. 18). Nicholas Carr, in his article “Is Google Making Us Stupid?”, and Anthony Grafton, in Paul Kennedy’s podcast “The Great Library 2.0”, make similar arguments about the Internet. As Carr points out, the media through which we receive information not only supply information; “they also shape the process of thought”.

Carr contends that the mind may now be absorbing and processing information “the way the Net distributes it: in a swiftly moving stream of particles.” That is, information is no longer static; it is dynamic, ever-changing, easily accessible and searchable. Carr gives the example that many of his friends and colleagues in academia have noticed that “the more they use the Web, the more they have to fight to stay focused on long pieces of writing.”

Comparably, Google’s attempt to digitize all the text on earth into a new “Alexandria” is certainly an ambitious project, but as Postman states, new technology “is both a burden and a blessing; not either-or, but this-and-that” (Postman, p. 5). Some see the library as liberating, making an unfathomable amount of knowledge available to anyone with an Internet connection. Others, such as Anthony Grafton, argue that reading text off the screen takes away from the romantic adventure of being the first to read a rare book found in the library of a far-off country (Grafton in The Great Library 2.0). Grafton also argues that the ability to search for key words in electronic texts has created “on-time research”, which has made academics and others work at a rapid pace and fill in parts of their work very late using Internet sources. Carr cites other examples of academics who have lost the ability to read and absorb long texts but instead have gained the ability to scan “short passages of text from many sources online.”

Lowry’s argument – that, to some, print destroyed memory and debilitated the mind, while to others it created equal access to text – has repeated itself with the advent of the Internet. Carr and Grafton both argue that instantaneous access to huge databases of information such as Google Books may be detracting from our ability to absorb texts. That being said, Postman states that “once a technology is admitted, it plays out its hand; it does what it is designed to do. Our task is to understand what that design is – that is to say, when we admit a new technology to the culture, we must do so with our eyes wide open” (Postman, p. 7). Thus, perhaps there is no point in arguing the negatives. Whether it is Google or a different organization that makes all the printed text in the world available to us, this is the direction technology is taking us, and there will likely be nothing to stop it. The question is, what will our societies and cultures look like after it is all done? It will not be the world plus Library 2.0, but an entirely new world.


Ong, W. J. (1982). Orality and literacy: The technologizing of the word. London and New York: Methuen.

Kennedy, P. (Host). (August 24, 2009). Ideas: The Great Library 2.0. Podcast retrieved from

Postman, N. (1992). Technopoly: The surrender of culture to technology. New York: Vintage Books.

Carr, N. (2008). Is Google making us stupid? The Atlantic, July/August 2008. Retrieved September 30, 2009, from

October 2, 2009   2 Comments

Technology as a way of revealing

I noticed that Rich had already posted a passage from Heidegger and “The Question Concerning Technology”, but I would like to discuss another part of it. Early in the essay Heidegger states that “technology is a way of revealing”. I think this is important, and that the “revealing” Heidegger mentions is closely connected with what he elsewhere calls “regioning”, which he says opens “the clearing of Being”. By this he does not mean a type of conscious or unconscious thought but rather something that makes both of these possible to begin with. It is tied up with speaking a language and with dwelling among other people in the world. (He gets rather mystical when he tries to describe it in his later writings.) The view of technology as a way of revealing suggests that technology is inextricably bound up with the way in which we live, our practices, and our institutions. It would support Neil Postman’s claim that a technology’s function follows from its form and that new technologies threaten institutions. It may be a bit disturbing, though, as we usually like to think of ourselves as rational beings who can represent technology objectively and freely decide how we will use it. As Heidegger himself explains at the end of the essay, however, this is not necessarily a fatalistic picture.

September 24, 2009   No Comments

Module 1 Reflection: Collecting People, Collecting Knowledge

Last semester, for ETEC*565, I created a blog which housed my personal learning journey for the course. This was my first foray into blogging. From this experience I learned how important it is to contribute regularly to a blog in order to fully engage with the subject matter. As well, I ended up with a nice snapshot of my growth over the course of the semester. Our Community Weblog takes a different approach in that all of us post to the same blog – a shift from building individual knowledge to creating distributed knowledge, built on collaborative and network literacies. This exposes all of us to diverse perspectives, allowing each of us to construct new understandings of the topics raised. As our communal writing space archives all contributions, we will also be able to reflect back on our collaborative journey by the end of the course.

Postman (1992) states that “new technologies alter the structure of our interests: the things we think about. They alter the character of our symbols: the things we think with. And they alter the nature of community: the arena in which thoughts develop” (p. 20). With our Community Weblog environment we can create a knowledge database with a structure that allows for spontaneity, the ability to link back to original sources, and the ability to embed images and video to support our writing. However, the disorienting nature of the navigation differs so greatly from the linear ways of learning most of us are familiar with that it can be frustrating to determine where to start or how to proceed with participating.

On the other hand, the blog environment encourages us to push our understandings beyond text-to-text communication to create dynamic and flexible interpretations that we can continue to build on. Unlike our discussion forums, ideas are much more dispersed and are organized only by the tags and categories the participants employ. This allows for each individual to create their own connections and take responsibility for their own learning. The comments feature allows us to make connections and begin to tie ideas together. We all are responsible for the success of our shared space and thus participation will be essential throughout the semester for this to be a truly valuable experience.

On that note, with two sections contributing to the blog, the number of posts to date is overwhelming – almost too many to keep track of. The search, tagging and category functions are absolutely key to finding information – somewhat of an organized chaos!

Karen Stephenson states that “experience has long been considered the best teacher of knowledge. Since we cannot experience everything, other people’s experiences, and hence other people, become the surrogate for knowledge. ‘I store my knowledge in my friends’ is an axiom for collecting knowledge through collecting people” (Stephenson, in Siemens, 2004, An Alternative Theory, para. 1). As we all bring different skill sets, experiences and knowledge to the table, as a collective we are much stronger than we are on our own. Our weblog then becomes yet another link in our personal learning networks. It will be truly interesting to see how this space evolves over time.


Postman, N. (1992). Technopoly: The surrender of culture to technology. New York: Vintage Books.

Siemens, G. (2004). Connectivism: A learning theory for the digital age. Retrieved September 20, 2009, from

September 20, 2009   No Comments