The Changing Spaces of Reading and Writing

Making Connections

Personal Connections – Learning

Four years ago I suffered an injury that tore one of the tendons that controls movement in my thumb. I eventually regained use of the thumb and was able to perform all daily activities with little trouble. All except one: writing was difficult and very painful. So I turned to the computer. I bought myself a laptop, and thankfully my part-time job was in a fully computerized environment. At first I saw the computer as a very efficient substitute writing tool; it was much quicker to type than to jot down notes. About half a year later, however, I began to feel that learning had become more difficult and more fatiguing, and that my creativity had suffered.

It wasn't until I took this course that I began to really investigate the relationship between the two: the way I write and the way I learn.

Thinking about the definition of text and looking at the evolution of writing spaces and technologies made me reflect on my current and previous modes of learning. My earlier notes were meticulously underlined, highlighted, and written in different colours (while this is also possible on the computer, I rarely used these functions because I owned a black and white laser printer). The handwriting was all over the page, with little clumps of information connected by arrows and diagrams. The margins were reserved for 'outside links', where I made personal connections and devised memory aids to help me synthesize and remember information and ideas. This practice also extended to any papers, textbooks, and novels that I read. The injury, however, discouraged this, and I ended up typing a few notes on the computer instead of writing directly on the page, which made the information feel … disconnected.

Remediation

The concept of remediation was also very useful in my understanding of the difficulties with embracing technological use in schools. As a TOC (teacher on call) I visited many schools and saw many classrooms wherein the computer lab was used for typing lessons, KidPix, or research. Many schools also have Interactive White Boards (IWBs), and teachers use them as, in essence, a very cool replacement for a worksheet. Remediation helps frame and pinpoint the reason for this phenomenon: the use of technology is not just a set of skills; it is a change in thinking and pedagogy. Literacy is not just literacy anymore; it has become multiliteracies and Literacy 2.0. Teachers cannot continue to teach reading and writing the same way as before, because text is not the same anymore.

December 22, 2009   No Comments

Remediation

"…a newer medium takes the place of an older one, borrowing and reorganizing the characteristics of writing in the older medium and reforming its cultural space." (Bolter, 2001, p. 23)

Bolter's (2001) definition of remediation struck me a bit like a Eureka! moment as I sat at lunch in the school staffroom, overhearing a rather fervent conversation between a couple of teachers regarding how computers are destroying our children. They noted how their students cannot form their letters properly and can barely print, not to mention write in cursive that is somewhat legible. The discussion became increasingly heated as one described how children could not read as well because of the advent of graphic novels, and her colleague gave an anecdote about her students' lack of ability to edit. When the bell rang to signal the end of lunch, out came the conclusion: students now are less intelligent because they are reading and writing less, and in so doing are communicating less effectively.

In essence, my colleagues were discussing what we are losing in terms of print—the forming of letters, handwriting—the physicality of writing. However, I wonder how much of an impact that makes on the world today, and 20 years from now when the aforementioned children become immersed in, and begin to affect, society. Judging from the current trend, in 20 years' time it is possible that most people will have access to some sort of keypad that makes the act of holding a pen obsolete. Yes, it is sad, because calligraphy is an art form in itself, yet it strikes me that having these tools allows us the time and brain power to do other things. Take, for example, graphic novels. While some graphic novels are heavily image-based, there are many that have a more balanced text-image ratio. In reading the latter, students are still reading text, and the images help them understand the story. By making comprehension easier, the images free students' time and attention for deeper understanding, such as making connections with personal experiences, other texts, or other forms of multimedia.

As for communication, Web 2.0 is anything but antisocial. Everything from blogs and forums to Twitter and YouTube has social aspects: people can rate, tag, bookmark and leave comments. Everything, including software, data feeds, music and videos, can be remixed or mashed up with other media. In academia, writing articles was previously a more isolated activity, but with the advent of forums like arxiv.org, scholarly articles can be posted and improved much more efficiently and effectively than through the formal process that occurs when an article is sent in to a journal. More importantly, scholarly knowledge is disseminated with greater ease and accuracy.

Corporations and educational institutions are beginning to see a large influx of, and reception for, Interactive White Boards (IWBs). The IWB's large monitor, computer and internet connectivity, and touch-screen abilities make it the epitome of presentation tools. Content can be presented every which way: written text, word-processed text, websites, music, video, all (literally) at the user's fingertips. The IWB's capabilities also allow a new form of writing to occur. Previously, writing was done either with a writing instrument held in one's hand or by typing on a keyboard; IWBs allow both processes to occur simultaneously, alternately, and interchangeably. If one so chooses, the individual can type and write at the same time! IWBs are particularly relevant to the remediation of education and pedagogy itself, because the tool demands a certain level of engagement and interaction. A lesson on the difference between common and proper nouns that previously involved the teacher reading sentences, writing them on the board, and asking students to identify them could now involve the students finding a text of interest, displaying it on the IWB, and identifying the two types of nouns by directly marking up the text with the pen or highlighter tools.

Effectively, the digital world is remediating our previous notion of text in the sense of books and print. Writing—its organization, its format, and its role in culture—is being completely refashioned.

References

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

December 13, 2009   No Comments

Multimodalities and Differentiated Learning

“A picture is worth a thousand words.”

While there are many theories out there on how to meet the needs of diverse learners, there is one common theme: teach using multimodalities. The strong focus on text in education has made school difficult for a portion of students, students whose strengths and talents lie outside of verbal-linguistic and visual-spatial-type abilities. Thus the decreasing reliance on text, the incorporation of visuals and other multimedia, and the social affordances of the internet facilitate student learning.

Maryanne Wolf (2008) argues that the human brain was not built for reading text. While the brain has been able to utilize its pre-existing capabilities to adapt, lending us the ability to read, the fact that reading is not an innate ability opens us to problems such as dyslexia. Images and even aural media (such as audiobooks), however, can help offset this disadvantage. Students who find reading difficult can find extra support in listening to taped versions of class novels or other reading material. Also, students with writing output difficulties can now write with greater ease using computers or other aids such as AlphaSmart keyboards.

Kress' (2005) article highlights the difference between traditional text and the multimedia text that we often find on web pages today. While the former presents content in a given order determined by the author, Kress notes that the latter's order is more open and can be determined by the reader. One could argue that readers can still determine order with a traditional text by skipping chapters. However, chapters often flow into each other, whereas web pages are usually designed as more independent units.

In addition, Kress (2005) notes that traditional texts have only a single entry point (the beginning of the text) and a single point of departure (the end of the text). Websites, on the other hand, are not necessarily entered through their main (home) pages; readers often find themselves at a completely different website immediately after clicking on a link that looks interesting. The fact that there are multiple entry points (Kress) is absolutely critical. A fellow teacher argued that this creates problems because there is no structure to follow: with text, the author's message is linear and thus has inherent structure and logic, whereas multiple points of entry lead to divergence and learning that is less organized. Thus, the argument goes, it is better to retain text and rely less on the multimedia approach so that this structure and logic are not lost. The only problem is that such structure still only makes sense to a portion of the population. I never realized until I began teaching exactly how much my left-handedness affected my ability to explain things to others. From informal observations, it was evident that certain people find it much easier to understand me—lefties.

Kress' (2005) article discusses a third difference: the presentation of material. Writing has a monopoly over the page and how content is presented in traditional texts, while web pages often have a mix of images, text and other multimedia.

It is ironic to note that text offers differentiation too. While the words describe and denote characters and events, none of these are 'in your face': the images are not served to you; instead, you come up with the images yourself. I prefer reading because I can imagine the story as it suits me. In this sense, text provides a leeway that images do not.

Multimodalities extend into other literacies as well. Take, for example, mapping. Like words and alphabets, maps are symbolic representations of information, written down and drawn to facilitate memory and the sharing of that information. Map reading is an important skill to learn, particularly to help us navigate unfamiliar cities and roadways. However, the advent of GPS technology and Google Streetview presents a change: there is a decreasing need to be able to read a map, especially when turn-by-turn guidance is provided and Google Streetview gives an exact 360º visual representation of the street.

Yet we must be cautious in our use of multimodal tools; while multimodal learning is helpful as a way to meet the needs of different learners, too much could be distracting and thus be detrimental to learning.

References

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5–22.

Wolf, M. (2008). Proust and the Squid: The Story and Science of the Reading Brain. New York: Harper Perennial.

December 13, 2009   No Comments

Making [Re]Connections

This is one of the last courses I will be taking in the program and as the journey draws to a close, this course has opened up new perspectives on text and technology. Throughout the term, I have been travelling (more than I expected) and as I juggled my courses with the travels, I began to pay more attention to how text is used in different contexts and cultures. Ong, Bolter and the module readings were great for passing time on my plane rides – I learned quite a lot!

I enjoyed working on the research assignment where I was able to explore the movement from icon to symbol. It gave me a more in-depth look at the significance of visual images, which Bolter discusses along with hypertext. I am more used to working with text in a constrained space, but after this assignment I began thinking more about how text and technologies work in wider, more open spaces. By the final project, I found myself exploring a more open space where I could be creative – a place that is familiar to me yet still leaves much to explore – the Internet.

Some of the projects and topics that were particularly related to this new insight include:

E-Type: The Visual Language of Typography

A Case for Teaching Visual Literacy – Bev Knutson-Shaw

Language as Cultural Identity: Russification of the Central Asian Languages – Svetlana Gibson

Public Literacy: Broadsides, Posters and the Lithographic Process – Noah Burdett

The Influence of Television and Radio on Education – David Berljawsky

Remediation of the Chinese Language – Carmen Chan

Braille – Ashley Jones

Despite the challenges of following the week-to-week discussions from Vista to Wiki to Blog and to the web in general, I was on track most of the time. I will admit I got confused a couple of times and I was more of a passive participant than an active one. Nevertheless, the course was interesting and insightful and it was great learning from many of my peers. Thank you everyone.

December 1, 2009   1 Comment

Major Project – E-Type: The Visual Language of Typography

Typography shapes language and makes the written word 'visible'. With this in mind, I felt it was essential to be cognizant of how my major project would be presented in its final format. In support of my research on type in digital spaces, I created an 'electronic book' of sorts using Adobe InDesign CS4 and Adobe Acrobat 9. Essentially, I took a traditionally written essay and then modified and designed it to fit a digital space. The end result was supposed to be an interactive .swf file, but I ran into too many technical difficulties, so the final product is an interactive PDF book.

The e-book was designed to have a sequential structure, supported by a table of contents, headings and page numbering – much like that of a traditional printed book. However, the e-book extends beyond the boundaries of the ‘page’ as the user, through hyperlinks, can explore multiple and diverse worlds of information located online. Bolter (2001) uses the term remediation to describe how new technologies refashion the old. Ultimately, this project pays homage to the printed book, but maintains its own unique characteristics specific to the electronic world.

To view the book, click on the PDF Book link below. The file should open in a web browser. If you need Acrobat Reader to view the file and you do not have the latest version, you can download it here: http://get.adobe.com/reader/

You can navigate through the document using the arrows in the top navigation bar of the document window. Alternatively, you can jump to specific content by using the associated Bookmarks (located in the left-hand navigation bar) or by clicking on the chapter links in the Table of Contents. As you navigate through the pages you will be prompted to visit websites as well as complete short activities. An accessible Word version of the essay is also available below.

References

Bolter, J.D. (2001). Writing space: Computers, hypertext, and the remediation of print (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

To view my project, click on the following links:

E-Type: The Visual Language of Typography (PDF Book)

E-Type: The Visual Language of Typography (Word Version)

November 29, 2009   4 Comments

Hypermedia and Cybernetics: A Phenomenological Study

As with all other technologies, hypermedia technologies are inseparable from what is referred to in phenomenology as “lifeworlds”. The concept of a lifeworld is in part a development of an analysis of existence put forth by Martin Heidegger. Heidegger explains that our everyday experience is one in which we are concerned with the future and in which we encounter objects as parts of an interconnected complex of equipment related to our projects (Heidegger, 1962, p. 91-122). As such, we invariably encounter specific technologies only within a complex of equipment. Giving the example of a bridge, Heidegger notes that, “It does not just connect banks that are already there. The banks emerge as banks only as the bridge crosses the stream.” (Heidegger, 1993, p. 354). As a consequence of this connection between technologies and lifeworlds, new technologies bring about ecological changes to the lifeworlds, language, and cultural practices with which they are connected (Postman, 1993, p. 18). Hypermedia technologies are no exception.

To examine the kinds of changes brought about by hypermedia technologies it is important to examine the history not only of those technologies themselves but also of the lifeworlds in which they developed. Such a study will reveal that the development of hypermedia technologies involved an unlikely confluence of two subcultures. One of these subcultures belonged to the United States military-industrial-academic complex during World War II and the Cold War, and the other was part of the American counterculture movement of the 1960s.

Many developments in hypermedia can trace their origins back to the work of Norbert Wiener. During World War II, Wiener conducted research for the US military concerning how to aim anti-aircraft guns. The problem was that modern planes moved so fast that it was necessary for anti-aircraft gunners to aim their guns not at where the plane was when they fired the gun but where it would be some time after they fired. Where they needed to aim depended on the speed and course of the plane. In the course of his research into this problem, Wiener decided to treat the gunners and the gun as a single system. This led to his development of a multidisciplinary approach that he called “cybernetics”, which studied self-regulating systems and used the operations of computers as a model for these systems (Turner, 2006, p. 20-21).

This approach was first applied to the development of hypermedia in an article written by one of Norbert Wiener's former colleagues, Vannevar Bush. Bush had been responsible for instigating and running the National Defense Research Committee (which later became the Office of Scientific Research and Development), an organization responsible for government funding of military research by private contractors. Following his experiences in military research, Bush wrote an article in the Atlantic Monthly addressing the question of how scientists would be able to cope with growing specialization and how they would collate an overwhelming amount of research (Bush, 1945). Bush imagined a device, which he later called the "Memex", in which information such as books, records, and communications would be stored on microfilm. This information could be projected on screens, and the person who used the Memex would be able to create a complex system of "trails" connecting different parts of the stored information. By connecting documents into a non-hierarchical system of information, the Memex would to some extent embody the principles of cybernetics first imagined by Wiener.

Inspired by Bush's idea of the Memex, researcher Douglas Engelbart believed that such a device could be used to augment the use of "symbolic structures" and thereby accurately represent and manipulate "conceptual structures" (Engelbart, 1962). This led him and his team at the Augmentation Research Center (ARC) to develop the "oN-Line System" (NLS), an ancestor of the personal computer which included a screen, QWERTY keyboard, and a mouse. With this system, users could manipulate text and connect elements of text with hyperlinks. While Engelbart envisioned this system as augmenting the intellect of the individual, he conceived of the individual as part of a system, which he referred to as an H-LAM/T system (a trained human with language, artefacts, and methodology) (ibid., p. 11). Drawing upon the ideas of cybernetics, Engelbart saw the NLS itself as a self-regulatory system in which engineers collaborated and, as a consequence, improved the system, a process he called "bootstrapping" (Turner, 2006, p. 108).

The military-industrial-academic complex's cybernetic research culture also led to the idea of an interconnected network of computers, a move that would be key in the development of the internet and hypermedia. First formulated by J.C.R. Licklider, this idea was later executed by Bob Taylor with the creation of ARPANET (named after the defence department's Advanced Research Projects Agency). As an extension of systems such as the NLS, such a network was a self-regulating system for collaboration also inspired by the study of cybernetics.

The late 1960s to the early 1980s saw hypermedia’s development transformed from a project within the US military-industrial-academic complex to a vision animating the American counterculture movement. This may seem remarkable for several reasons. Movements related to the budding counterculture in the early 1960s generally adhered to a view that developments in technology, particularly in computer technology, had a dehumanizing effect and threatened the authentic life of the individual. Such movements were also hostile to the US military-industrial-academic complex that had developed computer technologies, generally opposing American foreign policy and especially American military involvement in Vietnam. Computer technologies were seen as part of the power structure of this complex and were again seen as part of an oppressive dehumanizing force (Turner, 2006, p. 28-29).

This negative view of computer technologies more or less continued to hold in the New Left movements largely centred on the East Coast of the United States. However, a contrasting view began to grow in the counterculture movement developing primarily on the West Coast. Unlike the New Left movement, the counterculture became disaffected with traditional methods of social change, such as staging protests and organizing unions. It was thought that these methods still belonged to the traditional systems of power and, if anything, compounded the problems caused by those systems. To effect real change, it was believed, a shift in consciousness was necessary (Turner, 2006, p. 35-36).

Rather than seeing technologies as necessarily dehumanizing, some in the counterculture took the view that technology would be part of the means by which people liberated themselves from stultifying traditions. One major influence on this view was Marshall McLuhan, who argued that electronic media would become an extension of the human nervous system and would result in a new form of tribal social organization that he called the "global village" (McLuhan, 1962). Another influence, perhaps even stronger, was Buckminster Fuller, who took the cybernetic view of the world as an information system and coupled it with the belief that technology could be used by designers to live a life of authentic self-sufficiency (Turner, 2006, p. 55-58).

In the late 1960s, many in the counterculture movement sought to effect the change in consciousness and social organization that they wished to see by forming communes (Turner, 2006, p. 32). These communes would embody the view that it was not through political protest but through the expansion of consciousness and the use of technologies (such as Buckminster Fuller’s geodesic domes) that a true revolution would be brought about. To supply members of these communes and other wayfarers in the counterculture with the tools they needed to make these changes, Stewart Brand developed the Whole Earth Catalogue (WEC). The WEC provided lists of books, mechanical devices, and outdoor gear that were available through mail order for low prices. Subscribers were also encouraged to provide information on other items that would be listed in subsequent editions. The WEC was not a commercial catalogue in that it wasn’t possible to order items from the catalogue itself. It was rather a publication that listed various sources of information and technology from a variety of contributors. As Fred Turner argues (2006, p. 72-73), it was seen as a forum by means of which people from various different communities could collaborate.

Like many others in the counterculture movement, Stewart Brand immersed himself in cybernetics literature. Inspired by the connection he saw between cybernetics and the philosophy of Buckminster Fuller, Brand used the WEC to broker connections between ARC and the then flourishing counterculture (Turner, 2006,  p. 109-10). In 1985, Stewart Brand and former commune member Larry Brilliant took the further step of uniting the two cultures and placed the WEC online in one of the first virtual communities, the Whole Earth ‘Lectronic Link or “WELL”. The WELL included bulletin board forums, email, and web pages and grew from a source of tools for counterculture communes into a forum for discussion and collaboration of any kind. The design of the WELL was based on communal principles and cybernetic theory. It was intended to be a self-regulating, non-hierarchical system for collaboration.  As Turner notes (2005), “Like the Catalog, the WELL became a forum within which geographically dispersed individuals could build a sense of nonhierarchical, collaborative community around their interactions” (p. 491).

This confluence of military-industrial-academic complex technologies and the countercultural communities who put those technologies to use would form the roots of other hypermedia technologies. The ferment of the two cultures in Silicon Valley would result in the further development of the internet—the early dependence on text being supplanted by the use of text, image, and sound, transforming hypertext into full hypermedia. The idea of a self-regulating, non-hierarchical network would moreover result in the creation of the collaborative, social-networking technologies commonly denoted as “Web 2.0”.

This brief survey of the history of hypermedia technologies has shown that the lifeworld in which these technologies developed was one first imagined in the field of cybernetics. It is a lifeworld characterized by non-hierarchical, self-regulating systems and by the project of collaborating and sharing information. First of all, it is characterized by non-hierarchical organizations of individuals. Even though these technologies first developed in the hierarchical system of the military-industrial-academic complex, they grew within a subculture of collaboration among scientists and engineers (Turner, 2006, p. 18). Rather than being strictly regimented, prominent figures in this subculture – including Wiener, Bush, and Engelbart – voiced concern over the possible authoritarian abuse of these technologies (ibid., p. 23-24).

The lifeworld associated with hypermedia is also characterized by the non-hierarchical dissemination of information. Rather than belonging to traditional institutions consisting of authorities who distribute information to others directly, these technologies involve the spread of information across networks. Such information is modified by individuals within the networks through the use of hyperlinks and collaborative software such as wikis.

The structure of hypermedia itself is also arguably non-hierarchical (Bolter, 2001, p. 27-46). Hypertext, and by extension hypermedia, facilitates an organization of information that admits of many different readings. That is, it is possible for the reader to navigate links and follow what Bush called different “trails” of connected information. Printed text generally restricts reading to one trail or at least very few trails, and lends itself to the organization of information in a hierarchical pattern (volumes divided into books, which are divided into chapters, which are divided into paragraphs, et cetera).
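To make the contrast concrete, the short sketch below (my own illustration in Python; the page names and links are invented, and it is not drawn from Bolter or Bush) models a small hypertext as a graph of linked pages and enumerates the different "trails" a reader could follow, whereas a printed chapter sequence fixes a single trail.

    # A toy model of hypertext "trails": pages are nodes, hyperlinks are edges.
    # The page names and links are invented purely for illustration.
    links = {
        "home": ["history", "cybernetics"],
        "history": ["memex", "cybernetics"],
        "memex": ["nls"],
        "cybernetics": ["nls"],
        "nls": [],
    }

    def trails(page, path=None):
        """Enumerate every reading path from `page` to a page with no outgoing links."""
        path = (path or []) + [page]
        if not links.get(page):
            return [path]
        routes = []
        for target in links[page]:
            routes.extend(trails(target, path))
        return routes

    # A hypertext affords many trails; a printed book fixes a single one.
    for route in trails("home"):
        print(" -> ".join(route))
    # home -> history -> memex -> nls
    # home -> history -> cybernetics -> nls
    # home -> cybernetics -> nls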

It is clear that the advent of hypermedia has been accompanied by changes in hierarchical organizations in lifeworlds and practices. One obvious example would be the damage that has been sustained by newspapers and the music industry. The phenomenological view of technologies as connected to lifeworlds and practices provides a more sophisticated account of this change than either the technological determinist view that hypermedia itself has brought about changes in society or the instrumentalist view that the technologies are value neutral and that these changes have been brought about by choice alone (Chandler, 2002). It would rather suggest that hypermedia is connected to practices that largely preclude both the hierarchical dissemination of information and the institutions that are involved in such dissemination. As such, they cannot but threaten institutions such as the music industry and newspapers. As Postman (1993) observes, "When an old technology is assaulted by a new one, institutions are threatened" (p. 18).

Critics of hypermedia technologies, such as Andrew Keen (2007), have generally focussed on this threat to institutions, arguing that such a threat undermines traditions of rational inquiry and the production of quality media. To some degree such criticisms are an extension of a traditional critique of modernity made by authors such as Allan Bloom (1987) and Christopher Lasch (1979). This would suggest that such criticisms are rooted in more perennial issues concerning the place of tradition, culture, and authority in society, and it is not likely that these issues will subside. However, it is also unlikely that there will be a return to the state of affairs before the inception of hypermedia. Even the most strident critics of "Web 2.0" technologies embrace certain aspects of them.

The lifeworld of hypermedia does not necessarily oppose traditional sources of expertise to the extent that the descendants of the fiercely anti-authoritarian counterculture may suggest, though. Advocates of Web 2.0 technologies often appeal to the "wisdom of crowds", alluding to the work of James Surowiecki (2005). Surowiecki offers the view that, under certain conditions, the aggregation of the choices of independent individuals results in a better decision than one made by a single expert. He is mainly concerned with economic decisions, offering his theory as a defence of free markets. Yet this theory also suggests a general epistemology, one which would contend that the aggregation of the beliefs of many independent individuals will generally be closer to the truth than the view of a single expert. In this sense, it is an epistemology modelled on the cybernetic view of self-regulating systems. If it is correct, knowledge would be the result of a cybernetic network of individuals rather than a hierarchical system in which knowledge is created by experts and filtered down to others.
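Surowiecki's claim about aggregation can be illustrated with a toy simulation (my own sketch, not an example from his book): when many independent, unbiased guesses at a quantity are averaged, the aggregate typically lands closer to the true value than a typical individual guess.

    import random

    # Toy illustration of the "wisdom of crowds": average many independent,
    # unbiased guesses and compare the aggregate error with a typical individual error.
    # All numbers are invented for the example.
    random.seed(1)

    true_value = 100.0  # the quantity being estimated (e.g. jellybeans in a jar)
    crowd = [random.gauss(true_value, 20.0) for _ in range(1000)]

    crowd_estimate = sum(crowd) / len(crowd)
    typical_individual_error = sum(abs(g - true_value) for g in crowd) / len(crowd)

    print(f"crowd estimate: {crowd_estimate:.1f}")
    print(f"crowd error: {abs(crowd_estimate - true_value):.1f}")
    print(f"typical individual error: {typical_individual_error:.1f}")
    # The crowd's error is usually far smaller, but only because the individual
    # errors here are independent and centred on the truth.

The caveat in the final comment matters: the simulation assumes independent, unbiased errors, which is precisely the condition questioned below.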

The main problem with the “wisdom of crowds” epistemology as it stands is that it does not explain the development of knowledge in the sciences and the humanities. Knowledge of this kind doubtless requires collaboration, but in any domain of inquiry this collaboration still requires the individual mastery of methodologies and bodies of knowledge. It is not the result of mere negotiation among people with radically disparate perspectives. These methodologies and bodies of knowledge may change, of course, but a study of the history of sciences and humanities shows that this generally does not occur through the efforts of those who are generally ignorant of those methodologies and bodies of knowledge sharing their opinions and arriving at a consensus.

As a rule, individuals do not take the position of global skeptics, doubting everything that is not self-evident or that does not follow necessarily from what is self-evident. Even if people would like to think that they are skeptics of this sort, to offer reasons for being skeptical about any belief they will need to draw upon a host of other beliefs that they accept as true, and to do so they will tend to rely on sources of information that they consider authoritative (Wittgenstein, 1969). Examples of the “wisdom of crowds” will also be ones in which individuals each draw upon what they consider to be established knowledge, or at least established methods for obtaining knowledge. Consequently, the wisdom of crowds is parasitic upon other forms of wisdom.

Hypermedia technologies and the practices and lifeworld to which they belong do not necessarily commit us to the crude epistemology based on the “wisdom of crowds”. The culture of collaboration among scientists that first characterized the development of these technologies did not preclude the importance of individual expertise. Nor did it oppose all notions of hierarchy. For example, Engelbart (1962) imagined the H-LAM/T system as one in which there are hierarchies of processes, with higher executive processes governing lower ones.

The lifeworlds and practices associated with hypermedia will evidently continue to pose a challenge to traditional sources of knowledge. Educational institutions have remained somewhat unaffected by the hardships faced by the music industry and newspapers due to their connection with other institutions and practices such as accreditation. If this phenomenological study is correct, however, it is difficult to believe that they will remain unaffected as these technologies take deeper root in our lifeworld and our cultural practices. There will continue to be a need for expertise, though, and the challenge will be to develop methods for recognizing expertise, both in the sense of providing standards for accrediting experts and in the sense of providing remuneration for expertise. As this concerns the structure of lifeworlds and practices themselves, it will require a further examination of those lifeworlds and practices and an investigation of ideas and values surrounding the nature of authority and of expertise.

References

Bloom, A. (1987). The closing of the American mind. New York: Simon & Schuster.

Bolter, J. D. (2001) Writing space: Computers, hypertext, and the remediation of print (2nd ed.). New Jersey: Lawrence Erlbaum Associates.

Bush, V. (1945). As we may think. Atlantic Monthly. Retrieved from http://www.theatlantic.com/doc/194507/bush

Chandler, D. (2002). Technological or media determinism. Retrieved from http://www.aber.ac.uk/media/Documents/tecdet/tecdet.html

Engelbart, D. (1962). Augmenting human intellect: A conceptual framework. Menlo Park: Stanford Research Institute.

Heidegger, M. (1993). Basic writings. (D.F. Krell, Ed.). San Francisco: Harper Collins.

—–. (1962). Being and time. (J. Macquarrie & E. Robinson, Trans.). San Francisco: Harper Collins.

Keen, A. (2007). The cult of the amateur: How today’s internet is killing our culture. New York: Doubleday.

Lasch, C. (1979). The culture of narcissism: American life in an age of diminishing expectations. New York: W.W. Norton & Company.

McLuhan, M. (1962). The Gutenberg galaxy. Toronto: University of Toronto Press.

Postman, N. (1993). Technopoly: The surrender of culture to technology. New York: Vintage.

Surowiecki, J. (2005). The wisdom of crowds. Toronto: Anchor.

Turner, F. (2006). From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. Chicago: University of Chicago Press.

—–. (2005). Where the counterculture met the new economy: The WELL and the origins of virtual community. Technology and Culture, 46(3), 485–512.

Wittgenstein, L. (1969). On certainty. New York: Harper.

November 29, 2009   No Comments

Commentary 3 – text will remain

Hi everyone,

Hayles explains that sometime between 1995 and 1997 a shift in Web literature occurred: before 1995, hypertexts were primarily text based, "with navigation systems mostly confined to moving from one block of text to another" (Hayles, 2003). Post 1997, Hayles states that "electronic literature devises artistic strategies to create effects specific to electronic environments" (2003).

Bolter and Kress both contend that technology and text have fused into a single entity. That is, in the latter half of the 20th century, the visual representation of text has been transformed to include pictures, graphics, and illustrations. Bolter states that "the late age of print is visual rather than linguistic . . . print and prose are undertaking to remediate static and moving images as they appear in photography, film, television and the computer" (Bolter, 2001, p. 48). Cyber magazines such as Mondo 2000 and WIRED are "aggressively remediating the visual style of television and digital media" with a "hectic, hypermediated style" (Bolter, 2001, p. 51). Kress notes that "the distinct cultural technologies for representation and for dissemination have become conflated—and not only in popular commonsense, so that the decline of the book has been seen as the decline of writing and vice versa" (Kress, 2005, p. 6). In recent years, perhaps due to increased bandwidth, the WWW has had a much greater presence of multimedia such as pictures, video, games, and animations. As a result, there is noticeably less text than what appeared in the first web pages designed for Mosaic in 1993. Furthermore, the WWW is increasingly being inundated with advertisements.

The mixing of text and imagery is also evident in print magazines, which use pictures, graphics, and illustrations as visual aids to their texts. Tabloid magazines such as Cosmo, People, and FHM are filled with advertisements. For example, the April 2008 edition of Vogue has a total of 378 pages: sixty-seven of these pages are dedicated to text, while the remaining 311 are full-page advertisements.

While there are increasingly more spaces, both in cyberspace and in printed works, that mix imagery and text, there still exist spaces that are, for the most part, text-based. This is especially evident in academia. For example, academic journals, whether online or printed, are still primarily text. Pictures, graphics, and illustrations are used almost exclusively to illustrate a concept and, to my knowledge, have not yet included video. University texts and course-companions are primarily text as well. Perhaps, as Bolter states, this is because "we still regard printed books and journals as the place to locate our most prestigious texts" (Bolter, forthcoming). However, if literature and humanistic scholarship continue to be printed, they could be further marginalized within our culture (ibid.).

Despite there being a "breakout of the visual" in both print and electronic media, Bolter makes a very strong argument that text can never be eliminated from the electronic form in which it currently exists. That is, all videos, images, animations, and virtual reality exist on an underlying base of computer code. What might happen instead is the "devaluation of writing in comparison with perceptual presentation" (Bolter, forthcoming). The World Wide Web is an example of this. The WWW provides a space in which millions of authors can write their own opinions; Bolter is, in fact, doing this for his forthcoming publication "Degrees of Freedom". The difference between Bolter's text and others is that he makes minimal use of imagery and relies almost entirely on his prose to convey the meaning of his writing. Be that as it may, Bolter contends that the majority of WWW authors use videos and graphics to illustrate their words (forthcoming). Text will remain a large part of how we learn, absorb and communicate information; however, "the verbal text must now struggle to assert its legitimacy in a space increasingly dominated by visual modes of representation" (Bolter, forthcoming).

John

References

Bolter, Jay David. (2001). Writing space: Computers, hypertext, and the remediation of print [2nd edition]. Mahwah, NJ: Lawrence Erlbaum.

Bolter, Jay David. (forthcoming). Degrees of Freedom. Retrieved November 28, 2009 from http://www.uv.es/~fores/programa/bolter_freedom.html.

Hayles, Katherine. (2003). Deeper into the machine: The future of electronic literature. Culture Machine, 5. Retrieved August 2, 2009, from http://www.culturemachine.net/index.php/cm/article/viewArticle/245/241

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5–22.

November 29, 2009   1 Comment

The Age of Real-Time

I had the opportunity to go to the Annual Conference on Distance Teaching and Learning in Madison, Wisconsin this past August. The last keynote speaker, Teemu Arina, discussed how culture and education are changing with emerging technologies. His presentation illustrated how we are moving from linear and sequential environments to those that are nonlinear and serendipitous, touching on themes of time, space and social media. The video of the presentation is about 45 minutes long, but the themes tie nicely into our course and into many other courses within the MET program.

In the Age of Real-Time: The Complex, Social, and Serendipitous Learning Offered via the Web

November 24, 2009   No Comments

MIT Lab and the “Sixth Sense”

As one of the themes of this course relates to technology and information retrieval and storage, I thought I would share this video. The folks at MIT have created a wearable device that enables new interactions between the real world and the world of data. The device, based on personal criteria that you input, allows you to interact with an environment and call up relevant information about it simply by gesturing (e.g., while shopping, a hand gesture will bring up information about a particular product). What is controversial about this device is that it makes it easy to infringe on people's privacy: filming and photographing can occur by simply moving one's hand. Also, think about how annoying it is to listen to a multitude of mobile users chat in public spaces – this device allows a user to project and display information on any surface. Imagine hundreds of people displaying information all over the place at once!

https://www.youtube.com/watch?v=blBohrmyo-I

November 24, 2009   1 Comment

Rip.Mix.Feed Photopeach

Hi everyone,

For my rip.mix.feed assignment, I decided not to re-invent the wheel, but instead to add to an already existing wheel. When I took ETEC565 we were asked to produce a similar project while exploring different web 2.0 tools. We were directed to The Fifty Tools. I used PhotoPeach to create my story. My wife and I moved to Beijing in the fall of 2007 and we've been traveling around Asia whenever we get a break from teaching. The story I've made is a very brief synopsis of some of our travels thus far. Since the original posting, I have updated the movie with more travels. You can view the story here. If you're in China, the soundtrack, U2's "Where the Streets Have No Name", will not play because it is hosted on YouTube.

What I enjoy most about these tools is that they are all available online; all a student needs to create a photo story is a computer with access to the Internet. To make the stories more personal, it would be great if students had access to their own digital pictures. However, if they have no pictures of their own, they can find pictures licensed under Creative Commons through Internet searches to include in their stories.

Furthermore, I teach in an international school in which most students speak English as a second, third, or fourth language and come from many different countries, and Web 2.0's "lowered barrier to entry may influence a variety of cultural forms with powerful implications for education, from storytelling to classroom teaching to individual learning" (Alexander, 2006). Creating digital stories about their own culture provides a medium through which English language learners acquire foundational literacies while making sense "of their lives as inclusive of intersecting cultural identities and literacies" (Skinner & Hagood, p. 29). With their work organized, students can then present it to their classmates for discussion and feedback, build a digital library of age- and content-appropriate material, and share their stories with global communities (Skinner & Hagood).

John

References

Alexander, Bryan. (2006). “Web 2.0: A New Wave of Innovation for Teaching and Learning?” EDUCAUSE Review, 41(2).

Skinner, Emily N. & Hagood, Margaret C. (2008). “Developing Literate Identities With English Language Learners Through Digital Storytelling.” The Reading Matrix, 8(2), 12 – 38.

November 22, 2009   2 Comments

Capzles – Rip.Mix.Feed

My original plan was to have a short animated re-invention video presentation on Ahead, but the application proved too frustrating to use. I kept the link for anyone to see on my website, which is run with WordPress. Ahead is similar to Prezi, which I am more familiar with. However, when I went to the Prezi website to create my project, it was down for maintenance, so I resorted to starting something else in Capzles. The Capzles project contains a slideshow of photos from my recent trip to Hong Kong in late September.

If you cannot see the embedded slideshow above, view my Capzles project here.

November 22, 2009   3 Comments

RipMixFeed using del.icio.us

For the RipMixFeed activity I collected a set of resources using the social bookmarking tool del.icio.us. Many of us have already used this application in other courses to create a class repository of resources or to keep track of links relevant to our research projects. What I like about this tool is that the user can collect all of their favourite links, annotate them, and then easily search them according to the tags that they have created. This truly goes beyond the limitations of web browser bookmarks.

For this activity I focused on finding resources specifically related to digital and visual literacy and multiliteracies. To do this I conducted web searches as well as searches of other del.icio.us users' links. As there are so many resources – too many for me to adequately peruse – I have subscribed to the tag 'digitalliteracy' in del.icio.us so I can connect with others tagging related information. You can find my del.icio.us page at: http://delicious.com/nattyg

Use the tags 'Module4' and 'ETEC540' to find the selected links, or just search using ETEC540 to find all of my links related to this course.
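For anyone unfamiliar with tag-based retrieval, here is a minimal sketch of the idea in Python (my own illustration; it does not call the del.icio.us service, and the sample links and tags are placeholders). Each bookmark carries a set of free-form tags, and a search simply filters on tag membership, which is what makes tags more flexible than a folder of browser bookmarks.

    # A minimal model of social-bookmarking tags: each link carries a set of
    # free-form tags, and a search filters on tag membership.
    # The bookmarks below are placeholders, not my actual del.icio.us entries.
    bookmarks = [
        {"url": "http://example.com/barthes-module", "tags": {"ETEC540", "Module4", "semiotics"}},
        {"url": "http://example.com/rheingold-talk", "tags": {"ETEC540", "Module4", "digitalliteracy"}},
        {"url": "http://example.com/taylor-videos", "tags": {"ETEC540", "visualliteracy"}},
    ]

    def find(tagged, *tags):
        """Return the bookmarks that carry every requested tag."""
        wanted = set(tags)
        return [b["url"] for b in tagged if wanted <= b["tags"]]

    print(find(bookmarks, "ETEC540"))             # everything saved for the course
    print(find(bookmarks, "ETEC540", "Module4"))  # only the Module 4 selections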

A couple of resources that I want to highlight are:

  1. Roland Barthes: Understanding Text (Learning Object)
    Essentially this is a self-directed learning module on Roland Barthes' ideas on semiotics. The section on Readerly and Writerly Texts is particularly relevant to our discussions on printed and electronic texts.

  2. Howard Rheingold on Digital Literacies
    Rheingold states that a lot of people are not aware of what digital literacy is. He briefly discusses five different literacies needed today. Many of these skills are not taught in schools, so he poses the question: how do we teach these skills?

  3. New Literacy: Document Design & Visual Literacy for the Digital Age Videos
    David Taylor, a faculty member at University of Maryland University College, created a five-part video series on digital literacy. For convenience's sake, here is Part II, where he discusses the shift to the 'new literacy'. Toward the end of the video, Taylor (2008) makes an interesting statement that "today's literacy means being capable of producing fewer words, not more". This made me think of Bolter's (2001) notion of the "breakout of the visual" and the shift from textual to visual ways of knowing.

Alexander (2006) suggests that social bookmarking can work to support "collaborative information discovery" (p. 36). I have no people in my network as of yet. I think it would be valuable to connect with some of my MET colleagues, so if you would like to share del.icio.us links, let's connect! My username is nattyg.

References

Alexander, B. (2006). Web 2.0: A new wave of innovation for teaching and learning? EDUCAUSE Review, 41(2), 33–44.

Bolter, J.D. (2001). Writing space: Computers, hypertext and the remediation of print. London: Lawrence Erlbaum Associates, Publishers.

Taylor, D. (2008). The new literacy: document design and visual literacy for the digital age: Part II. Retrieved November 13, 2009, from https://www.youtube.com/watch?v=RmEoRislkFc

November 14, 2009   2 Comments

The Photocopier

Hi everyone,

You can find my research paper about the invention of the photocopier in a wiki here. Comments are most welcome!

Enjoy.

John

November 1, 2009   No Comments

Commentary #2 – Which came first, culture or technology?

“It is not a question of seeing writing as an external technological force that influences or changes cultural practices; instead writing is always a part of culture.… technologies do not determine the course of culture or society, because they are not separate agents that can act on culture from the outside.” (Bolter, p. 19)


To answer this question, we need to begin with definitions of 'culture' and 'technology' as they relate to knowledge. Culture can be defined as "…the integrated pattern of human knowledge, belief, and behavior that depends upon the capacity for learning and transmitting knowledge to succeeding generations" (Merriam-Webster). Technology is defined as "…the practical application of knowledge especially in a particular area" (Merriam-Webster). The distinction between the two is clear, as is the connection between them: culture is about acquiring knowledge, while technology is about applying knowledge. There has been some debate about whether culture and technology are inseparable. This commentary will take a look at three of these arguments.

In Writing Space: Computers, Hypertext, and the Remediation of Print, Bolter was very clear as to what he believed, particularly when it came to writing: "The technical and the cultural dimensions of writing are so intimately related that it is not useful to try to separate them…" (Bolter, p. 19). Bolter went to great lengths to explain the connection between technology and culture: how different technologies of writing involved different materials, and how these materials were used in different ways and for different reasons. He used ancient writing as an example. Technologies such as papyrus, ink, and the art of book making may have been common to all cultures, but what differed were the writing styles and genres of ancient writing and the social and political practices of ancient rhetoric. He argued that modern printing practices followed a similar pattern, as do today's technologies. Computers, browsers and word processors are our writing technologies, but these technologies don't change cultures per se. If anything, culture has a way of initiating changes in technology.

In his book Orality and Literacy, Ong argued that the introduction of writing and print literacy has fundamentally restructured consciousness and culture. In chapter four of his book, Ong discussed the development of script and how it restructures our consciousness. Ong claimed that "…writing (and especially alphabetic writing) is a technology, calling for the use of tools and other equipment… Technologies are not mere exterior aids but also interior transformations of consciousness and never more than when they affect the word." (Ong, p. 80 – 81) Ong suggested that humans are naturally tool-employing beings and that these tools create opportunities for new modes of expression that would not otherwise exist. He used the example of the violinist who internalizes the technology (the violin), making the tool seem second nature, or a part of the self. "The use of a technology can enrich the human psyche, enlarge the human spirit, intensifying its interior life." (Ong, p. 82) In terms of culture and technology, Ong's technological determinism clearly makes it impossible for him to separate the two.

In Understanding Media: The Extensions of Man, Marshall McLuhan argued that technology is nothing more than an extension of man. "The shovel we use for digging holes is a kind of extension of the hands and feet. The spade is similar to the cupped hand, only it is stronger, less likely to break, and capable of removing more dirt per scoop than the hand. A microscope, or telescope is a way of seeing that is an extension of the eye." (Kappelman) When an individual or society makes use of a technology in such a way that it extends the human body or the human mind, it does so at the expense of some other technology, which is then either modified or amputated. "The need to be accurate with the new technology of guns made the continued practice of archery obsolete. The extension of a technology like the automobile "amputates" the need for a highly developed walking culture, which in turn causes cities and countries to develop in different ways. The telephone extends the voice, but also amputates the art of penmanship gained through regular correspondence." (Kappelman) McLuhan later developed a tetrad to explain his theory. It consists of four questions or laws: what does the technology extend? what does it make obsolete? what is retrieved? and what does the technology reverse into if it is overextended? As was the case with Ong, McLuhan did not make any clear distinction between technology and culture.

Bolter disagrees with the assessments of technological determinists like McLuhan, with his "extension of man" claim, and Ong, with his "restructured consciousness". He uses cause and effect to prove his point, pointing to the early beginnings of the World Wide Web and how technology (hardware and software) was used to create it. According to Bolter, culture was responsible for changing the Web into "… a carnival of commercial and self-promotional Web sites…" (Bolter, p. 20). Culture then demanded changes to the hardware and software to allow for such things as censorship. "Wherever we start in such a chain of cause and effect, we can identify an interaction between technical qualities and social constructions – an interaction so intimate that it is hard to see where the technical ends and the social begins." (Bolter, p. 20) Bolter doesn't adhere to the 'doom and gloom' rhetoric of McLuhan, who was "…deeply concerned about man's willful blindness to the downside of technology" (Kappelman), and he is mindful of Ong, who said "Once the word is technologized, there is no effective way to criticize what technology has done with it…" (Ong, p. 79). Instead, Bolter believes that "… it is possible to understand print technology as an agent of change without insisting that it works in isolation or in opposition to other aspects of culture." (Bolter, p. 19 – 20)

It seems reasonable to assume that because technology can impinge upon culture and culture can impinge upon technology, the two are in a sense inseparable. This may not be a case of one coming before the other so much as of both coexisting at the same time. Either way, we need only be cognizant of the fact that both will continue to evolve, either as a result of or in spite of the other.

References

Bolter, J.D. (2001). Writing Space: Computers, Hypertext, and the Remediation of Print. Mahwah, NJ: Lawrence Erlbaum Associates.

culture. (2009). In Merriam-Webster Online Dictionary. Retrieved October 31, 2009, from http://www.merriam-webster.com/dictionary/culture

Kappelman, T. (2002, July). Marshall McLuhan: "The medium is the message." Probe Ministries. Retrieved from http://www.leaderu.com/orgs/probe/docs/mcluhan.html#text2

Ong, Walter J. (2002). Orality and Literacy (2nd ed.). New York: Routledge.

technology. (2009). In Merriam-Webster Online Dictionary. Retrieved October 31, 2009, from http://www.merriam-webster.com/dictionary/technology

Picture retrieved from http://stephilosophy.blogspot.com/

October 31, 2009   1 Comment

Bada-Bing! The Oxford English Dictionary Taps into Internet Culture

When I think about the standardization of language, my first thought is to refer to the dictionary. Sam Winston, a UK artist, has done some neat pieces that use dictionaries as a springboard for playing with language and text. What I like about this project is that the artist's intent is to make art accessible – which, in the context of this course, relates back to the press as a means to make literature accessible to the masses. Here is a short video clip of the project Dictionary Story.

In the video clip, Winston mentions James Gleick’s article for the New York Times, Cyber-Neologoliferation as a source of inspiration. As this course has fueled my interest in language and technology, I decided to search this article out.

Before reading the article I did not have a clue what ‘neologoliferation’ meant. What I learned is that a neologism is “a newly coined word that may be in the process of entering common use, but has not yet been accepted into mainstream language” (Wikipedia, Neologism, para. 1). This word seems completely appropriate in the context of the Oxford English Dictionary and its pursuit of “a perfect record, perfect repository, perfect mirror of the entire [English] language” (Gleick, 2006, para. 5).

The Oxford English Dictionary (OED) has a long history, dating back about a century and a half, and has played an essential role in standardizing the English language. In his article, Gleick explores the workings of the dictionary today and how the online environment is changing the evolution of language. The OED has evolved from an immense printed resource of 20 volumes in its second edition to a third edition that resides completely online. The Internet is not only a vehicle that houses the dictionary but also a tool that allows lexicographers to eavesdrop on the “expanding cloud of messaging in speech” that occurs in resources such as newspapers, online news groups and chat rooms (para. 2).

With these tactics for tapping into culture, the dictionary has moved from being a ‘dictionary of written language’, where lexicographers comb through works of Shakespeare to find words, to one where ‘spoken language’ is the resource (para. 12). Surprisingly, text messaging also serves as a source for new vocabulary. Beyond the OED’s hunting and gathering processes, the general public can also contact the editors to have a new word assessed for inclusion in the dictionary. The ‘living document’ of the dictionary now seems to require the participation of the masses. With this, more and more colloquial language is being added to the dictionary (e.g., bada-bing).

The printing press worked to standardize spelling, but according to Gleick (2006), with mass communication spelling variation is on the rise. With the Internet, the OED is coming to terms with the boundlessness of language. In the past, variations of the English language were spoken in many different pockets around the world. These variations still exist but are now more accessible through the Internet (Gleick, 2006). Peter Gilliver, a lexicographer at the OED, believes that the Internet transmits information differently than past vehicles for communication. He suggests that the ability to broadcast to the masses or to communicate one-to-one is driving change in the language. For the OED, the ability to tap into a wide variety of online conversations affords a more accurate representation of word usage all over the world.

Standards in language help us to communicate clearly in a way that is commonly understood. This article makes me wonder: with all the slang being added to the dictionary, what will language look like in 50 years? 100 years? Will a new English language evolve? How will this affect spoken and written language? Will standards become more lax? With all these questions, the OED becomes an important historical record of the evolution of the English language.

References

Gleick, J. (2006, November 5). Cyber-neologoliferation. New York Times. Retrieved October 18, 2009, from http://www.nytimes.com/2006/11/05/magazine/05cyber.html?_r=1&adxnnl=1&pagewanted=print&adxnnlx=1255864379-QjA08nvBb8FH9FU9ZHJbRg

Neologism. (n.d.). In Wikipedia. Retrieved from http://en.wikipedia.org/wiki/Neologism

October 21, 2009   No Comments

Does the Brain Like E-Books?

This group of articles was brought to my attention. Five authors discuss their research on e-books and the future of literacy. I am hoping to find some answers to the many questions raised in our current readings.

I hope you find it thought-provoking, and I look forward to continuing the discussion on this topic.

October 17, 2009   No Comments

Commentary 1: An Observation of How Orality and Literacy Have Changed Interactions Between People

Technology has made significant impacts on oral and written communication and interaction. The differences between oral and literate cultures can be observed through the introduction and evolution of writing technologies. Ong (2002) posits that oral cultures developed mnemonic patterns to aid in the retention of thought, while literacy forces the creation of grammatical rules and structured dialogue. The jump from orality to literacy would have been a challenge for cultures wishing to preserve their traditions and thoughts in writing, and yet the ability to write and record information has enabled many cultures to pass down important knowledge to future generations.

Ong (2002) explains how, despite being a late development in human history, writing is a technology that has shaped and powered intellectual activity, and that its symbols are more than a mere memory aid. As Ong outlines, oral cultures faced the challenge of retaining information in particular ways, and when oral speech is written down, its characteristic patterns become more evident. Given that oral cultures had this challenge of retaining information, does literacy require orality? Postman (1992) supports Thamus’ belief that “proper instruction and real knowledge must be communicated” and further argues that, despite the prevalence of technology in the classroom, orality still has a place in the space for learning.

As writing technologies evolve, culture and society tend to evolve toward the technology, developing new ways to organize and structure knowledge (Ong, 2002) in order to communicate information, and changing the way interactions take place. The construction of speech and the construction of text change depending on the technology. For instance, with the computer, the individual can delete or backspace over any errors in spelling or grammar and construct sentences in different ways with the assistance of automatic synonym, thesaurus or dictionary tools. Before the computer, errors could not be so easily changed on the typewriter, whose ink would remain on the paper until the invention of white-out. Tracking the changes to the original Word document with which this paper was composed would reveal the number of modifications and deletions – a feature of technology that has no counterpart in orality, because a culture may note errors in speech but cannot effectively track where each error was made. In public speech, one can instead observe the changes in behaviour, the pauses, and the “umms” and “uhhs” of speech; this, too, is where spoken interaction differs from the written norm.

With text messaging, the construction of information is often shortened, even more so than with instant messaging. The abbreviated format of text, made to fit within a limited space, has taught individuals to construct conversations differently, in a manner that would not have been common 15 to 20 years ago. The interaction between individuals has changed, since text messaging requires more effort to decipher the abbreviated format. In a sense, text messaging uses a form of mnemonics in order to convey messages from one person to another. This seemingly new form of literacy in some cases requires more abstract thinking and, as Postman (1992) suggests, may require orality to communicate the true message, which may occur in the form of a phone call.

Presenting learning materials in shorter formats becomes more important, particularly for educational technologies like mobile learning, where technologies such as netbooks and mobile phones are used for classroom learning. Postman (1992) posits that there is a need for an increased understanding of the efficiency of the computer as a teaching tool and of how it changes the learning process. With mobile technologies, interaction can be limited by abbreviated formats, as seen with text messaging, and in some cases this may not be an effective form of learning for some students. Despite the invention of newer technologies, orality often helps clarify thought processes, concepts and information. While the student can absorb knowledge through literacy alone, orality can assist in the retention of information.

The complexity of written communication can be taken a level further with the basis of writing: pictograms, images that can be recognized and deciphered by most individuals. Gelb (in ETEC 540) argues that limited writing systems like international traffic signs avoid language and yet can be deciphered by illiterates or by speakers of other languages. Although most traffic signs are clear, some do require translation, whether made orally or through writing, for their meaning to be understood. Ong (2002) supports the notion that codes need a translation that goes beyond pictures, “either in words or in a total human context, humanly understood” (p. 83).

While writing and writing technologies have evolved and changed the way interactions and communication take place, one thing has not changed: the ability to find the most basic way to communicate with individuals who do not know our language – something that orality cannot do for those unfamiliar with the spoken language. Thamus feared that writing would be a burden to society, but its advantages outweigh the disadvantages (in Postman, 1992).

References

Gelb, I. J. (2009). Module 2: From Orality to Literacy. In ETEC 540 – Text Technologies: The Changing Spaces of Reading and Writing. Retrieved October 4, 2009 from http://www.vista.ubc.ca.

Ong, W. J. (2002). Orality and Literacy. London: Routledge.

Postman, N. (1992). Technopoly: The Surrender of Culture to Technology. New York: Vintage Books.

October 6, 2009   2 Comments

Universal Library

I chose to write about the universal library because this topic has been close to my heart since I was 12 years old. My biggest dream was to read all the books in the world, in all the languages of the world. Many of my friends were eager to point out how ridiculous that dream was – after all, I only spoke Ukrainian and Russian and a bit of Polish and German, the basics I learned from my Grandmother. I would not let these naysayers dissuade me from my dream. How difficult could it be to get a foreign book in translation? By then I had already read all of Dumas’ adventures of the Three Musketeers and Jane Eyre, as well as other classics. Some books were typed on a typewriter and shared among trusted friends, and you learned early on that those books were not to be talked about with people you did not implicitly trust.

O’Donnell asserts that “[the] main features of this vision are a vast, ideally universal collection of information and instantaneous access to that information wherever it physically resides.” This idea appeals to me tremendously; I would love to have access to all of Tolstoy’s works and be able to read my favorite passages whenever I like, without having to dig through boxes or travel to the library. This is not to say that I would not rather have a book in my hand, but since it is not easy to find some books in their original language, especially the rare first editions, I would love to see a copy online. O’Donnell characterizes the dream this way:

The dream today is weighed down with silicon chips, keyboards, screens, headsets, and other cumbersome equipment — but someday a dream of say telepathic access will make today’s imaginings suddenly as outmoded as a daisy-wheel printer.

It may be so, but where would we be without our imaginings? The idea of a virtual library is a noble one. As Hillis points out in the Brand article, “we are now in a period that may be a maddening blank to future historians–a Dark Age–because nearly all of our art, science, news, and other records are being created and stored on media that we know can’t outlast even our own lifetimes.” True as this may be, should we stop all scanning projects because we are worried about being able to retrieve the data?

I will tell you this: when I was doing the readings and exploring the virtual libraries, I found books of songs and stories that my Grandmother sang and told me when I was a child. What a gift from people to whom these books probably meant nothing! Where Stewart Brand prophesies that “there has never been a time of such drastic and irretrievable information loss as right now” and blames the computer industry’s production schedule for the rapid advancement of standards, it must be pointed out that since the process of standardization really took hold, we have seen technologies last many years. HTML is nearly twenty years old (Wiki). The JPEG picture format was defined as a standard in 1992 (Wiki). The Portable Document Format (aka PDF) is over sixteen years old (Wiki) and is now an open standard. And files created with the first versions of these standards can still be read on computers today. Brand remarks that “civilization time is in centuries”, but how many of us can understand the earliest books in the English language? Chaucer’s Canterbury Tales or even the works of William Shakespeare are only a few hundred years old, but they seem to be written in an entirely different language!
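As a small aside (not part of the original readings), the point about long-lived standards is easy to demonstrate: the opening bytes, or “magic numbers”, that identify JPEG and PDF files were published with those standards in the early 1990s and still identify today’s files. The following is a minimal sketch in Python; the signature bytes are the published ones, while the script name and example file paths are hypothetical.

# identify.py - a minimal sketch: match a file's opening bytes against the
# published signatures of long-lived standard formats mentioned above.
# (HTML has no true magic number; many early pages simply begin with <html>.)

SIGNATURES = {
    "JPEG (standardized 1992)": b"\xff\xd8\xff",   # JPEG Start Of Image marker
    "PDF (first released 1993)": b"%pdf-",          # PDF header, compared case-insensitively
    "HTML (early 1990s)": b"<!doctype",             # common, but not universal, HTML opening
}

def identify(path):
    """Return the first known standard whose signature matches the file's opening bytes."""
    with open(path, "rb") as f:
        head = f.read(16)
    lowered = head.lower()          # bytes.lower() only affects ASCII letters
    for name, magic in SIGNATURES.items():
        if lowered.startswith(magic):
            return name
    return "unknown format"

if __name__ == "__main__":
    import sys
    for p in sys.argv[1:]:          # e.g. python identify.py old_scan.jpg old_paper.pdf
        print(p, "->", identify(p))

A file saved to one of these formats fifteen or more years ago would still match today, which is the sense in which standardization has given digital formats a longevity that individual devices and media do not have.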

While it is true that the media will probably not outlast our lifetimes, the same was surely apparent to the authors of medieval manuscripts. However, they did not stop copying out illuminated sheets of manuscript simply because they might not survive a fire. The fires in Alexandria marked a huge blow to the body of knowledge of the time, but one fire in one city will not wipe out a universal library. There will be backups to tape, redundant hard drives, and redundant locations to store data. Sure, if the power goes off that information will just sit there – but think about how it sits there. Each hard drive is built as a sealed environment, and the data could well be readable in 100 years. The motor that spins the platters inside the drive may have failed, but the platters, and the data they contain, could last a very long time. Under ideal conditions, to be sure – but what book left outside on the table will last beyond the year? The reason books have been such an efficient method of passing information across the ages isn’t that they are inherently better. There have simply been so many of them written that a few were bound to make it. Really, the body of knowledge we inherited from three hundred to two thousand years ago is remarkably small. I do not know if a universal library will work better for longevity, but it will give more people access to books they might never otherwise have seen. I do not advocate the end of all print media – it is good to have an alternative to the electronic versions, but I think the electronic versions will become the ones that people make use of.

References

Brand, S. (1999).  Escaping the Digital Dark Ages. Retrieved from http://web.ebscohost.com/ehost/detail?vid=7&hid=4&sid=d441a38a-9c6d-4085-80a4-b520f38fe9ac%40sessionmgr14&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=tfh&AN=1474780

Kelly, K. (2006). Scan This Book! Retrieved from http://www.journalism.wisc.edu/~gdowney/courses/j201/pdf/readings/Kelly%20K%202006%20NYT%20-%20Google%20Print.pdf

O’Donnell, J. J. (n.d.). The Virtual Library: An Idea Whose Time Has Passed. Retrieved from http://web.archive.org/web/20070204034556/http://ccat.sas.upenn.edu/jod/virtual.html

O’Donnell, J. J. (1998). Avatars of the Word: From Papyrus to Cyberspace (pp. 44-49). Cambridge, MA: Harvard University Press. Retrieved from http://www.public.asu.edu/~dgilfill/speakers/odonnell1.html

October 6, 2009   1 Comment

Cautions and Considerations for Technological Change: A Commentary on Neil Postman’s The Judgment of Thamus

Cautions and Considerations for Technological Change:
A Commentary on Neil Postman’s The Judgment of Thamus

Natalie Giesbrecht
ETEC 540
University of British Columbia
October 4, 2009

Introduction

Kurzweil (2001) suggests that nowadays there is a common expectation of “continuous technological progress and the social repercussions that follow” (para. 2). In “The Judgment of Thamus”, chapter one of Technopoly, Neil Postman (1992) cautions us about the implications of technological innovation. More specifically, he warns us of the “one-eyed prophets”, or Technophiles, “who see only what new technologies can do and are incapable of imagining what they will undo” (Postman, 1992, p. 5). Postman consciously avoids extolling the opportunities of new technologies in favour of reminding us of the dangers of blindly accepting them. This skepticism, and somewhat alarmist attitude, could be construed as what Chandler (2000) calls “pessimistic determinism” – an almost fatalist perception in which we cannot escape the wrath of technology (para. 14). What we are left with is an unbalanced argument whereby Postman assumes his readers are naïve and may well fall prey to the technological imperative. Underlying his negative outlook, though, Postman presents key points to consider when thinking about technological change: 1) costs and benefits; 2) winners and losers; and 3) ecological impact.

Costs and Benefits

Postman (1992) uses Plato’s Phaedrus as a springboard for his discussion on technological change. From this story we learn that it is a mistake to believe that “technological innovation has a one-sided effect” (Postman, 1992, p. 4). Postman (1992) argues that every culture must always be in negotiation with technology as it can “giveth” and “taketh away” (p. 5). This stance asserts that technology is an autonomous force, and as Chandler (2001) explains it, technology becomes “an independent, self-controlling, self-determining, self-generating, self-propelling, self-perpetuating and self-expanding force” (para. 1). Postman briefly attempts to illustrate a more balanced critique of the costs and benefits of technological innovation by citing Freud (1930):

…If I can, as often as I please hear the voice of a child of mine who is living hundreds of miles away…if there had been no railway to conquer distances, my child would never have left his native town (p. 70 as cited in Postman, 1992, p. 6).

Postman might ask here: what has technology undone? He contends that there are unforeseen side effects of technology and that we cannot predict what lies at the end of the road of technological progress – as “our inventions are but improved means to an unimproved end” (Thoreau, 1908, p. 45 as cited in Postman, 1992, p. 8).

Winners and Losers

Innis (1951) discussed the idea of ‘knowledge monopolies’, whereby those who control particular technologies gain power and conspire against those “who have no access to the specialized knowledge made available by the technology” (Postman, 1992, p. 9). Postman (1992) suggests that the benefits and costs of technology are not equally distributed throughout society and that there are clear winners and losers. A key example he refers to is the blacksmith, who praises the development of the automobile but whose profession is eventually rendered obsolete by it (Postman, 1992). Again, this viewpoint sees technology “as an autonomous force acting on its users” (Chandler, 2008, para. 8).

There is an unspoken expectation that the winners will encourage the losers to be advocates for technology; however, in the end the losers will surrender to those who have specialized technological knowledge (Postman, 1992). Postman (1992) states that in democratic cultures, which are highly receptive and enthusiastic toward new technologies, technological progress will “spread evenly among the entire population” (p. 11). This sweeping statement is what Rose (2003) warns us against. Postman writes off the entire population as passive, mindless victims who have fallen prey to the autonomy of technology. However, he fails to acknowledge that the population may “resist the reality of technological impacts and imperatives every day” (Rose, 2003, p. 150).

Ecological Impact

Technological change is ecological, and when new technologies compete with old ones it becomes a battle of world-views (Postman, 1992). For instance, a tug-of-war occurred when print entered the oral space of the classroom. On one side is orality, which “stresses group learning, cooperation, and a sense of social responsibility”, and on the other is print, which fosters “individualized learning, competition, and personal autonomy” (Postman, 1992, p. 17). Each medium eventually found its respective place to change the environment of learning. Now orality and print wage a new war with computers. Postman (1992) asserts that each time a new technology comes along it “does not add or subtract something. It changes everything” (p. 18). Institutions mirror the world-view endorsed by the technology, and when a new technology enters the scene, the institution is threatened – “culture finds itself in crisis” (Postman, 1992, p. 18). With this, Postman gives us a sense that technology is out of control, further evidencing his alarmist view of technological change.

Finally, the ecological impact of technology extends beyond our social, economic and political world to enter our consciousness. Postman (1992) believes that technology alters what we think about, what we think with, and the environment in which thought is developed. Postman suggests that the population has a “dull” and “stupid awareness” of the ecological impact of technology (Postman, 1992, p. 20), indicating that technology may be ‘pulling the wool’ over our eyes.

Conclusion

Rose (2003) warns us against taking extreme stances on technological change, as this leads to ideas that “become concretized in absolute terms rather than remaining fluid and open for analysis and debate” (p. 155). Nardi and O’Day (1999) suggest that extreme positions in technology critique should be replaced by a middle ground where we carefully consider the impact of both sides without hastily rejecting one or the other (p. 20). Although it is clear that Postman is biased toward a pessimistic outlook on technological change, he presents several key points that encourage us to think twice before accepting any technology and to “do so with our eyes wide open” (p. 7). In the end, it is difficult to look past Postman’s bias, and thus it remains questionable whether culture has in fact blindly surrendered to technology as he suggests.

References

Chandler, D. (2000). Techno-evolution as ‘progress’. In Technological or media determinism. Retrieved October 2, 2009, from http://www.aber.ac.uk/media/Documents/tecdet/tdet10.html

Chandler, D. (2001). Technological autonomy. In Technological or media determinism. Retrieved October 2, 2009, from http://www.aber.ac.uk/media/Documents/tecdet/tdet06.html

Chandler, D. (2008). Technology as neutral or non-neutral. In Technological or media determinism. Retrieved October 2, 2009, from http://www.aber.ac.uk/media/Documents/tecdet/tdet08.html

Innis, H. A. (1951). The bias of communication. Toronto, ON: University of Toronto Press.

Kurzweil, R. (2001). The law of accelerating returns. Retrieved October 2, 2009, from http://www.kurzweilai.net/articles/art0134.html?printable=1

Nardi, B. A., & O’Day, V. L. (1999). Information ecologies: Using technology with heart. Cambridge, MA: MIT Press.

Postman, N. (1992). Technopoly: The surrender of culture to technology. New York: Vintage Books.

Rose, E. (2003). The errors of Thamus: An analysis of technology critique. Bulletin of Science, Technology and Society, 23, 147-156.

Thoreau, H.D. (1908). Walden. London: J.M. Dent & Sons, Ltd.

October 4, 2009   2 Comments

Closing the gap or re-wiring our brains? Maybe both!

Ong states that “the electronic transformation of verbal expression has both deepened the commitment of the word to space initiated by writing and intensified by print and has brought consciousness to a new age of secondary orality” (p. 133). Secondary orality is the way in which technology has transformed the medium through which we send and receive information. Ong gives various examples, such as the telephone, radio, television, various kinds of sound tape, and other electronic technology (Ong, p. 132).

Ong discusses Lowry’s argument that the printing press, in creating the ability to mass-produce books, makes people less studious (Ong, p. 79). Lowry continues by stating that “it destroys memory and enfeebles the mind by relieving it of too much work (the pocket‐computer complaint once more), downgrading the wise man and wise woman in favor of the pocket compendium. Of course, others saw print as a welcome leveler: everyone becomes a wise man or woman (Lowry 1979, pp. 31‐2)” (Ong, p. 79).

The World Wide Web has opened up an entirely new sense of “secondary orality”. Prior to the WWW, texts were primarily written by one author or a small group of authors and were read by a specific audience. Today, with the advent of Web 2.0, the underlying tenets of oral cultures and literate cultures are coming closer together. Even within ETEC540 we are communicating primarily by text, but we are not entering our own private reading world; we are entering a text-based medium through which we can read and respond to each other’s blog posts (such as this post). In addition, we will contribute to a class wiki where the information is dynamic and constantly changing. How, then, is the WWW changing the way we interpret, digest, and process information?

The Internet has brought about a new revolution in the distribution of text. Google’s vision of having one library that contains all of the world’s literature demonstrates that “one significant change generates total change” (Postman, p. 18). Nicholas Carr, in his article “Is Google Making Us Stupid?”, and Anthony Grafton, in Paul Kennedy’s podcast “The Great Library 2.0”, both make similar arguments about the Internet. Carr points out that the media through which we receive information not only provide information, but “they also shape the process of thought”.

Carr contends that the mind may now be absorbing and processing information “the way the Net distributes it: in a swiftly moving stream of particles.” That is, information is no longer static; it is dynamic, ever changing, and easily accessible and searchable. Carr gives the example that many of his friends and colleagues in academia have noticed that “the more they use the Web, the more they have to fight to stay focused on long pieces of writing.”

Comparably, Google’s attempt to digitize all the text on earth into a new “Alexandria” is certainly an ambitious project, but as Postman states, new technology “is both a burden and a blessing; not either-or, but this-and that” (Postman, p. 5). Some see the library as liberating, making an unfathomable amount of knowledge available to anyone with an Internet connection. Others, such as Anthony Grafton, argue that reading text off the screen takes away from the romantic adventure of being the first to read a rare book found in the library of a far-off country (Grafton in The Great Library 2.0). Grafton also argues that the ability to search for keywords in electronic texts has created “on-time research”, which has made academics and others work at a rapid pace and fill in parts of their work very late using Internet sources. Carr cites other examples of academics who have lost the ability to read and absorb long texts but have instead gained the ability to scan “short passages of text from many sources online.”

Lowry’s argument that, to some, print destroyed memory and debilitated the mind, while to others it created equal access to text, has repeated itself with the advent of the Internet. Carr and Grafton both argue that instantaneous access to huge databases of information such as Google Books may be detracting from our ability to absorb texts. That being said, Postman states, “once a technology is admitted, it plays out its hand; it does what it is designed to do. Our task is to understand what that design is-that is to say, when we admit a new technology to the culture, we must do so with our eyes wide open” (Postman, p. 7). Thus, perhaps there is no point in arguing the negatives. Whether it is Google or a different organization that makes all the printed text in the world available to us, this is the direction technology is taking us, and there will likely be nothing to stop it. The question is, what will our societies and cultures look like after it is all done? It will not be the world plus Library 2.0, but an entirely new world.

References

Ong, W. J. (1982). Orality and Literacy: The Technologizing of the Word. London and New York: Methuen.

Kennedy, P. (Host). (2009, August 24). Ideas: The Great Library 2.0. Podcast retrieved from http://www.cbc.ca/ideas/podcast.html

Postman, N. (1992). Technopoly: The surrender of culture to technology. New York: Vintage Books.

Carr, N. (2008, July/August). Is Google Making Us Stupid? The Atlantic. Retrieved September 30, 2009, from http://www.theatlantic.com/doc/200807/google

October 2, 2009   2 Comments