The Changing Spaces of Reading and Writing

Making Connections

Personal Connections – Learning

Four years ago I suffered an injury that tore one of the tendons that control movement in my thumb. I eventually regained use of the thumb and was able to perform all daily activities with little trouble. All except one: writing was difficult and very painful. So I turned to the computer. I bought myself a laptop, and thankfully my part-time job was in a fully computerized environment. At first I saw the computer as a very efficient substitute writing tool; it was much quicker to type than to jot down notes. About half a year later, I began to feel that learning was becoming more difficult and more fatiguing, and that my creativity had suffered.

It wasn’t until I took this course that I began to really investigate the relationship between my writing tools and my learning.

Thinking about the definition of text and looking at the evolution of writing spaces and technologies made me reflect on my current and previous modes of learning. My earlier notes were meticulously underlined, highlighted, and written in different colours (while this is also possible on the computer, I rarely used these functions because I owned a black-and-white laser printer). The handwriting was all over the page, with little clumps of information connected by arrows and diagrams. The margins were reserved for ‘outside links’, where I made personal connections and devised memory aids to help me synthesize and remember information and ideas. This practice also extended to any papers, textbooks, and novels that I read. However, the injury discouraged this, and I ended up typing a few notes on the computer instead of directly on the page—which made the information feel … disconnected.

Remediation

The concept of remediation was also very useful in my understanding of the difficulties with embracing technological use in schools. As a TOC (teacher-on-call) I visited many schools and saw many classrooms wherein the computer lab was used for typing lessons, KidPix, or research. Many schools also have Interactive White Boards (IWBs), and teachers use them as, in essence, a very cool replacement for a worksheet. Remediation helps frame and pinpoint the reason for this phenomenon: the use of technology is not just a set of skills; it is a change in thinking and pedagogy. Literacy is not just literacy anymore; it has become multiliteracies and Literacy 2.0. Teachers cannot continue to teach reading and writing the same way as before, because text is not the same anymore.

December 22, 2009   No Comments

Remediation

“…a newer medium takes the place of an older one, borrowing and reorganizing the characteristics of writing in the older medium and reforming its cultural space.” (Bolter, 2001, p. 23)

Bolter’s (2001) definition of remediation struck me a bit like a Eureka! moment as I sat at lunch in the school staffroom, overhearing a rather fervent conversation between a couple of teachers about how computers are destroying our children. They noted how their students cannot form their letters properly and can barely print, not to mention write cursive that is somewhat legible. The discussion became increasingly heated as one described how children could not read as well because of the advent of graphic novels, and her colleague gave an anecdote about her students’ lack of editing ability. When the bell rang to signal the end of lunch, out came the conclusion—students now are less intelligent because they are reading and writing less, and in so doing are communicating less effectively.

In essence, my colleagues were discussing what we are losing in terms of print—the forming of letters, handwriting—the physicality of writing. However, I wonder how much of an impact that makes on the world today, and 20 years from now, when the aforementioned children become immersed in, and begin to affect, society. Judging from the current trend, in 20 years’ time it is possible that most people will have access to some sort of keypad that makes the act of holding a pen obsolete. Yes, it is sad, because calligraphy is an art form in itself, yet it strikes me that having these tools allows us the time and brain power to do other things. Take, for example, graphic novels. While some graphic novels are heavily image-based, there are many with a more balanced text-image ratio. In reading the latter, students are still reading text, and the images help them understand the story. By making comprehension easier, the images give students the time and mental resources to create deeper understanding, such as making connections with personal experiences, other texts, or other forms of multimedia.

As for the communications bit, Web 2.0 is anything but antisocial. Everything from blogs, forums, and Twitter to YouTube has social aspects. People can rate, tag, bookmark, and leave comments. Everything, including software, data feeds, music, and videos, can be remixed or mashed up with other media. In academia, writing articles was previously a more isolated activity, but with the advent of forums like arxiv.org, scholarly articles can be posted and improved much more efficiently and effectively than through the formal process that occurs when an article is sent in to a journal. More importantly, scholarly knowledge is disseminated with greater ease and accuracy.

Corporations and educational institutions are seeing a large influx of, and growing reception for, Interactive White Boards (IWBs). The IWB’s large display, its computer and internet connectivity, and its touch-screen capabilities make it the epitome of presentation tools. Content can be presented every which way—written text, word-processed text, websites, music, video—all (literally) at the user’s fingertips. The IWB’s capabilities allow a new form of writing to occur: previously, writing was done either with a writing instrument held in one’s hand or via typing on a keyboard. IWBs allow both processes to occur simultaneously, alternately, and interchangeably. If one so chooses, one can type and write at the same time! IWBs are particularly relevant to the remediation of education and pedagogy itself, because the tool demands a certain level of engagement and interaction. A lesson on the difference between common and proper nouns that previously involved the teacher reading sentences, writing them on the board, and asking students to identify the nouns could now involve the students finding a text of interest, displaying it on the IWB, and identifying the two types of nouns by directly marking up the text with the pen or highlighter tools.
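As a playful aside, the noun-sorting activity can even be mimicked in a few lines of code. A minimal sketch in Python, assuming a naive capitalization heuristic and an invented sentence; real noun identification would need proper part-of-speech tagging:

import re

# Naive sketch of the IWB noun-sorting activity: flag capitalized words
# that are not sentence-initial as proper-noun candidates, much as a
# student might highlight them on the board. (Illustrative only.)
sentence = "On Saturday, Maya walked her dog to Stanley Park."

words = re.findall(r"[A-Za-z]+", sentence)
proper = [w for i, w in enumerate(words) if i > 0 and w[0].isupper()]

print("Proper-noun candidates:", proper)  # ['Saturday', 'Maya', 'Stanley', 'Park']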

Effectively, the digital world is remediating our previous notion of text in the sense of books and print. Writing—its organization, its format, and its role in culture—is being completely refashioned.

References

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print (2nd ed.). Mahwah, NJ: Lawrence Erlbaum.

December 13, 2009   No Comments

Multimodalities and Differentiated Learning

“A picture is worth a thousand words.”

While there are many theories out there on how to meet the needs of diverse learners, there is one common theme—teach using multimodalities. The strong focus on text in education has made school difficult for a portion of students, those whose strengths and talents lie outside verbal-linguistic and visual-spatial-type abilities. Thus the decreasing reliance on text, the incorporation of visuals and other multimedia, and the social affordances of the internet all facilitate student learning.

Maryanne Wolf (2008) contends that the human brain was not built for reading text. While the brain has been able to adapt its pre-existing capabilities, lending us the ability to read, the fact that reading is not an innate ability opens us to problems such as dyslexia. However, images and even aural media (such as audiobooks) can offset this disadvantage. Students who find reading difficult can find extra support in listening to taped versions of class novels or other reading material. Also, students with writing-output difficulties can now write with greater ease using computers or other aids such as AlphaSmart keyboards.

Kress’ (2005) article highlights the difference between traditional text and the multimedia text that we often find on web pages today. While the former presented content in a fixed order set down by the author, Kress notes that the latter’s order is more open and can be determined by the reader. One could argue that readers can still determine order with a traditional text by skipping chapters. However, chapters often flow into each other, whereas web pages are usually designed as more independent units.

In addition, Kress (2005) notes that texts have only a single entry point (the beginning of the text) and a single point of departure (the end of the text). Websites, on the other hand, are not necessarily entered through their main (home) pages; readers often find themselves at a completely different website immediately after clicking on a link that looks interesting. The fact that there are multiple entry points (Kress) is absolutely critical. A fellow teacher argued that this creates problems because there is no structure to follow: with text, the author’s message is linear and thus has inherent structure and logic, whereas multiple points of entry lend themselves to divergence and to learning that is less organized. By this argument, it is better to retain text and use less of the multimedia approach so that this structure and logic are not lost. The only problem is that such structure still only makes sense to a portion of the population. I never realized, until I began teaching, exactly how much my left-handedness affected my ability to explain things to others. Upon making informal observations, it was evident that it is much easier for certain people to understand me—lefties.
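Kress’ contrast can be made concrete with a small data-structure analogy. The sketch below (Python; the page names are invented) models a traditional text as an ordered list with one entry point, and a website as a graph that can be entered at any node:

# Toy illustration of Kress' (2005) contrast: a book has one entry point
# and a fixed reading order; a website is a graph of pages that can be
# entered anywhere. All names here are invented for illustration.

chapters = ["Ch. 1", "Ch. 2", "Ch. 3", "Ch. 4"]

def read_book(chapters):
    """A book is traversed from its single entry point, in order."""
    return list(chapters)  # the author fixes the sequence

site = {  # pages link to each other in no fixed order
    "home": ["about", "articles"],
    "about": ["home"],
    "articles": ["article-7", "home"],
    "article-7": ["articles", "about"],
}

def read_site(site, entry):
    """A reader may land on ANY page (e.g. from a search result) and
    follow links in an order the author never dictated."""
    visited, stack = [], [entry]
    while stack:
        page = stack.pop()
        if page not in visited:
            visited.append(page)
            stack.extend(p for p in site[page] if p not in visited)
    return visited

print(read_book(chapters))           # always the same order
print(read_site(site, "article-7"))  # order depends on where you entered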

Kress’ (2005) article discusses a third difference—the presentation of material. Writing has a monopoly over the page and how content is presented in traditional texts, while web pages often have a mix of images, text, and other multimedia.

It is ironic to note that text offers differentiation too. While the words describe and denote events and characters, none of these are ‘in your face’; the images are not served to you—instead, you come up with the images yourself. I prefer reading because I can imagine the story as it suits me. In this sense, text provides a leeway that images do not.

Multimodalities extend into other literacies as well. Take, for example, mapping. Like words and alphabets, maps are symbolic representations of information, written down and drawn to facilitate the memory and sharing of that information. Map reading is an important skill to learn, particularly to help us navigate unfamiliar cities and roadways. However, the advent of GPS technology and Google Streetview presents a change—there is a decreasing need to be able to read a map, especially when GPS devices offer turn-by-turn guidance and Google Streetview gives an exact 360º visual representation of the street.

Yet we must be cautious in our use of multimodal tools; while multimodal learning is a helpful way to meet the needs of different learners, too much could be distracting and thus detrimental to learning.

References

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22.

Wolf, M. (2008). Proust and the squid: The story and science of the reading brain. New York: Harper Perennial.

December 13, 2009   No Comments

Connections

In trying to make some final connections between my own research on graphic novels, increased literacy, and multimodal texts, I read a few of the projects that seemed most relevant to me. What follows are my thoughts. (Just pretend the italicized words are my thought bubbles.)

I just want to remind myself to consult Drew Murphy’s Wiki on using Digital storytelling for the reluctant reader.  It might be an interesting contrast to what I did for my project.

http://wiki.ubc.ca/User:DrewRyan#Creating_Classroom_Community_Through_Digital_Storytelling

It turned out his project was more about engaging students in storytelling using digital media, rather than getting them to read more. I think that would be an excellent next step to promoting reading with graphic novels and other types of visual media. As I thought when I read the title, this is an excellent example of a further remediation of text: as Bolter describes it, one technology building on another. In the same way, the skills learned using multimodal texts allow the reader to progress onto the next, more sophisticated media. The use of digital texts also allows even more input and creativity from the writer (consumer as producer).

This quote from Noah Burdett stuck with me: “With the need for speed a literate person needs to be able to think critically about the material in terms of its relevance and its authority.” NoahBurdett_ETEC540_majorproject https://blogs.ubc.ca/etec540sept09/2009/11/30/final-project-literacy-and-critical-thinking/

To become multiliterate, “what is also required is the mastery of traditional skills and techniques, genres and texts, and their applications through new media and new technologies” (Queensland, 2004), from Learning Multiliteracies by Carmen Chan.

Philip Salembier discussed the New Literacy and Multiliteracies in From one literacy, to many, to one.

He really explains how we have to be prepared, as teachers and parents, to understand that literacy means more than reading and writing, and that digital literacy is not just understanding how to navigate the internet. All of these are aspects of the new literacy, along with social networking skills.

Fun interactive story http://wiki.ubc.ca/Course:ETEC540/2009WT1/Assignments/MajorProject/ItsUpToYou by Ryan Bartlett.  Might use this style to get the seniors to do a research project on Social Injustice.

Finally, just because this one blew me away! From Tracy Gidinski https://blogs.ubc.ca/etec540sept09/2009/11/29/the-holocaust-and-points-of-view/ I hope I can use this style at some point either with my Marketing or International Business class or perhaps even a simpler storyline for an FSL course.

December 2, 2009   No Comments

Rip.Mix.Feed Animoto

I opted to use Animoto.com for my rip.mix.feed contribution. It’s a Web 2.0 tool that bills itself as “the end of slideshows.” It allows the user to combine images and music (which you supply or choose from their selections) into a type of music video/mashup. The site takes care of transitions between the images using a variety of effects, adjusting the timing to the chosen music. The free version (which I used) is limited to 30-second compositions. All of the images I used, apart from the first one, I found in Flickr searches for images with Creative Commons licensing. (I have a list of all the Flickr URLs, and if anyone is interested I can post it as an attachment.) The start image was generated with the Bart Simpson chalkboard generator. The number of images one can squeeze into 30 seconds varies somewhat depending on the music selected, but is around 15-18: fast-paced tunes show the slides more quickly and use a few more images than slower ones. The site allows for easy remixes, and it’s possible to add and remove images until the desired effect is achieved. It’s also possible to get stuck endlessly tweaking while looking for the ‘perfect’ edit (don’t ask me how I know…). I’ll post links to two versions that use the same images but different music. The first, which I not very creatively called Text technologies 1, uses 30 seconds of a song called Finally by the Sunday Runners. The second, Text technologies v.2, uses a faster indie piece called 1234 by Fake, which I enjoy for its lyrics: “I’m just sick of my school, and my teacher’s a fool..” Certainly, many Web 2.0 tools are fun and engaging. Every time I (re-)make this realization I tell myself that I have to get more creative about incorporating them into my lessons!
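Out of curiosity, the 15-18 figure is easy to sanity-check with some back-of-the-envelope arithmetic. A minimal sketch in Python, where the per-slide durations are my own guesses rather than anything Animoto publishes:

# Rough estimate of how many images fit in a 30-second Animoto clip.
# The per-slide durations below are guesses chosen to reproduce the
# 15-18 images I observed; Animoto's actual timing rules are unknown.

CLIP_SECONDS = 30

def slides_that_fit(seconds_per_slide, clip_seconds=CLIP_SECONDS):
    return int(clip_seconds // seconds_per_slide)

print(slides_that_fit(2.0))   # slower tune, ~2.0 s per slide -> 15 images
print(slides_that_fit(1.65))  # faster tune, ~1.65 s per slide -> 18 images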

December 1, 2009   2 Comments

Making [Re]Connections

This is one of the last courses I will be taking in the program and as the journey draws to a close, this course has opened up new perspectives on text and technology. Throughout the term, I have been travelling (more than I expected) and as I juggled my courses with the travels, I began to pay more attention to how text is used in different contexts and cultures. Ong, Bolter and the module readings were great for passing time on my plane rides – I learned quite a lot!

I enjoyed working on the research assignment, where I was able to explore the movement from icon to symbol. It gave me a more in-depth look at the significance of visual images, which Bolter discusses along with hypertext. I am more used to working with text in a constrained space, but after this assignment I began thinking more about how text and technologies work in wider, more open spaces. By the final project, I found myself exploring a more open space where I could be creative—a place that is familiar to me yet still has much left to explore: the Internet.

Some of the projects and topics that were particularly related to this new insight include:

E-Type: The Visual Language of Typography

A Case for Teaching Visual Literacy – Bev Knutson-Shaw

Language as Cultural Identity: Russification of the Central Asian Languages – Svetlana Gibson

Public Literacy: Broadsides, Posters and the Lithographic Process – Noah Burdett

The Influence of Television and Radio on Education – David Berljawsky

Remediation of the Chinese Language – Carmen Chan

Braille – Ashley Jones

Despite the challenges of following the week-to-week discussions from Vista to Wiki to Blog and to the web in general, I was on track most of the time. I will admit I got confused a couple of times, and I was more of a passive participant than an active one. Nevertheless, the course was interesting and insightful, and it was great learning from many of my peers. Thank you, everyone.

December 1, 2009   1 Comment

Commentary 3 – text will remain

Hi everyone,

Hayles explains that sometime between 1995 and 1997 a shift in Web literature occurred: before 1995, hypertexts were primarily text based, “with navigation systems mostly confined to moving from one block of text to another” (Hayles, 2003). Post-1997, Hayles states, “electronic literature devises artistic strategies to create effects specific to electronic environments” (2003).

Bolter and Kress both contend that technology and text have fused into a single entity. That is, in the latter half of the 20th century, the visual representation of text was transformed to include visual representations of pictures, graphics, and illustrations. Bolter states that “the late age of print is visual rather than linguistic . . . print and prose are undertaking to remediate static and moving images as they appear in photography, film, television and the computer” (Bolter, 2001, p. 48). Cyber magazines such as Mondo 2000 and WIRED are “aggressively remediating the visual style of television and digital media” with a “hectic, hypermediated style” (Bolter, 2001, p. 51). Kress notes that “the distinct cultural technologies for representation and for dissemination have become conflated—and not only in popular commonsense, so that the decline of the book has been seen as the decline of writing and vice versa” (Kress, p. 6). In recent years, perhaps due to increased bandwidth, the WWW has featured much more multimedia such as pictures, video, games, and animations. As a result, there is noticeably less text than appeared in the first web pages designed for Mosaic in 1993. Furthermore, the WWW is increasingly inundated with advertisements.

The pairing of text with imagery is also evident in magazines, which use pictures, graphics, and illustrations as visual aids to their texts. Tabloid-style magazines such as Cosmo, People, and FHM are filled with advertisements. For example, the April 2008 edition of Vogue has a total of 378 pages: sixty-seven of these pages are dedicated to text, while the remaining 311 are full-page advertisements.

While there are increasingly more spaces, both in cyberspace and in printed works, that contain much imagery alongside text, there still exist spaces that are, for the most part, text-based. This is especially evident in academia. For example, academic journals, whether online or printed, are still primarily text. Pictures, graphics, and illustrations are used almost exclusively to illustrate a concept and, to my knowledge, have not yet included video. University texts and course companions are primarily text as well. Perhaps, as Bolter states, this is because “we still regard printed books and journals as the place to locate our most prestigious texts” (Bolter, forthcoming). However, if literature and humanistic scholarship continue to be printed, they could be further marginalized within our culture (ibid).

Despite there being a “breakout of the visual” in both print and electronic media, Bolter makes a very strong argument that text can never be eliminated from the electronic form in which it currently exists. That is, all videos, images, animations, and virtual reality rest on an underlying base of computer code. What might happen instead is the “devaluation of writing in comparison with perceptual presentation” (Bolter, forthcoming). The World Wide Web is an example of this. The WWW provides a space in which millions of authors can write their own opinions; Bolter is, in fact, doing this for his forthcoming publication “Degrees of Freedom”. The difference between Bolter’s text and others is that he makes minimal use of imagery and relies almost entirely on his prose to convey the meaning of his writing. Be that as it may, Bolter contends that the majority of WWW authors use videos and graphics to illustrate their words (forthcoming). Text will remain a large part of how we learn, absorb, and communicate information; however, “the verbal text must now struggle to assert its legitimacy in a space increasingly dominated by visual modes of representation” (Bolter, forthcoming).
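Bolter’s point that the visual web rests on a base of code is easy to demonstrate for oneself. A minimal sketch in Python (the file names are invented): even a page dominated by an image and a video is authored, stored, and transmitted as plain text.

# Even a page dominated by images and video is, underneath, plain text.
# This writes a small "visual" page; everything the reader sees is
# ultimately specified by the markup below. File names are invented.

page = """<!DOCTYPE html>
<html>
  <body>
    <img src="photo.jpg" alt="A photograph">
    <video src="clip.mp4" controls></video>
  </body>
</html>"""

with open("visual_page.html", "w") as f:
    f.write(page)

print(len(page), "characters of plain text specify the whole visual page")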

John

References

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print (2nd ed.). Mahwah, NJ: Lawrence Erlbaum.

Bolter, J. D. (forthcoming). Degrees of freedom. Retrieved November 28, 2009, from http://www.uv.es/~fores/programa/bolter_freedom.html

Hayles, K. (2003). Deeper into the machine: The future of electronic literature. Culture Machine, 5. Retrieved August 2, 2009, from http://www.culturemachine.net/index.php/cm/article/viewArticle/245/241

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5–22.

November 29, 2009   1 Comment

Oldest Bible now in digital…

In the December 2009 edition of National Geographic, I came across an article by A.R. Williams detailing how the oldest known New Testament is now available online at http://codexsinaiticus.org/en/. According to Williams, the virtual version lets you see additions that were made and corrections that were overwritten. I tried it out, and it is truly realistic. It took scholars, in a project involving the British Library, over four years to digitize! Amazing. The tools at http://codexsinaiticus.org/en/manuscript.aspx really give you the feeling that you are flipping through the ancient codex.

November 27, 2009   No Comments

Commentary 2- Mechanization: before and after

Ancient and modern writing are technologies in the sense that they are methods for arranging verbal ideas in a visual space. (Bolter, 2001, p. 15)

In my previous commentary, I attempted to talk about the impact of writing as a technology on human development. In this second commentary, I’d like to reflect on the transition from writing to writing as a mechanized process, and its impact on the way we relate to text.

Writing, as we have read and discussed in this course, has changed significantly over time, driven by human needs and by the complementary technologies we have created or adapted.

Scroll and papyrus

The scroll and papyrus are the precursors of books as we know them today. These “portable” versions of text were the first attempts to make writing and reading a more accessible technology. During this “era”, writing was considered an art form, owing to the complexity of its production and reproduction and to the techniques and methods it required. These formats allowed the delivery of information or text in an uninterrupted sequence, which printed books still maintain.

Hand-crafted books and manuscripts

Production and reproduction of texts during this and previous eras was carried out only by those fully trained and skilled in writing and in reproducing letterforms. Writing a book and reproducing it took a great deal of time, effort, and people, resulting in high costs and low distribution rates, and making texts practically inaccessible to the general public.

Migrating from the scroll to the bound-pages format allowed the reader to flip easily through the pages to advance or return to a specific point in the reading. Although early books were large and needed to be laid on a supporting surface to read (a table, desk, reading podium, etc.), the new format freed the reader’s hands, making it possible to write and read simultaneously. It also facilitated the production and reproduction of texts, allowing the writer to add ideas between pages or correct mistakes within a single page. Bound pages created the need to organize and categorize content within the text, giving rise to page numbering, tables of contents, and indexes.
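In computing terms, this is the shift from sequential to random access. A toy sketch in Python (the page count is invented) of why page numbering matters:

# The scroll/codex contrast restated as sequential vs. random access:
# reaching page 200 in a scroll means winding past everything before it;
# a bound, numbered codex lets the reader jump straight there.

pages = [f"page {n}" for n in range(1, 301)]

def read_scroll(pages, target):
    """A scroll must be unwound past every page before the target."""
    for steps, page in enumerate(pages, start=1):
        if page == f"page {target}":
            return steps

def read_codex(pages, target):
    """A bound, numbered codex allows a direct jump."""
    return pages[target - 1]

print(read_scroll(pages, 200))  # 200 pages of unwinding
print(read_codex(pages, 200))   # one lookup: 'page 200'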

The printed book

As writing progressed, the letterpress was introduced in the fifteenth century, allowing words to be duplicated en masse (Bolter, 2001, p. 14); then came typography, the first process by which text could be reproduced by a machine. The printing press later became an effective “upgrade” of the letterpress and typography, allowing the production and reproduction of several pages in a shorter period of time. These rapid and rather radical changes allowed the entire writing process to be mechanized, automated, or “machine-produced”, which in turn facilitated reproduction and greatly reduced costs and human error.

The printed book facilitated reading through the typography and format used. Since printed books were smaller, the reader could easily transport the text. This shift in format made the book accessible to different publics and also gave the reader a certain sense of ownership over the book: making margin notes, highlighting, underlining, and so on.

The electronic book

The electronic book (e-book) format has been around for about a decade now but has not been fully adopted as a “mainstream” book format. Commercial e-books initially began as an alternative to printed books, promoting ecological and economical “savings” as their main advantage. Currently, there are many books in electronic format, which can be read on a computer screen or on special portable electronic devices. According to Freda Turner (2005), “E-books have an advantage over traditional books in that they offer hypertext linking, search features, and connections to other online databases enhancing data comprehension.” Turner mentions that the current lifestyle “requires” information and texts to be interactive and convenient, allowing the reader to jump between topics and ideas, as well as to easily transport a library in a small electronic device.

A shift in the way we relate to text

Before the mechanization of writing and the commercial distribution of texts, the relationship between the reader and the text was impersonal and somewhat complicated. The reader could not transport the text, or could do so only with difficulty, and did not have access to texts as freely and easily as today. Before mechanization, reading was usually done standing and in select spaces, such as libraries, that could afford a copy of the text. Manually produced texts imposed a certain authority over readers: their high cost and inaccessibility prevented readers from adopting the text and adapting it to their needs. As writing transformed, the reader took a certain “ownership” over texts by making marks and comments and by easily transporting or sharing the text in different places.

Electronic text has modified not only the way we read, but also the way we share, write, and reproduce text. Electronic readers can manipulate or tailor some texts to their needs, or add comments directly for others to see (Bolter, 2001, p. 11). Both “traditional” and electronic texts encourage readers and writers to develop different abilities and skills. Some of these competencies are creative, critical, and associative thinking; the organization of ideas and thoughts; and the materialization of the abstract. Regarding the production of texts, the digital era has also allowed different “authors” to cooperate on a single text without time or geographical limitations. Nowadays, the reader can easily adopt (download, browse, consult) and adapt (edit, highlight, review) texts to fit specific needs, resulting in a closer, more personal relation with text.

Several authors, including Turner (2005), have stated that printed texts will become obsolete at a certain point in time. It is my belief, reinforced by discussions within the course, that electronic books will complement printed texts, not necessarily replace them. What both digital and printed versions of text have in common is a mechanization process, or technical skill of some sort, that is required to create the final product; the difference lies in the format and form rather than the substance. The most important aspect to consider, in terms of text and the mechanization of its production, is how readers and writers relate to it and are able to manipulate it and make it their own.

 

References:

Beck, N., & Fetherston, T. (2003). The effects of incorporating a word processor into a year three writing program. Information Technology in Childhood Education Annual, 2003(1), 139-161.

Ong, W. (2008). Orality and literacy: The technologizing of the word. London: Routledge.

Turner, F. (2005, November). Incorporating digital e-books into educational curriculum. International Journal of Instructional Technology and Distance Learning, 2(11), 47-52.

November 15, 2009   1 Comment

Commentary #2

Writing Spaces: Hypertext and the Remediation of Print Re-examined

Erin Gillespie

ETEC 540

November 15, 2009

 

The debate surrounding the future of text is never more exciting than when considering the relationship between print and hypertext. Bolter (2001) settles the debate over the future of text (hypertext or print) in the middle ground, nicely packaging and tagging the answer as “both” thanks to one process: remediation. Bolter (2001) contends that interactivity and the merging of text and graphics are strategies inherent in electronic writing that create a more authentic experience for the reader, yet they depend on the knowledge of print. In chapter three of Writing Space, Bolter (2001) presents hypertext as the remediation of print, not as its replacement.

Bolter’s (2001) remediation walks a fine line between enthusiasts of new electronic writing and the old guard of traditional print. He argues soundly that hypertext remediates print because it is historically connected to print, while at the same time the two are easily distinguishable from each other (Bolter, 2001). According to Bolter (2001), electronic writing affords movement between visual space and conceptual space; these spaces are different from the space in a book, yet knowledge of the book helps us recognize these affordances. To optimize our experience when writing electronically, we depend on our former knowledge of print (Bolter, 2001). In other words, hypertext does not stand alone, uninfluenced by the history of print technology. Bolter (2001) argues that this fact is what makes electronic hypertext, ironically, new: our dependency on, and confrontation with, our knowledge of the printed book when processing hypertext.

Remediation may be difficult to apply to the field of text in a few generations, a possibility Bolter (2001) does not explore in chapter three of Writing Space. It is interesting to consider this extreme and contrast it with Bolter’s (2001) middle-ground theory by examining the field of education from an ecological point of view. One way to re-examine the argument surrounding print and hypertext is to consider Darwin’s theory of evolution. Complex organisms evolve from simple organisms over time in an undirected progression of modification (Futuyma, 2005). Continuing with this theory, Darwin’s natural selection suggests that a member of a species develops a functional advantage and, over time, the advantaged members of the species survive to better compete for resources (Futuyma, 2005).

Consider print the simple organism: the reader and writer have one entry and exit point, and information is linear and fixed, according to Bolter (2001). Less simple is hypertext, which can be read from a variety of entry points and is fluid and associative (Bolter, 2001). If we continue with this metaphor, the advantaged members of the species of text will be hypertext, if we evolve to value fluidity and associative characteristics in text. Considering the popularity of hypertext, the flow of microcontent in Web 2.0 applications as described by Alexander (2006), and the speed of Jenkins’s (2004) media convergence, this direction in evolution is not unrealistic. Hypertext may survive in the place of print. However, the survival of a species is still dependent on the balance of its ecosystem, and in this metaphor the ecosystem is the student.

It is not illogical to apply an ecological perspective to the pedagogy of a school when discussing the adaptation of hypertext. In an examination of factors that affect the use of technology in schools, Zhao and Frank (2003) used an ecological perspective and found it to be an effective analytical framework. Zhao and Frank’s (2003) framework considers students as the ecosystem, computers a living species, teachers as members of a keystone (the most important) species, and external educational innovations as the invasion of an exotic species. It is fair to consider hypertext an external educational innovation in this framework due to its very recent introduction to the field of education and, thus, to the student. Print, on the other hand, would be a species comfortably functioning in the ecosystem as a textbook. Consider again Bolter’s (2001) contention that hypertext is distinct from yet dependent on print. As an invading exotic species, hypertext is initially dependent on the pre-existing species of print for survival in the ecosystem. Students need to know how to read and write text in order to understand hypertext.

However, Bolter’s (2001) theory of remediation holds true only if the ecosystem, or student, is dependent on the species of printed text prior to the introduction of the exotic species of hypertext. Bolter (2001) does not look further ahead than remediation. It is possible that in the future, students will be introduced to hypertext prior to developing a dependency on print knowledge. Currently, hypertext functions as the exotic, invading species for Tapscott’s (2004) Net Generation and Prensky’s (2001) Digital Natives. However, these same students will produce the Net Generation 2.0. As parents of the Net Generation 2.0, they will function as Zhao and Frank’s (2003) keystone species, a species already adapted to survive with hypertext. In chapter three, concerning remediation and hypertext, Bolter (2001) argues that print is the tradition on which hypertext depends. Yet Bolter (2001) did not consider hypertext as being dependent on previous versions of hypertext. Bolter’s (2001) remediation does not project far enough into the future. The ecosystem, as Net Generation 2.0 students, will remain balanced as the functional advantages of hypertext ensure the survival of this exotic species through displacement of the disadvantaged species, traditional print. Remediation of print may lead to the extinction of a dependency on print itself.

 

References

Alexander, B (2006). Web 2.0: A new wave of innovation for teaching and learning? EDUCAUSE, Review, 41(2), 33-44. Retrieved from http://net.educause.edu/ir/library/pdf/ERM0621.pdf

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print. Mahwah, NJ: Lawrence Erlbaum Associates.

Futuyma, D. J. (2005). Evolution. Sunderland, MA: Sinauer Associates.

Jenkins, H. (2004) The cultural logic of media convergence. International Journal of Cultural Studies, 7(1), 33-43. doi: 10.1177/1367877904040603

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1-6. Retrieved from http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf

Tapscott, D. (2004). The net generation and the school. Custom course materials ETEC 532 (pp. #2). Kelowna, B.C: University of British Columbia Okanagan, Bookstore. (Reprinted from Milken Family Foundation, http://www.mff.org/edtech/article.taf?_function=detail&Content_uid1=109).

Zhao, Y., & Frank, K.A. (2003). Factors affecting technology uses in schools: An ecological perspective. American Educational Research Journal, 40(4), 807-840. Retrieved from http://www.jstor.org/stable/pdfplus/3699409.pdf

November 14, 2009   1 Comment

How word processors and beyond may be changing literacy

Commentary #2

The word processor, in combination with the computer disk and CRT monitor, was first introduced in 1977 (Kunde, 1986). As Bolter points out, “the word processor is not so much a tool for writing, as it is a tool for typography” (p. 9). It seems that, even today, the word processor is essentially used as a tool to mimic conventional methods of typing. Whereas older printing processes lock “the type in an absolutely rigid position in the chase, locking the chase firmly onto a press,” a word processor differs only in that it composes text “on a computer terminal” in “electronic patterns (letters) previously programmed into the computer” (Ong, p. 119). Bolter notes this by stating that “most writers have enthusiastically accepted the word processor precisely because it does not challenge their conventional notion of writing. The word processor is an aid for making perfect printed copy: the goal is still ink on paper” (p. 9). The word processor better facilitates processes that were once done on the typewriter. That is, writers still type in text letter by letter, but the computer greatly improves revision. A few of these improvements include copying/cutting and pasting, changing fonts and paper size, and inserting automatically updating tables of contents, outlines, and references. In using these facilities, “the writer is thinking and writing in terms of verbal units or topics, whose meaning transcends their constituent words” (Bolter, p. 29). In this regard, the word processor did not change the printed word. However, although it did not fundamentally change how a printed product looks, it did have a major impact on industry, business, and literacy in education.
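The automatically updating table of contents is a nice concrete case of Bolter’s “verbal units or topics”: the software treats whole headings, not individual letters, as the things being manipulated. A minimal sketch of the idea in Python, assuming an invented “## ” heading convention rather than any real word processor’s internals:

# A toy version of an auto-updating table of contents: the program
# works with whole topical units (headings), not letters. The "## "
# heading convention here is invented for illustration.

document = """## Introduction
Some opening prose.
## The Word Processor
More prose.
## Conclusion
Closing prose."""

def table_of_contents(text):
    """Collect headings with the line they start on; re-run after any
    edit and the entries update themselves."""
    return [(i + 1, line[3:]) for i, line in enumerate(text.splitlines())
            if line.startswith("## ")]

for line_no, title in table_of_contents(document):
    print(f"{title} ....... line {line_no}")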

In the early 1980s there was much focus on the difference word processors were making in industry, business, and scholarly work. Bergmann points out that “this electronic revolution in the office [word processing] may change who does what sort of work, create some jobs and eliminate others” (p. F3). In fact, in 1977, 5.8% of jobs advertised in the New York Times mentioned computer literacy skills such as word processing; this number had doubled by 1983 (Compaine, p. 136). This was especially evident in clerical positions, in which “the proportion of secretary/typist want ads that required word processing skills went from zero in 1977 to 15 percent in 1982” (Compaine, p. 136). Furthermore, word processors, coupled with a phone line, greatly increased the speed at which documents were sent and received. Instead of mailing or dictating documents to another person, documents including graphs and charts could now be written and transmitted in seconds over the telephone, more cheaply than by previous methods (Bencivenga, p. 11). Scholars could now analyze texts “with the help of a computer programmed to scan the text quickly, picking out passages that contain the same word used in different contexts” (Compaine, p. 137). In the early 1980s, word processors and computers fundamentally changed how we process information and thus had much impact on literacy. Compaine refers to computer skills “as additional to, not replacements” (p. 139) for literacy, and notes that “whatever comes about will not replace existing skills, but supplement them” (p. 141). Compaine’s essay was written in 1983, but this trend continues today.

Furthermore, the word processor has affected literacy among students. In 1983, Ron Truman published an article in The Globe and Mail in which he reported that elementary teachers said word processors were “having a remarkable effect on how children learn to use language: writing on a computer screen improves spelling, grammar and syntax” (p. CL14). An article by Goldberg et al. entitled “The effect of computers on student writing: A meta-analysis of studies from 1992 to 2002” summarizes thirty-five previous studies, concluding that the “writing process [in regards to K-12 students writing with computers vs. paper-and-pencil] is more collaborative, iterative, and social in computer classrooms as compared with paper-and-pencil,” that “computers should be used to help students develop writing skills,” and that, “on average, students who use computers when learning to write are not only more engaged and motivated in their writing, but they produce written work that is of greater length and higher quality” (p. 1). Similarly, Beck and Fetherston conclude that “the use of the word processor promoted students’ motivation to write, engaged the students in editing, assisted proof-reading, and the students produced longer texts” and that students “produced writing that was better using the word processor than that which was achieved using the traditional paper and pencil method” (p. 159).

Different forms of electronic writing have participated “in the restructuring of our whole economy of writing” (Bolter, p. 23). Even as early as 1983, Compaine predicted, with respect to electronic texts, that although “many adults would today recoil in horror at the thought of losing the feel and portability of printed volumes . . . print is no longer the only rooster in the barnyard” (p. 132). Looking at the present day and into the future, the computer continues to reshape and challenge the traditional form of the printed book: “our culture is using the computer to refashion the printed book, which, as the most recent dominant technology, is the one most open to challenge” (Bolter, p. 23). The World Wide Web and, most recently, the advent of Web 2.0 have challenged traditional writing media and the way in which we create electronic media. Word processors have become one tool in an arsenal of programs developed for electronic publishing (such as Dreamweaver for web development, PowerPoint for presentations, iMovie and Movie Maker for video, and Adobe Flash for animations). As such, literacy still includes traditional texts, but much has been added with digital literacy. Books, magazines, newspapers, academic journals, and the like, now predominantly written using word processors (or other desktop publishing software), will not be replaced in their traditional form in the near future, but they have certainly had to give up much of their dominance to non-traditional, electronic writing spaces.

John

References

Bergmann, B. R. (1982, May 30). A threat ahead from word processor. The New York Times, p. F3.

Beck, N., & Fetherston, T. (2003). The effects of incorporating a word processor into a year three writing program. Information Technology in Childhood Education Annual, 2003 (1), 139 – 161.  Retrieved January 15, 2009, from http://www.editlib.org/index.cfm/files/paper_17765.pdf?fuseaction=Reader.DownloadFullText&paper_id=17765.

Bencivenga, J. (1980, March 28). Word processors faster than dictation. The Christian Science Monitor, p. 11.

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print (2nd ed.). Mahwah, NJ: Lawrence Erlbaum.

Compaine, B. M. (1983). The new literacy. Daedalus, 112(1), 129-142.

Goldberg, A., Russell, M., & Cook, A. (2003). The effect of computers on student writing: A meta- analysis of studies from 1992 to 2002. Journal of Technology, Learning, and Assessment, 2(1). Retrieved November 7, 2009, from http://escholarship.bc.edu/cgi/viewcontent.cgi?article=1007&context=jtla

Johnson, S. (1981, October 11). Word processors spell out a new role for clerical staff. The New York Times, p. SM28.

Kunde, B. (1986). A brief history of word processing (through 1986). Fleabonnet Press. Retrieved November 7, 2009, from http://www.stanford.edu/~bkunde/fb-press/articles/wdprhist.html

Ong, W. J. (1982). Orality and literacy: The technologizing of the word. London and New York: Methuen.

Truman, R. (1983, November 24). Word processors prove boon in making youngsters literate. The Globe and Mail, p. CL14.

November 8, 2009   1 Comment

The Photocopier

Hi everyone,

You can find my research paper about the invention of the photocopier in a wiki here. Comments are most welcome!

Enjoy.

John

November 1, 2009   No Comments

William Blake and the Remediation of Print

One might be inclined to view William Blake’s illuminated books as throwbacks to mediaeval illuminated manuscripts. Yet they should rather be understood as “remediating” older media. According to Bolter (2001, p. 23), remediation occurs when a new medium pays homage to an older medium, borrowing and imitating features of it, and yet also stands in opposition to it, attempting to improve on it. In the case of Blake’s illuminated books, one of the older media being remediated was the mediaeval illuminated manuscript, but another medium being remediated was the printed book, which in Blake’s time had already been in use for three centuries.

Blake adopted the way in which the richly illustrated texts of mediaeval illuminated manuscripts combined the iconic and the symbolic so that the former illumined the meaning of the latter, the images revealing the spiritual significance of the scripture. Blake also seized upon an aspect of illuminated manuscripts that would later impress John Ruskin as well (Keep, McLaughlin, & Parmar, 1993-2000)—the way in which they served as vehicles for self-expression. The designs of manuscripts such as the Book of Kells and the Book of Lindisfarne, for instance, reflected the native artistic styles of Ireland and Northumbria and often depicted the native flora and fauna of those lands as well. Blake also adopted some of the styles and idioms of illustration found in mediaeval illuminated manuscripts, producing images in some cases quite similar to ones found in mediaeval scriptures and bestiaries (Blunt, 1943, p. 199). It seems that he also embraced the idea, embodied in the creation of illuminated manuscripts, that the written word can be something sacred and powerful and is therefore something to be adorned with gold and lively colours.

Blake’s illuminated books broke with the medium of mediaeval manuscripts mainly by virtue of what they adopted from the medium of the printed book. Blake produced his illuminated books first by making copper plates engraved with images and text, deepening these engravings with the help of corrosive chemicals. He then used inks to form impressions of the plates on sheets of paper, often colouring the impressed images further with watercolour paints (Blake, 1967, pp. 11-12). His use of copper plates and inks bore similarities to the use of movable type and ink to create printed books. For many years it was believed that, despite this similarity, Blake developed his illuminated books partly as a reaction against the mass production of books, hearkening back to the methods of mediaeval craftsmen – specifically the artists who produced illuminated manuscripts – who created unique items rather than mass-produced articles. Consequently, it was believed that after he produced the copper plates for the illuminated books he created only individual books on commission. This belief, first championed by 19th-century writers who claimed William Blake as a predecessor (Symmons, 1995), has recently been overturned, however, by the work of Joseph Viscomi. As a scholar and printer who attempted to physically reproduce the methods Blake employed to create his illuminated books, Viscomi concluded that Blake mass-produced these books in small editions of about ten or more copies each (Adams, 1995, p. 444).

The primary way in which the illuminated book was meant to improve on the printed book did not lie in the avoidance of mass production, but rather in the relation between the image and the word. In printed books, engraved images could be included with the text, but as the text had to be formed with movable type the image had to be included as something separate and additional (Bolter, 2001, p. 48). In Blake’s illuminated books, in contrast, the written word belonged to the whole image first engraved on the copper plate and then transferred to paper. It participated in the imaginative power of the perceived image, rather than just retaining a purely conceptual meaning. As with the text of mediaeval illuminated manuscripts, the words in Blake’s illuminated books often merge the iconic and the symbolic (Bigwood, 1991). For example, in plate 22 of Blake’s The Marriage of Heaven and Hell, the description of the devil’s speech trails off into a tangle of diabolical thorns. Furthermore, the words are produced in the same colours used in the images to which they belong, and partake in their significance—light watercolours being used in the first edition of the joyous Songs of Innocence and dark reticulated inks being used in the gloomier Songs of Experience (Fuller, 2003, p. 263). As John Ruskin later observed, this ability to use colour in the text of illuminated books made it a form of writing that uniquely expressed its creator’s imagination (Ruskin, 1888, p. 99).

Like several other artists of his time, Blake was disturbed by the mechanistic and atomistic conception of nature first put forward by the ancient philosopher Democritus and later revived around the seventeenth and eighteenth centuries by natural philosophers. This was the conception of nature as consisting of atoms in an empty void operating in accordance with mechanistic laws. Blake saw this as connected to the type of rationalism that would impose strict laws of reason on the mind and imprison the divine creative power of the imagination. Like others who opposed the mechanistic and atomistic worldview, Blake was particularly repelled by the mechanistic account of colour offered by Isaac Newton, voicing his objection to “Newton’s particles of light” (Blake, 1988, p. 153). It was thought that such an account treated colour in isolation from the power of the imagination to which it was naturally connected. It was also seen as severing colour from the living spirit of nature—the poet Goethe famously offering a complex alternative theory of colour which saw it as the result of a dynamic interaction of darkness and light.

For Blake, the printing press would at the very least be symbolic of the mechanistic and atomistic view of the world, the words in the printed text no longer partaking in the power of the imagination and the visible image but rather consisting of atoms of movable type separated by voids of empty space. The primacy of the imagination would be better served by the medium of illuminated books, where the image did not only illuminate the conceptual meaning of the word but also subsumed the word and imparted a deeper significance to it. The imagination was of central importance for Blake, who was a professional engraver as well as a poet, and for whom the medium of the image was a more fundamental part of his life and work than the written word (Storch, 1996, p. 458).

The ability to mass produce texts in which the image was primary and the written word secondary would have implications for literacy and education insofar as it could widely disseminate works that encouraged imaginative and perceptual understanding over strictly conceptual thought. While the illuminated book as such never became a widespread medium, some of the principles involved in its remediation of the illuminated manuscript and the printed book survived in the medium of the comic book and the graphic novel, which could also be said to realize some of its implications. These works were also mass produced and also differed from the printed book through the relation between the word and the image. For example, the way in which the symbolic word is made to partake in the imaginative power of the iconic image can be seen in the development of comic books in Britain. Early 20th century British comic books generally consisted of rows of images without words, each image having a block of text below it. When comic books adopted the style that introduced speech bubbles, thought bubbles, and sound effects into the image itself, the words became part of the action.

The illuminated book can also be seen as a precursor of hypertext and its remediation of the printed word, specifically insofar as the image in hypertext is coming to dominate the written word (Bolter, 2001, p. 47). In this regard, hypertext could also be said to be carrying through the implications that illuminated books posed for education and literacy. This is not to say that there are not significant differences between these media, of course. Creators of hypertext may look to the illuminated book for inspiration but leave behind the more laborious aspects of the medium, such as the use of copper plates and corrosive chemicals. This may be seen as both an improvement and a loss. One feature of the illuminated book absent in hypertext is the close connection between the work and the bodily act of creating it. As Carol Bigwood observes (1991, p. 309), reading Blake’s illuminated books is a perceptual experience in which we sense the movements of Blake’s hand and the rigidity of the copper on which the image was first made. So while the illuminated book remediates the printed word it may itself be remediated by hypertext.

References

Adams, H. (1995). Untitled [Review of the book Blake and the idea of the book]. The Journal of Aesthetics and Art Criticism, 53(4), 443-444.

Bigwood, C. (1991). Seeing Blake’s illuminated texts. The Journal of Aesthetics and Art Criticism, 49(4), 307- 315.

Blake, W. (1988). Selected writings. London: Penguin.

Blake, W. (1967). Songs of innocence and of experience. Oxford: Oxford University Press. (Original work published 1794).

Blunt, A. (1943). Blake’s pictorial imagination. Journal of the Warburg and Courtauld Institutes, 6, 190-212.

Bolter, J. D. (2001) Writing space: Computers, hypertext, and the remediation of print (2nd ed.). New Jersey: Lawrence Erlbaum Associates.

Fuller, D. (2003). Untitled [Review of the book William Blake. The creation of the songs: From manuscript to illuminated printing]. Review of English Studies, 54(214), 262-264.

Keep, C., McLaughlin, T., & Parmar, R. (1993-2000). John Ruskin, William Morris and the Gothic Revival. The Electronic Labyrinth. Retrieved from http://elab.eserver.org/hfl0236.html

Ruskin, J. (1888). Modern painters (Vol. 3). New York: John Wiley & Sons.

Storch, M. (1996). Untitled [Review of the books Blake and the idea of the book & Blake, ethics, and forgiveness]. Modern Language Review, 91(2), 458-459.

Symmons, S. (1995). Untitled [Review of the book Blake and the idea of the book]. British Journal of Aesthetics, 35(3), 308-309.

October 28, 2009   No Comments

Bada-Bing! The Oxford English Dictionary Taps into Internet Culture

When I think about the standardization of language, my first thought is to refer to the dictionary. Sam Winston, a UK artist, has done some neat pieces that use dictionaries as a springboard for playing with language and text. What I like about this project is that the artist’s intent is to make art accessible—which, in the context of this course, relates back to the press as a means to make literature accessible to the masses. Here is a short video clip of the project Dictionary Story.

In the video clip, Winston mentions James Gleick’s article for the New York Times, “Cyber-Neologoliferation”, as a source of inspiration. As this course has fueled my interest in language and technology, I decided to seek this article out.

Before reading the article I did not have a clue what ‘neologoliferation’ meant. What I learned is that a neologism is “a newly coined word that may be in the process of entering common use, but has not yet been accepted into mainstream language” (Wikipedia, Neologism, para. 1). This word seems completely appropriate in the context of the Oxford English Dictionary and its pursuit to capture “a perfect record, perfect repository, perfect[ly] mirror of the entire [English] language” (Gleick, 2006, para. 5).

The Oxford English Dictionary (OED) has a long history, dating back about a century and a half, and it has played an essential role in standardizing the English language. In his article, Gleick explores the workings of the dictionary today and how the online environment is changing the evolution of language. The OED has evolved from the immense printed resource of 20 volumes that constituted its second edition to a third edition that resides completely online. The Internet has not only been a vehicle that houses the dictionary but also a tool that allows lexicographers to eavesdrop on the “expanding cloud of messaging in speech” that occurs in resources such as newspapers, online newsgroups and chat rooms (para. 2).

With these tactics for tapping into culture, the dictionary has moved from being a ‘dictionary of written language’, where lexicographers comb through the works of Shakespeare to find words, to one where ‘spoken language’ is the resource (para. 12). Surprisingly, text messaging also serves as a source of new vocabulary. Beyond the OED’s hunting and gathering processes, the general public can also connect with the editors to have a new word assessed for inclusion in the dictionary. The ‘living document’ of the dictionary now seems to require the participation of the masses. With this, more and more colloquial language is being added to the dictionary (e.g. bada-bing).
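As a toy illustration of this kind of corpus eavesdropping (a sketch of my own in Python, not the OED’s actual tooling; the lexicon and sample text below are invented), candidate new words can be surfaced by flagging recurring tokens that do not appear in an existing headword list:

    # A minimal sketch of corpus-based word hunting: tokenize a sample of
    # online text and flag recurring words missing from a known lexicon.
    from collections import Counter
    import re

    known_lexicon = {"the", "word", "was", "everywhere", "online"}  # hypothetical stand-in for the OED's headwords

    corpus = "the word bada-bing was everywhere online bada-bing bada-bing"
    tokens = re.findall(r"[a-z']+(?:-[a-z']+)*", corpus.lower())

    candidates = Counter(t for t in tokens if t not in known_lexicon)
    for word, count in candidates.most_common():
        if count >= 2:  # recurring unknown words are neologism candidates
            print(word, count)

A real pipeline would of course need far larger corpora and human judgment; that editorial sifting is precisely the work Gleick describes.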

The printing press worked to standardize spelling, but according to Gleick (2006), spelling variation is on the rise with mass communication. With the Internet, the OED is coming to terms with the boundlessness of language. In the past, variations of the English language were spoken in many different pockets around the world. These variations still exist, but they are now more accessible through the Internet (Gleick, 2006). Peter Gilliver, a lexicographer at the OED, believes that the Internet transmits information differently than past vehicles for communication. He suggests that the ability to broadcast to the masses or communicate one-to-one is driving the change in language. For the OED, the ability to tap into a wide variety of online conversations affords a more accurate representation of word usage all over the world.

Standards in language help us communicate clearly in a way that is commonly understood. This article makes me wonder: with all the slang being added to the dictionary, what will language look like in 50 years? In 100 years? Will a new English language evolve? How will this affect spoken and written language? Will standards become more lax? With all these questions, the OED becomes an important historical record of the evolution of the English language.

References

Gleick, J. (2006, November 5). Cyber-neologoliferation. New York Times. Retrieved October 18, 2009, from http://www.nytimes.com/2006/11/05/magazine/05cyber.html?_r=1&adxnnl=1&pagewanted=print&adxnnlx=1255864379-QjA08nvBb8FH9FU9ZHJbRg

Neologism. (n.d.). In Wikipedia. Retrieved from http://en.wikipedia.org/wiki/Neologism

October 21, 2009   No Comments

On to the web… and then back off?

I was reading this New York Times article about Pixable, and it made me wonder whether a similar trend will emerge in writing. Just as Pixable envisions getting images back off the web and into traditional photo albums, will technology provide the means by which we will get text back into tangible forms?

October 17, 2009   No Comments

Archimedes palimpsest

The Archimedes palimpsest was thought to be lost, but it was actually recovered 1,000 years later! A palimpsest is defined as “a manuscript written on parchment that has another text written over it, leaving two (or more) layers of visible writing” (NOVA, 2003).

Archimedes was considered the greatest mathematician in Greek history. His priceless (actually valued at approximately two million dollars at auction) palimpsest was traced by NOVA (2003) and makes for an interesting story related to ancient text and the development of writing technologies. Here is an excerpt:

“circa 1000
A scribe working in Constantinople handwrites a copy of the Archimedes treatises, including their accompanying diagrams and calculations, onto parchment, which is assembled into a book.

circa 1200
A Christian monk handwrites prayers in Greek over the Archimedes text, turning the old mathematical text into a new prayer book. The book is now a palimpsest, a manuscript with a layer of text written over an earlier scraped- or washed-off text” (NOVA, 2003).

I remembered that Richard Clement (1997) wrote about the practice of scraping off still-wet ink in Medieval and Renaissance Book Production: Manuscript Books. It is interesting to see an actual example of a 1000-year-old text that survived this process! The link has some great images and additional links you may be interested in.

By the way, I found this site by using the Librarian’s Internet Index. I hope it helps some classmates with their research. I also tried to hyperlink in this post, but my links led to a 404 Error message. Ah well, the old-fashioned digital literacy method of “cut and paste into your browser” will work for the links. I posted them below. Erin

References

Clement, R. (1997). Medieval and renaissance book production: Manuscript books. Retrieved October 16, 2009, from http://www.the-orb.net/encyclop/culture/books/medbook1.html

Librarian’s internet index. (2009). Retrieved October 16, 2009, from http://www.lii.org/

NOVA. (2003). Infinite secrets: The Archimedes palimpsest. Retrieved October 15, 2009, from http://www.pbs.org/wgbh/nova/archimedes/palimpsest.html

October 16, 2009   No Comments

Derrida and Writing

In a number of the readings for this course the philosopher Derrida has been mentioned, along with his “graphocentric” view that writing is a more primary type of communication than speech. He is a difficult philosopher to understand, but I’ve studied his thought somewhat in the past and I’d like to try to clarify his ideas about writing as far as I understand them.

The background that Derrida was coming from, and reacting against, was structuralism. According to structuralism, words have their meaning by how they relate to other words in a whole system of language. Proponents of structuralism thus draw a distinction between language (the whole system that gives words their meaning) and speech (the things we actually say). The distinction is discussed by Stephen Fry and Hugh Laurie in this comedy sketch.

A related distinction made by structuralists was that between the signified and the signifier. The signified is the place a word takes in the whole system of language, and the signifier is the spoken sound or written mark of the word.

Derrida rejected the idea of a fixed system of language giving meaning to everything written and spoken, and rejected the idea that there is a signified that gives meaning to the signifier. He believed that language should be understood in terms of signifiers only, which in turn are to be understood as dependent on acts of signifying. These acts of signifying have meaning, he thought, only in relation to all other acts of signifying. With new acts of signifying these relations can change, so meaning is never fixed but is constantly “deferred”. His method of “deconstruction” is an attempt to change received meanings and received interpretations, using methods such as reversing the received view about what is important and what is unimportant in a text.

Derrida believed that the notion that speech is primary and writing secondary was based on the mistaken view that, with speech, the meaning of our words is something “present”. According to this view, the person who speaks has mastered the system of language to some extent and is an authority on what he or she means. For instance, when you speak to me I am able to respond to your questions and reply, “No, what I meant was…” The written word, in contrast, is something whose meaning is more elusive, for it depends on what the writer meant when he or she wrote it, and the writer may be absent and might even be dead when we read it.

Although he acknowledged that from a historical point of view speech appeared before writing, Derrida thought that writing revealed the nature of language more fully than speech did, for it reflected the way in which the meanings of what we say are not within our control and are constantly open to revision and reinterpretation.

The clearest introduction to Derrida’s views on writing that I have come across is in Richard Harland’s book Superstructuralism. You can see some of it here.

There’s also a movie about Derrida on Google Video, which is not too bad:

http://video.google.com/videoplay?docid=-7347615341871798222

October 11, 2009   No Comments

Commentary 1: An Observation of How Orality and Literacy Have Changed Interactions Between People

Technology has made significant impacts on oral and written communication and interaction. The difference between oral and literate cultures can be observed through the introduction and evolution of writing technologies. Ong (2002) posits that oral cultures developed mnemonic patterns to aid in the memory retention of thought, while literacy forces the creation of grammatical rules and structured dialogue. The jump from orality to literacy would have been a challenge for cultures wishing to preserve their traditions and thoughts in writing, and yet the ability to write and record information has enabled many cultures to pass down important pieces of knowledge to future generations.

Ong (2002) explains how, despite being a late development in human history, writing is a technology that has shaped and powered intellectual activity, and that written symbols are more than a mere memory aid. As Ong outlines, oral cultures had to retain information in particular ways, and when their compositions are written down, the mnemonic patterns of oral speech become evident. Given that challenge of retention, does literacy require orality? Postman (1992) supports Thamus’ belief that “proper instruction and real knowledge must be communicated” and further argues that, despite the prevalence of technology in the classroom, orality still has a place in the space for learning.

As writing technologies evolve, culture and society tend to evolve toward the technology, developing new ways to organize and structure knowledge (Ong, 2002) in order to communicate information and changing the way interactions take place. The construction of speech and the construction of text change depending on the technology. For instance, the computer permits the individual to delete or backspace over any errors in spelling or grammar and to construct sentences in different ways with the assistance of automatic synonym suggestions, a thesaurus, or a dictionary. Before the computer, errors could not be so easily changed: the typewriter’s ink remained on the paper until the invention of correction fluid. Tracking the changes to the original Word document with which this paper was composed would reveal the number of modifications and deletions, a feature of written technology with no counterpart in orality, because a culture may note errors in speech but cannot track exactly where each error was made. In public speech, one can observe changes in behaviour, pauses, and the “umms” and “uhhs” of the speaker, but these traces cannot be revisited once the moment has passed.
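To make that contrast concrete, here is a minimal sketch of my own (in Python, using the standard difflib module rather than Word’s actual track-changes machinery) showing how two drafts of a sentence can be compared to recover exactly where revisions were made, something no listener could reconstruct from spoken corrections:

    # A toy illustration of how writing, unlike speech, lets us enumerate
    # exactly where a text was revised between drafts.
    import difflib

    draft_one = "Writing technologies shape how we comunicate thoughts".split()
    draft_two = "Writing technologies change how we communicate ideas".split()

    matcher = difflib.SequenceMatcher(a=draft_one, b=draft_two)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            # Each opcode records what was removed and what replaced it.
            print(tag, draft_one[i1:i2], "->", draft_two[j1:j2])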

With text messaging, the construction of information is often shortened, even more so than with instant messaging. The abbreviation of text to fit within a limited space has taught individuals to construct conversations differently, in a manner that would not have been common 15 to 20 years ago. The interaction between individuals has changed, since text messaging requires the reader to decipher the abbreviated format. In a sense, text messaging uses a form of mnemonics to convey messages from one person to another. This seemingly new form of literacy in some cases requires more abstract thinking and, as Postman (1992) suggests, may require orality to communicate the true message, perhaps in the form of a phone call.
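As a toy model of that deciphering (the abbreviation table is my own invention, not a standard lexicon), text-message shorthand can be expanded with a simple lookup that leaves unrecognized words untouched:

    # A minimal sketch of expanding text-message shorthand; unknown words
    # pass through unchanged, mimicking a reader who only knows some slang.
    ABBREVIATIONS = {"gr8": "great", "l8r": "later", "u": "you", "r": "are", "b4": "before"}

    def expand(message: str) -> str:
        return " ".join(ABBREVIATIONS.get(word.lower(), word) for word in message.split())

    print(expand("c u l8r"))  # -> "c you later"; "c" is not in the table, so it survives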

Presenting learning materials in shorter formats becomes more important for educational technologies like mobile learning, where netbooks and mobile phones are utilized for classroom learning. Postman (1992) posits that there is a need for an increased understanding of the efficiency of the computer as a teaching tool and of how it changes the learning process. With mobile technologies, interaction could be limited by abbreviated formats, as seen with text messaging, which in some cases may not be an effective form of learning for some students. Despite the invention of newer technologies, orality often helps clarify thought processes, concepts and information. While a student can absorb knowledge through literacy alone, orality can assist in the retention of information.

The complexity of written communication can be taken a level further with the basis of writing, pictograms: images that can be recognized and deciphered by most individuals. Gelb (in ETEC 540) argues that limited writing systems like international traffic signs avoid language and yet can be deciphered by illiterates or speakers of other languages. Although most traffic signs are clear, some do require translation for the meaning to be clear, whether that translation is made orally or through writing. Ong (2002) supports the notion that codes need a translation that goes beyond pictures, “either in words or in a total human context, humanly understood” (p. 83).

While writing and writing technologies have evolved and changed the way interactions and communication take place, one thing has not changed: the need to find the most basic way to communicate with individuals who cannot read a given language, something orality cannot accomplish for those unfamiliar with the spoken tongue. Thamus feared that writing would be a burden to society, but its advantages outweigh the disadvantages (in Postman, 1992).

References

Gelb, I. J. (2009). Module 2: From Orality to Literacy. In ETEC 540 – Text Technologies: The Changing Spaces of Reading and Writing. Retrieved October 4, 2009, from http://www.vista.ubc.ca.

Ong, W. J. (2002). Orality and literacy. London: Routledge.

Postman, N. (1992). Technopoly: The surrender of culture to technology. New York: Vintage Books.

October 6, 2009   2 Comments

Closing the gap or re-wiring our brains? Maybe both!

Ong states that “the electronic transformation of verbal expression has both deepened the commitment of the word to space initiated by writing and intensified by print and has brought consciousness to a new age of secondary orality” (p. 133). Secondary orality is the way in which technology has transformed the medium through which we send and receive information. Ong’s examples include the telephone, radio, television, various kinds of sound tape, and other electronic technology (Ong, p. 132).

Ong discusses Lowry’s argument that the printing press, in creating the ability to mass-produce books, makes people less studious (Ong, p. 79). Lowry continues by stating that “it destroys memory and enfeebles the mind by relieving it of too much work (the pocket-computer complaint once more), downgrading the wise man and wise woman in favor of the pocket compendium. Of course, others saw print as a welcome leveler: everyone becomes a wise man or woman (Lowry 1979, pp. 31-32)” (Ong, p. 79).

The World Wide Web has opened up an entirely new sense of “secondary orality”. Prior to the WWW, texts were primarily written by one author or a small group of authors and were read by a specific audience. Today, with the advent of Web 2.0, the underlying tenets of oral cultures and literate cultures are coming closer together. Even within ETEC540 we are communicating primarily by text, but we are not entering our own private reading world; we are entering a text-based medium through which we can read and respond to each other’s blog posts (such as this one). In addition, we will contribute to a class wiki where the information is dynamic and constantly changing. How, then, is the WWW changing the way we interpret, digest, and process information?

The Internet has brought about a new revolution in the distribution of text. Google’s vision of one library that contains all of the world’s literature demonstrates that “one significant change generates total change” (Postman, p. 18). Nicholas Carr, in his article “Is Google Making Us Stupid?”, and Anthony Grafton, in Paul Kennedy’s podcast “The Great Library 2.0”, make similar arguments about the Internet. As Carr points out, the medium through which we receive information does not merely supply that information: “they also shape the process of thought”.

Carr contends that the mind may now be absorbing and processing information “the way the Net distributes it: in a swiftly moving stream of particles.” That is, information is no longer static; it is dynamic, ever changing, easily accessible and searchable. Carr gives the example that many of his friends and colleagues in academia have noticed that “the more they use the Web, the more they have to fight to stay focused on long pieces of writing.”

Comparably, Google’s attempt to digitize all the text on earth into a new “Alexandria” is certainly an ambitious project, but as Postman states, new technology “is both a burden and a blessing; not either-or, but this-and-that” (Postman, p. 5). Some see the library as liberating, making an unfathomable amount of knowledge available to anyone with an Internet connection. Others, such as Anthony Grafton, argue that reading text off the screen takes away from the romantic adventure of being the first to read a rare book found in the library of a far-off country (Grafton in The Great Library 2.0). Grafton also argues that the ability to search for keywords in electronic texts has created “on-time research”, which has made academics and others work at a rapid pace and fill in parts of their work very late using Internet sources. Carr cites other examples of academics who have lost the ability to read and absorb long texts, but who instead have gained the ability to scan “short passages of text from many sources online.”

Lowry’s argument that, to some, print destroyed memory and debilitated the mind, while to others it created equal access to text, has repeated itself with the advent of the Internet. Carr and Grafton both argue that instantaneous access to huge databases of information such as Google Books may be detracting from our ability to absorb texts. That being said, Postman states, “once a technology is admitted, it plays out its hand; it does what it is designed to do. Our task is to understand what that design is-that is to say, when we admit a new technology to the culture, we must do so with our eyes wide open” (Postman, p. 7). Thus, perhaps there is no point in arguing the negatives. Whether it is Google or a different organization that makes all the printed text in the world available to us, this is the direction technology is taking us, and there will likely be nothing to stop it. The question is, what will our societies and cultures look like after it is all done? It will not be the world plus Library 2.0, but an entirely new world.

References

Ong, W. J. (1982). Orality and literacy: The technologizing of the word. London and New York: Methuen.

Kennedy, P. (Host). (2009, August 24). The great library 2.0 [Podcast]. Ideas. Retrieved from http://www.cbc.ca/ideas/podcast.html

Postman, N. (1992). Technopoly: The surrender of culture to technology. New York: Vintage Books.

Carr, N. (2008, July/August). Is Google making us stupid? The Atlantic. Retrieved September 30, 2009, from http://www.theatlantic.com/doc/200807/google

October 2, 2009   2 Comments

Upon reflection…

See No Evil, Hear No Evil, Speak No Evil

Why this picture, you ask? I guess it’s because I feel this proverb epitomizes the changing nature of text and technology and the fact that it’s not something we can or should ignore.

I want to begin by saying that it’s taking a while for me to get used to using this type of forum. As each day passes, I believe I’m getting a little better at navigating and contributing to our weblog. I must say I was a little skeptical at first, perhaps because I’m more ‘old school’ and more comfortable using older technologies. That being said, I’m always up for a challenge, and this certainly seems to be pushing me to the max. It helps knowing that I’m not the only one who’s struggling on the technology side of things. The bottom line is that I’m learning plenty of new and interesting things.

I was surprised to learn in Module 1 that there were so many different definitions for text and technology and that these terms are used in so many different contexts. I’m used to being surrounded by books, but in more recent years I find myself spending more time working from a computer. I’m not a gamer, but I can see the attraction. I think the Internet is a wonderful thing and I use it for many reasons, everything from communicating with family and friends to finding out information on just about anything. I believe that, as a technology, the Internet is responsible for transforming the way we see and use text. I don’t know if that’s a good thing or a bad thing.

After I listened to O’Donnell’s From Papyrus to Cyberspace and reviewed the discussion postings on text and technology, I couldn’t help but wonder what lies ahead. I think it might be interesting to listen to what people are prophesying today, particularly with respect to where they believe the technology will go, but also how they think it will alter the way we view text. Based on the Papyrus to Cyberspace experience, we shouldn’t be surprised to learn that some of what is said will come to pass. We should also expect the unexpected, as it is almost impossible to predict with certainty just how things will unfold. As with most things in life, what we think will happen and what actually happens are usually two different things.

If I could rewrite my first impressions of text and technology, I suspect the entries would be quite different than they are now. I can’t say I would have changed what I wrote previously. Instead, I would probably have expanded it to include all the other things I hadn’t thought of or hadn’t known until now. Perhaps it would be worthwhile to reflect on these two terms again towards the end of the course.

Bruce

September 25, 2009   No Comments