The Changing Spaces of Reading and Writing

Multimodalities and Differentiated Learning

“A picture is worth a thousand words.”

While there are many theories on how to meet the needs of diverse learners, one theme is common to them: teach using multiple modalities. The strong focus on text in education has made school difficult for a portion of students, those whose strengths and talents lie outside verbal-linguistic and visual-spatial abilities. For these learners, a decreased reliance on text, the incorporation of visuals and other multimedia, and the social affordances of the internet can all facilitate learning.

Maryanne Wolf (2008) argues that the human brain was not built for reading. While the brain has adapted its pre-existing capabilities to give us the ability to read, the fact that reading is not an innate ability leaves us open to problems such as dyslexia. Images and even aural media (such as audiobooks) can offset this disadvantage: students who find reading difficult can find extra support in listening to recorded versions of class novels or other reading material, and students with writing-output difficulties can now write with greater ease using computers or other aids such as AlphaSmart keyboards.

Kress’ (2005) article highlights the difference between traditional text and the multimedia texts we often find on web pages today. While the former presents its content in a fixed order set by the author, Kress notes that the latter’s order is more open and can be determined by the reader. One could argue that readers can still determine order with traditional text by skipping chapters. However, chapters often flow into each other, whereas web pages are usually designed as more independent units.

In addition, Kress (2005) notes that traditional texts have only a single point of entry (the beginning) and a single point of departure (the end). Websites, on the other hand, are not necessarily entered through their main (home) pages; readers often find themselves at a completely different website immediately after clicking on a link that looks interesting. The fact that there are multiple entry points (Kress) is critical. A fellow teacher argued that this creates problems because there is no structure to follow: with text, the author’s message is linear and thus has inherent structure and logic, whereas multiple points of entry lend themselves to divergence and less organized learning. On this view, it is better to retain text and limit the multimedia approach so that this structure and logic are not lost. The problem is that such structure still only makes sense to a portion of the population. I never realized, until I began teaching, exactly how much my left-handedness affected my ability to explain things to others. Informal observation made it evident that certain people understand me much more easily: other lefties.

Kress’ (2005) article discusses a third difference: the presentation of material. In traditional texts, writing has a monopoly over the page and over how content is presented, while web pages often have a mix of images, text, and other multimedia.

It is ironic to note that text offers differentiation too. While the words describe and denote events and characters, none of these are ‘in your face’; the images are not served to you. Instead, you come up with them yourself. I prefer reading because I can imagine the story as it suits me. In this sense, text provides a leeway that images do not.

Multimodalities extend into other literacies as well. Take mapping, for example. Like words and alphabets, maps are symbolic representations of information, drawn and written down to facilitate the memory and sharing of that information. Map reading is an important skill to learn, particularly to help us navigate unfamiliar cities and roadways. However, the advent of GPS technology and Google Street View presents a change: there is less and less need to be able to read a map when a GPS device offers turn-by-turn guidance and Google Street View gives an exact 360° visual representation of the street.

Yet we must be cautious in our use of multimodal tools; while multimodal learning is helpful as a way to meet the needs of different learners, too much could be distracting and thus be detrimental to learning.

References

Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22.

Wolf, M. (2008). Proust and the Squid: The Story and Science of the Reading Brain. New York: Harper Perennial.

December 13, 2009

Hyperculture

The last two chapters of Bolter (2001) were an excellent choice to close our readings.  As an aside, I would like to say how much I enjoyed the sequencing and intertextuality of the readings in this course.  Most courses I have taken offered carefully chosen readings around the key ideas and topics, but none linked them so successfully and recursively as was done here.  It was helpful to my own thinking and enjoyable to read Bolter on Ong, Kress cited in Dobson and Willinsky, and so on.  I could cite such pairings all the way back to the first readings.  It’s one of those subtle displays of good pedagogy that makes me wonder if I could do a better job selecting and sequencing the readings in my classes.

It was inevitable.

To return to Bolter, however, the argument that the technology used for writing changes our relationship to it (p. 189) seems almost self-evident.  I know that my approach to writing changes when the tool is a pen versus a word processor.  And it is largely for this reason that I avoid text messages.  I worry how typing on a tiny keyboard with my thumbs to any great extent would affect my relationship with writing (which is already sufficiently adversarial).  The discussion of ego and the nature of the mind itself as a writing space was also interesting.  I’m not sure that I can follow where Bolter leads when he suggests that if the book was a good means of making known the workings of the Cartesian mind, hypertext remediates the mind (p. 197).  That, it seems to me, accords too little to the ego and too much to networked communications—at least as they currently exist.

Bolter is certainly correct, however, when he asserts that electronic technologies are redefining our cultural relationships (p. 203).  This is especially true for my students.  Writing in 2001, Bolter preceded Facebook by at least three years, but he could have been doing Jane Goodall-style field research in my school (watching students use laptops, netbooks and handheld devices to wirelessly access Facebook) when he suggests that we are rewriting “our culture into a vast hypertext” (p. 206).  My own efforts at navigating the online reading and writing spaces of the course were, I fear, somewhat hampered by having lived most of my life in “the late age of print.”

I didn’t post a lot of comments, although I attempted to chime in on the Vista discussions.  What I realized late in the game was that I should have been more active in posting comments to the Weblog.  The strange thing is that I enjoyed reading the weblog posts—and especially enjoyed reading the comments people made about my weblog postings.  For some reason, however, that didn’t translate to reciprocating with comments in that space.  Perhaps it’s because I’m not a blogger or much of a blog reader outside of my MEd classes.  I still prefer more traditional (read: professional, authoritative) sources for news and opinion.  Though, truth be told, I probably read as much news and opinion online as in print. It doesn’t hurt that the New York Times makes most of its content available online for free and that I have EBSCO and Proquest access at work.  It might also be because the other online courses I’ve taken in the past two years tended to use the Vista/Blackboard discussion space as the discussion area, so I think of that as the “appropriate” space for that type of writing.  It’s fascinating to analyze one’s own reading and writing behaviours and assumptions in light of what we’ve read and discussed.  It also takes me again to my own practice as a teacher.  When I next use wikis, for example, with my students, I will try to devise a way (survey, discussion tab in the wiki, etc.)  to find out how they believe their previous online reading and writing experiences influence their interactions and contributions.

References

Bolter, J.D. (2001). Writing Space: Computers, Hypertext, and the Remediation of Print. Mahwah, NJ: Lawrence Erlbaum.

December 2, 2009

It’s Up To You

For my course project, I decided to create an interactive fictional story for students learning English as a foreign language.  The target audience is a small to medium class of upper-intermediate students between the ages of 15 and 25 who have recently learned the difference between direct and reported speech.  Reading material at an appropriate level for non-native English students is hard to come by, especially in a non-English-speaking country, and is greatly appreciated when available.  As indicated in the directions to be read before students start their reading journey, the activity can be completed either individually or as a group.  When there is a competitive element to activities such as these, students are often much more motivated to participate as a group.  The activity could potentially be completed remotely but is best suited to a face-to-face computer lab scenario.

This project is a product of my exploration of, and experimentation with, mixed-media hypertext as a teaching tool.  The focus should therefore be much more on the medium than on the actual content.  The storyline is of course fictional and is relatively inconsequential, other than providing some authentic dialogue (between the reader and their cellmate) and vocabulary appropriate to the students’ level.  The story is somewhat shorter than I originally expected; as I was writing it, I realized that it would be better to start with a simple storyline, both for students and for a writer who are new to this genre and the tools used to create it.  “An interactive fiction is an extension of classical narrative media as it supposes a direct implication of spectators during the story evolution. Writing such a story is much more complex than a classical one, and tools at the disposal of writers remain very limited compared to the evolution of technology” (Donikian & Portugal, 2004).  I had an idea of how the story would go before I started writing, but the direction changed in the process, and I learned that creating a graphic storyboard is very helpful for organizing the different directions the story can take readers.  There are multiple endings, yet students are redirected to try the story again until they reach “the end.”
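The branching structure described here, with multiple paths, redirects after early endings, and a single true conclusion, can be sketched as a small graph of linked passages. The node names and text below are hypothetical stand-ins, not excerpts from the actual story:

```python
# A minimal sketch of a branching storyline as a graph of linked passages.
# Each node maps to (passage text, {reader choice: next node}); the names
# and passages are hypothetical, not taken from the actual story.
story = {
    "cell": ("You wake up in a cell. Your cellmate whispers, 'They caught you too.'",
             {"talk": "dialogue", "escape": "tunnel"}),
    "dialogue": ("He said that the guards change at midnight.",
                 {"wait": "the end", "escape": "tunnel"}),
    "tunnel": ("The tunnel collapses behind you. This path ends early.",
               {"try again": "cell"}),  # an early ending redirects the reader to restart
    "the end": ("You slip out at midnight. The end.", {}),
}

def follow(story, start, choices):
    """Follow a sequence of reader choices and return the nodes visited."""
    node = start
    path = [node]
    for choice in choices:
        node = story[node][1][choice]
        path.append(node)
    return path

# One reader's route: a failed escape, a restart, then the true ending.
path = follow(story, "cell", ["escape", "try again", "talk", "wait"])
```

A graphic storyboard of the kind mentioned above is essentially a drawing of this same graph, which is why it is so helpful for keeping the branches organized.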

Bush, Nelson, and Bolter were the three main authors we read in ETEC540 in order to gain an understanding of the origins, complexity, and implications of hypertext.  Bush and Nelson were primarily concerned with hypertext as a natural means to disseminate nonfictional information, while Bolter’s chapter on fictional hypertext is by far the longest chapter in Writing Space: Computers, Hypertext, and the Remediation of Print.  In that chapter, he presents many literary techniques that use hypertext to move readers between elements such as time, place, character, voice, plot, and perspective.  Although these techniques are intriguing, their complexity is not appropriate for my target audience.  Bolter’s analysis goes further by pointing out that hypertext, rather than being nonlinear, is actually multilinear: all writing is linear, but hypertext can branch in many different directions.  In his chapter titled Hypertext and the Remediation of Print, he writes, “The principal task of authors of hypertextual fiction on the Web or in stand-alone form is to use links to define relationships among textual elements, and these links constitute the rhetoric of the hypertext” (Bolter, 2001, p. 29).
 
Unlike a traditional storyline, hypertextual storytelling gives students freedom over how they read.  This (perceived) control is much more characteristic of the way we interact with digital information today and therefore should be incorporated into classroom activities regularly.  Putting the student in the proverbial driver’s seat is indicative of a constructivist teaching approach, which is especially effective when employing ICT in the classroom.  However, as Donikian and Portugal observe, “Whatever degree of interactivity, freedom, and non linearity might be provided, the role that the interactor is assigned to play always has to remain inside the boundaries thus defined by the author, and which convey the essence of the work itself” (2004).  For that reason, I have suggested that students actually modify and customize the story after they have read it.  They could do that individually or in pairs, in class or for homework.  More often than not, the more control students are given, the more motivated they are to participate and learn.  For their final project, they could create a complete story with multiple endings.

There are many possibilities when writing fiction with hypertext, and I have hardly scratched the surface in this first exploration of the genre.  This project has given me a solid base from which to create longer and more complex pieces for wider teaching contexts.  I hope you enjoy it and that it inspires you to experiment with this exciting medium as well.  Click here to access the story or copy and paste this URL: http://wiki.ubc.ca/Course:ETEC540/2009WT1/Assignments/MajorProject/ItsUpToYou

References:

Bolter, J.D. (2001). Writing space: Computers, hypertext, and the remediation of print. Mahwah, NJ: Lawrence Erlbaum Associates, pp. 27-46, 121-160.

Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101-108.

Donikian, S., & Portugal, J. (2004). Writing interactive fiction scenarii with DraMachina. Lecture Notes in Computer Science, pp. 101-112.

Nelson, T. (1999). Xanalogical structure, needed now more than ever: Parallel documents, deep links to content, deep versioning and deep re-use.

November 29, 2009

Major Project – E-Type: The Visual Language of Typography

Typography shapes language and makes the written word ‘visible’. With this in mind, I felt it was essential to be cognizant of how my major project would be presented in its final format. In support of my research on type in digital spaces, I created an ‘electronic book’ of sorts, using Adobe InDesign CS4 and Adobe Acrobat 9. Essentially, I took a traditionally written essay and then modified and designed it to fit a digital space. The end result was supposed to be an interactive .swf file, but I ran into too many technical difficulties, so what resulted was an interactive PDF book.

The e-book was designed to have a sequential structure, supported by a table of contents, headings and page numbering – much like that of a traditional printed book. However, the e-book extends beyond the boundaries of the ‘page’ as the user, through hyperlinks, can explore multiple and diverse worlds of information located online. Bolter (2001) uses the term remediation to describe how new technologies refashion the old. Ultimately, this project pays homage to the printed book, but maintains its own unique characteristics specific to the electronic world.

To view the book, click on the PDF Book link below. The file should open in a web browser. If you need Acrobat Reader to view the file and do not have the latest version, you can download it here: http://get.adobe.com/reader/

You can navigate through the document using the arrows in the top navigation bar of the document window. Alternatively you can jump to specific content by using the associated Bookmarks (located in left-hand navigation bar) or by clicking on the chapter links in the Table of Contents. As you navigate through the pages you will be prompted to visit websites as well as complete short activities. An accessible Word version of the essay is also available below.

References

Bolter, J.D. (2001). Writing space: Computers, hypertext, and the remediation of print. Mahwah, NJ: Lawrence Erlbaum Associates.

To view my project, click on the following links:

E-Type: The Visual Language of Typography (PDF Book)

E-Type: The Visual Language of Typography (Word Version)

November 29, 2009

Hypermedia and Cybernetics: A Phenomenological Study

As with all other technologies, hypermedia technologies are inseparable from what is referred to in phenomenology as “lifeworlds”. The concept of a lifeworld is in part a development of an analysis of existence put forth by Martin Heidegger. Heidegger explains that our everyday experience is one in which we are concerned with the future and in which we encounter objects as parts of an interconnected complex of equipment related to our projects (Heidegger, 1962, p. 91-122). As such, we invariably encounter specific technologies only within a complex of equipment. Giving the example of a bridge, Heidegger notes that, “It does not just connect banks that are already there. The banks emerge as banks only as the bridge crosses the stream.” (Heidegger, 1993, p. 354). As a consequence of this connection between technologies and lifeworlds, new technologies bring about ecological changes to the lifeworlds, language, and cultural practices with which they are connected (Postman, 1993, p. 18). Hypermedia technologies are no exception.

To examine the kinds of changes brought about by hypermedia technologies it is important to examine the history not only of those technologies themselves but also of the lifeworlds in which they developed. Such a study will reveal that the development of hypermedia technologies involved an unlikely confluence of two subcultures. One of these subcultures belonged to the United States military-industrial-academic complex during World War II and the Cold War, and the other was part of the American counterculture movement of the 1960s.

Many developments in hypermedia can trace their origins back to the work of Norbert Wiener. During World War II, Wiener conducted research for the US military concerning how to aim anti-aircraft guns. The problem was that modern planes moved so fast that it was necessary for anti-aircraft gunners to aim their guns not at where the plane was when they fired the gun but where it would be some time after they fired. Where they needed to aim depended on the speed and course of the plane. In the course of his research into this problem, Wiener decided to treat the gunners and the gun as a single system. This led to his development of a multidisciplinary approach that he called “cybernetics”, which studied self-regulating systems and used the operations of computers as a model for these systems (Turner, 2006, p. 20-21).

This approach was first applied to the development of hypermedia in an article written by one of Norbert Wiener’s former colleagues, Vannevar Bush.  Bush had been responsible for instigating and running the National Defense Research Committee (which later became the Office of Scientific Research and Development), an organization responsible for government funding of military research by private contractors.  Following his experiences in military research, Bush wrote an article in the Atlantic Monthly addressing the question of how scientists would be able to cope with growing specialization and how they would collate an overwhelming amount of research (Bush, 1945).  Bush imagined a device, which he later called the “Memex”, in which information such as books, records, and communications would be stored on microfilm.  This information would be capable of being projected on screens, and the person using the Memex would be able to create a complex system of “trails” connecting different parts of the stored information.  By connecting documents into a non-hierarchical system of information, the Memex would to some extent embody the principles of cybernetics first imagined by Wiener.

Inspired by Bush’s idea of the Memex, researcher Douglas Engelbart believed that such a device could be used to augment the use of “symbolic structures” and thereby accurately represent and manipulate “conceptual structures” (Engelbart, 1962).  This led him and his team at the Augmentation Research Center (ARC) to develop the “oN-Line System” (NLS), an ancestor of the personal computer that included a screen, QWERTY keyboard, and mouse.  With this system, users could manipulate text and connect elements of text with hyperlinks.  While Engelbart envisioned the system as augmenting the intellect of the individual, he conceived of the individual as part of a system, which he referred to as an H-LAM/T system (a trained human with language, artefacts, and methodology) (ibid., p. 11).  Drawing upon the ideas of cybernetics, Engelbart saw the NLS itself as a self-regulatory system in which engineers collaborated and, as a consequence, improved the system, a process he called “bootstrapping” (Turner, 2006, p. 108).

The military-industrial-academic complex’s cybernetic research culture also led to the idea of an interconnected network of computers, a move that would be key in the development of the internet and hypermedia.  First formulated by J.C.R. Licklider, this idea was later executed by Bob Taylor with the creation of ARPANET (named after the defence department’s Advanced Research Projects Agency).  As an extension of systems such as the NLS, such a network was a self-regulating system for collaboration, also inspired by the study of cybernetics.

The late 1960s to the early 1980s saw hypermedia’s development transformed from a project within the US military-industrial-academic complex to a vision animating the American counterculture movement. This may seem remarkable for several reasons. Movements related to the budding counterculture in the early 1960s generally adhered to a view that developments in technology, particularly in computer technology, had a dehumanizing effect and threatened the authentic life of the individual. Such movements were also hostile to the US military-industrial-academic complex that had developed computer technologies, generally opposing American foreign policy and especially American military involvement in Vietnam. Computer technologies were seen as part of the power structure of this complex and were again seen as part of an oppressive dehumanizing force (Turner, 2006, p. 28-29).

This negative view of computer technologies more or less continued to hold in the New Left movements largely centred on the East Coast of the United States. However, a contrasting view began to grow in the counterculture movement developing primarily in the West Coast. Unlike the New Left movement, the counterculture became disaffected with traditional methods of social change, such as staging protests and organizing unions. It was thought that these methods still belonged to the traditional systems of power and, if anything, compounded the problems caused by those systems. To effect real change, it was believed, a shift in consciousness was necessary (Turner, 2006, p. 35-36).

Rather than seeing technologies as necessarily dehumanizing, some in the counterculture took the view that technology would be part of the means by which people liberated themselves from stultifying traditions. One major influence on this view was Marshall McLuhan, who argued that electronic media would become an extension of the human nervous system and would result in a new form of tribal social organization that he called the “global village” (McLuhan, 1962). Another influence, perhaps even stronger, was Buckminster Fuller, who took the cybernetic view of the world as an information system and coupled it with the belief that technology could be used by designers to live a life of authentic self-sufficiency (Turner, 2006, p. 55-58).

In the late 1960s, many in the counterculture movement sought to effect the change in consciousness and social organization that they wished to see by forming communes (Turner, 2006, p. 32). These communes would embody the view that it was not through political protest but through the expansion of consciousness and the use of technologies (such as Buckminster Fuller’s geodesic domes) that a true revolution would be brought about. To supply members of these communes and other wayfarers in the counterculture with the tools they needed to make these changes, Stewart Brand developed the Whole Earth Catalogue (WEC). The WEC provided lists of books, mechanical devices, and outdoor gear that were available through mail order for low prices. Subscribers were also encouraged to provide information on other items that would be listed in subsequent editions. The WEC was not a commercial catalogue in that it wasn’t possible to order items from the catalogue itself. It was rather a publication that listed various sources of information and technology from a variety of contributors. As Fred Turner argues (2006, p. 72-73), it was seen as a forum by means of which people from various different communities could collaborate.

Like many others in the counterculture movement, Stewart Brand immersed himself in cybernetics literature. Inspired by the connection he saw between cybernetics and the philosophy of Buckminster Fuller, Brand used the WEC to broker connections between ARC and the then flourishing counterculture (Turner, 2006,  p. 109-10). In 1985, Stewart Brand and former commune member Larry Brilliant took the further step of uniting the two cultures and placed the WEC online in one of the first virtual communities, the Whole Earth ‘Lectronic Link or “WELL”. The WELL included bulletin board forums, email, and web pages and grew from a source of tools for counterculture communes into a forum for discussion and collaboration of any kind. The design of the WELL was based on communal principles and cybernetic theory. It was intended to be a self-regulating, non-hierarchical system for collaboration.  As Turner notes (2005), “Like the Catalog, the WELL became a forum within which geographically dispersed individuals could build a sense of nonhierarchical, collaborative community around their interactions” (p. 491).

This confluence of military-industrial-academic complex technologies and the countercultural communities who put those technologies to use would form the roots of other hypermedia technologies. The ferment of the two cultures in Silicon Valley would result in the further development of the internet—the early dependence on text being supplanted by the use of text, image, and sound, transforming hypertext into full hypermedia. The idea of a self-regulating, non-hierarchical network would moreover result in the creation of the collaborative, social-networking technologies commonly denoted as “Web 2.0”.

This brief survey of the history of hypermedia technologies has shown that the lifeworld in which these technologies developed was one first imagined in the field of cybernetics. It is a lifeworld characterized by non-hierarchical, self-regulating systems and by the project of collaborating and sharing information. First of all, it is characterized by non-hierarchical organizations of individuals. Even though these technologies first developed within the hierarchical system of the military-industrial-academic complex, they grew within a subculture of collaboration among scientists and engineers (Turner, 2006, p. 18). Rather than being strictly regimented, prominent figures in this subculture, including Wiener, Bush, and Engelbart, voiced concern over the possible authoritarian abuse of these technologies (ibid., p. 23-24).

The lifeworld associated with hypermedia is also characterized by the non-hierarchical dissemination of information. Rather than belonging to traditional institutions consisting of authorities who distribute information to others directly, these technologies involve the spread of information across networks. Such information is modified by individuals within the networks through the use of hyperlinks and collaborative software such as wikis.

The structure of hypermedia itself is also arguably non-hierarchical (Bolter, 2001, p. 27-46). Hypertext, and by extension hypermedia, facilitates an organization of information that admits of many different readings. That is, it is possible for the reader to navigate links and follow what Bush called different “trails” of connected information. Printed text generally restricts reading to one trail or at least very few trails, and lends itself to the organization of information in a hierarchical pattern (volumes divided into books, which are divided into chapters, which are divided into paragraphs, et cetera).

It is clear that the advent of hypermedia has been accompanied by changes in hierarchical organizations in lifeworlds and practices. One obvious example would be the damage that has been sustained by newspapers and the music industry. The phenomenological view of technologies as connected to lifeworlds and practices would provide a more sophisticated view of this change than the technological determinist view that hypermedia itself has brought about changes in society and the instrumentalist view that the technologies are value neutral and that these changes have been brought about by choice alone (Chandler, 2002). It would rather suggest that hypermedia is connected to practices that largely preclude both the hierarchical dissemination of information and the institutions that are involved in such dissemination. As such, they cannot but threaten institutions such as the music industry and newspapers. As Postman (1993) observes, “When an old technology is assaulted by a new one, institutions are threatened” (p. 18).

Critics of hypermedia technologies, such as Andrew Keen (2007), have generally focussed on this threat to institutions, arguing that it undermines traditions of rational inquiry and the production of quality media. To some degree such criticisms are an extension of a traditional critique of modernity made by authors such as Allan Bloom (1987) and Christopher Lasch (1979). This suggests that such criticisms are rooted in more perennial issues concerning the place of tradition, culture, and authority in society, and it is not likely that these issues will subside. However, it is also unlikely that there will be a return to the state of affairs before the inception of hypermedia. Even the most strident critics of “Web 2.0” technologies embrace certain aspects of it.

The lifeworld of hypermedia does not necessarily oppose traditional sources of expertise to the extent that the descendants of the fiercely anti-authoritarian counterculture may suggest, though. Advocates of Web 2.0 technologies often appeal to the “wisdom of crowds”, alluding to the work of James Surowiecki (2005). Surowiecki offers the view that, under certain conditions, the aggregation of the choices of independent individuals results in a better decision than one made by a single expert. He is mainly concerned with economic decisions, offering his theory as a defence of free markets. Yet this theory also suggests a general epistemology, one which would contend that the aggregation of the beliefs of many independent individuals will generally be closer to the truth than the view of a single expert. In this sense, it is an epistemology modelled on the cybernetic view of self-regulating systems. If it is correct, knowledge would be the result of a cybernetic network of individuals rather than of a hierarchical system in which knowledge is created by experts and filtered down to others.
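Surowiecki’s condition that the individual estimates be independent and, on average, unbiased can be illustrated with a toy simulation; the numbers here are arbitrary, chosen only to make the aggregation effect visible:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

truth = 100.0
# 1,000 independent guesses, each unbiased on average but individually noisy
guesses = [random.gauss(truth, 20.0) for _ in range(1000)]

# The "crowd's" answer is the simple average of all guesses
crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - truth)

# Count how many individual guessers the aggregate beats
beaten = sum(abs(g - truth) > crowd_error for g in guesses)
```

Averaging cancels the independent errors, so the crowd’s estimate typically lands closer to the truth than the vast majority of its members; the effect disappears when the errors are correlated, which is why the independence condition carries the weight of the theory.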

The main problem with the “wisdom of crowds” epistemology as it stands is that it does not explain the development of knowledge in the sciences and the humanities. Knowledge of this kind doubtless requires collaboration, but in any domain of inquiry this collaboration still requires the individual mastery of methodologies and bodies of knowledge. It is not the result of mere negotiation among people with radically disparate perspectives. These methodologies and bodies of knowledge may change, of course, but a study of the history of sciences and humanities shows that this generally does not occur through the efforts of those who are generally ignorant of those methodologies and bodies of knowledge sharing their opinions and arriving at a consensus.

As a rule, individuals do not take the position of global skeptics, doubting everything that is not self-evident or that does not follow necessarily from what is self-evident. Even if people would like to think that they are skeptics of this sort, to offer reasons for being skeptical about any belief they will need to draw upon a host of other beliefs that they accept as true, and to do so they will tend to rely on sources of information that they consider authoritative (Wittgenstein, 1969). Examples of the “wisdom of crowds” will also be ones in which individuals each draw upon what they consider to be established knowledge, or at least established methods for obtaining knowledge. Consequently, the wisdom of crowds is parasitic upon other forms of wisdom.

Hypermedia technologies and the practices and lifeworld to which they belong do not necessarily commit us to the crude epistemology based on the “wisdom of crowds”. The culture of collaboration among scientists that first characterized the development of these technologies did not preclude the importance of individual expertise. Nor did it oppose all notions of hierarchy. For example, Engelbart (1962) imagined the H-LAM/T system as one in which there are hierarchies of processes, with higher executive processes governing lower ones.

The lifeworlds and practices associated with hypermedia will evidently continue to pose a challenge to traditional sources of knowledge. Educational institutions have remained somewhat unaffected by the hardships faced by the music industry and newspapers due to their connection with other institutions and practices such as accreditation. If this phenomenological study is correct, however, it is difficult to believe that they will remain unaffected as these technologies take deeper root in our lifeworld and our cultural practices. There will continue to be a need for expertise, though, and the challenge will be to develop methods for recognizing expertise, both in the sense of providing standards for accrediting experts and in the sense of providing remuneration for expertise. As this concerns the structure of lifeworlds and practices themselves, it will require a further examination of those lifeworlds and practices and an investigation of ideas and values surrounding the nature of authority and of expertise.

References

Bloom, A. (1987). The closing of the American mind. New York: Simon & Schuster.

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print (2nd ed.). New Jersey: Lawrence Erlbaum Associates.

Bush, V. (1945). As we may think. Atlantic Monthly. Retrieved from http://www.theatlantic.com/doc/194507/bush

Chandler, D. (2002). Technological or media determinism. Retrieved from http://www.aber.ac.uk/media/Documents/tecdet/tecdet.html

Engelbart, D. (1962). Augmenting human intellect: A conceptual framework. Menlo Park: Stanford Research Institute.

Heidegger, M. (1993). Basic writings. (D.F. Krell, Ed.). San Francisco: Harper Collins.

—–. (1962). Being and time. (J. Macquarrie & E. Robinson, Trans.). San Francisco: Harper Collins.

Keen, A. (2007). The cult of the amateur: How today’s internet is killing our culture. New York: Doubleday.

Lasch, C. (1979). The culture of narcissism: American life in an age of diminishing expectations. New York: W.W. Norton & Company.

McLuhan, M. (1962). The Gutenberg galaxy. Toronto: University of Toronto Press.

Postman, N. (1993). Technopoly: The surrender of culture to technology. New York: Vintage.

Surowiecki, J. (2005). The wisdom of crowds. Toronto: Anchor.

Turner, F. (2006). From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. Chicago: University of Chicago Press.

—–. (2005). Where the counterculture met the new economy: The WELL and the origins of virtual community. Technology and Culture, 46(3), 485–512.

Wittgenstein, L. (1969). On certainty. New York: Harper.

November 29, 2009

Final Project – Graphic Novels, Improving Literacy

Before I started this course, I had noticed the increased availability of graphic novels in our school library. My teenage son is a fan, preferring Manga to the North American style of comic book. When our school recently began school-wide silent reading to promote literacy, student interest in and requests for graphic novels increased further. There seemed to be a clear link between this form of literature and the need to improve literacy rates as part of our province’s Student Success initiatives.

In the past weeks, I researched the topic of graphic novels to explore the link between this alternate form of literature and improved literacy. This website is meant to be an informative document. My hope is to link it to the school website so that parents can find documented answers to their questions about how to get reluctant readers engaged in regular reading.

A website is unlike a traditional essay, and I found it difficult to bring the document to a conclusion. You will find both internal and external links. Typical of websites, readers can choose the path to follow; it was never meant to be linear. Ultimately, I hope this site encourages readers to continue their own journey in learning about graphic novels.

November 28, 2009

The Age of Real-Time

I had the opportunity to attend the Annual Conference on Distance Teaching and Learning in Madison, Wisconsin this past August. The last keynote speaker, Teemu Arina, discussed how culture and education are changing with emerging technologies. His presentation illustrated how we are moving from linear and sequential environments to those that are nonlinear and serendipitous. Topics of time, space, and social media tie into Teemu’s presentation. The video of the presentation is about 45 minutes long, but its themes tie nicely into our course and into many other courses within the MET program.

In the Age of Real-Time: The Complex, Social, and Serendipitous Learning Offered via the Web

November 24, 2009

Observation

I am attending an IT conference put on by my school board today.  So far, 2 of 3 sessions have been useful.  One session, however, was disappointing in that it was not what we’d hoped to learn about.  The general gist of the presentation was about students being involved in creating their own assessment.

I am sitting here reflecting on what exactly I am learning in the current session, realizing that we are all on a learning journey.  As adults in this professional learning workshop, we’ve been able to choose what to explore.  So we hope to maximize our learning as a result of choosing sessions that are part of our learning path.

When I relate that to students choosing their own assessment, or at least being involved in it, I wonder whether that is possible, because they do not have the ability to choose their learning path as we do. They might choose certain elective courses and even what stream they want to follow, but those choices are so limited.

When you consider that most digital natives are used to choosing their information path because of the nature of the internet (hypertext links and all) and the speed at which they access all the information they need/want, is it any wonder they can’t sit still without being connected to some electronic device or feel they can decide the outcome of everything they put effort into?  I think it explains why my students seem to think they can negotiate every assignment I give them.

November 22, 2009

Revolutionizing information organization and academic authority

Commentary #2 – In response to Michael Wesch’s video, “Information R/evolution” (Module 4)

Appropriately “hyper” for the purposes of framing hypertext and the changing technologies of writing and archiving information, Michael Wesch’s Information R/evolution is a dynamic interplay of text technologies that incorporates both the hypertext discussion of Jay David Bolter and the organization discussion of Walter Ong. Wesch speaks to the evolution of the pre-typographic notion that information is “a thing… housed in a logical place… where it can be found” and how we have now moved towards a place where technology affords the ability for anyone to create, critique, organize and understand. Information R/evolution touches upon two interesting developments supported by the hypertext environment of our technological world: the nature by which information is stored and the nature of authority.

Information R/evolution starts out with images of the typewriter, standard filing cabinet and card catalogue. This is intentional, as each of these three objects was, for many years, a definitive symbol of the way by which information was recorded, stored and retrieved. In unpacking the information evolution, these images quickly transform into those of word processing programs, blogs and search engines. Wesch suggests that it does not take an expert to attend to organizational tasks; rather, we are all responsible for the tagging, bookmarking, categorizing and otherwise organizing of information. The organizational affordances of technology are illustrated in the video and echo Walter Ong’s discussion of categories and lists and how they create meaning out of space, impressing through “tidiness and inevitability” (Ong, 2002, p.120). Wesch illustrates this revolution as a true transcendence of place with regards to the means by which information can be rethought “beyond material constraints”. The ability to store information simultaneously in multiple places is crucial not only to the way information is stored but also to the speed at which information is retrieved. Bolter (2001) further discusses this issue in his study of hypertext and cites hyperlinking as the process by which the reader can “continue indefinitely…through the textual space…throughout the Internet” (p.27). An interesting facet of Wesch’s video is that he does not rely on lengthy text to illustrate his point; rather, he demonstrates visually the remediation of print by modeling the organizational affordances of hypertext on a single computer screen, devoid of the paper trail that previously defined information technology.

The nature of authority is touched upon in Information R/evolution, and it is suggested that modern typographic culture has loosened the constraints of previously established information authority (academics, librarians etc.). Information R/evolution raises the issue of how people, either for personal or academic purposes, come to find the information they are seeking and what format they are ultimately presented with. The simple statement “together, we create more information than experts” is a powerful truth that highlights not only the responsibility of those posting on the web to categorize their information, but also the fact that authorship is seemingly more open. The boundaries of expert and non-expert were more defined in a chirographic and early typographic culture, whereby there was an entire process surrounding how one became an author and therefore an authority. Wesch encourages the viewer to think about authority in the context of this information revolution. While scholarly access points exist through university libraries, Google Scholar etc., the mainstream user relies on search engines such as Yahoo and Google in order to find definitive sources of information. The breadth of information allows the viewer to view not only authoritative sites (National Geographic, BBC, etc.) but also collaboratively edited sites (Wikipedia) and personal sites (parenting blogs, personal interest sites, etc.), thereby creating a multidimensional approach to any given topic.

However, Wesch indirectly highlights the flip side, which is the uncertainty of the information found. Access itself may be much easier, since one can search library catalogues and search engines from a personal computer rather than searching in person through an onerous card catalogue; however, the expanse of the web does lessen the power of established authority. Wesch cites Wikipedia as an example by stating “Wikipedia has 15 times as many words as the next largest encyclopedia, Encyclopedia Britannica”. While this is a seemingly simple statement, it has much larger ramifications for the growing debate about authority on the web, as Wikipedia is a collaboratively created encyclopedia that can be openly edited. More powerful than this statement is the fact that Wesch uses a live screen clip showing himself editing Wikipedia in “real time” and then adding one more person to the tally of the 282,874 contributors that appeared at the time, illustrating the very fluid and “living” nature of information on the Internet. While the video is effective in drawing forth questions about authority and research, I would be interested to see Wesch explore more closely, in a similarly styled video, how one conducts research.

Bolter speaks of the “breakout of the visual”, and in that spirit, Wesch shows that the dominating visual message of Information R/evolution can be just as powerful as written prose exploring the same topic. Wesch’s visual inspires reflective thought not only about the evolution of information but also about the current revolution taking place in information organization, the conduct of research and the nature of authority.

References:

Bolter, Jay David. (2001). Writing space: Computers, hypertext, and the remediation of print [2nd edition]. Mahwah, NJ: Lawrence Erlbaum.

Ong, Walter. (2002). Orality and literacy: The technologizing of the word. London: Methuen.

Wesch, Michael. (2007). Information R/evolution. Retrieved from http://www.youtube.com/watch?v=-4CV05HyAbM

November 7, 2009

Xanadu and Ted Nelson

I found a great site by Ted Nelson called “Ted Nelson’s Computer Paradigm Expressed as One-Liners”. It examines the cultural ramifications of the web and hypertext with a bit of humour. You can visit it here: http://www.xanadu.com.au/ted/TN/WRITINGS/TCOMPARADIGM/tedCompOneLiners.html

A gem under the section titled “Two Cheers for the World Wide Web”: “The Web is the minimal concession to hypertext that a sequence-and-hierarchy chauvinist could possibly make” (Nelson, 1999).

Reference

Nelson, T. (1999). Ted Nelson’s computer paradigm expressed as one-liners. Retrieved October 29, 2009, from http://www.xanadu.com.au/ted/TN/WRITINGS/TCOMPARADIGM/tedCompOneLiners.html

October 30, 2009