Multimodalities and Differentiated Learning
“A picture is worth a thousand words.”
While there are many theories on how to meet the needs of diverse learners, one theme is common to them: teach using multimodalities. The strong focus on text in education has made school difficult for a portion of students, those whose strengths and talents lie outside verbal-linguistic and visual-spatial abilities. A decreasing reliance on text, the incorporation of visuals and other multimedia, and the social affordances of the internet can therefore facilitate learning for these students.
Maryanne Wolf (2008) argues that the human brain was not built for reading text. While the brain has adapted its pre-existing capabilities to give us the ability to read, the fact that reading is not an innate ability leaves us open to problems such as dyslexia. Images and even aural media (such as audiobooks) can remove this disadvantage. Students who find reading difficult can find extra support in listening to recorded versions of class novels or other reading material, and students with writing-output difficulties can now write with greater ease using computers or other aids such as AlphaSmart keyboards.
Kress’ (2005) article highlights the difference between traditional text and the multimedia texts we often find on web pages today. While the former presents content in a fixed order determined by the author, Kress notes that the latter’s order is more open and can be determined by the reader. One could argue that readers can still determine the order of a traditional text by skipping chapters; however, chapters often flow into each other, whereas web pages are usually designed as more independent units.
In addition, Kress (2005) notes that texts have only a single entry point (the beginning of the text) and a single point of departure (the end of the text). Websites, on the other hand, are not necessarily entered through their main (home) pages; readers often find themselves at a completely different website immediately after clicking on a link that looks interesting. The fact that there are multiple entry points (Kress, 2005) is critical. A fellow teacher argued that this creates problems because there is no structure to follow: with text, the author’s message is linear and thus has inherent structure and logic, whereas multiple points of entry lead to divergence and to learning that is less organized. In this view, it is better to retain text and rely less on the multimedia approach so that this structure and logic are not lost. The problem is that such structure still only makes sense to a portion of the population. I never realized until I began teaching exactly how much my left-handedness affected my ability to explain things to others; my informal observations suggest that certain people, namely fellow lefties, understand my explanations much more easily.
Kress’ (2005) article discusses a third difference: the presentation of material. Writing has a monopoly over the page and over how content is presented in traditional texts, while web pages often mix images, text, and other multimedia.
It is ironic to note that text offers differentiation too. While words describe and denote characters and events, none of these are ‘in your face’; the images are not served to you, you construct them yourself. I prefer reading because I can imagine the story as it suits me. In this sense, text provides a leeway that images do not.
Multimodalities extend into other literacies as well. Take mapping, for example. Like words and alphabets, maps are symbolic representations of information, written down and drawn to facilitate the memory and sharing of that information. Map reading is an important skill, particularly for navigating unfamiliar cities and roadways. The advent of GPS technology and Google Street View, however, presents a change: there is less need to be able to read a map when GPS navigation offers turn-by-turn guidance and Google Street View provides an exact 360° visual representation of the street.
Yet we must be cautious in our use of multimodal tools; while multimodal learning helps meet the needs of different learners, too much can be distracting and thus detrimental to learning.
References
Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22.
Wolf, M. (2008). Proust and the Squid: The Story and Science of the Reading Brain. New York: Harper Perennial.
December 13, 2009
Formal Commentary #2 by Dilip Verma
Hypermedia Literacy and Constructivist Learning Theory
The changing forms of representation in modern media and the changing relationship between reader and author in hypertext both call for a change in the way literacy is taught. The way hypertext, or better still hypermedia, is experienced and produced requires a different set of skills than those taught in the traditional classroom. The fact that some of the changes called for by the New London Group closely mirror practices suggested by constructivist learning theory adds weight to the impetus for a shift in classroom methodology. In constructivism, learning is student-centered, and meaning is personal, constructed actively by the student within a social context. These are precisely the teaching techniques required to produce students who are literate in hypermedia.
Hypermedia incorporates multiple modes of meaning, involving design decisions in, at the very least, the linguistic, audio, spatial, and visual realms. Education has traditionally focused on linguistic and logical intelligence, but multiliteracy requires designers and viewers to develop multiple intelligences (as defined by Gardner) and multiple grammars for different modes of representation. Though parallel means of representation do exist between grammars (Cope and Kalantzis, 2006, citing Kress, 2000b and Kress and van Leeuwen, 1996), on the whole, different modes of representation present meaning differently. For example, speech, and consequently writing, organizes events temporally, whilst images represent spatially arranged entities (Kress, 2005, p. 13). Therefore, language literacy requires a different grammar from visual literacy. Individual students naturally vary in their mastery of these grammars; one may have an instinctive understanding of spatial representation, while another is more attuned to linguistic meaning. Traditionally, literacy has been taught through a single grammar, whereas constructivism embraces the idea that individual perspectives in a classroom work collectively to create meaning.
The Pedagogy of Multiliteracies (The New London Group, 1996) calls for the active construction of meaning and teaches learners how to be “active designers of meaning” (Cope and Kalantzis, 2006, p. 10). In the traditional classroom, learners are encouraged to repeat existing modes of representation in the production or consumption of media rather than construct new, personalized designs shaped by their own perspective, a perspective formed through cultural mediation in the sense of Vygotsky’s Cultural-Historical Activity Theory. In the “multiliterate” classroom, students become constructors of meaning and are transformed in the process: “Meaning makers remake themselves” (The New London Group, 1996, p. 15). The Pedagogy of Multiliteracies is a student-centered, active process that furthers a constructivist agenda.
In the traditional text, as in the traditional classroom, the author offered a single vision or mode of representation to which the student adapted herself and “followed the strict order established by the writer while needing to interpret the word signifiers, turning them into his or her signs” (Kress, 2005, p. 9). In hypermedia, it is the visitor, not the author, who determines the path (Kress, 2005), and students are “agents” (Cope and Kalantzis, 2006, p. 7) of their own knowledge path. Rather than being passive, hypermedia readers are “meaning makers [that] don’t simply use what they have been given; they are fully makers and remakers of signs and transformers of meaning” (Cope and Kalantzis, 2006, p. 10). The fluid nature of meaning suggests a constructivist epistemology and a shift away from the author or teacher as authority. The New London Group does not see meaning as a concept external to the learner, but rather as internal. Traditional teachers, just like authors, were authorities, establishing a path through their text that the reader or student followed diligently. Digital authors and teachers are no longer mappers of knowledge; they are not sources of knowledge, only sources of information. If the students of today are to be “actors rather than audiences” (Cope and Kalantzis, 2006, p. 8), a student-centered focus for education is called for.
Finally, digital literacy requires a “more holistic approach to pedagogy” (Cope and Kalantzis, 2006, p. 3). The interconnected modes of representation suggest a classroom where the focus is on ways of knowing rather than on the division of knowledge into isolated areas. Modern literacy requires a knowledge of multiple grammars: those of linguistic, visual, audio, gestural, and spatial representation (The New London Group, 1996, p. 17). Moreover, an understanding of how these modes combine synaesthetically is a separate grammar altogether. This last form, the multimodal representation of meaning, is special in that it captures the way the other modes play off each other to create interconnected patterns of meaning (The New London Group, 1996, p. 17). This multimodal grammar is important for digital literacy because children are naturally synaesthetic in the way they combine modes of representation, and “much of our everyday representational experience is intrinsically multimodal” (Cope and Kalantzis, 2006, p. 13). If literacy is to be relevant to learners, then pedagogical activities must be authentic and related to students’ experience in a world of multimodal communication. Hence it is counterproductive and unnatural to compartmentalize modes of meaning as traditional pedagogy has done.
References
Cope, B., & Kalantzis, M. (2006). ‘Multiliteracies’: New Literacies, New Learning. Pedagogies: An International Journal, 4(3), 164-195.
Kress, G. (2005). Gains and losses: New forms of texts, knowledge, and learning. Computers and Composition, 22(1), 5-22.
The New London Group. (1996). A Pedagogy of Multiliteracies: Designing Social Futures. Harvard Educational Review, 66(1), 60-92.
November 14, 2009