Final Project – Swipe to Read: iPads and the Changing Landscape of Early Literacy

Discover how iPads have changed the landscape of early literacy


Linking Assignment #6

Summary 

In Tarana’s Task 11 post, she reflected on the “Detain/Release” simulation and her reactions to it. She noted how little information was provided when deciding whether to detain or release a defendant. The simulation only presented three risk indicators: the likelihood of reoffending before trial, the probability that the defendant would appear in court for their hearing, and the level of potential violence they might pose. There was no criminal history, personal background, socioeconomic information, or even details about the alleged offense.

Tarana described the decision-making process as almost mechanical. She felt compelled to choose “detain” whenever the simulation indicated a high risk of reoffending, even though she questioned the fairness and credibility of the data behind those scores. Her post raised important ethical concerns about real-world, AI-supported legal decisions, emphasizing the need for transparency, context, and critical thinking when incorporating AI into systems that profoundly affect people’s lives.

Why I Chose This Work

I chose Tarana’s post because I completed the other option for this task and wanted to hear from someone who experienced the Detain/Release simulation. I was especially curious about how people felt when engaging with an algorithm that influences decisions affecting human lives. I was not surprised by Tarana’s reaction because her concerns about fairness and transparency align with broader conversations surrounding algorithmic decision making.

Reflection

Reflecting on Tarana’s post, I found it troubling that real-world systems sometimes rely on algorithms to decide whether a person should be detained or released. It feels like an easy way out, a way to save time without truly listening to or understanding the circumstances that surround each individual. I found myself agreeing with everything Tarana wrote.

Tarana used UBC Blogs as her platform, as I did, although her layout looks different. Her posts are displayed across the top menu, while mine appear along the side. Aside from that, her site is easy to navigate.

Thinking about Tarana’s post reminded me of Crawford’s book Atlas of AI (2021), which explains how mugshots are used to test facial recognition algorithms, often without the consent of defendants or their families. Crawford (2021) points out that many of these systems are developed and deployed without ever interacting with the people whose lives they affect, and this connects to what Tarana experienced in the simulation. Algorithms were determining whether someone should be detained or released without any opportunity to understand their circumstances or speak to them directly. These systems also carry a high risk of error, are disproportionately inaccurate for certain racialized groups, and yet many police jurisdictions continue to rely on them.

I think many of us assume that AI simply makes life easier without fully understanding the systems behind it. We often do not understand how the algorithms work, who they benefit, who they harm, how bias is embedded in the data, or even the environmental cost of developing and maintaining AI systems. Tarana’s post reminded me that we need a deeper awareness of these issues and a more critical approach to using AI in any context, especially those that impact people’s freedom, safety, and rights.

Reference

Crawford, K. (2021). Data. In Atlas of AI: Power, politics and the planetary costs of artificial intelligence (pp. 89–122). Yale University Press.


Task 12: Speculative Futures

For this task, I used Microsoft Copilot to generate a speculative story using the prompt: “In approximately 500 words, describe or narrate a scenario about a gift found a few years into a future in which ‘progress’ has continued. Your description should address issues related to family and elicit feelings of happiness.” The AI-generated story described a “memory capsule”, a device that allowed a family to relive real memories from the past. The story followed a young girl who discovered this gift from her grandmother, and through it, her family reconnected with their shared history. In a world where technology had become fast, efficient, and artificial, the gift reminded them of what truly mattered: love, family, and remembering where they came from.

After reading the story and reflecting on the module readings and videos, I began to understand speculative design in a new way. As Dunne and Raby (2013) explain, speculative design is not about predicting or forecasting what will happen; rather, it uses imagined futures as a lens to question and critique the present. The future becomes a kind of mirror that shows us our current values, fears, and hopes. I think the AI story did this quite well. It imagined a future full of technological progress, yet it wasn’t about efficiency or profit; it was about emotional connection. In a way, it took the idea of “progress” away from corporations and gave it back to people. It imagined technology as something that brings families together instead of replacing human relationships.

This idea connects to what Mitrović et al. (2021) discuss when they describe how events like the 1964 World’s Fair presented a polished, corporate-controlled vision of the future, one that was meant to dazzle and distract rather than provoke critical thought. Those designs made the future look clean, simple, and full of promise, but they also ignored real social complexities. That’s the risk of speculative design when it becomes spectacle: it can lose its ability to challenge the present. The AI-generated story, however, offered something different. It was simple, but it also carried a quiet critique. In all our technological advancement, maybe what we truly need to preserve is our humanity.

Honestly, I have a somewhat bleak view of the future. With the state of the world right now, and with powerful corporate figures like Elon Musk and Jeff Bezos shaping what “progress” looks like, it’s hard to imagine a better world. They seem more interested in escaping our planet than healing it. But reading Dunne and Raby (2013) and Mitrović et al. (2021) made me realize that speculative design is about hope, about imagining possibilities that don’t yet exist but could. That’s what I appreciated about the AI story. It offered a hopeful version of the future where the past and the present could coexist, where family and memory still matter. In that sense, speculative design isn’t about predicting a better future; it’s about reminding us that we still have the power to create one.

Reflecting on this AI-generated story and the ideas of speculative design has reinforced for me the importance of keeping the human element at the center of educational technology. While tools and platforms can enhance learning experiences, they must be integrated thoughtfully to support emotional, social, and cognitive development. In my own teaching practice, this means balancing digital and physical literacy experiences, using technology to enrich engagement and differentiation without losing the tactile components that are vital for young learners. Speculative design reminds me that the choices we make today about technology in classrooms can shape not just what students learn, but how they experience connection, curiosity, and meaning in their education.

References

Dunne, A., & Raby, F. (2013). Speculative everything: Design, fiction, and social dreaming. The MIT Press.

Mitrović, I., Auger, J., Hanna, J., & Helgason, I. (Eds.). (2021). Beyond speculative design: Past – present – future. SpeculativeEdu.


Task 11: Text-to-Image

Prompts and AI Outputs

For this task, I explored how Microsoft Copilot represents various educational settings. I have included each prompt below along with the corresponding image it generated:

Generate an image of a Grade 1/2 class during Math centers:

Generate an image of a primary teacher and a principal talking in a school hallway:

Generate an image of a futuristic Grade 7 classroom where AI helps them learn:

Generate an image of a secondary history class with a teacher and students:

Generate an image of a Grade 1/2 classroom showing diversity and inclusion:

Accuracy and Differences

Was the result relatively accurate?

Overall, the images aligned with what I expected to see: mainly female teachers, a white male principal, and a robot replacing the teacher in the futuristic classroom. This accuracy is telling because the AI reproduced familiar social patterns rather than offering a neutral or objective representation of classrooms.

Were the images what I had in mind? What differed?

Most of the images matched my expectations. Every teacher was female, with some representation of racial minorities, while the principal was consistently portrayed as a white male. This mirrors patterns discussed in the You Are Not So Smart – Machine Bias (2018) podcast, where predictive text assumes a “nurse” must be female and a “doctor” must be male.

The classrooms were racially diverse but showed no visible disabilities. Everyone appeared cheerful, which aligned with my expectations. What differed was that in the secondary history class, a robot unexpectedly appeared beside the teacher, which suggests the AI may have carried over elements from other prompts.

Overall, the results reflected typical gender and authority roles while showing limited but present racial diversity among both teachers and students.

What can I infer about the model or training data?

Based on the results, it appears the AI’s training data emphasizes historical and societal trends, which leads to predictable patterns but also limitations in diversity and representation. The omission of students with disabilities further highlights the AI’s limited understanding of inclusion. This aligns with Cathy O’Neil’s (2017) discussion about unintentional problems in algorithms that reflect cultural data and reinforces the need for careful oversight, especially in high-stakes contexts.

AI Process and Training Data

Patterns such as all teachers being female and the principal being a white male suggest that the AI relies on learned social patterns from its training data. This reflects how AI systems often default to whiteness and maleness in positions of authority.  This task also made me reflect on what I learned during my summer AI institute, where many readings discussed how representation in AI systems shapes how people see themselves and what roles they imagine for their future.

Noble (2018) argues that AI systems often reinforce social hierarchies, shaping self-image and career expectations. In educational contexts, this is especially troubling: the images children see influence their sense of what roles are “for” them.  Representation matters, and diverse portrayals can be empowering.

Crawford (2021) similarly stresses the need for transparency in training data so that the assumptions built into AI systems can be properly evaluated. Users can write thoughtful prompts, but they are still constrained by the model’s underlying biases, which means developers must take responsibility for designing systems with more diverse defaults.

At the same time, this task reminded me of Coleman’s (2021) critique from my summer course: why do we continue to train AI using rigid, predefined categories at all? Human and animal learning is largely unsupervised; we observe and infer patterns without being explicitly told what to look for. Coleman calls for a shift toward “Wild AI,” where models learn through open-ended interaction rather than fixed datasets. Examples like AlphaGo demonstrate that AI can develop novel strategies and insights when not constrained by rigid categories, suggesting a hopeful path for reducing bias and reimagining representation.

Final Thoughts

This task demonstrated how AI can generate realistic and creative images, but it also mirrors societal biases. Testing multiple prompts, including primary and secondary classrooms, futuristic and realistic scenarios, and diversity-focused settings, helped me explore the assumptions built into the model.

It reinforced the importance of critical evaluation and ethical oversight in AI design, especially when these tools are used in educational spaces. To be honest, I wasn’t surprised by the patterns in the images.  I expected the teacher to be female and the principal to be male, and that expectation alone reveals how deeply social norms shape both our thinking and the technologies we create.

Ultimately, this exercise changed how I think about “neutral” AI tools. Every generated image is a reflection of cultural memory encoded into data and recognizing this is essential for using these tools responsibly.

References

Coleman, B. (2021). Technology of the surround. Catalyst: Feminism, Theory, Technoscience, 7(2), 1–21.

Crawford, K. (2021). Atlas of AI: Power, politics and the planetary costs of artificial intelligence. Yale University Press.

McRaney, D. (Host). (2018, November 21). Machine bias (rebroadcast) (No. 140) [Audio podcast episode]. In You Are Not So Smart. SoundCloud.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Observer. https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies


Linking Assignment #5

In Erica’s Task 9 post, she reflected on our Golden Record quiz data and highlighted how Palladio created communities based on song choices. She explained that people and musical selections were connected by edges reflecting shared choices, noting popular songs like The Well-Tempered Clavier and less-selected ones like Sacrificial Dance (Rite of Spring). Her post included screenshots that illustrated these connections effectively. She also reflected critically, noting that we don’t know why people chose particular songs unless asked, that the visualization could reinforce bias, and that there were no null choices — all tracks received at least one selection.

Why I Chose This Work

I chose this post because it offered a perspective on exploring and presenting data in Palladio that I hadn’t fully considered. Erica’s explanations and screenshots made the patterns and connections more visible, showing that there are multiple valid ways to interpret and present the same data.

Reflection

Reading Erica’s post helped me notice different ways to explore and present data. She mentioned that there were no “null” choices, while I had interpreted “null” differently — as songs I personally did not select in my own choices and community. For example, I didn’t select The Well-Tempered Clavier, so I would consider that my “null” choice. This difference highlights how perception can shape reflection. Erica’s approach encouraged me to consider both selected and unselected data, and how each can tell a story. I also appreciated her use of detailed screenshots, which showed how she narrowed down song choices to reveal how many people selected each track — something I hadn’t realized was possible. These screenshots gave me greater insight into Palladio’s capabilities and inspired me to combine visual evidence with textual analysis, making patterns clearer in my own reflections.

Linking to Erica’s work has inspired me to explore alternative approaches to representing data and to value both textual and visual literacy. It reinforced that thoughtful reflection — considering how the tool shapes interpretation — strengthens the learning experience. This insight is a skill I can carry forward into future course activities and my broader digital literacy practice.


Task 10: Attention Economy

For Task 10, I tried to make my way through the User Inyerface game, but I didn’t reach the final screen. My reflections are based on the interactions I experienced up to that point. Here are the two screenshots I managed to capture from the game.

From the very beginning, I felt frustrated. The first thing that appeared was the “cookies” popup — something I encounter constantly when browsing online. Like in real life, I clicked “yes” almost immediately, realizing how often I make decisions without thinking, simply to move on. This instant reaction is a small example of how web design manipulates user behaviour.

Throughout the game, constant pop-ups and distractions made each click feel like a battle. When I clicked “help”, it told me that 400+ people also needed help, and when I tried to submit a question, the box disappeared. The countdown that appeared when I wasn’t “fast” enough added another layer of frustration. It felt realistic — how often do we face this kind of design online, where help is technically available but not truly accessible?

Just when I thought I was making progress, the login section appeared.  The endless password requirements, repetitive steps, and confusing layout reminded me of how exhausting online forms can be. I got as far as the “preferences” page but stopped when it asked for a photo upload. That was my breaking point – I simply didn’t want to engage anymore.

Reflecting on the game, I realized how accurately it mirrors the manipulative design strategies we face every day. User Inyerface is full of what Harry Brignull (2011) calls “dark patterns” — design choices that intentionally frustrate, confuse, or mislead users into doing things they didn’t intend. It reminded me of what Tristan Harris (2017) described in his TED Talk: tech companies design interfaces to capture and hold our attention for as long as possible. The more time we spend on their platforms, the more data they collect, and the more profit they make.

Zeynep Tufekci (2017) also explores this in her TED Talk, comparing the internet to a store with no limits, where our attention is constantly being bought and sold. She argues that this model prioritizes profit over user well-being, which is exactly how the game made me feel. Every obstacle seemed designed to make me click more, stay longer, and get more frustrated — yet somehow still keep trying.

The game also made me reflect on my own attention span and on children’s digital experiences. I notice myself quickly clicking on pop-ups or “accept” buttons just to make them go away, often missing important information. If I’m feeling this way as an adult, what are my students experiencing? Research supports this concern, suggesting that digital environments can be distracting and may interfere with learning if not designed thoughtfully. As Kalantzis and Cope (2010) argue, digital platforms should minimize distractions like excessive animations or multimedia, and Sackstein et al. (2015) found that extended digital reading can diminish its benefits, with features like hyperlinks, animations, and multimedia acting as distractions that reduce reading comprehension. Experiencing this firsthand reminded me how easily students can feel manipulated or frustrated by design choices that appear neutral. This game could be a powerful tool for helping students recognize how interfaces influence our attention and decision-making online.

Harris (2017) suggests that to change this attention-driven model, tech companies would need to redesign their products to encourage meaningful engagement rather than addictive behaviour. Games like User Inyerface make visible the “normalized manipulation” built into so much of our online experience — showing how text technologies subtly shape the way we read, respond, and comply. But it raises a bigger question: given that the current system benefits those at the top, would tech companies ever choose to design for user well-being over profit?

As much as I’d like to be optimistic, I’m skeptical we’ll truly see a user-centered internet in my lifetime.

References

Brignull, H. (2011). Dark patterns: Deception vs. honesty in UI design. A List Apart, 338. https://alistapart.com/article/dark-patterns-deception-vs-honesty-in-ui-design/

Kalantzis, M., & Cope, B. (2010). The teacher as designer: Pedagogy in the new media age. E-Learning and Digital Media, 7(3).

Harris, T. (2017). How a handful of tech companies control billions of minds every day [Video]. TED. https://www.ted.com/talks/tristan_harris_how_a_handful_of_tech_companies_control_billions_of_minds_every_day?utm_campaign=tedspread&utm_medium=referral&utm_source=tedcomshare

Sackstein, S., Spark, L., & Jenkins, A. (2015). Are e-books effective tools for learning? Reading speed and comprehension: iPad® vs. paper. South African Journal of Education, 35(4), 1–14. https://doi.org/10.15700/saje.v35n4a1202

Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads [Video]. TED. https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads?utm_campaign=tedspread&utm_medium=referral&utm_source=tedcomshare


Linking Assignment #4

In Adrianne’s Task 7 post, she created a redesign of her “What’s in My Bag” task by producing a digital textile collage in Genially that represents the textures of the items she carries. She reflected on how the project helped her think differently about how materials communicate meaning and noted that translating these textures into a digital form was challenging, since touch can only be suggested. She explained that the collage better captured how it actually feels to carry the items. She connected her redesign to Cope and Kalantzis (2009) and the New London Group (1996), emphasizing how literacy goes beyond words to include visual, tactile, spatial, and digital forms of meaning.

Why I chose this post

I chose this post because I wanted to see another approach to the mode-bending task, since I had chosen a soundscape. I was intrigued by what Adrianne did with textures and had many questions after exploring her post.

Reflection

I think using textures as a mode-bending task is a great idea and a very unique approach. For me, audio seemed like the most natural way to convey meaning, so it was fascinating to see how someone else approached the task. I was particularly interested in Adrianne’s point about how translating texture into a digital form is challenging because touch can only be suggested.

I also had a few questions while navigating the post. For example, Adrianne used Genially, which is normally used for interactive presentations, but I couldn’t see the interactivity here. I wondered why she chose Genially to showcase the textures. If there is some interactivity I missed, it would have been helpful to provide instructions on how to navigate it. I also thought it might be interesting not to label the images, allowing viewers to interpret the textures themselves.

Another big question that arose for me is how touch could be represented in a digital, multimodal way. Could we use video to show someone interacting with these materials? How might touch be conveyed visually, digitally, and spatially to create a more immediate sense of materiality? These questions represent my biggest takeaway from Adrianne’s post and have inspired me to think about new ways of representing sensory experiences in multimodal projects, especially when they must be conveyed digitally.

References

Cope, B., & Kalantzis, M. (2009). “Multiliteracies”: New literacies, new learning. Pedagogies: An International Journal, 4(3), 164–195. https://doi.org/10.1080/15544800903076044

The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92.


Task 9: Network Assignment Using Golden Record Quiz Data

It was really interesting to see everyone’s song choices represented visually in the graph. The first thing I wanted to know was which songs were most popular, so I used Palladio’s node function to identify the larger nodes. I wasn’t surprised that the songs from Australia (Morning Star, Devil Bird) and India (Jaat Kahan Ho) were among the top, but I was surprised that Beethoven’s Fifth Symphony (First Movement) wasn’t more popular, since it’s such a well-known classical piece. I also expected Johnny B. Goode to receive more votes. Seeing other popular songs, like Well-Tempered Clavier and Kinds of Flowers, which I hadn’t chosen, made me want to listen again and understand what others found appealing.

Figure 1. Node size indicating popularity of selected songs.

When I started looking at the groupings, I wondered why I was placed in Group 1. My group seemed to have fewer shared song choices compared to Group 3, which had more overlap. The graph doesn’t show which songs I personally chose or my reasoning (selecting songs to represent different cultures), so that context is invisible.  It also made me wonder whether others had similar intentions or were choosing based purely on preference. In this way, the visualization captures patterns but loses individuality and intent, even though these are meaningful parts of the data.

Thinking about this through network theory, as discussed in the Systems Innovation (2015) videos, helped me understand what I was seeing. In Palladio, each person and song acts as a node, and the lines between them are edges showing shared choices. Some nodes are “weighted” because they have more links, like the songs chosen by many people. This reminded me of how the early web evolved into a weighted network, where algorithms like Google’s PageRank began valuing certain connections more highly. Similarly, Palladio’s layout makes popular songs appear central and “important,” while less common ones, like those I chose for diversity, fade into the background. The visualization looks neutral, but it’s guided by hidden rules that privilege certain connections.
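The node-and-edge idea can be made concrete in a few lines of code. This is a minimal, hypothetical sketch — invented respondents and a handful of song titles, not the actual quiz data or Palladio’s layout algorithm — showing how a song’s “weight” is simply its degree, the count of people linked to it:

```python
from collections import Counter

# Hypothetical quiz data: each respondent (node) is linked by an edge
# to every song (node) they selected. Names are invented for illustration.
selections = {
    "Person A": {"Morning Star", "Jaat Kahan Ho", "Johnny B. Goode"},
    "Person B": {"Morning Star", "Jaat Kahan Ho", "Fifth Symphony"},
    "Person C": {"Morning Star", "Dark Was the Night"},
}

# A song's degree = how many people chose it. Node size in a
# Palladio-style graph typically scales with this number.
degree = Counter(song for songs in selections.values() for song in songs)

# Shared choices between two people are the edges they have in common,
# which is the basis for the "communities" the graph suggests.
def shared(p1: str, p2: str) -> set[str]:
    return selections[p1] & selections[p2]
```

Seen this way, a song’s apparent “importance” is nothing more than a tally of connections, which is exactly why the reasoning behind each choice disappears from the picture.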

I also wondered how the groupings were formed. Was I placed in Group 1 because I chose Johnny B. Goode, a less common pick? If I’d chosen Melanesian Panpipes, which many people selected, would I have been in a different group? I chose Dark Was the Night instead because it spoke to me, yet that choice doesn’t appear in my group. At first, I thought I was reading it wrong, but I realized Palladio only shows songs that connect people within that cluster. The graph also doesn’t show songs we didn’t choose, and these “null” choices can be just as meaningful as the ones we selected. This made me think about how little transparency there is in algorithms; I don’t know how Palladio decided my placement, and even if I did, I’m not sure I could fully understand it.

Figure 2. Palladio visualization showing Group 1 connections.

This experience made me think about the political implications of how data like this is represented. The visualization suggests communities and patterns but leaves out the reasoning and individuality behind each person’s choices. It privileges agreement and visibility over diversity and intent, reflecting how data visualizations can shape power, who gets represented, and whose perspectives fade into the background.

Ultimately, this exercise reminded me that data never tells the whole story. Every visualization reflects choices about what to include and what to ignore. What looks like objective information is really an interpretation, one that shapes how we understand people and culture. This experience helped me see how even simple visualizations can reveal broader questions about how data organizes, simplifies, and sometimes distorts human meaning.

Declaration of AI Assistance

I used ChatGPT (OpenAI, 2025) to assist with refining the clarity and conciseness of my writing; however, all interpretations and ideas presented are entirely my own.

References

OpenAI. (2025). ChatGPT (November 2 version) [Large language model]. https://chatgpt.com

Systems Innovation. (2015, April 18). Graph theory overview [Video]. YouTube. https://www.youtube.com/watch?v=9mW9G8jBgmU

Systems Innovation. (2015, April 19). Network connections [Video]. YouTube. https://www.youtube.com/watch?v=JkpX__zLJYI


Task 8: Golden Record Curation Assignment

The 10 songs I would choose from the Golden Record:

  1. Beethoven – Symphony No. 5
  2. Navajo Night Chant
  3. Ugam – Azerbaijan bagpipes
  4. Blind Willie Johnson – Dark was the Night
  5. India – Jaat Kahan Ho
  6. China – Flowing Streams
  7. Peru – Wedding Song
  8. Senegal – Tchenhoukoumen
  9. Australia – Morning Star and Devil Bird
  10. Chuck Berry – Johnny B. Goode

As I chose these ten songs, I kept thinking about Dr. Smith Rumsey’s (2017) discussion of Carter Woodson, who worked to ensure that the Black community was represented fairly in history. What we include determines who and what is remembered, so I wanted my selection to represent the diversity of life on Earth. My goal was to include music from every continent and a range of cultures to show both our differences and shared human experiences. I included one classical piece and one early rock song to highlight the evolution of musical style over time, alongside traditional pieces that reflect deep cultural roots. I struggled between the Melanesian panpipes and “Dark was the Night” but ultimately chose the latter because it spoke to me on a more emotional level. Narrowing it down to ten was difficult because each song carries meaning, but I believe this list offers a balanced and meaningful representation of our planet.

Reference

Brown University. (2017, July 11). Abby Smith Rumsey: “Digital memory: What can we afford to lose?” [Video]. YouTube. https://www.youtube.com/watch?v=FBrahqg9ZMc


Task 7: Mode-bending

 

For Task 7, I decided to change the mode to an audio mode. Please click on the audio file above and try to guess what the nine objects are!

Reflecting on the New London Group (1996), I looked at Figure 1 on page 25 to determine what mode of meaning I could use to redesign my “What’s in My Bag” task from Task 1. For that first task, I used a linguistic mode mixed with a visual mode because I included a picture. Looking at the other modes (audio, gestural, spatial, visual, and multimodal), I immediately knew which one I did not want to do: gestural. Acting something out or being on camera is not something I’m comfortable with, though I know others might be. So, I decided to go with the aural (audio design) mode, creating sounds for each object. I recorded each sound using the Voice Memos app on my iPhone and then used Audio Joiner to combine the files. I chose not to use my voice because I wanted the sounds to “speak for themselves”, letting the listener focus entirely on the auditory clues rather than on any verbal explanation.
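As a side note, the joining step could also be done locally rather than through a web tool. Here is a rough Python sketch using only the standard library’s wave module; it assumes the clips have already been converted to WAV files with matching channel count, sample width, and rate (Voice Memos actually exports .m4a, so this illustrates the idea rather than the exact workflow I used):

```python
import wave

def join_wavs(paths: list[str], out_path: str) -> None:
    """Concatenate WAV clips that share the same sample parameters."""
    with wave.open(out_path, "wb") as out:
        for i, path in enumerate(paths):
            with wave.open(path, "rb") as clip:
                if i == 0:
                    # Copy channel count, sample width, and rate from the
                    # first clip; the wave module patches the frame count
                    # into the header when the output file is closed.
                    out.setparams(clip.getparams())
                out.writeframes(clip.readframes(clip.getnframes()))
```

A joiner tool does something broadly similar behind the scenes: it lines the clips up end to end in a single audio stream.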

I found that using only an aural mode can be challenging because some sounds are not easily identifiable, though others might be. It also made me think about cultural differences, since some people may not have access to or familiarity with certain objects, so the sounds might not hold the same meaning. This shows how sound-based communication isn’t equally accessible to everyone. A multimodal approach might have made the task more engaging, for example by combining aural elements with visuals or text.

One benefit of changing modes is that it made me think more creatively about meaning-making. Instead of relying on visual or written cues, I had to focus on how sound alone could communicate. This pushed me to listen more carefully to the world around me and to think about how everyday objects have unique “signatures” that tell a story. For students, this type of activity could build observation and listening skills and help them understand that communication isn’t only about writing or speaking; it’s about designing meaning in many forms.

I can see this kind of activity working well with younger learners, especially those who may not yet be confident writers. For instance, students could create a “soundscape” of their desks, backpacks, or classroom routines, and then share them with others to guess the items or actions. It would make learning more interactive and inclusive, giving space for auditory learners to shine. I also feel that students are so imaginative and would probably come up with even more original ideas than I could.

It also made me reflect on how meaning changes when context is removed. If I hadn’t shown the original image, would listeners be able to identify the objects? Some, like a laptop or phone, might be easy, but others could be harder to guess. That again ties back to culture and accessibility; what is familiar to one person may not be to another.

The New London Group (1996) argues that education still reflects an industrial-era mindset, and I think that’s still true today. While I agree that we need to move toward more multimodal learning, it’s challenging when teachers already face limited time and access to technology. This raises the question of how educators can be supported to make these changes.

Reference

The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92.
