Category Archives: ETEC 540

Connecting Literacies (Linking Assignment)

Introduction:

I chose these tasks after reviewing many projects from our ETEC 540 class. Each one showed a different way of using language, images, sound, or technology. These differences helped me see how literacy changes when the medium changes. By comparing their work with my own, I could better understand the main ideas of the course. These tasks also showed how much we learn from each other when we share our stories, designs, and reflections.

Reflections on My Colleagues’ ETEC 540 Tasks

Ice’s Task 3: Voice to Text Task

https://blogs.ubc.ca/iceetec540/2025/09/21/task-3-voice-to-text-task/

When I read Ice’s Task 3 story, I noticed right away that it sounded like real talking. The sentences were long. Ideas came quickly. There were fillers, fast changes, and strong emotions. This matches what Walter Ong says about spoken language. He explains that oral speech is “additive, redundant, and close to real life” (Ong, 2002). Ice’s story feels alive. I could imagine the crowded bus and the embarrassing moments. The voice-to-text tool kept the mistakes and the fast flow. Because of this, the story stays close to natural speech. This also fits with Haas’s idea that writing tools shape how our thoughts appear on the page. She calls this the “materiality of literacy” (Haas, 1996). My story is a little more careful, but the focus here is Ice’s text. His webpage design also supports this “spoken” feeling. It shows how technology and course tasks help us learn about orality and writing.

References:

Haas, C. (1996). Writing Technology: Studies on the Materiality of Literacy. Routledge.

Ong, W. J. (2002). Orality and Literacy: The Technologizing of the Word. Routledge.

 

James Lin’s Task 4: Manual Scripts and Potato Printing

https://blogs.ubc.ca/jlinmet/2025/09/29/4-4-task-4-manual-scripts-and-potato-printing-optional-task/

In his Task 4, I noticed that his handwriting experience was shaped by speed and technology. He usually types very fast and finds handwriting slow and uncomfortable. He also explains his ideas through McLuhan’s famous claim that “the medium is the message,” meaning the tool changes how we think and write. Looking at theory, his reflections match ideas from the “Mechanization” reading, which explains how writing tools change the pace and rhythm of writing (Bolter, 2001). My story is a little different. For me, handwriting feels rare and emotional after immigration. For him, it feels slow compared to typing. But both pages show that writing tools shape our thinking. He uses WordPress in a clean, structured way. My page feels more personal. Both of us respond to the course goal: learning how writing changes when the medium changes.

References

Bolter, J. D. (2001). Writing Space: Computers, Hypertext, and the Remediation of Print. Lawrence Erlbaum Associates.

 

 

Sara Johnson’s Task 6: An Emoji Story

https://blogs.ubc.ca/sjohnstonetec540/2025/10/16/task-6-an-emoji-story/

In Sara’s emoji story, the first thing I noticed was how quickly her ideas appeared. Her story feels simple and playful. It looks like a small comic made of pictures. Even though it feels light, it shows that pictures can carry strong meaning. Emojis give feelings fast. They do not need long sentences to explain themselves. This connects to what Bolter says about the growing power of visual communication in the digital age (Bolter, 2001). Sara uses emojis to show actions, emotions, and reactions. Each emoji works like a tiny picture. This makes her story quick to read and easy to imagine. It also matches the way many people communicate today. We often use symbols instead of writing long messages. Pictures, colors, and shapes help us understand ideas right away. Her story also reminded me of early picture-based writing, like ancient symbols and pictographs. Emojis work the same way, but their meaning changes depending on the reader. This makes the story interactive. The reader helps build the meaning. Kress explains that modern literacy is multimodal, using many forms, such as images, words, and sound, to make meaning (Kress, 2003). In this task, I focused more on the history of writing. Sara’s story focused more on feeling and emotion. Both approaches are valuable. Her work helped me see how strong images are in digital storytelling. It showed me that meaning today is not only written. Meaning is also designed with symbols, pictures, and imagination.

References

Bolter, J. D. (2001). Writing Space: Computers, Hypertext, and the Remediation of Print. Lawrence Erlbaum Associates.
Kress, G. (2003). Literacy in the New Media Age (Chapter 7: Multimodality and design of meaning). Routledge.

 

Laila’s Task 7: Mode-Bending

https://blogs.ubc.ca/etec540laila/2025/10/19/mode-bending-a-desktop-exploration/

When I explored Laila’s Mode-Bending project, the first thing that caught my attention was how she used her desktop as a “text.” Her computer screen was not just a tool. It became a full multimodal space filled with colors, icons, and emotional traces of her daily life. Each part of her desktop, such as folders, pictures, and tabs, worked like a small sign that carried meaning about how she thinks, works, and organizes her world. This follows the view in digital-literacy research that new technologies change the “shape of our interests” and the “character of our symbols” (Postman, 1993). Laila’s project also fits with multiliteracies theory, which says literacy today includes visual, spatial, and digital modes, not only writing (New London Group, 1996). She designs meaning by arranging digital objects, much like students using multimodal “available resources” to express identity. When I compare this to my own audio story, the difference becomes clear. My work uses voice, tone, pause, and rhythm to create meaning. She uses layout, color, and digital arrangement. Both projects show what digital-literacy scholars describe as the shift toward multimodal meaning-making, where reading and writing happen across many modes (Dobson & Willinsky, 2009). Laila helped me see that storytelling today is not only about words. It is also about design. She reminded me that my audio story shapes meaning through sound, while her desktop shapes meaning through space. Together, both works show how MET encourages us to explore literacy as something dynamic, creative, and multimodal.

References

Dobson, T., & Willinsky, J. (2009). Digital literacy. In The Cambridge Handbook of Literacy. Cambridge University Press.
The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92.
Postman, N. (1993). Technopoly: The Surrender of Culture to Technology. Vintage Books.

 

Kyle’s Task 10: Attention Economy

https://blogs.ubc.ca/contrabot/2025/11/10/bad-ui-an-example-of-what-not-to-do/

In Kyle’s project on “Bad UI”, the first thing I noticed was how clearly he showed the tricks built into many digital designs. His examples make it easy to see how some interfaces confuse users on purpose. This reminded me of the “dark patterns” idea, where designers hide information or push users into choices they did not want to make (Brignull, 2011). Kyle’s work helped me understand how these designs quietly shape our decisions. He shows how a simple button change or a hidden checkbox can make people click without thinking. This connects strongly with Zeynep Tufekci’s warning that today’s technologies can influence people in “hidden, subtle, and unexpected ways” (Tufekci, 2017). Compared to my experience with the attention-economy game, Kyle’s project looks outside the “game world” and focuses on real websites. My work was about how a single interface manipulated me step-by-step. Kyle instead shows how these manipulations exist everywhere: shopping sites, signup pages, and everyday apps. Both projects point to the same problem: modern design often uses our attention as a resource. Kyle’s analysis also connects to Roger McNamee’s point that big tech systems grow by capturing and manipulating user attention, sometimes without users noticing (McNamee, 2019).

Kyle helped me see that dark patterns are not accidents; they are part of a larger system where design, data, and profit work together. His project made me reflect more deeply on how my own web experiences are shaped by choices I never fully see.

References

Brignull, H. (2011). Dark Patterns: Deception vs. Honesty in UI Design. A List Apart, 338.
McNamee, R. (2019). I Mentored Mark Zuckerberg. But I Can’t Stay Silent About What’s Happening. Time Magazine.
Tufekci, Z. (2017). We’re Building a Dystopia Just to Make People Click on Ads. [Video]. TED.

 

Michael Cafuta’s Task 11: Text-to-Image Using Sora

https://blogs.ubc.ca/mcafuta/task-11-text-to-image-using-sora/

When I studied his project, I saw more than just creative images. I saw how AI systems shape our memories and our sense of truth. Michael expected the AI to recreate a real snorkeling moment from his childhood, but the tool added bright life jackets, dramatic scenery, and details that never happened. This showed me something important: AI does not remember; it predicts. It builds new images from patterns learned in huge datasets, not from the storyteller’s actual experiences. Shannon Vallor describes this effect as the “AI mirror”: a system that reflects our world back to us, but always with distortions based on the data it has absorbed (Vallor, 2018). The more I read, the more I realized that these distortions can be powerful. Cathy O’Neil explains that many algorithms are treated as objective, even when they contain hidden assumptions or errors (O’Neil, 2016). Her idea helped me understand why Michael felt uneasy seeing his memory reshaped by the system. The AI was confident, but not accurate. In my own task, where I explored AI in justice and risk scoring, I now see the same pattern: AI responds with patterns, not understanding. Michael’s images reminded me that, in any field, such as art, justice, or storytelling, AI’s output is a statistical guess, not a human truth. His work encouraged me to think more carefully about how we trust AI and how easily its confident guesses can influence our feelings, memories, and decisions.

References

Santa Clara University. (2018, November 6). Lessons from the AI Mirror | Shannon Vallor [Video]. YouTube.

O’Neil, C. (2016). Weapons of Math Destruction: How big data increases inequality and threatens democracy. Crown.

A Future Literacy: Why Humans and AI Must Create a Shared Language (Final Project: Podcast)

“The digital revolution is far more significant than the invention of writing or even of printing.” Douglas Engelbart, American inventor and computer pioneer (1925–2013)

I chose this topic for my podcast because it connects directly to the main ideas of ETEC 540. Throughout the course, we learned how new technologies change writing, reading, and communication. Today, artificial intelligence is one of the biggest changes, because we are now communicating not only with other people but also with machines that can create language. This raises important questions about how humans and AI understand each other, how prompts work as a new form of writing, and how multiliteracies now include text, images, emojis, and machine responses. The topic allows me to link the history of writing with the new reality of AI-supported communication.

References:

Belshaw, D. A. J. (2012, March 22). The essential elements of digital literacies: Doug Belshaw at TED [Video file]. https://www.youtube.com/watch?v=A8yQPoTcZ78 He introduces the elements of digital literacy: civic, critical, creative, communicative, confident, cultural, cognitive, and constructive. He argues that literacy depends on context, tools, and social practices, not only on reading and writing.

Cope, B., & Kalantzis, M. (2015). A pedagogy of multiliteracies (Chapter 1: The things you do with language: A contemporary guide to literacy teaching). Palgrave Macmillan. They define literacy as a process of “design” using multiple modes such as linguistic, visual, spatial, gestural, and audio. So, modern literacy involves multimodal communication.

New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92. This paper introduces the concept of “multiliteracies,” highlighting the importance of multimodal meaning and cultural diversity in communication.

Danesi, M. (2016). The semiotics of emoji: The rise of visual language in the age of the Internet. Chapter 9: Universal Languages. Bloomsbury Academic. Danesi analyzes emojis as a structured visual sign system with semantic and emotional functions in online communication. He explains how emojis reduce ambiguity and enhance emotional expression.

Haas, C. (1996). Writing technology: Studies on the materiality of literacy (Chapter 4: Materiality and thinking: The effects of computer technology on writers’ planning). Routledge. Haas argues that writing technologies (from paper to computers) shape cognitive processes and social practices. She shows how tools influence how people think, communicate, and construct meaning.

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. This article reviews the educational use of ChatGPT, identifying risks such as misinformation and bias, and opportunities such as personalization and writing support.

Luke, C. (2003). Pedagogy, connectivity, multimodality, and interdisciplinarity. Reading Research Quarterly, 38(3), 397–403. Luke argues that digital technologies have transformed literacy into a multimodal and highly connected practice, where meaning is made through text, image, sound, and interaction.

Mars, R. (Host). (2017, September 5). The age of the algorithm (No. 274) [Audio podcast episode]. In 99% Invisible. This episode examines how algorithms quietly shape everyday decisions and raises questions about their social impact.

 

 

The Book of Stillness (Task 12)

Prompt: “Describe or narrate a scenario about a book found a decade into a future in which society as we know it has come apart. Your description should address issues related to the brain and elicit feelings of calm.”

Scenario (generated by Microsoft Copilot)

It was found in the hushed remains of a library, its roof long since surrendered to vines and sky. A decade had passed since the Collapse—when networks failed, cities emptied, and the noise of civilization dissolved into wind and birdsong. Amid the silence, a scavenger named Lira uncovered the book beneath a pile of moss-covered debris. Its cover was soft, bound in faded blue linen, the title barely legible: Neuroharmony: A Guide to Gentle Thought. She opened it with reverence. The pages, though yellowed, were intact. The text was handwritten, not printed—a rarity in the age before the Collapse. Each letter curved with care, as if the author had whispered their thoughts through ink. The book was not a manual, nor a manifesto. It was a meditation. Inside were reflections on the brain, not as a machine of logic or ambition, but as a garden. The author spoke of neural pathways as streams, of thoughts as leaves drifting gently across their surface. There were exercises, too: breathing with awareness of the occipital lobe, visualizing the hippocampus as a sanctuary of memory, tracing the rhythm of the prefrontal cortex like a slow drumbeat.

Lira read beneath the broken skylight, light dappling the pages. The world outside was quiet, but inside, her mind had always been loud: echoes of panic, fragments of the old world’s urgency. Yet as she turned each page, a stillness settled in. The book did not demand understanding; it invited presence. One passage described the amygdala as a frightened animal, easily startled but capable of trust. “Speak to it gently,” the author wrote. “Let it know the danger has passed.” Lira closed her eyes and imagined her own amygdala curled in a nest, breathing slowly. She felt her pulse soften.

The book had no author’s name, no publication date. It seemed to exist outside time, as if it had been waiting for the world to quiet down enough to be heard. In the margins were notes, different handwriting, perhaps from past readers. “This helped me sleep,” one said. Another: “I read this during the last blackout. It saved me.” Lira carried the book with her for weeks, reading it aloud to others in the scattered settlements. People gathered around fires, listening to descriptions of the brain as a forest, a temple, a tide. They did not ask for proof. They only listened and breathed. In a world stripped of screens and sirens, the book became a kind of medicine. Not to fix, but to soothe. It reminded them that even in collapse, the brain could be a place of peace. That thought itself could be gentle. Eventually, the book was placed in a new library, a quiet room built from reclaimed wood and stone. It sat on a cushion, open to a page that read: “You are not broken. You are unfolding.” Visitors came not to study, but to sit. And in that stillness, the world began to heal.

Critical Reflection on “The Book of Stillness”

Introduction: For Task 12, I did not use only one prompt. I chose many different prompts. I wanted to see how the AI changes its style for different topics. I also wanted to see if it can stay clear and logical in every case. Then I asked Copilot another question. I asked it what sources, data, and internal tools it uses to write these scenarios. It said that it uses training data, language models, and patterns in words. It also uses mathematical probabilities to decide what to write next. This showed me something important. AI does not create ideas the way humans do. It builds text from patterns, not feelings. This was the most important part for me. The results surprised me. The AI wrote long and clean scenarios in a few seconds. The grammar was correct. The tone was smooth. Some parts even sounded emotional. But this also worried me. I asked myself: What happens when machines become the main way we create meaning and imagination? Will people spend more time with machines than with other humans? Because of this, I read “The Book of Stillness” in a different way. I did not read it as a companion text. I read it as a contrast. It is a human story. But we are moving into a world shaped by machines.

My Reflection

“The Book of Stillness” shows a future after a major collapse. In that future, people find calm again. They read together. They breathe together. They help each other. The scenario says that humans find meaning through other humans. But today, this idea feels old. It feels like a memory from a world we may be losing. Dunne and Raby say that future stories are not predictions. They are tools that help us see our world more clearly. “The Book of Stillness” does this. It shows what we may lose. In the scenario, the mind is like a garden. It grows through human presence. Today, our world is different. We spend more time with machines. Machines are smart but do not feel. Shannon Vallor says that AI is “machine-augmented thinking.” It copies our patterns but does not share our emotions. Machines answer fast. They know many things. But they cannot fear, love, or trust. Humans feel these things. Machines do not. This makes the relationship unequal. This problem affects society. Vallor warns that AI can make inequality and confusion worse. It can split people apart. Because of this, the calm and shared world in the scenario is hard to imagine today. People may stop reading together. They may stay alone with their screens. They may live in private digital spaces that feel like communication but are not real connection.

Leah Price studied the history of books. She says technology has changed reading many times. The danger today is deeper. Books will not disappear. But our shared understanding may disappear. Without a shared mental world, even a peaceful book like The Book of Stillness loses its power. In the end, the world in the scenario feels like a reminder. It shows a time when humans understood each other. Today, machines stand between us. This may break our shared language, our shared feelings, and our shared goals.

References:

Dunne, A., & Raby, F. (2013). Speculative Everything: Design, Fiction, and Social Dreaming. MIT Press.

Santa Clara University. (2018, November 6). Lessons from the AI Mirror. [Video]. YouTube.

Price, L. (2019, September 17). Books Won’t Die. The Paris Review.

Fairness, Fear, and Facts: A Student’s Look at AI in Justice (Task 11)

Participating in the Detain/Release simulation was both interesting and emotional for me. It reminded me of the movie Twelve Angry Men. In that film, people must make a fair decision, and they need more than facts. They also need kindness, careful thinking, and the ability to question things (Lumet, 1957).

In the simulation, I did not discuss anything with others. Instead, I had to look at computer-made risk scores. These scores looked scientific and very official. But they were still created by algorithms. O’Neil explains that these systems can repeat unfair patterns from the past even when they look mathematical and neutral (O’Neil, 2016).

As I read each case, I felt pulled in two directions. One direction was the personal stories of the defendants. These stories made me feel empathy and made me want to give them a chance. The other direction was the algorithm’s risk numbers. The Guardian warns that algorithms can “tell lies” when their data is biased or incomplete (O’Neil, 2017). I noticed this when the “high-risk” label made me ignore what the defendant was saying.

Shannon Vallor talks about the “AI mirror.” She says that AI tools copy simple versions of human judgment. They look confident, but they hide their limits (Vallor, 2018). This made sense to me. The colorful bars and charts in the simulation felt very convincing, even when the real situation was more complicated.

I also felt pressure from things like jail capacity and public fear. These forces were not about the defendant, but they still affected my decisions. This showed me that algorithms do not work alone. They are connected to human feelings and rules.

In the end, the simulation taught me something important: AI can help us, but it should not replace human judgment. Fair decisions need both data and human empathy.

References:

Lumet, S. (Director). (1957). Twelve Angry Men [Film]. Orion-Nova Productions.

O’Neil, C. (2016). Weapons of Math Destruction. Talks at Google. [Video]. YouTube.

O’Neil, C. (2017). How can we stop algorithms telling lies? The Guardian.

Vallor, S. (2018). Lessons from the AI Mirror. Santa Clara University. [Video]. YouTube.

Click. Confuse. Continue: A Game That Plays You (Task 10: Attention Economy)

When I started Task 10 and opened the User Inyerface game, I thought it would be a logical challenge. I expected a puzzle that needed reasoning and planning. Very soon, I realized it was not about problem-solving at all. It was about manipulation through design.

From the first screen, User Inyerface confused me. A large button asked for consent, but instead of “Yes,” it said “No.” Below, a line read “click HERE to continue.” The word “click” was underlined, and the word “HERE,” written in capital letters, looked clickable, but neither was. I moved the cursor several times before it finally worked. That moment made me understand the game’s true purpose. It was not built to guide or help. It was designed to trick and frustrate the player.

As I continued, every new step created more confusion. The password rules were written in tiny green letters on a blue background, which made them very hard to read. The password and email boxes would not clear properly. The placeholder text “Password” or “Your email” stayed there while I typed. Then the message “Time is ticking” appeared, adding more stress. I tried several times, but the page froze repeatedly. At one point, a box said, “If you do not accept the terms and conditions, check this box.” Only after I unchecked it was I allowed to continue.

These patterns reminded me of Brignull’s idea of “dark patterns.” He defines them as “user interfaces carefully designed to trick users into doing things they might not otherwise do” (Brignull, 2011). They make us act against our real intentions. The User Inyerface game uses confusion, color, and timing to manipulate. It teaches frustration, not skill.

While playing, I thought about how this reflects real digital life. Harris (2017) explains that all digital platforms compete in a “race for our attention.” Their main goal is not to inform or empower us. He argues that technology is made to pull us in and keep us engaged. Notifications, autoplay videos, and pop-ups all serve this purpose. The same thing happened to me in this game. Every click, pause, or delay pulled me back in. I was trapped in the cycle of reaction and effort that Harris describes.

The game also reminded me of Tufekci’s talk (2017). She explains that we are no longer only being shown ads; we are being individually targeted through invisible systems. In User Inyerface, I could feel this kind of control. Every confusing choice and misleading button tested my patience. It changed my behavior without me noticing it. In this game, the line between participation and manipulation was never clearly drawn. Even the “Help” button gave no real help. It simply redirected me to the same page or showed how many people were in the waiting line. This experience taught me a lesson in how design can control behavior and emotion.

On the next page, there were several options, and I had to choose only three of them. However, all of them were already selected by default, so I had to deselect most of them before I could continue. Then, the game asked me to upload a photo. At that point, I started to think that this might be a fake or deceptive game. It was really “getting on my nerves.” It also seemed to be collecting information from users while wasting their time with confusing and pointless tasks. Each step looked like it was designed to confuse and tire the player with too many options, tricky questions, and reversed logic. My focus was getting weaker with every move, so I decided to stop and not continue the game. When I finally stopped, I realized that the game could not be finished. It reflected one aspect of digital life: it showed how technology captures attention and wastes time. As Harris (2017) warns, “Technology is not neutral. It is shaping our thoughts, feelings, and actions.” In my opinion, this game was a small version of the global digital reality.

In the end, User Inyerface taught me something important. It showed how social media design can exploit psychology. It also reminded me that our attention has become a valuable product, traded between advertisers and algorithms. As users and educators, we need to understand these patterns. Recognizing persuasive design is the first step to regaining control in a world that profits from distraction.

References

  • Brignull, H. (2011). Dark Patterns: Deception vs. Honesty in UI Design. A List Apart, 338.
  • Harris, T. (2017). How a handful of tech companies control billions of minds every day [TED Talk].
  • Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads [TED Talk].

Learning in Layers: Networking transforms Text, Culture, and Technology (Task 9)

In this project, made with Palladio, I created a network with two types of nodes: classmates and tracks (songs). Each line, or edge, shows that a classmate chose a song. Every line means one person picked that track. The size of each track node shows how many classmates selected that song. A larger node means more people liked it. All classmate nodes are the same size. In the top right corner, there is a small example: two people are both connected to Track 7, meaning they both chose the same song. In the top left corner, one person connects to only one song, Track 16. She was the only person who picked it.

In the middle of the image, there are many track nodes of different sizes. Bigger ones show songs that were chosen more often. This helps us see which songs were popular and which classmates share similar choices. Around the edges, people appear as nodes connecting to the tracks placed in the center of the page; they are linked to one another through the tracks they share. I did not give special weights to classmates. I wanted the focus to stay on how songs connect people through shared preferences. If I wanted to explore more, I could use network tools to see which songs or classmates are most connected. This could reveal small groups that enjoy similar music.
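The node-and-edge structure described above can be sketched in a few lines of code. This is only an illustration of the idea, not what Palladio does internally; the classmate names and track choices below are made up:

```python
from collections import Counter

# Bipartite network: each edge links a classmate to a track they chose.
# Names and choices are hypothetical, for illustration only.
edges = [
    ("Ana", "Track 7"), ("Ben", "Track 7"),     # two classmates share Track 7
    ("Cam", "Track 16"),                        # a track chosen by one person only
    ("Ana", "Track 13"), ("Ben", "Track 13"), ("Cam", "Track 13"),
]

# A track node's "size" is its degree: how many classmates selected it.
track_degree = Counter(track for _, track in edges)

# Classmate nodes all stay the same size; only the tracks are ranked.
for track, count in track_degree.most_common():
    print(f"{track}: chosen by {count} classmate(s)")
```

Counting edges per track node is exactly what makes the popular songs appear larger in the visualization, and two classmates who point at the same track node are indirectly linked through it.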

When I looked at the weighted tracks, I noticed some surprising results. For example, track 13 (a song from Peru) and track 17 (The Well-Tempered Clavier- Bach) were chosen by many classmates. They were selected more often than track 18 (Beethoven’s Symphony No. 5) or track 11 (The Magic Flute-Mozart). It was also interesting that track 2 (a song from Java) had one of the highest numbers of selections overall.

This shows something important. We all respect diversity, but our choices were very different from those made by NASA for its Golden Record fifty years ago. NASA’s selections reflected an Anglo-American and Euro-classical bias. Two pieces were by Beethoven, and most were from Western composers. This reflects what Williamson (2019) calls expert networks shaping cultural and technological narratives in education and media.

Today, after fifty years, the world has changed. Views about culture and diversity are much broader. The type of selection NASA made no longer feels fully fair or representative. What inspired me in my classmates’ network was that their choices naturally showed respect for diversity. Without aiming for perfect equality, they reached a kind of geographical and aesthetic balance. For example, the song from Peru received far more attention than expected. This revealed a wider and fairer idea of what beauty in music can mean. I believe this is a more truthful picture of our planet. If we ever send a new version of the Golden Record into space, it should truly represent all world cultures, not just a limited group of Western traditions.

This project also connects to graph-based text models. Each connection in a network carries meaning beyond words. It shows hidden structures of interpretation (Sánchez-Antonio et al., 2024). In my case, every edge connected not only a listener to a track but also one worldview to another. Together, these edges formed a cultural web, much like the semantic graphs used in natural-language processing. As Nabhan and Shaalan (2016) explain, “graph structures reveal larger patterns beyond simple text analysis.” My visualization showed similar results. It revealed how people connect through shared art and emotion.

So far, we have explored many ways to express and document information through images, emojis, hyperlinks, and voice recordings. Adding network visualization takes this further. We are not just documenting data; we are showing relationships. Gonçalves et al. (2022) noted that “graph representation refines understanding of complex data and reveals missing links between entities.” Networking lets us see connections between cultures more clearly. Each node can lead to another person, idea, or song. This shows how knowledge and culture intertwine in meaningful ways. This visualization also shows that all world cultures share a common origin in humanity. Networking helps us see that link clearly. Each line (edge) can lead to a new node, and each node connects us to another person, idea, or culture. In more complex networks, this method helps us reach multiple resources at the same time. We can explore and compare diverse information more efficiently. This allows technology to bring knowledge and cultures closer together.

When we use emojis, written text, or voice recordings, our communication usually happens in one dimension. It moves from one idea to the next. With hyperlinks, our view becomes two-dimensional. Clicking a link takes us to another page and opens a new layer of meaning. This made sharing and accessing information faster and easier. Networking, however, takes this even further. It creates a kind of three-dimensional world. In a network, we can see many connections at once. We do not move one step at a time. Instead, we can explore several links together, choose which ones to follow, and make thoughtful choices about where to go next. In a traditional hyperlink system, we must leave one page to reach another. But in a network visualization, we can see the whole structure at the same time. We can decide which paths matter and explore them freely.

The example we used here is simple, but the idea becomes much more powerful in complex social or cultural systems. Networking allows information to flow faster. It also helps us understand the relationships between ideas, systems, and cultures more clearly. This makes learning and knowledge-sharing more connected, inclusive, and dynamic.

Author’s Note: Because of my limited experience with Palladio, I could not use all its features in this project. I believe the software offers more tools and options than I explored. From my experience with other network visualization programs (network meta-analysis in health systems), I know that both node size and edge thickness can change. These settings help show different kinds of relationships. For example, the thickness of an edge can represent how much similarity or shared data exists between two nodes. In this project, the edge weight did not appear meaningful in my visualization, probably because I am still learning how to use Palladio.
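As a rough illustration of the edge-weight idea, one simple way to compute it is to count the tracks two curators share. This is a sketch under my own assumptions (hypothetical curator names and picks), not how Palladio calculates weights internally:

```python
# Sketch: the weight of a curator-to-curator edge = number of tracks
# they both selected. A thicker edge would mean a larger overlap.
# All names and picks below are hypothetical examples.
picks = {
    "Curator A": {"Flowing Streams", "Shakuhachi", "Percussion"},
    "Curator B": {"Shakuhachi", "Percussion", "Cavatina"},
    "Curator C": {"Cavatina"},
}

def edge_weight(a: str, b: str) -> int:
    """Shared-track count between two curators (set intersection)."""
    return len(picks[a] & picks[b])

print(edge_weight("Curator A", "Curator B"))  # 2 shared tracks
print(edge_weight("Curator A", "Curator C"))  # 0: no edge would be drawn
```

In a tool like Palladio, a zero-weight pair would simply have no connecting line, while heavier overlaps would appear as thicker or more central edges.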

References

Gonçalves, L. B., Nesic, I., Obradovic, M., Stieltjes, B., Weikert, T., & Bremerich, J. (2022). Natural language processing and graph theory: Making sense of imaging records in a novel representation frame. JMIR Medical Informatics, 10(12), e40534. https://doi.org/10.2196/40534

Nabhan, A. R., & Shaalan, K. (2016). A graph-based approach to text genre analysis. Computación y Sistemas, 20(3), 527–539. https://doi.org/10.13053/CyS-20-3-2471

Sánchez-Antonio, C., Valdez-Rodríguez, J. E., & Calvo, H. (2024). TTG-Text: A graph-based text representation framework enhanced by typical testors for improved classification. Mathematics, 12(22), 3576. https://doi.org/10.3390/math12223576

Williamson, B. (2019). New power networks in educational technology. Learning, Media and Technology, 44(4), 395–398. https://doi.org/10.1080/17439884.2019.1672724

Golden Record Curation (Task 8)

Music Without Borders

First, I categorized the 27 Golden Record tracks by continent. The tracks represent five continents, but not in proportion to their 1970s populations. Asia, with 56% of the world’s people, had only four tracks, while Europe, with 18%, had ten. This shows an imbalance between cultural representation and population size.

In my opinion, making a list of famous songs and sounds and sending them into space to show Earth’s cultures and languages is a great idea. Many of the songs came from famous composers of the last few centuries. That makes sense, but we should also remember that there were other artists, in places like India and China, who were just as talented. Some of them never became famous in Western countries because they lacked access to mass distribution media like radio, television, or the internet, yet in their own countries these artists are loved by millions or even billions of people. So, if the songs had been chosen based on population and cultural diversity, the selection would have been fairer. More people around the world could then feel connected to this project that represents Earth in space.

So, I decided to select ten pieces according to the largest cultural groupings in the world. This list reflects the five major cultural families: four pieces from Asia, two from Africa, two from Europe, one from the Americas, and one from Oceania. In this selection, not only were the most populous cultures represented (Asia and Africa), but also those with enduring musical heritage (Europe and the Americas). Oceania, despite its very small population, was also included due to the distinctiveness and authenticity of its cultural traditions.

When I chose my 10 favorite songs, I only looked at the 27 tracks we had. There might be other songs in the world that are even better. I picked the songs that sounded the nicest to me. Some of them were in languages I didn’t understand, but I still liked how they sounded.

My Top 10 Selections

For each song, I tried to write one reason why I chose it.

Asia

  • China – Flowing Streams: expresses appreciation for the magnificence of nature.
  • India – Jaat Kahan Ho: A golden voice calls across the sky with love and spirit. This sound is magic, expressing the feelings of the human heart in a very different way.
  • Japan – Shakuhachi: A beautiful song and wonderful instrument, producing a floating slice of life on Earth. It represents humanity.
  • Azerbaijan – Ugam: The Mugam of Azerbaijan is considered the mother music of the Persian Dastgah, the Raga of India, and the Arabic Mugham.

Africa

  • Senegal – Percussion: The drums beat strong: the heartbeat of the earth itself.
  • Zaire – Pygmy Girls: Bright voices rise together like sunshine.

Europe

  • Germany – Beethoven, String Quartet No. 13 (Cavatina): the absolute greatest string quartet ever composed. The melody speaks both sorrow and hope in every note.
  • Austria – Mozart, The Magic Flute: the textbook definition of music. It sparkles like stars: full of joy, wonder, and light.

Americas

  • USA – Dark Was the Night: A lonely song travels through darkness, carrying a small flame of hope.

Oceania

  • Australia – Aboriginal Songs: Beautiful, plain, and simple. Ancient voices sing to the sky and the land, keeping the stories alive.

The Story Behind My Bag – Audio Format (Task 7)

 

Reflection on Mode-Changing

Task 1, “What’s in My Bag,” first asked me to list the things I carry every day. The goal was to show why these items are important in my daily life. For Task 7, I redesigned the task as a short audio story. This fits the New London Group’s idea that learning should use many modes, not just writing (New London Group, 1996). For the redesign, I followed these steps: think about the purpose, change from text to audio, add a story voice, and reflect on the meaning of the items. This matches the multiliteracies idea that we should design and redesign meaning (Leander & Boldt, 2013).

Benefits. Audio is more exciting than writing a list. When you hear my voice, you notice tone, pauses, and rhythm. These make the story more alive. Lunsford (2006) explains that modern writing is often multimodal, so it can mix sound, image, and text. By telling my bag story with sound, I try to show that my objects are not just things, they are symbols of how I live, learn, and travel.

Challenges. But audio has problems too. It is harder to go back and find exact sentences or words, and highlighting or underlining key content is not possible in this mode. Some people prefer to read quickly instead of listening. Also, recordings require specialized tools and skills, like microphones and editing software.

In the end, changing the mode from text to audio showed me something new: literacy is about more than reading and writing on paper. It can be about using other forms, such as sound and images, to tell my stories.

References:

The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92.

Leander, K., & Boldt, G. (2013). Rereading “A pedagogy of multiliteracies”: Bodies, texts, and emergence. Journal of Literacy Research, 45(1), 22–46.

Lunsford, A. A. (2006). Writing, technologies, and the fifth canon. Computers and Composition, 23(2), 169–177.

From Hieroglyphs to Emojis (Task 6)

 

My film review in emojis language

Reflection on Emoji Translation

Translating a movie into emojis was both fun and challenging for me. Emojis are not the same as words. Sometimes one emoji can show a whole word, like  for “sea.” Other times, a group of emojis is needed to show an idea, like for “a teacher guiding a student.”

I did not begin with the title. A title can be symbolic and hard to capture in emojis, so I started with the plot. By working through the story events, I could build a set of emojis that matched the actions and feelings. Only later did I think about the title.

I picked one movie (story) with strong pictures in my mind. People’s feelings, nature, and actions are easier to translate than abstract ideas. The hardest part was choosing the emojis. Using too few emojis leaves out details, so I had to choose carefully to keep the story simple but clear. This process showed me that emojis are not just for fun. They can be a storytelling tool. Like hieroglyphs long ago, emojis might even grow into a new kind of writing.


From Hieroglyphs to Emojis

Long ago, people spoke with one another, but they needed to keep their ideas for the next generation. They began to draw shapes that looked like trees, animals, or simple lines. These shapes became symbols. Later, people created writing systems like cuneiform and hieroglyphs.

Over thousands of years, writing grew and changed. Alphabets were invented, and words were formed. People began to share ideas through books. Printing made it possible to copy many books at once. That is why today we can read text so easily.

In our time, something new is happening. With the start of the digital age, people began using small pictures, called emojis, in place of words. There are now thousands of emojis. They can show emotions, actions, or even whole ideas. Emojis appear in text messages, emails, and on social media every day. Another change is the use of abbreviations. Short forms like “OMG” or “LOL” can replace an entire word or even a full sentence. This makes writing fast, but it can also make communication less rich. Some people worry that this shift could weaken reading and writing skills. (1)

It seems this is the beginning of a new kind of writing. Just as hieroglyphs were once used to write stories and messages, emojis may one day become their own form of language. If that happens, the future may look back at our books the same way we look at ancient papyrus: needing experts to understand them.

In summary, taking a big-picture view, we may be at the start of a new method of writing. Emojis and digital code might become as important for our future as hieroglyphs once were for the past. (2)

 

References:

1. Luke, C. (2003). Pedagogy, Connectivity, Multimodality, and Interdisciplinarity. Reading Research Quarterly, 38(3), 397–403.

2. Hayles, N. K. (2003). Deeper into the Machine: The Future of Electronic Literature. Culture Machine, 5.

Creating a Story in Digital Time (Task 5)

Witness.html

When I created “The Witness Story” in Twine, I wanted readers to explore the truth from different sides, just like real investigators. Twine lets writers build hypertext stories1, where each part connects to others through links. Readers don’t need to move in a straight line; they can choose what to read next.

This idea comes from Nelson’s work on hypertext. He explained that digital writing connects information “like a web of ideas,” not like the one-way flow of a printed book (Nelson, 1999). When I linked different witness scenes together, I was following that same idea. I let readers jump between voices and form their own understanding.

In this story, each witness talks in a natural and conversational way, so readers feel as if they are hearing people speak rather than reading formal text (Ong, 2002). Twine, like other writing programs, helped me reorganize my thoughts (Heim, 1999). While revising my Twine story, I realized that the program didn’t just help me write; it helped me think. It showed me how stories and memories can change depending on the path we take.

References
Heim, M. (1999). Electric language: A philosophical study of word processing. Yale University Press.

Nelson, T. (1999). Xanalogical Structure, Needed Now More Than Ever. ACM Computing Surveys, 31(4).

Ong, W. J. (2002). Orality and Literacy: The Technologizing of the Word. Routledge.

 


1 Hypertext plays an important and beautiful role in how we read and share information today. It lets a story or idea unfold in many layers at the same time. People can click on a link to jump ahead to another part of the story or go back to reread something that wasn’t clear. Instead of reading in a straight line from start to finish, readers can move around freely.

You can imagine hypertext as something like the magic doors in “Alice in Wonderland,” where there are many doors and paths to take. Hyperlinks also work like a modern table of contents. They connect not just to chapters, but to exact ideas, examples, and definitions. This helps readers find information faster and understand topics from several angles. However, this freedom has a downside too. When readers keep clicking link after link, they might lose track of where they started, like being lost in a maze. So, hypertext is both a wonderful tool for discovery and a challenge that asks readers to stay focused.