
The Book of Stillness (Task 12)

Prompt: “Describe or narrate a scenario about a book found a decade into a future in which society as we know it has come apart. Your description should address issues related to the brain and elicit feelings of calm.”

Scenario (generated by Microsoft Copilot)

It was found in the hushed remains of a library, its roof long since surrendered to vines and sky. A decade had passed since the Collapse—when networks failed, cities emptied, and the noise of civilization dissolved into wind and birdsong. Amid the silence, a scavenger named Lira uncovered the book beneath a pile of moss-covered debris. Its cover was soft, bound in faded blue linen, the title barely legible: Neuroharmony: A Guide to Gentle Thought. She opened it with reverence. The pages, though yellowed, were intact. The text was handwritten, not printed—a rarity in the age before the Collapse. Each letter curved with care, as if the author had whispered their thoughts through ink. The book was not a manual, nor a manifesto. It was a meditation. Inside were reflections on the brain, not as a machine of logic or ambition, but as a garden. The author spoke of neural pathways as streams, of thoughts as leaves drifting gently across their surface. There were exercises, too: breathing with awareness of the occipital lobe, visualizing the hippocampus as a sanctuary of memory, tracing the rhythm of the prefrontal cortex like a slow drumbeat.

Lira read beneath the broken skylight, light dappling the pages. The world outside was quiet, but inside her mind had always been loud echoes of panic, fragments of the old world’s urgency. Yet as she turned each page, a stillness settled in. The book did not demand understanding; it invited presence. One passage described the amygdala as a frightened animal, easily startled but capable of trust. “Speak to it gently,” the author wrote. “Let it know the danger has passed.” Lira closed her eyes and imagined her own amygdala curled in a nest, breathing slowly. She felt her pulse soften.

The book had no author’s name, no publication date. It seemed to exist outside time, as if it had been waiting for the world to quiet down enough to be heard. In the margins were notes, different handwriting, perhaps from past readers. “This helped me sleep,” one said. Another: “I read this during the last blackout. It saved me.” Lira carried the book with her for weeks, reading it aloud to others in the scattered settlements. People gathered around fires, listening to descriptions of the brain as a forest, a temple, a tide. They did not ask for proof. They only listened and breathed. In a world stripped of screens and sirens, the book became a kind of medicine. Not to fix, but to soothe. It reminded them that even in collapse, the brain could be a place of peace. That thought itself could be gentle. Eventually, the book was placed in a new library, a quiet room built from reclaimed wood and stone. It sat on a cushion, open to a page that read: “You are not broken. You are unfolding.” Visitors came not to study, but to sit. And in that stillness, the world began to heal.

Critical Reflection on “The Book of Stillness”

Introduction: For Task 12, I did not use only one prompt. I tried several different prompts because I wanted to see how the AI changes its style for different topics, and whether it can stay clear and logical in every case. Then I asked Copilot another question: what sources, data, and internal tools does it use to write these scenarios? It answered that it relies on training data, language models, and patterns in words, and that it uses mathematical probabilities to decide what to write next. This showed me something important: AI does not create ideas the way humans do. It builds text from patterns, not feelings. That was the most important part for me. The results surprised me. The AI wrote long, clean scenarios in a few seconds. The grammar was correct, the tone was smooth, and some parts even sounded emotional. But this also worried me. I asked myself: what happens when machines become the main way we create meaning and imagination? Will people spend more time with machines than with other humans? Because of this, I read “The Book of Stillness” in a different way. I did not read it as a companion text; I read it as a contrast. It is a human story, but we are moving into a world shaped by machines.

My Reflection

“The Book of Stillness” shows a future after a major collapse. In that future, people find calm again. They read together, breathe together, and help each other. The scenario says that humans find meaning through other humans. But today, this idea feels old, like a memory from a world we may be losing. Dunne and Raby say that future stories are not predictions; they are tools that help us see our world more clearly. “The Book of Stillness” does this. It shows what we may lose. In the scenario, the mind is like a garden that grows through human presence. Today, our world is different. We spend more time with machines, and machines are smart but do not feel. Shannon Vallor says that AI is “machine-augmented thinking.” It copies our patterns but does not share our emotions. Machines answer fast and know many things, but they cannot fear, love, or trust. Humans feel these things; machines do not. This makes the relationship unequal. The problem also affects society. Vallor warns that AI can make inequality and confusion worse and can split people apart. Because of this, the calm and shared world in the scenario is hard to imagine today. People may stop reading together. They may stay alone with their screens and live in private digital spaces that feel like communication but are not real connection.

Leah Price studied the history of books. She says technology has changed reading many times, but the danger today is deeper. Books will not disappear, but our shared understanding may. Without a shared mental world, even a peaceful book like The Book of Stillness loses its power. In the end, the world in the scenario feels like a reminder of a time when humans understood each other. Today, machines stand between us, and this may break our shared language, our shared feelings, and our shared goals.

References:

Dunne, A., & Raby, F. (2013). Speculative Everything: Design, Fiction, and Social Dreaming. MIT Press.

Vallor, S. (2018, November 6). Lessons from the AI Mirror [Video]. Santa Clara University. YouTube.

Price, L. (2019, September 17). Books Won’t Die. The Paris Review.

Fairness, Fear, and Facts: A Student’s Look at AI in Justice (Task 11)

Participating in the Detain/Release simulation was both interesting and emotional for me. It reminded me of the movie Twelve Angry Men. In that film, people must make a fair decision, and they need more than facts. They also need kindness, careful thinking, and the ability to question things (Lumet, 1957).

In the simulation, I did not discuss anything with others. Instead, I had to look at computer-made risk scores. These scores looked scientific and very official. But they were still created by algorithms. O’Neil explains that these systems can repeat unfair patterns from the past even when they look mathematical and neutral (O’Neil, 2016).

As I read each case, I felt pulled in two directions. One direction was the personal stories of the defendants. These stories made me feel empathy and made me want to give them a chance. The other direction was the algorithm’s risk numbers. The Guardian warns that algorithms can “tell lies” when their data is biased or incomplete (O’Neil, 2017). I noticed this when the “high-risk” label made me ignore what the defendant was saying.

Shannon Vallor talks about the “AI mirror.” She says that AI tools copy simple versions of human judgment. They look confident, but they hide their limits (Vallor, 2018). This made sense to me. The colorful bars and charts in the simulation felt very convincing, even when the real situation was more complicated.

I also felt pressure from things like jail capacity and public fear. These forces were not about the defendant, but they still affected my decisions. This showed me that algorithms do not work alone. They are connected to human feelings and rules.

In the end, the simulation taught me something important: AI can help us, but it should not replace human judgment. Fair decisions need both data and human empathy.

References:

Lumet, S. (Director). (1957). Twelve Angry Men [Film]. Orion-Nova Productions.

O’Neil, C. (2016). Weapons of Math Destruction. Talks at Google. [Video]. YouTube.

O’Neil, C. (2017). How can we stop algorithms telling lies? The Guardian.

Vallor, S. (2018). Lessons from the AI Mirror. Santa Clara University. [Video]. YouTube.

Click. Confuse. Continue: A Game That Plays You (Task 10 – Attention Economy)

When I started Task 10 and opened the User Inyerface game, I thought it would be a logical challenge. I expected a puzzle that needed reasoning and planning. Very soon, I realized it was not about problem-solving at all. It was about manipulation through design.

From the first screen, User Inyerface confused me. A large button asked for consent, but instead of “Yes,” it said “No.” Below it, a line read “click HERE to continue.” The word “click” was underlined, and the word “HERE,” written in capital letters, looked clickable, but neither was. I moved the cursor and clicked several times before anything finally worked. That moment made me understand the game’s true purpose. It was not built to guide or help; it was designed to trick and frustrate the player.

As I continued, every new step created more confusion. The password rules were written in tiny green letters on a blue background, which made them very hard to read. The password and email boxes would not clear properly; the placeholder text “Password” or “Your email” stayed there while I typed. Then the message “Time is ticking” appeared, adding more stress. I tried several times, but the page froze repeatedly. At one point, a box said, “If you do not accept the terms and conditions, check this box.” Only after I unchecked it was I allowed to continue.

These patterns reminded me of Brignull’s idea of “dark patterns.” He defines them as “user interfaces carefully designed to trick users into doing things they might not otherwise do.” They make us act against our real intentions. The User Inyerface game uses confusion, color, and timing to manipulate; it teaches frustration, not skill.

While playing, I thought about how this reflects real digital life. Harris (2017) explains that all digital platforms compete in a “race for our attention.” Their main goal is not to inform or empower us; technology is made to pull us in and keep us engaged. Notifications, autoplay videos, and pop-ups all serve this purpose. That is exactly what happened to me in this game. Every click, pause, or delay pulled me back in. I was trapped in the cycle of reaction and effort that Harris describes.

The game also reminded me of Tufekci’s talk (2017). She explains that we are no longer just being shown ads; we are being individually targeted through invisible systems. In User Inyerface, I could feel this kind of control. Every confusing choice and misleading button tested my patience and changed my behavior without me noticing it. In this game, the line between participation and manipulation was blurred. Even the “Help” button gave no real help; it simply redirected me to the same page or showed how many people were ahead of me on the waiting list. This experience taught me a lesson in how design can control behavior and emotion.

On the next page, there were several options, and I had to choose only three of them. However, all of them were already selected by default, so I had to deselect most of them before I could continue. Then the game asked me to upload a photo. At that point, I started to think that this might be a fake or deceptive game. It was really “getting on my nerves.” It also seemed to be collecting information from users while wasting their time with confusing and pointless tasks. Each step looked like it was designed to confuse and tire the player with too many options, tricky questions, and reversed logic. My focus was getting weaker with every move, so I decided to stop and not continue the game. When I finally stopped, I realized that the game could not really be finished. It was one aspect of digital life: it showed how technology captures attention and wastes time. As Harris (2017) warns, “Technology is not neutral. It is shaping our thoughts, feelings, and actions.” In my opinion, this game was a small version of the global digital reality.

In the end, User Inyerface taught me something important. It showed how social media design can exploit psychology. It also reminded me that our attention has become a valuable product, traded between advertisers and algorithms. As users and educators, we need to understand these patterns. Recognizing persuasive design is the first step to regaining control in a world that profits from distraction.

References

  • Brignull, H. (2011). Dark Patterns: Deception vs. Honesty in UI Design. A List Apart, 338.
  • Harris, T. (2017). How a handful of tech companies control billions of minds every day [TED Talk].
  • Tufekci, Z. (2017). We’re building a dystopia just to make people click on ads [TED Talk].

Learning in Layers: Networking Transforms Text, Culture, and Technology (Task 9)

In this project, made with Palladio, I created a network with two types of nodes: classmates and tracks (songs). Each line, or edge, shows that a classmate chose a song, so every edge represents one person picking one track. The size of each track node shows how many classmates selected that song; a larger node means more people liked it. All classmate nodes are the same size. In the top right corner there is a small example: two people are both connected to Track 7, which means they chose the same song. In the top left corner, one person connects to only one song, Track 16. She was the only person who picked it.

In the middle of the image, there are many track nodes with different sizes. Bigger ones show songs that were chosen more often. This helps us see which songs were popular and which classmates share similar choices. Around the edges, classmates appear as nodes connecting to the tracks placed in the center of the page, so people seem linked to one another through the tracks. I did not give special weights to classmates; I wanted the focus to stay on how songs connect people through shared preferences. If I wanted to explore more, I could use network tools to see which songs or classmates are most connected (a small sketch of this idea follows below). This could reveal small groups that enjoy similar music.
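To make that idea concrete, here is a minimal sketch, assuming invented example data and the Python networkx library (this is not part of the Palladio project itself), of how a classmate–track network like mine could be built and how track popularity could be read off as node degree.

```python
# Minimal sketch of a classmate-track network, using invented example data.
# A track's "size" in my visualization corresponds to its degree here:
# the number of classmates who chose it.
import networkx as nx

# Hypothetical selections: each classmate lists the tracks they picked.
selections = {
    "Classmate A": ["Track 7", "Track 13"],
    "Classmate B": ["Track 7", "Track 17"],
    "Classmate C": ["Track 16"],
}

G = nx.Graph()
for person, tracks in selections.items():
    G.add_node(person, kind="classmate")
    for track in tracks:
        G.add_node(track, kind="track")
        G.add_edge(person, track)  # one edge per choice

# Track popularity = node degree (how many classmates chose that track).
track_degrees = {
    node: G.degree(node)
    for node, data in G.nodes(data=True)
    if data["kind"] == "track"
}
print(sorted(track_degrees.items(), key=lambda item: item[1], reverse=True))
```

Sorting tracks by degree answers in code the same question the node sizes answer visually: which songs were chosen most often.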

When I looked at the weighted tracks, I noticed some surprising results. For example, track 13 (a song from Peru) and track 17 (Bach, The Well-Tempered Clavier) were chosen by many classmates. They were selected more often than track 18 (Beethoven’s Symphony No. 5) or track 11 (Mozart, The Magic Flute). It was also interesting that track 2 (a song from Java) had one of the highest numbers of selections overall.

This shows something important. We all respect diversity, but our choices were very different from those made by NASA for its Golden Record fifty years ago. NASA’s selections reflected an Anglo-American and Euro-classical bias. Two pieces were by Beethoven, and most were from Western composers. This reflects what Williamson (2019) calls expert networks shaping cultural and technological narratives in education and media.

Today, after fifty years, the world has changed. Views about culture and diversity are much broader. The type of selection NASA made no longer feels fully fair or representative. What inspired me in my classmates’ network was that their choices naturally showed respect for diversity. Without aiming for perfect equality, they reached a kind of geographical and aesthetic balance. For example, the song from Peru received far more attention than expected. This revealed a wider and fairer idea of what beauty in music can mean. I believe this is a more truthful picture of our planet. If we ever send a new version of the Golden Record into space, it should truly represent all world cultures, not just a limited group of Western traditions.

This project also connects to graph-based text models. Each connection in a network carries meaning beyond words; it shows hidden structures of interpretation (Sánchez-Antonio et al., 2024). In my case, every edge connected not only a listener to a track but also one worldview to another. Together, these edges formed a cultural web, much like the semantic graphs used in natural-language processing. As Nabhan and Shaalan (2016) explain, “graph structures reveal larger patterns beyond simple text analysis.” My visualization showed something similar: it revealed how people connect through shared art and emotion.

So far, we have explored many ways to express and document information: images, emojis, hyperlinks, and voice recordings. Adding network visualization takes this further. We are not just documenting data; we are showing relationships. Gonçalves et al. (2022) noted that “graph representation refines understanding of complex data and reveals missing links between entities.” Networking lets us see connections between cultures more clearly: each line (edge) can lead to a new node, and each node connects us to another person, idea, or song. This shows how knowledge and culture intertwine in meaningful ways. The visualization also suggests that all world cultures share a common origin in humanity, and networking helps us see that link clearly. In more complex networks, this method lets us reach multiple resources at the same time, so we can explore and compare diverse information more efficiently. This allows technology to bring knowledge and cultures closer together.

When we use emojis, written text, or voice recordings, our communication usually happens in one dimension. It moves from one idea to the next. With hyperlinks, our view becomes two-dimensional. Clicking a link takes us to another page and opens a new layer of meaning. This made sharing and accessing information faster and easier. Networking, however, takes this even further. It creates a kind of three-dimensional world. In a network, we can see many connections at once. We do not move one step at a time. Instead, we can explore several links together, choose which ones to follow, and make thoughtful choices about where to go next. In a traditional hyperlink system, we must leave one page to reach another. But in a network visualization, we can see the whole structure at the same time. We can decide which paths matter and explore them freely.

The example we used here is simple, but the idea becomes much more powerful in complex social or cultural systems. Networking allows information to flow faster. It also helps us understand the relationships between ideas, systems, and cultures more clearly. This makes learning and knowledge-sharing more connected, inclusive, and dynamic.

Author’s Note: Because of my limited experience with Palladio, I could not use all its features in this project. I believe the software offers more tools and options than I explored. From my experience with other network visualization programs (network meta-analysis in health research), I know that both node size and edge thickness can be adjusted, and these settings help show different kinds of relationships. For example, the thickness of an edge can represent how much similarity or shared data exists between two nodes or two data points. In this project, the edge weight did not appear meaningful in my visualization, probably because I am still learning how to use Palladio.

References

Gonçalves, L. B., Nesic, I., Obradovic, M., Stieltjes, B., Weikert, T., & Bremerich, J. (2022). Natural language processing and graph theory: Making sense of imaging records in a novel representation frame. JMIR Medical Informatics, 10(12), e40534. https://doi.org/10.2196/40534

Nabhan, A. R., & Shaalan, K. (2016). A graph-based approach to text genre analysis. Computación y Sistemas, 20(3), 527–539. https://doi.org/10.13053/CyS-20-3-2471

Sánchez-Antonio, C., Valdez-Rodríguez, J. E., & Calvo, H. (2024). TTG-Text: A graph-based text representation framework enhanced by typical testors for improved classification. Mathematics, 12(22), 3576. https://doi.org/10.3390/math12223576

Williamson, B. (2019). New power networks in educational technology. Learning, Media and Technology, 44(4), 395–398. https://doi.org/10.1080/17439884.2019.1672724

Golden Record Curation (Task 8)

Music Without Borders

First, I categorized the 27 Golden Record tracks by continent. The tracks represent five continents, but not in proportion to their 1970s populations. Asia, with 56% of the world’s people, had only four tracks, while Europe, with 18%, had ten. This shows an imbalance between cultural representation and population size.

In my opinion, making a list of famous songs and sounds and sending them into space to show Earth’s cultures and languages is a great idea. Many of the songs came from famous composers of the last few centuries. That makes sense, but we should also remember that there were other artists, in places like India and China, who were just as talented. Some of them did not become famous in Western countries because they did not have access to global distribution media like radio, television, or the internet. In their own countries, these artists are loved by millions or even billions of people. So, if the songs had been chosen based on population and cultural diversity, it would have been fairer. This way, more people around the world could feel connected to this project that represents Earth in space.

So, I decided to select ten pieces according to the largest cultural groupings in the world. This list reflects the five major cultural families: four pieces from Asia, two from Africa, two from Europe, one from the Americas, and one from Oceania. In this selection, not only were the most populous cultures represented (Asia and Africa), but also those with enduring musical heritage (Europe and the Americas). Oceania, despite its very small population, was also included due to the distinctiveness and authenticity of its cultural traditions.

When I chose my 10 favorite songs, I only looked at the 27 tracks we had. There might be other songs in the world that are even better. I picked the songs that sounded the nicest to me. Some of them were in languages I didn’t understand, but I still liked how they sounded.

My Top 10 Selections

For each song, I tried to write one reason why I chose it.

Asia

  • China – Flowing Streams: Expresses appreciation for the magnificence of nature.
  • India – Jaat Kahan Ho: A golden voice calls across the sky with love and spirit. This sound is magic, expressing the feelings of the human heart in a very different way.
  • Japan – Shakuhachi: A beautiful song and a wonderful instrument, producing a floating slice of life on earth. It represents humanity.
  • Azerbaijan – Ugam: The mugam of Azerbaijan is considered the mother music of the “Persian Dastgah,” the “Raga of India,” and the “Arabic Maqam.”

Africa

  • Senegal – Percussion: The drums beat strong: the heartbeat of the earth itself.
  • Zaire – Pygmy Girls: Bright voices rise together like sunshine.

Europe

  • Germany – Beethoven, String Quartet No. 13 (Cavatina): The absolute greatest string quartet ever composed. The melody speaks both sorrow and hope in every note.
  • Austria – Mozart, The Magic Flute: The textbook definition of music. It sparkles like stars: full of joy, wonder, and light.

Americas

  • USA – Dark Was the Night: A lonely song travels through darkness, carrying a small flame of hope.

Oceania

  • Australia – Aboriginal Songs: Beautiful, plain, and simple. Ancient voices sing to the sky and the land, keeping the stories alive.

The Story Behind My Bag – Audio Format (Task 7)

 

Reflection on Mode-Changing

Task 1, “What’s in my bag,” first asked me to list the things I carry every day. The goal was to show why these items are important in my daily life. For Task 7, I redesigned the task as a short audio story. This fits the New London Group’s idea that learning should use many modes, not just writing (New London Group, 1996). For the redesign, I followed these steps: think about the purpose, change from text to audio, add a story voice, and reflect on the meaning of the items. This matches the multiliteracies idea that we should design and redesign meaning (Leander & Boldt, 2013).

Benefits. Audio is more exciting than writing a list. When you hear my voice, you notice tone, pauses, and rhythm, and these make the story more alive. Lunsford (2006) explains that modern writing is often multimodal, so it can mix sound, image, and text. By telling my bag story with sound, I tried to show that my objects are not just things; they are symbols of how I live, learn, and travel.

Challenges. But audio has problems too. It is harder to go back and find exact sentences or words. In this mode, highlighting or underlining key content is not possible. Some people prefer to read quickly instead of listening. Also, recordings need some complicated tools and skills, like microphones or editing software.

In the end, changing the mode from text to audio showed me something new: literacy is about more than reading and writing on paper. It can be about using other forms, such as sound and images, to tell my stories.

References:

The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92.

Leander, K., & Boldt, G. (2013). Rereading “A pedagogy of multiliteracies”: Bodies, texts, and emergence. Journal of Literacy Research, 45(1), 22–46.

Lunsford, A. A. (2006). Writing, technologies, and the fifth canon. Computers and Composition, 23(2), 169–177.

From Hieroglyphs to Emojis (Task 6)

 

My film review in emoji language

Reflection on Emoji Translation

Translating a movie into emojis was both fun and challenging for me. Emojis are not the same as words. Sometimes one emoji can show a whole word, like the one I used for “sea.” Other times, a group of emojis is needed to show an idea, such as “a teacher guiding a student.”

I did not begin with the title, because a title can be symbolic and hard to capture in emojis. I started with the plot. By working through the story events, I could build a set of emojis that matched the actions and feelings. Only later did I think about the title.

I picked one movie (story) that created strong pictures in my mind. People’s feelings, nature, and action are easier to translate than abstract ideas. The hardest part was choosing the emojis. Using too few emojis leaves out some details, so I had to choose carefully to keep the story simple but clear. This process showed me that emojis are not just for fun. They can be a storytelling tool. Like hieroglyphs long ago, emojis might even grow into a new kind of writing.


From Hieroglyphs to Emojis

Long ago, people spoke with one another, but they needed to keep their ideas for the next generation. They began to draw shapes that looked like trees, animals, or simple lines. These shapes became symbols. Later, people created writing systems like cuneiform and hieroglyphs.

Over thousands of years, writing grew and changed. Alphabets were invented, and words were formed. People began to share ideas through books. Printing made it possible to copy many books at once. That is why today we can read text so easily.

In our time, something new is happening. With the start of the digital age, people began using small pictures, called emojis, in place of words. There are now thousands of emojis. They can show emotions, actions, or even whole ideas. Emojis appear in text messages, emails, and on social media every day. Another change is the use of abbreviations. Short forms like “OMG” or “LOL” can replace an entire word or even a full sentence. This makes writing fast, but it can also make communication less rich. Some people worry that this shift could weaken reading and writing skills (1).

It seems this is the beginning of a new kind of writing. Just as hieroglyphs were once used to write stories and messages, emojis may one day become their own form of language. If that happens, the future may look back at our books the same way we look at ancient papyrus: needing experts to understand them.

In summary, taking a big-picture view, we may be at the start of a new method of writing. Emojis and digital code might become as important for our future as hieroglyphs once were for the past (2).

 

References:

1. Luke, C. (2003). Pedagogy, Connectivity, Multimodality, and Interdisciplinarity. Reading Research Quarterly, 38(3), 397–403.

2. Hayles, N. K. (2003). Deeper into the Machine: The Future of Electronic Literature. Culture Machine, 5.

Creating a Story in Digital Time (Task 5)

Witness.html

When I created “The Witness Story” in Twine, I wanted readers to explore the truth from different sides, just like real investigators. Twine lets writers build hypertext stories1, where each part connects to others through links. Readers don’t need to move in a straight line; they can choose what to read next.

This idea comes from Nelson’s work on hypertext. He explained that digital writing connects information “like a web of ideas,” not like the one-way flow of a printed book (Nelson, 1999). When I linked different witness scenes together, I was following that same idea. I let readers jump between voices and form their own understanding.

In this story, each witness talks in a natural and conversational way, so readers feel as if they are hearing people speak rather than reading formal text (Ong, 2002). This program, like other writing software, also helps me reorganize my thoughts (Heim, 1999). While revising my Twine story, I realized that the program didn’t just help me write; it helped me think. It showed me how stories and memories can change depending on the path we take.

References
Heim, M. (1999). Electric language: A philosophical study of word processing. Yale University Press.

Nelson, T. (1999). Xanalogical Structure, Needed Now More Than Ever. ACM Computing Surveys, 31(4).

Ong, W. J. (2002). Orality and Literacy: The Technologizing of the Word. Routledge.

 


1 Hypertext plays an important and beautiful role in how we read and share information today. It lets a story or idea unfold in many layers at the same time. People can click on a link to jump ahead to another part of the story or go back to reread something that wasn’t clear. Instead of reading in a straight line from start to finish, readers can move around freely.

You can imagine hypertext like the magic doors in “Alice in Wonderland,” where there are many doors and paths to take. Hyperlinks also work like a modern table of contents. They connect not just to chapters, but to exact ideas, examples, and definitions. This helps readers find information faster and understand topics from several angles. However, this freedom has a downside too. When readers keep clicking link after link, they might lose track of where they started, like being lost in a maze. So, hypertext is both a wonderful tool for discovery and a challenge that asks readers to stay focused.
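As a minimal illustration of the structure behind this footnote, the sketch below (in Python, with invented passage names and text rather than the actual content of Witness.html) treats a hypertext story as a small graph of passages, where each passage stores its text and the names of the passages it links to.

```python
# Minimal sketch of a hypertext story as a graph of passages.
# Passage names and text are invented for illustration, not taken from Witness.html.
passages = {
    "Start": {
        "text": "A crime has occurred. Who do you question first?",
        "links": ["Witness 1", "Witness 2"],
    },
    "Witness 1": {
        "text": "I saw a figure run toward the harbour.",
        "links": ["Witness 2", "Start"],
    },
    "Witness 2": {
        "text": "It was too dark to see anything clearly.",
        "links": ["Start"],
    },
}

def read(passage_name: str) -> None:
    """Print one passage and the choices a reader could follow next."""
    passage = passages[passage_name]
    print(passage["text"])
    for link in passage["links"]:
        print(f"  -> {link}")

# Unlike a printed page, the reader chooses which passage to open next.
read("Start")
read("Witness 1")
```

Because every passage lists its own outgoing links, a reader can move through the story in any order, which is exactly the non-linear reading this footnote describes.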

 

 

 

Writing by Hand in a Digital Age (Task 4)

 

Here are two samples of my handwriting in English and Persian.

I usually, or perhaps I should say most of the time now, type instead of writing by hand. Especially after immigrating, my habit of using a pen has become very rare. I notice this most clearly in my personal diary. Nowadays, handwriting has become difficult for me. Holding a pen and moving it across the paper feels strange and tiring. I also find that I can no longer form letters as beautifully as before. If you look at a page from my diary (Persian one), you will see how my handwriting has lost its neatness and flow. Even though I do not type with the full ten finger method, I still rely on typing almost every day. My typing is not professional, but it works. I am forced to depend on typing because almost everything I need to prepare for assignments, proposals, or documents to be sent, must be submitted in Word or PDF format. Because of this, typing has become the natural and mandatory choice for me, while handwriting has become more of a rare, nostalgic practice.

When I write in my diary, if I make a small mistake, I usually just ignore it and continue writing. But if the mistake changes the meaning of the sentence, I have no choice but to correct it. I cannot erase the word. Instead, I cross it out with a line and write the correct word above or below it. Editing by hand is therefore more manual and visible, and the corrections themselves become part of the page (Clement, 1997).

When I type, the process is very different. I use Word software that has autocorrect and spell-check features. This can be very helpful. However, it can sometimes create new mistakes. For example, if I am not careful, the autocorrect feature may change a word into something completely different, with a different meaning (Lekamge & Smith, 2025).

Because of this, I think the medium plays a very important role in how I edit my work. With handwriting, I am responsible for finding and correcting every mistake myself, and my corrections stay visible on the page. With typing, the computer helps me, but it also sometimes makes errors that I must fix. In both cases, editing affects the overall beauty and clarity of the text, but in different ways (Lamb & McCormick, 2020, May 26).

From my perspective, the most important difference between handwriting and mechanized writing is the way the flow of thought is affected. When I type, the presence of autocorrect sometimes interrupts the rhythm of my ideas. The software may change or suggest something that alters the structure of the sentence. This can break the natural stream of my thinking and shift my focus from meaning to correction. When I write by hand, even though it feels harder for me now, my thoughts and my hand move together. I do not worry too much in the moment about whether a word is perfectly correct; instead, the fluency of the sentence and the clarity of the idea show themselves more naturally. With practice, handwriting can also be very enjoyable because of the beauty of the script itself (Kiefer et al., 2015).

I also feel that handwriting leaves a kind of trace of my learning. The small corrections and crossed-out words remain visible on the page, like a memory of my past mistakes. When I look back, I can see where I wrote a wrong word or made a grammatical error, and it reminds me that I am improving. With typing, this does not happen. Because of this, I personally prefer handwriting, since it carries both aesthetic value and a visible record of growth, even though typing is more practical in my daily work.

References:

Clement, R. W. (1997). Medieval and Renaissance Book Production. Utah State University.

Kiefer, M., Schuler, S., Mayer, C., Trumpp, N. M., Hille, K., & Sachse, S. (2015). Handwriting or Typewriting? The Influence of Pen- or Keyboard-Based Writing Training on Reading and Writing Performance in Preschool Children. Adv Cogn Psychol, 11(4), 136-146. https://doi.org/10.5709/acp-0178-7

Lamb, R., & McCormick, J. (2020, May 26). From the vault: Invention of the book, part 1 & 2.

Lekamge, R., & Smith, C. (2025). Impact of Auto-Correction Features in Text-Processing Software on the Academic Writing of ESL Learners. English Language Overseas Perspectives and Enquiries, 22(1), 69-91.

 

My Journey with a Voice-to-Text Story (Task 3)

This week, for maybe the second or third time, I tried using apps that change voice to text (https://speechnotes.co/). Because of this, I decided to share a memory about the first book I prepared and published about thirty years ago. I read the story in simple English into the app. Of course, English is my second language, so some of my pronunciations may not have been clear. The result was very surprising for me. Below, you can see the text that the app created. After that, I will share my thoughts about the text and how the app changed my words. At the end of my post, I will also include the edited version so you can compare and see the differences.

                 

The voice-to-text story feels different from normal written English. It sounds more like someone talking than like something written down. The sentences are long and run together without punctuation, so it’s hard to tell where one idea ends and another begins. Sometimes small words like “the” are missing. There are also grammar mistakes and odd word choices. For example, “library well very strict with big box light Britannica” or “many of their Ray sources” don’t make sense. These are mistakes from the speech-to-text program. In proper writing, sentences would be shorter, the verb tense would stay the same, and the words would be clearer.

Still, the text is not all wrong. There are problems like run-on sentences, wrong verb forms, misheard words, missing punctuation, and confusing sections. It doesn’t show the tone of voice or body language you’d expect when someone is speaking. But the story itself makes sense. It flows in a way that feels natural. It shows feelings of nostalgia, effort, and pride. The voice feels personal and conversational, which makes it sound like a real person telling their own story. Most importantly, the main ideas still come through. We can see how hard research was before the internet and how much patience and passion it took.

The most common mistakes are easy to spot. The sentences run on because I don’t talk in neat, written chunks. Misheard words happen because the app guesses at sounds and sometimes gets them wrong. Missing words show up because the app often skips little words like “the” or “to.” Verb tense changes appear because I jump between past and present without noticing. These are considered mistakes because they don’t follow the rules of written English, which makes the text harder to read and less exact.

If the story had been scripted, it would look very different. In my edited version, I made it shorter and easier to read, with correct grammar, punctuation, and spelling. I tried to stick to the main points, like the libraries, the copying rules, the translations, and the handwritten drafts. There also wouldn’t be as much repetition or side-tracking. But something would be lost, too. The natural emotion of the spoken version might disappear. Oral storytelling has a kind of life and feeling that a scripted version can’t always capture.

This shows how oral storytelling is not the same as written storytelling. Oral stories feel alive because they use tone, pauses, and even body language to make them real. They can wander a little, repeat things, or change tense, and that’s okay in speech. Written stories are different. They need structure, spelling, punctuation, and grammar to make sense. They aim to be clear because the reader can’t hear the voice or see the gestures. Oral storytelling connects with people in the moment. Written storytelling is built to last.