For this activity, I used Sora to generate three images based on memories from Japan.
Picture 1:
My first prompt described “a realistic nighttime cityscape viewed from the 20th floor of a building in Umeda, Osaka, Japan, facing south at 10 p.m.”
The result felt surprisingly accurate and deeply nostalgic. My prompt wasn’t very detailed, but Sora captured the atmosphere well. The glow of the city, the quiet hum below, and that reflective feeling of looking out over Osaka at night all felt quite familiar.
Picture 2:
The second image, “an underwater view of a family of four snorkeling near the Blue Cave in Okinawa,” didn’t turn out the way I imagined.

It reminded me of how specific I needed to be when describing what I wanted. The people looked a little strange, and the faces had that odd AI look. It also added things I hadn’t pictured, like bright life jackets underwater. Wouldn’t those be floating? I realized that Sora wasn’t really recreating my memory; it was guessing what a “snorkeling family” might look like based on other examples. Next time, I’d add more details to guide it better.
Picture 3:
My final prompt was “a young Japanese woman riding her mamachari bicycle with a salmon-colored dog backpack, along a street lined with cherry blossoms and Osaka Castle in the distance. The dog is a long-haired Chihuahua and is poking its head out of the back.” This one came out beautifully.

I spent more time crafting the description, applying what I had learned from the earlier attempts, and the result closely matched my memory.
Reflection:
Overall, I noticed how the level of detail in my language directly shaped the quality of Sora’s output. It almost felt like collaborating with memory itself: rebuilding moments that once existed, but now filtered through AI’s imagination. It reminded me of Cathy O’Neil’s article “Justice in the Age of Big Data,” where she describes how algorithms reflect and amplify the data they’re given. Like predictive models that unintentionally learn human bias, Sora’s interpretations depend on what I provide. In both cases, technology doesn’t just process information; it mirrors our choices, our gaps, and sometimes our misunderstandings.
References:
O’Neil, C. (2017, April 6). Justice in the age of big data. TED. Retrieved August 12, 2022.
OpenAI. (2025). Sora [Generative AI text-to-video model]. https://openai.com/sora