
Task 12: Speculative Futures

Microsoft Copilot Generated Scenario

Prompt used: Describe a corporation found a century into a future in which society as we know it has come apart. Address issues related to class and elicit feelings of anger.

Scenario: Horizon Resource Dynamics, 2125

In the year 2125, long after traditional governments fractured under climate migration and economic collapse, Horizon Resource Dynamics presents itself as humanity’s “stabilizing force.” In reality, the corporation functions as the world’s de facto ruling class, operating through a network of privatized security forces, patented food systems, and vertically integrated housing enclaves. Horizon’s corporate slogan—“We provide what remains”—is displayed in every settlement that still has electricity.

The corporation’s origins were modest: a logistics company specializing in water purification. But when state infrastructures failed, Horizon purchased entire regions outright, including reservoirs, fertile land, and the digital backbone that once supported public communication networks. With these acquisitions came power—quiet at first, then absolute.

Horizon operates through a three-tiered class system that it insists is “merit-based,” though no one can name a single case of mobility. At the top are Asset Holders, a fraction of the population who own shares in Horizon’s terraforming and bio-engineering ventures. They live in sealed climate-controlled towers called Vertical Havens, complete with imported air, synthetic gardens, and drone-delivered meals. Though Horizon claims these towers are “model communities,” no one outside the Holder class has seen their interiors. Unauthorized viewing is considered a breach of bio-security.

Beneath them are Contributors, the trained specialists who maintain Horizon’s infrastructure. They are granted limited healthcare, sterilized water rations, and housing in modular blocks known as Function Settlements. Contributors sign perpetual labor contracts, though Horizon advertises these as “lifetime opportunities for stability.”

At the bottom are the General Populace, the billions displaced by the collapse of ecological and political systems. They live outside Horizon’s protective shields in sprawling unregulated zones referred to euphemistically as Open Terrain. Horizon’s public service announcements repeat a single message to these communities: “Compliance ensures access.” But access to what is never specified.

The corporation’s newest invention, the EquiSync Band, is marketed as a wearable “equity assistant” that monitors individual behavior and recommends pathways to “personal advancement.” In reality, the bands track movement, measure compliance, and relay biometric data to Horizon’s classification algorithm, CLARITY, which determines ration eligibility. Citizens without bands are marked as “voluntary non-participants,” rendering them ineligible for aid.

Perhaps the most infuriating aspect of Horizon’s regime is its tone. Every quarterly broadcast begins with cheerful music and a smiling spokesperson saying, “Together, we rise.” Meanwhile, children in Open Terrain walk miles to access alkaline water. Aging Contributors are quietly relocated to “Efficiency Centers,” from which no one returns. The Asset Holders continue to drift between their towers in personal sky-transit shuttles, untouched by the consequences of the world they profit from.

Horizon insists it is “the guardian of humanity’s future.” But the future it guards is not for everyone—only for those who can afford to belong.

Reflection

After I ran the prompt through AI to create a scenario about a future corporation in a collapsed society, my first thought was how predictable the shape of the future was. Copilot produced something that looks sophisticated on the surface (clean phrasing, polished descriptions, confident world-building), but underneath it felt like a regurgitation of familiar dystopian tropes, assembled with little creative license. Instead of imagining something new, the AI is clearly reassembling ideas that have already been culturally digested and fed back into the machine as training material. It’s less a story about 2125 than it is a mirror of the stories our present-day media already finds plausible, comfortable, or at least narratively acceptable.

One of the clearest patterns in the output is its default assumption that corporate totalitarianism will replace the state. Copilot immediately jumps to the idea that if society collapses, a corporation will take charge through branding, surveillance, and soft authoritarianism. I didn’t tell it any of that; it just went there. That reveals how deeply entangled “the future” is with corporate power in the cultural data the model was trained on. Inayatullah’s idea of “used futures” comes to mind here: the model isn’t imagining alternatives; it’s repeating a future we already know too well (Inayatullah, 2008).

The tone was another interesting tension. It slips into a sort of mock-corporate PR voice (“We provide what remains”), but it stays safely outside anger, even though the prompt explicitly asked for emotion. It sounds cold, almost clinically detached. Instead of narrating anger, it places the responsibility on me, the reader, to supply the anger myself. This avoidance feels like an algorithmic safety instinct, an unwillingness to cross into anything that might look like incitement. This avoidance of responsibility actually reminded me of the real-world story of the Uber self-driving car fatality in Tempe, Arizona in 2018. In that case, the car’s AI hesitated when classifying a pedestrian and ultimately passed responsibility back to the human driver at the last possible moment (Greenspan, 2021). It knew something was wrong, but it didn’t escalate loudly. The system couldn’t slam on the brakes, and it didn’t want to disrupt the ride, so it quietly slipped out of autopilot, with devastating consequences. In a softer rhetorical way, Copilot does something similar: it recognizes harm and injustice, but it refuses the emotional escalation. It hands the affective labour back to me.

Class is technically addressed in the narrative, but in a very administrative way. The model uses capitalized labels (Asset Holders, Contributors, General Populace), as if class were simply a filing system instead of a lived human experience. It treats inequality like something you might diagram in a corporate PowerPoint. Again, that says something about AI’s current worldview: class is an org chart, not a social or emotional reality.

The invention of the “EquiSync Band” was almost a bit too on-the-nose. A device that pretends to promote equity while actually performing surveillance feels uncomfortably close to how tech companies already deploy benevolent-sounding language to justify data extraction. It’s exactly the kind of techno-solutionist gesture speculative designers warn about; something that looks ethical on the surface but embeds deeper forms of control (Auger et al., 2021).

Stepping back, the whole scenario doesn’t feel like a leap into the future at all. It’s basically our current fears (surveillance, inequality, corporate overreach) just turned up a notch. And what’s missing stood out just as much. There’s no sense of people pushing back, no community support, no alternative ways of living, no voices outside a very Western, very corporate frame. Copilot isn’t imagining a new world so much as replaying the one we already know, only slightly darker. That’s where my job comes in, not just to notice those limits, but to look past them and imagine something truly different.

Why I Created This Propaganda Poster

I decided to make the Horizon propaganda poster just to see what the “official messaging” of this future world might look like. Copilot’s story talked so much about branding and slogans that it felt natural to imagine what their posters would actually say. Creating it helped me picture the world more clearly, while highlighting how creepy it is when cheerful messaging gets used to cover up something more sinister.

References

Auger, J., Hanna, J., Mitrović, I., Encinas, E., Božanić, S., Šuran, O., & Helgason, I. (2021). Beyond speculative design: Past – present – future. SpeculativeEdu / Arts Academy, University of Split.

Greenspan, S. (2021, September 28). Cycle 1: Databody [Audio narrative]. Bellwether. https://thisisbellwether.bandcamp.com/album/cycle-1-databody

Inayatullah, S. (2008). Six pillars: Futures thinking for transforming. Foresight, 10(1), 4–21. https://doi.org/10.1108/14636680810855991


Task 11: Detain/Release

Completing the Detain/Release simulation left me feeling surprisingly frustrated. I kept wanting more information. More context, background, even a fuller description of what actually happened in each case. Instead, I was pushed into making decisions that felt high-stakes with very thin evidence. In a strange way, that irritation became part of the lesson. It highlighted how precarious things become when algorithmic risk scores are treated as if they can stand in for real knowledge.

This week’s podcast episodes shaped how I approached the simulation. Listening to the story of Jack Maple and the creation of CompStat, I expected something very different. At first, it honestly made me think of Moneyball. I imagined Maple as the Brad Pitt/Jonah Hill figure of policing: using statistical patterns to make smarter predictions, prevent problems before they happened, and rethink a system that felt stagnant. I assumed the NYPD would use those maps and numbers to increase efficiency in a genuinely helpful way, the same way baseball teams used player stats to rethink strategy.

But the more the podcast unfolded, the more that optimism collapsed. Instead of using data as a way to understand the complexity of neighbourhoods and allocate resources responsibly, CompStat became a justification for over-policing. The numbers that were supposed to reveal patterns ended up hardening stereotypes, especially about Black communities. It felt grim to realize how quickly a bright-eyed idea about “intelligent policing” had slipped into a mechanism for reinforcing deeply racist assumptions. At this point I was also reminded of an immigration algorithm I heard about at an AI conference I attended years ago. There, the speaker had explained how Immigration, Refugees and Citizenship Canada’s algorithm flagged visa applicants with the name “Mohammed” at disproportionately high rates because it had been trained on years of biased human decisions. The algorithm didn’t invent racism; it inherited it. And once it was embedded in the system, it became even harder to challenge. In both cases, statistical data became a kind of shield, making harmful decisions look objective even though the systems were trained on human bias from the start.
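The inheritance mechanism I’m describing can be sketched in a few lines: a “model” that learns nothing but the approval rates in past decisions will faithfully reproduce whatever skew those decisions contained. Everything below (the names, outcomes, and counts) is invented purely for illustration, not drawn from any real system.

```python
# Toy sketch (fabricated data) of how a model trained on biased historical
# decisions reproduces that bias: the "algorithm" simply learns per-name
# approval rates from past human outcomes.
from collections import defaultdict

# Historical decisions made by biased human reviewers (invented sample).
history = [
    ("Mohammed", "denied"), ("Mohammed", "denied"), ("Mohammed", "approved"),
    ("John", "approved"), ("John", "approved"), ("John", "denied"),
]

counts = defaultdict(lambda: {"approved": 0, "denied": 0})
for name, outcome in history:
    counts[name][outcome] += 1

def approval_score(name):
    """Predicted approval probability, learned purely from past decisions."""
    c = counts[name]
    return c["approved"] / (c["approved"] + c["denied"])

# The model's "objective" scores simply mirror the reviewers' skewed rates.
print(approval_score("Mohammed"))
print(approval_score("John"))
```

Nothing here examines the merits of an application; the score is just a statistical echo of earlier judgments, which is exactly why it can make inherited bias look like neutral math.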

That memory from the conference stayed with me throughout the simulation, because that’s how the simulation felt, too. The risk scores were presented as helpful prompts, but without adequate context they started to feel like the only “real” data available. But where did they even come from? I started to realize that even when I disagreed with a recommendation, the structure of the task nudged me toward treating the score as authoritative, simply because everything else was so ambiguous. It showed me how easily an algorithm can shift from being a tool that informs judgment to quietly becoming the thing that determines judgment.

For me, the biggest takeaway is how important it is to preserve the human role in these processes. AI can highlight patterns, speed up workflows, and reduce some forms of inconsistency, but it cannot understand the social, historical, or relational contexts that make each case unique. When decision makers rely too heavily on algorithmic assessments, especially ones trained on biased data, the harm compounds over time.

Ultimately, this week reinforced something I’ve believed since that conference: AI can be incredibly useful, but only when it remains a supporting voice and not the final one. The minute we let statistical patterns harden into unquestioned authority, whether in policing, immigration, or pretrial decisions, we risk turning tools meant to help us into systems that quietly perpetuate the very injustices they claim to solve.

References

Detain/Release. (n.d.). Simulating algorithmic risk assessments at pretrial. https://detainrelease.com/join?room=MMVBN

Reply All. (2018). The Crime Machine, Part I & II (Episodes 127–128) [Audio podcast]. Gimlet Media.


Task 9: Network Assignment Using Golden Record Curation Quiz Data

Using Palladio, a data visualization tool from Stanford, we were able to map our Golden Record music choices and see how they connected across the class. Once the file loaded, it was actually pretty interesting to explore, though it also felt a bit strange to see something as personal as musical taste turned into dots and lines. It was fun to click around and look at patterns, but I quickly realized that the real challenge wasn’t in reading the visualization, it was in figuring out what it actually meant.

When I set the graph with “curator” as the source and “track” as the target, four main communities appeared. Each one showed a different kind of listening pattern. One cluster leaned toward Western classical pieces like Bach, Beethoven, and Stravinsky. These pieces are very structured and familiar. Another community grouped around rhythmic and percussive music, such as the Senegalese drum ensemble and Melanesian panpipes, where the beat and repetition take centre stage. A third community leaned into folk and vocal traditions from around the world; some examples include the Navajo Night Chant, Jaat Kahan Ho, and Flowing Streams. These are sounds that feel rooted in culture and storytelling. The last group focused more on modern, emotional, and raw music, like Blind Willie Johnson and Chuck Berry, where the feeling seemed more important than form.
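The curator-to-track structure Palladio draws can be reproduced as a bipartite graph, with communities falling out of shared selections. The sketch below uses networkx with invented curator names and only a handful of Golden Record tracks; it illustrates the method behind the clusters, not our actual class data.

```python
# A minimal sketch of the curator->track network behind the Palladio view,
# using networkx (hypothetical curators, small invented selection lists).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each curator is linked to the tracks they selected (fabricated sample).
selections = {
    "curator_a": ["Bach BWV 1047", "Beethoven Fifth", "Rite of Spring"],
    "curator_b": ["Bach BWV 1047", "Beethoven Fifth"],
    "curator_c": ["Senegalese Percussion", "Melanesian Panpipes"],
    "curator_d": ["Senegalese Percussion", "Melanesian Panpipes",
                  "Dark Was the Night"],
    "curator_e": ["Dark Was the Night", "Johnny B. Goode"],
}

G = nx.Graph()
for curator, tracks in selections.items():
    G.add_node(curator, kind="curator")   # the "source" column in Palladio
    for track in tracks:
        G.add_node(track, kind="track")   # the "target" column in Palladio
        G.add_edge(curator, track)

# Communities emerge from shared selections, much like Palladio's clusters.
communities = greedy_modularity_communities(G)
for i, community in enumerate(communities, 1):
    print(f"Community {i}: {sorted(community)}")
```

Even on this tiny invented dataset, curators who picked the same tracks get pulled into the same community, which is essentially what the class-wide visualization was showing at a larger scale.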

When I compared my own results to the class data, I noticed my selections (like the gamelan, panpipes, shakuhachi, and blues) sat somewhere between the folk–percussive and emotive–modern clusters. That felt right. In my notes, I had described those pieces as layered, alive, and human. I gave each track a score out of ten and wrote down three words to describe its sound or feeling, even for the ones I didn’t enjoy. Looking back, those small reflections helped me understand why I gravitated toward music that felt textured, rhythmic, or emotionally grounded. The Palladio map showed the overlaps between us, but it didn’t really show why we made those choices.

Something our professor said in his short Palladio tutorial video stuck with me. He mentioned being surprised that “Johnny B. Goode” wasn’t more popular. I was surprised too. It’s one of the most recognizable and energetic songs on the record. My guess is that, for many of us, the activity wasn’t just about picking what we liked personally, but about what we thought could represent humanity. Rock and roll might have felt too specific, too much about a certain time and culture, while other tracks, like the gamelan or panpipes, carried a kind of timelessness. Maybe that’s why a song we all know so well didn’t stand out in this context.

In the end, I think that’s what makes the visualization so interesting and also a bit incomplete. It shows who shared musical choices, but not what shaped those decisions, or what we felt listening to them. It captures the data, but not the stories behind it. In a way, it’s like the Golden Record itself: full of sound, yet traveling through silence. What it carries isn’t the music itself, but a trace of the people who chose to send it.
