Prompt – In about 500 words, describe or narrate a scenario about a law found a few years into a future in which society as we know it has come apart. Your description should address issues related to algorithms and elicit feelings of contentment.
Output 1 – generated with Copilot
In the year 2047, after decades of social upheaval and the collapse of traditional governance structures, a pivotal law was enacted: the “Algorithmic Trust and Harmony Act.” This law emerged from the collective will of a world where humanity had witnessed the dangers of unchecked algorithms—the biases, the exploitation, and the dehumanization. In its wake, this legislation sought to reshape society by redefining the relationship between humans and technology.
The law mandated that all algorithms influencing critical areas of life—housing, healthcare, justice, education, and employment—be rendered entirely transparent and comprehensible to the average person. The days of opaque black-box systems were over. Developers were required to publish their code in plain language summaries, accessible to everyone. Communities, not corporations, were tasked with overseeing these systems through local councils. These councils were diverse in composition, including technologists, sociologists, ethicists, and ordinary citizens. The algorithmic councils became a new pillar of governance, decentralizing power and fostering trust.
One of the most profound changes brought about by the law was the concept of the “Algorithmic Commons.” Previously, algorithms had been proprietary, jealously guarded by tech giants. Now, they were declared the shared property of all humanity. Anyone could use or improve upon them, provided their modifications adhered to ethical guidelines established under the Act. This fostered a collaborative culture, where communities tailored algorithms to meet their specific needs, values, and aspirations.
A key feature of the law was the “Bias Disclosure and Correction Framework.” Every algorithm had to regularly undergo bias testing and publish results. When biases were detected, immediate corrective actions were mandatory, overseen by the councils. Communities celebrated this transparency, as they saw the systemic inequalities of the past begin to dissolve. No longer would an algorithm reject a housing application because of a zip code or predict recidivism rates based on flawed data.
The Act also introduced the “Human Dignity Override.” At any time, individuals affected by an algorithmic decision could appeal to the council. If the council determined that the decision violated human dignity or fundamental rights, the algorithm’s ruling was overturned, and immediate reparative actions were taken.
Life under the Algorithmic Trust and Harmony Act felt different—not just safer but profoundly more collaborative. People spoke of how their voices mattered again. Decisions that once felt cold and distant, made in boardrooms or server farms, now reflected the collective will and wisdom of their communities.
One evening, in a small town nestled beside a glimmering lake, an elder stood before her community, recounting the past. “There was a time,” she said, “when we let algorithms shape us without question. But today, we shape them—and in doing so, we’ve rediscovered what it means to be human.”
The air was quiet as the sun dipped below the horizon, yet a quiet contentment filled the hearts of those listening. They lived in a world where technology no longer ruled, but served—where humanity had reclaimed its destiny, one algorithm at a time.
Output 2 – generated with Copilot
A few years into a future marked by the collapse of traditional societal structures, humanity finds itself in a patchwork of autonomous communities. Technology remains an enduring backbone of human existence, but its unchecked influence has led to the disintegration of the centralized order. The chaos birthed a new law, known simply as “The Algorithmic Compass Act.”
As the story unfolds, the Act is introduced in a community called Havenridge, nestled amidst forests reclaimed from urban sprawl. Havenridge thrives on a philosophy of balance—between nature, people, and the tech that once dominated lives. With the Algorithmic Compass Act, every algorithm deemed critical to decision-making must now be paired with human discernment and accountability, reflecting a newfound respect for collaboration between mind and machine. The law’s core principle is simple yet profound: algorithms provide insights and recommendations, but humans hold the reins to ensure justice, empathy, and individuality prevail.
Take Naomi, a schoolteacher who had always feared the cold precision of algorithms. In the old world, predictive models once labeled her students—some marked as promising, others as failures—on flimsy data. Under the Act, these models now suggest broad strategies for tailoring lessons, but Naomi decides how to apply them. She’s empowered to nurture creativity in a quiet child who would have been overlooked or challenge the brightest students without boxing them in.
The town square reveals another transformation. Algorithms once dictated resource allocations—water usage, housing assignments—based purely on efficiency metrics. Havenridge now uses these algorithms only as guides, alongside councils that listen to personal stories, weigh cultural priorities, and preserve the community’s spirit. It’s a slower process, but people find peace in knowing they have a voice, a seat at the table.
Tom, the farmer, watches with quiet satisfaction as weather-predicting models guide crop planting schedules. Yet it’s his intuition, honed from decades of living alongside the soil, that decides how to space his rows to protect against pests. His decisions are celebrated in Havenridge as essential to the community’s sustainable way of life.
Though challenges abound, Havenridge thrives on its belief in humanity’s worth and its ability to coexist harmoniously with technology. The Algorithmic Compass Act reminds people daily that algorithms are not infallible gods but tools meant to uplift and support human wisdom, not replace it.
Under the morning sun filtering through Havenridge’s leafy canopy, Naomi and Tom meet at the community market. There’s a palpable sense of contentment in the air—not just because lives are simpler, but because they are richer with meaning. In this future, the Act represents hope: a world where algorithms are partners, not rulers, and humanity’s core values guide the path forward.
Reflection
I appreciated the opportunity to explore AI’s capabilities and enjoyed reading the outputs and reflecting on them. I was intrigued enough to run the prompt twice to see what else would be generated. The prompt itself was fantastic: timely and thought-provoking, touching on concerns that many people share.
The idea of society falling apart due to algorithms reminded me of a Munk Debate I listened to. The debate centered on the statement: “Be it resolved, AI research and development poses an existential threat” (Munk Debates, 2023).
There wasn’t anything particularly shocking or creative in Output 1. Most of the proposed measures are common recommendations, such as transparency and bias testing. The idea of community councils feels romanticized and utopian; people have their own biases, and power tends to corrupt, so this idealistic scenario seems unlikely given historical patterns.
I like how Output 2 includes specific examples of humans using technology to achieve better outcomes: the teacher uses algorithms to tailor lesson plans, and the farmer combines intuition with data models to guide his planting. As with Output 1, there is nothing groundbreaking here, but the narrative aligns with how many people already envision the role of technology in society.
Dunne & Raby (2013) describe speculative design as one that “thrives on imagination and aims to open up new perspectives on what are sometimes called wicked problems, to create spaces for discussion and debate about alternative ways of being, and to inspire and encourage people’s imaginations to flow freely” (p. 2). I feel that the AI narratives fall short of truly opening up new perspectives. Given that AI is, at its core, just an algorithm, this limitation isn’t surprising.
Also discussed by Dunne & Raby (2013) is the idea of not predicting the future but rather exploring possible futures to better understand the present and shape the kind of future people want. In this sense, the narratives created by Co-Pilot are valid. Both stories offer a glimpse into concerns people have about algorithms while also summarizing calls to action to prevent potential unwanted consequences that could lead to society’s downfall.
References:
Dunne, A., & Raby, F. (2013). Speculative everything: Design, fiction, and social dreaming. The MIT Press
Microsoft. (2024). Copilot (April 2024 version) [Large Language Model]. https://copilot.microsoft.com/chats/
Munk Debates. (2023, June 22). Artificial intelligence debate. https://munkdebates.com/debates/artificial-intelligence/