
Task 11: Detain/Release

Completing the Detain/Release simulation left me feeling surprisingly frustrated. I kept wanting more information: more context, more background, even a fuller description of what actually happened in each case. Instead, I was pushed into making high-stakes decisions on very thin evidence. In a strange way, that irritation became part of the lesson. It highlighted how precarious things become when algorithmic risk scores are treated as if they can stand in for real knowledge.

This week’s podcast episodes shaped how I approached the simulation. Listening to the story of Jack Maple and the creation of CompStat, I expected something very different. At first, it honestly made me think of Moneyball. I imagined Maple as the Brad Pitt/Jonah Hill figure of policing: using statistical patterns to make smarter predictions, prevent problems before they happened, and rethink a system that felt stagnant. I assumed the NYPD would use those maps and numbers to increase efficiency in a genuinely helpful way, the same way baseball teams used player stats to rethink strategy.

But the more the podcast unfolded, the more that optimism collapsed. Instead of using data as a way to understand the complexity of neighbourhoods and allocate resources responsibly, CompStat became a justification for over-policing. The numbers that were supposed to reveal patterns ended up hardening stereotypes, especially about Black communities. It felt grim to realize how quickly a bright-eyed idea about “intelligent policing” had slipped into a mechanism for reinforcing deeply racist assumptions.

At this point I was also reminded of an immigration algorithm I heard about at an AI conference I attended years ago. There, the speaker explained how Immigration, Refugees and Citizenship Canada’s algorithm flagged visa applicants with the name “Mohammed” at disproportionately high rates because it had been trained on years of biased human decisions. The algorithm didn’t invent racism; it inherited it. And once it was embedded in the system, it became even harder to challenge. In both cases, statistical data became a kind of shield that made harmful decisions look objective, even though the systems were trained on human bias from the start.
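That inheritance effect can be sketched in a few lines of Python. This is a purely illustrative toy, not the actual IRCC system: the group labels, flag rates, and the “model” (which just memorizes each group’s historical flag rate) are invented stand-ins to show how a score trained on biased decisions reproduces the bias while looking like neutral math.

```python
import random

random.seed(0)

# Toy history: a biased reviewer flags "group_a" applicants far more
# often than "group_b", independent of any real risk signal.
def biased_reviewer(group):
    return random.random() < (0.60 if group == "group_a" else 0.10)

history = [(g, biased_reviewer(g))
           for g in ["group_a", "group_b"] * 5000]

# "Training" the model: it simply learns each group's historical flag
# rate and uses it as that group's future "risk score".
flag_rate = {}
for group in ("group_a", "group_b"):
    outcomes = [flagged for g, flagged in history if g == group]
    flag_rate[group] = sum(outcomes) / len(outcomes)

# The learned scores mirror the reviewer's bias (~0.60 vs ~0.10),
# now dressed up as an objective-looking number.
print(flag_rate)
```

Nothing in the code ever decides to discriminate; the disparity is carried in wholesale by the training data, which is exactly why such scores are so hard to challenge after deployment.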

That memory from the conference stayed with me throughout the simulation, because the simulation worked the same way. The risk scores were presented as helpful prompts, but without adequate context they started to feel like the only “real” data available. Where did they even come from? Even when I disagreed with a recommendation, the structure of the task nudged me toward treating the score as authoritative, simply because everything else was so ambiguous. It showed me how easily an algorithm can shift from being a tool that informs judgment to quietly becoming the thing that determines judgment.

For me, the biggest takeaway is how important it is to preserve the human role in these processes. AI can highlight patterns, speed up workflows, and reduce some forms of inconsistency, but it cannot understand the social, historical, or relational contexts that make each case unique. When decision makers rely too heavily on algorithmic assessments, especially ones trained on biased data, the harm compounds over time.

Ultimately, this week reinforced something I’ve believed since that conference: AI can be incredibly useful, but only when it remains a supporting voice and not the final one. The minute we let statistical patterns harden into unquestioned authority, whether in policing, immigration, or pretrial decisions, we risk turning tools meant to help us into systems that quietly perpetuate the very injustices they claim to solve.

References

Detain/Release. (n.d.). Simulating algorithmic risk assessments at pretrial. https://detainrelease.com/join?room=MMVBN

Reply All. (2022). The Crime Machine, Part I & II (Episodes 127–128) [Audio podcast]. Gimlet Media.
