Task 11 – Detain/Release

The detain/release game is deeply conflicting to play, which I think is a testament to how well put together it is. Each decision is difficult, and the game forces you to balance a variety of competing factors: your fictional position as the judge, the recommendation of the prosecution, the defendant’s statement, and the algorithm’s ratings. Though the photos are obscured, there is still sufficient information for unconscious bias to seep into the decision making: gender and age are provided, and the photos give an – albeit vague – impression of race.

I played through the game three times, and each proved to be a slightly different experience.

The gamification of the decision making was initially a major influence on me. I found myself overly focused on keeping the fear rating low and the prison population acceptable. To do this, I placed significant weight on the algorithm’s ratings, which resulted in a high detention rate. Even so, my fear rating climbed as well: defendants with low risk ratings would fail to appear or commit additional crimes.

On subsequent playthroughs I focused on balancing the prosecution’s advice and the defendants’ statements along with the algorithm’s rating. Still, my fear level rose just as quickly as my prison population.

On none of my playthroughs did I feel as though my decision-making had any measurable impact. Whether I chose to detain or release, neither option produced outcomes much better than random. Releasing even low-rated defendants led to re-offence or flight. Detention was always the safe decision, provided the population wasn’t climbing too high.

The incentives seem misaligned and the information insufficient, which makes it easy to defer to the algorithm’s recommendation. Yet the algorithm seems as ineffectual as the prosecution’s recommendations, which in turn seem to carry as little weight as the defendant’s statement.

The way I picture it is very much like the network graph of our collective Golden Record track selections – the kind of data an algorithm like this might be trained on to evaluate detain/release risk. There is simply too much missing context for such data to be used for such grave and consequential decision making.

As is explained in The Crime Machine, Parts I and II (Vogt, 2018a, 2018b), over-reliance on algorithms for decision-making can lead to extremely skewed, outright harmful outcomes. In Detain/Release, these outcomes don’t feel much different from random selection.

References

Vogt, P. (2018a, October 12). The Crime Machine, Part I (No. 127) [Audio podcast episode]. In Reply All. Gimlet Media.

Vogt, P. (2018b, October 12). The Crime Machine, Part II (No. 128) [Audio podcast episode]. In Reply All. Gimlet Media.

One response to “Task 11 – Detain/Release”

  1. Steph Takeda

    Hi Duncan,

    I found your post very insightful, especially your observation that decision-making in the game felt ineffective and the outcome heavily influenced by the algorithm. Like in the network task, the lack of context made it hard to fully understand the decisions. I completely agree—it was a revealing and somewhat disheartening realization.

    I also played the game several times, each with a different focus, but I was never fully satisfied with the results. My approach varied: the first time, I based my decisions solely on the crimes and ended up releasing too many people. The second time, I used more of the algorithm and made decisions more quickly and automatically. While this approach led to a “better” outcome, I found my over-reliance on the green and red indicators troubling (instead of taking the time to carefully consider each case).

    What I found really interesting was how the fear and jail numbers began to influence my decision-making. I’m not sure if you felt the same. I suspect this replicates some of the real-world pressures that judges face. As time went on, those little bar tabs of information started to drive my decisions, becoming more of a measure than a tool for my decision making. I found that eye-opening.

    As you pointed out, algorithms are meant to assist in these processes, but they can also have unintended consequences. As O’Neil (2016) highlighted, the success of an algorithm is determined by the person who builds it, which can unintentionally introduce bias and, if not properly managed, unfairly impact people’s lives. I saw this clearly in the game, which is why it is such an effective learning tool. If only there were more tools like this for the big algorithms out there…pretty illuminating.

    Thanks,

    Steph

    Reference:

    O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
