During the Detain/Release simulation (Porcaro, 2019), I acted as a county judge tasked with deciding whether defendants should be detained or released at their bail hearing while awaiting trial. I attempted the simulation twice: the first time, I realized I had entered the incorrect name, so I restarted with the correct one. I should also mention that I was unable to complete that first attempt, because I released too many people who subsequently committed another crime, and I lost my position as a judge due to public opinion. I made it through the second time, but had nearly filled both my “jail capacity” and “fear” meters.
I found the simulation quite difficult and struggled with the choices. At first, I really wanted to release each person because of their statements about losing jobs, housing, or families. I kept thinking that if they were allowed to await trial at home with family or other supports, they might be less likely to reoffend. For example, someone detained on a robbery charge loses potential income while awaiting trial, which could make them more likely to commit another robbery once released because of their worsened financial situation. However, the simulation did not extend to what the defendant did or did not do post-trial. Therefore, in several of the cases where I released the defendant, they either committed another crime before their trial or failed to appear at it, which increased public fear.
The strongest influences on my decisions were the nature of the alleged crime (charges of rape, murder, and assault automatically made me more inclined to detain the person, even though they may have been innocent) and the projected risk of violence. Additionally, if the defendant was rated at least a medium risk in all three categories (flight, crime, violence), I was likely to detain them as well.
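Writing my informal rule out as code shows just how mechanical it became. Below is a minimal Python sketch of the heuristic I found myself applying; the charge names and the low/medium/high labels are hypothetical stand-ins of my own, not the simulation's actual data model.

```python
# A minimal sketch of the detention heuristic I found myself applying.
# The charge names and risk labels are hypothetical stand-ins, not the
# simulation's actual inputs.

VIOLENT_CHARGES = {"rape", "murder", "assault"}
RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

def my_detain_heuristic(charge: str, flight: str, crime: str, violence: str) -> bool:
    """Return True if I would have detained the defendant."""
    # Violent charges pushed me toward detention almost automatically.
    if charge.lower() in VIOLENT_CHARGES:
        return True
    # Otherwise, detain if every risk category was at least "medium".
    return all(RISK_ORDER[r] >= RISK_ORDER["medium"]
               for r in (flight, crime, violence))

print(my_detain_heuristic("theft", "medium", "medium", "high"))  # True
print(my_detain_heuristic("theft", "low", "medium", "high"))     # False
```

Seeing it reduced to a few lines is unsettling: my own "judgment" was nearly as rigid as the algorithm I was relying on.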
Considering the strong influence that the risk assessment algorithms had on my decision-making, I think it is extremely important to understand how the risks are calculated. I am not certain how bail hearings normally progress or what information is made available to the judge, but based on my experience with the simulation, I would want access to all the data that went into the risk assessment in order to decide more confidently. Can we be certain that the algorithm is not discriminating against certain groups of people? Dr. Cathy O’Neil (2017) described several algorithms that reflected cultural biases, in some cases unfairly targeting Black people, and I can’t help but wonder whether this risk assessment algorithm is prone to the same mistakes.
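One basic audit I would want to see is a comparison of high-risk rates across demographic groups, sometimes called a demographic parity check. The sketch below is entirely hypothetical: the groups, records, and rates are invented purely for illustration, and real audits are far more involved.

```python
# A hypothetical demographic parity check: what share of each group is
# rated "high" risk? The records here are invented for illustration only.

from collections import defaultdict

assessments = [
    {"group": "A", "risk": "high"},
    {"group": "A", "risk": "low"},
    {"group": "B", "risk": "high"},
    {"group": "B", "risk": "high"},
]

totals, highs = defaultdict(int), defaultdict(int)
for a in assessments:
    totals[a["group"]] += 1
    highs[a["group"]] += a["risk"] == "high"

for g in sorted(totals):
    print(f"group {g}: {highs[g] / totals[g]:.0%} rated high-risk")
# A large gap between groups would be a red flag the vendor should explain.
```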
Additionally, humans have motives for their actions, and this is not something an algorithm can necessarily capture. For example, if someone steals food because their child is hungry, and is likely to commit the same crime again before their trial because their child still needs to eat, should they be detained and lose custody of their child, get fired from their job, and miss out on time they could spend working on a defense? What if another person is stealing wallets to support a gambling habit? Both people simply show up as a “high” risk to commit a crime while awaiting trial. Do they deserve equal consideration for release? This raises a host of ethical considerations, which I will not discuss here. My main concern is that algorithms may only account for the “what” but not the “why” of human actions, which tells only part of the story.
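To make the “what” versus “why” gap concrete, here is a small, entirely hypothetical Python sketch: a toy scorer sees only the recorded features (charge and prior offenses), so two defendants with very different motives necessarily receive the same rating.

```python
# An entirely hypothetical sketch of the "what" vs. "why" gap: the scorer
# uses only recorded features, so motive never enters the calculation.

from dataclasses import dataclass

@dataclass
class Defendant:
    charge: str
    prior_offenses: int
    motive: str  # known to a human judge, invisible to the scorer below

def risk_score(d: Defendant) -> str:
    """Toy scorer: looks only at the 'what' (charge and priors)."""
    if d.charge == "theft" and d.prior_offenses >= 2:
        return "high"
    return "low"

parent = Defendant("theft", 2, "feeding a hungry child")
gambler = Defendant("theft", 2, "funding a gambling habit")

# Both come back "high": the motives are identical as far as the model knows.
print(risk_score(parent), risk_score(gambler))  # high high
```

Whatever the real simulation's model looks like, any scorer built only on recorded features will collapse these two cases into one number.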
O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Guardian. https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies
Porcaro, K. (2019, January 8). Detain/Release: Simulating algorithmic risk assessments at pretrial. Berkman Klein Center Collection. https://medium.com/berkman-klein-center/detain-release-simulating-algorithmic-risk-assessments-at-pretrial-375270657819