Implications of Algorithms

This task surprised me. Initially, I was grateful that each defendant's identity was blurred; however, I noted that one might still make decisions based on whether the individual was a person of colour. Gender was also revealed, and that too may invite bias. I began by taking a mental inventory of my hard lines for releasing and detaining: Is the person at risk of being violent? Are they likely to re-offend? However, as the jail-capacity and fear indicators changed, I began to make decisions that didn't align with my values. I released more people, knowing that this might put others at risk.
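To make that drift concrete, here is a purely hypothetical sketch of the pressure I felt. None of this is the simulation's actual code; the function name, numbers, and weighting are all invented for illustration. It shows how a release threshold that flexes with jail capacity quietly changes the outcome for an identical defendant:

    # Hypothetical sketch: a release threshold that bends under capacity
    # pressure. All names and numbers are invented for illustration; this
    # is not the Detain/Release simulation's actual logic.

    def decide(risk_score: float, jail_fullness: float,
               base_threshold: float = 0.6) -> str:
        """Detain if risk exceeds a threshold that rises as the jail fills.

        risk_score: 0.0 (low risk) to 1.0 (high risk).
        jail_fullness: 0.0 (empty) to 1.0 (at capacity).
        """
        # The fuller the jail, the more risk is tolerated before detaining --
        # exactly the value drift described above.
        effective_threshold = base_threshold + 0.3 * jail_fullness
        return "detain" if risk_score > effective_threshold else "release"

    # The same defendant gets opposite outcomes depending on jail capacity:
    print(decide(risk_score=0.7, jail_fullness=0.1))  # detain
    print(decide(risk_score=0.7, jail_fullness=0.9))  # release

The defendant's risk hasn't changed between those two calls; only an unrelated pressure has.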
Additionally, the lack of data prevents an in-depth representation of who the person is. The algorithm provides a one-dimensional response to a complex and layered problem. The simulation may indicate that a person is supporting a family, but it doesn't explore whether they do so sufficiently or meet the family's emotional and physical needs. An individual may be providing food but may be abusive or neglectful in another capacity. The indicators would benefit from elaboration to help one make a more informed decision. Likewise, if a person is arrested with drugs, does that mean personal use of marijuana or selling crack in a local park? The information is vague and misleading.
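A small hypothetical sketch makes the flattening visible. The fields and cases below are invented and are not the simulation's data model; the point is only that two very different situations can collapse into the same single flag:

    # Hypothetical sketch of feature flattening; these fields and cases are
    # invented and are not the simulation's data model.

    case_a = {"substance": "marijuana", "quantity_grams": 2,
              "context": "personal use"}
    case_b = {"substance": "crack cocaine", "quantity_grams": 200,
              "context": "selling in a local park"}

    def to_indicator(case: dict) -> bool:
        # All of the detail above is discarded: both cases collapse into
        # the same single flag, "arrested with drugs".
        return case["quantity_grams"] > 0

    print(to_indicator(case_a), to_indicator(case_b))  # True True

Once the detail is gone, no downstream decision rule can recover it.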
Finally, jail capacity is a troubling value to place at the top of the interface. Several factors might contribute to a facility being at capacity; perhaps the facility is simply too small for the population it serves. Judicial decisions need to be made in relation to the crime, not on whether there is room in jail.
I see benefits and enormous risks in such algorithms. In using data to inform decisions, one must ensure that the information is all-encompassing, representative, and free from bias. Moreover, the way decisions are made and represented needs to be precise. For example, when the simulator indicates 'Fear,' what does that even mean? Fear for whom?
There are real consequences to 'broad strokes' when it comes to gathering and creating data for algorithms. Organizations need to ensure that data-collection platforms allow for clear and bias-free engagement, separated from potentially persuasive features.

1 Thought.

  1. Hi Grace – I also completed this Detain/Release simulation, but got very different results from yours. By the time I got to the 20th defendant, the town's 'trust in me' was depleted and I was told that I had been voted out as a judge! Still, it's interesting because, reading your reflection here, it seems we had many of the same concerns while completing the simulation. I appreciate the questions you were asking about the size of the jail (I hadn't considered that). The 'fear' factor must have been related to whether any of the defendants we released ended up back in court (I actually had three of these, and the fear factor went up each time).
    I also felt that we were given very vague information about these defendants, and I felt manipulated when I noticed the same crime being committed by two young men of different ethnicities, yet the AI told me to release one over the other. I really agree with you about the consequences of making decisions based on this type of data, especially for a judge who might be trying to be ethical but is pressured into using tools that seem to have bias built into them.
