Task 11: Detain/Release

This week I chose the Detain/Release simulator for my weekly task. Since I am unfamiliar with the legal system, I relied on the algorithmic risk assessment for judgment. The main categories I looked at were “commit a crime” and “violence”: if a defendant scored high in both, I was far more likely to detain them. I also found the colour-coded display of information nerve-racking; when I saw the table in red, I would almost immediately click the “detain” button. However, I felt uncomfortable depending solely on the algorithm because I do not know how it works. For example, I have no idea what questions were asked to determine the risk level or how the information was collected. Making life-changing decisions for other people based on an assessment that provides nothing more than a risk level makes me uneasy.
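In hindsight, my decision process effectively collapsed into a crude threshold rule. Here is a minimal sketch of that heuristic in Python; the category names and the “low/medium/high” labels are my own assumptions about how the simulator presents its risk table, not its actual logic:

```python
# A sketch of my own detain/release heuristic, NOT the simulator's
# actual algorithm: the category names and "low/medium/high" labels
# are assumptions about how the risk table is presented.

RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

def my_heuristic(risk_scores: dict[str, str]) -> str:
    """Detain only when both key categories are rated high (shown in red)."""
    crime = RISK_ORDER[risk_scores["commit a crime"]]
    violence = RISK_ORDER[risk_scores["violence"]]
    if crime == RISK_ORDER["high"] and violence == RISK_ORDER["high"]:
        return "detain"
    return "release"

# Example: high risk in both categories leads straight to detention.
print(my_heuristic({"commit a crime": "high", "violence": "high"}))  # detain
print(my_heuristic({"commit a crime": "high", "violence": "low"}))   # release
```

Writing it out this way makes the problem concrete: the rule is only as good as the risk labels feeding into it, and I had no visibility into how those labels were produced.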

Even though algorithms and big data are claimed to be “unbiased”, we all know this is not true. “Algorithms are nothing more than opinions embedded in code”: they are subjective, reflecting the measurements that the designer believes will help solve a particular problem (O’Neil, 2016). Algorithms are therefore biased by the values and beliefs of the people who build them. Judging from the questions in pretrial risk assessment questionnaires such as the LSI-R, traditionally marginalized groups are once again disadvantaged by these tools. Because of the limited transparency, many decisions are made without anyone knowing the criteria, further deepening the injustice and inequity faced by people of colour and people living in poverty.

Another reason algorithms can be problematic is that they prioritize quantitative data. In the podcast episode The Crime Machine, Part II (Vogt, n.d.), we hear how police officers, whose performance was evaluated on these numbers, downgraded crimes to lower the reported crime rate and made unnecessary arrests to inflate measured police activity, both to avoid punishment and to demonstrate productivity. This is a dangerous path that leads to unethical practices. All of these examples are the detrimental consequences of misusing algorithms, and the question is: how do we change this? How do we ensure algorithms optimize for our success rather than widen existing gaps? As O’Neil (2016) notes, there is no single answer or set of rules; every algorithm needs to be examined within its own context and intention. Meanwhile, the mathematical thinking behind algorithms needs to be transparent, so people know what they are being assessed on and how.


References

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

Vogt, P. (n.d.). The Crime Machine, Part II [Audio podcast episode]. In Reply All. Gimlet Media.
