Task 11: Detain/Release


This week’s module highlighted the perils of algorithmic decision-making and its implications for public justice systems. During the “Detain/Release” activity, I noticed that the cases were becoming repetitive, and I wasn’t sure how I had previously responded. As I continued through the module, the added pressure of the glowing red fear index and the jail capacity level created extra angst around my decisions. Initially, I treated the violence level of the accused as an important factor. However, even though a drug trafficker may not show indications of violence, there is a level of violence and societal impact associated with the larger criminal enterprise or network. This quickly becomes a very complex issue, and I understand the desire for a system that makes analytical choices based on perceived risks. The question is: how accurately do we report these risks? What metrics can we rely on to describe a person’s violence level, especially when we are judging the likelihood of an event that hasn’t happened yet?

AI models encode personal biases, and their mass deployment amplifies those biases in ways that can be catastrophic. As we saw in previous modules, these systems lead to a variety of unintended consequences, such as the surge in citations issued as a result of predictive policing software (Vogt, 2018a, 2018b). O’Neil captures these processes well, describing them as “math-powered applications that encode human prejudice” and calling them “weapons of math destruction” (O’Neil, 2017; Talks at Google, 2016). Unfortunately, once these models are running, it’s quite difficult for us to know how they make decisions. This lack of transparency is uncomfortable at best and harmful to society at worst, especially given the gravity of a domain like the criminal justice system. How can we increase human oversight of systems that many humans don’t fully comprehend? To ensure the accuracy and reliability of AI systems, it’s essential to understand how they are audited. A growing number of people are becoming concerned about the use of these tools, highlighting the need for stronger regulation and safety measures.

O’Neil, C. (2017, April 6). Justice in the age of big data. TED. Retrieved August 12, 2022.

Talks at Google. (2016, November 2). Weapons of math destruction | Cathy O’Neil | Talks at Google [Video]. YouTube.

Vogt, P. (2018a, October 12). The Crime Machine, Part I (No. 127) [Audio podcast episode]. In Reply All. Gimlet Media.

Vogt, P. (2018b, October 12). The Crime Machine, Part II (No. 128) [Audio podcast episode]. In Reply All. Gimlet Media.

One Reply to “Task 11: Detain/Release”

  1. Hi Katy, thank you for sharing your experience and reflection! I think you raised crucial points about the challenges and implications of relying on AI algorithms for decision-making, particularly in high-stakes areas like the criminal justice system.

    When I was completing the activity, I also found myself losing focus and losing sight of my initial goal as the cases became repetitive. I was so focused on balancing jail space and public fear that I became more and more desensitized. This can erode human oversight and create a dependency on algorithms. As you mentioned, we don’t really understand how these algorithms work (or how far they can go), which complicates assessing the risk of future criminal activity and behaviour. I think all of this highlights the need for enhanced regulation and safety measures in every industry where AI is used, to make sure these systems are managed responsibly and ethically. Thank you for sharing again!
