Detain or Release? The Crime Machine

Judging others’ behaviour should not be complicated, but it is. This week’s task presented a hypothetical situation in which we, as judges, had to choose whether to detain or release individual defendants. The three factors were each defendant’s likelihood of failing to appear in court, the probability of them committing a crime if released, and whether that crime would be violent. Setting about judging each individual and having the opportunity to view their pleas to be released was heart-wrenching. Some claimed they would lose custody of their children, and others sought medical attention that would not be accessible if they were detained.

I have learned that I am a terrible judge, and that Keith Porcaro was right to demonstrate how dangerous algorithmic risk assessment tools can be and how easily they can lead to inaccuracies. Keith set out to get his students to “think critically about how software and data-driven tools can influence legal ecosystems — sometimes in unexpected ways” (Porcaro, 2019). For instance, what affected my judgement most was the jail’s capacity and the fear generated when a defendant I had released committed another violent crime. In a sense, my experience was similar to how a machine learning program can be taught to differentiate between a cat and a duck. Using something like a classification application, an algorithm learns a rule for “deciding that some combination of features constitutes a car or a stop sign” (Moyer, 2021). My original task of weighing morality changed. Instead, the task simply became limiting fear and keeping the jail under capacity.
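To make that classification idea concrete, here is a minimal, purely illustrative sketch in Python. The features, labels, and cutoff are all made up for the example; they are not taken from Porcaro’s simulation or from any real risk assessment tool. The point is only that a classifier reduces each defendant to a score and a threshold, much like the simulation reduced my judging to capacity and fear.

```python
# Toy sketch of a pretrial "risk" classifier (illustrative only).
# Features and labels are invented; nothing here reflects a real tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per defendant:
# [prior failures to appear, prior offenses, age]
X_train = np.array([
    [0, 0, 34],
    [2, 1, 22],
    [1, 3, 45],
    [0, 1, 29],
    [3, 4, 19],
    [0, 0, 52],
])
# Made-up outcomes: 1 = reoffended or failed to appear, 0 = did not
y_train = np.array([0, 1, 1, 0, 1, 0])

# Fit a simple logistic regression classifier on the toy data
model = LogisticRegression().fit(X_train, y_train)

# Score a new defendant and apply a blunt detain/release cutoff
new_defendant = np.array([[1, 2, 30]])
risk = model.predict_proba(new_defendant)[0, 1]
decision = "detain" if risk > 0.5 else "release"
print(f"Predicted risk: {risk:.2f} -> {decision}")
```

The uncomfortable part is that the 0.5 cutoff is a human design choice, not something the data dictates, which is exactly the kind of hidden judgement the simulation exposed.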

Deciding whether a defendant is likely to offend should not be left to computers alone, even though an algorithm can process data far faster than a judge can. That said, using computer algorithms and data analytics to help humans make more consistent decisions would be a worthwhile exploration in the field of data analytics and AI.

Check out this Radiolab podcast, which further explores the idea of responsibility within justice and how the time of day may affect our decision-making.

Radiolab – Revising the Fault Line

References

Moyer, B. (2021, November 4). Easier And Faster Ways To Train AI. Semiconductor Engineering. https://semiengineering.com/easier-and-faster-ways-to-train-ai/

Porcaro, K. (2019, April 17). Detain/Release: simulating algorithmic risk assessments at pretrial. Berkman Klein Center Collection. https://medium.com/berkman-klein-center/detain-release-simulating-algorithmic-risk-assessments-at-pretrial-375270657819

Radiolab. (2017). Revising the fault line. WNYC Studios. https://www.wnycstudios.org/podcasts/radiolab/articles/revising-fault-line

 
