Going into the Detain/Release simulation, I knew I would bring bias with me; even so, being aware of that bias and trying to control for it, I was not very successful. Because of how my browser was configured, I could not see the “Jail Capacity/Fear” bars at the top of my screen, and I did not notice all the “red” until afterwards. I left it that way deliberately, out of curiosity, given my past life experience. I was not surprised by the bias in my results.
I graduated from university in 1988 with a bachelor’s degree in criminology and corrections and immediately took a job with the local county juvenile probation office. I lasted just eight months; I was not cut out for it. The job required following a strict set of rules and guidelines to determine the course of people’s lives, and I felt the system was too rigid. There was little room for empathy or understanding in the way cases were handled, so I left.
At the time, I felt they just wanted me to be a computer following an algorithm; more than thirty years later, the process is much the same, only now it is carried out by AI programs. An AI program may be able to analyze large amounts of data and identify patterns and trends that could inform decision-making about recidivism, but it lacks the ability to understand the emotional and psychological complexities of human behaviour and the circumstances of each case.
AI-informed decision-making has significant implications and consequences for the criminal justice system, and many of them trace back to the data these systems are trained on.
Data! Data! Data! The use of algorithms in law enforcement, and the potential for bias and discrimination that comes with it, is not a new idea in the criminal justice system. This is highlighted in the podcast The Crime Machine, in which Vogt (2018a, 2018b) discusses young police officers being given quota directives: arresting people instead of actually solving crimes, and downgrading crimes. He goes so far as to emphasise the summonsing of people in “impact zones,” targeting young Hispanic and Black men, while white men were never targeted, all for COMPSTAT data!
A major concern with AI programs, algorithms, and data is exactly this potential for bias and discrimination. An algorithm trained on records produced by practices like those Vogt describes will learn those practices: it may replicate and even amplify the biases already embedded in the criminal justice system, resulting in unfair outcomes for marginalized groups who are already disproportionately impacted.
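To make that concrete, here is a minimal, purely hypothetical sketch in Python using NumPy and scikit-learn. The groups, rates, and variable names are all invented for illustration and do not come from any real pretrial tool or dataset: two groups are constructed with identical underlying reoffense rates, but one is “policed” more heavily, so more of its members end up labeled as recidivists, and a model trained on those labels then scores that group as higher risk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with IDENTICAL underlying reoffense behaviour.
group = rng.integers(0, 2, n)        # 0 = majority, 1 = marginalized group
true_risk = rng.random(n) < 0.20     # same 20% base rate for everyone

# Biased historical labels: heavier policing of group 1 means its members
# are re-arrested (and labeled "recidivist") more often for the same behaviour.
detection = np.where(group == 1, 0.9, 0.5)
label = true_risk & (rng.random(n) < detection)

# Train on the biased labels, with group membership as a feature
# (in practice this could be any proxy, such as zip code).
X = np.column_stack([group, rng.random(n)])  # second column is just noise
model = LogisticRegression().fit(X, label)

# The model now scores the marginalized group as higher risk, even though
# the true base rates were identical by construction.
for g in (0, 1):
    mean_risk = model.predict_proba(X[group == g])[:, 1].mean()
    print(f"group {g}: mean predicted risk = {mean_risk:.3f}")
```

Nothing in the model itself is technically wrong; the discrimination comes entirely from the labels it was handed, which is precisely the danger of training on data shaped by practices like those described in the podcast.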
Another concern is the lack of transparency and accountability in AI decision-making. When decisions rest on opaque and complex algorithms, it may be difficult for defendants and their lawyers to understand how those decisions were made, let alone challenge them. This can compromise due process and the right to a fair trial.
Additionally, there is the potential for AI to reinforce existing power imbalances in the criminal justice system. If AI algorithms are primarily developed and controlled by those in power, they may be used to perpetuate existing structures of oppression and further marginalize already marginalized groups.
Overall, while AI has the potential to improve decision-making in the criminal justice system, it is important to carefully consider the potential implications and consequences of its use, particularly with regard to issues of bias, transparency, and power. It is essential that AI is used in a way that is ethical, accountable, and just.
References
Vogt, P. (2018a, October 12). The Crime Machine, Part I (No. 127) [Audio podcast episode]. In Reply All. Gimlet Media.
Vogt, P. (2018b, October 12). The Crime Machine, Part II (No. 128) [Audio podcast episode]. In Reply All. Gimlet Media.