I thought it would be tough to decide whether I should detain a criminal or release the individual, but I found it even harder to determine without sitting through each trial. Each person had a line explaining why they should be released and what troubles they would face if they were detained. Because the trial information was summarized and each “criminal” was ranked according to the cards based on that information, I was not able to hear the various testimonies or the perspective of the “criminal.” Words on a screen are just words on a screen. Did the “criminal” show signs of remorse? What was their body language during the trial? When people were on the stand, did the “criminal” show different facial expressions, make comments, or reject anything other people were saying?

How would AI decide which criminal to detain? Is it able to detect emotion and body language? Body language, facial expression, and the word choices and tone of testimony should all be considered when determining whether an individual should be detained or released. Would AI detain someone who is high risk on all three of those factors, on two, or on just one? AI cannot detect any of that; it would simply compile the data and decide to detain or release.

In The Crime Machine podcast, the result of using AI in crime fighting was that police were downgrading the types of crimes recorded, so the actual crime numbers were inaccurate (Vogt, 2018a, 2018b). As a result, the police were fighting against crime numbers rather than recording and fighting actual crime (Vogt, 2018a, 2018b). How is that ethical? How can AI determine whether someone belongs in jail or fits the profile of a repeat offender?

In McRaney’s podcast, the group discussed what AI is and how humans shaped it. After hearing it, I can’t help but think, “humans might have created a monster.” AI doesn’t know what kind of world we live in, and it cannot feel emotions (McRaney, 2018). Arresting someone and putting them in jail is not as simple as fulfilling a checklist; it requires a thorough investigation and hearing various testimonies about what happened. In the podcast, the group said we need to help shape the ethics of AI, but how do we know that AI is always just (McRaney, 2018)? It is evident that humans don’t always make the right decisions, and in turn, AI is also programmed with flaws. I believe there is more work to be done if AI is here to stay. If we really want AI to decide whom to arrest, whom to jail, and whom to release, we need to do a better job at data entry and programming. There must be better ways for AI to interpret this data and evaluate who needs to be arrested and detained. As of right now, I don’t have complete confidence in AI dictating our judicial system.

References

McRaney, D. (Host). (2018, November 21). Machine Bias (rebroadcast) (no. 140) [Audio podcast episode]. In You Are Not So Smart. SoundCloud.

Vogt, P. (2018a, October 12). The Crime Machine, Part I (no. 127) [Audio podcast episode]. In Reply All. Gimlet Media.

Vogt, P. (2018b, October 12). The Crime Machine, Part II (no. 128) [Audio podcast episode]. In Reply All. Gimlet Media.