Anu Thale

Task 11: Detain/Release or Algorithms of Predictive Text

I found it challenging to make decisions in the Detain/Release simulation based on the information provided for each defendant. Because the jail had limited space, I had to choose whom to detain and whom to release. By the end of the simulation, I had little capacity left in my jail, and my public fear graph was on the lower end. I relied predominantly on the AI-generated information (which predicted each defendant's likelihood of committing a crime, of violence, and of failing to appear for their court hearing) to decide whether to release or detain the individual. I also read the statements provided for each defendant to further inform my decision. I wish I had had more information, as I did not have the full picture or story of each individual (specific details of their crime, their history, etc.). Also, since I have no training or experience in law enforcement, I did not feel qualified to make these decisions, as I do not understand how the system works. For a few individuals, the AI-generated information steered me in the wrong direction. For one individual I released, the AI said the risk of failure to appear in court was low; I later received a notification that the individual did not show up for court. For most individuals, the information provided by the AI appeared to steer me in the correct direction. But is it fair to detain people based on information from algorithms that may be racially biased or simply incorrect? AI could aid decision-making, but it could also allow innocent people to be treated unfairly because of mistakes made by AI algorithms. Also, there will always be biases in AI, as it is impossible for the individuals creating AI to be entirely free of bias. Consequently, this bias is reflected in the information the AI provides.

I also noticed that most of the individuals in the simulation were male and belonged to minority ethnic groups. Many individuals needed to be released so they could work, make ends meet for their families, or manage an illness. This task reminded me of the targeted summonses given to specific groups, as explained in the Crime Machine podcast. Furthermore, I remembered from this week's module how detention can further harm an individual: in jail they interact with other prisoners (who may be violent, murderers, or drug dealers), which affects their emotional and physical well-being. Additionally, having been in jail affects their ability to get a job upon release.

I had not realized how widely algorithms are currently being used in society. For example, a passenger on a United flight was forced to leave the plane and treated unfairly because of information provided by an algorithm. Furthermore, I did not know that some teachers in the United States had been suspended from their jobs due to calculations generated by algorithms. It is not ethical for people to be treated this way, and transparency is required if we continue to use information from AI algorithms to make decisions in society. If individuals will be screened by AI algorithms when applying for a job, they have the right to know that. If AI algorithms will assess a person's "risk" when they are buying insurance or a car, they have the right to know that. With AI algorithms providing information and influencing the decisions people make, the repercussions could be extensive. Algorithms could hold great potential for the future, but I believe we should be cautious in using them until we fully understand how they work and how they affect society. Until then, algorithms should be regulated to ensure fairness and transparency.
