I struggled with this task and with whether or not to do it. The reason is that I have been in a mixed relationship for the past four years. Within those four years, my black boyfriend and I have had a handful of encounters with the police, in different parts of the world and for a variety of reasons, none of them criminal on our part; we were simply standing in the wrong spot at the wrong time. I do not want to think about what the outcomes could have been if I had not been there, or, more plainly, without my white privilege. So, with my experiences, I do not know how to do this reflection without bias. I will try my best to stick to the readings and make it clear that this is my perspective, shaped by my experiences.
For as amazing and life-changing as technology can be, that life-changing power can also have a massive negative impact. In this task, we are told the charge, not exactly what took place; that information is fed into an algorithm, and we are given statistics to decide whether to release or detain the individual. Porcaro (2019) discusses how “software has framing power: the mere presence of a risk assessment tool can reframe a judge’s decision-making process and induce new biases, regardless of the tool’s quality.” We need to understand that just because the software states something does not mean it is free of faults, which has me thinking about human error. Something can be entered by mistake, without any intention of harm, but because of that mistake false results can be produced, and those results can cause harm. So my strategy was to focus on the violence rating; I know it is only the software predicting whether violence will occur if the person reoffends, but with the information given, that is what I based my detain and release decisions on.
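To make that strategy concrete, here is a minimal hypothetical sketch of a threshold rule on a violence rating. The internals of the Detain/Release simulation are not public, so the field names, risk levels, and cut-off below are my own assumptions rather than the tool’s actual logic; the point is only how a single data-entry mistake flows straight through to the recommendation.

```python
# Hypothetical sketch of a threshold-based pretrial recommendation.
# These field names, risk levels, and the cut-off are illustrative
# assumptions, not the Detain/Release simulation's actual scoring.

RISK_LEVELS = ["low", "medium", "high"]

def recommend(defendant: dict, cutoff: str = "high") -> str:
    """Return 'detain' or 'release' based only on the violence rating."""
    rating = defendant.get("violence_risk", "low")
    # A simple data-entry error (e.g., "high" typed instead of "low")
    # propagates directly into the recommendation -- the human-error
    # problem described above.
    if RISK_LEVELS.index(rating) >= RISK_LEVELS.index(cutoff):
        return "detain"
    return "release"

print(recommend({"charge": "theft", "violence_risk": "low"}))   # release
print(recommend({"charge": "theft", "violence_risk": "high"}))  # detain
```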
In the two-part podcast, O’Neil (2017) talks about how Jack Maple created CompStat with the intention of cleaning up crime and making New York a safer place. But people then figured out how to “cheat” the system so that true crime rates were not reported. The intent was there, but then human error, or prejudice, whatever you want to call it, took over and found a way to change the system, with serious consequences. Groups became targets simply to “fill the summons” quota for the day. So how do we experience criminal justice equally with data that is not equal? “This creates a pernicious feedback loop — the policing itself spawns new data, which justifies more policing. And our prisons fill up with hundreds of thousands of people found guilty of victimless crimes. Most of them come from impoverished neighborhoods, and most are black or Hispanic. So even if a model is color-blind, the result of it is anything but” (O’Neil, 2017). We have a justice system that exists with good intentions, to serve and protect; we just need better systems in place so that statement holds true.
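The feedback loop O’Neil describes can be shown with a toy simulation. This is entirely my own construction, not taken from the readings, and the numbers are arbitrary; it only illustrates the mechanism: patrols follow past recorded arrests, more patrols produce more recorded arrests, and a small initial disparity keeps growing even though the underlying crime rates are identical.

```python
# Toy illustration of a predictive-policing feedback loop (my own
# construction, with arbitrary numbers). Two neighbourhoods have the
# same true crime rate, but patrols are allocated by past recorded arrests.

true_crime_rate = {"A": 10, "B": 10}   # identical underlying rates
recorded_arrests = {"A": 12, "B": 8}   # a small initial disparity
total_patrols = 20

for year in range(1, 6):
    total_recorded = sum(recorded_arrests.values())
    for hood in recorded_arrests:
        # Patrols follow past recorded arrests...
        patrols = total_patrols * recorded_arrests[hood] / total_recorded
        # ...and more patrols mean more of the same crime gets recorded.
        recorded_arrests[hood] += round(true_crime_rate[hood] * patrols / 10)
    print(f"Year {year}: {recorded_arrests}")
# The neighbourhood that started with more recorded arrests keeps pulling
# further ahead, even though both have the same true crime rate.
```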
In her article “How can we stop algorithms telling lies?”, Dr. Cathy O’Neil (2017) notes that Ben Shneiderman, a computer science professor at the University of Maryland, proposed the concept of a National Algorithms Safety Board in a talk at the Alan Turing Institute. She continues: “we should investigate problems when we find them, and it’s good to have a formal process to do so. If it has sufficient legal power, the board can perhaps get to the bottom of lots of commonsense issues. But it’s not clear how comprehensive it could be.” Starting to build a tracking system would at least give us a process; as with most crimes, individuals will always find a way to beat the system, but at least we could be more prepared and find the patterns within these algorithms. Knowledge is power: knowing that there are harmful algorithms, and given how we already teach our students digital citizenship, maybe we should start teaching this too. A system to beat the system.
References:
O’Neil, C. (2017, April 6). Justice in the age of big data. ideas.ted.com. https://ideas.ted.com/justice-in-the-age-of-big-data/
O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Observer. https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies
Porcaro, K. (2019, April 17). Detain/Release: Simulating algorithmic risk assessments at pretrial. Medium. https://medium.com/berkman-klein-center/detain-release-simulating-algorithmic-risk-assessments-at-pretrial-375270657819
Leilani Ruffini
December 6, 2021 — 11:30 am
Hi!
I really like your quote at the end: “Knowledge is power: knowing that there are harmful algorithms, and given how we already teach our students digital citizenship, maybe we should start teaching this too. A system to beat the system.”
It is really important that we teach learners about the biases that systems have, where they come from, and how to try to prevent them from causing harm. I personally believe that it is impossible to get rid of bias 100%, because humans are shaped by their experiences and surroundings. The goal instead is to minimize bias and make learners aware of how their biases can affect others and the system. When people become more aware and learn to minimize it, then we can get closer to creating a system to beat the system.
Melissa Guzzo
December 7, 2021 — 1:45 am
Thanks Leilani! I completely agree; it can be as simple as having conversations about all sides, from different viewpoints and perspectives.