
Task 11: Detain/Release 

In Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya Noble explores how biased algorithms oppress people of color in deceitful and manipulative ways. In the introduction, she stresses that these algorithms affect users in very real and practical ways, yet they largely operate without public scrutiny and remain hidden from everyone but a select few. As a result, the public has only a limited grasp of the machine-learning processes that drive these highly impactful algorithms (Noble, 2018). It is also the case that the officials who rely on these algorithms to make decisions, such as police officers, are unaware of the parameters and biases that shape the algorithm’s computations and therefore its results. We see this concept in action in the “Detain/Release” simulation produced by the Berkman Klein Center for Internet & Society.

Unfortunately, this simulation is a crude example of the ways in which algorithms are used to classify individuals (Porcaro, 2019). My initial impression of the game, in which the player is asked to assess “criminals”, made it clear that this kind of algorithm is applied to marginalized individuals who likely lack the credibility to voice their concerns. The algorithm sorts each case along three separate categories: flight risk, likelihood to re-offend, and level of violence. Each category is then rated low, medium, or high. No information is provided on how these levels are determined or why one individual might pose more risk than another. Even so, the player must work through each case and decide whether to release the offender or detain them.
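
Since the simulation’s internal scoring is never shown to the player, the sketch below is purely illustrative: a hypothetical Python model of the kind of file the player is handed, with three opaque ratings and nothing else. The field names and example values are my own assumptions, not the simulation’s actual data or code.

```python
# Hypothetical sketch of what the player sees for each case in a
# Detain/Release-style simulation. Illustrative only; the real tool's
# scoring method is not visible to the player.

from dataclasses import dataclass


@dataclass
class CaseFile:
    name: str
    charge: str
    flight_risk: str      # "low" | "medium" | "high"
    reoffense_risk: str   # likelihood to re-offend
    violence_risk: str    # level of violence

    def summary(self) -> str:
        # This is essentially all the player gets: three ratings with no
        # explanation of how they were computed or which factors drove them.
        return (f"{self.name} ({self.charge}): "
                f"flight={self.flight_risk}, "
                f"re-offense={self.reoffense_risk}, "
                f"violence={self.violence_risk}")


# Example case as it might appear to the player
case = CaseFile("Defendant A", "theft", "low", "medium", "high")
print(case.summary())
```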

My assumption is that the algorithm uses information such as the offender’s previous criminal record, employment status, income, social background, geographical area, and so on to produce these low-to-high rankings. However, these parameters are built on sweeping generalizations that are likely informed by other biased algorithms or data. As Noble points out, these algorithms feed off one another and perpetuate inaccurate representations of marginalized people (Noble, 2018). The player has no opportunity to ask for more contextual information, nor are they encouraged to; the limited functionality of the game suggests that the player’s sole purpose is to sort through each file. We could say that the player is acting on their own judgment when making these decisions, and to some degree that might be true, but the player is really just setting a risk threshold over the information provided by the algorithm. So, in many ways, the element of human judgment is almost irrelevant, since the algorithm has already determined which cases are more likely to be detained. For example, an ethical person would likely not allow a highly violent individual to re-enter society, so any file with a high level of violence is unlikely to be released.
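
The threshold point can be made concrete with a short, hypothetical sketch: once the player adopts any plausible rule over the algorithm’s low/medium/high labels, the label itself effectively decides the outcome. The scoring and threshold below are assumptions for illustration, not the simulation’s actual logic.

```python
# Illustrative only: a simple rule a player might (implicitly) apply.
# The numeric mapping and threshold are my own assumptions.

LEVEL_SCORE = {"low": 0, "medium": 1, "high": 2}


def player_decision(flight: str, reoffense: str, violence: str,
                    threshold: int = 2) -> str:
    """Detain whenever the worst rating meets the player's chosen threshold."""
    # A single "high" rating is enough to trigger detention under this rule,
    # so the algorithm's label, not independent judgment, drives the call.
    worst = max(LEVEL_SCORE[flight], LEVEL_SCORE[reoffense], LEVEL_SCORE[violence])
    return "detain" if worst >= threshold else "release"


print(player_decision("low", "medium", "high"))  # -> detain
print(player_decision("low", "low", "medium"))   # -> release
```

Whatever threshold the player picks, the decision reduces to a comparison against ratings the algorithm has already assigned, which is the sense in which human judgment becomes almost irrelevant.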

The player might release an individual they deem low-risk, only to learn later through a news article that the offender has gotten into trouble. The headline reads something along the lines of “Judge allows violent offender back onto the streets.” What that headline implies is that the player, or decision-maker, is at fault for judging this offender to be low-risk. The language places the accountability on the decision-maker and calls the player’s judgment into question, while the inaccuracies of the algorithm go unchecked. Rather than reassessing the variables that determine the algorithm’s accuracy, the player simply works through the remaining files with greater scrutiny and apprehension. This comes back to the idea that algorithms are seen as neutral actors, incapable of being swayed by human biases because they are founded on mathematical equations, when that is simply not the case.

References

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Porcaro, K. (2019, April 17). Detain/Release: Simulating algorithmic risk assessments at pretrial. Medium. Retrieved March 20, 2023, from https://medium.com/berkman-klein-center/detain-release-simulating-algorithmic-risk-assessments-at-pretrial-375270657819
