ETEC 540 Task 11: Detain/Release

The purpose of this task was to consider the implications and consequences of AI-informed decision-making. Through the Detain/Release simulation, I assumed the role of a judge deciding whether defendants awaiting trial would be detained or released. The following information was available to aid my decisions:

  • an assessment of each defendant, rating as low, medium, or high their likelihood to:
    • fail to appear,
    • commit a crime,
    • commit a violent crime;
  • a recommendation from the prosecution;
  • a statement from the defendant.

After undertaking this simulation twice, I ‘failed’ both times. Either the jail reached capacity, or community fear, stoked by further crimes reportedly committed by those I had released, surpassed a threshold and created mass panic. And both times, I, as the judge making the decisions, took the blame.

Dr. O’Neil (2016) explains that algorithms are based on two main things (and lots of little ones):

  • the data used to train the algorithm;
  • the definition of success.

In the Detain/Release simulation, it was unclear to me what data had trained the algorithm or how it arrived at its low-, medium-, and high-risk assessments. Despite this, I noticed myself using those assessments to guide my decisions, particularly because they seemed to align with the prosecution’s recommendation to detain (medium/high risk) or release (low risk). It also felt bad when I had made the ‘wrong’ decision, as evidenced when reoffences occurred. Here, ‘success’ seemed to be defined as: detain as many people as possible, to prevent repeat offences, without overfilling the jail. While that makes a certain logical sense, it is also an indication that this is a ‘Weapon of Math Destruction,’ or WMD (O’Neil, 2016).
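
To make O’Neil’s two ingredients concrete, here is a minimal, purely hypothetical Python sketch. The simulation never reveals its actual model, so every name, threshold, and weight below (risk_rating, success_metric, the 0.33/0.66 cutoffs, the penalty weights) is my own assumption for illustration, not the simulation’s real logic.

```python
# Hypothetical sketch of a pretrial risk tool, following O'Neil's two
# ingredients: (1) the training data behind a score, and (2) a chosen
# definition of success. Nothing here reflects Detain/Release's internals.

def risk_rating(score: float) -> str:
    """Bucket a model's 0-1 risk score into the ratings shown to the judge.
    The 0.33/0.66 cutoffs are invented for illustration."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"

def success_metric(reoffences: int, jail_population: int, jail_capacity: int) -> float:
    """One possible 'definition of success': fewer reoffences is better,
    but exceeding jail capacity is penalized even more heavily. The
    weights are arbitrary; changing them changes who gets detained."""
    overflow = max(0, jail_population - jail_capacity)
    return -(1.0 * reoffences + 5.0 * overflow)

# A defendant record with the three assessed likelihoods, as presented
# in the simulation (the underlying scores are made up).
defendant = {
    "fail_to_appear": risk_rating(0.41),        # -> "medium"
    "commit_crime": risk_rating(0.12),          # -> "low"
    "commit_violent_crime": risk_rating(0.71),  # -> "high"
}
print(defendant)
print(success_metric(reoffences=3, jail_population=52, jail_capacity=50))  # -> -13.0
```

Notice that the chosen metric, not the data alone, determines what the tool optimizes for: a judge who trusts the ratings inherits both whatever biases shaped the score and whatever values shaped the definition of success.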

O’Neil (2016) goes on to explain that a WMD is particularly nefarious when three characteristics exist:

  • it is widespread, meaning it impacts a large group of people;
  • it is mysterious, meaning one cannot see or understand the algorithm (thus hiding the biases ingrained within it);
  • it is destructive, meaning it unfairly ruins people’s lives and often makes the problem it set out to solve worse.

Based on the above, the AI algorithm used in the Detain/Release simulation seems to adhere to all three characteristics.

So what are the implications and complications of AI-informed decision-making? As Dr. Vallor (2018) explains, AI algorithms can serve as accelerants, amplifying risks that already exist within our society. Many of the defendants’ statements cited their innocence, a lack of evidence, their need to work and/or be with their families, or the financial hardship detainment would inflict, directly undermining their ability to defend themselves. Of course, a judge cannot assume that every defendant is telling the truth, or that their truth guarantees they will not commit another crime; the decision must be made on a combination of factors, and there is a sense of relief in being able to claim one is relying on ‘data.’ But that does not mean we can ignore the pain and harm inflicted on these people, their victims, or society. We can acknowledge that AI in this case serves as a mirror reflecting human bias (Vallor, 2018), and that the systems reliant on those biased choices (like a judge deciding who gets detained or released) are broken. AI-informed decision-making should not be seen as a perfect or ideal solution. However, it is promising that there are researchers, like O’Neil (2017) and Vallor (2018), who recognize that AI, when used for good and monitored for unintentional (and intentional) problems, can help make our society better.

References

O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Guardian. https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies

Santa Clara University. (2018, November 6). Lessons from the AI Mirror | Dr. Shannon Vallor [Video]. YouTube. https://www.youtube.com/watch?v=40UbpSoYN4k

Talks at Google. (2016, November 2). Weapons of math destruction: Cathy O’Neil [Video]. YouTube. https://www.youtube.com/watch?v=TQHs8SA1qpk&list=PLUp6-eX_3Y4iHYSm8GV0LgmN0-SldT4U8

 
