As I navigated through the Detain/Release simulation, I found myself acutely aware of the potential bias in the pre-trial algorithmic risk assessment tool. Despite limited information, I attempted to critically reflect on the factors influencing my decision-making. My observations were as follows:
- Gender – As a mother of two young children, I was more sympathetic to individuals who had families or who stated that jail time would jeopardize their home stability or job status.
- Violence – I found myself more swayed by an individual’s violence risk assessment than by their flight risk assessment. As I moved through the simulation, however, my thoughts turned toward a more critical analysis of the factors that may have shaped the algorithm’s determination of violence risk. Were certain individuals flagged as a violence risk by a biased algorithm, or did they truly pose a risk of harming others?
- Societal pressure – I was surprised at how influenced I was by the jail capacity and community fear indicators. Initially, I was prone to release more individuals, opting to assume the best of them; however, as the game progressed and community fear increased, I became less sympathetic to the defendants’ claims and more likely to detain them.
Implications and consequences of AI-informed decision-making
A key takeaway from this activity was my need for more information. Having completed the week’s readings, I had a heightened awareness of the bias that the algorithm was likely “feeding me.” In the podcast episode Machine Bias (2018), Vallor discusses how predictive algorithms can cause tremendous damage and create self-fulfilling prophecies (Santa Clara University, 2018). Vallor (Santa Clara University, 2018) states that “AI is an accelerant,” meaning that an algorithm like this one can accelerate the bias that already exists in the criminal justice system rather than serve as an objective predictor of human behaviour. Knowing this, I did not want to rely on the algorithm for my decision-making; I wanted more data about these individuals’ lived experiences.
Throughout this activity, I also reflected on the bias that exists in our everyday lives and, more specifically, the bias I encounter in healthcare. As a nurse, I have witnessed how bias can influence care and communication in the healthcare setting. My goal is to be as mindful as possible of the prejudice and systemic racism that exist in our system, and I frequently find myself debriefing with students on ethics, bias, and systemic racism throughout the term. It is not uncommon to receive a handover report in which patients are labelled, creating a perception of who an individual is before we have even met them. I also know that this anchoring bias (McLaughlin & James, 2018) can skew my perception of an individual and influence my body language and interactions.
When examining the implications and consequences of artificial intelligence, we need to look at our own lived experiences and the values and principles that guide how we live. Vallor (McRaney, 2018) argues that technical experts need knowledge of history, social dynamics, ethics, and politics in addition to technical knowledge; I would take this further and say that we all need this knowledge. We are all responsible for reflecting on our own biases and views of the world, and for trying to see each other’s lived experiences before making judgments. This is the only way we will allow artificial intelligence to “write” a better version of ourselves and prevent biased algorithms from reinforcing harmful social patterns (McRaney, 2018). We must focus on the ethics of the people behind the machine (McRaney, 2018).