This assignment was particularly interesting not only to me but to my husband as well. He works in the Justice Ministry in BC and was keen to try the simulation; he even spoke to some of his colleagues about it at work. Unfortunately, he was called away for work while I was writing this, as I would have liked to include his perspective. The assignment also connected with the two courses I took in the summer, Effective and Ethical use in AI and Culture & Communication.
I think that the Detain/Release simulation is a powerful reminder of the layers of human bias, judgement and assumption that are present even in supposedly neutral algorithms. I tried to be consistent and to remember my earlier answers, but I soon found myself down the rabbit hole of an AI risk assessment that is rigid and hidden. This resonated with Cathy O'Neil's talk about how AI scales bias rather than eliminating it (Talks at Google, 2016). It also made me think about what O'Neil (2017) writes in her article in The Observer about how algorithmic systems distort or work on incomplete information with absolute confidence. There are two things wrong here: first, there is a risk that the algorithm is wrong, and second, we are being conditioned to trust the results without questioning the underlying data. The opacity of AI is alarming. We are starting to accept AI as an everyday item that we are led to believe is intelligent, competent and neutral, and this worries me, especially when human lives are sometimes literally at stake.
I was particularly interested in the 99% Invisible podcast episode The ELIZA Effect (Mars, 2019) because we did an exercise in the AI course where we interacted with our AI platform of choice in a personal way; we could ask for advice or tell it how we were feeling. This was very revealing. I can see how a vulnerable person would find such a service very comforting, as it is available 24/7, and some of the answers were so relatable that it would be easy to be convinced there was a human on the other end. The effect the episode discusses is the human tendency to give meaning and depth where there isn't any. In the simulation I completed for this assignment, I found myself trying to rationalize why a score was logical, attributing an emotional component to the algorithm that it didn't have, and giving it the benefit of the doubt.
The Detain/Release simulation brought to light the concerns we should have when using AI in high-consequence situations like this. The real-life implications in this type of setting are not the same as generating an AI picture or asking AI to make a recipe from the ingredients in your fridge; the consequences of using algorithms in this context are literally life-altering, and this should not be taken lightly. These systems reflect historical injustices, racialization and marginalization, and do not take into account the harm they are perpetuating. This assignment made me think about how using algorithms to make decisions is more than a technical issue; it is an ethical and systemic one. We need to be wary of claims of efficiency and neutrality, remember the risks of perpetuating past, current and new biases, and challenge algorithms while demanding a system of accountability.
One aspect I am always aware of when looking at algorithm use is the cost of these systems. Whether that cost is human or environmental, I try to be mindful of my own AI use because of it.
References:
Mars, R. (Host). (2017, September 5). The age of the algorithm (No. 274) [Audio podcast episode]. In 99% Invisible.
Mars, R. (Host). (2019, December 10). The ELIZA effect (No. 382) [Audio podcast episode]. In 99% Invisible.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Observer.
Talks at Google. (2016, November 2). Weapons of math destruction | Cathy O’Neil | Talks at Google [Video]. YouTube.