
The availability of big data has made it easy to obtain new insights through the use of computers. As a result, algorithms, the step-by-step instructions that computers follow to perform a desired task, have become increasingly sophisticated and pervasive tools for automating decisions and predicting the future from historical patterns. O’Neil (2017) points out that many algorithms go bad unintentionally, while some are deliberately made to be criminal. Because algorithms are used in so many different contexts, their scores end up driving human decisions, which is worrisome: for example, targeting people or passing judgment on them based on data about their identities, test results, and preferences in order to predict future outcomes.
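To make this concrete, here is a minimal, hypothetical sketch of an algorithm in that sense: a fixed sequence of steps that turns historical data about a person into a prediction and then an automated decision. Every name and weight below is invented for illustration and is not drawn from any real system.

```python
# A deliberately simple, hypothetical "algorithm": step-by-step instructions
# that turn historical signals about a shopper into a prediction.

def purchase_likelihood(past_purchases: int, pages_viewed: int) -> float:
    """Score how likely a shopper is to buy again, between 0.0 and 1.0."""
    # Step 1: weight each historical signal (weights are arbitrary here).
    score = 0.1 * past_purchases + 0.02 * pages_viewed
    # Step 2: cap the score so it behaves like a probability.
    return min(score, 1.0)

# Step 3: a threshold turns the score into an automated decision about a person.
shoppers = {"A": (8, 40), "B": (0, 3)}
for name, (purchases, views) in shoppers.items():
    score = purchase_likelihood(purchases, views)
    decision = "target with ads" if score >= 0.5 else "ignore"
    print(f"Shopper {name}: score={score:.2f} -> {decision}")
```

The worrying part is the last step: the score, however crude its inputs, becomes the decision.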

Before big data and data analytics became a trend, humans made decisions through transparent, traditional means, such as advertising on television, radio, or printed flyers. Fast-forward to today: all it takes is for me to have a Facebook account and search Google for “childrensplace.ca”, and the next time I open Facebook, all the Children’s Place deals pop up as ads. This is a perfect example of the implications of AI-informed decision-making, because it exploits customers’ spending habits under the pretense of improving the “customer experience”, harnessing biased algorithmic judgments about our data to influence human decisions.
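A simplified sketch of this retargeting pattern may help. The real Google/Facebook pipeline involves tracking pixels, cookies, and ad auctions; every identifier and function below is hypothetical and only illustrates the data flow.

```python
# Hypothetical sketch of ad retargeting: a site visit is logged against a
# browser ID, and the ad platform later matches that ID to pick an ad.

visits_by_browser: dict = {}

def record_visit(browser_id: str, site: str) -> None:
    """Simulates a tracking pixel logging a visit to a retailer's site."""
    visits_by_browser.setdefault(browser_id, []).append(site)

def pick_ad(browser_id: str) -> str:
    """Simulates the ad platform choosing an ad from browsing history."""
    history = visits_by_browser.get(browser_id, [])
    if "childrensplace.ca" in history:
        return "Children's Place deals"
    return "generic ad"

record_visit("browser-123", "childrensplace.ca")  # the Google search/visit
print(pick_ad("browser-123"))                     # later, on Facebook
```

Note that nothing in this flow ever asks the shopper whether they want their browsing history used this way.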

Another implication can be seen in the criminal justice system’s use of risk assessment tools. According to O’Neil (2016), “Judges can look to this supposedly scientific analysis, crystallized into a single risk score. And those who take this score seriously have reason to give longer sentences to prisoners who appear to pose a higher risk of committing other crimes”. The simulation generates its own “bias” through our choices to detain or release. I realized that those choices can produce incorrect conclusions based on how we feel about a particular group or characteristic (for example, sympathy for a parent who would not get to spend time with their children). These decisions can later affect individuals, resulting in longer sentences or higher bail, imposed especially on people of color and other specific demographics.
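To see how such a score can encode bias, consider a toy example. This is emphatically not any real assessment tool’s formula, and all the weights are invented; it only shows how an input that merely correlates with race or poverty, such as a neighborhood’s arrest rate, can inflate scores for an entire demographic even when two defendants have identical records.

```python
# A toy risk score (NOT any real assessment tool's formula) showing how a
# proxy variable can raise scores for a whole demographic. Weights invented.

def risk_score(prior_arrests: int, neighborhood_arrest_rate: float) -> int:
    """Return a 1-10 risk score from two historical signals."""
    raw = 2 * prior_arrests + 10 * neighborhood_arrest_rate
    return max(1, min(10, round(raw)))

# Two defendants with identical records who differ only in where they live:
# the one from a heavily policed neighborhood gets the higher score, and
# with it, per O'Neil, the longer sentence.
print(risk_score(prior_arrests=1, neighborhood_arrest_rate=0.2))  # -> 4
print(risk_score(prior_arrests=1, neighborhood_arrest_rate=0.7))  # -> 9
```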

These scores are widely used across America, and as a person of color it is overwhelming to see how significant an impact an unsafe algorithm can have on people’s lives. While big data has many benefits, it needs to be evaluated within an ethical framework.

Lastly, from an educational perspective, the use of analytics raises various concerns in higher education. Campbell, DeBlois, and Oblinger (2007) and Sclater (2017) outline several, including privacy, invalid predictions, prejudicial categorization, and demotivation. While privacy is always a concern, in my experience prejudicial categorization is real, whether by race, gender, or religion: labeling can affect staff perceptions and can follow students wherever they go.

References

Campbell, J. P., DeBlois, P. B., & Oblinger, D. G. (2007). Academic analytics: A new tool for a new era. EDUCAUSE Review, 42(4), 40.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Observer. Retrieved from https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies

Sclater, N. (2017). Learning analytics explained. Routledge.