Machine Bias

Big Data, AI, and machine learning are hot topics in education. Before their popularity in education, however, they first boomed in commercial industries. For example, it was an algorithm that chose who to boot off that overbooked United Airlines flight in 2017 (United Airlines faces backlash after dragging man from plane – CBS News). Even though news articles said that passengers were chosen at random, it was really an algorithm that calculated who on the plane was the least valuable: who was the least likely to fly with United again, and who was not part of its rewards program. In other words, United Airlines made a decision about who was most dispensable. Big data, AI, and machine learning pervade many aspects of our lives, hidden in the sense that we do not know when they are employed to make decisions; and even when we do know, the code that generates these decisions is often proprietary and shielded from public scrutiny.

Computer-generated decisions appear to be 100% objective because a machine cannot be swayed by emotion or bias, and this can be great when they are designed to solve climate change, ease traffic congestion, or model a pandemic. This seemingly unbiased nature of computer programs is appealing because we want to eliminate human error while simultaneously being able to crunch huge volumes of data. However, we cannot forget that it is humans who create these algorithms in the first place, and in doing so, prejudice and bias can be unintentionally coded in, with severe consequences if the programs are making decisions about people. Prime examples of algorithms and machine learning gone wrong can be found in the justice system, where decisions on sentence severity, bail, and even arrests are made by machines. If arrests are made more frequently in poorer neighbourhoods and among people of colour because police are prejudiced and racist when they do their patrols, then the data fed into a machine that is learning to predict crime is prejudiced and racist too. It is not hard to imagine how a computer might grant bail more easily to a white person, even if that person's crime is more severe than a person of colour's. This podcast from ETEC 523 discusses the ethical and moral issues with current algorithms and AI.
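To make that feedback loop concrete, here is a minimal Python sketch. Everything in it is hypothetical and invented for illustration: two neighbourhoods share the same underlying crime rate, but one is patrolled three times as heavily, and a naive model trained on the resulting arrest records concludes it is roughly three times as risky.

```python
import random

random.seed(0)

# Hypothetical setup: both neighbourhoods have the SAME true crime rate,
# but neighbourhood A is patrolled far more heavily, so more of its
# offences end up in the arrest records the model learns from.
TRUE_CRIME_RATE = 0.05                      # identical in A and B
PATROL_INTENSITY = {"A": 0.9, "B": 0.3}     # chance an offence is observed

def simulate_arrest_records(n_residents=10_000):
    records = []
    for hood in ("A", "B"):
        for _ in range(n_residents):
            offended = random.random() < TRUE_CRIME_RATE
            arrested = offended and random.random() < PATROL_INTENSITY[hood]
            records.append((hood, arrested))
    return records

def learned_risk(records):
    """A 'predictive policing' model reduced to its essence:
    estimate risk from historical arrest frequency per neighbourhood."""
    totals, arrests = {}, {}
    for hood, arrested in records:
        totals[hood] = totals.get(hood, 0) + 1
        arrests[hood] = arrests.get(hood, 0) + int(arrested)
    return {hood: arrests[hood] / totals[hood] for hood in totals}

print(learned_risk(simulate_arrest_records()))
# e.g. {'A': 0.045, 'B': 0.016} -- A looks about three times "riskier"
# even though the true crime rate is identical in both neighbourhoods.
```

The arithmetic in the sketch is flawless; the distortion lives entirely in the records the model was handed, which is exactly the problem with training on biased arrest data.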

As Big Data, AI, and machine learning make their way into education, we must prioritize not repeating the ethical mistakes made with current programs. If in the future we are to allow machines to assess students, to decide which programs students should enroll in, or to decide how students should be corrected for poor behaviour, we must make sure that the algorithms do so ethically and equitably, without marginalizing at-risk groups. Special care must be taken to prevent encoding prejudice into education algorithms, as decisions made for young people can have long-term impacts. I think about four implications for teaching and learning as companies approach us with tracking software, large assessment tools, and the like:

  1. We need to ask these companies what data is being fed into such programs and where it is sampled from. Is the data free of bias? (A minimal audit sketch follows this list.)
  2. We need to start asking post-secondary software engineering programs to add history and social justice courses to their degrees so that future coders are better prepared to spot and eliminate coded bias.
  3. Remember that studies suggest teachers can gauge their students' abilities more accurately than any standardized test.
  4. Remember that a human touch is necessary for child development.
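On the first point, a first-pass audit need not be sophisticated. Below is a sketch of the kind of question we could put to a vendor's training data; the field names (neighbourhood, at_risk) and the sample records are hypothetical stand-ins for whatever a vendor actually collects.

```python
from collections import defaultdict

def flag_rate_by_group(records, group_key, label_key):
    """Compare how often a dataset flags each demographic group.
    A large gap between groups is not proof of bias on its own,
    but it is a signal to question how the data was sampled."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        flagged[group] += int(row[label_key])
    return {group: flagged[group] / totals[group] for group in totals}

# Hypothetical records a vendor might hand over for audit.
sample = [
    {"neighbourhood": "A", "at_risk": 1},
    {"neighbourhood": "A", "at_risk": 1},
    {"neighbourhood": "A", "at_risk": 0},
    {"neighbourhood": "B", "at_risk": 1},
    {"neighbourhood": "B", "at_risk": 0},
    {"neighbourhood": "B", "at_risk": 0},
]
print(flag_rate_by_group(sample, "neighbourhood", "at_risk"))
# {'A': 0.667, 'B': 0.333} -> ask the vendor why group A is
# flagged twice as often before trusting the tool with students.
```

Numbers like these are exactly what a school or district should be able to demand, and understand, before adopting a tool.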


3 responses to “Machine Bias”

  1. Evelyne Tsang

    Hi Ying,

    You raise very good questions about using algorithms to determine futures. As mentioned in your podcast on bail recommendations: “we trust the math, we trust the machine, and we just move forward.” This assumption that a machine is unbiased can have a profound impact on a human’s life.
    The question of machine bias reflects the current issues of systemic racism, sexism, and socio-economic disparity. Why is race important? Why are gender or age important? How can any external factor relating to how a person grew up or their environment be important? When analysing data, we need to define the word “important”. Knowing the demographics can help mitigate bias by revealing which perspectives are missing or misinterpreted.

    Cassie Kozyrkov gives a great example of perspective in her video “How to fight AI bias” https://youtu.be/zUqmo9eNs2M

    Perhaps this is one more item to keep in mind when using machine learning to help supplement human thinking.


  2. philip pretty

    Hi Ying Gu, this is a very interesting read you have presented. I am especially intrigued by the bias that is coded into programs and used to make important split-second decisions in a world that is becoming more and more automated. Another interesting piece of the AI and Big Data debate is its many facets. You describe the objective goals of some AI above as “designed to solve climate change, traffic congestion, or to model a pandemic.” This is scientific in nature. Another facet sees the use of data in marketing to entice consumers and drive capitalist urges. I would even suggest, of course, that AI and big data are probably designed with an ideological bias as well. There are many examples of ideological bias in social platforms; consider the promotion of liberal ideas versus the stifling of conservative ones by Twitter and Facebook, for example.
    After listening briefly to the podcast above, I found the predictive text exercise demonstrates that humans still need to rein in the technology we use daily.
    Vincent Tabora suggests that algorithms are not inherently biased, but that the results of using them can have unintended consequences (https://medium.datadriveninvestor.com/algorithms-are-not-inherently-biased-its-a-result-of-expectations-with-unintended-consequences-1d8c144f52af).

    Thanks again Ying,
    I really enjoyed this informative post.


    1. Ying Gu

      Hi Philip,

      Thanks for that article! It was a great read on how bias exists, even if some parts were too technical for someone with zero coding background to fully understand (I wish I knew how to code!). Your point that algorithms are not biased, but their use and interpretation are, is a great one. After all, they are just lines of code that operate according to our will. Dr. Shannon Vallor describes algorithms as mirrors that reflect human bias. In a talk entitled “Lessons from the AI Mirror,” she argued that AI systems are just extensions of us, tools that amplify our thoughts. Because they can churn through volumes of data far faster than we can, the amount of prejudice that comes out is magnified. I think your article complements her talk nicely. She also notes that in the news, AI and algorithms are called racist, when really it is us who are.


