Task 11: Detain/Release or Algorithms of Predictive Text

On the whole, the Detain/Release simulation inspired more questions than it answered. It gave extremely helpful but limited information that made me want to know more about the circumstances of each case. Any time the bars were yellow or red, indicating a high likelihood of a failure to appear, of committing a crime, or of violence, I had more questions than answers to help me make such an important judgement about somebody’s life. The prosecutor’s suggestion was often the first thing that I looked at, followed by the likelihood of problems occurring if the person were released. I carefully examined the defendants’ statements at first, but as I progressed through my choices I started looking at them less and less unless it was a grey-area case. Part of what prompted this fairly callous procedure for me was the pair of bars indicating how full the jail was and the community’s fear score. Seeing either of those bars go up ended up feeling like points in a video game, and it made every subsequent decision more difficult as I tried to maintain a sense of fairness and empathy while keeping one eye on my “scores”. This system definitely made me more efficient than I would have been going through full case files, but it removed some necessary human empathy and consideration from the process.
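To make concrete how little it takes for those bars to turn into score-keeping, here is a small hypothetical sketch of the kind of rule I found myself drifting toward. The risk categories, point values, and capacity threshold are entirely my own inventions for illustration; the simulation never reveals how its recommendations are actually generated.

```python
# Hypothetical sketch only: the real Detain/Release simulation does not publish
# its scoring logic, so these categories and thresholds are invented stand-ins
# to show how a few risk bars can reduce a complex judgement to points.

RISK_LEVELS = {"green": 0, "yellow": 1, "red": 2}

def recommend(flight_risk, crime_risk, violence_risk, jail_capacity_used):
    """Return 'detain' or 'release' from colour-coded risk bars.

    flight_risk, crime_risk, violence_risk: 'green', 'yellow', or 'red'
    jail_capacity_used: fraction of jail beds already occupied (0.0 to 1.0)
    """
    score = sum(RISK_LEVELS[r] for r in (flight_risk, crime_risk, violence_risk))
    # When the jail is nearly full, the bar for detention quietly rises,
    # the same pressure that made my own choices feel like managing a score.
    threshold = 3 if jail_capacity_used < 0.9 else 4
    return "detain" if score >= threshold else "release"

print(recommend("yellow", "red", "green", jail_capacity_used=0.5))   # detain
print(recommend("yellow", "red", "green", jail_capacity_used=0.95))  # release
```

The same defendant gets a different recommendation depending on how full the jail is, which is exactly the kind of hidden trade-off that made the exercise uncomfortable.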

Algorithms seem like an easy way to streamline processes that can be overwhelming or that have large potential for human error. I thought that what O’Neil (2016) said about how “[Algorithms] show up when there’s a really difficult conversation that people want to avoid” was very astute. When we make something quantitative that should be qualitative in nature, it can quickly shut down opposition by elevating the status of processes or decisions through their mathematical associations. Sometimes there are no right answers to problems or situations in the moment; there is only human judgement, which is fallible but sometimes a better option than many of the processes she described as currently being the purview of algorithms, such as personality tests for hiring and using test scores to assess teacher performance.

Algorithms can be positive, and I would argue that they are needed as an adaptation to keep up with the expected pace of the world today, whether at work, at school, or in our home lives. Despite the benefits algorithms offer in streamlining our day-to-day lives, I do have concerns about the direction some of them are taking in the effort to build more effective schools. When I worked in an English secondary school, I had concerns about how student learning data was interpreted by algorithms, in a way similar to what Cathy O’Neil described. The school that I worked in was given a “four” in all evaluation categories as determined by OFSTED (the Office for Standards in Education). A “four” carries the descriptor “inadequate”, and the school was promptly put into a state called “special measures”, which leads to increased scrutiny, benchmark goals to be met, and more frequent visits from OFSTED inspectors. This school happened to be in an area where half of the student body was currently on benefits (welfare) or had been within the past five years. Many of the challenges associated with educating these students came from unmet social-emotional needs, trauma, and generational poverty, none of which were addressed by the inspections.

Schools in England have computer systems that evaluate students’ past performance, based on data entered into the system, and generate predictions of each student’s success for the following year. The system uses levels of progress rather than grades to evaluate success, and teachers were told to put large stickers on the front of every student’s workbook showing their current level of progress and their expected level by the end of the year. The difficulty with this system was that the criteria for determining the predicted levels were not shared with staff members or students, yet the predicted levels were very publicly displayed. This led students to wonder why someone with the same current level as them would have an entirely different predicted level. It often left students feeling that they were inherently less smart or less likely to be successful than the student sitting next to them, despite showing similar current skill levels.
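Because the criteria were never shared, I can only guess at what the model behind those stickers looked like. The sketch below is purely hypothetical: the “expected progress” offset and the cohort adjustment are assumptions I am making to illustrate how even a very simple, hidden formula could produce the puzzling targets my students saw.

```python
# Hypothetical sketch only: the actual criteria behind the predicted levels
# were never shared with staff, so this is a guess at the simplest kind of
# model such a system could use: a fixed "expected progress" offset applied
# to prior attainment, adjusted by an opaque cohort factor.

def predicted_level(prior_level, cohort_adjustment=0.0):
    """Predict an end-of-year level from a pupil's prior attainment.

    prior_level: the level the pupil arrived with (e.g. 4.0)
    cohort_adjustment: an assumed hidden tweak based on the pupil's
        demographic group; it is exactly this kind of unseen factor that
        could give two pupils with identical current levels different targets.
    """
    EXPECTED_PROGRESS = 2.0  # assumed "two levels of progress" rule of thumb
    return round(prior_level + EXPECTED_PROGRESS + cohort_adjustment, 1)

# Two pupils with the same starting level but different hidden adjustments:
print(predicted_level(4.0))                          # 6.0
print(predicted_level(4.0, cohort_adjustment=-0.5))  # 5.5
```

If something even this simple sat behind the stickers, the students’ confusion would be entirely reasonable: nothing visible to them explains why their targets differ.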

The algorithm that determined the scores was veiled in secrecy, and to make matters more confusing, administrators would sometimes input different scores than those submitted by teachers to give the impression that they were meeting goals set out by OFSTED, which skewed the data further. Students with very low predicted levels of progress would sometimes meet their predicted score before the end of the year and lose all motivation to keep learning, because they had already reached the level the data had laid out for them, leading to something of a self-fulfilling prophecy. Ultimately, I found the lack of transparency about the criteria on which the algorithm based its predictions, and the public nature of the data, overwhelmingly harmful to the students in my class. These programs were also applied to schools across the country in the same way, regardless of students’ personal circumstances, and didn’t take into account the complexities of my students’ lives outside of school. I wonder what biases exist within the program that might further disadvantage students in this low-income neighbourhood.

The question of how to speed up student progress is complex, and while we do not use this same system in Canada, we do try to motivate or evaluate students using a number of other algorithms, even if they are applied less formally. Games like Prodigy or No Red Ink have become increasingly popular for helping students build skills in Math and Language Arts while providing teachers with data on student progress. While entertaining for students, they can also give parents and teachers data about where students sit in relation to grade level. I have often wondered exactly how they determine this, as the process for comparing a child to grade level is not clearly laid out. It would be easy as a teacher to base assessment of student learning on these algorithms, especially given class sizes and marking load, but I feel that it would do a disservice to students to put too much weight on these assessments given the veiled nature of the criteria for success.
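I don’t know how Prodigy or No Red Ink actually makes that comparison, so the example below is an invented one: a simple mastery cut-off applied to questions tagged by grade. The threshold, and the whole approach, are assumptions of mine, meant only to show how much is hidden inside a single “grade level” number.

```python
# Hypothetical sketch only: neither Prodigy nor No Red Ink documents exactly
# how it maps a child's answers to a grade level, so this invented example
# shows one simple way it *could* be done: credit the highest grade band in
# which the student answers most questions correctly.

def estimated_grade_level(results_by_grade):
    """Estimate a grade level from per-grade accuracy.

    results_by_grade: dict mapping grade (int) to the fraction of that
        grade's questions answered correctly, e.g. {3: 0.9, 4: 0.75, 5: 0.4}
    """
    MASTERY_CUTOFF = 0.7  # invented threshold; a real product may weight items,
                          # track response times, or use an adaptive model
    mastered = [g for g, accuracy in results_by_grade.items() if accuracy >= MASTERY_CUTOFF]
    return max(mastered) if mastered else min(results_by_grade) - 1

print(estimated_grade_level({3: 0.9, 4: 0.75, 5: 0.4}))  # 4
```

A parent or teacher only ever sees the final “grade 4”, not the cut-off or the item mix that produced it, which is why I am reluctant to lean on these numbers for assessment.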

However, algorithms have a place in education, provided that the people creating them are transparent about how they function. The place where I can see them having the most positive impact on student learning is in differentiating instruction. By having students work in stations, with some groups using computer programs that tailor material to their skill level, teachers can better support the entire class and create a more individualized learning environment. There is tremendous potential for AI in education, but teachers need to be given the chance to work with those in the tech sector in developing these programs to avoid some of the pitfalls of current educational AI. Much like the detain and release algorithm, educational AI tends to devalue the empathy for and consideration of the student as a whole that must be part of education for it to be effective.

References:

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (First edition). New York: Crown.
