Education is not about a bunch of snakes and they don’t have an interview with Rachel during the day.
On its own, I enjoy the text-prompted sentence above for its nonsensical grammatical correctness; after all, everyone knows that snakes always, without exception, interview with Rachel in the evening. Otherwise, how would education or society function? Really.
More importantly, I find this sentence illustrative of the point made in this module’s fascinating and thought-provoking podcasts, videos, and readings: algorithms can make or inform widespread decisions that impact people in negative ways and perpetuate discriminatory cycles. Potent examples include hiring decisions based on responses to mental health questions; parole, sentencing, and recidivism predictions based on reported family crime history and race; policing procedures that encourage questionable workarounds, legalistic summons quotas, and unequal treatment of comparable crimes; predictive text prompts that employ historic gender stereotypes; and evaluations of teaching based on subjective expectations of student performance, without verifiable justification of the legitimacy or importance of the data.
As a lengthy but relevant aside to the main critique, and mostly as timely personal processing: the last example above resonates with me because I serve on the student evaluations of teaching team in the Centre for Teaching, Learning and Technology (CTLT) at UBC. Student evaluation of teaching is a very political topic in higher education. Although UBC students may underestimate the weight of evaluations, many people have strong, divergent opinions on the subject.
While CTLT promotes, considers, and responds to relevant research on student evaluation in its application of UBC Senate’s policy, which states, “as part of a larger strategy to support and foster quality teaching and learning at UBC . . . Student evaluations should be considered as part of an overall teaching evaluation system that includes regular peer review, faculty self-assessment, and other forms of assessment, as appropriate,” many fundamentally disagree with the value placed on evaluations. For example, the UBC Faculty Association holds, “On the matter of student evaluations of teaching (SEoT), our position is clear: we propose that these measures not be used in the summative evaluation of teaching for appointment, reappointment, promotion, and tenure. The invalidity of these instruments has been known for a long time.”
Following an intense, personally rigorous three-week preparation as a member of an understaffed team, we launched over 65,000 evaluations across UBC’s campuses last night. By 10 this morning I had already read the following messages from a student:
“I’m not interested in filling these out. Every year I [receive] more than 10 emails to do these. Please remove me from this list so I don’t receive these again.”
and an instructor:
“Why are you still continuing with running this survey which has been totally discredited?
“This is a sad testimony for a university devoted to evidence.”
I expect more to follow.
Although my role is to support the evaluations from a technical perspective, and I do not use the data to make administrative decisions, reading and responding to such feedback about UBC’s mandated feedback mechanism feels personally disheartening. Even so, I understand that when it comes to large datasets and the theories driving decision-making processes, there is a temptation to overvalue and oversimplify the objectivity, neutrality, and significance of a number. While numbers may be pure and neutral, data rarely, if ever, is. Data always has a context; data requires evaluative decisions in its acquisition, preparation, and interpretation. And as Ryan Hamilton argues in How You Decide: The Science of Human Decision Making, humans are susceptible to making decisions based on irrelevant data; we are wired to decide based on some reason, even a completely irrelevant one, rather than to decide without any reason we can articulate at all.

Therefore, although the messages above may show candid disregard for their impact on the well-intentioned, hard-working people commissioned to administer the University’s student evaluation of teaching policies, they indicate how some people view student evaluations of teaching at UBC. They also voice a legitimate and necessary concern: that UBC Senate’s endorsement of students’ elective responses to six Likert-scale questions as a mechanism for revealing an objective measure of instructional effectiveness may be like dressing up a far-reaching, in-vogue, hepatoscopic fairy tale as an unadulterated data panacea.
The example sentence above is funny in a Mad Libs®-ian fashion, and, as the fruit of a predictive text generator, elusive and incompetent. But it exemplifies the point that widespread algorithms informing important decisions can cause problems.
How such algorithms will continue to shape reading and writing, I do not know. I am thankful that text-related algorithms may reduce the number of times I will still encounter gems such as, “Their is a party nexp Saprturday nite!” If it were a race between the infinite monkey theorem monkey and predictive text algorithms to produce Romeo & Juliet, I would bet on the infinite monkey theorem monkey, albeit a coin toss of equally uninspiring odds. However, might algorithms be capable of upstaging humans in generating shorter propaganda messages such as advertisements? That seems more likely and conceivable, given our societal direction.
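For a rough sense of just how uninspiring those odds are, here is a back-of-envelope estimate of my own, assuming a 27-key typewriter (the letters plus a space bar) and a play of roughly 130,000 characters; both are round figures rather than exact counts. The chance that a single random run of keystrokes reproduces the play exactly is about

\[ \left(\tfrac{1}{27}\right)^{130{,}000} \approx 10^{-186{,}000}, \]

odds against which even the clumsiest predictive text engine need not feel embarrassed.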
At the intersection of higher education and the consequences of algorithmic decision-making using big data, I believe WhatsApp’s predictive text prompt said it best: Education is not about a bunch of snakes and they don’t have an interview with Rachel during the day. I suppose each of us can now parse what the interview and the day are, and who Rachel and the snakes are, in that provocative, subversive statement.
If I were to use predictive text as a reply:
Hi David, I feel that the olden days in my life needs to be able to support the idea of what you are doing – the same type of mindful acrobatics and then delving into the office on Monday open the door to get to see the equality of opportunity.
Basically, I find the use of predictive text to be somewhat of a black mirror. You get a reflection of how you relay messages on your device.
It is a pity that you get such negative feedback on the surveys. It is not a perfect system, but it opens up the opportunity for discussion, and hopefully, amelioration.
Keep calm and survey on!
Well played, “mindful acrobatics”, Evelyne! Your response reminds me of Task 6, where I wondered if classmates would interpret the instruction, “Do not write anything orthographically,” to include their comments. It would have been interesting to try to converse about guessing titles with only emojis!
As expected, we received a few more complaints over the last few weeks. Even so, I agree with your hope that evaluations will be part of improving instruction at UBC. In some cases we shared this post with students: https://www.reddit.com/r/UBC/comments/a18ecp/who_reads_course_evaluations/