Link to Original Post: Task 11 – Detain/Release or Text-To-Image (Lee, 2023)
Comment
Hi Elaine,
I enjoyed reading your post for this task!
I found your reflection on AI's implications in the educational environment particularly interesting, as it's something I think about often in my own practice. Although we work in different instructional domains (post-secondary and K-12), the proliferation of generative AI tools like ChatGPT and DALL-E 2 presents real challenges for us as practitioners as we navigate the best way to use these tools responsibly while safeguarding students from discriminatory biases. This underscores the need to support the development of digital literacies that are responsive to these tools, especially as they become integrated with other tools we might work with on a daily basis (e.g., Microsoft's new AI "Copilot" in Word comes to mind; Spataro, 2023).
You address the fact that most of the dominant AI tools are proprietary, closed systems, meaning that users and educators have very little transparency into which datasets were used to train them, and therefore little ability to understand their biases. Furthermore, you note the historical lack of engagement between AI vendors and educators, and its implications when these tools are put into practice. I note an instance of this in my own Week 11 post, where the coded racial biases of an AI-powered risk assessment system led to harmful categorizations of Black students as low achievers (Pasquini & Gilliard, 2021). This was further compounded by the fact that teachers were not properly trained on how to use the risk assessment system (Feathers, 2021).
I'm also reminded of automated test proctoring tools, which have been known to rely on algorithms trained to detect cheating based on specific behavioural factors, such as gaze and movement. Such proctoring systems have been widely reported as discriminatory, for example failing to detect the faces of Black students, or flagging involuntary physical behaviours that may be present in students who are neurodivergent (Corbyn, 2022).
Going forward, I'm attentive to how similar biased patterns may emerge in the generative language models now being integrated into the tools we already use (such as Word). With that said, I fully align with your stance that educators and administrators must be diligent and critically aware of how these tools operate before putting them into action.
Reflection
Elaine's post prompted me to extend my thinking around the implications of generative AI systems for literacy, especially given the oppressive biases that have been prevalent in AI and machine learning systems over the years. Since the start of this course, I've found myself thinking more deeply about the surge of generative AI systems and the New London Group's (1996) suggestion to foster digital literacy through the use of multimodal communication tools. I've wondered how these tools can be responsibly integrated into digital literacy instruction, given that there is so little transparency around how they operate. What is certain is the need for educational practitioners to apply a critical lens to the benefits and risks of these systems when designing instruction, so that harmful biases aren't perpetuated through them. Stommel's (2014) Critical Digital Pedagogy could provide a useful framework here, as it encourages learners to use and critique digital tools while developing a systems-level awareness of how they work.
When comparing our two posts, Elaine's focus was more on generative AI systems, while mine was more on the Detain/Release simulation. We both use text and image representations in our posts: I use a screenshot of Detain/Release to share the outcome of the simulation, while Elaine shares a screenshot of a biased AI-generated image made in Craiyon. By sharing a real-world example, Elaine powerfully supports her key points, capturing how bias operates in the context of AI. Despite working in different instructional domains (higher education and K-12), we both bring a pedagogical lens to our respective posts by considering the impact of generative AI in the context of education.
References
CAST. (2018a). Universal Design for Learning Guidelines version 2.2. http://udlguidelines.cast.org
Corbyn, Z. (2022, August 26). 'I'm afraid': Critics of anti-cheating technology for students hit by lawsuits. The Guardian. https://www.theguardian.com/us-news/2022/aug/26/anti-cheating-technology-students-tests-proctorio
Feathers, T. (2021, March 30). Texas A&M drops "race" from student risk algorithm following Markup investigation. The Markup. https://themarkup.org/machine-learning/2021/03/30/texas-am-drops-race-from-student-risk-algorithm-following-markup-investigation
Lee, E. (2023, March 20). Task 11: Detain/release or text-to-image. ETEC 540 Reflections. https://blogs.ubc.ca/etec540elainelee/2023/03/20/task-11-detain-release-or-text-to-image/
The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92. http://newarcproject.pbworks.com/f/Pedagogy%2Bof%2BMultiliteracies_New%2BLondon%2BGroup.pdf
Pasquini, L., & Gilliard, C. (Hosts). (2021, April 15). Between the chapters #23: Looking in the black box of A.I. with @hypervisible (No. 52) [Audio podcast episode]. In 25 Years of Ed Tech. Laura Pasquini. https://25years.opened.ca/2021/04/15/between-the-chapters-artificial-intelligence/
Spataro, J. (2023, March 16). Introducing Microsoft 365 Copilot – your copilot for work. Official Microsoft Blog. https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/
Stommel, J. (2014, November 17). Critical digital pedagogy: A definition. Hybrid Pedagogy. https://hybridpedagogy.org/critical-digital-pedagogy-definition/