Prompts and AI Outputs
For this task, I explored how Microsoft Copilot represents various educational settings. I have included each prompt below along with the corresponding image it generated:
Generate an image of a Grade 1/2 class during Math centers:

Generate an image of a primary teacher and a principal talking in a school hallway:

Generate an image of a futuristic Grade 7 classroom where AI helps them learn:

Generate an image of a secondary history class with a teacher and students:

Generate an image of a Grade 1/2 classroom showing diversity and inclusion:

Accuracy and Differences
Were the results relatively accurate?
Overall, the images aligned with what I expected to see: mainly female teachers, a white male principal, and a robot replacing the teacher in the futuristic classroom. This accuracy is telling because the AI reproduced familiar social patterns rather than offering a neutral or objective representation of classrooms.
Were the images what I had in mind? What differed?
Most of the images matched my expectations. Every teacher was female, with some representation of racial minorities, while the principal was consistently portrayed as a white male. This mirrors patterns discussed in the Machine Bias episode of the You Are Not So Smart podcast (McRaney, 2018), where predictive text assumes a “nurse” must be female and a “doctor” must be male.
The classrooms were racially diverse but showed no visible disabilities. Everyone appeared cheerful, which aligned with my expectations. What differed was that in the secondary history class, a robot unexpectedly appeared beside the teacher, which suggests the AI may have carried over elements from other prompts.
Overall, the results reflected typical gender and authority roles while showing limited but present racial diversity among both teachers and students.
What can I infer about the model or training data?
Based on the results, it appears the AI’s training data reflects historical and societal trends, which produces predictable patterns but also limits diversity and representation. The omission of students with disabilities further highlights the model’s narrow understanding of inclusion. This aligns with Cathy O’Neil’s (2017) argument that algorithms can unintentionally reproduce the biases embedded in the cultural data they are trained on, which reinforces the need for careful oversight, especially in high-stakes contexts.
AI Process and Training Data
Recurring patterns, such as every teacher being female and the principal being a white male, suggest that the AI reproduces social regularities learned from its training data. This reflects how AI systems often default to whiteness and maleness in positions of authority. The task also made me reflect on what I learned during my summer AI institute, where many readings discussed how representation in AI systems shapes how people see themselves and what roles they imagine for their futures.
Noble (2018) argues that AI systems often reinforce social hierarchies, shaping self-image and career expectations. In educational contexts, this is especially troubling: the images children see influence their sense of what roles are “for” them. Representation matters, and diverse portrayals can be empowering.
Crawford (2021) similarly stresses the need for transparency in training data so that the assumptions built into AI systems can be properly evaluated. Users can write thoughtful prompts, but they are still constrained by the model’s underlying biases, which means developers must take responsibility for designing systems with more diverse defaults.
At the same time, this task reminded me of Coleman’s (2021) critique from my summer course: why do we continue to train AI using rigid, predefined categories at all? Human and animal learning is largely unsupervised; we observe and infer patterns without being explicitly told what to look for. Coleman calls for a shift toward “Wild AI,” where models learn through open-ended interaction rather than fixed datasets. Examples like AlphaGo demonstrate that AI can develop novel strategies and insights when not constrained by rigid categories, suggesting a hopeful path for reducing bias and reimagining representation.
Final Thoughts
This task demonstrated how AI can generate realistic and creative images, but it also mirrors societal biases. Testing multiple prompts, including primary and secondary classrooms, futuristic and realistic scenarios, and diversity-focused settings, helped me explore the assumptions built into the model.
It reinforced the importance of critical evaluation and ethical oversight in AI design, especially when these tools are used in educational spaces. To be honest, I wasn’t surprised by the patterns in the images. I expected the teacher to be female and the principal to be male, and that expectation alone reveals how deeply social norms shape both our thinking and the technologies we create.
Ultimately, this exercise changed how I think about “neutral” AI tools. Every generated image is a reflection of cultural memory encoded into data, and recognizing this is essential for using these tools responsibly.
References
Coleman, B. (2021). Technology of the surround. Catalyst: Feminism, Theory, Technoscience, 7(2), 1–21.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
McRaney, D. (Host). (2018, November 21). Machine bias (rebroadcast) (No. 140) [Audio podcast episode]. In You Are Not So Smart. SoundCloud.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
O’Neil, C. (2017, July 16). How can we stop algorithms telling lies? The Observer. https://www.theguardian.com/technology/2017/jul/16/how-can-we-stop-algorithms-telling-lies