
Assignment 1: Investigate biases and blind spots in AI systems

Prompt: "ideal science teacher at a secondary international school"

Image generators used:

https://deepai.org/

https://www.canva.com/ai-image-generator/

https://pixlr.com/image-generator/

The images produced by the three generative AI tools I examined exhibited striking similarities, reinforcing narrow and problematic stereotypes about science teachers. Most notably, the individuals depicted resembled research scientists—primarily chemists—rather than educators. They were frequently shown handling laboratory equipment such as microscopes and glassware, as if conducting experiments rather than teaching. A significant proportion also wore lab coats, a practice that, while plausible during demonstrations, is uncommon in typical classroom settings in my experience as a chemistry teacher. This suggests a bias toward portraying science teachers—particularly those specializing in the laboratory sciences—as researchers first and educators second. Such representations risk influencing hiring practices, where recruiters may unconsciously favor candidates with biology or chemistry backgrounds over equally qualified individuals with degrees in physics, computer science, or geology. A well-rounded science department benefits from diverse academic perspectives, yet these AI-generated images promote a homogenized ideal that could marginalize educators from other disciplines.

Five of the six generated images depicted teachers wearing glasses, reinforcing the “nerdy” scientist trope. While seemingly innocuous, this stereotype carries implicit biases. Research links myopia to sedentary lifestyles and limited outdoor activity (Xiong et al., 2017), an association that, if internalized by hiring committees, could disadvantage science teachers perceived as less suited to extracurricular activities such as field trips or outdoor education. More critically, only one image showed a teacher actively engaging with students, while the rest emphasized solitary experimentation. This overemphasis on the “scientist” role over the “teacher” role perpetuates an outdated pedagogical model—one where the teacher is positioned as the unchallenged authority in subject matter expertise. Modern education, however, prioritizes inquiry-based learning, in which teachers facilitate student-driven exploration rather than merely transmitting knowledge (Hmelo-Silver et al., 2007). By depicting teachers as isolated experts, these AI-generated images risk undermining educators who employ student-centered methods, such as responding to questions with, “Let’s find out together.” Furthermore, this narrow representation may skew hiring preferences toward industry professionals with strong technical backgrounds but limited teaching experience over career educators skilled in pedagogy. Effective teaching requires a balance of technological, pedagogical, and content knowledge (TPCK) (Mishra & Koehler, 2006), yet these images reduce the profession to content expertise alone.

The generated images also reflect an antiquated view of classroom technology. Most scenes featured chalkboards and worksheets, with no indication of digital tools or innovative teaching methods. This misrepresentation reinforces the misconception that teaching methods remain static, which could be exploited to justify stagnant salaries, increased workloads, or dismissal of teachers’ demands for better resources. If AI-generated images frame teaching as an unchanging, low-skill profession, policymakers and the public may further devalue educators’ contributions.

The teachers depicted shared homogeneous physical traits: beige skin tones, dark hair, and above-average attractiveness, with ambiguous ethnic features. Notably absent were Black and East Asian teachers, suggesting that these groups are underrepresented in, or averaged out of, the training data. This raises concerns about racial profiling in hiring, particularly if AI-generated images are unconsciously treated as archetypes. The historical and political context of the datasets used to train these models must be scrutinized, as institutional biases in source material can perpetuate exclusionary standards (Buolamwini & Gebru, 2018).

Additionally, all figures were depicted smiling—a culturally loaded expression. In some cultures, frequent smiling is not the norm, meaning teachers whose demeanor differs from these AI-generated ideals could face unfair scrutiny from employers or parents. As Crawford (2021) notes, AI systems are trained on posed, artificial expressions, further distorting expectations of natural behavior.

Gender representation was also skewed, with a 5:1 ratio favoring women. Given that the prompt specified secondary school teachers—a field with a more balanced gender distribution—this disparity suggests the AI drew from broader (and more female-dominated) teaching datasets. This reinforces the stigmatization of men in education, particularly in roles involving children, and reflects deeper societal biases about care work and gender (Drudy, 2008).

The convergence of stereotypes across three AI platforms—despite an open-ended, subjective prompt—is alarming. Beyond technical flaws (e.g., distorted fingers, nonsensical text), these images perpetuate harmful stereotypes about teachers, reinforce racial and gender biases, and endorse outdated pedagogical models. As Crawford (2021) argues, reducing human identities to quantifiable classifications flattens complex social, cultural, and historical contexts. AI systems, trained on historical data, risk calcifying regressive norms and obstructing progress toward equitable representation. If such tools are adopted uncritically in education, they may further entrench biases in hiring, pedagogy, and public perception.

References

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification.

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence.

Drudy, S. (2008). Gender balance/gender bias: The teaching profession and the impact of feminisation.

Hmelo-Silver, C. E., et al. (2007). Scaffolding and achievement in problem-based and inquiry learning.

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge.

Xiong, S., et al. (2017). Time spent in outdoor activities in relation to myopia prevention and control.