Task 11: Option 2 Text-to-Image

Prompt to Copilot: can you create an image of a family playing a board game around a table?

Here is the image it generated, along with this message: Here’s your image — a cozy family game night scene is ready now.

As the image was generating, I thought back to Turing’s provocative question from 1950: “Can machines think?” AI has not definitively shown that machines can “think” in the human sense, but it has demonstrated that they can perform many tasks Turing believed would qualify as evidence of thinking. Modern AI systems can generate responses that are often indistinguishable from human ones. They can learn from data, adapt, and improve without being explicitly programmed for every situation. Like humans, AI systems can reason, solve problems, and use natural language to communicate, answer questions, and carry out instructions in ways that feel intelligent to most people. These abilities show that machines can behave as if they think, exactly the criterion Turing proposed. Although machine thinking is not the same as human thinking, modern AI has narrowed the gap far more than Turing could have imagined.

What I pictured in my mind and what was generated were certainly different, even though the underlying idea was the same. I imagined a family sitting around a dining room table with a game board in the middle, each person wearing a joyful expression. I expected the image to be an external or overhead view that captured the full scene, rather than a frontal view where every family member faces the camera.

The generated photo also depicted what might be considered a “typical” family: a mom, dad, son, and daughter. In contrast, my mental image included a more diverse group, possibly a mix of family and friends of different ages and ethnicities.

Based on these results, I can infer that the model’s training data likely contains a high volume of conventional or stereotypical portrayals of families. Heilweil (2022) mentions that AI generates ideas based on what it has read and processed before. As a result, the model defaults to these common patterns when asked to generate an image, rather than representing broader diversity in age, group composition, or ethnicity. This suggests the model reflects the norms and biases present in its training images, which can limit the variety of outputs it produces.

Out of curiosity, I typed the same prompt into ChatGPT, and this is what it generated:

The images are quite similar. This one shows a young, happy family playing a game together, but there isn’t much depth to the photo. Like many AI-generated images, it looks staged or scripted. This similarity isn’t surprising, since AI image generators learn from large datasets containing millions of publicly available or licensed images. These datasets often include common, stereotypical, stock-photo-style depictions of families, which leads to nearly identical outputs across different platforms. As Ananya (2024) notes, many AI systems tend to default to familiar stereotypes, which can unintentionally reinforce and amplify existing cultural biases. This highlights the importance of critically examining AI-generated content, and being intentional when prompting, to ensure more inclusive and diverse representations.

References:

Ananya. (2024). AI image generators often give racist and sexist results: Can they be fixed? Nature, 627(8005), 722–725.

Heilweil, R. (2022, December 7). AI is finally good at stuff. Now what? Vox. https://www.vox.com/recode/2022/12/7/23498694/ai-artificial-intelligence-chat-gpt-opena

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
