See the full text conversations here: Experiment 1: Text to Infographics and Experiment 2: Text to Photos
Introduction
This week in ETEC 540, we learned about algorithms and discussed some of the technological applications, such as generative pre-trained transformers (GPT) and text-to-image generative models, that are central to debates about technology and its broader implications for the human experience.
Task: Text-to-Infographic and Text-to-Image with Microsoft Copilot
This week, our task was to explore Microsoft Copilot. As someone who hadn’t used it before, I was eager to learn a new tool. We were first asked to read the instructions and additional details about the platform: https://canvas.ubc.ca/courses/4318/pages/microsoft-copilot-2. Next, we were asked to give Copilot a few prompts and download any relevant or interesting images.
I decided to run two experiments with Copilot: one generating infographics from my course notes to see how it handled text-to-graphic creation, and one generating images of people to see how it handled human diversity.
I also asked a few text-based questions first, wondering whether that text data would feed into the algorithm before it created the images.
Reflection
“Algorithms are a set of instructions with which to solve problems” (TED-Ed, 2013)
Although I cannot see the algorithm or training data behind the text and images I generated with Copilot, it’s clear that the tool follows a formulaic and systematic approach. The responses and visuals appeared to follow preset instructional patterns, and the infographics and images consistently used similar colours and styles throughout the experiment.
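As a concrete illustration of that definition, here is a minimal sketch in Python. It’s a generic example of my own, not anything from Copilot’s internals: a fixed set of instructions that solves the problem of finding a name in a sorted list, producing the same output for the same input every time.

```python
# A minimal example of an "algorithm" in the quoted sense: a fixed set of
# instructions that solves a problem. This is binary search, a classic
# divide-and-conquer algorithm; given the same input, it always follows
# the same steps and returns the same answer.
def binary_search(sorted_names, target):
    low, high = 0, len(sorted_names) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_names[mid] == target:
            return mid                 # found: return its position
        elif sorted_names[mid] < target:
            low = mid + 1              # discard the lower half
        else:
            high = mid - 1             # discard the upper half
    return None                        # not in the list

names = ["Ada", "Alan", "Grace", "Joseph", "Kory"]
print(binary_search(names, "Joseph"))  # 3
```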
Experiment #1: Infographics
I began by asking Copilot for information about Joseph Weizenbaum’s ELIZA program, hoping to generate some infographics based on the results.
Here’s what I observed:
- It generally did what I asked using natural language processing.
- The images it created looked very similar to those I’ve generated in ChatGPT.
- When I asked for its sources, it didn’t provide external links until I specifically requested articles.
- It was difficult to determine where the information was actually coming from.
- The sources it eventually cited included Wikipedia, Goodreads, Bookey, and a personal website by Kory Mathewson.
- Because Wikipedia, Goodreads, and Bookey are all crowdsourced, and Mathewson’s site is personal, the accuracy and reliability of the information are uncertain. If the data feeding the algorithm is biased, then the output from Copilot will reflect that bias.
- The website link shown at the bottom of one generated image was entirely fabricated.
- I noticed that Copilot felt conversational, and I quickly realized I was beginning to anthropomorphize it.
- It described its approach as “statistical language processing” (Storied, 2021); see the toy sketch at the end of this section.
- The longer I interacted with it, the more mistakes it began to make in the images.
(Oddly, the ELIZA conversation only let me generate five images and, for some reason, didn’t let me log in; the photos experiment worked fine using my UBC login for Copilot.)
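To unpack what “statistical language processing” might mean, here is a toy sketch in Python. This is my own illustration of the general idea, not Copilot’s actual method: a model that “predicts” the next word purely by counting how often words follow one another in its training text. In miniature, it also shows why biased training data produces biased output: the model can only echo the patterns already in its corpus.

```python
# Toy "statistical language processing": predict the next word by counting
# which word most often follows the current one in a small training corpus.
# Real systems are vastly more complex, but the principle is the same:
# the output can only reflect patterns present in the training data.
from collections import Counter, defaultdict

corpus = (
    "the teacher helps the student "
    "the student uses the computer "
    "the teacher watches the student"
).split()

# Count bigrams: how often each word follows each other word.
bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))      # "student" -- the corpus's most common pattern
print(predict_next("teacher"))  # "helps" -- tied with "watches"; first seen wins
```

If this tiny corpus over-represented, say, male students, every downstream prediction would too; the same logic scales up to the biases I noticed in the generated images.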
Experiment #2: Photos
I began by asking Copilot to generate an image of a person using a computer with a teacher watching over their shoulder, and then continued adjusting the image through follow-up prompts.
Here’s what I noticed:
- There was noticeable bias in the appearance of the people it generated, including a lack of diversity.
- The teacher was consistently portrayed as an attractive, youthful woman.
- The tool alternated between depicting the teacher as a white woman and a Black woman, with no prompting.
- When I asked for the teacher to be shown as a professor, the image shifted to an older, grey-haired man.
- The computer keyboard began to glitch in the images.
- The students were mostly depicted as male, although the teacher was shown helping a female student.
Overall, the images looked fine at first glance for a quick image generator, but a closer look revealed noticeable biases, repetitive patterns, and increasing glitches over time. It became clear how easily AI-generated images can reinforce and perpetuate existing biases.
Copilot includes the following disclaimer for its product, which I’d advise following if you’re thinking about using it: “Do not rely on the generated content. It can be biased and inaccurate. Think critically about the generated response and determine whether or not it makes sense within the context of your work.”
AI disclaimer: I used ChatGPT to edit my writing for clarity and grammar. All ideas and final edits are my own.
References
UBC. (n.d.). Microsoft Copilot [Class handout]. Canvas. https://canvas.ubc.ca/courses/4318/pages/microsoft-copilot-2
Microsoft. (2024). Copilot [Large language model]. https://copilot.microsoft.com/
OpenAI. (2025). ChatGPT (GPT-5) [Large language model]. https://chat.openai.com/chat
Storied. (2021, May 5). From Alan Turing to GPT-3: The evolution of computer speech | Otherwords [Video]. YouTube. https://www.youtube.com/watch?v=d2UccTPnl4w&t=273s
TED-Ed. (2013, May 20). What’s an algorithm? – David J. Malan [Video]. YouTube. https://www.youtube.com/watch?v=6hfOvs8pY1k
