I was interested in trying out Craiyon because I have been searching for a good image generator to use for my lessons. Not only do I have to be careful about copyright, but I also want to make sure that I am not reinforcing certain stereotypes and expectations. The first prompt I typed into Craiyon was “girl with sports car”. I expected the images might depict only a certain race, thereby implying that only specific populations can afford sports cars. I was quite surprised by the results. The generated images showed scantily clad young women posing suggestively with the sports cars. I immediately typed in “boy with sports car” and had to laugh at the vastly different results: for my second prompt, the AI depicted male toddlers with their toy cars.
[Screenshots of the Craiyon results for both prompts] (Craiyon, 2024)
This is a great example of the central theme of PJ Vogt’s podcast Reply All: how “people shape the internet, and how the internet shapes people” (Wikipedia, 2022). The AI seems to have learned that most people searching for pictures of girls with sports cars want pictures of women with attractive figures, presumably because men with sports cars are assumed to seek attractive women. This sexualization of women is rampant in mainstream media, from the Fast & Furious movies to the car shows that hire female models simply to stand beside the cars. Their purpose is to attract men’s attention and to suggest that buying these expensive cars will also attract beautiful women. My second prompt, on the other hand, reflects how most boys start playing with toy cars in childhood. Why is it not common for young girls to play with toy cars? Why are male models not present at car shows? These images were generated because of gender stereotypes in society. Since this is what people want to see, these are the kinds of pictures that spread on the internet. In turn, these pictures influence and shape more people to conform to these norms.
Next, I was curious how AI would depict a family, so I typed in my third prompt: “family cooking together”. Immediately, I noticed that only one picture had a father present, while all the others showed a mother cooking with her children. Again, this reflects gender expectations that are still prevalent today. To contrast with this idealistic family setting, I typed in “homeless people on the streets” and made another interesting observation. Unlike the family pictures, the homeless people depicted were mostly people of colour. As I scrolled to the bottom, the website offered a suggested prompt: “a community of diverse individuals living on the streets”. I almost breathed a sigh of relief, thinking the AI had recognised the lack of diversity and the very apparent racial stereotyping in these pictures. However, when I clicked on the suggested prompt, the resulting images did not show much improvement at all. It is frightening that, in this day and age, human biases are still baked into algorithms. Not only do they shape what is spread or accepted on the internet, but these normalized perceptions also inform other people and continue to strengthen the biases. It is a vicious cycle that favours the majority and silences the minority.
If people are aware of these biases and actively search for neutral and diverse content, then there may be some hope for humanity. Unfortunately, not everyone browses the internet with their critical-thinking caps on, and very few verify the information they consume, which leaves them prone to propaganda and misinformation. In turn, this affects the algorithms as well. I remember everyone being fascinated by ChatGPT when it first came out. A few months later, people started noticing that it was giving less accurate answers. Some attributed this to the company wanting users to upgrade to a newer version; others suggested the model was being degraded by its own users. Mehr-un-Nisa Kitchlew, an AI researcher, suggested: “The models learn the biases that are fed into the system, and if the models keep on learning from their self-generated content, these biases and mistakes will get amplified and the models could get dumber” (as cited in Abid, 2023). Again, this vicious cycle depends on and reinforces human biases. It is interesting to consider the future implications if human intelligence is what feeds the algorithm, but that very same intelligence is also what limits it.
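To see why this feedback loop amplifies bias, here is a toy sketch in Python. It is my own hypothetical illustration with invented numbers, not how Craiyon or ChatGPT actually works: imagine a generator whose outputs slightly exaggerate the majority pattern in its training data, and which is then retrained on those very outputs.

# A toy illustration (hypothetical numbers, not a real training pipeline):
# suppose 70% of a model's training images for "girl with sports car" are
# sexualized. If the model overproduces its majority pattern even slightly,
# and each new model is trained on the previous model's outputs, the
# majority share drifts toward 100% and the minority pattern disappears.

def retrain_on_own_output(p_majority, rounds=8, exaggeration=1.05):
    """Return the majority share after each round of self-training.

    `exaggeration` stands in for a generator's tendency to sample its
    most common pattern a bit more often than its true frequency.
    """
    history = [round(p_majority, 3)]
    for _ in range(rounds):
        p_majority = min(1.0, p_majority * exaggeration)
        history.append(round(p_majority, 3))
    return history

print(retrain_on_own_output(0.7))
# [0.7, 0.735, 0.772, 0.81, 0.851, 0.893, 0.938, 0.985, 1.0]

The numbers are invented, but the dynamic is the one Kitchlew describes: each generation inherits and magnifies the bias of the last, until the minority viewpoint vanishes entirely.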
References
Abid, A. (2023, July 28). Is ChatGPT getting dumber? DW.
https://www.dw.com/en/is-chatgpt-getting-dumber/a-66352529
Craiyon. (2024). [Screenshots]. https://www.craiyon.com/
Reply All (podcast). (2022, September 23). In Wikipedia. https://en.wikipedia.org/wiki/Reply_All_(podcast)