
Task 12: Speculative Futures

Prompt:
Describe or narrate a scenario about a piece of clothing found a decade into a future in which “progress” has continued. Your description should address issues related to disease and elicit feelings of resentment.

Narrative:
The year is 2033. The world has advanced rapidly over the past decade, due largely to the exponential growth of machine learning and artificial intelligence systems. Many of the large technology conglomerates of the 2020s have merged into a single corporation, OneWorld, which oversees and controls almost all global industries. The world is more interconnected than ever before: borders have started to disappear, one central global currency is beginning to dominate markets, and the world is largely at peace. Or so it seems.

This process of amalgamation was expedited by the Virus War, a series of bio-warfare attacks that brought the world to a standstill between 2025 and 2027. Airborne disease, part of the fallout from the war, is now a fact of daily life. Millions of people die each year from illnesses contracted by breathing contaminated air, and those in historically poor areas of the world continue to suffer the most. Everyone who can afford one now wears a BreatheClean, OneWorld’s most popular product. Those responsible for launching the Virus War have never been identified. Theories are rampant and varied; many believe that OneWorld was in some way responsible, though no evidence has ever been uncovered.

“What is this thing? A dirty bandage?” Mason asks as he flings the unidentified object towards his Dad.

“Would you look at that. This dirty piece of fabric is a disposable facemask from years ago. I’m surprised these things can still be found, let alone worn. You wouldn’t remember, but when you were very young we even had some of these for you,” his Dad replies.

“For me?” Mason gawks. “Why would I need a facemask that looks like this? This wouldn’t be able to protect me from anything… only my BreatheClean can. This looks like nothing more than a sock with some elastic bands on it.”

“We’ve talked about this. Before the Virus War there was a pandemic that essentially shut the world down for a year or two,” Mason’s Dad reminds him. “Many people think that is what the Virus War grew from. They say that some government, or maybe even some huge company, saw potential in the fear and panic and decided to recreate it.”

“That’s just nasty,” Mason scoffs. “How would killing millions of people be good for anyone?”

His Dad shakes his head. “I’m not sure. What I do know is that there is one company that has profited from the war beyond measure. Look around: OneWorld touches everything. Our masks, our food, our vehicles, our global currency. Hell, even our colony on Mars and the mining operation on the Moon.”

“Oh, come on, Dad,” Mason mocks, “not that conspiracy theory. We’ve talked at school about how only wackos believe that OneWorld could be responsible for the war. It makes no sense. They are the ones protecting us,” he says as he taps his BreatheClean. “Plus, we’re learning about how they’re trying to clean up our air, to make it so we don’t need to wear our masks anymore.”

Mason’s Dad looks down at the dirty piece of fabric at his feet. He kicks it and motions for Mason to follow him. Like most people, he knows deep down that OneWorld is in some way to blame. He doesn’t push the issue with Mason; he knows that Mason’s school is funded by OneWorld, so it would be a losing battle.

OneWorld’s “BreatheClean”

Generated by Craiyon
Prompt: “hyper realistic person wearing hi-tech face mask”

Task 11: Text-to-Image

For this week’s task, I explored the generative AI platform Craiyon. As outlined in some of this week’s module content, Craiyon, like other generative AI models, uses algorithms to create images from text prompts entered by the user. As with all generative AI models, Craiyon’s model needs to be trained on sets of data; in Craiyon’s case, I would imagine it was trained on word and image associations.
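Craiyon’s training details aren’t public, so that last point is only my guess at the mechanism. One way to get a feel for what “word and image association” can look like is with an open model such as CLIP, which scores how well a piece of text matches an image. The sketch below is purely illustrative; the model, the image file name and the captions are my own stand-ins, not anything taken from Craiyon.

```python
# Conceptual sketch only: Craiyon's internals are not public, so this uses
# the open CLIP model (via Hugging Face) to illustrate word-image
# association, i.e. scoring how strongly captions match a given image.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("generated_mask_image.png")  # any local image file (hypothetical name)
captions = [
    "a person wearing a hi-tech face mask",
    "a child playing baseball",
    "a company boss sitting at a desk",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher probability = stronger learned association between the words and the image.
probs = outputs.logits_per_image.softmax(dim=1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.2f}  {caption}")
```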

As mentioned in this module’s content, a major issue with any generative AI platform is the data set it was trained on. Bias is therefore an inherent part of any AI model, since the programmers essentially decide what information will be included and what will be excluded. This practice forces programmers to place value on content and to serve as judges of information, which can lead to the perpetuation of certain limited perspectives, ideologies and ways of thinking. In some cases, as explored in this week’s module, this can result in models that produce hateful and discriminatory responses.

My goal with this week’s task was to see if I could uncover some of the bias built into the Craiyon model. In my prompts, I opted to use the term “hyper-realistic”, a tip I had seen before for getting AI to return realistic-looking images. My first prompts sought to determine whether the platform seemed to be biased towards a certain race, so I kept them race-neutral. The first was: “hyper-realistic child playing baseball” (see Image 1). The results were nine very clearly white children, which led me to think that the algorithm had perhaps learned to associate baseball with white-looking children. While baseball reaches audiences around the world, I tested this further with a truly global sport: football/soccer. My prompt was: “hyper-realistic person playing football” (see Image 2). The results this time were slightly more reflective of race around the world but still predominantly white. What was most interesting about this prompt was the near-total absence of female representation. To test this possible bias, my third prompt was: “hyper-realistic company boss sitting at desk” (see Image 3). Again, the results were overwhelmingly white, visibly older and limited in gender diversity.
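I ran all of these prompts by hand through Craiyon’s web interface, which, as far as I know, has no public code API. If I wanted to repeat the audit at a larger scale, the same idea could be sketched against an open text-to-image model such as Stable Diffusion through the diffusers library. Everything below (the model, prompts and file paths) is an illustrative assumption rather than my actual process, and judging the representation in the outputs would still be a manual, human step.

```python
# Rough sketch of automating the prompt audit with an open text-to-image model.
# I used Craiyon's web interface by hand; this swaps in Stable Diffusion via
# the diffusers library purely for illustration. Model name, prompts, and
# output paths are my own assumptions.
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "hyper-realistic child playing baseball",
    "hyper-realistic person playing football",
    "hyper-realistic company boss sitting at desk",
]

out_dir = Path("audit_images")
out_dir.mkdir(exist_ok=True)

for prompt in prompts:
    # Generate a small batch per prompt; representation in the outputs
    # would still be tallied by a human reviewer afterwards.
    images = pipe(prompt, num_images_per_prompt=4).images
    for i, image in enumerate(images):
        image.save(out_dir / f"{prompt.replace(' ', '_')}_{i}.png")
```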

Based on these three prompts, I think it is clear that there are issues with representation in this particular generative AI platform. While adjectives specifying race could be included in the written prompts, this highlights a deeper issue embedded in the algorithm: white and male are the default returns for generic human prompts. To push this hypothesis further, I decided to see whether the platform also generated images that perpetuated stereotypes. I prompted it with: “hyper-realistic gang member” (see Image 4). The generated images point to extreme racial stereotyping. Gang members, while largely negative forces in society, are present in every country and every race. Yet, as the results showcase, this platform depicts gang members as exclusively Black and Asian.

These results cast serious doubt on the ability of this algorithm to be truly representative of the world. Instead, it seems to draw upon long-standing inequalities and perpetuate stereotypes. While this is just one generative text-to-image model, it would be worth examining the extent to which other platforms are similar or divergent in this respect. My intuition tells me that most, if not all, text-to-image platforms share these issues.

Image 1:

Image 2:

Image 3:

Image 4:

Task 9: Network Assignment

This week’s task was quite interesting, albeit challenging, as I was unfamiliar with network theory beyond a foundational understanding of the web and the connectivity of web nodes through algorithms. I had also never used Palladio as a tool for interpreting data, and I realized that I am much more familiar with charts, graphs and tables, and more attuned to Google Sheets, than to a visualization platform like Palladio. Palladio visualizes the Golden Record track-selection data, within which I represent a node when looking at the data as a whole, or seemingly an edge when looking at each group individually. While using the platform to interpret data was an interesting experience, I struggled to understand how to truly leverage its options to draw strong conclusions, even after independent research. Perhaps this course could scaffold this particular activity further to set students up for greater success and discussion.
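One thing that helped me was thinking of the Palladio data as a bipartite graph: curators and tracks are both nodes, and a curator links to every track they chose. Projecting that graph onto the curators then connects any two people who share a track, which is roughly what the community views seem to show. The sketch below is my own reconstruction in networkx, with invented names and track numbers; it is not the course’s actual data, nor a claim about how Palladio works internally.

```python
# A rough reconstruction (with made-up data) of how I understand the
# Golden Record selections as a network: curators and tracks are nodes,
# and shared track choices become weighted edges between curators.
import networkx as nx
from networkx.algorithms import bipartite, community

selections = {
    "Me": {"Track 3", "Track 7", "Track 12"},
    "Hassan": {"Track 3", "Track 7", "Track 20"},
    "Bingying": {"Track 7", "Track 12", "Track 25"},
    "Louisa": {"Track 3", "Track 18", "Track 25"},
}

# Bipartite graph: one node set for curators, one for tracks.
B = nx.Graph()
B.add_nodes_from(selections, bipartite="curator")
for curator, tracks in selections.items():
    B.add_edges_from((curator, track) for track in tracks)

# Project onto curators: edge weight = number of tracks two people share.
curators = bipartite.weighted_projected_graph(B, list(selections))
for u, v, data in curators.edges(data=True):
    print(f"{u} <-> {v}: {data['weight']} shared track(s)")

# Group curators into communities based on those weighted connections.
groups = community.greedy_modularity_communities(curators, weight="weight")
print([sorted(group) for group in groups])
```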

Looking through the visualized data, I first sought to see which groups I was omitted from. I did not appear to be connected to Community 4, as I do not represent an edge connecting any of its song nodes. This makes sense, as I did not select any of the tracks represented as nodes in that community. One conclusion that could be drawn is that the individuals most connected within Community 4 may have had very different selection criteria from mine. Alternatively, individuals in that community may have had criteria similar to mine but assessed certain tracks against those criteria differently. An interesting extension would be to visualize the criteria each participant used to guide their selections and cross-reference those with the tracks they selected.

By contrast, within Community 1 I appear near the middle of the visualization and connect to four of the nodes. This suggests that other individuals near the centre of that community might have had similar selection criteria for the activity; namely, Bingying, Hassan and Louisa might share a similar interpretation of the criteria. In Community 3, Hassan and I are again in a similar situation with respect to the two tracks we both selected. Looking at all six communities at the same time, this connection between Hassan and me is again clear.

This data visualization makes connections explicit and suggests possible conclusions, but it cannot identify, with certainty, why specific songs were more popular than others. The visualization shows, for example, that Bridget, Hassan, Nisrine and I selected many of the same tracks. What is missing from this interpretation is why that is. It could very well be that we shared a similar set of criteria in selecting our songs, or perhaps our interpretations of our individual criteria led to similarities. As mentioned above, I believe the only way to truly understand why certain songs were more popular than others would be to include the categories that influenced selection. In that case, individuals would be linked not only to their selected songs but also to their categories of selection.
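To make that last idea concrete, the hypothetical sketch below extends the earlier one by adding criteria as a third node type, so each person is linked both to the tracks they chose and to the criteria they say guided them. The people, tracks and criteria are all invented; the point is only to show the shape of the dataset that would let the “why” be visualized alongside the “what”.

```python
# Hypothetical extension of the earlier sketch: add criteria as a third
# node type so people are linked to both their tracks and their stated
# selection criteria. All names, tracks and criteria here are invented.
import networkx as nx

G = nx.Graph()

# person -> (tracks chosen, criteria reported)
data = {
    "Me": ({"Track 3", "Track 7"}, {"emotional range", "global diversity"}),
    "Hassan": ({"Track 3", "Track 20"}, {"emotional range"}),
}

for person, (tracks, criteria) in data.items():
    G.add_node(person, kind="person")
    for track in tracks:
        G.add_node(track, kind="track")
        G.add_edge(person, track)
    for criterion in criteria:
        G.add_node(criterion, kind="criterion")
        G.add_edge(person, criterion)

# Shared neighbours show where two people overlap in both tracks and criteria.
shared = set(G["Me"]) & set(G["Hassan"])
print("Me and Hassan share:", sorted(shared))
```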