Text-to-Image

Prompt: create an image of an inclusive makerspace in Studio Ghibli style

“Your Studio Ghibli–style inclusive makerspace is ready now ✨.”

The image immediately reminded me of the warm, soft lighting and expressive character designs typical of Ghibli animations. The scene included a diverse group of makers of different ages, races, and physical abilities, and even a cute little robot, which I think adds to the whimsical Ghibli feeling. The accuracy surprised me. I didn’t expect the AI to capture the tone of Studio Ghibli so well, even though the details weren’t exactly what I imagined.

What stood out most was how the AI interpreted the word inclusive. It incorporated visible markers of diversity, such as a person using a wheelchair, a prosthetic leg, and a range of racial identities. This really showed me how the AI pulls from its internal patterns and associations rather than my actual intentions. This aligns with what Porcaro (2019) explains about algorithmic systems: they do not “understand” values; they generate outcomes based on statistical associations learned from training data. The AI likely pulled from datasets tagged with keywords like “inclusive classroom,” “diverse students,” or “makerspace,” blending those associations with the recognizable aesthetic features of Studio Ghibli.

Even though the image was charming and fairly accurate stylistically, it also highlighted the limitations of the model. The AI gives the illusion of intentional design, but really it is simulating patterns based on what it has “seen” before. The Medium article emphasizes how algorithmic systems operate through probabilistic predictions and not genuine reasoning. As Porcaro (2019) describes, these systems can produce outputs that feel coherent but are ultimately shaped by their training data and not by a deeper understanding of context. This is exactly what I noticed in my image! My prompt guided the model, but the assumptions embedded in its training data shaped the final result more than any intentional artistic choice.

The process made me think critically about how AI “decides” what to generate. It simulated inclusivity, but that simulation is based on patterns in data rather than any genuine understanding of equity, accessibility, or design. I believe it still produced a beautiful image that aligns with my passion for inclusive makerspaces, but the reflection reminds me that AI-generated art is always a remix of past data rather than a true understanding of human values.

References

Porcaro, K. (2019, January 8). Detain/Release: Simulating algorithmic risk assessments at pretrial. Medium.
