Critical Learning Task #4: Race, Gender, Identity and Intersectionality in the Maker Environment

Brief

Design a critical and cohesive overview of race, gender, identity, and intersectionality in the maker environment. Introduce others to representational problematics in makerspaces and communities.

Choose your means of presentation, so long as it is multimodal.


Project: Inclusive Makerspace Twine Game 

This is an interactive text-based story about a diverse group of makers who challenge the gender norms and the definition of “making” at their local makerspace, the Iron Forge.

There are two main aspects of inclusivity that I focused on: gender and culture.

The story draws upon themes from Kimberly Martin’s lecture Centering Gender: A Feminist Analysis of Makerspaces and Digital Humanities Centers, especially the distinction between “making” and “crafting” and how it is often intertwined with the idea of “New Domesticity”, whereby “softer” forms of making such as sewing, knitting, and weaving are dismissed as mere “craft”.

Another theme from that lecture that really resonated with me was the “invisible labor” expected of women within makerspaces, and the way stereotypical gender roles organize work into private and public spheres, which in turn shapes how we categorize tasks as “men’s” versus “women’s” work.

The narratives I have included are completely fictional and were made with the help of ChatGPT. I wanted to utilize AI in this project as I was curious to see what stories it creates and the underlying biases and stereotypes that arise when it comes to inclusivity and diversity in makerspaces.

Below is an excerpt that I did not use in my story, but was given to me after prompting.

As the weeks passed, the dynamic between the men and women evolved. The men, who had been skeptical, began to respect the women’s leadership, learning new techniques and embracing projects they had never considered. Meanwhile, the women appreciated the technical knowledge the men brought, weaving it into their more holistic, creative projects.

By the end of the month, the makerspace had become a true melting pot of ideas, where gender was no longer a barrier to creation. Men and women worked side by side, designing everything from wind-powered kinetic art installations to smart home devices that responded to voice commands. They learned from one another, breaking down stereotypes and building something even stronger: a community that thrived on diversity and collaboration.

The women had taken charge, not by forcing the men out, but by expanding what was possible in the space—and in doing so, they had created a makerspace that belonged to everyone.

It felt like the excerpt was perpetuating the gender binary, even though there was one non-binary character in the story, and it seemed to lump “non-men” into one category of “women”. The erasure of non-binary-ness was somewhat surprising to me, and it played into a conundrum I faced when creating this project: the challenge of wanting to “break gender stereotypes” while needing to lean into and “exaggerate” those stereotypes in order to highlight the gap. I felt the same tension when reading some of the papers, where talking about under-representation seemed to first entail the reinforcement of gender stereotypes!

In terms of culture, I think the story reflects the kinds of cultural assumptions nested within the creation tools used in makerspaces, especially the recent tendency to integrate Arduinos, circuit boards, and e-textiles to add the “Art” to STEM. Many papers seem to place an emphasis on empowering girls to participate more in science and computational activities, but there feels like less of a push to empower boys to lean into their artistic side. Seo and Richard (2021) mentioned that bidirectionally responsive design (BRD) activities were shown to “dissipate gendered preconceptions, and the learners across gender equally saw value of crafting, coding, sewing and design”, which might be a start, but I am curious to investigate such design thinking and its more practical aspects in a broader context.

Ideally, I would have liked to include characters based on lived experiences, and to make sure that the representations of the characters reflect people who occupy the same positionalities as the characters themselves. A potential next step would be to ask friends with culturally relevant maker experiences to be part of the storytelling.

Had I more time, I would have liked to explore the disability and accessibility aspects of makerspaces in terms of content and subject matter.

In terms of the making of this artifact, I would have liked to find ways to embellish the story with more visual elements, such as photos of the makerspace, the characters, and the kinds of projects they are making, to create a more immersive and visual experience.

 

References

Martin, K. (2017, November 28). Centering gender: A feminist analysis of makerspaces and digital humanities centers [Video]. Media Collections Online. https://media.dlib.indiana.edu/media_objects/47429d523

Norris, A. (2014). Make-her-spaces as hybrid places: Designing and resisting self constructions in urban classrooms. Equity & Excellence in Education, 47(1), 63-77. https://doi.org/10.1080/10665684.2014.866879

Parekh, P. (2024). Girls’ reluctance and intersectional identities in STEM-rich makerspaces. Education Sciences, 14(6), 628.

Seo, J., & Richard, G. T. (2021). SCAFFOLDing all abilities into makerspaces: A design framework for universal, accessible and intersectionally inclusive making and learning. Information and Learning Sciences, 122(11/12), 795-815.

 

GeoDASH: Exploring Predictive Policing Technology

As someone who has lived in Vancouver for the past 9 years, the first thing I did when I saw the map was zoom into my neighborhood to see what had been happening in my surrounding areas. I also looked around the locations that I frequent.

After tinkering with the map, I decided to read the GeoDASH FAQ page for more details on how the data was collected and visualized on the map. Overall, due to the sensitive nature of the incidents and the need to protect the privacy of those involved, it seems like most of the information displayed can only be considered a proxy measure, as most of the actual locations have been rounded to the approximate block level.

With the lack of transparency in terms of how information is being collected, reported, and mapped, it is rather challenging to assess the validity of the information presented on GeoDASH at face value without cross-referencing other sources to gain a more comprehensive understanding of the crime trends in different neighborhoods.

Some inherent biases that may appear when interpreting the data on the map are intersectional in nature— using GeoDASH without additional understanding of geographic, socioeconomic, and demographic characteristics of Vancouver may result in misinterpretation of crime trends.

For example, geographically speaking, certain areas may have higher concentrations of crime due to population density, land-use patterns, or proximity to transportation hubs (e.g. the central business districts of the Downtown Vancouver area vs. suburban residential areas in Langley).

From a socioeconomic perspective, areas with higher socioeconomic status may have greater resources for crime prevention measures leading to more reported crimes (e.g. West Vancouver), compared to more disadvantaged neighborhoods where crime may be more prevalent but underreported (e.g. Downtown Eastside).

Vancouver being a diverse city with pockets of immigrant cultural hubs and communities, differences in law enforcement practices targeting specific demographic groups can also result in skewed representations of crime patterns and disparities of enforcement outcomes.

To conclude, interpreting GeoDASH information as someone not from Vancouver might be challenging at face value, as there would be a lack of nuanced understanding of the makeup of this city, which requires a lot of context for a more holistic and accurate interpretation of the presented data. Even for a Vancouverite, the historical background can be an insightful starting point for gaining a better understanding of the city that I live in.

ELIZA: Exploring Early Artificial Intelligence

Brief

Approach the interaction as both a user and a critic.

Formulate 2-3 hypotheses to test and 3-5 questions to address before testing the limitations of the ELIZA software by engaging in conversation. Pay attention to the chatbot’s language processing and the quality of its responses.

Annotate the transcript of the interaction with observations and feelings. In your reflection, discuss the ethical and practical limitations of the technology. Save the transcript of your interaction to submit with your work.


Hypotheses

Based on the old-fashioned interface of the chatbot, I immediately dialed back my expectations.

I presume that this is a very “user-driven” conversational chatbot, such that it will echo or rephrase what the user has said to help with reflection and further articulation of what the user has offered as a starting point in conversation.

Given that it uses “pattern matching” to give canned responses, I believe this chatbot will not be able to provide a wide range of conversational breadth and will be relying on pre-programmed scripts or more ‘generic therapeutic’ prompts.

Following the Rogerian psychotherapeutic method of “returning the question back to the patient”, I believe that conversations with the chatbot can get rather cyclical and stagnant, which may not be user-friendly and may prove frustrating in terms of usability.
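To make the second hypothesis more concrete, below is a minimal sketch of the kind of keyword matching and canned-response templating I am expecting. This is not ELIZA's actual implementation (the real program uses ranked keywords and decomposition/reassembly rules); the patterns and responses here are invented examples.

    import random
    import re

    # Hypothetical, highly simplified ELIZA-style rules: a keyword pattern mapped to
    # canned response templates; "{0}" is filled with text captured from the user.
    RULES = [
        (re.compile(r"\bI want to (.*)", re.IGNORECASE),
         ["Why do you want to {0}?", "What would it mean to you if you got to {0}?"]),
        (re.compile(r"\bI feel (.*)", re.IGNORECASE),
         ["Do you often feel {0}?", "Why do you feel {0}?"]),
        (re.compile(r"\byes\b", re.IGNORECASE),
         ["Are you sure about that?", "I see."]),
    ]
    DEFAULT = ["Tell me more.", "What does that suggest to you?"]

    def reply(user_input: str) -> str:
        """Return a canned response, echoing captured keywords back at the user."""
        for pattern, templates in RULES:
            match = pattern.search(user_input)
            if match:
                captured = match.group(1) if match.groups() else ""
                return random.choice(templates).format(captured)
        return random.choice(DEFAULT)  # generic fallback prompt

    print(reply("I want to be a good friend to my friends"))
    # e.g. "Why do you want to be a good friend to my friends?" (note the unswapped pronoun)

Even in this toy version, the echoed text keeps the user's pronouns, which foreshadows the pronoun trouble noted in the transcript below.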

Questions

When it comes to human-computer interaction, I always think of the concept of the ‘uncanny valley’, originally introduced by Japanese robotics engineering professor Masahiro Mori (Kageki, 2012). The uncanny valley is where the emotional response of a human observer becomes increasingly negative as a robot approximates an ‘almost but not quite there’ likeness of a human without reaching full human likeness; unnatural appearances or interactions that deviate from normal human behavior can elicit discomfort and eeriness.

At the same time, there is the ELIZA effect, coined after this chatbot, in which humans have the tendency to project human traits such as semantic comprehension, or even empathy, onto computer programs that have a textual interface (Weizenbaum, 1976).

Given these two phenomena, how do they affect each other? Will the ELIZA effect bridge the valley such that the uncanniness becomes overlooked? Will repeated interaction with a chatbot reveal its limitations such that the valley widens even more?

How is it that Weizenbaum’s students exhibited strong emotional connections when they were texting with ELIZA? Is it because they were not accustomed to this kind of interaction?

Now that texting and chatting online are so ubiquitous in our daily lives, have we simply become more sensitive and accustomed to texting etiquette and nuance, able to distinguish texting patterns well enough that we are no longer overcome by the ELIZA effect?

Conversation with ELIZA

How to be a good friend?

Me: How to be a good friend?
Eliza: Why do you ask?
Me: I want to be a good friend to my friends
Eliza: I sometimes also want to be a good friend to your friends.
(Thoughts/Feelings: Found it interesting that ELIZA made a self-referential comment, though it seems like it is having a hard time with (reflexive) pronouns.)
Me: Do you have friends?
Eliza: We were discussing you, not me.
(Thoughts/Feelings: This response feels a little jarring and perhaps overly blunt?)
Me: What are some ideas to be a good friend?
Eliza: What is it that you really want to know?
Me: Do my friends like me?
Eliza: Would you say that you have psychological problems?
(Thoughts/Feelings: Felt a bit blindsided by this remark. Unclear about the relevance to my previous question.)
Me: Not many.
Eliza: Ok…”What am some ideas to be a good friend”. Tell me more.
Me: I think I am a good listener for my friends.
Eliza: Do you enjoy being a good listener for your friends?
Me: Sometimes. Sometimes I wish they could listen to me too.
Eliza: Would you say that you have psychological problems?
(Thoughts/Feelings: Feels a little discouraging to be asked this a second time; unsure how to react, hence ending the conversation here.)
How to be happier?

Me: How to be happier?
Eliza: Have you asked anyone else?
Me: Yes, my family and friends.
Eliza: Are you sure about that?
(Thoughts/Feelings: I assume this is ELIZA prompting me to dig deeper, but it feels like it is questioning my information source and validity.)
Me: And my roommates.
Eliza: Can you elaborate on that?
(Thoughts/Feelings: Seems like an empty canned response, as it is unspecific and a little off-topic relative to my previous response.)
Me: On my roommates?
Eliza: That is an interesting response. Let me make a note of that for next time.
(Thoughts/Feelings: Unsure how to respond to this, and therefore I stopped the conversation.)

 

I Have Been Feeling A Little Sad Recently

Me: I have been feeling a little sad recently
Eliza: Is it normal to feel a little sad recently?
Me: Not sure, but I don’t feel good.
Eliza: Why don’t you feel good?
Me: I feel stressed about my relationship with my family
Eliza: Do you often feel stressed out about your relationship with your family?
Me: Yes, quite often actually
Eliza: Are you sure about that?
(Thoughts/Feelings: Feels a little stand-offish to be questioned about my certainty about how I feel; unsure if this is to prompt further introspection.)
Me: Yes, I am sure.
Eliza: Did you come to me because you are sure?
(Thoughts/Feelings: Unclear why ELIZA would focus on my certainty about my feelings rather than the feelings themselves.)
Me: Not exactly, but I think it would be helpful to talk to someone about it.
Eliza: Do you really think so?
(Thoughts/Feelings: Could be a prompt to help me further examine whether my thoughts/feelings are true, even though it feels somewhat blunt and insensitive.)
Me: Yes, I don’t think it is the best keeping everything bottled up.
Eliza: Do you wish to be able to think it is the best keeping everything bottled up?
(Thoughts/Feelings: This is an interesting question. I believe this is where the user can “see through the code” of this canned response and how it extracts certain keywords to slot into pre-made sentences; due to the negative phrasing of my previous response, the question seems ‘out of place’.)
Me: No, I do not wish to think that way.
Eliza: You are being a bit negative and that won’t help us move forward.
(Thoughts/Feelings: A little surprised by how blunt ELIZA can be with its tone!)
How to stop procrastinating?

Me: How to stop procrastinating?
Eliza: Why do you ask?
Me: I would like to stop putting off doing certain things, especially when I have a deadline for it
Eliza: Does that question interest you?
(Thoughts/Feelings: Not sure why the rhetorical question?)
Me: Yes
Eliza: Are you sure about that?
(Thoughts/Feelings: Feeling like I am second-guessing myself.)
Me: Yes
Eliza: Let’s take a moment here. Think about what you just said and try to rephrase it a bit more specifically.
(Thoughts/Feelings: Not sure what the connection is to the previous two yes/no prompts.)
Me: I want to be more proactive and take action
Eliza: What would it mean to you if you got to be more proactive and take action
Me: I feel like I have more agency and self control with allocating my time and energy
Eliza: Which feelings? Tell me more…
Me: About how I feel when I am in control?
(Thoughts/Feelings: Not the best at providing clarification.)
Eliza: Do you enjoy feeling when you are in control?
Me: Sometimes, I enjoy knowing that I can make decisions and follow through with them
Eliza: I think you are blocking what you really want to say
Me: I like the feeling of having agency
Eliza: What does that suggest to you?
Me: That I can have an effect on my surroundings and change
Eliza: I am not sure I understand you fully.
(Thoughts/Feelings: I am assuming this is to prompt the user to expand and elaborate a little more on how they feel.)
Me: As in I can take action and my actions will move things forward to gain momentum.
Eliza: I see. Let’s try another topic and we will come back to that issue later
(Thoughts/Feelings: I decided to see what happens when I continue to push through the conversation and change to a different topic.)
Me: Sure, I have a hard time making decisions.
Eliza: What does that suggest to you?
Me: I am indecisive.
Eliza: Do you believe it is normal to be indecisive?
Me: Yes, it is normal sometimes.
Eliza: I understand.
(Thoughts/Feelings: Unsure how to respond to this, and therefore I stopped the conversation.)

Reflection (~500 words)

After chatting with ELIZA rather earnestly, there were instances when ELIZA’s prompts and responses felt like they landed within an acceptable range of ‘more or less’ human responses, yet it still somehow managed to fall short in other responses. It created a feeling of unease and perhaps mild discomfort, as if this chatbot were trying to understand, yet unable to comprehend what I was trying to express. ELIZA’s tone can seem overly blunt, and it seems to miss the little nuances in human interaction that elicit a stronger sense of empathy and understanding: for example, providing validation before asking follow-up questions; paraphrasing and summarizing to show ‘active listening’ and engagement; actually circling back to topics that it has put on pause; asking how the different topics connect. Not to mention, it has a very limited ability to adapt to unexpected inputs or more complex discussions that are not provided for in its programmed script. My experience of using ELIZA was very much aligned with my initial hypotheses.

With the inconsistencies and errors that ELIZA showcased the longer I talked to it, I felt myself gradually losing trust in the authenticity and reliability of its responses; as I slowly felt I could ‘see through the code’ of how this chatbot functions, I became less earnest and genuine with my own responses. If this is the case, how is it that early users of ELIZA still attributed human-like feelings to the computer program despite it being (so blatantly, in my opinion) in the uncanny valley?

Perhaps if the chatbot stuck to more objective, factual matters, the creepiness of the uncanny valley might not affect users’ attitudes as much as it does with chatbots that are more involved in subjective emotional matters; in other words, I wonder if the uncanny valley effect is amplified for chatbots like ELIZA, especially for those who might be in vulnerable mental states and seeking genuine emotional connections?

With the improvement and further development of technology, chatbots have grown in quality, with more robust speech and text recognition expanding them beyond textual interactions. As the uncanny valley slowly narrows, how will we then distinguish between the human and the non-human?

References

Kageki, N. (2012, June 12). An Uncanny Mind: Masahiro Mori on the Uncanny Valley and Beyond. IEEE Spectrum. Retrieved from https://spectrum.ieee.org/an-uncanny-mind-masahiro-mori-on-the-uncanny-valley-and-beyond

Ronkowski, R. (n.d.). Eliza – Computer Therapist. Retrieved from https://web.njit.edu/~ronkowit/eliza.html

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman.

Tipping Point – Open Education Resource Textbooks Case Study

Creation of Open Education Resource Textbook with Interactive H5P elements for FREN1205 – French Conversation course in the Modern Languages Department at Langara College

Introduction

For the case of technological displacement, we were curious to explore the shift from physical textbooks to digital Open Educational Resources (OERs) in higher education institutions. We were specifically interested in the tensions and opportunities that arose from the transition to online teaching and learning after the pandemic, especially with the normalization of online and hybrid e-learning.

We ground this inquiry into technological displacement in the case study of the creation of an OER textbook with interactive H5P elements for a French conversation course at Langara College. In this assignment, we analyze the usability of OERs from the instructor and student perspectives, and explore concerns around artificial intelligence and issues surrounding digital labor in the process of creating OERs in higher education institutions.

 

Motivation and Background

The FREN1205 – French Conversation course at Langara College is offered in person using the digital OER textbook Le Français Interactif, created by the instructor, Mirabelle Tinio. To support our work, we had the opportunity to speak with the instructor to learn more about the case study; all case study context provided in this assignment came from this conversation. Below are some of the motivators for the creation of the OER textbook from both the students’ and the instructor’s perspectives.

From the student perspective, the education landscape was drastically transformed during the emergency transition to online teaching and learning at the beginning of the pandemic in 2020. The effects persisted as in-person teaching and learning gradually resumed in 2021: student surveys reflected that having additional supportive resources available online helped with the learning process and the overall experience of taking online courses. In addition, students reflected that physical textbooks were expensive and inaccessible, especially ones that were ‘single-use’ for an individual course, and they were less inclined to make such purchases.

From the instructor’s perspective, many factors contributed to the transition from physical textbooks to a digital OER. The instructor we interviewed had been teaching the French conversation course for at least the past 12 years. Though the original textbook provided activities and exercises for everyday conversation scenarios, she found that the content was not up to date or culturally relevant enough for the students in the classroom. The instructor therefore found herself turning to other available language learning resources to patch together a curriculum plan that included vocabulary, grammar structure, and socio-cultural activities. The process was rather time-consuming, and she was never really satisfied with the existing resources.

With both students and the instructor identifying that the existing resources were not meeting their needs, it became clear that a new resource should be introduced to solve the problem of learning resources for this course. Here, we can use the concept of technological utility to demonstrate, in part, why a tipping point occurred. Utility asks whether the technology fulfills the users’ needs, that is, whether it does what the users need it to do (Issa & Isaias, 2015, p. 4). Physical textbooks were not meeting the learners’ and instructor’s utility needs; therefore, a new technology needed to be introduced.

At the same time, while working part-time in the Educational Technology Department, the instructor saw many other instructors utilizing Pressbooks and other OER platforms to bring resources into Brightspace, a learning management system. The existing integration with the learning management system, and the potential for further adaptation, was an additional motivator for developing her own textbook as an OER for the class.

The Tipping Point

The opportunity and tipping point presented itself in 2021, when the BCcampus Open Education Foundation Grant for Institutions opened applications for project proposals specifically utilizing H5P for Pressbooks. The grant was intended for British Columbia post-secondary institutions wishing to explore, initiate, or relaunch open educational practices, resources, support, and training on their campuses. Through this grant, the instructor was able to secure additional funding and support for creating the French Conversation OER textbook.

Benefits

Multi-modality, Interactivity and Flexibility  

Learning languages is an activity that is inherently multimodal, incorporating a combination of multi-sensory and communicative modes (Dressman, 2019). Delivering the OER online makes it possible to include multimedia and interactive H5P elements so that students can actively engage with the learning content; this allows for more diversity in learning methods and increases the accessibility of course content.

Though the OER textbook included many different chapters and topics, each unit followed a similar format: the learning objectives, a pre-test questionnaire, vocabulary, practice exercises, oral comprehension exercises, a post-test evaluation questionnaire, and self-reflection. This repeated format increases the OER’s usability because it is quickly learnable and memorable (Issa & Isaias, 2015, p. 33). The OER therefore creates a smoother user experience, with less friction or frustration in navigating to the content than a physical textbook, demonstrating again why this tipping point occurred (Issa & Isaias, 2015, p. 30).

The goal was to make the learning content accessible to both students and instructors with maximum flexibility and adaptability. Students could preview the units and prepare ahead of time before the classes; or review the units and practice on areas for further improvement, all at their own pace, with self-assessments available. Instructors can supplement the course delivery with additional resources, in-class activities or outing experiences, and utilize the textbook in a non-linear manner tailored to the needs and pace of the students in the classroom. 

Living Texts 

The OER included resources that the instructor created as well as content created by previous students, and it can be seen as a co-created ‘living text’ (Phillips & Willis, 2014), both as a pedagogical tool and as a co-creation of knowledge within the classroom.

For example, in the activity “Interview a Francophone”, the instructor uploaded recorded interview videos of previous students’ work, both as an exemplar of what the assignment would look like when current students approached the activity themselves and as an exercise for current students to practice their listening comprehension and understanding of French conversation in context. The instructor identified that this was also meant to make students feel appreciated for their active contribution to the course, recognizing students as part of the co-construction of literacy knowledge through this kind of interaction (Phillips & Willis, 2014).

Creating an OER that operates as a living text supports increased usability because it allows feedback offered by the learners (the users) to be implemented. A living text can push back against the challenge of “configuring the user”, where designers imagine the “right way” for a user to engage with their technology instead of being open to how users will actually engage with it (Woolgar, 1990). This OER as a living text can be adapted to user feedback, and therefore there is not only one “right way” to use the resource. Instead, the OER can increase usability for a wider variety of users as instructors adapt it based on learner feedback. The instructor noted that keeping an OER like this up to date is very important, especially if the OER is described by an instructor to learners as a living text that is responsive to their needs.

Equity, Diversity and Inclusion 

As mentioned above, the multimodality, interactivity, and flexibility of the living text contribute to a classroom climate that reflects the equity, diversity, and inclusion of the students currently taking the course. This approach takes into consideration the positionality, lived experiences, interests, and abilities of students within the classroom, and their agency as active participants in their own learning.

For example, in the aforementioned “Interview a Francophone” activity, the crowd-sourced contributions of the different interviewees allow students to see different kinds of ‘francophone-ness’ outside the mainstream Eurocentric depiction of French-speaking people, which matters especially given the deep-rooted history of the French language as a tool of colonization.

By embracing inclusive pedagogical approaches and recognizing students’ diverse contributions, this approach to creating OER textbooks creates a supportive and accessible learning environment, fosters a sense of belonging, and affirms the value of students’ unique contributions to the learning process. 

Challenges 

Current Concerns: Teamwork Makes the Dream Work 

One major challenge that the instructor encountered during the creation of this OER textbook was the lack of support at the institutional level, especially since new technological adaptations require more incentive and supporting resources to push for incorporation and utilization within the college and, furthermore, across institutions. Though the instructor did collaborate with other language instructors from the Modern Languages Department and advisors from the Educational Technology Department, there is a strong case for creating a community of practice across institutions to support this work’s sustainability. The production of a brand new OER like this (as well as its ongoing maintenance) involves significantly more time and energy than maintaining the status quo of using physical textbooks. There is a risk that the instructor’s digital labor in producing this kind of resource might go unrecognized by the institution if it remains unseen.

On a practical and logistical level, such a community of practice would help ensure that the articulation of courses is leveled and aligned across institutions, especially where the transferability of courses and credits matters for pathway programs, such as those at Langara College. On a more idealized and aspirational level, it would promote collaboration and a commitment to sharing knowledge and resources, encouraging accountability, peer review, and the continuous development of teaching and learning practices, enabling the community to build on each other’s work and fostering a culture of openness and collaboration in education.

Future Concerns: The Rise of Artificial Intelligence and Impact of Digital Labor  

Though the BCcampus grant did provide funding for the instructor to develop the OER textbook, there needs to be more support when it comes to compensating the unseen, invisible work that is added to the existing duties of a teaching faculty member. With the increased digitization of instruction within higher education comes an expectation of an accelerated pace of work (Woodcock, 2018, p. 135). There can be an expectation, even an implicit one, within institutions that work becomes “easier” as a result of digital resources like this OER textbook. This can result in expanding work and time pressures for instructors who have created digitized aspects of their work.

Another risk for instructors is the value placed on published work to push an academic career forward (Woodcock, 2018, p. 136). The motivation to pursue the creation of open access work can be reduced if the institution the academic works within rewards published work. While an OER like the one described in this case is a different kind of open access work than a journal article, its creation and upkeep exist within the same labour hours for an instructor. The instructor must be significantly committed to the creation of the OER if there is limited institutional support, as described in this case, and also if there is institutional pressure to spend time doing other, more valued work, such as publishing in a prestigious journal.

Finally, there is a tension inherent in the use of artificial intelligence in relation to OERs. As with this case study, we know that producing and maintaining OERs can be time, labor, and resource-intensive. With the rise of large language models like ChatGPT in the past year, there is a potential to employ AI tools like this to support the creation of OERs. This might seem to reduce the human labour needed to create an OER like Le Français Interactif. However, we also know that AI tools like ChatGPT do not appropriately cite sources and can even ‘make up’ information. Uncited sources are problematic because they effectively steal intellectual property from other academics and false information is problematic because it diminishes the reliability and utility of the OER. 

Even more concerning is that AI language models are trained on data that can be biased and produce content that is embedded with this bias (Buolamwini, 2019). For an OER project like the one outlined in our case study, producing resources in “partnership” with an AI tool could run counter to the desire to create more culturally relevant and inclusive resources. More relevant to this case study, regarding language translation, AI tools like DeepL can be helpful but are not yet at the point where they can translate as effectively as a human who speaks multiple languages. For this reason, instructors might be wary of using AI tools as “co-authors” for OERs, to ensure the quality of the instructional or learning resource remains high.

Conclusion

This case study demonstrates how the creation of an OER textbook for the FREN1205 – French Conversation course at Langara College exemplifies a pivotal shift in educational resources toward digital platforms. This tipping point is a response to the evolving needs of both students and instructors in the post-pandemic era of education. Ideally, an OER textbook offers learners enhanced accessibility, flexibility, and more inclusivity within their educational experience. However, challenges such as institutional support for digital labour and concerns surrounding the rise of artificial intelligence underscore the importance of institutional buy-in and ethical considerations as we integrate OER textbooks into the student experience.

References

Buolamwini, J. (2019, February 7). Artificial Intelligence has a problem with gender and racial bias. Time. https://time.com/5520558/artificial-intelligence-racial-gender-bias/

 

Dressman, M. (2019). Multimodality and language learning. In M. Dressman, & R. W. Sadler (Eds.), The handbook of informal language learning (pp. 39-55). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781119472384.ch3

 

Issa, T., & Isaias, P. (2015). Usability and human computer interaction (HCI). In Sustainable design (pp. 19-35). Springer.

 

Phillips, L. G., & Willis, L. (2014). Walking and talking with living texts: Breathing life against static standardisation. English Teaching: Practice and Critique, 13(1), 76.

 

Woodcock, J. (2018). Digital labour in the university: Understanding the transformations of academic work in the UK. tripleC: Communication, Capitalism & Critique, 16(1), 129-142.

Woolgar, S. (1990). Configuring the user: The case of usability trials. The Sociological Review, 38(1, Suppl.), S58-S99.

Intellectual Production #7: Digital Labour

Brief

Based on a close reading and careful analysis of Kate Crawford’s chapter on “Labour” in her book Atlas of AI, identify and describe, in her terms, a kind of work you regard as “digital labour”. Illustrate your observations with examples and ideas drawn from the readings.

Use any cartoon-creation/animation software of your choice to communicate your understanding of how, why, and in what ways work today has revived older practices of industrial labour exploitation, especially the role of AI in this reversal.


Preface

A text conversation with my friend E made me reflect on her experience of creating online content in relation to the ideas I was reading about in this module on digital labour. That was the starting point for creating this collage comic about some of the concepts and theories that connect to what she was going through.

For this project, I decided to use a collage comic as the medium for a few reasons:

Collage Imagery

Drawing from the different readings in this module, the piece resembles a montage of concepts, akin to a collage where individual ideas are layered and interconnected. I wanted to use the juxtaposition of imagery and text to create a meme-like image similar to the “content” one would see on social media, as both a critique of and a participation in “content creation” as a form of sharing ideas.

Woodcut Printing and Pop Art

I chose woodcut-print imagery because this printing technique lends itself to mass production through the division of labour in the creation process: artists can adapt to the medium easily without needing to learn woodworking skills. Prints are usually described as “designed by” rather than “by” an artist, which highlights the invisible labour of the woodworkers and printers who carve the actual blocks yet go uncredited.

I chose bright neon blocks of color reminiscent of the pop-art movement because pop art not only celebrates the imagery and symbols of mass culture (e.g. advertisements, consumer products) but also critiques the consumerist nature of society. The movement also challenged elitist notions of art by blurring the boundaries between high and low culture, making art more accessible to a wider audience and thus democratizing it. I think a parallel can be drawn to how social media democratizes many aspects of the production and distribution of labour.

 

 

References

All images are sourced from the public domain archives of the British Library.

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press. https://doi.org/10.12987/9780300252392

Duffy, B.E., (2017). (Not) Getting Paid to Do What You Love: Gender, Social Media, and Aspirational Work. Yale University Press. http://www.jstor.org/stable/j.ctt1q31skt

Gandini, A. (2021). Digital labour: an empty signifier? Media, Culture & Society, 43(2), 369-380. https://doi.org/10.1177/0163443720948018

ViziBLUE: Exploring Educational Data

Brief

Go to the ViziBLUE website and review the types of data collected about individuals on a typical university campus. Review the about page for ViziBLUE; it provides background on how it came to exist.

Examine the various data sources as if you were a student on campus, with the data representing your own. Take note of these sources and consider the potential inferences that could be made about you by combining them.


At a brief glance at the ViziBLUE website, I was already unsettled by the broad range of information that ViziBLUE aims to collect from the student and staff/faculty body via the various services provided on campus. When I opened the individual data sources, I was surprised at the granularity of the data points that they can potentially collect and utilize, and it seems like there is no option to opt out of this data collection.

With the overlap of many similar data points in each data source, it becomes easy to triangulate and pinpoint demographics to identify an individual. Even when some of the data is stated to be “provided voluntarily” by the users themselves, cross-examination of the data points can build a general persona and support inferences about the individual, which can potentially harm those in racialized, disabled, and queer minority communities. Where some data points are crucial for access to necessary services (e.g. diagnoses of disability for accessibility reasons), they might prove harmful in other contexts (e.g. medical records of disability may hinder immigration eligibility).

Though it does mention in AI Data that the university does not sell or license personal information, the statement that “individually identifiable information may be shared outside U-M as required by law, or when we believe, sharing will help protect the safety, property, or rights of the university, members of the university community and university guests” is rather vague in terms of how and what is deemed to fit those criteria, which could be an important missing piece on the current website.

At Langara College, there is an Institutional Research office that collects and analyzes data for institution-wide policy forming, strategic planning, and decision-making. Because it is not as granular and personal as ViziBLUE, and is more related to the functioning of the institution, it feels less intrusive than what the University of Michigan is doing.

On one hand, it is important to make the collected data transparent to users to “help them view, understand and manage their personal information”; on the other hand, it also shows just how much information is being collected, and in situations of data breaches the consequences can be dire.

 

Intellectual Production #2 – Artificial Intelligence

Introduce these people and how they contributed to the development of artificial intelligence. Create a chart to show the comparison between your work and ChatGPT’s side by side.
(~50 words each)

Below is the infographic I created in Genially.
The white text blocks are the introductions that I have written myself, whereas the black text blocks are written by ChatGPT3.5.
Click on the photos for links to Wikipedia references of each scholar.

The prompt:

Provide a brief introduction of the following people, and provide an account of what they define as intelligence. Shorten each entry to less than 50 words. 
Alan Turing, John McCarthy, Herbert Simon, Marvin Minsky. Timnit Gebru

 

How do “machine (programming) languages” differ from human (natural) ones? (~100 words)

I think one main difference between programming and natural languages is the flexibility with which the language can be used, and the goals for which it is used. Programming languages require more rigid and structured syntax so that the machine can register and follow through with its given tasks. With human languages, there is more space to flout and subvert conventions of syntax, semantics, and pragmatics in order to be creative, to communicate, or to express a broad range of emotions.
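As a tiny illustration of what I mean by rigidity (my own toy example, not part of either answer): a single missing character breaks a program outright, whereas a human reader would recover the intended meaning without much effort.

    # Valid Python: the machine accepts it.
    greeting = "hello, world"
    print(greeting.upper())

    # Uncommenting the next line raises a SyntaxError, even though a human reader
    # still understands the intent despite the missing closing quote.
    # greeting = "hello, world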

Machine languages, used in programming, are precise and unambiguous with strict syntax, designed explicitly for computer communication. They lack redundancy and prioritize efficiency. Human languages, on the other hand, are flexible, expressive, and context-dependent. They allow for redundancy, creativity, and adaptation to diverse communication needs, serving purposes beyond information exchange. Learning machine languages is a deliberate process for programmers, ensuring computers understand instructions. In contrast, human languages are acquired naturally, evolving over time for cultural expression, emotional communication, and conveying complex abstract ideas with adaptability and flexibility in mind. -- ChatGPT3.5

How does “machine (artificial) intelligence” differ from the human version? (~100 words)

I believe one main difference between artificial and human intelligence is the way human intelligence can account for a larger range of contexts and nuance by taking bodily sensations, emotions, and “intuition” into account. A big topic of discussion during my undergrad cognitive systems courses was also whether or not artificial intelligence has “consciousness” or understands certain “qualia” of experience, given the often seemingly “behavioristic” measures by which we can observe artificial intelligence.

"Machine (artificial) intelligence" differs from human intelligence in essence and approach. While AI excels in data processing, pattern recognition, and repetitive tasks, it lacks human-like understanding, consciousness, and emotional intelligence. AI operates within defined parameters, lacking the intuitive, contextually nuanced reasoning of human cognition. Human intelligence incorporates emotional, social, and moral dimensions, enabling creativity, empathy, and complex decision-making. AI lacks the innate adaptability, self-awareness, and holistic understanding that characterize human intelligence, which integrates sensory experiences, emotions, and a deep contextual grasp for more nuanced problem-solving and learning. -- ChatGPT3.5

How does “machine learning” differ from human learning?

Human learning in general feels more holistic and can take into consideration past experiences that are unique to each individual, which continues the previous question’s discussion of “qualia”. This integration of personal narrative can change over time as well, creating novel and serendipitous connections that propel learning forward. I am not sure whether machine learning can thread experiences together over time to inform its own learning.

"Machine learning" differs from human learning in methodology and depth. While machines excel at processing vast datasets, identifying patterns, and optimizing based on predefined criteria, human learning involves intuition, emotional intelligence, and a deeper contextual understanding. Machines rely on algorithms and statistical models, learning from explicit data inputs, often lacking common-sense reasoning or the ability to grasp abstract concepts effortlessly. Human learning is holistic, integrating sensory, emotional, and social cues, facilitating nuanced decision-making, and adapting to diverse situations. Unlike machines, humans possess innate creativity, curiosity, and the capacity for lifelong learning beyond structured training sets.-- ChatGPT3.5

How do your answers to these questions differ from what a machine could generate? (~200 words)

I found that ChatGPT3.5’s responses tend to be more general and overarching, providing an overview of the question at a glance, which I appreciate when I want to understand concepts and ideas on a surface level. Sometimes when I read ChatGPT responses, they feel emotionally “flat”, with a strange “diplomatic” tone that makes the content feel rather sterile and impersonal, yet more objective and factual at the same time.

In comparison, my human answers tend to focus on a few aspects and delve a little deeper, with anecdotes that refer to my personal experiences. My choice of words feels more “alive” and less “uncanny” (I hope!), which is something that ChatGPT3.5 is unable to provide.

References

OpenAI. (2024). ChatGPT (Jan 24 version) [Large language model]. https://chat.openai.com

Timnit Gebru, (2024, January 16). In Wikipedia.  https://en.wikipedia.org/wiki/Timnit_Gebru

Harris, J. (2023, May 22). “There was all sorts of toxic behaviour”: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases. The Guardian. https://www.theguardian.com/lifeandstyle/2023/may/22/there-was-all-sorts-of-toxic-behaviour-timnit-gebru-on-her-sacking-by-google-ais-dangers-and-big-techs-biases

John McCarthy, (2023, December 05). In Wikipedia. https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)

McCarthy, J. (2007, November 12). What is Artificial Intelligence? https://www-formal.stanford.edu/jmc/whatisai/whatisai.html

Marvin Minsky, (2024, January 04). In Wikipedia.  https://en.wikipedia.org/wiki/Marvin_Minsky

Herbert Simon, (2024, January 24). In Wikipedia.  https://en.wikipedia.org/wiki/Herbert_A._Simon

Alan Turing, (2024, January 24). In Wikipedia. https://en.wikipedia.org/wiki/Alan_Turing

 

Prompting Activity: Reading with AI

Brief

In this exercise, we will focus on crafting effective prompts, an essential ability for interacting with generative AI systems.

We will use the paper “Technology as Experience” by McCarthy and Wright (2004) as our reference material to test your questions and prompts.

Ask the AI to summarize that paper and identify key points, before delving deeper. More nuanced and targeted questions tend to yield more accurate and insightful outputs from AI.

A central question we will revolve around: What are some of the strategic questions you might pose to thoroughly interrogate a research paper?


Preface

For this project, I tried utilizing ChatGPT3.5 but like many others, I was informed that it did not have access to this book or its contents.

I decided to try perplexity.ai instead, as I had heard about this platform in recent AI workshops that I’ve attended and wanted to take a look at its functionality.

One benefit of using this platform is that you don’t need to log in or create an account to use it, albeit with a limited set of functions; more are only available when you sign up.

Prompt #1:

What are some examples of how technology can be seen as experience, as discussed in the paper “Technology as Experience” by McCarthy and Wright (2004)? 

The result that I got was a breakdown of three main concepts mentioned in the paper, including:

  1. Brenda Laurel's Concepts: The paper borrows concepts from Brenda Laurel, who discussed the idea of extending our minds via computer engagement, highlighting the transformative and experiential nature of human-computer interaction.
  2. John Dewey's Notion of an Active Self: The authors draw on John Dewey's concept of an active self, which shapes action and the internal dynamics of experience. This perspective emphasizes the role of the individual in shaping their technological experiences, framing it as an active and participatory process.
  3. Mikhail Bakhtin's Emphasis on Felt Life: The paper also references Mikhail Bakhtin's emphasis on the primacy of felt life, suggesting that technology is intertwined with our non-technical 'lived life' experiences. This perspective underscores the deep integration of technology into our everyday experiences and its influence on society
Prompt #2:

What are some of the challenges when it comes to designing technology as experience? 

  1. Creating the Best Technology Experience: Ensuring that technology provides the best possible experience for users, encompassing factors such as productivity, collaboration, and engagement.
  2. Understanding Technology as Experience: Shifting the perspective from viewing technology as a tool to understanding it as an integral part of human experience, encompassing intellectual, sensual, and cultural aspects.
  3. Evaluating Technology as Experience: Developing frameworks and methods for evaluating technology in terms of the experiences it offers, going beyond functional aspects to consider emotional, aesthetic, and societal dimensions.
  4. Focusing on User Experience: Emphasizing that user experience is not just about technology, industrial design, or interfaces, but about creating meaningful and personally encountered events through technology. This requires a shift from a purely practical view to one that considers the richness of human experiences.
Addressing these challenges requires a shift in perspective, a focus on holistic user experiences, and the development of frameworks for evaluating and designing technology to encompass its full experiential potential.
Prompt #3:

What are some drawbacks of perceiving technology as experience? 

  1. Dehumanization and Social Isolation: Improper use of technology can lead to dehumanization and foster social isolation, as individuals may become overly reliant on digital interactions, potentially diminishing face-to-face socialization and interpersonal skills.
  2. Security and Privacy Concerns: The lack of online security and exposure of sensitive data pose significant drawbacks, as both companies and individuals are exposed to the dangers lurking on the Internet, leading to potential theft, data hijacking, and major economic loss.
  3. Potential Job Losses: The rapid growth of e-commerce and digital platforms can have a detrimental impact on traditional industries and professions, potentially leading to job losses and the decline of established practices
  4. Costly Consequences of Malfunctions: When technology systems fail, the consequences can be expensive, resulting in financial costs for repairs and replacements, as well as lost productivity
  5. Reduction in Face-to-Face Socialization: Technology may reduce face-to-face socialization and interpersonal skills, as individuals increasingly rely on digital communication platforms, potentially impacting traditional forms of social interaction

Reflection

In Prompt #1, the response provided the theoretical frameworks of the scholars mentioned within the paper, with a perhaps overly generalized summarization of their ideas.

Upon further prompting (i.e. “Can you expand on Brenda Laurel’s concepts within this paper?”), I was given a response that referenced Laurel’s other works before reapplying them to the current context, which I found helpful for an introductory level of understanding.

In Prompt #2, the response focused on the “metacognitive” level, recognizing that a perspective shift is needed in order to implement “technology as experience” in practical ways.

Upon further prompting the difference between “technology as experience” versus “user experience”, it made the distinction that the former offers a theoretical lens for understanding the broader impact of technology on human experiences; whereas the latter was a practical approach to design and evaluation of specific experiences created through technology.

Lastly, in Prompt #3, I wanted to ask about some aspects that we might be overlooking when we think about technology as experience. It provided some examples of what overreliance on technology could look like, and the dangers of becoming so enmeshed with technology that it becomes our sole experience.

Overall, I think the results for Prompt #1 and Prompt #3 could be conjured up through human brainstorming and critical thinking in general. Prompt #2 was more helpful when it came to clarification and understanding the nuances, which I found useful.

Intellectual Production #10 – The New Materialist Turn

Brief:

Imagine leading a professional learning seminar for educational technology specialists “wanting to think outside of the box”. How would one understand what New Materialism is?

What are some examples and activities to illustrate the differences that New Materialist perspectives offer to educational technology studies? How would one understand why New Materialism invites a re-seeing and re-thinking of “best practices”?


 

Task 11: Detain/Release

First of all, I wanted to point out that this module included some of my favorite podcasts that I listen to regularly, so I really enjoyed that we were using them as learning material in class!

From the Reply All podcast episode The Crime Machine, the CompStat system was created with the original intention of “treating every crime seriously”, but it quickly turned into a “management tool” as crime rates dropped over time. The shift in mindset upstream in the system trickled downstream: the higher-ups found loopholes to jump through, the police doing the actual work were forced to adhere to quotas, and ordinary people were profiled and given summonses by police to fulfill those quotas. It seems like no one benefits from this crime detection system anymore.

It was one of the stories that left a strong impression on me, as at the time one of the “hot topics” in our Cognitive Systems courses was “biases in algorithms” in the realm of Artificial Intelligence and Machine Learning. If the data we feed algorithms is tainted with human biases (which is most likely unavoidable), then algorithms will also learn our human biases and reproduce them in their analysis results. This sentiment was echoed in the 99% Invisible podcast episode The Age of the Algorithm.

Cathy O’Neil points out that many of these algorithms are used as “weapons of math destruction”, acting as proxies for profiling via race and class and often penalizing individuals for societal and systemic issues that they have little control over.

 "Algorithms aren't very objective even when they are carried out by computers. This is relevant because the companies that build them like to market them as objective, claiming they remove human error and fallibility from complex decision making, but every algorithm reflects the priorities and judgements of its human designer" (The Age of the Algorithm, n.d.)

The use of algorithms is more ubiquitous in our daily lives than we think, from employee hiring assessments to social media algorithms shaping the news in our feeds.

Currently, the Vancouver Art Gallery has an exhibition called “The Imitation Game”, which “surveys the extraordinary uses (and abuses) of artificial intelligence in the production of modern and contemporary visual culture across disciplines around the world”.

(Sidenote: this was a very fun and engaging exhibition for me – a COGS student – since it encompasses almost EVERYTHING I learned in my undergrad, so I am VERY BIASED. It is also a great introduction for those who are not familiar with these subjects, so please go take a look and see for yourself!)

One thing I really liked about the curation is that it showed what AI could be used for in everyday applications while also pointing out the “dark side” of such applications. One exhibit from the Algorithmic Justice League (AJL) questions the implications of facial recognition software that fails to recognize the faces of people of color. The AJL advocates for “equitable and accountable AI, to offer agency and control for people that interact with AI, and transparency and redress for harm caused in the use of AI.”


Reflections

When I was going through the Detain/Release tasks, the first few things I tended to look at were:

    • Type of crime committed: if it tends to be more violent (unlawful use of a weapon, assault, etc.), then I usually detain, as opposed to crimes that are “less” violent (fraud, drug possession, etc.)
    • Violence: whether or not they will pose a threat to the public if I release them
    • Statement from prosecution: I feel like this adds weight to the decision-making process for me, as I am more likely to go with what the Prosecution Team thinks

Factors that I didn’t consider as much in my decisions included:

    • Statement from Defendant
    • Name
    • Age
    • Gender
    • Photo

Usually cases that I release have:

    • Low levels of Failure to Appear
    • Low levels of Commit a Crime
    • Low levels of Violence

As long as Violence and Failure to Appear are low, I usually release.
If Commit a Crime is not low, then I check the type of crime committed before deciding to detain or release. (My rough heuristic is sketched as code below.)
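Here is that heuristic expressed as a minimal code sketch, purely for clarity: the field names and the notion of "low" are my own hypothetical stand-ins, not the simulation's actual data model or thresholds.

    # A sketch of my own detain/release heuristic described above.
    # Field names ("violence", "fail_to_appear", "commit_a_crime", "charge") and the
    # "low" labels are hypothetical stand-ins, not the simulation's data model.
    VIOLENT_CHARGES = {"unlawful use of a weapon", "assault"}

    def decide(violence: str, fail_to_appear: str, commit_a_crime: str, charge: str) -> str:
        if violence == "low" and fail_to_appear == "low":
            if commit_a_crime == "low":
                return "release"
            # Risk of committing a crime is not low: the type of charge decides.
            return "detain" if charge in VIOLENT_CHARGES else "release"
        return "detain"

    print(decide("low", "low", "medium", "fraud"))  # -> "release"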

I tried running the simulation 3 different times, with varying results in Jail Capacity and Fear.

The first time, I had a 40% Jail Capacity and little Fear.

**Edit: I was able to get hold of my statistics afterwards and have included them below with the actual percentages. As you can see, my guesstimation is not very accurate compared to the actual data.**

Defendants processed: 25
Detained: 17 
Released: 8 
Violators: 3
Jail: 75% 
Fear: 8%

The second time, I had 60% Jail Capacity, with 50% Fear.
(I think I was more lenient with my release criteria this time)

Defendants processed: 25
Detained: 13 
Released: 12
Violators: 5
Jail: 60%
Fear: 43%

The third time, I had 80% Jail Capacity and little Fear.

Defendants processed: 25
Detained: 13
Released: 12 
Violators: 5
Jail: 65% 
Fear: 62%

At the time, I wasn’t able to access the actual statistics for the simulation, so I was curious what the data could reveal about my judgements and assumptions in the process. Though I think I am aware of the biases that I subconsciously project onto my decisions, I wonder if seeing the data and statistics of my performance would further inform me of the underlying prejudices that I carry when making detain-or-release decisions. By making them transparent and bringing them to the forefront of my awareness, I think it will impact my decision-making processes moving forward.

References

Algorithmic Justice League – The Imitation Game. (n.d.). Retrieved July 25, 2022, from https://imitationgameexhibition.ca/15-Algorithmic-Justice-League

The Age of the Algorithm. (n.d.). In 99 Percent Invisible. Retrieved from https://99percentinvisible.org/episode/the-age-of-the-algorithm/

The imitation game: Visual culture in the age of Artificial Intelligence. Vancouver Art Gallery. (n.d.). Retrieved July 25, 2022, from https://www.vanartgallery.bc.ca/exhibitions/the-imitation-game

Vogt, P. (n.d.-a). The Crime Machine, Part I. In Reply All.

Vogt, P. (n.d.-b). The Crime Machine, Part II. In Reply All.
