GeoDASH: Exploring Predictive Policing Technology

As someone who has lived in Vancouver for the past nine years, the first thing I did when I saw the map was zoom into my neighborhood to see what had been happening around me. I also looked around the locations that I frequent.

After tinkering with the map, I read the GeoDASH FAQ page for more details on how the data was collected and visualized. Because of the sensitive nature of the incidents and the need to protect the privacy of those involved, most of the information displayed can only be treated as a proxy measure, as the actual locations have been rounded to the approximate block level.

Given the limited transparency around how information is collected, reported, and mapped, it is difficult to assess the validity of the information presented on GeoDASH at face value; cross-referencing other sources is needed to gain a more comprehensive understanding of crime trends in different neighborhoods.

Some of the biases that can arise when interpreting the data on the map are intersectional in nature: using GeoDASH without an additional understanding of the geographic, socioeconomic, and demographic characteristics of Vancouver may result in misinterpretation of crime trends.

For example, geographically speaking, certain areas may have higher concentrations of crime due to population density, land-use patterns, or proximity to transportation hubs (e.g., the central business district of Downtown Vancouver versus a suburban residential area in Langley).

From a socioeconomic perspective, areas with higher socioeconomic status may have greater resources for crime prevention measures, leading to more reported crimes (e.g., West Vancouver), compared with more disadvantaged neighborhoods where crime may be more prevalent but underreported (e.g., the Downtown Eastside).

Vancouver is a diverse city with many immigrant cultural hubs and communities, and differences in law enforcement practices that target specific demographic groups can also skew representations of crime patterns and produce disparities in enforcement outcomes.

To conclude, interpreting GeoDASH information at face value would be challenging for someone not from Vancouver, since it takes a nuanced understanding of the city's makeup, and a great deal of context, to arrive at a holistic and accurate reading of the presented data. Even for a Vancouverite like me, the historical background is an insightful starting point for better understanding the city I live in.

Learning Analytics Adventure: Factors that Impact Student Motivation and Completion of MOOCs

Abstract

Due to the massive number of potentially enrolled students, the open-access nature of participation, and the lack of physical space on online platforms, massive open online courses (MOOCs) embody a particular learning experience independent of time and space (İnan & Ebner, 2020). Because learners are heterogeneous and enroll with varying motivations, these factors affect whether or not learners complete the course. By identifying the characteristics that matter to learners, the learning experience of MOOCs can be improved and prioritized accordingly (Nanda et al., 2021).

Based on the Exit Survey of the edX-based UBC MOOC on Climate Change, two open-ended post-course survey questions were analyzed. A frequency analysis was conducted and visualized as a word cloud with the text analysis tool AntConc, revealing that learners found the video content most helpful, whereas the assignments, the lack of flexibility in time and deadlines, and unguided peer evaluation were aspects that negatively impacted their learning experience.

Statement of Question and Literature Review

Based on Nanda et al.'s (2021) study analyzing open-ended feedback from MOOC learners, and given the heterogeneity of learners who enroll in MOOCs, it is important to identify the characteristics that matter most to different learners and to prioritize them accordingly based on qualitative post-course surveys. In their study, the researchers applied latent Dirichlet allocation (LDA) topic modeling to responses from 150,000 MOOC learners across 810 MOOCs in different subject areas, using the following three open-ended questions:

Q1) What was your most favorite part of the course and why?

Q2) What was your least favorite part of the course and why?

Q3) How could the course be improved?

From their qualitative analysis, the researchers identified characteristics that impacted learners' experience and completion of the MOOCs: the quality of course content, accurate description of prerequisites and required time commitment in the course syllabus, quality of assessment and feedback, meaningful interaction with peers and instructors, engaging instructors and videos, accessibility of learning materials, and usability of the platforms.
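To make the approach concrete, below is a minimal sketch of LDA topic modeling on open-ended survey responses in Python with scikit-learn. The sample comments, the two-topic setting, and the simple preprocessing are illustrative assumptions, not the actual pipeline used by Nanda et al.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Illustrative open-ended responses; Nanda et al. worked with ~150,000 learners.
responses = [
    "The video lectures were clear and engaging",
    "Peer review rubrics were confusing and unhelpful",
    "Deadlines were too tight for working students",
    "Loved the instructor videos and the weekly quizzes",
    "Not enough time to finish the essay assignments",
]

# Bag-of-words representation, dropping common English stop words.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(responses)

# Fit a small LDA model; the number of topics is a tunable choice.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words per topic to interpret the themes.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```

In practice, the interpretability of the resulting topics depends heavily on the amount of text and the preprocessing choices, which is why the original study paired topic modeling with qualitative analysis.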

Based on these studies, I would like to conduct a similar review of the edX-based UBC MOOC on climate change, provide some data visualization, and open a discussion about further data collection and course improvements for future iterations.

Methods and Tools

For the scope of this project, I center an exploratory qualitative analysis of the Exit Survey as a starting point for further investigating learner data on student demographics, engagement with course content, and engagement with peers within the course, in order to extract useful information and gain insight into which characteristics and aspects of the course contribute to learners' motivation to complete or drop out of the course.

First, I familiarized myself with the quantitative and qualitative data from the MOOC and operationally defined the themes of student behavior and data that I wanted to investigate, namely student motivation, student familiarity (with the subject matter), student identity/demographics, student engagement (broken down into four categories), and qualitative feedback.

Table of Themes

Motivation

Climate Entry Survey
– Q2.1 What are the main reasons for taking the course?
– Q3.1 How many weeks do you plan to engage in the course?
– Q3.2 How many hours per week on average do you plan to spend on this course?
– Q3.3 How frequently do you plan to use course components?
– Q4.2 How many MOOCs have you participated in at least partially?
– Q4.3 Think about the MOOC you were most engaged in: what best describes your level of engagement?

Climate Exit Survey
– Q2.1 Were your goals for taking the course met?
– Q2.2 Are you likely to…
– Q3.1 How frequently did you use each of the course components?
– Q3.3 How many hours per week did you spend in this course?
– Q3.4 What interactions did you have with course peers?
– Q4.1 Which of the following components of the course were you very satisfied with?
– Q4.2 How did you find the following modules?

Familiarity (with Subject Matter)

Climate Entry Survey
– Q1.1~Q1.3 Climate knowledge
– Q4.1 Prior to this course, how familiar were you with the subject matter?

Climate Exit Survey
– Q1.1~Q1.3 Climate knowledge

Student Identity/Demographics

Climate Entry Survey
– Q5.1 Which country were you born in?
– Q5.2 Which country do you currently reside in?
– Q5.3 Main languages you speak
– Q5.4 English proficiency

Student Engagement

Person_course_day_cleaned.tsv
General (time spent):
– avg_dt
– sdv_dt
– sum_dt
Video engagement:
– nevents
– nplay_video
– ntranscript
– nvideos_viewed
– nvideos_watched_sec
Forum engagement:
– nforum_reads
– nforum_posts
– nforum_threads
– nforum_endorsed
– nforum_comments
Problems:
– nproblems_attempted
– nproblems_answered

Qualitative Feedback

Climate Exit Survey
– Q2.3 What did you like most about the course?
– Q2.4 What did you like least about the course?
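As a sketch of how the Student Engagement variables above could be rolled up per learner, the following assumes Person_course_day_cleaned.tsv contains one row per learner per active day, with the columns listed in the table; the learner identifier column (here called username) is an assumption.

```python
import pandas as pd

# One row per learner per day of activity; column names follow the theme table
# above, and the learner identifier column ("username") is an assumption.
daily = pd.read_csv("Person_course_day_cleaned.tsv", sep="\t")

# Roll the daily log up to one row per learner for each engagement category.
engagement = daily.groupby("username").agg(
    total_time=("sum_dt", "sum"),                        # general time spent
    videos_viewed=("nvideos_viewed", "sum"),             # video engagement
    forum_posts=("nforum_posts", "sum"),                 # forum engagement
    problems_attempted=("nproblems_attempted", "sum"),   # problem engagement
    active_days=("sum_dt", "count"),                     # rough count of active days
)

print(engagement.describe())
```

A summary table like this would make it easier to relate the engagement categories to the survey themes later on.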

Qualitative Feedback Analysis

 

I utilized the text analysis software AntConc for frequency analysis and visualization via its word cloud function, as it is a quick way to understand the top 50 most frequent words in the text at a glance.

There were 200 responses to the “What did you like most about the course” question, and 216 responses to the “What did you like least about the course” question. To prepare the text for frequency analysis, and given the smaller scope and sample size, I manually cleaned the comments by normalizing to consistent lowercase, removing stray Unicode characters and punctuation, and removing common stop words, and then proceeded to stemming and lemmatization.
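For reference, the same cleaning and frequency-counting steps could be scripted in Python. This is a minimal sketch assuming the responses are exported to a plain-text file with one comment per line; the file name and the NLTK resources are illustrative.

```python
import re
import string
from collections import Counter

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time downloads of the NLTK resources used below.
nltk.download("stopwords")
nltk.download("wordnet")

STOP_WORDS = set(stopwords.words("english"))
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

def clean(comment: str) -> list[str]:
    """Lowercase, strip non-ASCII characters and punctuation, drop stop words,
    then lemmatize and stem each remaining token."""
    text = comment.lower().encode("ascii", "ignore").decode()
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = re.findall(r"[a-z]+", text)
    tokens = [t for t in tokens if t not in STOP_WORDS]
    return [stemmer.stem(lemmatizer.lemmatize(t)) for t in tokens]

# "liked_most.txt" is a hypothetical export with one survey response per line.
with open("liked_most.txt", encoding="utf-8") as f:
    all_tokens = [tok for line in f for tok in clean(line)]

# Top 50 most frequent words, analogous to the AntConc word cloud.
for word, count in Counter(all_tokens).most_common(50):
    print(f"{word}\t{count}")
```

Counting the cleaned tokens with `most_common(50)` mirrors the top-50 word list that the AntConc word cloud displays.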

Findings

For the question “What did you like most about the course?”, below are some notable words that stood out.

Sixty-one students reflected that the “videos” were helpful for understanding the course content. Also appearing with high frequency in the word cloud are words with positive sentiment such as “good”, “clear”, “understand”, and “well”.

On the other hand, for the question “What did you like least about the course?”, below are some notable words that stood out in the word cloud.

In this word cloud, the words “assignment”, “time”, and “peer review” are ones I would like to highlight. Having gone back and read the comments in detail, I found that they were somewhat related to each other: many students reflected that they struggled with the assignments because of deadlines and personal time constraints, in addition to the peer review component that was required to complete the assignment.

Among the comments on assignments, four students shared that they came into the course with little background knowledge of climate change, felt overwhelmed by the assignment, and found that “having to write essays on climate science before establishing a knowledge base was more than a little daunting.” Others commented that the assignments were rather demanding given deadlines and personal time constraints, and that they would prefer “shorter or optional assignments” to the current structure.

Lastly, the assignment had a peer review component that 18 students disliked for various reasons. Some stated that “the rubrics for peer reviews were limiting and not quite matching the assignment” and that there was no guidance on how to provide constructive peer feedback; with widely varying effort put into peer evaluation, many agreed that perhaps “it is necessary for staff review”.

Discussion and Future Development

One main limitation of this exploratory qualitative analysis is that it could only be conducted after the course had ended and the surveys had been collected, not in real time while the course was in progress.

These findings align with the three main factors influencing MOOC dropout identified by Eriksson et al. (2017): a mismatch between learners' perceptions and the actual course content and design; learners' ability to manage their time; and social aspects of feeling part of a learner community.

Using the findings of the frequency analysis as a starting point, I discuss below potential further investigations of the data, organized around some of the themes from the course survey comments, that could support improvement and development of the MOOC in future iterations.

Course Component, Learning Materials and Resources

Beyond the positive sentiment towards the use of videos in course delivery, I think it would be interesting to look further into the general use of course components, learning materials, and resources. Comparing the self-reported use of course components in the entry and exit surveys with the actual platform-generated usage data would help identify gaps between the resources students expect to use and those they actually use, and whether certain materials need further development for future iterations of the MOOC.
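One way to operationalize this comparison is to join the survey responses with the aggregated activity log. The sketch below is illustrative only: the file names, the user identifier columns, and the planned_video_use column standing in for the entry-survey component questions are all assumptions.

```python
import pandas as pd

# Hypothetical exports: the entry-survey component-use questions and the
# per-day activity log from the MOOC platform.
survey = pd.read_csv("entry_survey.csv")      # assumed columns: user_id, planned_video_use (1-5 scale)
activity = pd.read_csv("Person_course_day_cleaned.tsv", sep="\t")

# Aggregate logged video activity per learner.
actual = (activity.groupby("username", as_index=False)
                  .agg(videos_viewed=("nvideos_viewed", "sum")))

# Join self-reported expectations against observed behaviour.
merged = survey.merge(actual, left_on="user_id", right_on="username", how="inner")

# A simple gap measure: learners who planned frequent video use but watched few videos.
merged["low_follow_through"] = (merged["planned_video_use"] >= 4) & (merged["videos_viewed"] < 5)
print(merged["low_follow_through"].mean())
```

Even a rough gap measure like this could flag which components are over- or under-used relative to expectations, and so which materials may deserve further development.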

Peer Interactions and Peer Evaluations

Continuing from the findings about the negative aspects of the course, I think it would also be useful to compare the self-reported interaction with peers against the actual platform-generated data on peer interactions. Applying Social Network Analysis to peer interactions, for example with NodeXL, can provide more insight into the socio-constructivist potential of MOOCs. Building rapport with peers might also ease some of the negative comments on peer reviews, alongside providing clearer rubrics and guidelines for giving feedback, as “peer assessment techniques and exploiting peer support can revolutionize emergence of new pedagogical models in the MOOC approaches” (Yuan & Powell, 2013).
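As an illustration, the same kind of social network analysis could be sketched in Python with networkx instead of NodeXL, assuming a hypothetical export of forum reply pairs with columns replier and original_poster.

```python
import networkx as nx
import pandas as pd

# Hypothetical export of forum interactions: one row per reply, with the
# author of the reply and the author of the post being replied to.
replies = pd.read_csv("forum_replies.csv")  # assumed columns: replier, original_poster

# Build a directed interaction graph: an edge A -> B means A replied to B.
G = nx.DiGraph()
for row in replies.itertuples(index=False):
    if G.has_edge(row.replier, row.original_poster):
        G[row.replier][row.original_poster]["weight"] += 1
    else:
        G.add_edge(row.replier, row.original_poster, weight=1)

# Simple centrality measures to spot highly connected (or isolated) learners.
in_centrality = nx.in_degree_centrality(G)    # how often a learner is replied to
out_centrality = nx.out_degree_centrality(G)  # how often a learner replies to others

top_receivers = sorted(in_centrality.items(), key=lambda kv: kv[1], reverse=True)[:10]
print(top_receivers)
```

Centrality scores like these could then be compared with the self-reported peer-interaction question (Exit Survey Q3.4) to see how perception and behaviour line up.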

Prior Experience with MOOCs and Time Management

Given that students reflected on the lack of flexibility in deadlines, in addition to their personal time management and constraints, I believe Time Management is a theme I would add to the Table of Themes. Some of the questions that would help operationally define this theme overlap with the Entry Survey questions under Motivation, and additional questions, such as whether students work full-time or part-time, could serve as proxy measures for how much time students realistically have.

However, I believe some of these questions also overlap with another potential theme, Familiarity (with MOOCs): whether or not students have had previous pedagogical experience with MOOCs, assuming that those with previous experience have a better idea of how to allocate their time and exercise other metacognitive executive functions. Additional survey questions targeting these themes could provide more data points for generating a more well-rounded picture of students' motivations and capabilities.

Personal Reflection

One big challenge I faced was familiarizing myself with the massive quantity of data generated by the MOOC and figuring out ways to operationalize it to target what I wanted to investigate.

Another major challenge was the learning curve for the different learning analytics tools that were available. For Tableau, I had to go through the tutorial videos to understand how to combine the data sources to create the visualizations I wanted. For AntConc, the tool itself was more straightforward due to its smaller scope of functions and settings; however, learning to format and prepare the comments for analysis was something new for me. The dataset was manageable to prepare manually, though if the scope were to increase, proper Python programming would be beneficial to complete the task. In retrospect, I believe NodeXL might have been a more robust text-analysis and social network analysis tool to use for this project. Ideally, I would have liked more time to understand the data and the functions of the application so I could create more substantial visualizations that provide insight into the social interactions of students in the course, especially the socio-constructivist potential of crowdsourcing peer reviews and evaluation in MOOCs.

References and Literature

İnan, E., & Ebner, M. (2020). Learning analytics and MOOCs. In P. Zaphiris & A. Ioannou (Eds.), Learning and collaboration technologies: Designing, developing and deploying learning experiences (HCII 2020, Lecture Notes in Computer Science, Vol. 12205). Springer, Cham. https://doi.org/10.1007/978-3-030-50513-4_18

Eriksson, T., Adawi, T., & Stöhr, C. (2017). “Time is the bottleneck”: A qualitative study exploring why learners drop out of MOOCs. Journal of Computing in Higher Education, 29(1), 133-146. https://doi.org/10.1007/s12528-016-9127-8

Khalil, M., & Ebner, M. (2017). Driving student motivation in MOOCs through a conceptual activity-motivation framework. Zeitschrift für Hochschulentwicklung, 12(1), 101-122. https://doi.org/10.3217/zfhe-12-01/06

Nanda, G., Douglas, K. A., Waller, D. R., Merzdorf, H. E., & Goldwasser, D. (2021). Analyzing large collections of open-ended feedback from MOOC learners using LDA topic modeling and qualitative analysis. IEEE Transactions on Learning Technologies, 14(2), 146-160. https://doi.org/10.1109/TLT.2021.3064798

Nawrot, I., & Doucet, A. (2014). Building engagement for MOOC students: Introducing support for time management on online learning platforms. In Companion Proceedings of the 23rd International Conference on World Wide Web (pp. 1077-1082). https://doi.org/10.1145/2567948.2580054

Zhu, M., Sari, A. R., & Lee, M. M. (2022). Trends and issues in MOOC learning analytics empirical research: A systematic literature review (2011–2021). Education and Information Technologies, 27, 10135–10160. https://doi.org/10.1007/s10639-022-11031-6

Evaluation of Learning Analytics Tool

Evaluation Tool and Motivation

For this assignment, I have chosen OnTask as the learning analytics tool for analysis. I want to look into the ways this tool goes above and beyond monitoring student behavior patterns for “early alerts” by providing students with real-time feedback and suggestions for improvement, such as recommending additional resources for reference. In addition, as it is not limited to one LMS, it is a potentially useful add-on in a variety of contexts. I reflect on the ways this tool could be especially helpful in the context of the UBC OnTask pilot study in a first-year physics course as an example.

I chose the Cooper framework (Cooper, 2012) as a guide to evaluate this learning analytics tool, as it takes a more “descriptive, rather definitive approach which allows to deal with real-world complexity of how analytics is” (p. 3). The framework describes “characteristics of learning analytic tools which may overlap and are assumed to be extensible and adaptable” (p. 3), which I believe is important for benefiting from the affordances of the tools while thinking about best practices for their use in different educational contexts.

OnTask Tool Functions (OnTask, n.d.)

The OnTask project “aims to improve the academic experience of students through the delivery of timely, personalized and actionable student feedback throughout their participation in a course”. Some core functions of OnTask include:

  • Assesses data about student activities throughout the semester and allows instructors to design personalized feedback with suggestions about learning strategies, so students can adjust their learning progressively
  • LMS-agnostic: receives data from various sources (i.e., online engagement, assessments, student information systems, electronic textbooks, discussion forums, etc.)
  • Directs students to specific chapters/examples in textbooks, suggests additional readings/resources, prompts enrolment in required workshops/tutorials/labs, and redirects students to university support services

Cooper’s Framework – UBC OnTask Pilot Study

Based on UBC’s OnTask pilot project (Moosvi, 2019), the tool was used in the first-year physics course PHYS1117 by instructor Simon Bates and co-instructor Mateus Fandino to engage more with students and provide each student with personalized feedback based on their performance over the weeks. Using Cooper’s proposed framework (Cooper, 2012), I analyze this case study below:

Analysis subjects, objects, and clients

Analysis subjects: first-year students in PHYS1117

Analysis objects: first-year students in PHYS1117, specifically student performance within the course

Analysis clients: course instructors

Data Origin

● Data collected from the LMS Canvas (i.e., assignments, exams and assessments, student engagement data, etc.)
● In-class attendance and participation
● Laboratory sections

Orientation and Objectives

Orientation: diagnostic in nature, even though the pilot project is an exploratory one, as it aims to investigate whether personalized feedback can improve student performance. The study also has a reflective orientation, although a natural extension of the pilot project could be predicting outcomes for future cohorts, in which case a more predictive mode is expected.

Objective: to enhance student performance and outcomes based on a mix of quantitative and qualitative measures.

Technical Approach

The technical approach is assumed to be statistical and hypothesis-testing in nature, comparing students’ previous course performance without personalized feedback from OnTask against students’ course performance with the personalized feedback from OnTask.

Embedded Theories

Socio-constructivist approaches in which students co-construct their learning process by receiving and implementing feedback given by instructors.

Based on some of the feedback from students, there was a generally positive perception of using OnTask to provide guidance on what to focus on in the readings, reminders about upcoming assessments and deadlines, areas of confusion and commonly made mistakes, and reflections on previous and future coursework, as well as helping students gauge their own progress in the course. There was also an increase in student engagement and interaction overall due to the weekly personalized newsletters that the instructors sent out.
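As a concrete illustration of the hypothesis-testing approach assumed under Technical Approach above, a minimal sketch might compare final grades between a prior cohort without OnTask feedback and the pilot cohort; the file and column names here are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical exports: final course grades for a prior cohort (no OnTask
# feedback) and for the pilot cohort that received weekly personalized feedback.
baseline = pd.read_csv("phys_grades_previous_year.csv")["final_grade"]
pilot = pd.read_csv("phys_grades_pilot_year.csv")["final_grade"]

# Welch's t-test: does the pilot cohort perform differently on average?
t_stat, p_value = stats.ttest_ind(pilot, baseline, equal_var=False)
print(f"mean difference: {pilot.mean() - baseline.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A simple between-cohort comparison like this cannot rule out confounds such as different exams or intakes, which is one reason the pilot's exploratory framing is appropriate.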

Reflection

The pilot study results are in alignment with the findings of Pardo et al. (2017) on a first-year undergraduate engineering course. Although the effect size of personalized feedback on midterm scores was not large, the result still showed a significant and positive impact on student satisfaction with feedback and on academic performance in the midterm exam (p. 136). The researchers identified feedback as an important factor in supporting student success and highlighted the need to establish a connection between student data and how to use it to provide high-quality feedback. They suggested creating comment templates tailored to a course, or algorithms to match comments to observed data.

Further research on better techniques to identify individual differences in student participation in learning experiences and in students' study habits was also highlighted, as these missing pieces of information could inform interventions with potentially large effects.

Applying this to my personal context of foreign language teaching and learning, given the smaller class sizes and the larger share of qualitative assessments, there might not be as big a difference between using OnTask and the instructor providing feedback directly. However, I still believe OnTask could be a great tool for identifying common challenges students face in their qualitative assignments, and it can still provide insight into student performance in courses.

References

Cooper, A. (2012). A framework of characteristics for analytics (CETIS Analytics Series, Vol. 1, No. 7). http://publications.cetis.ac.uk/2012/524

Lim, L., Gentili, S., Pardo, A., Dawson, S., & Gašević, D. (2018). Combining technology and human intelligence to provide feedback and learning support using OnTask. In Companion Proceedings of the 8th International Conference on Learning Analytics & Knowledge (LAK’18), Sydney, Australia.

Moosvi, F. (2019, June 20). Learning analytics project OnTask: A case study. https://learninganalytics.ubc.ca/ontask-a-case-study/

OnTask. (n.d.). https://www.ontasklearning.org/

Pardo, A., Jovanovic, J., Dawson, S., Gašević, D., & Mirriahi, N. (2017). Using learning analytics to scale the provision of personalised feedback. British Journal of Educational Technology. https://doi.org/10.1111/bjet.12592

 

ETEC511 Final Project Retrospective

Project Design and Development

This project was something rather novel, working in the realm of augmented reality, as I did not have much previous experience with this kind of technology, especially in conjunction with language learning, which I am passionate about. It was fun to challenge myself in this way.

In terms of the design process, I think the literature review and supporting theories could have been more thorough: I would have liked the chance to dig deeper into existing research, put more thought into creating more robust scaffolded levels for language learners, and create supplemental user surveys to flesh out the usability testing.

As an example:
We changed our original idea of scaffolding levels based on a decreasing percentage of home-language labels to one based on increasing delays before the language labels appear. This decision was made late in our design development, and though I think it is a better idea, I wish we had had more background research to back it up.

If we had had more time, I would have liked to develop the mock-up of our software a little further, creating a more realistic demo of what could be achieved with the MyWebAR tools to illustrate our concept and make it feel more grounded, concrete, and feasible.

Collaboration and Teamwork

Bella, Jamie, and Jennifer provided a lot of interesting observations from their hands-on experience working with elementary students in a classroom setting. From their perspective as teachers and the observations they shared, we quickly identified an existing problem that new immigrant students were struggling with. This reminded me of my own experiences trying to integrate into new classroom environments with a complete language barrier when I was younger.

Though I do not work with students directly, I was able to provide some of the design-thinking to the project from my experiences in research as well as creating learning content for language instructors, and contributed to the project in a different way.

All four of us worked collectively on the project proposal, report and slides.
Jamie took on the main responsibility for creating the AR demo in MyWebAR. Though the tool claims to make it easy for beginners to create AR elements, it also seemed rather limited in what it could actually do. Jamie did a great job of creating an alternative user interface that demonstrated what we needed it to do.

I think one of the main challenges was finding time to work collaboratively, as Jennifer was in a drastically different time zone. Sometimes Bella, Jamie, and I would block off time and work on the project together, which I personally found super helpful.

Moving forward, one project management skill I would like to develop is creating clearer project objectives with actionable items and deadlines to ensure that we all know what needs to be done within what timeframe. I believe this will help drastically with group projects in the future.

Tipping Point – Open Education Resource Textbooks Case Study

Creation of Open Education Resource Textbook with Interactive H5P elements for FREN1205 – French Conversation course in the Modern Languages Department at Langara College

Introduction

For the case of technological displacement, we were curious to explore the shift from physical textbooks to digital Open Education Resources (OERs) in higher education institutions. We were specifically interested in the tensions and opportunities that arose from the transition to online teaching and learning after the pandemic, especially with the normalization of online and hybrid e-learning.

We ground this inquiry into technological displacement in the case study of the creation of an OER textbook with interactive H5P elements for a French conversation course at Langara College. In this assignment, we analyze the usability of OERs from the instructor and student perspectives, as well as explore concerns about artificial intelligence and issues surrounding digital labor in the process of creating OERs in higher education institutions.

 

Motivation and Background

The FREN1205 – French Conversation course at Langara College is offered in person using the digital OER textbook Le Français Interactif, created by the instructor Mirabelle Tinio. To support our work, we had the opportunity to speak with the instructor to learn more about the case study; all case study context provided in this assignment came from this conversation. Below are some of the motivators for the creation of the OER textbook from both the students’ and the instructor’s perspectives.

From the student perspective, the education landscape was drastically transformed by the emergency transition to online teaching and learning at the beginning of the pandemic in 2020. The effects could still be seen as in-person teaching and learning gradually resumed in 2021, when student surveys reflected that having additional supportive resources available online helped with their learning process and overall experience of taking online courses. In addition, students reflected that physical textbooks were expensive and inaccessible, especially those that were ‘single-use’ for an individual course, and they were less inclined to make such purchases.

From the instructor’s perspective, many factors contributed to the transition from physical textbooks to a digital OER. The instructor we interviewed had been teaching the French conversation course for at least the past 12 years. Though the original textbook she was using provided activities and exercises for everyday conversation scenarios, she found that the content was not up to date or culturally relevant enough for the students in her classroom. She therefore found herself turning to other available language learning resources to patch together a curriculum plan that included vocabulary, grammar structures, and socio-cultural activities. The process was rather time-consuming, and she was never really satisfied with the existing resources.

With both students and the instructor identifying that the current resources were not meeting their needs, it became clear that another resource should be introduced for this course. Here, we can use the concept of technological utility to explain, in part, why a tipping point occurred. Utility asks whether the technology fulfills the users’ needs, that is, whether it does what the users need it to do (Issa & Isaias, 2015, p. 4). Physical textbooks were not meeting the learners’ and instructor’s utility needs; therefore, a new technology needed to be introduced.

At the same time, while also working partially in the Educational Technology Department, the instructor saw that many other instructors were using Pressbooks and other OER platforms to bring resources into Brightspace, a learning management system. The existing integration with the learning management system and the potential for further adaptation were additional motivators for developing her own OER textbook for the class.

The Tipping Point

The opportunity and tipping point presented themselves when applications opened in 2021 for the BCcampus Open Education Foundation Grant for Institutions, specifically for project proposals using H5P in Pressbooks. The grant was intended for British Columbia post-secondary institutions wishing to explore, initiate, or relaunch open educational practices, resources, support, and training on their campuses. Through this grant, the instructor was able to secure additional funding and support for creating the French Conversation OER textbook.

Benefits

Multi-modality, Interactivity and Flexibility  

Learning languages is an inherently multimodal activity that incorporates a combination of multi-sensory and communicative modes (Dressman, 2019). The use of online OERs makes it possible to include multimedia and interactive H5P elements so that students can actively engage with the learning content; this allows for more diversity in learning methods and increases the accessibility of course content.

Though the OER textbook includes many different chapters and topics, each unit follows a similar format: the learning objectives, a pre-test questionnaire, vocabulary, practice exercises, oral comprehension exercises, a post-test evaluation questionnaire, and self-reflection. This repeated format increases the OER’s usability because it is quickly learnable and memorable (Issa & Isaias, 2015, p. 33). The OER therefore creates a smoother user experience, with less friction or frustration in navigating to the content than a physical textbook, demonstrating again why this tipping point occurred (Issa & Isaias, 2015, p. 30).

The goal was to make the learning content accessible to both students and instructors with maximum flexibility and adaptability. Students can preview the units and prepare ahead of class, or review the units and practice areas for further improvement, all at their own pace, with self-assessments available. Instructors can supplement the course delivery with additional resources, in-class activities, or outing experiences, and use the textbook in a non-linear manner tailored to the needs and pace of the students in the classroom.

Living Texts 

The content in the OER includes resources the instructor created and also showcases content created by previous students; it can be seen as a co-created ‘living text’ (Phillips & Willis, 2014), both as a pedagogical tool and as a co-creation of knowledge within the classroom.

For example, in the activity “Interview a Francophone”, the instructor uploaded recorded interview videos of previous students’ work, as exemplars of what the assignment could look like when current students approach the activity themselves, but also as an exercise for current students to practice their listening comprehension and understanding of French conversation in context. The instructor noted that this was also meant to make students feel appreciated for their active contributions to the course, recognizing students as part of the co-construction of literacy knowledge through this kind of interaction (Phillips & Willis, 2014).

Creating an OER that operates as a living text supports increased usability because it allows for feedback to be implemented when offered by the learners (the users). A living text can push back against the challenge of “configuring the user”, where the designers imagine the “right way” for a user to engage with their technology instead of being open to how the users actually will engage with the technology (Woolgar, 1990). This OER as a living text can be adapted to user feedback and therefore there is not only one “right way” to use the resource. Instead, the OER can increase usability for a wider variety of users as instructors adapt it based on learner feedback. The instructor noted that keeping an OER like this up-to-date is very important. This is especially true if the OER is described by an instructor to learners as a living text that is responsive to their needs. 

Equity, Diversity and Inclusion 

As mentioned above, the multi-modality, interactivity, and flexibility of the living text contribute towards a classroom climate that reflects equity, diversity, and inclusion of the students currently taking the course. This approach takes into consideration the positionality, lived experiences, interests, and abilities of students within the classroom and their agency as active participants in their own learning.

For example, in the aforementioned “Interview a Francophone” activity, the crowd-sourced collaborative effort of the different interviewees lets students see different kinds of ‘francophone-ness’ outside the mainstream Eurocentric depiction of French-speaking people, especially given the deep-rooted history of the French language as a tool of colonization.

By embracing inclusive pedagogical approaches and recognizing students’ diverse contributions, this way of creating OER textbooks builds a supportive and accessible learning environment, fosters a sense of belonging, and affirms the value of students’ unique contributions to the learning process.

Challenges 

Current Concerns: Teamwork Makes the Dream Work 

One major challenge the instructor encountered during the creation of this OER textbook was the lack of support at the institutional level, especially since new technological adaptations require more incentives and supporting resources to push for their incorporation and use within the college and, furthermore, across institutions. Though the instructor did collaborate with other language instructors from the Modern Languages Department and advisors from the Educational Technology Department, there is a strong case for creating a community of practice across institutions to support this work’s sustainability. The production of a brand new OER like this (as well as its ongoing maintenance) involves significantly more time and energy than maintaining the status quo of using physical textbooks. There is a risk that the instructor’s digital labor in producing this kind of resource might go unrecognized by the institution if it remains unseen.

On a practical and logistical level, such a community of practice would help ensure that course articulation is leveled and aligned across institutions, especially regarding the transferability of courses and credits for pathway programs such as those at Langara College. On a more idealistic and aspirational level, it promotes collaboration and a commitment to sharing knowledge and resources, encouraging accountability, peer review, and continuous development of teaching and learning practices, enabling the community to build on each other’s work and fostering a culture of openness and collaboration in education.

Future Concerns: The Rise of Artificial Intelligence and Impact of Digital Labor  

Though the BCcampus grant did provide funding for the instructor to develop the OER textbook, more support is needed when it comes to compensating the invisible work that is added on top of the already existing duties of a teaching faculty member. With the increased digitization of instruction within higher education comes an expectation of an accelerated pace of work (Woodcock, 2018, p. 135). There can be an expectation, even an implicit one, within institutions that work becomes “easier” as a result of digital resources like this OER textbook. This can result in expanding work and time pressures for instructors who have created digitized aspects of their work.

Another risk for instructors is the value placed on published work to push an academic career forward (Woodcock, 2018, p. 136). The motivation to pursue the creation of open-access work can be reduced if the institution the academic works within rewards published work. While an OER like the one described in this case is a different kind of open-access work than a journal article, its creation and upkeep draw on the same labour hours for an instructor. The instructor must be significantly committed to the creation of the OER if there is limited institutional support, as described in this case, and also if there is institutional pressure to spend time on other, more valued work, such as publishing in a prestigious journal.

Finally, there is a tension inherent in the use of artificial intelligence in relation to OERs. As this case study shows, producing and maintaining OERs can be time-, labor-, and resource-intensive. With the rise of large language models like ChatGPT in the past year, there is potential to employ such AI tools to support the creation of OERs. This might seem to reduce the human labour needed to create an OER like Le Français Interactif. However, we also know that AI tools like ChatGPT do not appropriately cite sources and can even ‘make up’ information. Uncited sources are problematic because they effectively steal intellectual property from other academics, and false information is problematic because it diminishes the reliability and utility of the OER.

Even more concerning is that AI language models are trained on data that can be biased and can produce content embedded with this bias (Buolamwini, 2019). For an OER project like the one outlined in our case study, producing resources in “partnership” with an AI tool could run counter to the desire to create more culturally relevant and inclusive resources. More relevant to this case study, regarding language translation, AI tools like DeepL can be helpful but are not yet at the point where they can translate as effectively as a human who speaks multiple languages. For this reason, instructors might be wary of using AI tools as “co-authors” for OERs, to ensure the quality of the instructional or learning resource remains high.

Conclusion

This case study demonstrates how the creation of an OER textbook for the FREN1205 – French Conversation course at Langara College exemplifies a pivotal shift in educational resources toward digital platforms. This tipping point is a response to the evolving needs of both students and instructors in the post-pandemic era of education. Ideally, an OER textbook offers learners enhanced accessibility, flexibility, and more inclusivity within their educational experience. However, challenges such as institutional support for digital labour and concerns surrounding the rise of artificial intelligence underscore the importance of institutional buy-in and ethical considerations as we integrate OER textbooks into the student experience.

References

Buolamwini, J. (2019, February 7). Artificial Intelligence has a problem with gender and racial bias. Time. https://time.com/5520558/artificial-intelligence-racial-gender-bias/

 

Dressman, M. (2019). Multimodality and language learning. In M. Dressman, & R. W. Sadler (Eds.), The handbook of informal language learning (pp. 39-55). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781119472384.ch3

 

Issa, T., & Isaias, P. (2015). Usability and human computer interaction (HCI). In Sustainable design (pp. 19-35). Springer.

 

Phillips, L. G., & Willis, L. (2014). Walking and talking with living texts: Breathing life against static standardisation. English Teaching: Practice and Critique, 13(1), 76.

 

Woodcock, J. (2018). Digital labour in the university: Understanding the transformations of academic work in the UK. tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 16(1), 129-142.

Woolgar, S. (1990). Configuring the user: The case of usability trials. The Sociological Review, 38(1, Suppl.), S58-S99.

Intellectual Production #1 – Users, Uses and Usability

Formulate a conception of usability and what is missing from the conception from an educational perspective —  what is educational usability?

Human-Computer Interaction (HCI) is an interdisciplinary field of study concerned with the iterative design, evaluation, and implementation of interactions between humans and technological interfaces as a system.

The principles of usability are guidelines that help measure the quality of human-computer interactions, taking into consideration interface functionality, efficiency, and effectiveness depending on users’ needs, contexts, and level of satisfaction (Issa & Isaias, 2015, p. 30).

From an educational perspective, I believe that context and users’ needs should be prioritized when evaluating the educational usability of educational technologies and resources within a learning context. Ideally, this would be implemented as a system, such that the interfaces can assist users with their learning process and can be adapted to fit users’ ever-changing needs. This necessarily means having interfaces that are accessible to users physically, cognitively, culturally, and digitally, providing support that is contextualized.

Based on Woolgar’s paper, identify and discuss 2 examples of “usability gone wrong”.

In his paper, Woolgar (1990) seemed to be concerned with whether usability testing happens within the “right context” of both the user and the environment.

Because employees within the company were chosen as test subjects (p. 81), it is unclear whether their behaviour reflects what would be expected of the target users. Even with the provided manuals, it is uncertain that the instructions are “sufficiently clear” to target users, such that errors made in the usability tests could be misattributed to other factors (p. 82). Lastly, because of the simulated environment, the test subjects even ironicized the attempt to create an “objective test”, making it challenging to discern whether they behaved in a way that is “natural” to target users at all (p. 86).

The lack of concrete definitions of the machine and the user personas, together with the simulated nature of the “objective tests of natural user behavior”, undermines the robustness and reliability of the usability tests overall.

Discuss the differences seen in the two excerpts of “usability”

“…the usability evaluation stage is an effective method by which a software development team can establish the positive and negative aspects of its prototype releases, and make the required changes before the system is delivered to the target users” (Issa & Isaias, 2015, p. 29).
“…the design and production of a new entity… amounts to a process of configuring its user, where 'configuring' includes defining the identity of putative users, and setting constraints upon their likely future actions” (Woolgar, 1990).

Based on the two excerpts, both seem to converge on the idea of iterative and interactive processes that adjust the interface to create better experiences for users.

The main difference seems to be that Issa and Isaias’s approach relies more on “after-the-fact” feedback, such that improvement is based on users’ reactions and responses. Woolgar, on the other hand, describes “before-the-fact” assumptions about users that the design then tests, hence “configuring” the user.

While both approaches create recursive feedback loops that push development of the interface and are initially “human-driven” in design, it makes me wonder: how much of our interaction with technology is directed by human agency, and how much is shaped by the affordances of the technology itself?

References

Issa, T., & Isaias, P. (2015). Usability and human computer interaction (HCI). In Sustainable design (pp. 19-35). Springer.

Woolgar, S. (1990). Configuring the user: The case of usability trials. The Sociological Review, 38(1, Suppl.), S58-S99.
