Paige’s LMS Reflection

To begin, our group brainstormed and searched for sample rubrics for LMS selection on the Internet. There were some great examples, but most of them were vague, general, and clearly not aligned with our specific scenario. We wanted to showcase our creativity and evaluate criteria relevant to our scenario, so we discussed creating user personas to help us identify the needs of the teachers, students, and administrators using the LMS. While this was a good start for considering different users’ needs, we decided it would be counterproductive to create separate rubrics for all three users: a good LMS should meet the broadest range of needs, and many of these needs overlapped anyway. My biggest takeaway from this experience is that developing the rubric plays a significant role in setting a clear goal for the LMS. And because the LMS has to be flexible for its users, the rubric itself needs a flexible structure.

However, this is easier said than done. We analyzed the needs versus the wants of LMS users, but Alexis made a good point that it is not always possible to distinguish between the two. Focusing on the users themselves turned out to be quite restrictive, so we changed our strategy and focused on the system’s high-priority functions versus low-priority or future ones. Once we did this, our organizational strategy became much more manageable. With this approach, we could consider the ISTE Standards (2017) more fluidly, focusing especially on our roles as designers, collaborators, and analysts interested in rethinking the traditional LMS and its capabilities. We could still use the human-centred approach to designing and selecting a platform that had been important to us from the start.

Initially, we had separated our rubric into different categories (functional, technical, etc.), but that structure fell apart when we reorganized it into multiple rubrics to better help users prioritize a system’s functions. However, we were able to reintegrate these categories into the latest rubrics, which added clarity and readability. Organizing the rubrics was one of the most difficult parts of the assignment, but it was helpful to have Andrew, who could provide different templates on the spot; this kept us from getting boxed in with one idea or way of doing things. Creating a rubric is obviously not a one-person show; it requires many iterations, different perspectives, and lots of time for feedback. What seemed like a simple, straightforward task was actually very complex. I realize the timeline was tight for the purposes of this assignment, but in the real world, even a small project like this would take months of collecting data and insight from key stakeholders, similar to what UBC did when it trialled the Canvas LMS (Wudrick, 2017).

Something I personally struggled with in this project was distinguishing between what I would want in an LMS and what the actual users would prioritize. As a student, I have lots of experience with LMSs, but not much administrative or teaching experience, so I had to keep balancing everyone’s priorities while keeping my own personal lens in check. As well, when I first began brainstorming for the project, I focused on overly technical details that were mostly about personalizing the learning experience. While personalization is valuable, the point is that the LMS as a whole should support personalization by providing options that work for different teachers and other users (Spiro, 2014). There is a difference between suggesting learning experiences as criteria and selecting a learning platform that can support the needs of an expanding institution. Thus, I had to look beyond the minor details and focus on big-picture ideas that would help the program consider scalability and the best ROI. Mimi and Faeyza had done a lot of research on these factors, providing valuable perspectives that I wouldn’t have arrived at alone.

Bates (2014) greatly influenced our group’s focus, and the questions posed at the end of each part of the SECTIONS model were valuable for considering different perspectives. There are so many LMS and CMS options available online, and many use flashy words like “innovative” and “intuitive” without really explaining how the platform is either of those things. We used the SECTIONS model to identify features that would actually provide a flexible learning experience rather than just a fancy management system. While it is not necessarily wrong to store content on an LMS, we need to move forward and represent the needs of digital-age teaching professionals and 21st-century learners, who are interested in platforms that help build knowledge rather than merely store information (Coates, James, & Baldwin, 2005, p. 33). This is especially true for our target audience, Francophone adult learners, who have likely already struggled with information presented to them in other forms. Just-in-time, personalized learning through a flexible online system is an exciting possibility, but not a guarantee without careful consideration of online pedagogy and a clear understanding of the users and what they really need from a system.

Through our rubric creation, I realized that we challenged the traditional assessment structure just as we questioned the traditional LMSs we had been subject to in the past. Our rubrics encourage users to take notes, make relevant decisions, ask questions, and confer, aligning with ISTE Standards (2017) such as collaboration that are non-negotiable in the current age. Our task was to provide a professional opinion, but through the structure of the rubric, we also involved key stakeholders in the process. By modelling best practices in rubric creation, perhaps we can lead and inspire others to use educational technology with a critical yet open mindset. That said, I don’t believe a “final” rubric is ever truly final; it should be flexible, fluid, and evolve to meet best practices.

References

Bates, T. (2014). Choosing and using media in education: The SECTIONS model. In Teaching in a digital age. Retrieved from https://opentextbc.ca/teachinginadigitalage/part/9-pedagogical-differences-between-media/

Coates, H., James, R., & Baldwin, G. (2005). A critical examination of the effects of Learning Management Systems on university teaching and learning. Tertiary Education and Management, 11(1), 19-36. http://link.springer.com/article/10.1007/s11233-004-3567-9

ISTE. (2017). ISTE Standards for educators [Web page]. Retrieved from https://www.iste.org/standards/for-educators

Spiro, K. (2014). 5 elearning trends leading to the end of the Learning Management System. Retrieved from http://elearningindustry.com/5-elearning-trends-leading-to-the-end-of-the-learning-management-system

Wudrick, H. (2017, August 28). We’re retiring Connect. Get ready for Canvas! [Blog post]. Retrieved from the University of British Columbia website: https://students.ubc.ca/ubcfyi/were-retiring-connect-ready-canvas

One comment

  1. Hi Paige,

    Thanks for a very insightful reflection. I learned a lot from reading through the process and points of consideration that you and your team went through: from role-playing the different LMS users to categorizing the rubric, and from balancing organisational needs to weighing learners’ needs versus wants. Our group found ourselves in similar discussions and learned a lot from this process as well.

    Thanks for sharing!
    Charisse
