Assignment 1: Learning Environment Evaluation Rubric – Reflection

The combination of a short timeframe and the difficulty of coordinating multiple people across at least four different time zones made this group assignment particularly challenging. Nevertheless, we were still able to pull together a thoughtful and comprehensive rubric reflecting our discussions and analysis of Learning Management Systems (LMSs) in public and non-profit organization settings.

We began by sharing qualitative experiences of the various LMSs we have used, whether as learners, designers, or administrators in our respective workplaces. Some of us were able to pull related documentation used in the selection process for these systems for the group to review and build context. We held initial text-based discussions to analyze what worked and what did not in our own experiences, and made connections to the recent readings on the SAMR model (Puentedura, 2010) and the SECTIONS model (Bates, 2014). Through further research, we also explored two literature reviews: one focused on LMSs in the workplace (Sabharwal, Chugh, Hossain, & Wells, 2018), and another that identified barriers to e-learning within public administrations (Stoffregen, Pawlowski, & Pirkkalainen, 2015). This informal, social analysis process unfolded first in Canvas, then via Google Spaces, and finally within our evolving Google Doc. Some of us were also able to meet on a Zoom call to discuss further and define a plan for completing the assignment.

Through the process of defining our scenario, we identified some key issues prevalent across many of our workplaces. The first was cost: both public organizations and non-profits often work with limited and fluctuating budgets, which constrains not only the upfront funds available to purchase a software product but also the ongoing funds needed to retain the staff who administer the LMS or design the e-learning housed within it, and the IT staff required to update and customize the software. Other key shared issues that informed our criteria were usability, analytical and reporting functionality to assist with assessment, privacy and security, and technological sustainability. Although cost was a core criterion from each of our perspectives, it was interesting to hear the subtle differences in our experiences. For example, at Sarah W.’s place of work, smartphone compatibility is important because their LMS is heavily used by external parties; at Lisa’s workplace, users access the system from multiple device types; whereas at my workplace, most staff have one assigned computer, with smartphones used largely for telephone functionality only. We therefore decided to focus our scenario on an internal system aimed at supporting staff training completed during work hours on work computers.

Reflecting on our rubric design, we could see that using a scale layout with values ranging from 1 to 4 points (from poor to excellent) allows the user of the rubric to see clearly where a particular LMS may be lacking, and also identifies clear opportunities for improvement, either at the time of selection or as goals for future software builds as funding permits.
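To illustrate how such a scale can surface weak areas, here is a minimal sketch in Python; the criterion names and scores below are hypothetical examples for demonstration only, not data from our actual rubric.

```python
# Minimal sketch: scoring an LMS against rubric criteria on a 1-4 scale.
# Criterion names and scores are illustrative placeholders.

criteria_scores = {
    "Cost": 2,
    "Usability": 4,
    "Analytics and reporting": 3,
    "Privacy and security": 4,
    "Technological sustainability": 2,
}

total = sum(criteria_scores.values())
maximum = 4 * len(criteria_scores)
print(f"Overall score: {total}/{maximum}")

# Any criterion scoring below 3 is flagged as an opportunity for
# improvement, either at selection time or as a goal for a future build.
for criterion, score in criteria_scores.items():
    if score < 3:
        print(f"Improvement opportunity: {criterion} (scored {score}/4)")
```

Summing the scores gives a quick overall comparison between candidate systems, while the per-criterion flags map directly onto the improvement goals described above.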

References:

Bates, T. (2014). Choosing and using media in education: The SECTIONS model. In Teaching in a digital age. https://opentextbc.ca/teachinginadigitalage/part/9-pedagogical-differences-between-media/

Puentedura, R. (2010). The journey through the SAMR model. iPad Educators: Sharing best practice in the use of mobile technology. https://www.powerschool.com/blog/samr-model-a-practical-guide-for-k-12-classroom-technology-integration

Sabharwal, R., Chugh, R., Hossain, M. R., & Wells, M. (2018, December). Learning management systems in the workplace: A literature review. In 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE) (pp. 387-393). IEEE.

Stoffregen, J., Pawlowski, J. M., & Pirkkalainen, H. (2015). A barrier framework for open e-learning in public administrations. Computers in Human Behavior, 51, 674-684.
