Assignment 1 Reflection — Meghan Gallant

Designing an LMS evaluation rubric challenged me in ways I did not initially expect. I knew I would have to take the requirements and limitations outlined in the case study into consideration, but I did not expect the anxiety I felt once I realized I had simply assumed I knew what would matter from YESNet's perspective.

Our group began with a clear structure to work from: Bates' (2014) SECTIONS model. I was pleased to be using the SECTIONS model in its intended context and felt confident that, using the framework as a guide, I would contribute relevant and thoughtful criteria. However, when I began to think about what YESNet required, I realized I knew very little about what they would want in an LMS or how they expected an LMS to facilitate blended learning. I could not put myself in YESNet's shoes, so I started researching LMSs and blended learning.

Ellis and Calvo (2007) suggest that the first step in implementing a blended learning environment is that "staff begin by undertaking some sort of decision-making. Those initial decisions depend on the size and scope of the redevelopment or design of the course, the needs of students, the learning strategies of their department, and the culture of the institution" (p. 63).

Unfortunately, I am not a member of YESNet and did not undertake any of that decision-making, so I had to make many assumptions about YESNet's needs. The first assumption came when I chose to contribute criteria for the Ease of Use and Cost components of Bates' (2014) SECTIONS model. Reflecting on why I chose these particular components, I discovered that they are the two considerations I usually weigh first when choosing technology for my own classes. Someone had to cover Ease of Use and Cost, but was I biased in choosing them? Had I put my own priorities ahead of the needs of the students, the learning strategies of YESNet, and the culture of the institution by assuming Ease of Use and Cost would be priorities? Would my suggested criteria be useful, or would my bias undermine their validity? I wasn't sure what kind of feedback to expect when I went into our second group meeting.

Once I started working with the group, I felt better about my contributions. Together we reworded, added, deleted, and rearranged the criteria, and after several hours of work we had developed a rubric I am proud of. Revision and collaboration are not unique to our group; this happens all the time. But the experience highlighted an important point: choosing evaluation criteria is not a task that should be undertaken by a single person. Working in a group softens personal bias and keeps assumptions to a minimum. As a group, we discussed the criteria and drew on our combined experience, which led us to address many considerations that would never have crossed my mind. Ideally, this is how the development of an evaluation rubric should be approached: as a team working toward a common goal.

References

Bates, T. (2014). Teaching in a digital age (Chapter 8). Retrieved from http://opentextbc.ca/teachinginadigitalage/

Ellis, R. A., & Calvo, R. A. (2007). Minimum indicators to assure quality of LMS-supported blended learning. Journal of Educational Technology & Society, 10(2), 60-70. Retrieved from http://www.jstor.org/stable/jeductechsoci.10.2.60?seq=1#page_scan_tab_contents
