LMS Selection: Evaluation Rubric – Self Reflection

Working to create this rubric deepened my understanding of the complexity of LMS selection for a postsecondary institution. We were tasked with providing an LMS scoring rubric for Athabasca University with the specific intention of expanding the distance-education program at the undergraduate level in the English-speaking South Asian market. An additional consideration was the availability of internet connectivity in remote or underserved areas of the region. A closer look at the Athabasca University mission statement and mandate revealed that the institution is “dedicated to the removal of barriers that restrict access to and success in university-level study and to increasing equality of educational opportunity for adult learners worldwide” (Athabasca University, 2017). Our group adopted this as our foundational understanding in developing the rubric, with accessibility and inclusivity as top priorities. For me, the complexity arose when we tried to articulate which specific rubric design elements were to be included in our final product.

As an educator, I am familiar with the concept of rubrics and use them frequently in assessing my students’ work. Most recently, I have used them primarily as formative assessments to allow for richer, more detailed conversations about a student’s work, particularly their writing, and to support growth and progress in their learning. This relates to what Dawson (2017) refers to as the rubric design element of “secrecy”, that is, “who the rubric is shared with and when”. In other words, my current practice allows for open-design rubrics, where the criteria being assessed can be discussed with those to whom they will be applied. This LMS scoring rubric, on the other hand, was to be an objective, external assessment of what the university might need. I found it a struggle not to have access to some of the more qualitative data I would normally take into consideration before creating a tool that could inform such a far-reaching administrative decision. It is interesting to note that rubric use has sometimes been criticized for its “opacity” (Dawson, 2017, p. 347) and its “vague and ambiguous” (Grainger et al., 2017, p. 411) nature.

Trying to address what we had established as quite diverse stakeholder needs among administration (including policymakers and technical support), faculty, and students meant that the rubric category descriptors and criteria often took on broad and, at times, clumsy wording. My personal bias leaned towards addressing what I predicted to be faculty concerns, and, as a student in an online educational program myself, I felt I understood what the needs of the end users might be. However, I came to appreciate that, as Bates (2015) notes, teaching, or “just the pedagogical context”, is a “weak discriminator” and that “access (and ease of use) are stronger discriminators than teaching effectiveness in selecting media”.

References:

Athabasca University. (2017). Mission & Mandate. Retrieved from http://www.athabascau.ca/aboutau/mission/

Bates, A. W. (2015). Teaching in a digital age: Guidelines for designing teaching and learning. Tony Bates Associates Ltd. Retrieved from https://opentextbc.ca/teachinginadigitalage/

Dawson, P. (2017). Assessment rubrics: Towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), 347-360. https://doi.org/10.1080/02602938.2015.1111294

Grainger, P., Christie, M., Thomas, G., Dole, S., Heck, D., Marshman, M., & Carey, M. (2017). Improving the quality of assessment by using a community of practice to explore the optimal construction of assessment rubrics. Reflective Practice, 18(3), 410. https://doi.org/10.1080/14623943.2017.1295931
