Reflections on Assignment 1

Individual Reflections on Assignment 1: Online Delivery Platform Evaluation Rubric

I greatly enjoyed this assignment, as it offered an authentic situation in which we could apply the knowledge from the readings and experience the selection/design process first-hand. The assignment resembled a complex strategy game in which one must carefully navigate needs and aspirations, barriers and context in order to walk a narrow band between an overly generic rubric and an overly specific one; neither extreme yields a useful tool. Below I identify two key features in striking this balance and offer my take-home message.

The importance of identifying the context and an example from our rubric

In order to create a useful evaluation rubric, or indeed to select any technology, the context, needs, and culture of the users must be carefully considered (Brown, 2009). By researching BCcampus and carefully weighing both the SECTIONS framework (Bates, 2014) and the good practices from the ISTE (2008), we were able to assess the needs of BCcampus, its requirements, and, to a certain extent, what the agency hoped to achieve through the implementation of its LMS. BCcampus, which supports over 25 post-secondary institutions, needs to serve as a model for all of its affiliates, and therefore needs an LMS that demonstrates innovation, allows for collaboration, and can easily incorporate leading-edge technologies (Coates, James, & Baldwin, 2005). The rubric also needed to help identify an LMS platform that would allow the diverse affiliates of BCcampus to customize and personalize the LMS according to their needs. As BCcampus is proud of its role as a leader in technological innovation, it was very important for us to ensure that the rubric correctly screened out LMS platforms that would not be able to adapt to upcoming technologies: we were looking to avoid the potential hang-ups described by Porto (2015) and Spiro (2014). These examples show how the context of BCcampus drove many of the decisions in our rubric.

The importance of identifying the barriers and an example from our rubric

Yet in spite of having a grasp on BCcampus's expectations for their future LMS platform, it was also imperative to consider the potential barriers, similar to those mentioned by Zaied (2005), that might limit the proper implementation of the technology. A rubric that does not address these issues might lead BCcampus to select a platform that is ill-suited to them and would go unused. We attempted to address each of these potential barriers through the rubric, giving BCcampus the opportunity to select the best possible platform for its situation. For example, the reduction in IT personnel and the limited three-month timeline meant that cost, vendor-provided IT support, and compatibility were essential. As BCcampus was also open to the idea of implementing a new LMS that it had not tried before, our team opted for a simpler set of weighted criteria that assumed neither prior experience with the LMS nor time to experiment; a minimal sketch of what such weighted scoring boils down to follows.
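For readers unfamiliar with weighted-criteria rubrics, here is a minimal sketch of the underlying arithmetic. The criteria, weights, and scores below are hypothetical placeholders chosen for illustration, not the actual values from our group's rubric.

```python
# Minimal sketch of weighted-criteria scoring for comparing LMS platforms.
# All criteria, weights, and scores are hypothetical placeholders.

# Each criterion carries a weight reflecting its importance to the context
# (e.g., cost and IT support weighted heavily given the staffing cuts).
weights = {"cost": 0.30, "it_support": 0.25, "compatibility": 0.25, "innovation": 0.20}

# Raw scores for each candidate platform on a 1-5 scale (hypothetical).
platforms = {
    "Platform A": {"cost": 4, "it_support": 3, "compatibility": 5, "innovation": 2},
    "Platform B": {"cost": 3, "it_support": 5, "compatibility": 4, "innovation": 4},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted sum of criterion scores; higher is better."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in platforms.items():
    print(f"{name}: {weighted_score(scores, weights):.2f}")
```

The point of the weighting is that the same raw scores can produce different rankings under different contexts: shifting weight toward cost and IT support, as our scenario demanded, changes which platform comes out on top.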

My take-home message

I am astonished by the number of elements that must be considered to create a useful tool and to find the proper balance between too narrow and too generic a rubric. I knew context was important to consider, yet I had no idea that, when designing a tool, every element must be justified with regard to that context for the tool to be deemed relevant. I believe that each of the rubrics generated from this assignment is context-specific and time-sensitive. For example, the rubric from the BCcampus scenario would not necessarily yield the same results if applied to another organization, and it would certainly have been less relevant for BCcampus a year earlier, when the cuts had yet to be announced. Generic rubrics are not as useful as I once believed. Selecting a tool or a technology is not about applying a pre-made rubric or a generic set of criteria; one must tailor the criteria to one's specific and temporal context in order to make judicious choices and create an effective learning environment. Just as a one-size-fits-all course is no longer appropriate (Spiro, 2014), neither is a one-size-fits-all rubric.

Thank you to all members of group 1 for the amazing collaborative work, and thank you for this wonderful learning opportunity. It was a pleasure working with you.

Danielle

 

References

Bates, T. (2014). Teaching in a digital age. Open Textbook.

Brown, T. (2009). From design to design thinking [Video]. TED Talks. Retrieved from https://www.youtube.com/watch?v=UAinLaT42xY

Coates, H., James, R., & Baldwin, G. (2005). A critical examination of the effects of learning management systems on university teaching and learning. Tertiary Education and Management, 11, 19-36.

International Society for Technology in Education (ISTE). (2008). Standards for teachers. Retrieved from http://www.iste.org/standards/standards-for-teachers

Porto, S. (2015). The uncertain future of Learning Management Systems. The Evolllution: Illuminating the Lifelong Learning Movement. Retrieved from http://www.evolllution.com/opinions/uncertain-future-learning-management-systems/

Spiro, K. (2014). 5 elearning trends leading to the end of the learning management system.

Zaied, A. N. (2005). A framework for evaluating and selecting learning technologies. Learning, 1(2), 6.

 
