Assignment 2 (part 2) reflection

For this post, I’d like to reflect on two considerations in the design of my module: the “interleaved practice” technique (Devers et al., 2018, p. 16) and the Powtoon scenarios.

Interleaved practice

The technique of “interleaved practice” particularly stood out to me from our week 9 readings (Devers et al., 2018, p. 16). To some extent I’ve already been using it in practice to keep learners’ attention, and I really appreciated learning more about its theoretical side. I took this opportunity to be intentional about leveraging interleaved practice as I designed my module, in terms of content, information presentation, and learner responses. Below are some of my notes on this from my project documentation.

    • Content: The first module ties together two related concepts: year-end reviews versus continuous conversations, and describing year-end reviews to direct reports. These two concepts support one another because the clearer the learner is on the “more abstract” considerations of year-end reviews (p. 17), the better they would be able to meaningfully describe the ‘why’ and ‘what’ of year-end reviews.
    • Information presentation: The module starts by introducing the animated character Jane, which adds a human element to the “[l]earner-content” interaction (Anderson, 2008, p. 58), with the promise that Jane will be there to share scenarios throughout the module to deepen the learner’s understanding. Further down the page, the readings share different perspectives on year-end reviews, accompanied by reflection prompts that encourage learners to consider the information presented. Later in the module, Jane returns to share a scenario, and then the workshop prep shifts back towards prompting abstract thinking.
    • Learner responses: The module is designed for learners to share their responses using different forms of media, including a Mentimeter word cloud, text response to a discussion post, and a video response to the scenario.

Powtoon

See Powtoon videos: Intro, module 1 scenario

I had not used Powtoon before and took this opportunity to finally create an account. I wanted to present practice scenarios in a way that feels more real, and I thought Powtoon could be a quick way to create short, animated videos of the scenarios.

I decided to create the animated character Jane, who comes to the learner with situations she is ‘encountering’. I introduced Jane at the start of the module and let learners know to expect her throughout the modules, so it doesn’t feel abrupt when she pops up with a scenario.

In addition, I believe Jane also acts as a course companion for the self-led components, which could support “[l]earner-content” interaction (Anderson, 2008, p. 58). I also think these scenario videos could help motivate learners through “[s]ensory curiosity” (Malone & Lepper, 1987, as cited in Ciampa, 2013, p. 84).


References

Anderson, T. (2008a). Towards a theory of online learning. In T. Anderson & F. Elloumi (Eds.), Theory and practice of online learning (pp. 45-74). Athabasca University. https://www.aupress.ca/books/120146-the-theory-and-practice-of-online-learning/

Ciampa, K. (2013). Learning in a mobile age: An investigation of student motivation. Journal of Computer Assisted Learning, 30(1), 82–96. http://onlinelibrary.wiley.com/doi/10.1111/jcal.12036/epdf

Devers, J. C., Devers, E. E., & Oke, L. D. (2018). Encouraging metacognition in digital learning environments. In D. Ifenthaler (Ed.), Digital workplace learning: Bridging formal and informal learning with digital technologies (pp. 9-22). Springer International Publishing AG.

Assignment 2 (part 1) reflection

Justification for the design of the course

Interaction

In my blended learning course, the online modules are intended to support learning in the lower levels of Bloom’s Taxonomy, specifically “[k]nowledge” and “[c]omprehension” (Bloom et al., 1956, p. 18), and the in-person workshops are intended to support learning in the higher levels of Bloom’s Taxonomy, specifically “[a]pplication”, “[a]nalysis”, and “[s]ynthesis” (Bloom et al., 1956, p. 18).

Throughout the course, the facilitator will provide feedback to “suppor[t] the development and growth of critical thinking skills” (Anderson, 2008b, p. 344), encourage “[l]earner-learner” interaction (Anderson, 2008a, p. 58), and share “personal insights” (Anderson, 2008b, p. 347). Together, these are intended to support “deep and meaningful learning results” by achieving the three presences identified by Garrison, Anderson, and Archer (Anderson, 2008b, p. 344).

In addition, the facilitator will design the classroom as a “flexible learning spac[e]” to support small/breakout group activities as well as large group discussions (Benade, 2017, p. 798), and will aim to evoke “cognitive curiosity” through their facilitation (Malone & Lepper, 1987, as cited in Ciampa, 2013, p. 84). They will also use formative assessment tools such as Kahoot to motivate through “competition” and to enable learners to “receive prompt guidance and feedback” (Csikszentmihalyi, 1990, as cited in Ciampa, 2013, p. 85; Huang & Chang, 2011, as cited in Bai, 2019, p. 61).

I also considered the audience and the context in which the learning would be applied.

    • Audience: The audience is people leaders within the (fictitious) organization, who likely have to balance many priorities at once. For this reason I decided on a blended learning format to leverage the flexibility that online modules provide. I also wanted to keep the design simple and consistent to “[m]inimize [c]ognitive [l]oad” in using the platform (Whitenton, 2013), so learners can focus on the learning.
    • Context: The context in which the learning will be applied is actual year-end reviews, which may be in person. For this reason I decided to include in-person workshops where learners can practice in small-group role-plays, a context relatively similar to the real-life one.

Assessment

Learners will be assessed on the skills they practice in the workshops. The intention of this course is to support them in conducting effective year-end reviews, not necessarily to ‘grade’ their skills, so they will have the opportunity to receive feedback and be reassessed throughout the course. A rubric is included as a course page.

To complete the course, learners will have to complete all modules and demonstrate skills as outlined in the rubric.


My experience designing the prototype

Using Canvas

For this assignment I chose Canvas as the LMS because I hadn’t yet had the opportunity to design a course in Canvas. I found it easy to use, and its rich text editor is similar to that of WordPress, which I use for this blog. I didn’t run into issues setting up my course to look the way I had planned, with the small exception of figuring out how to display “Recent Announcements” on the Syllabus page, which is the one thing I had to Google. (It’s under Settings > more options.)


References

Anderson, T. (2008a). Towards a theory of online learning. In T. Anderson & F. Elloumi (Eds.), Theory and practice of online learning (pp. 45-74). Athabasca University. http://www.aupress.ca/books/120146/ebook/02_Anderson_2008-Theory_and_Practice_of_Online_Learning.pdf

Anderson, T. (2008b). Teaching in an online learning context. In T. Anderson & F. Elloumi (Eds.), Theory and practice of online learning (pp. 343-365). Athabasca University. http://www.aupress.ca/books/120146/ebook/02_Anderson_2008-Theory_and_Practice_of_Online_Learning.pdf

Bai, H. (2019). Pedagogical practices of mobile learning in K-12 and higher education settings. TechTrends, 63, 611–620. https://doi.org/10.1007/s11528-019-00419-w

Benade, L. (2017). Is the classroom obsolete in the twenty-first century? Educational Philosophy and Theory, 49(8), 796-807. https://doi.org/10.1080/00131857.2016.1269631

Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Longmans, Green and Co Ltd.

Ciampa, K. (2013). Learning in a mobile age: An investigation of student motivation. Journal of Computer Assisted Learning, 30(1), 82–96. http://onlinelibrary.wiley.com/doi/10.1111/jcal.12036/epdf

Whitenton, K. (2013, December 22). Minimize cognitive load to maximize usability. Nielsen Norman Group. https://www.nngroup.com/articles/minimize-cognitive-load/

Assignment 1 reflection

Overview

    • Assignment: Learning Environment Evaluation
    • Group: Corporate, Private Enterprise Setting
    • Group members: Sarah Ng, Quinn Pike, Nicole Kenny, Loveleen Reen, Jocelyn Chan
    • PDF copy of submission: Assignment 1: Learning Environment Evaluation

Reflection

This assignment was very much a learning experience for me. I have some experience with technology implementations at different workplaces, and the way this assignment was structured is quite different from the approach I’m used to in evaluating technology.

For this post I will first summarize these differences, and then reflect on the benefits and drawbacks of the approach outlined in the assignment guidelines.

Differences

For context, focusing only on the technology aspects: in my past evaluations in real-life environments, we would start by gathering the requirements of each user group, typically in some form of a “user story” (Rehkopf, n.d.). We would then work with stakeholders to prioritize the requirements and tag each one as ‘must-have’ versus ‘nice-to-have’ to ensure alignment on expectations. Once the requirements were confirmed and we had identified potential platforms, we would send an RFI (request for information) to these vendors so they could confirm whether, and how exactly, their platforms meet each of our needs. From there, we would set up demos with the shortlisted vendors and try out the platforms in sandbox environments to identify the most fitting platform.

The primary differences for me were skipping past requirements-gathering and “[e]xplain[ing] the functionality … of the recommended LMS relevant to [the] context of [the] organization” (“Assignment 1: Guidelines,” 2022). I can understand skipping requirements-gathering for this assignment since that work is incredibly context-based, but validating the functionality of a specific LMS against the requirements is typically done by the vendor, since they would be better informed about how their platform can be used to support our needs.

Benefits

I really appreciated the opportunity to work through the rationale considering “[my] own experiences and observations” and “peer-reviewed publications” (“Assignment 1: Guidelines,” 2022). Our group also decided to frame our assignment around Bates’ (2014) SECTIONS model, which helped ground the rubric in theory and sparked ideas on considerations. My favourite contribution to the assignment was this line, which took the context of the organization, translated it into a criterion for the LMS, and then connected it back to a benefit for the organization supported by theory: “As DTI is growing rapidly, resourcing is limited as roles are continuously expanding. DTI is committed to ensuring sufficient resources for a successful implementation, but would ideally be looking for an LMS vendor that supports setup, change management, and troubleshooting to minimize ‘the need to reorgani[z]e and restructure … support services’ internally (Bates, 2014)” (Ng et al., 2022).

Drawbacks

I found the scoring of the rubric to be arbitrary, since the importance of each criterion is not considered as part of the assignment. For instance, our organization might believe that good interface design is less important than having multiple security features, but with this rubric the two would hold equal weight. Similarly, functionalities may be interrelated, meaning the presence or absence of one may increase or decrease the need for another. I think it is great to work through what might go into the scale, but I had trouble understanding the purpose of scoring.
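To illustrate the weighting point, here is a small sketch of how adding weights could change which LMS comes out ahead. The criteria, scores, and weights below are purely hypothetical and not taken from our actual rubric:

    # Hypothetical rubric scores (1-5) for two LMS options on two criteria.
    # All numbers and weights here are invented purely for illustration.
    scores = {
        "LMS A": {"interface design": 5, "security features": 2},
        "LMS B": {"interface design": 2, "security features": 4},
    }

    # Unweighted total, as in our rubric: every criterion counts equally.
    unweighted = {lms: sum(s.values()) for lms, s in scores.items()}

    # Weighted total: suppose the organization values security features
    # twice as much as interface design.
    weights = {"interface design": 1, "security features": 2}
    weighted = {
        lms: sum(s[c] * weights[c] for c in weights) for lms, s in scores.items()
    }

    print(unweighted)  # {'LMS A': 7, 'LMS B': 6} -> LMS A ranks higher
    print(weighted)    # {'LMS A': 9, 'LMS B': 10} -> LMS B ranks higher

Even this tiny example flips the ranking, which is why the unweighted totals felt arbitrary to me.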


References

“Assignment 1: Guidelines.” (2022). In ETEC 524 64C 2022W1 Learning Technologies: Selection, Design, and Application. The University of British Columbia.

Bates, T. (2014). Choosing and using media in education: The SECTIONS model. In Teaching in a digital age. https://opentextbc.ca/teachinginadigitalage/part/9-pedagogical-differences-between-media/

Ng, S., Pike, Q., Kenny, N., Reen, L., & Chan, J. (2022). Learning Environment Evaluation: Corporate, Private Enterprise Setting [Unpublished paper]. Faculty of Education, The University of British Columbia.

Rehkopf, M. (n.d.). User stories with examples and a template. Atlassian. https://www.atlassian.com/agile/project-management/user-stories
