Assessment Tools
Post a reflection about your experience building your exam or quiz on the Assessment Tools page of your E-portfolio.
My LMS course focuses on helping teachers improve their numeracy programs through SMART Board technology (a pro-d style course, if you will). Since neither I nor the school where I currently teach owns a SMART Board, I developed a quiz that models how teachers might present quizzes to students using the Moodle platform. The questions draw on several Grade 6-7 numeracy outcomes from the Integrated Resource Package for the B.C. provincial curriculum.
Please see my Moodle course quiz at http://moodle.met.ubc.ca/login/index.php
I really enjoyed building my quiz on the Moodle platform. The quiz user interface was intuitive and easy to navigate, and whenever I activated a function I hadn't intended to, I could easily retrace my steps and correct myself. I enjoyed testing the features under the 'update this quiz' preference tab, and I found myself thinking about improving student learning as I toggled the time limit, the number of questions per page, and the number of attempts students would have to master the quiz content. Specifically, I thought about my students this year and how their various learning styles would be affected as I adjusted how many minutes to allow for the quiz. I set up the quiz so students could have up to four attempts to master the content, with each subsequent attempt building on their previous performance. My thinking, following Gagné (1977), was that helping students monitor their progress would help them understand the content in a more meaningful way. If my students can see their previously correct answers as they retake the test, their confidence and motivation to pursue a better score will be amplified.
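To make concrete what I mean by attempts that build on one another, here is a minimal sketch in plain Python of the idea behind that setting. This is my own illustration, not Moodle's actual code, and the question names and answer key are purely hypothetical.

# Illustrative sketch only -- not Moodle's implementation. It shows the idea
# behind attempts that build on the last one: responses marked correct in an
# earlier attempt are carried forward so the student can focus on what they missed.

ANSWER_KEY = {                      # hypothetical answer key for the quiz
    "q1_place_value": "70000",
    "q2_fractions": "3/4",
    "q3_ratio": "2:5",
}

def start_next_attempt(previous_responses):
    """Carry forward the responses that were already correct."""
    carried_forward = {}
    for question, response in previous_responses.items():
        if ANSWER_KEY.get(question) == response:
            carried_forward[question] = response    # student keeps this answer
    return carried_forward

# A first attempt with one mistake...
attempt_1 = {"q1_place_value": "70000", "q2_fractions": "2/3", "q3_ratio": "2:5"}
# ...means the second attempt opens with two questions already answered correctly.
print(start_next_attempt(attempt_1))   # {'q1_place_value': '70000', 'q3_ratio': '2:5'}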
I definitely felt that applying penalties for incorrect answers would be detrimental to student self-confidence. As Wootton (2002) reported, the negative impact of assessment can be very damaging to students' sense of self-worth, and frankly, I do not see how penalizing elementary-aged students for incorrect answers could benefit their learning. I paid special attention to all the places within Moodle where I could provide feedback to students. I found myself thinking about how I develop rubrics for my classroom activities, which helped me choose my terminology when writing the feedback. For example, I very much liked how multiple-choice questions permitted individual feedback for each answer choice, with a general feedback box available as well. As I prepared these questions, my focus was on how to cue and support students as they interacted with the quiz; the general feedback area behaved like the extra instructions I would place on a traditional paper-and-pen test.
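The sketch below shows how I think about one of these multiple-choice questions: feedback for each answer choice, a general feedback note, and no penalty for wrong answers. The field names are my own shorthand for this reflection, not Moodle's internal question format.

# My own shorthand for one multiple-choice question -- not Moodle's internal
# format. Each choice gets its own feedback, a general note acts like the
# extra instructions I would write on a paper test, and penalty is set to 0.

question = {
    "stem": "Which fraction is equivalent to 0.75?",
    "choices": {
        "1/2": "Not quite -- compare 0.75 to 0.5 on a number line.",
        "3/4": "Correct! 3 divided by 4 equals 0.75.",
        "7/5": "Careful -- 7/5 is greater than 1, but 0.75 is less than 1.",
    },
    "answer": "3/4",
    "general_feedback": "Remember: convert the fraction to a decimal by dividing.",
    "penalty": 0,   # no marks deducted for incorrect tries
}

def feedback_for(response):
    """Return the cue a student would see for a given response."""
    specific = question["choices"].get(response, "Please choose one of the options.")
    return specific + "\n" + question["general_feedback"]

print(feedback_for("1/2"))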
I didn't like the fact that the system prevented me from revisiting the question bank after a student had made their first attempt at answering the questions. As I was preparing the quiz, I was testing each question as I completed it; I really needed to know that each question worked as I proceeded with my creative endeavor, as opposed to testing the final product after having built all my questions. To me, it's like formative versus summative assessment: my incremental assessment of my own questions helped me with the overall building of the quiz.
After taking the entire test myself, I realized that I needed to manually grade the two essay-type questions. I would have liked the system to alert me that this would be required, as I was initially amused by my mediocre score. I quickly realized that I would have to grade these essays, and the system dynamically updated the final score once it took into account the additional numerical grades I gave myself. I rather enjoyed grading part of the quiz while the LMS auto-assessed the remaining answers. I noticed that if this quiz had been constructed with pen and paper, it would have taken me longer to evaluate and provide feedback to students than having the quiz partly auto-assessed by the LMS. Ultimately, feedback that is not received in a timely manner will have little impact on student learning, because students will have moved on to new content by the time the feedback eventually reaches them (Gibbs & Simpson, 2005).
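To illustrate how the final score updates once the essay questions are graded by hand, here is a small sketch that combines auto-marked answers with manually entered essay marks. It is my own illustration of the workflow, not Moodle's grading code, and the question names, marks, and one-mark-per-question assumption are invented for the example.

# Illustration only -- not Moodle's grading code. Auto-marked questions are
# scored immediately; essay questions sit at None until graded by hand, and
# the total is recomputed once those marks are entered.

auto_marks = {"q1": 1.0, "q2": 0.0, "q3": 1.0}      # marked automatically by the LMS
essay_marks = {"essay1": None, "essay2": None}      # awaiting manual grading

def quiz_total(auto, essays):
    graded = list(auto.values()) + [m for m in essays.values() if m is not None]
    possible = len(auto) + len(essays)              # assume 1 mark per question
    return sum(graded), possible

print(quiz_total(auto_marks, essay_marks))          # (2.0, 5) before essay grading

# After the essays are graded, the final score updates to reflect the new marks.
essay_marks.update({"essay1": 0.5, "essay2": 1.0})
print(quiz_total(auto_marks, essay_marks))          # (3.5, 5) after essay grading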
References
Gagné, R. M. (1977). The Conditions of Learning (3rd ed.). New York: Holt, Rinehart & Winston.
Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education. Retrieved 15 June 2009 from http://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf
Wootton, S. (2002). Encouraging learning or measuring failure? Teaching in Higher Education, 7(3), 353-357.