Assessment

Having completed a quiz in Moodle, I can now say that overall it was challenging but rewarding, with a few frustrations thrown in. There were three sources of frustration, some due to me and some due to the format of quizzes in Moodle.

First of all, as a new teacher-on-call (substitute teacher) I don't really have any developed curricula, so all of my content work in Moodle is starting from scratch. I do have unit and lesson plans from my practicum, but that material comes from the middle of a term and does not offer a natural entry point; I can be quite structured in some respects, and I did not like the idea of starting a Moodle course with something mid-unit. An extension of this is the second reason I had difficulty completing the assessment assignment: the quiz itself was new territory. My Moodle course is on Physics 11 kinematics, and I have never written a quiz for kinematics before, so I couldn't focus only on the Moodle/LMS aspect of the quiz; I also had to start from scratch on the kinematics unit, which made designing the quiz considerably uncomfortable. The good thing about all of this is that it allowed me to concentrate on producing a quiz that is 100% relevant and useful for the students within the learning module, as Gibbs and Simpson (2004) suggest with their Condition 2 for increasing the quality of assessment.

The third reason for my difficulties in making a quiz in Moodle was the platform itself. As a physics and math teacher, much of what my students do has a procedural component. A typical physics problem draws on several skills at once: understanding of concepts, problem solving, application and manipulation of laws and formulas, and computation. I feel that several of the Moodle question types, including true/false, multiple choice, matching, and short answer, are not well suited to these kinds of physics problems. In general, these question types are not seen as productive assessment activities (Gibbs & Simpson, 2004). With very careful construction, some formative assessment can be achieved with multiple choice by planning a question around predictable "gotchas". For example, a question could incorporate an element of unit conversion, and the teacher can supply an incorrect answer that a student would reach by missing the conversion. At best this kind of design works only when a distractor happens to match a student's actual error, and at worst it provides no meaningful feedback at all. I mentioned the computational aspect of a physics problem, and multiple choice offers very little feedback on computational errors: the student gets the answer wrong, and neither the student nor the teacher knows why. This can be especially frustrating for a student who has a satisfactory understanding of the underlying concepts.
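
As a rough illustration, here is how such a "gotcha" question might look in Moodle's GIFT import format; the question, numbers, and feedback are hypothetical sketches, not taken from my actual quiz:

```
// A hypothetical kinematics question in GIFT format. The 720 m distractor
// anticipates a missed km/h to m/s conversion, and the text after each
// # sign is the per-answer feedback Moodle shows the student.
::Unit conversion check::
A car travels at a constant 72 km/h for 10 s. How far does it go? {
=200 m#Correct. 72 km/h is 20 m/s, and 20 m/s over 10 s gives 200 m.
~720 m#It looks like the speed was not converted from km/h to m/s first.
~7.2 m#Check the conversion. 72 km/h is 20 m/s, not 0.72 m/s.
~2000 m#Check the arithmetic. 20 m/s over 10 s gives 200 m.
}
```

Even here, the feedback is only diagnostic if the student happens to land on one of the anticipated distractors.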

One way to help mitigate the issue of procedural errors is to break questions into smaller chunks so that each question targets only one skill or standard, a typical technique in standards-based grading (Deddeh, Main, & Fulkerson, 2010). In many ways this would fulfill some of the requirements for formative assessment. The downside to this method of assessment is that topics become heavily deconstructed and larger problem-solving skills are abandoned. This isn't just a criticism of Moodle quizzes or of multiple choice in general, but also a critique of many standards-based grading practices.
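
To sketch what that deconstruction could look like (again hypothetical, using GIFT's numerical {#answer:tolerance} syntax):

```
// A hypothetical split of the same problem into single-skill numerical
// questions, in the spirit of standards-based grading.
::Convert the speed::
Convert 72 km/h to m/s. {#20:0.1}

::Apply distance as speed times time::
A car travels at a constant 20 m/s for 10 s. How far does it go, in metres? {#200:1}
```

The trade-off described above is visible here: each piece is easy to mark and diagnose, but the larger act of chaining the steps together disappears.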

There are some positives to the Moodle quiz activity. The essay question type is very flexible and gives an automated way to collect written answers from students. Most question types can also be marked automatically, with feedback attached both to individual answers and to the question as a whole. I think these built-in tools are really useful, and they reflect several of the conditions under which assessment supports learning (Gibbs & Simpson, 2004): the feedback focuses on the learning and not the learner; the feedback is timely; and the feedback is appropriate to the criteria for success. I also really appreciate that quizzes can be attempted more than once. Teachers should be primarily concerned with their students learning the material, and I'm a big fan of remediation (Deddeh et al., 2010). If by the end of the year a student can demonstrate that they have learned or mastered the required standards, then I do not want to penalize them for not having learned those standards sooner. Of course this methodology of remediation can be manipulated by students, but I would contend that all assessment can be manipulated.

I also like that graphics can be included in questions, which is quite necessary for math and physics, especially for graphs, data charts, diagrams, and complicated equations. I wish individual graphics could also be attached to individual answers, such as multiple-choice options. Finally, I really like that the display of grades and feedback can be delayed, and even separated from each other. I deliberately chose to show feedback right away but delay the display of grades, in order to highlight the importance of learning from assessment. Black et al.'s (2004) research led them to conclude that formative feedback is not as effective when marks/grades are shown at the same time: students often skip over the feedback as soon as they see a mark.

References

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the black box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1), 8.

Deddeh, H., Main, E., & Fulkerson, S. R. (2010). Eight steps to meaningful grading. Phi Delta Kappan, 91(7), 53-58.

Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, (1), 29.
