Assessment Tools
Exploring Assessment Tools in Moodle
Creating a single Moodle test was a fairly straightforward affair. I liken it to learning any piece of software: the learning curve was understandably steep at the start as I learned about the different features and where to find them. By the second and third question of each type, the process became smoother because I could scan the screen for the information I needed. It took a while, though, to learn the roles of the different types of feedback.
That being said, I can certainly understand why online teachers who are given a course shell to start with (as is the case in my school district) tend to add questions bit by bit throughout the year. Without release time to input questions, creating a robust exam bank would be a tedious process. Adding images to questions was also tedious, which is unfortunate because I believe images enrich the assessment experience.
I would have liked to play around with embedding links in questions (but I didn’t, since the idea only occurred to me just now!). Sending students to a website to gather data for an open-ended question would add an element of authenticity to the assessment process.
Rationale
“Understanding Percent: Practice Test” is a practice test that students must complete, scoring at least 50%, before they can access the “real” test. The purpose of this hoop-jumping is to discourage the test-writing strategies common among grade 8 students, such as “winging it”. As Gibbs and Simpson (2005) advise, “[t]he trick when designing assessment regimes is to generate engagement with learning tasks without generating piles of marking” (p. 8). My intent is to enable students to review without my direct involvement.
The practice aspect of this test is consistent with “focusing attention on important aspects of the subject [and] … giving students opportunities to practice skills and consolidate learning” (Gibbs & Simpson, 2005, p. 11). The feedback telling students in which section of the text each type of question is covered helps guide “the choice of further instructional or learning activities to increase mastery” (Gibbs & Simpson, 2005, p. 12).
Gibbs and Simpson (2005) advise that “[f]eedback has to be quite specific to be useful” (p. 17). The questions in this quiz provide feedback such as referring students to specific sections of the textbook, suggesting they try an extension of a particular question, and reminding them to convert percentages into decimals before proceeding with calculations.
Before I take the next step and use this quiz with students, I would analyze it using a table of specifications to separate the questions by learning outcome and by level of thinking (knowledge, understanding, synthesis). As it stands, the quiz would lead “cue seekers” (Gibbs & Simpson, 2005, p. 5) astray, since it doesn’t fully represent the quiz they would receive in class. For this reason, the open-ended (essay-style) questions are worth more marks than the other question types; the intention is to send the message that being able to communicate solutions and to complete multi-step questions is valued.
Gibbs and Simpson (2005) would see this quiz as a limited measure of student understanding, since “examinations are very poor predictors of any subsequent performance, [and] … coursework marks are a better predictor of long term learning of course content than are exams” (p. 7).
References
Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1. Retrieved June 20, 2009, from http://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf