I enjoyed building this assessment — it was unexpectedly challenging. For one thing, it’s easy — far too easy — to get bogged down in the mere mechanics of Moodle’s awkward, occasionally byzantine quiz software.
The short-answer and essay questions presented my biggest challenges. Not because they are difficult questions to formulate — they aren’t — but because of the nature of the software and of the particular quiz I was designing. I had wanted it to be a pre-test quiz, both to allow the students to affirm what they might already know and to give me an idea of their level of knowledge coming into the course. But the short-answer questions, I discovered, are really just for exact answers and thus suited mainly to testing rote knowledge gained from course material, e.g., a verbatim definition. And the essay questions have to be graded manually, precluding the immediate feedback that I would expect a pre-test to provide to the students.
Could my pre-test be split, so that the students gain immediate feedback on the multiple-choice, match-up and short-answer questions, then wait for my manually graded responses to their essay questions? I placed the essay questions at the end, gave them a grade of zero (and made the other questions total 15 marks), and made it clear to students that feedback would be provided later. However, in previewing the quiz, I found the total was out of 10, not 15. Yet when I checked my array of questions, their values totaled 15. This was exasperating — until I noticed that you have to set the “maximum grade” manually. (I’m not sure why that is the case — when would you want the maximum grade to be different from the total of the questions?)
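As best I can tell, Moodle treats the individual question marks as raw scores and rescales them to whatever “maximum grade” is set, which defaults to 10. Here is a minimal sketch of that scaling, assuming this is what happens behind the scenes (the function name and the numbers are mine, for illustration; this is not Moodle’s actual code):

```python
def scale_grade(raw_score, total_question_marks, maximum_grade):
    """Rescale a raw quiz score to the quiz-level 'maximum grade'.

    This mirrors my understanding of how Moodle reports quiz
    totals; it is an illustration, not Moodle's grading code.
    """
    return raw_score / total_question_marks * maximum_grade

# My questions totalled 15 marks, but the maximum grade was still
# at its default of 10, so a student scoring 12 of 15 raw marks
# would see a reported grade of:
print(scale_grade(12, 15, 10))  # 8.0, displayed as "out of 10"
```

So until the maximum grade is manually set to 15, the quiz reads “out of 10” no matter what the questions total.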
As for the short-answer format, its uses seem very limited: testing knowledge where the answer — i.e., how it is expressed — leaves no room for ambiguity. A practical use of the short-answer format would be, for example, a question like “What does URL stand for?”, with the only correct answer being “uniform resource locator.” However, even there, if the student types in “it stands for uniform resource locator,” the LMS will judge him wrong. Ditto if the student commits even a single typographical error. Such literal exactness means that the instructor often needs to supply a whole array of possible answers to cover the contingencies, e.g., a learner prefacing his response with an article (“the” or “a”).
It would be far better if the short-answer format offered a “contains” option, so that a learner’s answer could be judged correct as long as it contained, for example, the phrase “uniform resource locator.” But it doesn’t, and so in many cases the multiple-choice format is a better alternative to the short-answer format, despite the inherent shortcomings of “multiple guess” questions.
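To make concrete what I was hoping for, here is a rough sketch in Python of the difference between exact matching and the “contains” matching I wanted. The function names are made up; nothing here is Moodle’s actual grading logic:

```python
def exact_match(response, accepted_answers):
    """Correct only if the response equals one of the accepted
    answers exactly (ignoring case and surrounding whitespace)."""
    response = response.strip().lower()
    return any(response == answer.lower() for answer in accepted_answers)

def contains_match(response, key_phrase):
    """Correct if the key phrase appears anywhere in the response:
    the behaviour I wanted for my pre-test."""
    return key_phrase.lower() in response.lower()

# Exact matching rejects a perfectly good answer:
print(exact_match("it stands for uniform resource locator",
                  ["uniform resource locator"]))    # False

# A "contains" rule would accept it:
print(contains_match("it stands for uniform resource locator",
                     "uniform resource locator"))   # True
```

Even a modest rule like this would spare the instructor from enumerating every article and preamble a learner might tack on (though typos would still defeat it).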
And anyway, would a first-year college student who’s just beginning an information literacy course have much chance of knowing what URL stands for? (For that matter, should it even be taught in the info lit course, or is such knowledge too arcane?) Which brings me to my decision to make this a pre-test. I think a pre-test in this subject is a good idea on several levels — for example, many students would consider themselves Web-savvy and wouldn’t see the use of an info lit course, whereas this pre-test could impress on them the fact that there are new and important things to learn. It also helps introduce concepts that will be covered in the course.
One problem, perhaps, is that this quiz became too long and challenging to be “fun” (which is how I billed it in the quiz introduction). This length and difficulty may impress on students the breadth of the field, but may also turn them off. In the absence of a body of course knowledge to test them against, I barraged them with everything in order to reach my quota of question types and my 15 total marks.
The “fun” aspect grew out of the fact that it is, after all, only a pre-test, and that the student’s performance consequently doesn’t matter — s/he is graded only for participation/completion.
How do my questions look in hindsight? Number 5 is too difficult (even though it’s worth double points) — there are limits to how many terms or variables (some of which have very similar descriptions) a learner can be expected to juggle in his head. So a matching question shouldn’t become unwieldy. Number 1 is similarly challenging.
Number 9 is also too difficult. Two of the sentences (“Technology use…” and “To address these issues…”) appear to require citing, so I gave full marks for choosing either one, even though the instructions state “choose one answer”.
Questions such as #8 and #11 showed me that it’s possible to make multiple-choice questions highly analytical.