Assessment
This week’s assignment is on assessment, and I’ve been building a quiz on writing for strings for the orchestration class. As many of my classmates have noted, the biggest issue is the time it takes to write each question individually. It makes me understand why instructors are so keen to import the testbanks my company creates straight into their LMS! It also underlines for me the importance of creating really good testbanks that instructors know they can rely on. That may seem obvious, but as more and more instructors rely on the quizzing functions in their LMS, for either formative or summative assessment, the quality of the testbanks produced by third parties like publishers will matter more and more. This will be particularly true for large introductory courses taught by multiple instructors, who need to be able to ensure some kind of parity among sections. Creating standards for what a good testbank should do, and reviewing banks against those standards, is clearly going to become more critical.
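For anyone curious what ‘importing a testbank’ actually involves: most LMSs accept question files in a text-based interchange format. Publishers typically ship banks in the IMS QTI XML standard, but Moodle’s simpler GIFT format makes a more readable illustration. Here is a sketch of a single MC item; the musical content is my own invention, not from our actual bank:

// One multiple-choice item in GIFT format.
// ::Title:: is the (apparently mandatory!) question title;
// = marks the correct answer, ~ marks the distractors.
::Open strings::Which of these pitches is an open string on the violin? {
=D4
~C4
~F4
~B4
}

A whole bank is just a file of items like this, one after another, which the LMS parses into its own question database on import.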
Some of you have raised questions about the ability of MC questions to test beyond recall. In earlier discussions, I shared some references to the work of David DiBattista, a psych prof at Brock University, who has done some very interesting work on how to construct MC questions that test higher-order thinking skills (DiBattista, 2008). His bigger argument, though, is that you must also change your teaching to prepare students for this different kind of testing, which further reinforces the intimate connection between teaching and assessment that this week’s readings called for (Gibbs & Simpson, 2005). Part of what I take from his work is that the fact that MC testing is often done badly doesn’t mean we should toss out that kind of testing. Rather, we should spend the time to figure out how to make the assessment instrument do what we need it to do. This is particularly important given the way MC testing can be used, in an era of scarce resources, to give lots of immediate feedback to large groups of students.
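To make that concrete, here is a made-up item, again in GIFT notation. It is in the spirit of DiBattista’s recommendations rather than taken from his paper: instead of asking students to recall a fact, it gives them a short scenario to analyze.

// A scenario-based MC item aimed at analysis rather than recall.
// The scenario and all four options are hypothetical examples of my own.
::Voicing problem::A passage for string orchestra sounds muddy and
indistinct. The violas and cellos double the melody in octaves, with
the harmony clustered below middle C. Which revision is most likely
to clarify the texture? {
=Spread the harmony into open voicing above the bass
~Add a double bass doubling at the octave below
~Mark the whole passage one dynamic level louder
~Double the melody in the second violins as well
}

Answering this requires students to reason about register and spacing rather than remember a definition, which is exactly the move ‘up’ the taxonomy that the readings describe.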
The specific quiz that I’m creating is designed to be done in the lab portion of the class, which means it can be delivered online but proctored, decreasing the chances that students will cheat. However, if the test is designed as formative rather than summative, I would argue that students should absolutely be allowed to take it open book, particularly if the questions are well designed and ask for skills that are further ‘up’ the scale on Bloom’s revised taxonomy (analyze and evaluate, as opposed to remember and understand) (Cruz, 2003). In that case, having access to the information won’t do them much good, because the test will actually measure how well they’ve learned to apply or work with the content, as opposed to just being able to memorize and regurgitate. In other words, it will test (and therefore ultimately, one hopes, facilitate) deep learning as opposed to surface learning. Taken to its logical conclusion, that position implies there should be no tests that aren’t open book, because you are, even in a multiple-choice test, assessing students’ ability to THINK rather than to recall. I suspect this is what many of you are getting at when you talk about replacing quizzes and tests with other types of assessment.
Because I’m working with content that I really don’t understand, and the instructor for the course is away at the moment, I’ve found it difficult to create questions for the MC and matching sections that meet the criteria for testing higher-order thinking. But the essay question (which I interpreted rather liberally, so I could make it fit the class!) was much easier. I know this is a typical problem: it is always easier to ask for more analysis in an essay than in an MC question. So next on the agenda will be improving the quality of the questions!
In terms of the back end of the operation, I’ve found it relatively easy to set up individual questions, but I’m struggling with some of the formatting and programming issues. For example, it took me a while to find out how to turn off question ‘titles’ (leaving me with the nagging question: why do questions need titles at all?). At first I tried having no title, since the questions are automatically numbered, but the system wouldn’t let me. Then my questions looked like:
1. Q1
here is my question.
This was not exactly elegant! The solution, of course, turned out to be very simple: a little radio button indicating whether you want titles published or not. The bigger issue, and one I’m still struggling with, is that I went to the student side to see what the quiz looked like. Now, because I’ve taken the quiz, I can’t add any new questions, and I can’t preview it from the student side again. So I need to find out how to reset my attempt; it seems unlikely I’ll manage to get that done before the deadline!
One final thought about the use of feedback on the short answer and essay questions: John’s point, that automating what can be automated gives you more time to spend on the questions that need qualitative marking, is exactly right. There are only certain things we can automate! But the feedback box is also important for consistency across sections; it gives the course designer or lead instructor an opportunity to share with other instructors and TAs what they’re looking for in a ‘great’ answer.
Laura
References
Cruz, E. (2003). Bloom’s revised taxonomy. In B. Hoffman (Ed.), Encyclopedia of Educational Technology. Retrieved June 28, 2009, from http://coe.sdsu.edu/eet/articles/bloomrev/start.htm
DiBattista, D. (2008). Making the most of multiple-choice questions: Getting beyond remembering. Collected Essays on Learning and Teaching, 1, 119-122. Retrieved June 27, 2009, from http://apps.medialab.uwindsor.ca/ctl/CELT/fscommand/CELT21.pdf
Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1, 3-31. Retrieved June 25, 2009, from http://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf