Assessment
Sep 14th, 2010 by naomi
My journey has taken me into the labyrinth of developing an assessment in Moodle while keeping to a set of conditions. Gibbs and Simpson (2004) specify the conditions under which assessments support students' learning. These include:
- students have enough time to study in order to perform well on assessments
- assessment activities are timed and spaced evenly throughout the course, so students don't face peaks and valleys of study time
- the assessment pushes students to engage in productive learning (i.e., accomplishing tasks related to the real world)
- students get frequent, useful, and detailed feedback
- this feedback is timely, given while the assessment task is still fresh in students' minds so they can apply it to their learning
- the feedback focuses on students' skills and learning, not on their performance in relation to others or on their personality
- the feedback is related to the assessment task
- students understand the feedback as it relates to their understanding of the assessment tasks
- and, last but not least, students actually use the feedback in their learning
I tried to keep these criteria in mind as I entered the labyrinth. At first, the way seemed clear and straightforward. All I had to do was create questions that tested grammar acquisition in all four strands (reading, writing, listening, speaking) while applying real-world knowledge. Normally, in face-to-face classes, I run formative assessments through presentations, role-plays, and writing, listening, and reading tasks. As I started to work with Moodle's quiz features, my path was suddenly blocked. First of all, if I used multiple-choice questions to test grammar, they wouldn't really test students' knowledge of modals, as the correct answer would be right in front of them. So I decided to use a cloze activity instead. Once again, that path was blocked (the embedded questions displayed as short answers, not fill-in-the-blanks), and I had to hunt for the special embedded-answers markup that would let me create a true cloze activity (sketched at the end of this paragraph). My hunt was successful, and I progressed over that hill into a new part of the maze. To ensure that the assessment encouraged productive learning, I created paragraph questions that would force students to use some of the language they had acquired and apply it to real-world situations. This path was easy enough to follow; however, I still had issues when it came to the multiple choice and matching questions.
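For anyone else hunting through this corridor of the maze: a gap-fill built with Moodle's embedded-answers (Cloze) question type looks something like the following. The sentences are my own invented examples, typed straight into the question text box:

```
You look exhausted. You {1:SHORTANSWER:=should~=ought to} see a doctor.
Visitors {1:SHORTANSWER:=must} sign in at the front desk.
```

Each {1:SHORTANSWER:...} renders as a typed blank worth one point, and alternative correct answers are separated by ~=. Since students have to produce the modal themselves, the correct answer is no longer sitting right in front of them.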
While designing the multiple choice and matching questions, I came to the conclusion that they are rarely useful for assessing grammar acquisition. I felt like I had hit a wall and had to turn around and choose a new path, so I did. I decided that for my assessment to test real-world knowledge and critical thinking skills, the multiple choice and matching questions would need to be based on mastery of listening and reading skills. Questions could be set up so that students have to apply skills in identifying main ideas, specific details, and new vocabulary. These formats are frequently used in EAL to test exactly those skills, so it seemed like a natural fit. I was disappointed, though, that Moodle's matching question is just click-and-select: rather than forcing students to read through multiple lists, it involves no actual manipulation of language concepts, unlike most Hot Potatoes activities. This tests different reading skills than the ones I had targeted. Also, matching questions are primarily used to test vocabulary in EAL, so I found myself thinking outside the box to incorporate them. I am still constantly rethinking how to create matching questions that promote critical thinking.
With multiple choice and matching questions, it is possible for students to receive instant feedback and general information on the correct answers. This makes the feedback timely, task-related, and useful for students. Unfortunately, this type of feedback doesn't meet all of the conditions set by Gibbs and Simpson. I found it very frustrating trying to think of ways to provide students with personal, detailed feedback that would focus on their individual skills and learning, not on the group's. While Moodle does have in-system messaging that I can use to email students, it would take a lot of time to go through their individual exams. I still cannot find my way around this problem, and I have abandoned the search for now. Moodle does allow manual grading, so I can assign individual grades to students when I want to grade essays. However, I did discover a small barrier: Moodle won't let you build rubrics (a form of grading I use frequently) without an add-on, so I have to grade using points and provide individual feedback later. Fortunately, this is a small boulder, and I was able to hop over it and move on.
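One partial consolation I found along the way: the canned feedback can at least be made answer-specific. In the standard multiple choice editor this is typed into form fields, but the same idea can be expressed in the embedded-answers syntax by adding a # comment after each option (again, an invented example of mine):

```
We {1:MULTICHOICE:=mustn't#Correct: 'mustn't' expresses prohibition~don't have to#'Don't have to' means something is unnecessary, not forbidden} park in front of the fire exit.
```

Students see the comment attached to whichever option they chose as soon as they submit, which ticks the timely and task-related boxes, but it is still the same message for every student who picks that option, not feedback on an individual's learning.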
I also struggled with the choice between summative and formative assessment. A formative assessment would provide tips to further students' learning, while a summative assessment tests knowledge; so how detailed should the feedback be, and when should I give it? The assignment was to create an assessment, and given the types of questions I had to include, I decided to go with a summative one. The time students would need to answer the minimum number of questions would exceed that of a standard quiz. As for when to give feedback, I discovered that Moodle has a setting controlling when feedback is released, so I chose to release it after the exam so that students can review their responses. Once this was set up, I had completed my assessment and found my way out of the labyrinth and back onto my journey.
Looking back, I realize that while Moodle does provide instant feedback on quiz questions, the types of questions I can create test only limited features of language. I would have to install a large number of add-ons to be able to use Moodle to assess all of the strands, including speaking. Right now, reading, writing, vocabulary, and grammatical accuracy are the main skills that can be assessed. Listening can be tested, but it isn't possible to limit the number of times a recording is played within a question; Moodle allows audio to be embedded in units, but not in quiz questions. These limitations leave Moodle's effectiveness for language assessment in doubt. I decided it would be good for testing new concepts as they are being learned, but not for summative assessments. As well, Moodle's quiz feature doesn't allow students to reflect on and use the feedback in their learning after the quiz, a condition Gibbs and Simpson say is critical for a successful assessment.
As I continue on my journey, I find myself thinking about how to create an activity that will let students reflect on their assessment and their learning. I also find myself questioning the nature of the assessments and rubrics I create in my daily practice, not just the one I designed in Moodle. I wonder whether, by providing specific criteria for assignments, I am creating assessments that promote productive learning or merely summative learning, and what skills I am actually testing. It seems that language and traditional assessment questions call on two very different skill sets, and it is difficult to bring them to a point where they meet, especially in Moodle.