Category Archives: D: Assessment

Assessment and Poor Attendance

In my school we have fantastic access to technology. We have a supportive principal and frequent budget surpluses that allow us to order what we need. One of the challenges I see in using technology to support student assessment in my context is that it requires a large investment of time to see relatively small gains. I have yet to teach the same class more than two years in a row, which is a real deterrent to investing significant time developing technology for a course. However, a lot of time is also invested in helping students with poor attendance catch up. Perhaps the time spent developing this technology would save time in that area as well.

Students here are often absent from school. They take time off to travel to Whitehorse, there are biannual REMs, trips to neighbouring communities, and countless workshops at the school. We also have real problems with tardiness. Given this, I have found that creating course shells and then acting as a tutor has been a successful strategy.

A positive part of the small class sizes is that if I am able to organize assignments online, I can spend the majority of the class working with students one on one to offer constructive (formative) comments on their work. If they are stuck on an introduction, I can immediately make suggestions. This aligns nicely with the best practice mentioned by Gibbs & Simpson, where students were “…gaining immediate and detailed oral feedback on their understanding as revealed in the essay.” Technology supports assessment in this way by freeing up time for the teacher to provide these tutorials. Keeping records of this sort of assessment by hand would be tedious, and without seeing a direct benefit I would honestly probably let it drop. Because of the small class sizes I know my students' strengths and weaknesses like my own, but right now in my district there is a real push for detailed records of formative assessment. I want to tap into technology to help make this easier.
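As a sketch of what lightweight record-keeping could look like, here is a minimal Python example of a timestamped log for quick oral-feedback notes; the file name, columns, and sample note are my own invented illustration, not any district tool.

```python
# A minimal sketch (invented format) of logging quick formative
# feedback notes during one-on-one tutorials, so a record exists
# without slowing the conversation down.

import csv
from datetime import datetime
from pathlib import Path

LOG = Path("formative_feedback.csv")  # hypothetical file name

def log_feedback(student: str, task: str, note: str) -> None:
    """Append one timestamped feedback note to the CSV log."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "student", "task", "note"])
        writer.writerow([datetime.now().isoformat(timespec="minutes"),
                         student, task, note])

log_feedback("A.B.", "essay intro", "Suggested narrowing thesis to one claim.")
```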

In some courses I have been given audio feedback on my work. I would like to bring this into my own courses, as it would help me keep a record of what is being done. My goal would also be to have a visible, strong connection between the standards I am trying to cover and the assessments I am assigning. I also wonder whether unscored quizzes that give students immediate feedback on why an answer is wrong might be worth trying in my context.
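To picture how such an unscored quiz might behave, here is a minimal sketch in Python; the question, distractors, and feedback lines are all invented for illustration, and in practice something like Moodle's per-answer quiz feedback could do this without any scripting.

```python
# A minimal sketch of an ungraded self-check quiz item with
# immediate per-answer feedback. Question content is invented.

QUESTION = {
    "prompt": "Which sentence works as a thesis statement?",
    "options": {
        "a": ("This essay is about the Yukon.",
              "Close, but this only names a topic; a thesis takes a position."),
        "b": ("The Klondike Gold Rush reshaped Yukon communities.",
              "Correct: it makes an arguable claim the essay can defend."),
        "c": ("I will write about gold.",
              "This announces intent rather than stating an argument."),
    },
}

def run_item(question: dict) -> None:
    """Ask the question, show feedback immediately; no score is kept."""
    print(question["prompt"])
    for key, (text, _) in question["options"].items():
        print(f"  {key}) {text}")
    choice = input("Your answer: ").strip().lower()
    _, feedback = question["options"].get(
        choice, ("", "Not an option; try a, b, or c."))
    print(feedback)

if __name__ == "__main__":
    run_item(QUESTION)
```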

Gibbs & Simpson say that the “trick when designing assessment regimes is to generate engagement with learning tasks without generating piles of marking.” As I work on my Moodle course I am keeping this in mind. I know the time investment will be challenging at first, but in the end it will pay off. I am also wondering whether there isn't a way for MET students to share their course shells from this program with others, or for teachers in general to form groups and create courses. We could modify them to fit our contexts, but having them started would make techniques like this much less daunting to adopt in the future.

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1(1), 3-31. Retrieved from http://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf

Purposeful Assessment

Assessment seems to be largely linked to purpose. Why are we using assessment? What is the assessment trying to achieve for the student? For the teacher? How do we know we are assessing what we aim to assess?

My first MET course was a surprise to me in this regard. In the entire course I received 11 sentences of feedback from my instructor. After pouring hours into the final paper, the feedback I received was that I should have included headers. It felt like quite a letdown: I had invested all this time and valuable learning in a paper, which was assigned a grade, but the comments almost dismissed my work as being of little importance. Thankfully, since then, most courses have provided ample assessment opportunities, primarily through peer feedback on discussions and group assignments, but also regular and meaningful feedback from the instructors. I have found assignments with part A and part B submissions particularly helpful: when the written feedback on part A is summative but formative toward part B, there is a clear direction to tackle.

The Gibbs & Simpson article this week raised a concept I hadn't thought much about, mainly, I believe, because my experience is in elementary school. They referred to “different kinds of students: the 'cue seekers', who went out of their way to get out of the lecturer what was going to come up in the exam and what their personal preferences were; the 'cue conscious', who heard and paid attention to tips given out by their lecturers about what was important, and the 'cue deaf', for whom any such guidance passed straight over their heads” (Gibbs & Simpson, 2005, p. 4). In elementary school, grades are typically provided only twice a year at report card time and summarize an entire unit of study, e.g. 'Writes to communicate and express ideas and information'. Throughout the term, assessment takes the form of verbal or written teacher feedback, rubric scores, and peer or self-assessments, none of which assign a 'grade'. You never hear the question 'Will this be on the test?', even though tests are one form of assessment teachers use. How do these students maneuver through or prioritize the 'hidden curriculum' (Gibbs & Simpson, 2005)? Is cue-seeking something that develops with age? Does the assignment of number or letter grades in later years change the way students approach tasks? How does digital assessment change the way students identify the elements essential to a higher grade rather than to a more solid understanding of the concepts? How can we ensure our assessments lead our students to deeper understanding rather than grade chasing?

If the purpose of our assessment is formative, to advance student learning, then digital assessment can be very powerful. For example, a single multiple-choice test designed to assess what was learned at the end of the unit is not likely to achieve that purpose. Several multiple-choice quizzes spread throughout the unit provide a greater opportunity for learning; however, changing the style of the multiple-choice exam is likely to have the greatest impact of all. Adding media (pictures or video), offering a 'hint' option, giving feedback on questions answered incorrectly, or asking students to rate their confidence in each answer are a few ways to improve this traditional form of testing. How teachers use the information obtained from these tests also matters. Rather than simply recording a grade, if teachers compile answers and determine where most students answered incorrectly, an opportunity for class discussion arises. Was the question worded in a way that was difficult to understand? Does the concept require review? Would having students work in peer groups to debate their answers lead to increased understanding? Letting students retry exams can also benefit learning. This has traditionally been viewed as cheating; however, if the purpose is for students to identify areas needing improvement, then students can learn those concepts and confirm their new understandings, which is the primary purpose of the assessment. Traditional assessment practices can be used in new ways to increase student achievement.
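To make the 'compile answers and find the trouble spots' idea concrete, here is a minimal sketch of simple item analysis in Python; the answer key, student responses, and 50% review threshold are invented examples rather than real class data.

```python
# A minimal sketch of item analysis: tally how often each question
# was missed so trouble spots can drive class discussion.
# Answer key, responses, and the 50% threshold are invented examples.

from collections import Counter

ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}

responses = [
    {"q1": "b", "q2": "c", "q3": "a"},
    {"q1": "b", "q2": "d", "q3": "c"},
    {"q1": "a", "q2": "c", "q3": "a"},
]

def missed_counts(key: dict, submissions: list) -> Counter:
    """Count wrong answers per question across all submissions."""
    missed = Counter()
    for sub in submissions:
        for q, correct in key.items():
            if sub.get(q) != correct:
                missed[q] += 1
    return missed

missed = missed_counts(ANSWER_KEY, responses)
for q, n in missed.most_common():
    flag = "  <- review in class?" if n / len(responses) >= 0.5 else ""
    print(f"{q}: missed by {n}/{len(responses)} students{flag}")
```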

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1(1), 3-31. Retrieved from http://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf

Jenkins, M. (2004). Unfulfilled promise: Formative assessment using computer-aided assessment. Learning and Teaching in Higher Education, 1(1), 67-80. Retrieved from http://www2.glos.ac.uk/offload/tli/lets/lathe/issue1/articles/jenkins.pdf

Case studies and assessments

In the healthcare context, I see a major opportunity for using technology to support patients in gaining a greater understanding of their own conditions and improving self-management. A particular example of this is the Bant app, which helps diabetes patients better self-manage their blood glucose levels. One of the major challenges of using technology to support patients is that a certain level of basic computer and digital literacy is required on the part of the user. If that basic level is not present, the technology will not serve its purpose. Technology in this case may widen the digital divide between those who can afford it and those who cannot. Often it is those who are less educated about personal health who require more support but do not have the means to obtain it.

Another example, more in line with the Gibbs and Simpson (2005) reading, involves healthcare students. A key opportunity for technology to support assessment is immediate feedback on certain types of assessment questions. In healthcare, critical thinking and analysis are usually tested in the form of case scenarios. Learning occurs most often when students are able to justify their answers and use clinical reasoning to rule out alternatives. These types of answers are not well suited to automated responses or feedback. Although the feedback may be immediate, is it “sensitive to the unsophisticated conceptions of learning that may be revealed in students' work” (p. 22)? As such, I feel that feedback is where teachers provide the most value for student learning, and it requires the kind of thought that technology may not be able to support at this time.

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1(1), 3-31. Retrieved from http://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf

Assessment vs. Attendance

The major challenge for our Department of Continuing Professional Development, in terms of student assessment, is that it is not required.  In fact, to some extent, it is prohibited. (I’m not kidding!)

In healthcare education, one must abide by the regulatory agency of each profession, usually called a ‘College’. Both the Royal College of Physicians and Surgeons of Canada and The (yes, ‘The’ must be capitalized) College of Family Physicians of Canada require that educational credits be issued to practicing physicians based solely on their attendance at accredited continuing medical education events, and not on any performance measure.

The truth is, we are not prohibited from including performance measures in our courses, but we cannot refuse to give educational credits to a participant based on their performance on such measures. Consequently, in face-to-face courses, few instructors ever bother with any kind of assessment, except for the occasional use of an audience response system.

That said, some providers of Continuing Medical Education are sticklers for attendance. Participants are given individual bar codes and must scan in and out of lecture halls. Any time they are not in the lecture hall is not credited. As professionals must acquire a certain number of credits annually, this does motivate them to attend.

However, in an online environment, this becomes tricky. Depending on the LMS used, it may not always be possible to see to what extent someone has participated. For example, I may be able to tell that someone has opened a particular learning module, but I have no way of knowing how long they engaged with the material, especially as some of our course materials can be downloaded and read offline. Our solution is to require a final multiple-choice quiz on the course content, and so far participants are complying. However, if anyone refused to take the test, or took the test and failed, we would still be required to issue them learning credits.

Bates (2014) is fully aware of this phenomenon, as he indicates in his section on 'No Assessment'. In fact, he describes our learning environment very well: “There may be contexts, such as a community of practice, where learning is informal, and the learners themselves decide what they wish to learn, and whether they are satisfied with what they have learned” (Section A.8.3, p. 2). Physicians themselves are responsible for keeping on top of the latest advances in their area of medicine. They must show that they are attending educational activities regularly; however, which aspects of these activities they find relevant to their own practice is, at this point, up to them to decide.

However, there is now a movement in physician education adapted from business management – that of quality improvement. More and more, physicians are encouraged to assess their own practices, or in some cases, have an outside agency do it. These assessments can then be used to show them which areas would benefit most from improvement. For example, perhaps one practice is far below the national norm in terms of performing immunizations; or perhaps a large proportion of patients have cardiac conditions but the physician has not reviewed advances in cardiac care in some time. Practice assessment, therefore, covers many of the conditions outlined by Gibbs and Simpson (2005), particularly the last few:

  • Condition 8: Feedback is appropriate, in relation to students’ understanding of what they are supposed to be doing.
  • Condition 9: Feedback is received and attended to.
  • Condition 10: Feedback is acted upon.


References:

Bates, T. (2014). Teaching in a digital age. Appendix 1, A.8. Retrieved from http://opentextbc.ca/teachinginadigitalage/chapter/5-8-assessment-of-learning/

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1(1), 3-31. Retrieved from http://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf

Technology & Triangulation


In the small private school where I worked for 5.5 years and only recently left, assessment was often a major point of discussion for our staff. In the past year or so specifically, incorporating what the Ontario Curriculum calls 'triangulation of assessment' received a lot of attention and effort toward implementation. For anyone who isn't familiar with this, the idea is to gather evidence of learning from three sources: observations, conversations, and products.


At first we all groaned at the idea of MORE assessment, but before long we started to realize how empowering it can be, and how well it can be integrated with technology. Products have now become a tool we all use much more sparingly, and often only after other kinds of formative feedback have been provided to students to help them prepare for the summative (graded) assessment.

Observing student interaction or work can obviously be done with a simple checklist, but many of my peers have started to use apps that help them stay organized as well. Class Dojo (best suited to grades below 9, I would say) and Socrative (great for senior students) are apps that allow teachers to create checklists for certain behaviours, skills, or even content they are looking for (Google Forms will do this too, if a teacher is willing to make one) and then make the results visible for students to check their own progress. Providing students with the criteria by which they will be assessed (or ideally, co-constructing them with students), and not always telling them WHEN to expect such evaluation (or making it clear it will happen every day), improved our student attendance greatly. When we were told we could use such evaluations to help inform our professional judgment of a student's grade, and the students themselves became aware of this, they took class time much more seriously as a whole. As a result, many of the conditions for effective assessment outlined by Gibbs & Simpson (2005) were met, especially numbers 4-8. Providing this kind of timely feedback, and putting it online where students can check in on it when they wish, also helps counter the phenomenon of students just 'studying for the exam' and cuts down on their ability to get a high grade while being “selectively negligent” (p. 6) about the elements they don't see as valuable.

Gibbs & Simpson also explain the preference students have for coursework over exams, and how studies show that students achieved better grades in courses that placed greater emphasis on coursework, and the coursework didn't even need to be 'marked' (pp. 7-8)! Flipping lessons, where students watch a video or read something content-heavy PRIOR to class and then engage in activities DURING class time that test their understanding, is also made much easier through the use of technology. Hosting the 'homework' (i.e. the content) on the class LMS makes it easy for students to access, so that when they arrive in class they can begin to engage with it and the teacher can get a quick idea of who needs what.

The challenges of this kind of technology integration certainly lie in the learning and designing process for the teacher. In my experience thus far, students are quite quick to pick up on how to use the various platforms I've attempted, as long as I'm confident with them. Flipping is a front-loaded type of work, but the lessons can be reused for future offerings of the course and easily shared between peers. I'm the type of teacher to just jump into trying new technologies or methods, but I have learned that scaffolding their implementation is important for many of my teaching colleagues, as it can appear quite intimidating. Just as with students, however, when teachers get to the point where they are creating their own content (whether it's videos or just lessons that USE technology), their enjoyment and understanding become authentic. Students have so far shown a positive attitude towards this kind of technology-based support for assessment, when I've had the organization and time to make it come together; I've also benefited, though, from schools with a 1:1 device-to-student ratio. It would take some creativity to figure out how to proceed if a percentage of a class lacked access!

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1(1), 3-31. Retrieved from http://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf


Digital Tools for Elementary Assessment

Both required readings this week emphasize that assessment is a primary driver of student motivation and engagement (Bates, 2014; Gibbs & Simpson, 2005). However, they also point out the importance of a learning community and of motivating student learning through other means, such as interactive simulations or games, peer review and discussion, and consistent formative assessment and feedback.

One of my primary foci with assessment is creating a community of learning among my students. Peer feedback and reinforcement are a major aspect of my classroom community. For example, when students write weekly posts for their blogs (which act as digital portfolios), they give feedback on each other's posts and make edits before submitting them to me for feedback. By providing weekly peer and instructor feedback through digital means, I have seen writing abilities shift dramatically in my Grade 3 and 4 students.

Another way that I try to incorporate self-assessment practices is through metacognitive reflection after recording student read-alouds. With young students, it is especially important that they can hear themselves read in order to make improvements. I use Explain Everything on the iPad to have students record their reading and listen to the playback. I equate this to “game film for learning” – as a former athlete, watching myself play was a powerful method by which I could improve. Students use their own recordings to improve attention to punctuation, expression, fluency, and to review texts for comprehension purposes.

However, just as ongoing formative assessments like these are powerful and easy to create, their volume can quickly pile up. A teacher cannot possibly review recordings of daily readings for every student, every day. Likewise, my students are constantly writing, sometimes two or more posts a week, and it is not plausible to give feedback on every single piece. I have found I need to pick and choose what to assess and allow students to develop and improve the rest of their work on their own. Despite being young, most of them absolutely take advantage of this time, and the peer reinforcement helps keep them on track, too.

As we develop our introductory modules, I am realizing how fluid my own assessment practices are and how different the requirements for online courses really are. A blended model allows you to change the components of assessment you aim to include in your teaching depending on what students know. In contrast, online courses need to lay that all out for students at the beginning of the course and seem less flexible in catering to a group's or a student's specific needs.


References

Bates, T. (2014). Teaching in a digital age. Appendix 1, A.8. Retrieved from http://opentextbc.ca/teachinginadigitalage/chapter/5-8-assessment-of-learning/

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1(1), 3-31. Retrieved from http://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf

Assessment or Evaluation?

Planning for learning is, according to Wiggins & McTighe (2005), a matter of being “more thoughtful and specific about our purposes and what they imply” (p. 14). Using a planning methodology such as Wiggins and McTighe's (2005) backward design is a great place to start. When designing assessment methods, thought needs to be given to the purpose of the assessment and to those being assessed. Despite the pitfalls outlined by Brown (2001), I believe the challenge with the assessment methods I choose will be that they are used with teachers. Teaching, learning, feedback, and improvement are all words intricately woven into teachers' perceptions of self-efficacy. After all, teaching learners is what teachers do. In order to assess the planned learning objectives, I am going to need to build motivation for “student” improvement in a way that sets aside performance posturing/anxiety/paralysis and helps them forget, for the moment, that they are teachers [read: are learning].

References

Brown, G. (2001). Assessment series 3. Assessment: a guide for lecturers. LTSN Generic Centre: York.

Wiggins, G., & McTighe, J. (2005). Understanding by Design. Alexandria, VA: ASCD. Retrieved from https://books.google.ca/books?id=hL9nBwAAQBAJ&pg=PA13&source=gbs_toc_r&cad=3#v=onepage&q&f=false