Category Archives: D: Assessment

Teacher Voice, Reading and Assessment?

I recently came across a statement in John Hattie and Gregory Yates's Visible Learning and the Science of How We Learn (2014) that resonated with me and shed some light on teacher talk and its impact on student learning and assessment strategies. As Hattie and Yates (2014) state, "a great deal of information flows through teachers' talk. But when a teacher exposes students to high levels of their talk, the students' basis for knowing what is relevant or not can be undermined." As teachers, how does a lecturing style of instruction, or an overuse of teacher talk, negatively impact the learning and development of our students? We see it in our classrooms frequently, and we identify areas of concern for students who can't seem to sit still or listen. But are our expectations reasonable, or even appropriate, for the learning of our students? In turn, I wonder how this translates into online learning, and whether a reliance on reading texts and materials might have a similar impact on student focus and learning.

According to Hattie and Yates (2014), studies of the characteristics of effective teachers have found that material is best learned when explained in five- to seven-minute bursts. Mental focus drops off significantly after 10 minutes, and other information-overload factors come into play: students' ability to listen and focus intensively (or to try to focus) literally runs out through biological exhaustion as the glucose available to the brain is depleted. As students try to conserve this energy for upcoming tasks and trials in the school day, mind wandering and other inattentive behaviours become adaptive strategies for conserving their physical and biological resources for learning (Hattie & Yates, 2014). If we are basing online learning around a "new" style of lecturing, whether through videos or reading, what effect will this have on student behaviour? Is it reasonable to base assessment strategies around these methods of instructional delivery?

With information processing and mental organization, our minds aim for simplicity, but excessive teacher talk adds complexity to the input. Creating opportunities for meaningful student discussion, enhanced with authentic student voice, can help support students by building opportunities for higher-order thinking. By reducing and refocusing teacher involvement in these discussions, students can be guided toward deepening their knowledge and understanding, while teachers' roles shift away from more traditional models, both in the classroom and in online learning environments.

References

Hattie, J., & Yates, G. (2014). Visible learning and the science of how we learn. New York: Taylor & Francis.

 

Challenges and Opportunities

For the Moodle course I am creating, the major issues are:

  • the large number of adult students per class and only one online facilitator.
  • a wide range of student motivations. For example, some students are taking the course due to personal interests, whereas others are taking it as work-related professional development.

Due to a lack of resources, non-credit adult education courses for professional development are often designed to be self-paced, without a facilitator or instructor. Think, for example, of the Human Resources or WHMIS courses organizations often require their employees to complete.

One of the goals for the course I am working on is for the learning to be based on constructivist principles, so that learners can share knowledge, access resources, and become involved in their communities. A facilitator is therefore necessary to guide discourse and provide expert input where required.

In order to focus attention on the integration of key concepts, rather than the memorization of facts, the course is designed to limit the instructor's time spent on rote knowledge. Frequent knowledge check-ins, thought-provoking questions, and weekly auto-graded quizzes will provide repetition, knowledge assessment, and feedback.
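A minimal sketch of how such an auto-graded check-in might work (the questions, accepted answers, and feedback strings are all hypothetical; a real Moodle quiz would configure this through its own question bank):

```python
# Minimal auto-graded knowledge check-in: each question stores an
# accepted answer plus feedback strings shown immediately after grading.
# All question content here is invented for illustration.

def grade_checkin(questions, responses):
    """Return (score, list of feedback strings) for a set of responses."""
    score = 0
    feedback = []
    for q, given in zip(questions, responses):
        correct = given.strip().lower() == q["answer"].lower()
        if correct:
            score += 1
        feedback.append(q["feedback_right"] if correct else q["feedback_wrong"])
    return score, feedback

questions = [
    {"answer": "constructivism",
     "feedback_right": "Correct: learners build on prior knowledge.",
     "feedback_wrong": "Review the module on learning theories."},
    {"answer": "formative",
     "feedback_right": "Right: this assessment is for learning, not of it.",
     "feedback_wrong": "Recall the difference between formative and summative."},
]

score, feedback = grade_checkin(questions, ["Constructivism", "summative"])
```

Because the feedback is attached to the question, every learner gets an immediate comment with no facilitator time spent.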

This will free up the facilitator’s time to focus on the discussions. The discussion questions are designed to capitalize on the wide knowledge base and experience of the learners.

Usability

Since my context isn't as a teacher, I am most concerned with how to launch or implement new assessment technologies for instructors. I think the usability of assessment tools is a major issue. Having had some minor experience creating assessments, I know it can be difficult and time-consuming to create something that works well, and that is on top of all the time spent developing the questions and answers. If teachers struggle to input their assessments, they may decide to cut corners and not provide the kind of depth that students need. So I think there is a great opportunity for an assessment tool that offers strong usability and interactivity to all parties involved.

Pros & Cons

Today, technology is helping students learn at their own pace and in their preferred style. Yet for instructors deciding how to use technology to measure learning, a few things must be taken into consideration:

Cons:
1. Environment: When technology is used for assessment, the instructor can only observe the digital environment.
2. IT learning curve: Learners need to be familiar with the technology and the platform used; otherwise they may score poorly even though they know the answers.
3. Motivation: It is easy to demotivate learners, especially in a distance learning environment, because they cannot gauge how their peers are doing on the assessment.
4. Plagiarism: In the digital world plagiarism has become easier, although there is software to uncover it.

Pros:
1. Formative & summative tools: Instructors are equipped with different tools and content that can create different types of assessment.
2. Learner behaviour: Standards like SCORM and xAPI allow instructors to measure their learners' experience, which can inform better course design. For example, an instructor can see how much time is spent on each page and where a learner clicked or got stuck.
3. Machine learning: Advanced algorithms offer automated data analysis that can generate useful information.
4. Big data: The data recorded across different institutions can be analyzed and cross-referenced to validate assumptions or find new methods of assessment.
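As a rough illustration of what xAPI records, here is a minimal statement in the actor–verb–object shape the specification uses (the learner name, activity ID, and duration below are invented; a real system would send this to a Learning Record Store):

```python
# A minimal xAPI statement: who (actor) did what (verb) to what (object),
# with an optional result. A Learning Record Store collects statements
# like this, which is how instructors can see time spent per question
# and where learners got stuck. All names and IDs are made up.
statement = {
    "actor": {
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {
        "id": "http://example.com/course/module-3/quiz-1/question-2",
        "objectType": "Activity",
    },
    "result": {
        "success": True,
        "duration": "PT1M30S",  # ISO 8601 duration: 1 min 30 s on this item
    },
}
```

Aggregating many such statements per activity ID is what turns raw clicks into the "where did learners get stuck" view described above.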

References

Bates, T. (2014). Teaching in a digital age. Retrieved from http://opentextbc.ca/teachinginadigitalage/chapter/5-8-assessment-of-learning/ (Appendix 1, A8)

Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255-260.

Metz, S. (2015). Big data. The Science Teacher, 82(5), 6.

Online Assessments not always acceptable

Last year I worked at a grade 7 and 8 school that licensed an LMS called Schoology. The user interface is very much like Facebook's, and the students and I found it easy to use. I taught math exclusively, to three grade 7 classes and three grade 8 classes. While I often assigned creative work that I would mark by hand, I set up some tests on Schoology for automated marking. Yes, it saved me an enormous amount of time, but I tried to ensure that the tests asked rich questions and were valid. Some were multiple choice and some involved filling in the blanks with the right answers, but both of these options could be marked automatically by the system. Some were multi-step problems where the student would have to fill in the blank at each stage of the problem, and thus could still get part marks while being assessed against the answers pre-programmed into Schoology. Because it was a French immersion program, I would sometimes have to override the system if someone put in the English answer 36.5 rather than the French 36,5, and for other issues of misunderstanding I could go back in and give part marks.
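The decimal-comma overrides described here could, in principle, be avoided by normalizing answers before comparison. A sketch of that idea follows; this is not Schoology's actual matching logic, just an illustration of locale-tolerant numeric marking with part marks per blank:

```python
# Accept both the English "36.5" and the French "36,5" as the same number.
# A sketch only: it treats "," and "." as interchangeable decimal
# separators, so it would misread thousands separators like "1,000".

def matches(given, expected, tol=1e-9):
    """True if the student's string equals the expected numeric answer."""
    try:
        value = float(given.strip().replace(",", "."))
    except ValueError:
        return False
    return abs(value - expected) <= tol

def part_marks(blanks, expected):
    """One mark per correctly filled blank in a multi-step problem."""
    return sum(matches(g, e) for g, e in zip(blanks, expected))
```

With this normalization, `matches("36,5", 36.5)` and `matches("36.5", 36.5)` both succeed, and a student who gets two of three steps right still earns two part marks.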

Anyway, despite the tests being rich and flexible, people complained to the principal, and based on his directive, I could use them as "quizzes" for formative assessment but not as "tests" for summative assessment. Why not? I still don't know. Really, I think it was because it was perceived as me taking a shortcut and not doing all the work that my predecessors had to do! Let's face it: most math teachers since the advent of the schoolhouse have marked math assignments by looking at the answer and marking it right or wrong. If it is wrong, we look back at the process and identify where the thinking led to the incorrect answer. Is it really necessary that such a job be done by a human?

Optional completion based programs and assessment

The Continuing Professional Development world is an interesting one when it comes to assessment. Many of our programs are optional, and the ones that are mandatory have no formal assessment or grades. I see that as the largest challenge in incorporating technology that supports student assessment: our learners are simply not required to complete it. That said, I think technology offers a lot of interesting new mediums that would appeal to students. It also adds a level of convenience that our learners would appreciate. I'm sure they would be much more likely to complete assessments if these were easily accessible, quick, and provided immediate feedback.

One opportunity I do see for us to use more technology to support student assessment is through online modules. Our department has been growing our on-demand program offerings both internally and externally. I think we could use software like Articulate Storyline to build assessment right into our modules, whether that is a few multiple choice questions throughout the module or a summative question at the end. We could use this technology to assess our learners and make the modules more interactive.

I would love to hear all of your thoughts on the matter!

Online Assessment

Some of the opportunities I see in using technology to support student assessment are described by Bates (2014): project work or group work, along with e-portfolios. I remember back in 2010, when I graduated from the education program, our final assessment was to create an e-portfolio linking our journey in the program to the eight BC Standards for Teachers.

Most teachers use a physical portfolio for their students' progress, but once I completed my own e-portfolio, I knew that this would be perfect for students. Bates (2014) describes e-portfolios as self-assessment through reflection, recording, knowledge management, and evaluation of learning activities. Students could keep them online indefinitely and see how they improve over the years to come. I certainly enjoyed mine, and I look forward to completing another e-portfolio at the end of this program.

Another great opportunity technology brings to student assessment is online project work. There are so many apps and so much learning software available for students nowadays that the opportunities are endless. The new BC curriculum is headed this way through self-exploration, problem-solving, collaborative learning, and creativity. I teach my grade 7 class using the MYP International Baccalaureate curriculum, and I feel that the new BC curriculum follows the same idea.

Another opportunity I find in online assessment is feedback. According to Hattie (1987), the most powerful influence on student achievement is feedback (as cited in Gibbs & Simpson, 2005). Students want to know how they are doing, not necessarily through a mark, but through words. Are they doing it correctly? Is there something they should change? Do they really understand it? Feedback is not my strong point, as it requires individualized comments for each student, and I am used to just grading a quiz or a paper using a rubric. With online feedback, I can pre-program comments so that each student receives the comment in the database that corresponds to their response. I can see that this would be a great tool to use.
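A pre-programmed feedback table of the kind described might look like the sketch below; the question, anticipated responses, and comments are all hypothetical:

```python
# Map each anticipated response to a written comment, with a fallback
# for anything unanticipated. Content invented for illustration:
# imagine the question was "Simplify 6/8."
FEEDBACK = {
    "3/4": "Correct! You simplified the fraction fully.",
    "6/8": "Right value, but remember to reduce to lowest terms.",
    "4/3": "Check the order: the numerator is the part, the denominator the whole.",
}
DEFAULT = "Not quite. Revisit the example on simplifying fractions and try again."

def comment_for(response):
    """Return the stored comment matching this response, or the fallback."""
    return FEEDBACK.get(response.strip(), DEFAULT)
```

The teacher's effort goes into anticipating common wrong answers once, and every student afterwards receives words, not just a mark.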

Bates, T. (2014). Teaching in a digital age. Retrieved from http://opentextbc.ca/teachinginadigitalage/

Blank Canvas

Due to the looming final ministerial exit exam, the manner in which our department assessed its students changed. Teachers, pressed for time, hurried through the concepts that might be covered on the final. Amidst the chaos generated by the genuine desire for our students to succeed on this exam, we omitted formative assessment, as well as opportunities to provide detailed and relevant feedback; we provided grades. Quite honestly, we have lost most, if not all, of the conditions set out by Gibbs, Simpson, Gravestock, and Hills (2005) for effective assessment. It seems counter-intuitive, yet when faced with growing fears of failure, we wanted to drill the knowledge into the students. Good intentions, poor results.

This rush not only pushed aside formative assessment but also modified the format of summative assessments. In order to best mimic the final exam, projects, case studies, portfolios, and to a certain extent complex labs were replaced by paper quizzes and tests. In some terms, this form of summative evaluation accounts for 90% of the student's grade. In such a context, you must understand that technology, even the use of science simulations, is a very rare practice.

On the positive side, I must say that the opportunities for using technology in this context are endless, as one is faced with a blank canvas. Technology could be used to provide feedback and formative assessments; we do not have to deal with technology being misused or with revising existing behaviour, because technology is simply not being used at all. However, even though technology is at our disposal, we must contend with two enormous challenges: tradition and fear.

Interestingly enough, I see these challenges originating not from the students or their parents, but from the educators. Students are comfortable with the idea of using technology; we are teaching a generation that has never known the world without the internet. Many educators, however, have had little experience with technology. Some fear trying something new and prefer sticking with the tried and true methods they have developed over the years. Some fear letting go and doubt that meaningful learning can occur beyond their classroom walls. Others fear being embarrassed in front of the students, exposed as less technologically savvy than their pupils. Faced with such fear, it seems safer to stick with tradition, with the old-fashioned quiz and test. Far from pleased with the results of such methods or the long hours of correction, they still shy away from the potential benefits of trying technology. Instead of admitting this fear, some educators dismiss using technology on the assumption that students will cheat or plagiarise (Jenkins, 2004). However, this behaviour is observable regardless of whether the assessment is performed with technology or not. Other educators assume that their students, the supposed "digital natives" (Prensky, 2001), would simply not be able to understand how it works.

In my context, the affordances granted by the use of technology in assessment need to be unlocked slowly. We need to build confidence in the notion that assessment is not only the driving force for learners (Bates, 2014); it also drives our teaching practices. It can provide educators with immediate feedback on the difficulties experienced by the class and on concepts that are still unclear to many. Such feedback is necessary to teaching itself, and I believe that technology can aid in that pursuit.

I have sketched below a potential progression that might hopefully lead to a full integration of technology in assessments, by slowly increasing teachers' confidence in both the technology and their own technological skills, as well as slowly transferring assessment activities outside the classroom walls. Any additions or comments you might have on this progression would be greatly appreciated. For convenience, I have placed it in a Google Doc for everyone to edit: https://docs.google.com/document/d/14oezqRrFMkdv4AZjB86Y8YThD7KMHB9D__RyyIfW0cw/edit?usp=sharing

 

Thank you,

Danielle

[Figure: "progression" diagram, culminating in "High Tech"]

References:

Bates, T. (2014). Teaching in a digital age. Retrieved from http://opentextbc.ca/teachinginadigitalage/

Gibbs, G., Simpson, C., Gravestock, P., & Hills, M. (2005). Conditions under which assessment supports students’ learning.

Jenkins, M. (2004). Unfulfilled promise: formative assessment using computer-aided assessment. Learning and Teaching in Higher Education, 1(1), 67-80.

Prensky, M. (2001). Digital natives, digital immigrants part 1. On the horizon, 9(5), 1-6.

Assessment challenges in Med Ed

Medical education has changed significantly in the last several years. Emphasis used to be placed on content: exams would test minutiae, which were forgotten as quickly as they were memorized. Medical education today aims to teach communication, collaboration, self-directed learning, compassion, patient-centred care, leadership, health advocacy, and professionalism. Even in terms of content, the aim is to learn the big picture: to take a situation, critically analyze it, develop a plan, and execute it. Given this change, assessment methods must also change.

Traditionally, multiple choice examinations were the assessment method of choice. They can be administered to a large group of students at once, and scoring can be performed electronically. Unfortunately, this method may not accurately test a student's level of understanding, does not provide effective feedback, and makes it difficult to assess attributes such as communication, collaboration, and leadership. Herein lies the challenge: how do you assess all of these attributes, in over 100 students, and provide formative feedback that will inspire and motivate medical students to further their education? Can technology assist in this endeavour?

I think some of these challenges can be addressed through technology. Testing higher-level cognitive processes can be done by making multiple choice questions context-rich. Converting them to short answer questions that can be marked electronically is also feasible, and feedback for these questions can likewise be provided through technology. I understand that there are some programs able to score essays, which would assess not only knowledge and higher-order thinking skills but also communication. Discussion forums are another way to assess communication and analytical skills. Online simulations would also be useful in these domains, but they would have to be created, which requires more skills than I possess. And I'm sure there are many applications out there that I'm not aware of that could be useful in the assessment of medical students. I look forward to learning more about these as I progress in my career and education.
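Real automated essay scorers are proprietary and typically ML-based, but a crude keyword-rubric version of electronic short-answer marking, just to show the shape of the idea, might look like this (the rubric and sample answer are invented and not clinical advice):

```python
# Crude keyword-rubric scorer for short answers: award one mark per
# rubric concept mentioned, and report which concepts were missed so
# the feedback is formative. A stand-in sketch only; real automated
# essay scoring uses far more sophisticated models.

def score_short_answer(answer, rubric):
    """rubric: {concept: [accepted synonyms]}. Returns (marks, missed)."""
    text = answer.lower()
    marks, missed = 0, []
    for concept, synonyms in rubric.items():
        if any(term in text for term in synonyms):
            marks += 1
        else:
            missed.append(concept)
    return marks, missed

# Hypothetical rubric for an invented question:
rubric = {
    "epinephrine": ["epinephrine", "adrenaline"],
    "airway": ["airway"],
    "iv fluids": ["fluids", "iv fluid"],
}
marks, missed = score_short_answer(
    "Give IM adrenaline immediately and secure the airway.", rubric)
```

The `missed` list is what makes this formative rather than just a score: the student sees which concept was absent, not only that a mark was lost.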

Flipping our approach?

I am acutely aware that the overall approach to assessment at my school is not ideal, for students or teachers. While we also face significant barriers to using technology, I think online assessment can help shift our approach to assessment in a direction that is more useful for both.

It was Bates's (2014) list of assessment purposes that really got me thinking about assessment at my school. Bates orders his list from most to least important for creating an effective learning environment. It was no surprise to me that our overall approach is the reverse. As much as it pains me to admit, as a private school that caters to a very specific demographic, I would say the number one priority of assessment at my school is "for institutional accountability and/or financial purpose" (Bates, 2014). As this makes Bates's list, it does have a role in creating an effective learning environment, largely by ensuring there IS a learning environment to begin with. Surprisingly, I don't feel pessimistic about this. The results students get on assessments like TOEFL, IELTS, and the SAT help attract more students. More students mean more teachers, which means there is opportunity for change, because there is bound to be a teacher brave enough to switch the focus, if not for the whole school, at least in their classes, and perhaps that will include online assessments.

The benefits of online assessment for my school are two-fold: it would potentially maximize the time teachers spend giving feedback, and it would help us reverse our current approach to assessment by focusing on the first two items on Bates's list: "to improve and extend student learning" and "to assess student knowledge and competence in terms of desired learning goals or outcomes" (Bates, 2014).

I see online assessment as a means to focus more on formative assessment with feedback. As Gibbs and Simpson's Condition 4 ("Sufficient feedback is provided, both often enough and in enough detail") states, "feedback may need to be quite regular, on relatively small chunks of course content, to be useful" (p. 17). The main obstacle to giving frequent feedback at my school is large class sizes and a heavy teaching load. Using online assessment with automated feedback would ensure that every student gets some feedback. Socrative, for example, would allow my students to complete quick assessments, perhaps even daily ones, using their cell phones. I know that precise written feedback is ideal, but even just showing correct answers is feedback that students weren't getting before, at least not immediately. Gibbs and Simpson's Condition 6 ("The feedback is timely in that it is received by students while it still matters to them and in time for them to pay attention to further learning or receive further assistance", p. 18) supports my assumption, although their example referred to "imperfect feedback from a fellow student" (p. 19) rather than to learning the correct answers, as I suggested.

Timely feedback also helps the teacher. If the teacher is also getting feedback from online assessments by way of reports, they can quickly identify trouble areas for the whole class, or for individual students, and address them in the next class, rather than finding out after a summative assessment that no one understood a particular topic. So even a simple Socrative quiz could make a huge change at our school, as long as the feedback is useful.
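The kind of whole-class report described above reduces to a per-question correct rate. A sketch of that idea (the quiz data and the 60% threshold are invented, and this is not Socrative's own reporting code):

```python
# Flag questions the class as a whole struggled with, so the teacher
# can reteach before the summative assessment. Data and the 60%
# threshold are invented for illustration.

def trouble_areas(results, threshold=0.6):
    """results: {question_id: [True/False per student]}.
    Returns question ids whose correct rate falls below threshold."""
    flagged = []
    for qid, outcomes in results.items():
        rate = sum(outcomes) / len(outcomes)
        if rate < threshold:
            flagged.append(qid)
    return flagged

results = {
    "Q1": [True, True, False, True],    # 75% correct
    "Q2": [False, False, True, False],  # 25% correct: reteach this topic
    "Q3": [True, True, True, True],
}
flagged = trouble_areas(results)
```

A report like this takes seconds to read, which is exactly what makes the feedback loop timely for the teacher as well as the students.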

There is a flip side to online assessment, and perhaps it's a comment that doesn't need to be made, because, let's be honest, many teachers are guilty of making an assessment simply because they need to assess students, and they know it. I think it's easy to abuse online assessments because, in my experience, they're easy to use even when you're not using them well (does that make sense?). So, if an online assessment (or any assessment!) doesn't "[engage] students in [a] productive learning activity of an appropriate kind" (Gibbs & Simpson, 2005, p. 14), it is a waste of everyone's time. You would think that's common sense, but I see it happen every day. I'm guilty of it myself. But a good teacher keeps learning and trying new things, so I'm going to make a conscious effort not to assess unless it is worthwhile to both my students and to me as their teacher.

References

Bates, T. (2014). Teaching in a digital age. Retrieved from http://opentextbc.ca/teachinginadigitalage/chapter/5-8-assessment-of-learning/

Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1(1), 3-31. Retrieved from http://www.open.ac.uk/fast/pdfs/GIbbs%20and%20Simpson%202004-05.pdf