Two-Stage Exam: Introduction and Resources

Time to dive in! After thinking about them for a long time, this term I’m converting my exams into two-stage exams.

  • Step 1. I shorten the exam so it’s doable in about 2/3 of the testing time slot.
  • Step 2. Students write the exam individually.
  • Step 3. Students immediately — during the same class period — write the same exam again in groups of 4.
  • Step 4. Grade the exams as usual, but weight the final score 90% individual and 10% team, with a guarantee: if your individual score beats the team score, it counts for 100% of the weight (which very rarely happens, so I’m told).
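
The weighting rule in Step 4 boils down to a better-of guarantee. Here is a minimal sketch in Python: the 90/10 split and the individual-score guarantee come from the steps above, while the function name and the 0–100 percent scale are my own assumptions.

```python
def two_stage_score(individual, team, ind_weight=0.9):
    """Combine individual and team exam scores (0-100 scale).

    The blended score is 90% individual + 10% team, but a student is
    guaranteed never to be hurt by the team stage: if the individual
    score beats the team score, it counts for 100% of the weight.
    """
    if individual > team:
        return individual  # guarantee: the team stage can't lower a grade
    return ind_weight * individual + (1 - ind_weight) * team

# Typical case: the team outscores the individual, so the blend applies.
print(two_stage_score(80, 95))  # 0.9*80 + 0.1*95 = 81.5
# Rare case: the individual beats the team, so the individual score counts fully.
print(two_stage_score(90, 85))  # 90
```

Since the team almost always outscores the individual, in practice the rule nearly always reduces to the simple 90/10 blend.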

Why am I making this change?

Four key reasons:

  1. Data. A growing pool of evidence is showing that team tests help students learn. See references below.
  2. Feedback. My classes are very large, so I struggle to give any personalized feedback at all, especially timely feedback. By re-doing the test immediately with peers, students get to discuss the questions right away and, according to the data, usually arrive at the right answer.
  3. Exam improvement. Based on my evaluations, a small but consistent group of students find my exams very difficult and/or too long. Because I still only have 50 minute classes to work with, this change will force me to shorten my exams, culling and distilling to just the most effective questions that measure deep learning.
  4. Community. I value collaboration and building a supportive community. Research papers and instructors who have used this method report extra benefits beyond learning: students have more rapport with each other and are more willing to participate with their peers in class throughout the term. Also, Gillian Sandstrom and I have a research paper in press showing that the more students talk in class, the more they feel like part of a community and interested in the class. So… back to data.

Interested? Here are some quick and effective resources for implementation:

  1. Videos by the CWSEI team depicting Two-Stage Exams in action.
  2. Jones, F., Gilley, B., & Harris, S. (2013). Tips for successful two stage exams. The EOS-SEI Times, 6(9). Retrieved from http://www.cwsei.ubc.ca/Files/EOS/EOS-SEITimes_4.1_GroupExams.pdf
  3. Jones, F., Gilley, B., Lane, E., Caulkins, J., & Harris, S. (2011). Using group exams in your classes. The EOS-SEI Times, 4(1). Retrieved from http://www.cwsei.ubc.ca/Files/EOS/EOS-SEITimes_4.1_GroupExams.pdf
  4. PHAS-CWSEI Team. (2012). Two-stage (group) exams. CWSEI–PHYS & ASTRO Newsletter. Retrieved from http://www.cwsei.ubc.ca/Files/PHAS/PHAS-CWSEI_Newsletter_Summer-2012.pdf
  5. Brett Gilley, aka @ModernHydra

Data

Dahlstrom, O. (2012). Learning during a collaborative final exam. Educational Research and Evaluation: An International Journal on Theory and Practice, 18, 321-332.

Eaton, T. T. (2009). Engaging students and evaluating learning progress using collaborative exams in introductory classes. Journal of Geoscience Education, 57, 113-120.

Gilley, B. H., & Clarkston, B. (2014). Collaborative testing: Evidence of learning in a controlled in-class study of undergraduate students. Journal of College Science Teaching, 43, 83-91.

  • A particularly well-designed example.

Leight, H., Saunders, C., Calkins, R., & Withers, M. (2012). Collaborative testing improves performance but not content retention in a large-enrollment introductory biology class. CBE—Life Sciences Education, 11, 392-401.

  • The title might be alarming here… they showed no effect of the 2-stage exam on final exam performance (compared with material that had been previously tested only with individual tests). I’m ok with this. Not every study is going to find the same effect (particularly ones with some execution oddities like this one), yet this is still a “no-change” effect with no evidence that student learning decreases. Moreover, students still enjoyed the process and found it less stressful than the individual-only tests. No harm done, potential benefits.

Rieger, G. W., & Heiner, C. E. (2014). Examinations that support collaborative learning: The students’ perspective. Journal of College Science Teaching, 43, 41-47.

Roediger, H. L., III, & Marsh, E. J. (2005). The positive and negative consequences of multiple-choice testing. Journal of Experimental Psychology: Learning, Memory, & Cognition, 31, 1155-1159.

  • Two-stage tests might help to fight the negative consequences of MC tests: you remember what you answered (and thought was right), not what actually was right.

Sandstrom, G. M., & Rawn, C. D. (in press/2014). Embrace chattering students: They may be building community and interest in your class. Teaching of Psychology.

Zipp, J. F. (2007). Learning by exams: The impact of two-stage cooperative tests. Teaching Sociology, 35, 62-76. doi: 10.1177/0092055X0703500105

Student Evaluations of Teaching 2012/2013: Part 2 Research Methods

Thank you to each of my students who took the time to complete a student evaluation of teaching this year. I value hearing from each of you, and every year your feedback helps me to become a better teacher. As I explained here, I’m writing reflections on the qualitative and quantitative feedback I received from each of my courses.

Psyc 217: Research Methods

I went into this year especially excited about research methods, as I got to use a textbook with my own name on it! Wow, what a thrill! Perhaps reflecting this extra-potent boost of enthusiasm, my quantitative results were overwhelmingly positive this year. Interestingly, I seem to have connected especially effectively with my 10am section. Ratings from my 9am section were positive too (see the mint green bars on the graph linked above)… and on par with years past. But scores from my 10am section were the highest I’ve ever received (see the light purple bars on the graph)! Because I taught the two sections pretty much the same way, I’m not sure what can account for the difference. Suffice it to say, in my mind it was an especially awesome year… and many of my students seem to have felt that way too.

When I teach research methods, it’s often at 9 and 10 in the morning, and I do my very best every day to bring the energy. For many people, this material isn’t exactly inherently exciting. As one student wrote, “Based on what I’ve heard from friends and acquaintances at UBC, research methods is one of the most disliked courses offered at the university due to its sheer boringness.” Thankfully, this student continued, “that said, this instructor did a phenomenal job of teaching the course in a way that students found the material relevant and exciting (to the extent that this material can be exciting).” Such an assessment is the most common comment coming from my student evaluations in this course: Students expect this material to be dull, but I bring it alive. That’s exactly what I strive to do every single day. I’m satisfied that my well-caffeinated efforts are effective for my students.

A few other topics were noted by small subsets of students. Two drew ambivalent assessments: groupwork and in-class activities. People seem to have a love-hate relationship with groupwork. Only a handful of people mentioned it at all, leading me to suspect that most people feel neutral about it (perhaps recognizing its inherent challenges and strengths). The people who noted liking the team project still found it a lot of work, but recognized the value in it. The people who didn’t work as well with their teammates reported viewing it as a frustrating waste of time. Each year I hear this dichotomous assessment.

One thing I tried last year in response to one particularly struggling group was a mediation meeting, during which I acted as mediator. It seemed to work well in getting that group through effectively to the end of the course. To broaden this service and reach the struggling groups I don’t hear about, I am creating a form-based mediation request process for this year. That may sound like a cold approach, but I’ve given it much thought. After years of imploring people to come to me face-to-face to help solve their group challenges, I note that very few groups—or individuals struggling within groups—ever come to me. By formalizing this process, I hope to remove some of the emotion around “tattling” and treat it as just another issue that needs to be dealt with, like a grade change request. Hopefully this new process will reach those extra few groups who are struggling on their own, so they can move forward and perform well in group tasks.

In-class activities make material memorable, illustrate difficult concepts, up the energy and attention levels, and make learning fun. Every year, dozens of students report appreciating them. However, there is a small minority of students who don’t appreciate the time spent on these active learning adventures (yes, that’s the subtitle of my blog… see where I’m headed). I’m committed to student learning, and one of the hallmarks of my teaching philosophy is to get out of the way. And data supports my commitment to using active learning techniques (Armbruster, Patel, Johnson, & Weiss, 2009; Deslauriers, Schelew, & Wieman, 2011; Hake, 1998; Prince, 2004; Ruhl, Hughes, & Schloss, 1987; Yoder & Hochevar, 2005). I encourage people who are considering taking research methods or statistics with me (or any of my courses, really), to be ready to engage actively during class. If you’re not up for having fun while learning, my section might not be for you.

The last topic I’ll touch on is the textbooks. A few students noted how much they found my textbook worthwhile (yay!), with one student going so far as to say “I loved her book she wrote, very clear, informative, concise, probably the best textbook I ever used and read due to how clear it is to understand, with all the learning objectives.” I can’t take full credit for that readability (thanks to Cozby for laying such a strong foundation in his 10 prior editions!), yet I’m glad this text is being perceived as helpful. Unfortunately, the Stanovich text once again was voted unhelpful. Its messages are useful, but even I find many examples dated and the chapters too lengthy for the points they make. Two years ago I wrote learning objectives and emphasized a “get in, find what you need to know, and get out” approach in an attempt to make Stanovich’s text more accessible. Since then, there have been fewer complaints about it, but a small, consistent group remain. I’ve been back-and-forth on this text for quite a while now, and I’m strongly considering replacing it with a few key peer-reviewed articles/commentaries. I have some deep thinking to do in the coming weeks!

Many thanks to all my 2012/2013 Psyc 217 students who completed this evaluation. The response rate this year was 67% across both sections, which is my highest rate ever. And thanks to everyone for a really fun year of learning about research methods!

References

Armbruster, P., Patel, M., Johnson, E., & Weiss, M. (2009). Active learning and student-centred pedagogy improve student attitudes and performance in introductory biology. CBE—Life Sciences Education, 8, 203-213.

Deslauriers, L., Schelew, E., & Wieman, C. (2011). Improved learning in a large-enrollment physics class. Science, 332, 862-864.

Hake, R. (1998). Interactive-engagement vs. traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66, 64-74.

Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93, 223-231.

Ruhl, K., Hughes, C., & Schloss, P. (1987). Using the pause procedure to enhance lecture recall. Teacher Education and Special Education, 10, 14-18.

Yoder, J. D., & Hochevar, C. M. (2005). Encouraging active learning can improve students’ performance on examinations. Teaching of Psychology, 32, 91-95.

Pearls of Wisdom

Today Sunaina and I had the pleasure of lunching with Russ Day, Senior Lecturer and head of the Intro Psych program at SFU. Of the many insightful ideas he shared with us, a few stand out for me in particular. Most potently, he built on the idea of 20-60-20: 20% of students will learn in spite of you, 20% may not be sufficiently motivated to learn from you at all, but that middle 60% is where our biggest impact can be as instructors. So if I pitch my course at the 80th percentile of students, the top 20% won’t be too bored, the bottom 20% will be disengaged, but I have the potential to truly engage and challenge 60% of my students. This is interesting on its own, but he pushed it further into what this would mean for student evaluations. The students in, say, the 21st-25th percentiles will be pushed too far if I’m pitching for the 80th percentile. A psychologically healthy response to failure is an external attribution: i.e., to blame me. So if I’m not getting about 2-3% of students feeling frustrated by my course, I may be pitching my course at too easy a level. Wow!!! That is powerful! (I’m reminded here of something else we discussed: Chickering & Gamson’s 7 Principles, one of which is “communicate high expectations.”)

So often I (and others) ruminate about those few extremely critical comments in the student evaluations, and have to find ways to cope with them… but Russ offered such a thoughtful and realistic perspective on those comments! Instead, I should be ruminating on the positive comments, trying to figure out exactly what I did to connect with that student so I can do more of it.

The second idea that really stands out for me was our discussion about being a scholar. As a scholar, there is no choice but to keep up with the literature. For me, that means content, but also as a teaching-focused scholar, the education literature. This is a challenge to me, one that’s been in the back of my mind for a while now. One thing I do to help with this is that I attend the Carl Wieman Science Education Initiative (CWSEI) reading group weekly during the summer months. This is one step in the right direction. Where can I build more literature into my life?

On Teaching Psychology and Physics

What?  I’m a member of the weekly reading group at the Carl Wieman Science Education Initiative. Today we discussed an article that empirically demonstrates performance gains (measured by scores on a standardized test) as a result of “Interactive Engagement” teaching methods, when compared with traditional lecture-based instruction (Hake, 1998). A course was coded as using “Interactive Engagement” if the instructor used teaching methods aimed at promoting a conceptual understanding of the material via interactive activities accompanied by peer and instructor feedback through discussion. The sample was huge and diverse, involving 6542 students from 62 courses in a variety of high school, community college, and university settings. I learned today that this paper is a citation classic in physics, and one of the key drivers of physics teaching reform.
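
For context on how Hake quantified those gains: the paper’s headline statistic is the average normalized gain, the fraction of the available pre-to-post improvement that a class actually achieved. A minimal sketch follows; the formula is Hake’s, but the function name and the pre/post percentages are made-up illustrations, not figures from the paper.

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain <g>: the improvement achieved,
    (post - pre), as a fraction of the improvement available, (100 - pre)."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Illustrative class averages (percent correct on a standardized test):
print(normalized_gain(40, 55))  # 15 of 60 available points -> 0.25
print(normalized_gain(40, 70))  # 30 of 60 available points -> 0.5
```

Normalizing by the available improvement is what lets Hake compare courses whose students started at very different pre-test levels.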

So What?  The data make a compelling case for incorporating interactive techniques in the classroom by linking them to learning gains. I already use many interactive techniques in my courses, largely because of more tangential research (and because I have more fun than when I lecture — and shouldn’t learning be fun?). Research in cognitive psychology shows that deeper processing results in greater comprehension and memory; deeper processing can be enhanced by interactive techniques.

More broadly though, as I learn more about physics education, I’m surprised to see a striking connection to problems we often face in psychology education. In both disciplines, it seems, people come to intro courses with a vast amount of experience interacting with our subject matter: physical objects and people, respectively. One of the aims of intro courses in both disciplines is to disabuse people of some prior assumptions about how the (physical or psychological) world works, and replace them with discipline-specific understanding and ways of knowing. People have some intuitions about the world that need to be adjusted — and sometimes rejected entirely — in order to understand the discipline. I’m reminded here of a message from Ken Bain’s book: set up an experience in which existing paradigms don’t work, then help students rebuild them.

Now What?  Knowing about this article makes me want to find more papers that test the hypothesis that interactive activities result in better learning — and to figure out how to measure that in my courses. I also plan to think very carefully about what kinds of activities are most useful for facilitating comprehension in my contexts (e.g., 500 students vs. 20 per course). 

Since realizing the parallel between physics and psychology instruction, I’m interested in learning more about physics pedagogical research and figuring out in what ways we are conceptually similar in our teaching-related challenges (and hence what I can pull from their literature). I’m also interested in figuring out what ways we (need to) differ as disciplines when teaching the next generation of scientists and informed citizens.