On finding my “big idea” in undergrad

I was asked to speak at tonight’s Jumpstart ProfTalks event on the topic of “finding your big ‘idea’–how you want to change the world–and developing the toolbox you need to translate that passion into practice”. Here’s what I came up with. What was/is your big idea?

On finding my “big idea” in undergrad…

I feel like if I had expected to find a “big idea” (how I wanted to change the world) in undergrad, I wouldn’t have found it. In fact I didn’t. It sounds terrifying! What I knew I wanted was to change my world. Nobody asked me to change the world. Nobody expected me to. People who come from where I come from don’t change the world. A high school teacher told me—told my class—this much. That remains one of my life’s worst memories. Janitors come from my neighbourhood. Not CEOs. Thankfully, he wasn’t my only high school teacher. I had others who said things like of course you’re going to university! and then helped me get there.

So my “big idea” was rather small by comparison. I wanted to learn. To do my very best. To be afraid and lonely and make my way anyway. To succeed in the face of odds stacked against me. I had no idea what I was supposed to be doing, but I knew how to get good grades by working really hard. In hindsight, I consider my “big idea” was learning to think for myself. I changed my world by questioning it. I stumbled into psychology because my family situation was rather unconventional and I’d already seen a counsellor, so I thought hey psychology might be a helpful thing to take. What I found there changed me. I wouldn’t have said this at the time—I don’t think I was that aware of it, I just liked psychology—but psychology gave me a method, a way to ask the questions I wanted to ask about people and relationships and identity, and it was a way to get answers. When my TA said she was looking for research assistants, I stepped up, which led to invaluable experiences learning from faculty and graduate students.

The coursework in undergrad that was most annoying and frustrating and challenging is what prepared me best for my little big idea, and what I still find useful today. Example 1: Dr. Burris asked me to collaborate on writing a paper. WHAT? I wasn’t 100% in control of my grade! That was really frustrating!! But guess what, all the papers I’ve ever written in my position have been collaborative. Just today a group of us finalized Terms of Reference for the Instructor Network, a committee I helped develop… it was collaborative writing. Most of the things I write professionally are collaborative. Example 2: Another thing I had to do that I hated at the time: statistics. And even worse, I had to take intro to university math before I was allowed to take stats. Nightmare. But if I hadn’t taken stats, and kept working hard at it, I wouldn’t have been selected to TA stats in graduate school… and it was while TAing stats in graduate school that I realized I love teaching. Every time I got to plan a lesson, I wanted to do that first before any other work. I wanted to perfect it. It was (and still is) an immensely creative endeavour for me. And then… I get to test it out to find out if it works to help people learn… aha! There was my big idea. But not until years beyond undergrad. My original big idea was to change my world. And so I did. I’m still trying to figure out how to change the world.

SMTs: Student Management Teams

This post is the latest in my annual series where I publicly commit to share the evidence-based change I’m making to my teaching practice this year. (See last year’s post, where I shared rationale and resources on two-stage exams, which were awesome.)

What’s the idea?

A Student Management Team (SMT) is a small group of students (usually 3-5) that meets regularly throughout a course, and whose primary objective is to facilitate communication between the course instructor and the class. I first learned about SMTs at the Teaching Preconference at SPSP this past February, from a talk given by Jordan Troisi. He uses them to gather feedback on what’s working well and what isn’t, to gather ideas about potential changes, and sometimes to explain in detail why something can’t be changed. The SMT also creates, administers, and analyzes mid-course feedback, which springboards dialogue with the instructor. Overall, the SMT acts as a communication bridge between the rest of the class and the instructor.

Why am I interested?

Every major change I’ve made to my courses in the past 3-4 years has (a) increased the amount of peer-to-peer learning/interaction, and (b) implemented an evidence-based practice (see the impact of these on my teaching philosophy revision, here and here). Of all my courses, I think my introductory psychology courses (101 and 102) need the most attention. I see SMTs as an opportunity to work with motivated students to help me identify what changes are most needed and how we can implement them on a large scale.

Although most students rate these courses positively overall, I know that some of my 370 students (each term) feel overwhelmed, lost, stressed, and alone. To help somewhat, I have held a weekly Invitational Office Hour outside the classroom on Friday afternoons, and I have happily met many of my students face-to-face during that period. Some longstanding friendships among student attendees have even developed at those office hours! But I continue to struggle with the sheer size of my classes. How can I connect more students with each other more intentionally? How do I integrate more meaningful peer-to-peer interaction to help students learn while building community? I’m interested in hearing feedback from the SMT on these and other issues. I am also looking forward to building ongoing working relationships with a small cohort of students. My position is such that I don’t get many opportunities to mentor students (unlike, say, if I was running a research lab), so I’m excited by the idea of working closely with a few students to help me communicate with the many.

What evidence is there to support it?

Not as much as I’d ideally like to see, but as Jordan notes in his papers, it’s relatively new. I see no downsides to trying it at this point, and Jordan’s data suggest benefits not just to SMT members (Troisi, 2014), but also to the whole class (Troisi, 2015). I’m interested in adding a measure of relatedness, and seeing if his findings for autonomy hold with a class almost 15x larger (with, perhaps, 15x the need?).

Troisi, J. D. (2015). Student Management Teams increase college students’ feelings of autonomy in the classroom. College Teaching, 63, 83-89.

  • Shows that students enrolled in a course that had an SMT increased their sense of autonomy by the end of the course, while students in the same course (same instructor, same semester) without an SMT showed no change in their autonomy. In other words, students feel more in control of their outcomes if they have an SMT as a conduit (not just if they’re actually in the SMT). This paper uses the lens of Self-Determination Theory (a major theory of motivation), and provides a nice introduction for non-psychologists who might be interested in using it to inform their teaching practice. (In a nutshell: motivation is highest for tasks that meet competence, autonomy, and relatedness needs.)

Troisi, J. D. (2014). Making the grade and staying engaged: The influence of Student Management Teams on student classroom outcomes. Teaching of Psychology, 41, 99-103.

  • Shows benefits for the SMT members themselves. They perform better in the course than non-members (after controlling for incoming GPA), which seems partly due to increased engagement over the duration of the course.

Handelsman, M. M. (2012). Course evaluation for fun and profit: Student management teams. In J. Holmes, S. C. Baker, & J. R. Stowell (Eds.), Essays from e-xcellence in teaching, 11, 8–11. Retrieved from the Society for the Teaching of Psychology website: http://teachpsych.org/ebooks/eit2011/index.php

  • Anecdotal discussion of benefits, with description of how he has implemented it.
  • Free e-book!

Are you thinking of trying out SMTs? Let’s talk! Email me at cdrawn@psych.ubc.ca

2014/2015 Student Evaluations Response Part 2: Psyc 101

Thank you to each of my students who took the time to complete a student evaluation of teaching this year. I value hearing from each of you, and every year your feedback helps me to become a better teacher. As I explained here, I’m writing reflections on the qualitative and quantitative feedback I received from each of my courses.


After teaching intro psych as a 6-credit full-year course for three years, in 2013/2014 I was required to transform it into 101 and 102. Broadly speaking, the Term 1/Term 2 division from the 6-credit course stays the same, but there are some key changes because students can take these courses in either order, from any professor (who uses any textbook). These two courses still form one unit in my mind, so I structure them extremely similarly, and I have summarized the quantitative student evaluations in tandem. Students rate my teaching in these courses very similarly. However, I will discuss them separately this year because of some process-based changes I made in 102 relative to 101.


To understand the qualitative comments in Psyc 101, I sorted all 123 of them into broad categories of positive, negative, and suggestions, and three major themes emerged: enthusiasm/energy/engagement, tests/exams, and writing assignments. Many comments covered more than one theme, but I tried to pick out the major topic of the comment while sorting. Twenty-one comments (18%) focused on the positive energy and enthusiasm I brought to the classroom – enthusiasm for students, teaching, and the discipline (one other comment didn’t find this aspect of my teaching helpful). All nine comments about the examples I gave indicated they were helpful for learning.
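For anyone curious about the mechanics of this kind of sorting, the tallying step can be sketched in a few lines. This is a minimal sketch with hypothetical, hand-assigned theme labels (the actual coding was done by reading each comment), not the tool I used:

```python
from collections import Counter

# One hand-assigned theme label per comment (hypothetical data,
# constructed to total 123 comments).
themes = (["enthusiasm"] * 21 + ["tests"] * 34 +
          ["writing assignments"] * 25 + ["other"] * 43)

counts = Counter(themes)
total = len(themes)  # 123 comments in all
percentages = {theme: round(100 * n / total) for theme, n in counts.items()}
# e.g., "tests": 34 of 123 comments, which rounds to 28%
```

Counting by theme like this makes it easy to re-check the percentages reported below.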

Thirty-four comments (28%) focused largely on tests. Last year I implemented two-stage tests in most of my classes, and I expected to see this format discussed in these evaluations. All but one of the five comments about the two-stage tests were positive. Other positive comments mentioned that tests were fair and that having three midterms kept people on track. Yet, the major theme was difficulty. Of the 17 negative comments that focused largely on tests (14% of total comments), the two biggest issues were difficulty and short length (which meant missing a few questions had a larger impact on grades). The biggest theme of the suggestions (5 comments) was a request for practice questions for the exams. A study guide that accompanies the text is available to students. I wonder how many students make use of that resource? Do they know it exists? Would it help them get a better sense of the exam difficulty? This is a major question I want to ask my upcoming Student Management Team (SMT) this year. Still, it’s important to keep in mind that these comments represented a minority of the 123 that were given.

Twenty-five comments (20%) focused largely on the low-stakes writing-to-learn assessments. Five times throughout the term, students write a couple of paragraphs explaining a concept and then applying it to understand something in their lives. Each assignment is worth 2%. When I implemented this method two years ago, I also added a peer assessment component: 1% for completion, and 1% from the average of their peers’ ratings of their work. In year one I used Pearson’s peerScholar application, and in year two (2014/2015) I switched to the “Self and Peer Evaluation” feature in Connect (UBC’s name for Blackboard LMS)… which was disastrous on the data side (e.g., recording a zero instead of missing data… and then counting that zero when auto-calculating the student’s average peer rating!). As I expected, most of the comments about the assignments were about peer review failures: missing comments, missing data, distrust that peers could rate each other competently, frustration at receiving only 3 ratings that could sometimes differ wildly, difficulty getting a good mark, and suspicion that peers weren’t taking it seriously. For 2015/2016, I have made three major changes to help improve this aspect of the course:

  1. Return to the peerScholar platform instead of Connect, which should fix part of the missing data problem (and it can now integrate with my class list better than it did before),
  2. Review 4 or maybe 5 peers rather than 3,
  3. Implement a training workshop! With the help of a TLEF, Peter Graf and I have been developing an online training module for our courses to help students learn to use our respective peer assessment rubrics and test out the rubric on some sample essays. Our main hypothesis is that students will be more effective peer reviewers—and will feel like peers are more effective reviewers—as a result of going through this process. More to come!
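The missing-data failure that motivated change #1 (zeros standing in for absent ratings, then dragging down auto-computed averages) is easy to illustrate. Here is a minimal sketch, assuming ratings on a 10-point scale with `None` marking a peer who never submitted:

```python
# Peer ratings on a 10-point scale; None marks a reviewer who never submitted.
ratings = [8, 9, None]

# Buggy behaviour: the missing rating is stored as 0 and averaged in anyway.
as_zeros = [r if r is not None else 0 for r in ratings]
buggy_mean = sum(as_zeros) / len(as_zeros)        # (8 + 9 + 0) / 3 ≈ 5.67

# Correct behaviour: drop missing ratings before averaging.
present = [r for r in ratings if r is not None]
correct_mean = sum(present) / len(present)        # (8 + 9) / 2 = 8.5
```

With only 3 raters per student, a single phantom zero can drop an average by several points, which is why the missing-data bug mattered so much.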

As can be seen in the graph above, quantitative ratings of this course haven’t changed too much over the past few years. The written comments are useful for highlighting some areas for further development. I look forward to recruiting a Student Management Team this year to make this course an even more effective learning experience for students!

2014/2015 Student Evaluations Response Part 1: Psyc 217

Thank you to each of my students who took the time to complete a student evaluation of teaching this year. I value hearing from each of you, and every year your feedback helps me to become a better teacher. As I explained here, I’m writing reflections on the qualitative and quantitative feedback I received from each of my courses.


Research Methods is the course I have taught more than any other: since 2008 I have taught 1022 students across 14 sections. I am also the course coordinator, which means I help new research methods instructors prepare for their sections including ordering textbooks, I organize the labs including TFs and room bookings, and I facilitate the poster session. And I write the Canadian edition of the text we use (2nd edition forthcoming February 2016). I spend a lot of time thinking about this course!

Over the years I have incorporated many suggestions initiated by students in student evaluations, and have made changes to the course design based on student evaluation feedback. In 2014/2015, I implemented one major change to the course: I converted my three tests and the final exam to a two-stage format. Students write the test first on their own, then break into groups to write it again (see my blog post about this method, with resources). The largest challenge I knew I faced going in was timing, particularly for the tests: Our class periods are just 50 minutes long, and my tests were already notoriously lengthy. It was with this lens that I approached reading my student evaluations for this past year. Do I keep the two-stage tests?

I examined the quantitative data first. As is clear from the graph, there were no major differences relative to previous years. Notably, the “fairness of the instructor’s assessments of learning” item was rated higher than usual, though that difference was small. No indication of a disaster. Yay!


Next, I evaluated the qualitative data. As I sorted into positive and negative columns, two topic themes seemed to be emerging: tests and enthusiasm. As in past years, students appreciated the energy and enthusiasm I bring to the classroom (especially at 9am and especially with this topic). Out of 128 comments, 29 of them (23%) specifically mentioned energy or enthusiasm (with just a couple of those recommending I tone it down a bit).

Coincidentally, the same proportion of comments (29, 23%) mentioned the tests in some way. Six comments endorsed the three-test (rather than two-test) format, indicating it helped them keep on top of studying, although three comments mentioned that tests 2 and 3 were too close together, and another three indicated they would have preferred two tests. Seven comments about the two-stage format were positive, indicating that it provides opportunities to work together, make friends, and receive immediate feedback. The two negative comments that specifically mentioned the two-stage format did not disagree with it per se, but felt that it exacerbated a different problem: feeling rushed. Seven comments specifically mentioned feeling rushed during exams. Two others indicated that the fix implemented for test #2 worked well to address the timing issue. Still, it seems that timing during tests was the clearest challenge in my course. Despite my best efforts to shrink the tests, a small group of students reports they remain too long for the required tasks. I’ll consider strategies for preparing students for this pace.

Overall, the two-stage tests seemed to work well for most students, and grades were still within the acceptable range for our department. I enjoyed giving exams much more than I used to, and I was able to relax and hear the conversations students were having as they debated correct answers. Anecdotally, I was able to witness deeper learning and (mostly) positive group dynamics during the second stage (luckily I have other peoples’ data to offer as evidence that it works to promote learning!). Two-stage exams: you’re staying!


Hey! I’m presenting this talk Saturday at the Vancouver International Conference on the Teaching of Psychology (http://www.kpu.ca/victop)… hope to see you at 1:15!

Disciplinary Reform Talk




This work by Catherine Rawn is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Canada.