
2015/2016 Student Evaluations Response Part 1: Intro Psych

Thank you to each of my students who took the time to complete a student evaluation of teaching. I value hearing from each of you, and every year your feedback helps me to become a better teacher. Each year, I write reflections on the qualitative and quantitative feedback I received from each of my courses, and post them on this blog. I have graphed the quantitative student evaluations here. Note that I was on sabbatical for the 2016/2017 academic year, so I’m writing these posts in response to 2015/2016 student feedback in preparation for Fall 2017.

A recap on this course: After teaching students intro psych as a 6-credit full-year course for three years, in 2013/2014 I was required to transform it into 101 and 102. Broadly speaking, the Term1/Term2 division from the 6-credit course stays the same, but there are some key changes because students can take these courses in either order from any professor (who uses any textbook). These two courses really still form one unit in my mind so I structure the courses extremely similarly.

PSYC 101 and PSYC 102, the two halves of Introductory Psychology, are pretty consistent in how I teach them and in the reception they get from students. In both cases, I think the things I need to keep strong are enthusiasm and care for students as humans, as well as a variety of activities during class: engaging students with each other, but also lectures, videos, and demos. My area for growth is assessments (i.e., the least fun part of teaching, but an essential part of learning!). Two-stage exams are here to stay – they make test day fun, offer students feedback, and help them learn. After all these years I still haven’t quite managed to find the right balance between textbook-only and class-only material (and should I unassign portions of the text that won’t be tested?). And students have long been calling for representative practice questions. Fingers crossed that MyPsychLab can help with that. [Follow-up: As I suspected, questions in MyPsychLab are not challenging enough. Erg.] Regarding the written assignments with peer assessments, I got a mixed bag of feedback that really only tells me something’s not quite right for some students. I wonder if I need to devote more class time to the exercise (e.g., showing examples of papers and feedback, including the grade range to expect from peers)? Not sure. Time to consult the experts! If you’re interested in a distilled version of student comments, summarized in a table that compares 101 and 102 and interjects some of my thoughts and recommendations for students, here you go…

2014/2015 Student Evaluations Response Part 3: Psyc 102

Thank you to each of my students who took the time to complete a student evaluation of teaching this year. I value hearing from each of you, and every year your feedback helps me to become a better teacher. Each year, I write reflections on the qualitative and quantitative feedback I received from each of my courses, and post them here.

After teaching students intro psych as a 6-credit full-year course for three years, in 2013/2014 I was required to transform it into 101 and 102. Broadly speaking, the Term1/Term2 division from the 6-credit course stays the same, but there are some key changes because students can take these courses in either order from any professor (who uses any textbook). These two courses really still form one unit in my mind so I structure the courses extremely similarly. I have summarized the quantitative student evaluations in tandem. As can be seen in the graph, quantitative ratings of this course haven’t changed too much over the past few years, and students rate my teaching in these courses very similarly. However, I will discuss them separately this year because of some process-based changes I made in 102 relative to 101.

[Figure: Intro Psych historic UMI ratings, last five years]

My response to Psyc 101 included a formal coding of comments into various categories. Oh, to have the open time of summer! I’m a bit more pressed for time now as I work on my Psyc 102 preparations, so as I read the comments I picked out themes a bit less formally. Two major themes emerged (which map roughly onto those identified using a more formal strategy for Psyc 101): class management and tests. Interestingly, I changed the weighting of the Writing to Learn assignments from Psyc 101 (Term 1 in 2014/15) to Psyc 102 (Term 2 in 2014/15), dropping the reliance on peer reviews and grading them for completion only. The number of comments about that aspect of the course dropped close to zero, despite the actual tasks of the assignment staying the same (see my response to Psyc 101, linked above, for discussion of why I was compelled to make changes in 102 last year).

Again, a major theme in the comments was that tests are challenging. I don’t think they’re any more challenging than in my 101 course, but maybe there’s a perception that they will be easier because the content seems more relatable, and so people are more surprised by the difficulty in 102. Not sure. Just like in my 101, they draw from class content, overlapping content, and some textbook-only content, and they prioritize material that follows from the learning objectives (which I post in advance to help you know what material will be explored in class the next day). MyPsychLab is a source of practice questions, as are your peers and the learning objectives.

In addition to content, time is tight on the tests. Before implementing the Stage 2 group part, my students didn’t have 25 questions in 50 minutes… they had 50 questions in 50 minutes. Now, we have 25 questions in about 28-30 minutes, which is actually more relaxed than before. Although many people report finding value in the group part of the test, it’s not universally loved. A few people mentioned that it’s not worth it because it doesn’t improve grades by very much. My goal here is to promote learning. I’m stuck with the grading requirements: we have to have a class average between 63 and 67%. That’s out of my control. The group tests add an average of about 2% to your test grade, which you may or may not value. But importantly for me, they improve learning (Gilley & Clarkston).

The second most frequent comment topic related to various aspects of classroom dynamics. I thought I’d take this opportunity to elaborate on some choices I make in class.

I do my best to bring high energy to every class. Many people report being fueled by that enthusiasm—that’s been my most frequent comment for many years across many courses. However, a few people don’t love it and feel it’s a bit juvenile or just too much. I bring this up here as a heads-up: Although I’d love to have you join us, if you’re not keen on the way I use my voice to help engage people, you might enjoy a different section of 102 more.

In class, occasionally I comment when a student is doing non-course related things on a device, and invite them to join us. A couple of people mentioned this in evaluations from last year. My intention here is to promote learning (i.e., to do my job). Research shows that when people switch among screens on their laptops, they’re not just decreasing their own comprehension, but the comprehension of all the people within view of the screen (Sana, Weston, & Cepeda). I occasionally monitor and comment on this activity (e.g., during films) so that I can create a class climate where anyone who wants to succeed can do so.

Sometimes I wait for the class to settle, and sometimes I start talking to the people in the front (which the people at the back of the room might not be able to hear). I get impatient sometimes too, particularly toward the end of the year (I’m only human after all!). I don’t like to start class until the noise level settles, out of fairness to people who are sitting at the back but still want to be involved, and, to be honest, when people keep talking instead of listening it makes me feel disrespected. One change I made in my 101 class this past term might help us with starting class in 102: I moved the announcements from verbal ones at the start of class to a weekly email I send out on Friday afternoons. This change seems to have improved people’s recognition that when I’m ready to start class, we start with content right away, so settling happens more quickly. Hopefully this will help us out in 102.

As always, many thanks for your feedback. It challenges me to think about ways I can improve in my teaching, and to reconsider decisions I have made in the past. Sometimes I make changes, and sometimes I reaffirm the decisions I made before. This space gives me a chance to explain changes or re-articulate why I continue to endorse my past decisions. Student feedback is an essential ingredient to my decision-making process. Thank you!

SMTs: Student Management Teams

This post is the latest in my annual series where I publicly commit to sharing the evidence-based change I’m making to my teaching practice this year. (See last year’s post, where I shared rationale and resources on two-stage exams, which were awesome.)

What’s the idea?

A Student Management Team (SMT) is a small group of students (usually 3-5) that meets regularly throughout a course, and whose primary objective is to facilitate communication between the course instructor and the class. I first learned about SMTs at the Teaching Preconference at SPSP this past February, from a talk given by Jordan Troisi. He uses them to gather feedback on what’s working well and what isn’t, to gather ideas about potential changes, and sometimes to explain in detail why something can’t be changed. The SMT also creates, administers, and analyzes mid-course feedback, which springboards dialogue with the instructor. Overall, the SMT acts as a communication bridge between the rest of the class and the instructor.

Why am I interested?

Every major change I’ve made to my courses in the past 3-4 years has (a) increased the amount of peer-to-peer learning/interaction, and (b) implemented an evidence-based practice (see the impact of these on my teaching philosophy revision, here and here). Of all my courses, I think my introductory psychology courses (101 and 102) need the most attention. I see SMTs as an opportunity to work with motivated students to help me identify what changes are most needed and how we can implement them on a large scale.

Although most students rate these courses positively overall, I know that some of my 370 students (each term) feel overwhelmed, lost, stressed, and alone. To help somewhat, I have held a weekly Invitational Office Hour outside the classroom on Friday afternoons, and I have happily met many of my students face-to-face during that period. Some longstanding friendships among student attendees have even developed at those office hours! But I continue to struggle with the sheer size of my classes. How can I connect more students with each other more intentionally? How do I integrate more meaningful peer-to-peer interaction to help students learn while building community? I’m interested in hearing feedback from the SMT on these and other issues. I am also looking forward to building ongoing working relationships with a small cohort of students. My position is such that I don’t get many opportunities to mentor students (unlike, say, if I were running a research lab), so I’m excited by the idea of working closely with a few students to help me communicate with the many.

What evidence is there to support it?

Not as much as I’d ideally like to see, but as Jordan notes in his papers, it’s relatively new. I see no downsides to trying it at this point, and Jordan’s data suggest benefits not just to SMT members (Troisi, 2014), but also to the whole class (Troisi, 2015). I’m interested in adding a measure of relatedness, and seeing if his findings for autonomy hold with a class almost 15x larger (with, perhaps, 15x the need?).

Troisi, J. D. (2015). Student Management Teams increase college students’ feelings of autonomy in the classroom. College Teaching, 63, 83-89.

  • Shows that students enrolled in a course that had an SMT increased their sense of autonomy by the end of the course, but students in the same course (same instructor, same semester) without an SMT showed no change in their autonomy. In other words, students feel more in control of their outcomes if they have an SMT as a conduit (not just if they’re actually in the SMT). This paper uses the lens of Self Determination Theory (a major theory of motivation), and provides a nice introduction for non-psychologists who might be interested in using it to inform their teaching practice. (In a nutshell: highest motivation for tasks that meet competence, autonomy, and relatedness needs.)

Troisi, J. D. (2014). Making the grade and staying engaged: The influence of Student Management Teams on student classroom outcomes. Teaching of Psychology, 41, 99-103.

  • Shows benefits for the SMT members themselves. They perform better in the course than non-members (after controlling for incoming GPA), which seems partly due to increased engagement over the duration of the course.

Handelsman, M. M. (2012). Course evaluation for fun and profit: Student management teams. In J. Holmes, S. C. Baker, & J. R. Stowell (Eds.), Essays from e-xcellence in teaching, 11, 8–11. Retrieved from the Society for the Teaching of Psychology website: http://teachpsych.org/ebooks/eit2011/index.php

  • Anecdotal discussion of benefits, with description of how he has implemented it.
  • Free e-book!

Are you thinking of trying out SMTs? Let’s talk! Email me at cdrawn@psych.ubc.ca

2014/2015 Student Evaluations Response Part 2: Psyc 101

Thank you to each of my students who took the time to complete a student evaluation of teaching this year. I value hearing from each of you, and every year your feedback helps me to become a better teacher. As I explained here, I’m writing reflections on the qualitative and quantitative feedback I received from each of my courses.


After teaching students intro psych as a 6-credit full-year course for three years, in 2013/2014 I was required to transform it into 101 and 102. Broadly speaking, the Term1/Term2 division from the 6-credit course stays the same, but there are some key changes because students can take these courses in either order from any professor (who uses any textbook). These two courses really still form one unit in my mind so I structure the courses extremely similarly. I have summarized the quantitative student evaluations in tandem. Students rate my teaching in these courses very similarly. However, I will discuss them separately this year because of some process-based changes I made in 102 relative to 101.

[Figure: Intro Psych historic UMI ratings, last five years]

To understand the qualitative comments in Psyc 101, I sorted all 123 of them into broad categories of positive, negative, and suggestions, and three major themes emerged: enthusiasm/energy/engagement, tests/exams, and writing assignments. Many comments covered more than one theme, but I tried to pick out the major topic of the comment while sorting. Twenty-one comments (18%) focused on the positive energy and enthusiasm I brought to the classroom – enthusiasm for students, teaching, and the discipline (one other comment didn’t find this aspect of my teaching helpful). All nine comments about the examples I gave indicated they were helpful for learning.

Thirty-four comments (28%) focused largely on tests. Last year I implemented two-stage tests in most of my classes, and I expected to see this format discussed in these evaluations. All but one of the five comments about the two-stage tests were positive. Other positive comments mentioned that tests were fair and that having three midterms kept people on track. Yet the major theme was difficulty. Of the 17 negative comments that focused largely on tests (representing 14% of total comments), the two biggest issues were difficulty and short length (which meant missing a few questions had a larger impact). The most common suggestion (5 comments) was to provide practice questions for the exams. There is a study guide that accompanies the text. I wonder how many students make use of that resource? Do they know it exists? Would it help students get a better sense of the exam difficulty? This is a major question I want to ask my upcoming Student Management Team (SMT) this year. Still, it’s important to keep in mind that these comments represented a minority in the context of all 123 that were given.

Twenty-five comments (20%) focused largely on the low-stakes writing-to-learn assessments. Five times throughout the term, students write a couple of paragraphs explaining a concept and then applying it to understand something in their lives. Each assignment is worth 2%. When I implemented this method two years ago I also added a peer assessment component, such that 1% comes from completion and 1% comes from the average of their peers’ ratings of their work. In year one I used Pearson’s peerScholar application, and in year two (2014/2015) I switched to the “Self and Peer Evaluation” feature in Connect (UBC’s name for the Blackboard LMS)… which was disastrous from the data side of things (e.g., recording a zero instead of missing data… and then counting that zero when auto-calculating the student’s average peer rating! See the sketch after the list below.). As I expected, most of the comments about the assignments were about peer review failures: missing comments, missing data, distrust of peers’ ability to rate each other, frustration at receiving only 3 ratings that could sometimes be wildly different, difficulty getting a good mark, and suspicion that peers weren’t taking it seriously. For 2015/2016, I have made three major changes to help improve this aspect of the course:

  1. Return to the peerScholar platform instead of Connect, which should fix part of the missing data problem (and it can now integrate with my class list better than it did before),
  2. Have students review 4 or maybe 5 peers rather than 3,
  3. Implement a training workshop! With the help of a TLEF, Peter Graf and I have been developing an online training module for our courses to help students learn to use our respective peer assessment rubrics and test out the rubric on some sample essays. Our main hypothesis is that students will be more effective peer reviewers—and will feel like peers are more effective reviewers—as a result of going through this process. More to come!
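
To make that Connect data problem concrete, here is a minimal sketch in Python (not tied to Connect or peerScholar; the 1–5 rubric scale and the three assigned reviewers are illustrative assumptions only) of why substituting a zero for a rating that was never submitted, and then averaging over it, drags a student’s peer mark down:

    # Illustrative only: how a gradebook that records missing peer ratings as 0
    # distorts the average, versus averaging only the ratings actually submitted.
    # Assumes a hypothetical 1-5 rating rubric and three assigned peer reviewers.

    def average_counting_missing_as_zero(ratings):
        # What the misbehaving system effectively did: fill every empty slot with 0.
        filled = [r if r is not None else 0 for r in ratings]
        return sum(filled) / len(filled)

    def average_of_submitted_ratings(ratings):
        # What should happen: average only the ratings that were actually submitted.
        submitted = [r for r in ratings if r is not None]
        return sum(submitted) / len(submitted) if submitted else None

    ratings = [4, 5, None]  # two peers rated the work; the third never submitted

    print(average_counting_missing_as_zero(ratings))  # 3.0 -- the phantom zero drags the mark down
    print(average_of_submitted_ratings(ratings))      # 4.5 -- reflects the ratings that exist

In this hypothetical example, a single unsubmitted rating costs the student a point and a half out of five, which is exactly the kind of distortion the comments above were describing.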

As can be seen in the graph above, quantitative ratings of this course haven’t changed too much over the past few years. The written comments are useful for highlighting some areas for further development. I look forward to recruiting a Student Management Team this year to make this course an even more effective learning experience for students!

2014/2015 Student Evaluations Response Part 1: Psyc 217

Thank you to each of my students who took the time to complete a student evaluation of teaching this year. I value hearing from each of you, and every year your feedback helps me to become a better teacher. As I explained here, I’m writing reflections on the qualitative and quantitative feedback I received from each of my courses.


Research Methods is the course I have taught more than any other: since 2008 I have taught 1022 students across 14 sections. I am also the course coordinator, which means I help new research methods instructors prepare for their sections (including ordering textbooks), organize the labs (including TFs and room bookings), and facilitate the poster session. And I write the Canadian edition of the text we use (2nd edition forthcoming February 2016). I spend a lot of time thinking about this course!

Over the years I have incorporated many suggestions initiated by students in student evaluations, and have made changes to the course design based on student evaluation feedback. In 2014/2015, I implemented one major change to the course: I converted my three tests and the final exam to a two-stage format. Students write the test first on their own, then break into groups to write it again (see my blog post about this method, with resources). The largest challenge I knew I faced going in was timing, particularly for the tests: Our class periods are just 50 minutes long, and my tests were already notoriously lengthy. It was with this lens that I approached reading my student evaluations for this past year. Do I keep the two-stage tests?

I examined the quantitative data first. As is clear from the graph, there were no major differences relative to previous years. Notably, the evaluation item about the fairness of the instructor’s assessments of learning was rated higher than usual, though that difference was small. No indication of a disaster. Yay!

[Figure: Psyc 217 historic UMI ratings, last five years]

Next, I evaluated the qualitative data. As I sorted comments into positive and negative columns, two themes emerged: tests and enthusiasm. As in past years, students appreciated the energy and enthusiasm I bring to the classroom (especially at 9am, and especially with this topic). Out of 128 comments, 29 (23%) specifically mentioned energy or enthusiasm (with just a couple of those recommending I tone it down a bit).

Coincidentally, the same proportion of comments (29, 23%) mentioned the tests in some way. Six comments endorsed the three-test (rather than two-test) format, indicating it helped them keep on top of studying, although three comments mentioned that tests 2 and 3 were too close together, and another three indicated they would have preferred two tests. Seven comments about the two-stage format were positive, indicating that it provides opportunities to work together, make friends, and receive immediate feedback. The two negative comments that specifically mentioned the two-stage format did not disagree with it per se, but felt that it exacerbated a different problem: feeling rushed. Seven comments specifically mentioned feeling rushed during exams. Two others indicated that the fix implemented for test #2 worked well to address the timing issue. Still, it seems that timing during tests was the clearest challenge in my course. Despite my best efforts to shrink the tests, a small group of students report that they remain too long for the time available. I’ll consider strategies for preparing students for this pace.

Overall, the two-stage tests seemed to work well for most students, and grades were still within the acceptable range for our department. I enjoyed giving exams much more than I used to, and I was able to relax and hear the conversations students were having as they debated correct answers. Anecdotally, I was able to witness deeper learning and (mostly) positive group dynamics during the second stage (luckily I have other people’s data to offer as evidence that it works to promote learning!). Two-stage exams: you’re staying!