Why should I use peer instruction in my class?

Image: "Lecture Hall," uniinnsbruck, Flickr (CC)

[Update (June 16): Lead author Zdeslav Hrepic pointed me to a follow-up book chapter [PDF] where he and the study co-authors describe using tablet-PCs to counter the problems uncovered in their study. Thanks, Z.]

I’m sure we’ve all heard it from skeptical instructors: Why should I use peer instruction in my class? In response, we often cite Hake’s 6000-student study or the new UBC study by my colleagues Louis, Ellen and Carl. These are still pretty abstract, though: if you use interactive, learner-centered instruction, you can expect your students to gain a better grasp of the concepts.

“Sure, but why?” the instructors ask. “Why does it work?”

I just read a paper that can help answer that question. I ran across it while following a discussion about the Khan Academy videos and whether or not they are good tools for learning. The paper, by Hrepic, Zollman and Rebello (2007), asks students in an introductory physics course, as well as physics experts (with M.Sc.’s and Ph.D.’s), to watch a 15-minute video of a renowned physics educator presenting a topic in physics.

The researchers do a series of pre- and post-tests and interviews with the students and experts to compare their understanding of the concepts covered (or not) in the video. There were some significant differences. A couple stick in my head: (1) Students recalled learning about concepts that were not presented in the video. (2) Only students who already knew the correct answers on the pre-test were able to infer the concepts from the video (that is, the questions were not explicitly answered in the video). The students who did not know the concepts beforehand were unable to make the inferences. Like I said, there are significant differences between what the instructor thinks a lecture covers and what the students think is covered.

The paper nicely gives us some suggestions to counter this problem. Here they are, each followed by my thoughts about how peer instruction can help.

Making inferences: Experts make more inferences than students. And only students who already know the concepts can infer them from the lecture. Therefore, instructors need to be cautious about relying on students to fill in the blanks.

Some of the best peer instruction questions are the conceptual questions where the answer is not simple recall. No traxoline here, please. Questions that rely on students making inferences are excellent for promoting discussion because it’s likely students will interpret the question differently, make different assumptions and come to different conclusions. <soapbox> All the more reason that students need to first answer clicker questions on their own so they’re prepared to share their inferences. </soapbox>

Prior knowledge: Students’ prior knowledge influences what they perceive and can “distort” their recollection of what the lecturer says. Therefore, it’s essential that the instructor has some idea of what the students already know (particularly their misconceptions) before presenting new material.

A few introductory clicker questions will reveal the students’ prior knowledge. Sure, maybe these are simple recall questions that won’t generate a lot of discussion. But the students’ responses will inform the agile instructor, who can then tailor the instruction.

Continuous feedback about students’ understanding: The trail the instructor blazes through the concepts and the path the students follow often diverge during a lecture. The instructor should be continuously gathering and reacting to feedback from the students about their understanding so the instructor can shepherd the students back on track.

Observant instructors can gather critical feedback from the discussions that occur during peer instruction or from the students’ answers on in-class worksheets like the Lecture-Tutorials popular in introductory “Astro 101” classes and other hybrids of the Washington Tutorials. Rather than waiting weeks, until after the midterm or final exam, to find out students totally missed Concept X, the instructor can discover it within minutes of introducing the topic. Minutes, not weeks! The agile instructor can immediately revisit the difficult concepts. Immediately, not weeks later or never!

I’m much more confident I can answer the skeptical instructor now. “Why should I use clickers in my classroom?” Because they give the students and you the ability to assess the current level of understanding of the concepts. Current, right now, before it’s too late and the house of cards you’re so carefully building comes crashing down.


How should I share materials?

[Update (9 September 2011): Finally stopped procrastin–, er, planning and did it. Follow the “Astro Labs” link at the top of the page. I’m continually adding new activities so check back periodically. Or watch for announcements on my twitter feed, @polarisdotca .]

The goal of the Carl Wieman Science Education Initiative (CWSEI) is to improve undergraduate science education. The chosen method for doing that is based on 3 “pillars”:

  • establish what students should learn (the learning goals)
  • measure what students are actually learning
  • adapt the instructional approaches, guided by research on learning, to improve what students learn

In my position as a CWSEI Science Teaching and Learning Fellow in the Department of Physics and Astronomy, I get to spend time working on each of these pillars. Sometimes I flit from pillar to pillar to pillar in a single sitting, like when I’m making up a nice think-pair-share clicker question. Other times, I can spend an hour, a day, a month working on one pillar. For instance, I spent the good part of a summer working with our introductory astronomy (“Astro 101”) instructors on a set of learning goals, statements directed at the students like

[By the end of this course, you will be able to] use the geometry of the Earth, Moon and Sun to illustrate the phases of the Moon and predict the Moon’s rise and set times.

For the last couple of terms, I’ve been working closely with the Astro 101 instructors on instructional approaches to help them become more effective instructors.

But it’s hard to be an effective instructor if you don’t have good materials to work with. (No, I’m not saying good materials make you a good instructor — I’m a math grad, I know all that necessary and/or sufficient stuff.) So I have spent considerable time in the last few years creating activities for our Astro 101 labs. These aren’t traditional, 3-hour labs. Rather, they’re 1-hour, hands-on activities run in groups of fewer than 40 students. Following our American friends, we call them “tutorials” even though the rest of UBC uses “tutorial” for that hour you spend with a teaching assistant going over problems on the board.

Once we’d drafted the set of learning goals for Astro 101, we selected the learning goals that would be best tackled with a hands-on activity. The Moon phases goal mentioned above, for example. Or “describe experiments or observations that would detect whether space is flat or has positive or negative curvature.” Then I set about creating the activity, cycling from CWSEI pillar to pillar.

It got pretty hectic at times. We have some large classes with the students split into 5 or 6 tutorial sections each week. I’d get the activity ready and create a set of worksheets that we’d use in the Monday section. Then I’d sit in as the teaching assistants led the activity, observing the students, talking to them about how they answered the questions and talking to the teaching assistants about what worked and what didn’t. That afternoon (or night!) I’d make some changes and try version 2 on Tuesday. And repeat. Throughout the week. And then assess on the final exam. Eventually, we ended up with some, quite frankly, excellent activities. The most “mature” activities consist of

  • worksheets to guide the students through the activity
  • question sheet to assess their knowledge at the end of the activity
  • equipment
  • detailed guide for the teaching assistants, including how to set up the equipment, how to facilitate the activity, suggestions for prompts and Socratic-style questions to guide the students, solutions to the assessment
  • in some cases, materials for adapting the activity for use in the classroom
  • exam questions that assess the selected learning goal(s)

It’s taken several years to get here. And it’s time to visit the fourth CWSEI pillar:

disseminate what works

Yes, it’s time to share the activities. A couple of them are already out there, like the human orrery activity [with video] or a concept-mapping activity that will appear in the proceedings of Cosmos in the Classroom 2010. But what about the rest? How do I share them with the community of astronomy educators, which includes, I believe:

  • post-secondary Astro 101 instructors
  • teaching assistants
  • lab instructors
  • K-12 teachers
  • museum/science center presenters sharing astronomy with school children and the general public
  • astronomy education researchers

I feel there are 2 major decisions to make:

1. Are they free?

I’ve got a pretty good relationship with a certain textbook publisher and I could certainly talk to them about finding a way to bundle the activities up into a workbook. But honestly, I don’t want to go that route. The CWSEI and my Department have been paying me to create these materials – and in some sense, they’re already paid for. In the spirit of standing on the shoulders of giants, I’d like to make them available to anyone who wants them. Does it mean anything if I add “© 2011 Peter Newbury” in the footer? Or is that “© 2011 UBC”? No, the intellectual property policies at UBC are pretty clear it belongs to me:

Copyright and other intellectual property rights to scholarly and literary works—including books, lecture notes, laboratory manuals [my emphasis], artifacts, visual art and music—produced by those connected with the University belong to the individuals involved.

Or maybe I tag them with a Creative Commons license: use, adapt, but give credit where credit is due.

2. What format?

Full disclosure, right here, right now: These materials are written in LaTeX and I will not, I repeat not, Not, NOT re-write them in MS-frickin-Word. One more auto-format because apparently I’m stupid and it knows what I want, and I’m going to tear out my hard-drive. And sorry, I don’t know iPages or whatever that Apple iProgram is iCalled. Plus, I get such a geek thrill out of turning this

%%%%%%%%%%%%%%%%%%%%
% Jupiter orbit
%%%%%%%%%%%%%%%%%%%%
\pscircle(0,0){5.2}
\parametricplot[plotpoints=721,linestyle=dashed]{0}{360}{%
t cos 5.2 mul t 9 mul cos 1.5 mul add
t sin 5.2 mul t 9 mul sin 1.5 mul add}

into this [update 7 June 2011: here’s the full .tex file]

Jupiter's spirograph orbit comes from one line of sweet, LaTeX PSTricks code.
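If you want to play with it, here’s a minimal, self-contained wrapper for the snippet (a sketch of mine, not the full .tex file linked above) that should compile via the latex + dvips route PSTricks requires:

\documentclass{article}
\usepackage{pstricks,pst-plot} % pst-plot provides \parametricplot
\begin{document}
\begin{pspicture}(-7,-7)(7,7)
%%%%%%%%%%%%%%%%%%%%
% Jupiter orbit
%%%%%%%%%%%%%%%%%%%%
% the circular orbit, radius 5.2
\pscircle(0,0){5.2}
% the dashed spirograph curve (angles in degrees):
% x = 5.2 cos(t) + 1.5 cos(9t), y = 5.2 sin(t) + 1.5 sin(9t)
\parametricplot[plotpoints=721,linestyle=dashed]{0}{360}{%
t cos 5.2 mul t 9 mul cos 1.5 mul add
t sin 5.2 mul t 9 mul sin 1.5 mul add}
\end{pspicture}
\end{document}

PSTricks speaks raw PostScript, so run it through latex and then dvips; plain pdflatex won’t render it.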

So here’s what I’m thinking: for each activity, I’ll make available the .tex files, .eps figures, other graphics and PDFs, which are ready to use but can’t (easily) be edited. I could add a new page to this WP blog and distribute them there.

What would work for you?

Like the heading asks, what would work for you? Something I suggested above? Or maybe something entirely different? Please leave a comment if you have any thoughts, suggestions, recommendations, requests,…


An astronomy education retreat

Last year, Tim and Stephanie Slater phoned me up and invited me to be part of an astronomy education research group they were putting together. I was flattered to be part of the Conceptual Astronomy and Physics Education Research (CAPER) team! Especially when I learned who else I’d be working with. I mean, check out the bios of these remarkable astronomy educators. I’ve got to admit, I was a bit overwhelmed by their experience (and publication records).

We got together at a conference we all attended, and we meet regularly via telecon, but this week was special. A group of us — Tim, Stephanie, Julia, Sharon, Kendra, Inge, Eric and I — got together in Colorado for an intensive, 3-day astronomy education research retreat.

Wow.

We talked about this. We argued about that. We thought about this and that. And it was all about teaching and learning astronomy. Not marking or Little League or home renovations or all those other things that eat up our time. Just astronomy education. What a treat!

By the end of the 3 days, we’d developed a research project, from concept tests and interview protocols to IRB letters and pre/post testing schedules. And what’s it all about?

Understanding certain concepts in introductory astronomy, like the causes of the seasons and the phases of the Moon, requires students to visualize the Earth, Moon and Sun from both Earth-centered and Sun-centered points of view. It seems likely, then, that students with better spatial reasoning abilities will be more successful. There are already standard tests of spatial reasoning. And there are a number of assessments of astronomy knowledge, augmented by the ones we created this week. Add some pre-/post-testing and a dash of correlation coefficient and see what comes out.
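For the stats-minded, here’s roughly what that arithmetic looks like (my sketch, not the team’s finalized analysis plan): compute each student’s normalized learning gain g from their pre- and post-test scores, then correlate it with their score s on a spatial reasoning test using the usual Pearson coefficient:

\[
g_i = \frac{\text{post}_i - \text{pre}_i}{100\% - \text{pre}_i},
\qquad
r = \frac{\sum_i (g_i - \bar{g})(s_i - \bar{s})}
         {\sqrt{\sum_i (g_i - \bar{g})^2}\,\sqrt{\sum_i (s_i - \bar{s})^2}}
\]

A large, positive r would support the hypothesis that stronger spatial reasoning goes with more success in astronomy.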

One of the concepts we want to explore is the motion of the sky, so we made up an assessment using this diagram. (I’m using this example because *I* created this diagram with PowerPoint and a little help from Star Walk.)

Looking south at sunset. So many questions we can ask...

Like I said earlier, I was pretty overwhelmed by the calibre of the other people in the group. So it was very gratifying, good for my ego, to be able to contribute and realize that we all have strengths. Maybe that’s the humble Canadian coming through. I’m excited about what we’ve done and what we’ll be doing. And proud I have knowledge and experience to share.

I can’t wait to see what we find. Stay tuned!


CWSEI End of Year Conference

Every April, at the end of the “school year” at UBC, the Carl Wieman Science Education Initiative (CWSEI) holds a 1-day mini-conference to highlight the past year’s successes. This year, Acting Director Sarah Gilbert did a great job organizing the event. (Director CW, himself, is on leave to the White House.) It attracted a wide range of people, from UBC admin to department heads, interested and involved faculty, Science Teaching and Learning Fellows (STLFs) like myself, and grad students interested in science education. The only people not there, I think, were the undergraduate students themselves. Given that the event was held on the first day after exams finished and the beginning of 4 months of freedom, I’m not surprised at all there weren’t any undergrads. I know I wouldn’t have gone to something like this, back when I was an undergrad.

Part 1: Overview and Case Studies

The day started with an introduction and overview by Sarah, followed by 4 short “case studies” where 4 faculty members who are heavily involved in transforming their courses shared their stories.

Georg Rieger talked about how adding one more activity to his Physics 101 classes made a huge difference. He’s been using peer instruction with i>Clickers for a while and noticed poor student success on the summative questions he asked after explaining a new concept. He realized students don’t understand a concept just because he told them about it, no matter how eloquent or enthusiastic he was. So he tried something new — he replaced his description with worksheets that guided the students through the concept. It didn’t take a whole lot longer for the students to complete the worksheets compared to listening to him but they had much greater success on the summative clicker questions. The students, he concluded, learn the concepts much better when they engage and generate the knowledge themselves. Nice.

Susan Allen talked about the lessons she learned in a large, 3rd-year oceanography class and how she could apply them in a small, 4th-year class. Gary Bradfield showed us a whole bunch of student-learning data he and my colleague Malin Hansen have collected in an ecology class (Malin’s summer job is to figure out what it all means). Finally, Mark MacLean described his approach to working with the dozen or so instructors teaching an introductory Math course, only 3 of whom had any prior teaching experience. His breakthrough was writing “fresh sheets” (he made the analogy to a chef’s specials of the week) for the instructors that outlined the coming week’s learning goals, instructional materials, tips for teaching that content, and resources (including all the applicable questions in the textbook). The instructors give the students the same fresh sheet, minus the instructional tips. [Note: these presentations will appear on the CWSEI website shortly and I’ll link to them.]

Part 2: Posters

All of my STLF colleagues and I were encouraged to hang a poster about a project we’d been working on. Some faculty and grad students who had stories to share about science education also put up posters.

My poster was a timeline for a particular class in the introductory #astro101 course I work on. The concept being covered was the switch from the Ptolemaic (Earth-centered) Solar System to the Copernican (Sun-centered) Solar System. The instructor presented the Ptolemaic model, described how it worked, and asked the students to make a prediction based on the model (a prediction that does not match the observations, hence the need to change models). The students didn’t get it. But he forged on to the Copernican model, explained how it worked, asked them to make a prediction (which is now consistent with the observations). They didn’t get that either. About a minute after the class ended, the instructor looked at me and said, “Well that didn’t work, did it?” I suggested we take a Mulligan, a CTRL-ALT-DEL, and do it again the next class. Only different this time. That was Monday. On Tuesday, we recreated the content, switching from an instructor-centered lecture to a student-centered sequence of clicker questions and worksheets. On Wednesday, we ran the “new” class. It took the same amount of time and the student success on the same prediction questions was off the chart! (Yes, they were the same questions. Yes, they could have remembered the answers. But I don’t think a change from 51% correct on Monday to 97% on Wednesday can be attributed entirely to memory.)
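If you like to put a number on that jump: in Hake’s normalized-gain language (my back-of-the-envelope calculation, not something that appeared on the poster), it works out to

\[
g = \frac{\text{post} - \text{pre}}{100\% - \text{pre}} = \frac{97 - 51}{100 - 51} \approx 0.94
\]

which is about as high as normalized gains get.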

Perhaps the most interesting part of the poster, for me, was coming up with the title. The potential parallel between Earth/Sun-centered and instructor/student-centered caught my attention (h/t to @snowandscience for making the connection). With the help of my tweeps, I wrestled with the analogy, finally coming to a couple of conclusions. One: the instructor-centered class is like the Sun-centered Solar System (with the instructor as the Sun):

  • the instructor (Sun) sits front and center in complete control while “illuminating” the students (planets), especially the ones close by.
  • the planets have no influence on the Sun,…
  • very little interaction with each other,…
  • and no ability to move in different directions.

As I wrote on the poster, “the Copernican Revolution was a triumph for science but not for science education.” I really couldn’t come up with a Solar System model for a student-centered classroom, where students are guided but have “agency” (thanks, Sandy), that is, the free will to choose to move (and explore) in their own directions. In the end, I came up with this (yes, it’s a mouthful, but someone stopped me later to compliment me specifically on the title):

Shifting to a Copernican model of the Solar System
by shifting away from a Copernican model of teaching

Part 3: Example class

When we were organizing the event, Sarah thought it would be interesting to get an actual instructor to present an actual “transformed” class, one that could highlight for the audience (especially the on-the-fence-about-not-lecturing instructors) what you can do in a student-centered classroom. I volunteered the astronomy instructor I was working with, and he agreed. So Harvey (and I) recreated a lecture he gave about blackbody radiation. I’d kept a log of what happened in class so we didn’t have to do much. In fact, the goal was to make it as authentic as possible. The class, both the original and the demo class, had a short pre-reading, peer instruction with clickers (h/t to Adrian at CTLT for loaning us a class set of clickers), the blackbody curves Lecture-Tutorial worksheet from Prather et al. (2008), and a demo with a pre-demo prediction question.

Totally rocked, both times. Both audiences were engaged, clicked their clickers, had active discussions with peers, and did NOT get all the questions and predictions correct.

At the CWSEI event, we followed the demonstration with a long, question-and-answer “autopsy” of the class. Lots of great questions (and answers) from the full spectrum of audience members, from novice to experienced instructors. Also some helpful questions (and answers) from Carl, who surprised us by coming back to Vancouver for the event.

To top it off, we made the class even more authentic by handing out a few Canadian Space Agency stickers to audience members who ask good questions, just like we do in the real #astro101 class. You should have seen the glee in their eyes. And the “demo” students went all metacognitive on us (as they did in the real class, eventually) and started telling Harvey and me who asked sticker-worthy questions!

Canadian Space Agency (CSA) or Agence spatiale canadienne (ASC) logo

Part 4: Peer instruction workshop

The last event of the day was a pair of workshops. One was about creating worksheets for use in class. The other, which I led, was called “Effective Peer Instruction Using Clickers.” (I initially suggested “Clicking it up to Level 2” but we soon switched to the better title.) The goal was to help clicker-using instructors take better advantage of peer instruction. So many times I’ve witnessed teachable moments lost because of poor clicker “choreography,” that is, conversations cut off, or not even started, because of how the instructor presents the question or handles the votes, and other things. Oh, and crappy questions to start with.

I didn’t want this to be about clickers because there are certainly ways to do peer instruction without clickers. And I didn’t want it to be a technical presentation about how to hook an i>clicker receiver to your computer and how to use igrader to assign points.

Between attending Center for Astronomy Education peer instruction workshops myself, which follow the “situated apprenticeship” model described by Prather and Brissenden (2008), my conversations with @derekbruff and the #clicker community, and my own experience using and mentoring the use of clickers at UBC, I easily had enough material to fill a 90-minute workshop. My physics colleague @cynheiner did colour-commentary (“Watch how Peter presents the question. Did he read it out loud?…”) while I did a few model peer instruction episodes.

After these demonstrations, we carefully went through the choreography I was following, explaining the pros and cons. There was lots of great discussion about variations. Then the workshop turned to how to handle some common voting scenarios. Here’s one slide from the deck (which will be linked shortly).

I’d planned on having the workshop participants form small groups, create a question and then present it to the class. If we’d had another 30 minutes, we could have pulled that off. Between starting late (the previous session went long) and it being late on a Friday afternoon, we cut the workshop short. Left them hanging, wanting to come back for Part II. Yeah, that’s what we were thinking…

End-of-Year Events

Sure, it’s hard work putting together a poster. And a demo lecture. And a workshop. But it was very good for sharing what the CWSEI is doing, especially the demo class. And I’ll be using the peer instruction workshop again. And it was a great way to celebrate a year’s work. And then move on to the next one.

Does your group hold an event like this? What do you find works?


A misconception about extrasolar planets

A couple of weeks ago in the introductory “Astro 101” class I work in, the instructor and I confirmed that many students hold a certain misconception. I was, still am, pretty excited about this little discovery in astronomy education. If my conversations over the following few days had turned out differently, I probably would be writing it for publication in the Astronomy Education Review. Maybe I still will. But for now, here’s my story.

Our search for life in the Universe and the flood of results from the Kepler Mission have made the discovery of extrasolar planets an exciting and relevant topic for introductory “Astro 101” courses and presentations to the general public.  Instructors, students, presenters and audiences latch onto “the transit method” of detection because it is so intuitive: when an extrasolar planet passes between us and its star, the planet temporarily blocks some star light and we detect a dip in the brightness of the star. The period and shape of the dips in the record of the star’s brightness encode the characteristics of the planet.

When an extrasolar planet passes between us and its star (when it "transits" the star) we detect a dip in the brightness of the star. (Kepler/NASA image)

Our students do a nice 50-minute, hands-on lab about how to decode these “light curves,” which I hope to share at the ASP 2011 conference (#ASP2011 on Twitter) in July. In a class following this lab, the instructor posed the following think-pair-share clicker question. We wanted to assess whether the students remembered that the size of the dip is proportional to the area of the star blocked by the planet’s disk, which scales as the square of the ratio of the diameters:

Clicker question to assess the students' grasp of the transit method of detecting extrasolar planets.

The bars in this histogram record the number of students who chose (from left to right) A to E:

Students' responses for (left to right) choices A to E to extrasolar planets clicker question.

About 60% of the class chose answers with a 1% drop in brightness (C and E), the correct drop, and about 40% chose answers with a 10% drop (B and D). This second group didn’t remember the “proportional to area” property. So, not stunning results, but certainly a good candidate for pairing and sharing.
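To spell out the arithmetic behind the correct answers: the fractional dip in brightness equals the fraction of the star’s disk area covered by the planet’s disk, so for a star with 10 times the planet’s diameter,

\[
\frac{\Delta F}{F} = \left(\frac{d_\text{planet}}{d_\text{star}}\right)^{2} = \left(\frac{1}{10}\right)^{2} = 0.01 = 1\%
\]

a 1% drop, not 10%.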

The misconception

What is stunning, though, and the source of my excitement, is that 97% of the class feels you would see a black spot moving across the star. Which is not true! We only detect the drop in the brightness of the star. We can’t even resolve the disk of the star, let alone a tiny black spot!

Okay, okay, before you jump to the students’ defence, let me (with the help of my great CAPER Team colleagues) jump to the students’ defence:

  1. The question says, “…by observing it pass in front of the distant star.” Of course the students are going to say we see a dark spot – that’s what we just told them! Perhaps I should be worried about the 3% who didn’t read the question properly.
  2. The question is vague about what we mean by “size.” Diameter? Area? Volume? Mass? “The star’s diameter is 10 times bigger than the planet’s diameter” is a much better question stem.
  3. My colleague Aaron Price points out that astronomers may not see a “dot” crossing the star right now, but they can see something comparable. Through speckle imaging, radial topography and optical interferometry we have been able to see starspots for decades. CHARA’s recent direct observations of a disk of dust moving across epsilon Aurigae show what is being done right now in interferometric direct imaging. He predicts that within 10 years we’ll have our first direct image of a “dot” in transit across another star.
  4. Aaron, Kendra Sibbernsen and I all agree that the word “see” in “What would you see?” is too vague. The question I wanted to ask should have used “observe” or “detect”. Kendra suggested we write “A) a dark spot visibly passing in front of the star” and perhaps follow up the question with this one, to poke explicitly at the potential misconception:

With current technology, can astronomers resolve the dark spot of an extrasolar planet on the disk of a star when it is in transit? (T/F)

Was there a misconception?

Did the students reveal a misconception about transiting extrasolar planets? Nope, not at all. It’s not like they took the information we gave them, mixed it with their own preconceived notions and produced an incorrect explanation. Instead, they answered with the information they’d been given.

A teachable moment

It seems that we’re not being careful enough in how we present the phenomenon of transiting extrasolar planets. But as it turns out, this is a teachable moment about creating models to help us visualize something (currently) beyond our reach. We observe variations in the brightness of the star. We then create a model in our mind’s eye — a large, bright disk for the star and a small, dark disk for the planet — that helps us explain the observations.

This is a very nice model, in fact, because it can be extended to explain other, more subtle aspects of transiting extrasolar planets, like a theoretical bump, not dip, in the brightness when the planet is passing behind the star and we detect extra starlight reflected off the planet. The model also explains these beautiful Rossiter-McLaughlin wiggles in the star’s radial velocity (Doppler shift) curve as the extrasolar planet blocks first the side of the star spinning towards us and then the side spinning away from us.

These wiggles in the radial velocity curve are caused by the Rossiter-McLaughlin effect (from Winn, Johnson et al. 2006, ApJL)

Want to help?

If you’re teaching astronomy, you can help us by asking your students this version, written by Kendra, and letting me know what happens.

An extrasolar planet passes in front of its star as seen from the Earth. The star’s diameter is 10 times bigger than the planet’s diameter. What do astronomers observe when this happens?

A) a dark spot visibly passing across the disk of the star
B) a 10% dip in the brightness of the star
C) a 1% dip in the brightness of the star
D) A and B
E) A and C

In conclusion

I don’t think this qualifies as a misconception, not like the belief that the seasons are caused by changes in the distance between the Earth and the Sun. We just need to be more careful when we teach our students about extrasolar planets. And in more carefully explaining the dips in the light curve, we have an opportunity to discuss the advantages and disadvantages of using models to visualize phenomena beyond our current abilities. That’s a win-win situation.

Thanks to my CAPER Team colleagues Aaron, Kendra and Donna Governor for the thoughtful conversations and the many #astro101 tweeps womanastronomer, erinleeryan, uoftastro, jossives, shanilv and more who were excited for me, and then patient with me, as I figured this out.
