Category Archives: peer assessment & feedback

Problems with grading rubrics for complex assignments

In an earlier post I discussed a paper by D. Royce Sadler on how peer marking could be a means for students to learn to become better assessors of their own and others’ work. This could not only help them become more self-regulated learners, but also prepare them to fulfill roles outside the university in which they will need to evaluate the work of others. In that essay Sadler argues against giving students preset marking criteria to use to evaluate their own work or that of other students (when that work is complex, such as an essay), because:

  1. “Quality” is more of a global concept that can’t easily be captured by a set of criteria, as it often includes things that can’t be easily articulated.
  2. As Sadler pointed out in a comment to the post noted above, having a set of criteria in advance predisposes students to look for only those things, and yet in any particular complex work there may be other things that are relevant for judging quality.
  3. Giving students criteria in advance doesn’t prepare them for life beyond their university courses, where they won’t often have such criteria.

I was skeptical about asking students to evaluate each other’s work without any criteria to go on, so I decided to read another of his articles in which this point is argued more extensively.

Here I’ll give a summary of Sadler’s book chapter entitled “Transforming Holistic Assessment and Grading into a Vehicle for Complex Learning” (in Assessment, Learning and Judgement in Higher Education, ed. G. Joughin. Dordrecht: Springer, 2009; DOI: 10.1007/978-1-4020-8905-3_4).

[Update April 22, 2013] Since the above is behind a paywall, I am attaching here a short article by Sadler that discusses similar points, and that I’ve gotten permission to post (from both Sadler and the publisher): Are we short-changing our students? The use of preset criteria in assessment. TLA Interchange 3 (Spring 2009): 1-8. This was a publication from what is now the Institute for Academic Development at the University of Edinburgh, but these newsletters are no longer online.

Note: this is a long post! That’s because it’s a complicated article, and I want to ensure that I’ve got all the arguments down before commenting.

Continue reading

How does giving comments in peer assessment affect students? (Part 3)

This is the third post in a series summarizing empirical studies that attempt to answer the question posed in the title. The first two can be found here and here. This will be the last post in the series, I think, unless I find some other pertinent studies.

Lundstrom, K. and Baker, W. (2009) To give is better than to receive: The benefits of peer review to the reviewer’s own writing, Journal of Second Language Writing 18, 30-43. DOI: 10.1016/j.jslw.2008.06.002

This article is focused on students in “L2” classes (second language, or additional language), and asks whether students who review peers’ papers do better in their own (additional language) writing than students who only receive peer reviews and attempt to incorporate the feedback rather than giving comments on peers’ papers.

Participants were 91 students enrolled in nine sections of additional language writing classes at the English Language Center at Brigham Young University. The courses were at two levels out of a possible five: half the students were in level 2, “high beginning,” and half were in level 4, “high intermediate” (33). The students were then divided into a control and experimental group:

The first group was composed of two high beginning and three high intermediate classes, (totaling forty-six students). This group was the control group (hereafter ‘‘receivers’’) and received peer feedback but did not review peers’ papers (defined as compositions written by students at their same proficiency level). The second group was composed of two high beginning classes and two high intermediate classes, (totaling forty-five students), and made up the experimental group, who reviewed peer papers but did not receive peer feedback (hereafter ‘‘givers’’). (33; emphasis mine)

Research questions and procedure to address them

Research questions:

1. Do students who review peer papers improve their writing ability more than those who revise peer papers (for both beginning and intermediate students)?

2. If students who review peer papers do improve their writing ability more than those who revise them, on which writing aspects (both global and local) do they improve? (32)

Continue reading

How does giving comments in peer assessment impact students? (Part 2)

This is the second post looking at published papers that use empirical data to answer the question in the title. The first can be found here. As noted in that post, I’m using “peer assessment” in a broad way, referring not just to activities where students give grades or marks to each other, but especially to the qualitative feedback they provide to each other (as that is the sort of peer assessment I usually use in my courses).

Here I’ll look at just one article on how giving peer feedback affects students, as this post ended up being long. I’ll look at one last article in the next post (as I’ve only found four articles on this topic so far).

Lu, J. and Law, N. (2012) Online peer assessment: effects of cognitive and affective feedback, Instructional Science 40, 257-275. DOI 10.1007/s11251-011-9177-2. This article has been made open access, and can be viewed or downloaded at: http://link.springer.com/article/10.1007%2Fs11251-011-9177-2

In this study, 181 students aged 13-14 in a Liberal Studies course in Hong Kong participated in online peer review of various parts of their final projects for the course. They were asked both to engage in peer grading and to give peer feedback to each other in groups of four or five. The final project required various subtasks, and peer grading/feedback was not compulsory: students could choose which subtasks to give grades and feedback to their peers about. The grades were given using rubrics created by the teacher for each subtask, and both grades and feedback were given through an online program specially developed for the course.

Research Questions

  1. Are peer grading activities related to the quality of the final project for both assessors and assessees?
  2. Are different types of peer …  feedback related to the quality of the final projects for both assessors and assessees? (261)

Continue reading

How does giving comments in peer assessment impact students? (Part 1)

Some colleagues and I are brainstorming various research we might undertake regarding peer assessment, and in our discussions the question in the title of this post came up. I am personally interested in the comments students can give to each other in peer assessment, more than in students giving marks/grades to each other. Students who give comments on each other’s work are impacted not only by receiving peer comments, of course, but by the process of giving them as well. How does practice in giving comments and evaluating others’ work affect students’ own work, or the processes they use to produce it?

I’ve already looked at a couple of articles that address this question from a somewhat theoretical (rather than empirical) angle (see earlier posts here and here). As discussed in those posts, it makes sense to think that practice in evaluating the work of peers could help students get a better sense of what counts as “high quality,” and thus have that understanding available to use in self-monitoring so as to become more self-regulated.

In this post I summarize the findings of two empirical articles looking at the question of whether and how providing feedback to others affects the quality of students’ own work. I will continue this summary in another post, where I look at another few articles.

(1) Li, L., Liu, X. and Steckelberg, A.L. (2010) Assessor or assessee: How student learning improves by giving and receiving peer feedback, British Journal of Educational Technology 41:3, 525-536. DOI: 10.1111/j.1467-8535.2009.00968.x

In this study, 43 undergraduate teacher-education students engaged in online peer assessment of each other’s WebQuest projects. Each student evaluated the projects of two other students. They used a rubric, and I believe they gave both comments and marks to each other. Students then revised their projects, having been asked to take the peer assessment into account and decide what to use from it. The post-peer-assessment projects were marked by the course instructor.

Continue reading

Literature on written and oral peer feedback

For context on why I’m interested in this, see the previous post.

I’ve done some searching on the question of oral and written peer feedback, and been surprised at the paucity of results. Or rather, the paucity of results outside the field of language teaching, or of teaching courses in a language that is an “additional language” for students. I have also yet to look into the literature on online vs. face-to-face peer review. Outside of those areas, I’ve found only a few articles.

1. Van den Berg, I., Admiraal, W., & Pilot, A. (2006) Designing student peer assessment in higher education: analysis of written and oral peer feedback, Teaching in Higher Education, 11:2, 135-147. http://dx.doi.org/10.1080/13562510500527685

In this article Van den Berg et al. report on a study in which they looked at peer feedback in seven different courses in the discipline of history (131 students). These courses had peer feedback designs that differed in things such as: what kind of assignment was the subject of peer feedback, whether peer feedback took place alongside teacher feedback or on its own, whether students who commented on others received comments from those same students on their own work, how many students participated in feedback groups, and more. Most of the courses had both written and oral peer feedback, though one of the seven had just written peer feedback.

The authors coded both the written and oral feedback along two sets of criteria: feedback functions and feedback aspects. I quote from their paper to explain these two things, as they are fairly complicated:

Based on Flower et al. (1986) and Roossink (1990), we coded the feedback in relation to its product-oriented functions (referring directly to the product to be assessed): analysis, evaluation, explanation and revision. ‘Analysis’ includes comments aimed at understanding the text. ‘Evaluation’ refers to all explicit and implicit quality statements. Arguments supporting the evaluation refer to ‘Explanation’, and suggested measures for improvement to ‘Revision’. Next, we distinguished two process-oriented functions, ‘Orientation’ and ‘Method’. ‘Orientation’ includes communication which aims at structuring the discussion of the oral feedback. ‘Method’ means that students discuss the writing process. (141-142)

By the term ‘aspect’ we refer to the subject of feedback, distinguishing between content, structure, and style of the students’ writing (see Steehouder et al., 1992). ‘Content’ includes the relevance of information, the clarity of the problem, the argumentation, and the explanation of concepts. With ‘Structure’ we mean the inner consistency of a text, for example the relation between the main problem and the specified research questions, or between the argumentation and the conclusion. ‘Style’ refers to the ‘outer’ form of the text, which includes use of language, grammar, spelling and layout. (142)

They found that students tended to focus on different things in their oral and written feedback. Written feedback across all the courses tended to be more product-oriented than process-oriented, with a focus on evaluating quality rather than explaining that evaluation or offering suggestions for revision. In terms of feedback aspect, written feedback focused more on content and style than on structure (143).

Continue reading

Oral and written peer feedback

This post is part of my ongoing efforts to develop a research project focusing on the Arts One program–a team-taught, interdisciplinary program for first-year students in the Faculty of Arts at the University of British Columbia. As noted in some earlier posts, one of the things that stands out about Arts One is what we call “tutorials”: weekly meetings of four students plus the professor in which all read and comment on each other’s essays (students write approximately one essay every two weeks). Thus, peer feedback on essays is an integral part of this course, occurring every week as a regular part of the course meeting time.

In a recent survey of Arts One alumni (see my post summarizing the results), students cited tutorials as one of the things that helped them improve their writing the most, and as one of the most important aspects of the program. In that earlier post I speculated on what might be so valuable about these tutorials: the frequency of giving and receiving peer feedback (giving feedback every week, getting feedback on your own paper every two weeks); the fact that professors are in the meetings to give their own comments and to comment on the students’ comments; the intensive revisiting of work after it’s written; the pressure students may feel to improve their work before submitting it, knowing they’ll have to present and defend it before their peers; and so on. That last point is perhaps made even more important when you consider that the students get to know each other quite well, meeting every week for at least one term (the course is two terms, or one year, long, but some of us switch students into different tutorial groups halfway through so they get the experience of reading other students’ papers too).

One thing I didn’t consider before, but am thinking about more now, is whether the fact that the feedback is done mostly, if not exclusively, orally and synchronously (and face-to-face) rather than through writing and asynchronously, makes a difference.

Continue reading

The value of peer review for effective feedback

No matter how expertly and conscientiously constructed, it is difficult to comprehend how feedback, regardless of its properties, could be expected to carry the burden of being the primary instrument for improvement. (Sadler 2010, p. 541)

… [A] deep knowledge of criteria and how to use them properly does not come about through feedback as the primary instructional strategy. Telling can inform and edify only when all the referents – including the meanings and implications of the terms and the structure of the communication – are understood by the students as message recipients. (Sadler 2010, p. 545)

In “Beyond feedback: developing student capability in complex appraisal” (Assessment & Evaluation in Higher Education, 35:5, 535-550), D. Royce Sadler points out how difficult it can be for instructor feedback to work the way we might want–to allow students to improve their future work. Like Nicol and Macfarlane-Dick 2006 (discussed in the previous post), Sadler here argues that effective feedback should help students become self-regulated learners:

Feedback should help the student understand more about the learning goal, more about their own achievement status in relation to that goal, and more about ways to bridge the gap between their current status and the desired status (Sadler 1989). Formative assessment and feedback should therefore empower students to become self-regulated learners (Carless 2006). (p. 536)

The issue that Sadler focuses on here is that students simply cannot use feedback for improvement and for developing self-regulation unless they share some of the same knowledge as the person giving the feedback. Much of this is complex or tacit knowledge, not easily conveyed in things such as lists of criteria or marking rubrics. Instructors may try to make their marking criteria and their feedback as clear as they can,

Yet despite the teachers’ best efforts to make the disclosure full, objective and precise, many students do not understand it appropriately because, as argued below, they are not equipped to decode the statements properly. (p. 539)

Continue reading

Seven Principles of Effective Feedback Practice

I recently read an article by David J. Nicol and Debra Macfarlane-Dick that I found quite thought-provoking:

David J. Nicol & Debra Macfarlane-Dick (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice, Studies in Higher Education, 31:2, 199-218.   http://dx.doi.org/10.1080/03075070600572090

The basic belief guiding their argument is that formative assessment (which they define, referring to Sadler 1998, as “assessment that is specifically intended to generate feedback on performance to improve and accelerate learning” (199)) should be aimed at helping students become more self-regulated. What does it mean for students to be self-regulated? The authors state that it manifests in behaviours such as monitoring and regulating processes like “the setting of, and orientation towards, learning goals; the strategies used to achieve goals; the management of resources; the effort exerted; reactions to external feedback; the products produced” (199). They also cite later in the article (p. 202) a definition from Pintrich and Zusho, 2002:

Self-regulated learning is an active constructive process whereby learners set goals for their learning and monitor, regulate, and control their cognition, motivation, and behaviour, guided and constrained by their goals and the contextual features of the environment. (Pintrich and Zusho (2002), 64)

Students who are self-regulated learners, Nicol and Macfarlane-Dick explain on p. 200, set goals for themselves (usually affected by external goals in the educational setting) against which they can measure their performance. They generate internal feedback about the degree to which they are reaching these goals, what they need to do to improve progress towards them, etc. They incorporate external feedback (e.g., from instructors and peers) into their own sense of how well they are doing in relation to their goals. The better they are at self-regulation, the better they are able to use their own and external feedback to progress towards their goals (here they cite Butler and Winne, 1995). The authors point to research providing evidence that “learners who are more self-regulated are more effective learners: they are more persistent, resourceful, confident and higher achievers (Pintrich, 1995; Zimmerman & Schunk, 2001)” (205).

Nicol and Macfarlane-Dick note, however, that the current literature contains little argument for how formative feedback can improve student self-regulation. That is the argument they develop here.

Continue reading

Results of a survey of Arts One alumni, and thoughts on research questions

 

In March and April of 2012, a research assistant (an Arts One alum) and I did a survey of alumni of the Arts One program (a team-taught, interdisciplinary, year-long program for first year students at UBC). Arts One has been in existence since 1967, yet very little research has been done on how it impacts students. I hope to do some research on that broad question in the coming years.

The purpose of the survey of alumni from earlier this year was to see how students themselves thought Arts One impacted them, in two ways: how it impacted their work in other courses, and how it impacted them beyond their work in other courses. Just for fun, and to see if we could isolate what really sets the Arts One program apart from other first-year programs, we also asked them what they thought the most important aspect of Arts One was (a question I had taken from an earlier study of Arts One alumni in the 1990s by Cheryl Dumaresq; see https://circle.ubc.ca/handle/2429/3693 for a copy of Dumaresq’s survey). We surveyed students who had just finished Arts One that Spring, plus those who had taken Arts One in their first year and were still at UBC, in their second, third, fourth years and beyond. 116 students answered our email request to fill in an online survey. The questions were all open-ended and subjected to descriptive qualitative analysis only.

The reason for doing this survey, beyond being interested generally, was to gain material for developing research questions to study further later. In other words, this was a pilot study to determine which areas of Arts One to subject to further research, and how/why. In what follows I begin thinking about some possible research questions arising from the data.

In some ways the results of the survey were unsurprising: much of what came out of it I had guessed beforehand. But there were a couple of things I hadn’t thought of before, which was the point of doing the survey in an open-ended way (so students weren’t restricted to multiple-choice answers on topics we had thought of ourselves).


 

[What follows discusses only a few of the prominent answers, or the ones I found most interesting. Students said many, many different things, with the result that most answers were given only a few times, and thus are not recorded below unless I found them particularly interesting or surprising.]

Percentage of student alumni saying Arts One positively impacted their other coursework

Over 90% said Arts One impacted them in ways that positively affected their work in other courses. A few said there was no impact on their work in other courses, and a few said it impacted their work negatively (though some of those also said it had some positive impacts too).

How Arts One helps with other courses

  • Most of the respondents said it helped with writing (73% of respondents; 80% of those who said A1 had positive impact on other courses). This wasn’t surprising, as improving writing is one of the main emphases of Arts One. Students write 12 essays over the course of one year (approximately one every two weeks), and each of those essays is peer reviewed in a tutorial meeting of four students plus their professor. Every week students meet in these tutorials and discuss two essays, each for 25-30 minutes.
    • Of those, most said it was the tutorials that helped with their writing (48%), and next highest cited was the amount of writing (39%). Some of these may be the same people (they may have said both things).
    • What could be some reasons students think tutorials are so helpful for improving their writing? In tutorials we discuss particular issues with the papers being commented on, but also general advice about writing academic papers, about the writing process, etc. Some of this advice comes from the professors and some from the students. One thing that several students mentioned as being helpful is the chance to read and comment on others’ essays; some noted that this helped them think more critically about their own. One stated that giving comments to peers in front of the professor, and having the professor comment on those comments, was especially helpful. Students likely also feel some pressure to make their writing better because they have to present and defend it in front of peers and the professor. Revisiting work in an intensive way after it’s written may also contribute to improvement in writing.
    • Possible research areas:
      • It would be interesting to see if students’ self-perceptions match up with reality: does their writing really improve after completing the program? One could compare their improvement with gains in writing skill from other first-year writing courses (though that might be difficult, as several such courses at UBC have different foci from the writing in Arts One, emphasizing research where we do not). I could have a look at the literature on self-perceptions of skills (esp. writing skills) versus objective measures of those, to see if self-perceptions tend to be accurate in general. I recall reading something about this somewhere, but I don’t have the details ready to hand.
      • I could focus on and expand upon the above questions about tutorials: what is it about tutorials that leads students to cite them as particularly helpful for improving their writing skills? Is there something special about A1 tutorials, as opposed to peer review in other courses? E.g., I do peer review in other courses, and get no such comments about its value, even though the peer review is done in small groups during class time, as with the Arts One tutorials. Peer review does not constitute a separate class period in those other courses, though, as it does in Arts One, and I as the professor am not sitting with the group the whole time they are doing the peer review. Is there something special about having the prof present the whole time? Is there something special about the amount of time spent on each essay? Is it that it is done every single week? Does the fact that it’s a scheduled, and so especially emphasized, part of the course make a difference? Is it perhaps that students recognize that tutorials are supposed to help them with writing, so they think and say that they do? After all, one wouldn’t want to spend so much time on tutorial work for nothing. I’m not sure how to even begin to answer any of those questions yet!

 

  • A significant number of students said Arts One contributed to their critical thinking skills (24% of respondents, 27% of those who said A1 had a positive impact on their other coursework). I expected this to show up in the survey, as one of the things we emphasize in Arts One is giving students a lot of opportunity to develop their own responses and arguments to the texts and issues we discuss. The seminars are focused on discussion, and many of us try to get the students to lead that discussion as much as possible (with some variation for professor style, of course). And though we provide essay topics for the papers, they are purposefully open-ended, so that students have a lot of room to argue for their own readings and to emphasize what they find important.
    • However, the survey responses did not provide much clarity as to what aspects of the program especially helped with critical thinking, in the students’ views. Many of those who said A1 helped with critical thinking did not explain clearly how, and among those that did, numerous aspects were cited such that I couldn’t find a clear pattern for which things were most helpful.
    • Possible research areas:
      • Studying what critical thinking is, and how to promote it, is an entire field in its own right. There are numerous definitions of critical thinking, and quite a few different tests of it, and I would need to delve into that literature before thinking further about this topic. It’s also unclear what each student meant by terms like “critical thinking” or “analysis skills.”
      • One thing I am particularly interested in myself is perhaps captured by the term “critical thinking,” but perhaps not–it’s the ability and confidence to engage in independent thinking. By that I mean that students feel they can sit down with a text and make sense of it on their own, or with a small group of peers, without needing to rely on what others (experts) have said about it to determine the “right” way to read the text. We specifically and consciously downplay outside research on the texts we read, so as to focus students’ attention on their own reading and analysis skills. I, personally, also emphasize that writing essays for Arts One is less a matter of getting something right about the text than of coming up with a thought-provoking, justified reading: an interpretation grounded in the text that also goes beyond the surface level and can make the reader really think. I am hoping students take risks rather than provide the safest or most clear-cut arguments possible. I would need to better clarify just what I am looking for here, and what terms it corresponds to in the literature.

 

  • The same number of students who said Arts One helped with critical thinking also said it helped their confidence in some way, such as confidence speaking in class, speaking to profs, confidence in their writing skills, or in their own ideas as valuable (24% of respondents, 27% of those who said positive impact).
    • I find this one especially interesting, perhaps because I hadn’t really thought about it before, and yet it is so important to students’ future coursework and life. My rough analysis of the survey data has not yet revealed clear patterns in how, exactly, students think Arts One helps with their confidence: which aspects do so, and why. I plan to go back over the data and see if I can come up with something clearer on this, or if the answers are simply too thinly scattered.
    • Possible research areas:
      • Maybe something in the area of confidence would be a better place for me to focus my attention to try to capture what I was talking about at the end of the discussion on critical thinking. If students feel confident in their own ideas and their ability to express and defend them, they might be more likely to approach texts and discussions in the way I suggested.
      • Confidence has the added bonus of being easier to study than whether writing or critical thinking really improves, because one would, presumably, rely on students’ own self-reports of confidence. Though I suppose that some behavioural data might contribute to showing increases or decreases in confidence as well.
      • There must be literature on levels of confidence and how these affect students’ coursework; I would need to look into that.


Percentage of student alumni saying Arts One positively impacted them beyond coursework

83% of students said that Arts One impacted them positively in ways beyond their work in other courses. Most of those who did not assent to this simply said it did not impact them beyond coursework. Only two gave negative comments in response to this question, and only one of those was about the program itself.

How Arts One impacts students beyond their work in other courses

  • The most often-cited answer to this question was that Arts One provided a close-knit community that allowed them to develop friendships, get to know their colleagues and the professor, and feel comfortable in the classroom (38% of respondents, 46% of those who said Arts One impacted them in some positive way beyond their work in other courses). I guessed beforehand that this would be important, as it’s something else we emphasize in Arts One. Five professors team-teach the course (there are two such groups of five, with separate themes, readings, and students); there are up to 100 students in the course, and each professor is assigned up to 20 of them. Each week there is a lecture given by one of the five professors to all 100 students, plus two seminar discussions of 20 students with their professor. Then there are tutorials of four students (drawn from the group of 20) with their professor; each student has one of these per week (each prof leads five). The students in the seminar group of 20 get to know each other and their professor very well, with two discussions per week plus a weekly tutorial. They often get together outside of class for study or social purposes, develop group Facebook pages, etc.
    • The seminars were cited as more important for developing a sense of community and friendships (25% of respondents, 30% of those who said A1 had positive impact beyond coursework) than tutorials (14% of respondents, 17% of those who said positive impact). I suppose this could be a little surprising, since the tutorials are only four students plus the professor, while the seminars are 20 students plus the professor. But the tutorials are somewhat stressful for numerous students, as they are where students have to present and defend their essays, listening and responding to criticism from their peers and professor. They also have to learn to comment constructively on the essays of their peers, which can be difficult for many at first. Though by the end of the year the tutorials are often much more relaxed, I am not surprised that students don’t view tutorials as being as much of a space for developing a close sense of community and friendships as seminars, overall.
    • possible research areas
      • It might seem, and many students thought of it this way, that the sense of community and the friendships that come out of Arts One are mainly a social effect with little connection to academic coursework. But I think that's a mistake. I know I have read work pointing to the importance of close community and friendships in courses, and it would be good to revisit it to show that this could be an important ingredient in student success in Arts One. If students see the value of developing an academic and social community in their classes through their Arts One experience, it might encourage them to seek to develop such communities in other courses as well (as a few students mentioned in their answers to the survey). So looking at the literature on community and student success (broadly defined) would be a good place to start. Then I might be able to find best practices for developing a sense of community and see whether they are implemented in Arts One, or identify aspects of Arts One that are especially important to developing a sense of community and might be exported to other courses.


  • A significant number of students pointed out that Arts One had improved their confidence in ways that extended beyond their work in university courses, such as confidence in public speaking, in their own views, and in writing (13% of respondents, 16% of those who said A1 had positive impact beyond coursework). This is fewer than the number who pointed to confidence when asked how Arts One had impacted their work in other courses, but presumably if the course gave them confidence in these ways for their coursework, much of that would transfer to their lives beyond the university.
    • possible research areas: Same as above re: confidence, but it’s important to think about the value of confidence beyond how it can help students succeed in their university coursework.


The most important aspect of Arts One, according to student alumni

  • Most often cited as an answer to this question was having small classes (35% of respondents).
  • A close second was seminars and tutorials, which were cited by about the same number of people (31% of respondents mentioned seminars in response to this question, and 32% mentioned tutorials).
  • The next highest category was people who said something relating to the quality of the professors, a particular professor, or the lectures given by professors (26% of respondents).
  • Finally, a significant number of people said the most important thing was the ability to have close connections between students and professors (19%). This, of course, is closely related to having small classes (though small classes are not strictly required for such connections, they help facilitate them). The four-person tutorials are especially conducive to students working closely with professors and gaining confidence in speaking with them.
    • possible research areas
      • Many of the above are related: small classes (seminars and tutorials) and the ability to have a close connection between students and professors. Why is connecting with one's professor so important? What does it facilitate that being an anonymous face in a classroom, never speaking to the professor, does not? Does the fact that Arts One has no Teaching Assistants have a bearing on these issues, or would a close connection with Teaching Assistants yield similar results?


There is much to think about here. Obviously I’ve raised enough research areas to last a lifetime. I just need to pick the one I’d like to work on for the next few years…a daunting task!


Here is a more detailed report, without the possible research areas, and including the questions asked: Arts One Alumni Survey 2012, Report on Results


Peer Review of Teaching

The University of British Columbia is moving towards emphasizing and improving the practice of peer review of teaching. This website explains the project and its history:  http://ctlt.ubc.ca/about-isotl/programs-events/ubc-peer-review-of-teaching-initiative/

I recently received a draft set of guidelines for the Faculty of Arts. The draft is still in development, so what I say below may change. Still, I find the whole project very interesting and potentially quite valuable.

Explanation and evaluation of some specific aspects of the program:

1. Data sources for peer evaluation of teaching are wider than just one class visit. The guidelines give several options for other sources, stating that not all of them need to be used. Some options:
  • course materials, such as syllabi, assignment instructions, and possibly samples of student work
  • one or more meetings with the instructor
  • a meeting with students
  • a statement of teaching philosophy
  • past student evaluation results
  • contributions to curriculum or new course development
  • innovations in teaching practices and/or use of technology
  • evidence of professional development re: teaching, beyond the classroom
  • evidence of reflection upon teaching
  • teaching load (number and types of courses)
  • grad students supervised, grad student publications and awards, and information solicited from grad students

This seems an excellent way to get a better picture of someone's teaching capacities than visiting a single course meeting. Of course, it has to be handled carefully within departments: what is requested of each person in terms of documentation should be relatively uniform, to avoid perceptions of unfairness. Considerations of workload come in here too, since gathering and reviewing this information can take a significant amount of time and effort on the part of both the reviewer and the instructor.
