Category Archives: Scholarship of Teaching and Learning

OERI and some literature on open pedagogy

Open as in Not Closing Open, photo by Alan Levine, shared on Flickr under CC0.

I’m excited to be giving a lightning talk at the Open Education Research Institute hosted by Kwantlen Polytechnic University this week–thank you so much to Rajiv Jhangiani and Urooj Nizami from KPU for the invitation. I’ll also be acting as an OE research mentor for a group of participants in the Institute, which is a wonderful honour, though to be honest I still feel a bit of a novice myself in this area. I was trained as a researcher in philosophy, with no training in empirical research involving human participants, which is what I’ve had to learn along the way as I do research on open education. Not that all research on open education has to be empirical…there is a good deal of theoretical research of great significance as well!

But I have done some empirical research on open educational resources in particular, and I must say a big thank you to Rajiv Jhangiani, Jessie Key, Clint Lalonde, and Beck Pitt for my first intro to such research, as we worked on the 2016 BCcampus report: Exploring Faculty Use of OER in British Columbia Post-secondary Institutions. This was part of the work that Rajiv, Jessie and I did during our 2014-2015 BCcampus Open Textbook Fellowship program. I am also grateful to have received an Open Education Group OER Research Fellowship in 2015, where I learned a lot from John Hilton III and the many other OER Research Fellows as we met and discussed projects.

Of late, I’ve been working with a couple of colleagues on a research project about student perceptions of an open pedagogy project. Specifically, we surveyed students who created case studies for Forestry and Conservation courses, most of which were shared openly and with an open license on the UBC Open Case Studies website. We started this research back in 2018, administering surveys to students in three courses in Fall 2018 and Spring 2019. Then in 2019 we began coding the data, finishing up around the end of 2019 if memory serves, and then…COVID-19 hit and we dropped it altogether for a year.

We recently picked this project back up and are excited to report the results and write up an article to submit for publication. The lightning talk at OERI will be the first time I’ll be talking about this project to a wider audience. We’re not completely ready with full results; we have coded the data and have started pulling out a few themes, but we haven’t done a full analysis yet. So the lightning talk will focus on:

  • motivations, including the fact that (at the time) there was not a lot of research literature on student perceptions of open pedagogy projects
  • methods
  • a few preliminary results

That should easily be enough to fill the seven-minute time slot I have!

In the rest of this post, I’m basically starting on the literature review for the article we’ll be working on. What follows is a non-exhaustive review, with some quotes, of the literature on open pedagogy, open educational practices, students as producers, and student perceptions of open pedagogy. I can’t imagine we’ll use all of this in our article, but it’s useful to have it in one place!


SoTL Workshop at Lakehead University

I was invited to Lakehead University in Thunder Bay, Ontario to speak to a few different groups of people about educational leadership (they have a new teaching and educational leadership faculty stream there, similar to the one we have at UBC), and also about the Scholarship of Teaching and Learning (SoTL). As part of that, I led a workshop on getting started with SoTL.

I’m posting the slides, worksheets, and other resources here so they’re easily available to participants in the workshop, and to anyone else who is interested!

Slides & worksheets

Slides are available on Slideshare.net, and also in downloadable and editable PowerPoint format on OSF.

 

Here are the worksheets we used for the activities:

Other resources

General SoTL guides & introductions

Finding SoTL literature on particular topics

Where to publish

SoTL conferences

 

Presentation on SoTL research re: peer feedback

In mid-November I gave a presentation at the SoTL Symposium in Banff, Alberta, Canada, sponsored by Mount Royal University.

It’s a little difficult to describe this complex research, so I’ll let my (long) abstract for the presentation tell at least part of the story.


750-word abstract

Title: Tracking a dose-response curve for peer feedback on writing

There is a good deal of research showing that peer feedback can contribute to improvements in student writing (Cho & MacArthur, 2010; Crossman & Kite, 2012). Though intuitively one might think that students would benefit most from receiving peer comments on their written work, several studies have shown that student writing benefits both from comments given as well as comments received–indeed, sometimes the former more than the latter (Li, Liu & Steckelberg, 2010; Cho & MacArthur, 2011).

There are, however, some gaps in the literature on the impact of peer feedback on improving student writing. First, most studies published on this topic consider the effect of peer feedback on revisions to a single essay, rather than on whether students use peer comments on one essay when writing another essay. Cho and MacArthur (2011) is an exception: the authors found that students who wrote reviews of writing samples by students in a past course produced better writing on a different topic than those who either only read those samples or who read something else. In addition, there is little research on what one might call a “dose-response” curve for the impact of peer feedback on student writing—how are the “doses” of peer feedback related to the “response” of improvement in writing? It could be that peer feedback is more effective in improving writing after a certain number of feedback sessions, and/or that there are diminishing returns after quite a few sessions.

To address these gaps in the literature, we designed a research study focusing on peer feedback in a first-year, writing intensive course at a large university in North America. In this course students write an essay every two weeks, and they meet every week for a full year in groups of four plus their professor to give comments on each other’s essays (the same group stays together for half or the full year, depending on the instructor). With between 20 and 22 such meetings per year, students get a heavy dose of peer feedback sessions, and this is a good opportunity to measure the dose-response curve mentioned above. We can also test the difference in the dose-response curve for the peer feedback groups that change halfway through the year versus those that remain the same over the year. Further, we can evaluate the degree to which students use comments given to them by others, as well as comments they give to others, on later essays.

While at times researchers try to gauge improvement in student work on the basis of peer feedback by looking at coarse evaluations of quality before and after peer feedback (e.g., Sullivan & Pratt, 1996; Braine, 2001), because many things besides peer feedback could go into improving the quality of student work, more specific links between what is said in peer feedback and changes in student work are preferable. Thus, we will compare each student’s later essays with comments given to them (and those they gave to others) on previous ones, to see if the comments are reflected in the later essays, using a process similar to that described in Hewett (2000).

During the 2013-2014 academic year we ran a pilot study with just one of those sections (sixteen students, out of whom thirteen agreed to participate), to refine our data collection and analysis methods. For the pilot study we collected ten essays from each of the students who agreed to participate, the comments they received from their peers on those essays, and the comments they gave to their peers. For each essay, students received comments from three other students plus the instructor. We will use the instructor comments, first, to see whether student comments begin to approach instructor comments over time, and second, to isolate the things that only students commented on (not the instructor) to see whether students use those in their essays (or whether they mainly focus on the things that the instructor also said).

In this session, the Principal Investigator will report on the results of this pilot study and what we have learned about dealing with such a large data set, whether we can see any patterns from this pilot group of thirteen students, and how we will design a larger study on the basis of these results.


It turned out that we were still in the process of coding all the data when I gave the presentation, so we don’t yet have full results. We have coded all the comments on all the essays (10 essays from each of 13 participants), but are still coding the essays themselves (we had finished 10 essays each from 6 participants, for a total of 60 essays).

I’m not sure the slides themselves tell the whole story very clearly, but I’m happy to answer questions if anyone has any. I’m saving up writing a narrative about the results until we have the full results in (hopefully in a couple of months!).

We’re also putting in a grant proposal to run the study with a larger sample (we didn’t get the grant we applied for last year…we’ll try again this year).

Here are the slides!

Authentic assessment and philosophy

In order to prepare for a meeting of the Scholarship of Teaching and Learning Community of Practice, I recently started reading a few articles on “authentic assessment.” I have considered this idea before (see short blog post here), but I thought I’d write a bit more about just what authentic assessment is and how it might be implemented in philosophy.

Authentic assessment–what

A brief overview of authentic assessment can be found in Svinicki (2004). According to Svinicki, authentic assessment “is based on student activities that replicate real world performances as closely as possible” (23). She also lists several criteria for assessments to be authentic, from Wiggins (1998):

 1. The assessment is realistic; it reflects the way the information or skills would be used in the “real world.”

2. The assessment requires judgment and innovation; it is based on solving unstructured problems that could easily have more than one right answer and, as such, requires the learner to make informed choices.

3. The assessment asks the student to “do” the subject, that is, to go through the procedures that are typical to the discipline under study.

4. The assessment is done in situations as similar to the contexts in which the related skills are performed as possible.

5. The assessment requires the student to demonstrate a wide range of skills that are related to the complex problem, including some that involve judgment.

6. The assessment allows for feedback, practice, and second chances to solve the problem being addressed. (23-24)

She points to an example of how one might assign a paper as an authentic assessment: rather than just having students write an essay about law generally (perhaps legal theory?), one might ask them to write an essay arguing for why a particular law should be changed–or, even better, to write a letter to legislators making that argument (25).

Turns out there are numerous lists of what criteria should be used for authentic assessment, though (not surprising?). I have only looked at a few articles, and only those that are available for easy reading online (i.e., not books, or articles in books, or articles in journals to which our library does not have a digital subscription–I know this is lazy, but I’m not doing a major lit review here!). Here’s what I’ve found.

In Ashford-Rowe et al. (2014), eight questions are given that are said to get at the essential aspects of authentic assessment. These were first developed from a literature review on authentic assessment, then subjected to evaluation and discussion by several experts in educational design and assessment, and then used to redesign a module for a course, on which the authors gathered student and instructor feedback to determine whether the redesign solved some of the problems faced in the earlier design.

(1) To what extent does the assessment activity challenge the student?

(2) Is a performance, or product, required as a final assessment outcome?

(3) Does the assessment activity require that transfer of learning has occurred, by means of demonstration of skill?

(4) Does the assessment activity require that metacognition is demonstrated?

(5) Does the assessment require a product or performance that could be recognised as authentic by a client or stakeholder? (accuracy)

(6) Is fidelity required in the assessment environment? And the assessment tools (actual or simulated)?

(7) Does the assessment activity require discussion and feedback?

(8) Does the assessment activity require that students collaborate? (219-220)

Regarding number 3, transfer of learning, the authors state: “The authentic assessment activity should support the notion that knowledge and skills learnt in one area can be applied within other, often unrelated, areas” (208). I think the idea here is that the knowledge and skills being assessed should be ones that can transfer to environments beyond the academic setting, which is, I take it, the whole point of authentic assessment.

Number 4, metacognition, has to do with self-assessment: monitoring one’s own progress and the quality of one’s work, reflecting on what one is doing and how it is useful beyond the classroom, and so on.

Number 6, regarding fidelity, has to do with the degree to which the environment in which the assessment takes place, and the tools used, are similar to what will be used and how, outside of the academic setting.

The point of number 8, collaboration, is that, as the authors state, “The ability to collaborate is indispensable in most work environments” (210). So having assessments that involve collaboration would be important to their authenticity for many work environments. [Though not all, perhaps. And not all authentic assessment needs to be tied to the workplace, right? Couldn’t it be that students are developing skills and attitudes that they can use in other aspects of their lives outside of an educational context?]

Gulikers et al. (2004) define authentic assessment as “an assessment requiring students to use the same competencies, or combinations of knowledge, skills, and attitudes, that they need to apply in the criterion situation in professional life” (69). They took a somewhat different approach to determining the nature of authentic assessment than that reflected in the two lists above. They, too, started with a literature review, but from it they focused on five dimensions of authentic assessment, each of which can vary in its authenticity:

(a) the assessment task

(b) the physical context

(c) the social context

(d) the assessment result or form

(e) the assessment criteria (70)

Whereas the above two lists look at the kinds of qualities an assessment should have to count as “authentic,” this list looks at several dimensions of assessments and then considers what sorts of qualities in each dimension would make an assessment more or less authentic.

So, for example, an authentic task would be, given their definition of authentic assessment as connected to professional practice, one that students would face in their professional lives. Specifically, they define an authentic task as one that “resembles the criterion task with respect to the integration of knowledge, skills, and attitudes, its complexity, and its ownership” (71), where ownership has to do with who develops the problem and solution, the employee or the employer (I think that’s their point).

The physical context has to do with what sorts of physical objects people will be working on, and also the tools they will generally be using. It makes assessments less authentic if we deprive students of tools in academic settings that they will be allowed to use in professional settings, or give them tools in academic settings that they generally won’t have access to in professional settings. Time constraints for completing the task are also relevant here, for if professionals have days to complete a task, asking students to do it in hours is less authentic.

The social context has to do with how one would be working with others (or not) in the professional setting. They specify that if the task in the professional setting would involve collaboration, then the assessment should as well, but not otherwise.

The assessment result or form has to do with the product created through the task. It should be something that students could be asked to do in their professional lives, something that “permits making valid inferences about the underlying competencies,” which may require more than one task, with a variety of “indicators of learning” (75).

Finally, the criteria for the assessment should be similar to those used in a professional setting and connected to professional competencies.

 

Authentic assessment and philosophy

Though Gulikers et al. (2004) tie authentic assessment pretty closely to professional life, and thus what they say might seem to be most relevant to disciplines where professional practice is directly part of courses (such as medicine, business, architecture, clinical psychology, and more), the overview in Svinicki (2004) suggests that authentic assessments could take place in a wide variety of disciplines. What could it look like in philosophy?

I think this is a somewhat tricky question, because unlike some other fields, where what one studies is quite directly related to a particular kind of activity one might engage in after receiving a degree, philosophy is a field in which we practice skills and develop attitudes that can be used in a wide variety of activities, both within and beyond one’s professional life. What are those skills and attitudes? Well, that’s a whole different issue that could take months to determine (and we’re working on some of that by developing program outcomes for our major in philosophy here at UBC), but for now let’s just stick with the easy, but overly vague answers like: the ability to reason clearly; to analyze problems into their component parts and see interrelationships between these; to consider implications of particular beliefs or actions; to make a strong case for one approach to a problem over another; to identify assumptions lying behind various beliefs, approaches, practices; to locate the fundamental disagreements between two or more “sides” to a debate and thereby possibly find a way forward; to communicate clearly, orally and in writing; to take a charitable attitude towards opponents and focus on their arguments rather than the persons involved; and more.

So what could it mean to do a task in philosophy in a similar way, with similar tools, for example, as what one might encounter in a work environment? Because the skills and attitudes developed in philosophy might be used in many different work environments, which one do we pick? Or, even more broadly, since many of these skills and attitudes can be practiced in everyday life, why restrict ourselves to what one might do in a work environment?

Perhaps, though, this means we have a lot more leeway, which could be a good thing. Maybe authentic assessments in philosophy could be anything that connects to what one might do with philosophical thinking, speaking, and writing skills outside of the educational setting. And if several courses included them during a student’s educational career, students could perhaps see how philosophy can be valuable in many aspects of their lives, having done different sorts of authentic assessments applying those skills to different kinds of activities.

When I came up with a couple of possible authentic assessments in philosophy courses last summer, I believe I was thinking along these lines–something that the students would do that would mirror an activity they might engage in outside of class. One, which I implemented this year in my moral theory course, asked students to apply the moral theories we’re studying to a moral dilemma or issue of some kind. This isn’t exactly like an authentic assessment, though, because I’m not sure that I would expect anyone in their everyday lives to read Kant and Mill and then try to apply them to moral dilemmas they face. Maybe some people do, but I’m not really sure that’s the main value of normative moral theories (I’m still working on what I think that value is, exactly).

Another one of the suggested assignments from that earlier blog post was that students would reflect on how they use philosophical thinking or speaking or writing in their lives outside of the course. That one isn’t asking them to do so, though, so it’s not like mirroring a task they might use outside the class; it’s just asking them to reflect on how they already do so.

So I think I need to consider further just what an authentic assessment in philosophy might look like (the one from Svinicki (2004), above, about writing a letter to legislators to change a law is a good candidate), and how I might include one in a course I teach in the future. Possible ideas off the top of my head:

  • Take a discussion of a moral issue (for example) in the media and clearly lay out the positions on the various “sides” and what arguments underlie those. Evaluate those arguments. (We do this sort of thing all the time in philosophy, but not always by starting with media reports, which would be the sort of thing one might do in one’s everyday life.) Or, identify assumptions in those positions.
  • Write a letter to the editor or an op-ed piece about some particular moral or other issue, laying out clear arguments for your case.
  • Participate in or even facilitate a meeting of a Socrates Cafe, a philosophical discussion held in a public place for anyone who is interested to join.
  • Make a case to the university, or your employer, or someone else for something that you’d like to see changed. Give a clear, logical argument for why it should be changed, and how. Can collaborate with others on this project.

Okay, this is hard.

And it occurs to me that some of what we already do might be like an authentic activity, even if not an authentic assessment. For example, when we ask students to engage in philosophical discussion in small groups during class, this is the sort of thing they might also do in their lives outside of class (I don’t know how many do, but we are giving them practice for such activities in the future).

Hmmm…gotta think more on this…

 

Any ideas are welcome, in the comments below!

 

Works Cited

Ashford-Rowe, K., Herrington, J. & Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, 39(2), 205-222. DOI: 10.1080/02602938.2013.819566

Gulikers, J.T.M., Bastiaens, T.J., & Kirschner, P.A. (2004). A five-dimensional framework for authentic assessment. Educational Technology Research and Development, 52(3), 67-86. Available on JSTOR, here: http://www.jstor.org/stable/30220391

Svinicki, M. D. (2004). Authentic assessment: Testing in reality. New Directions for Teaching and Learning, 100, 23-29. Available behind a paywall, here: http://onlinelibrary.wiley.com/doi/10.1002/tl.167/abstract

Wiggins, G. (1998). Educative Assessment: Designing Assessments to Inform and Improve Student Performance. San Francisco: Jossey-Bass.

Providing feedback to students for self-regulation

On Nov. 21, 2013, I did a workshop with graduate students in Philosophy at UBC on providing effective feedback on essays. I tried to ground as much of it as I could in work from the Scholarship of Teaching and Learning.

Here are the slides for the workshop (note, we did more than this…this is just all I have slides for):

 

Here is the works cited for the slides:

Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31(2), 219-233.

Chanock, K. (2000). Comments on essays: Do students understand what tutors write? Teaching in Higher Education, 5(1), 95-105.

Lizzio, A. & Wilson, K. (2008). Feedback on assessment: Students’ perceptions of quality and effectiveness. Assessment and Evaluation in Higher Education, 33(3), 263-275.

Lunsford, R.F. (1997). When less is more: Principles for responding in the disciplines. New Directions for Teaching and Learning, 69, 91-104.

Nicol, D.J. & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.

Sadler, D.R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.

Walker, M. (2009). An investigation into written comments on assignments: Do students find them usable? Assessment and Evaluation in Higher Education, 34(1), 67-78.

Weaver, M.R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment and Evaluation in Higher Education, 31(3), 379-394.

Evaluating a cMOOC using Downes’ four “process conditions”

This is the third in a series of posts on a research project I’m developing on evaluating cMOOCs. The first can be found here, and the second here. In this post I consider an article that uses Downes’ four “process conditions” for a knowledge-generating network to evaluate a cMOOC. In a later post I’ll consider another article that takes a somewhat critical look at these four conditions as applied to cMOOCs.

Mackness, J., Mak, S., & Williams, R. (2010). The ideals and reality of participating in a MOOC. In Proceedings of the 7th International Conference on Networked Learning 2010 (pp. 266–275). Retrieved from http://eprints.port.ac.uk/5605/

Connexion, Flickr photo by tangi_bertin, licensed CC-BY

In this article, Mackness et al. report findings from interviews of participants in the CCK08 MOOC (Connectivism and Connective Knowledge 2008; see here for a 2011 version of this course) insofar as these relate to Downes’ four process conditions for a knowledge-generating network: autonomy, diversity, openness, interactivity. In other words, they wanted to see if these conditions were met in CCK08, according to the participants. To best understand these results, if you’re not familiar with Downes’ work, it may be helpful to read an earlier post of mine that addresses and tries to explain these conditions.

Specifically, the researchers asked: “To what extent were autonomy, diversity, openness and connectedness/interactivity a reality for participants in the CCK08 MOOC and how much they were affected by the course design?” (271). They concluded that, in this particular course at least, there were difficulties with all of these factors.

Data

Data for this study came from 22 responses by participants (including instructors) to email interview questions (out of 58 who had self-selected, on a previous survey sent to 301 participants, to be interviewed). Unfortunately, the interview questions are not provided in the paper, so it’s hard to tell what the respondents were responding to. I find it helpful to see the questions so as to better understand the responses given, and to be able to critically review an article’s interpretation of those responses.

Results

Autonomy

The researchers note that most respondents valued autonomy in a learning environment: “Overall, 59% of interview respondents (13/22) rated the importance of learner autonomy at 9 or 10 on a scale of 1-10 (1 = low; 10 = high)” (269). Unfortunately, I can’t tell if this means they valued the kind of autonomy they experienced in that particular course, or whether they valued the general idea of learner autonomy in an abstract way (but how was it defined?). Here is one place, for example, where providing the question asked would help readers understand the results.

Mackness et al. then argue that nevertheless, some participants (but how many out of the 22?) found the experience of autonomy in CCK08 to be problematic. The researchers provided quotes from two participants stating that they would have preferred more structure and guidance, and one course instructor who reported that learner autonomy led to some frustration that what s/he was trying to say or do in the course was not always “resonating with participants” (269).

The authors also provide a quote from a course participant who said they loved being able to work outside of assessment guidelines, but then comment on that statement by saying that “autonomy was equated with lack of assessment”–perhaps, but not necessarily (maybe they could get good feedback from peers, for example? Or maybe the instructors could still assess something outside of the guidelines? I don’t know, but the statement doesn’t seem to mesh, by itself, with the interpretation). Plus, the respondent saw this as a positive thing, whereas the rhetorical framing of the interpretation suggests it was a negative, a difficulty with autonomy. I’m not seeing that.

The researchers conclude that the degree of learner autonomy in the course was affected by the following:

levels of fluency in English, the ‘expertise divide’, assessment for credit participants, personal learning styles, personal sense of identity and the power exerted, either implicitly or explicitly, by instructors through their communications, status and reputation, or by participants themselves….” (271)

In addition, there were reports of some “trolling” behaviour on the forums, which led some participants to “retreat to their blogs, effectively reducing their autonomy” (271). The authors point out that some constraint on autonomy in the forums through discouraging or shutting down such behaviour may have actually promoted autonomy amongst more learners.

Diversity

The researchers note that learner diversity was certainly present in the course, including diversity in geography, language, age, and background. They give examples of diversity “reflected in the learning preferences, individual needs and choices expressed by interview respondents” (269).

However, diversity was also a problem in at least one respect, namely that not all learners had the “skills or disposition needed to learn successfully, or to become autonomous learners in a MOOC” (271). This is less of a problem if there is significant scaffolding, such as support for participants’ “wayfinding in large online networks,” but CCK08 was instead designed to have “minimal instructor intervention” (271). In addition, in order to promote sharing in a network like a cMOOC, a certain amount of trust needs to be built up, the authors point out; and the larger and more diverse the network, the more work may be needed to help participants build that trust.

Openness

CCK08 was available, for free, to anyone who wanted to participate (without receiving any university or other credits), so long as they had a reliable web connection. The interview data suggests that participants interpreted “openness” differently: some felt they should (and did) share their work with others (thus interpreting openness as involving sharing one’s work), while some worked mostly alone and did not do much or any sharing–thereby interpreting openness, the authors suggest, merely as the idea that the course was open for anyone with a reliable web connection to participate in. The authors seem to be arguing here that these differing conceptions of openness are problematic because the “implicit assumption in the course was that participants would be willing or ready to give and receive information, knowledge, opinions and ideas; in other words to share freely” (270), but not everyone got that message. They point to a low rate of active participation: only 14% of the total enrolled participants (270).

They also note that amongst participants there was no “common understanding of openness as a characteristic of connectivism” (270), implying that there should have been. But I wonder if a conscious understanding of openness, and the ability to express it as a clear concept, is necessary for a successful connectivist course. This is just a question at this point–I haven’t thought it through carefully. I would at least have liked to see more on why that should be considered a problem, as well as whether the respondents were asked specifically for their views of openness. The responses given in this section of the paper don’t refer to openness at all, which makes me think the researchers may have inferred understandings of openness from other things the respondents said. That’s not a problem by itself, of course, but one might have gotten different answers by asking directly for their views of openness–answers that might therefore have been more relevant to concluding whether or not participants shared a common understanding of openness.

Finally, Mackness et al. argue that some of the barriers noted above also led to problems in regard to participants’ willingness to openly communicate and share work with others: this can be “compromised by lack of clarity about the purpose and nature of the course, lack of moderation in the discussion forums, which would be expected on a traditional course, and the constraints (already discussed in relation to autonomy and diversity) under which participants worked” (272).

Interactivity

There were significant opportunities for interaction, for connecting with others, but the authors note that what is most important is not whether (or how much) people connected with others, but what these connections made possible. Respondents noted some important barriers to connecting, as well as problems that meant some of the interactions did not yield useful benefits. As noted above, some participants pointed to “trolling” behaviour on the forums, and one said there were some “patronising” posts as well–which, the respondent said, likely led some participants to disengage from that mode of connection. Another respondent noted differences in expertise levels that led him/her to disengage when s/he could no longer “understand the issues being discussed” (271).

The researchers conclude that connectivity alone is not sufficient for effective interactivity–which of course makes sense–and that the degree of effective interactivity in CCK08 was not as great as it might have been with more moderation by instructors. However, the size of the course made this unfeasible (272).

One thing I would have liked to see in this analysis of “interactivity” is what Downes focuses on for this condition, namely the idea that the kind of interactivity needed is that which promotes emergent knowledge–knowledge that emerges from the interactions of the network as a whole, rather than from individual nodes (explained by Downes here and here, the first of which the authors themselves cite). This is partly because, since they used Downes’ framework, it would make sense to evaluate the course against the specifics of what he means by “interactivity.” It’s also partly because I just really want to see how one might try to evaluate that form of interactivity.

Conclusion

Mackness et al. conclude that

some constraints and moderation exercised by instructors and/or learners may be necessary for effective learning in a course such as CCK08. These constraints might include light touch moderation to reduce confusion, or firm intervention to prevent negative behaviours which impede learning in the network, and explicit communication of what is unacceptable, to ensure the ‘safety’ of learners. (272)

Though, at the same time, they point to the small size of their sample, and the need for further studies of these sorts of courses to validate their findings.

That makes sense to me, from my unstudied perspective as someone who has participated in a few large open online courses and one small-ish one, one of which seemed modeled to some degree along connectivist lines (ETMOOC). There was some significant scaffolding in ETMOOC, through starting off with discussions of connected learning and help with blogging and commenting on blogs. There wasn’t clear evidence of the course collaborators moderating discussions (several people collaborated on each two-week topic, acting in the role of “instructors” for a brief time), except insofar as some of the collaborators were very actively present on Twitter and in commenting on others’ blogs, being sure to tweet, retweet, bookmark to Diigo, or post to Google+ especially helpful or thought-provoking things. We didn’t have any trolling behaviour that I was aware of, and we also didn’t have a discussion forum. But IF there were problems in the Google+ groups or in Twitter chats, I would have hoped one or more of the collaborators would have actively worked to address them (and I think they would have, though since it didn’t happen, to my knowledge, I can’t be certain).

Some further thoughts 

If one decides that Downes’ framework is the right one to use for evaluating an open online course like a cMOOC (which I haven’t decided yet; I still need to look more carefully at his arguments for it), it would make sense to unpack the four conditions more carefully and collect participants’ views on whether those specific ways of thinking about autonomy, diversity, openness and interactivity were manifested in the course. The discussion of these four conditions is at times rather vague here. What, more specifically, does learner “autonomy” mean, for example? Even if they don’t want to use Downes’ own views of autonomy, it would be helpful to specify what conception of autonomy they’re working with. I’ve also noted a similar point about interactivity, about which the discussion in the paper is also somewhat vague–what sort of interactivity would have indicated success, exactly, beyond just participants communicating with each other on blogs or forums?

I find it interesting that in his most recent writing on the topic of evaluating cMOOCs (see the longer version attached to this post, and my discussion of this point here (and the helpful comments I’ve gotten on that post!)), Downes argues that it should be some kind of expert in cMOOCs or in one of the fields/topics they cover that evaluates their quality, while here the authors looked to the participants’ experiences. Interesting, because it makes sense to me to actually focus on the experiences of the participants rather than to ask someone who may or may not have taken the course. That is, if one wants to find out if the course was effective for participants.

Still, I can see how some aspects of these conditions might be measured without looking at what participants experienced, or at least in other ways in addition to gathering participants’ subjective evaluations. The degree to which the course is “open,” for example, might have some elements that could be measured beyond or in addition to what participants themselves thought. Insofar as openness involves the course being open to anyone with a reliable internet connection to participate, without cost, and the ability to move into and out of the course easily as participants choose, that could be partly a matter of looking at the design and platform of the course itself, as well as participants’ evaluations of how easy it was to get into and out of the course. If openness also involves the sharing of one’s work, one could look to see how much of that was actually done, as well as ask participants about what they shared, why, and how (and what they did not, and why).

I just find it puzzling that in that recent post Downes doesn’t talk about asking participants about their experiences in a cMOOC at all. I’m not sure why.

[I just read a recent comment on an earlier post, which I haven’t replied to yet, which discusses exactly this point–it makes no sense to leave out student experiences. Should have read and replied to that before finalizing this post!]


Downes on evaluating cMOOCs

In my previous post I considered some difficulties I’m having in trying to figure out how to evaluate the effectiveness of cMOOCs. In this one I look at some of the things Stephen Downes has to say about this issue, and one research paper that uses his ideas as a lens through which to consider data from a cMOOC.

Stephen Downes on the properties of successful networks

This post by Stephen Downes (which was a response to a question I asked him and others via email) describes two ways of evaluating the success of a cMOOC through asking whether it fulfills the properties of successful networks. One could look at the “process conditions,” which for Downes are four: autonomy, diversity, openness, and interactivity. And/or, one could look at the outcomes of a cMOOC, which for Downes means looking at whether knowledge emerges from the MOOC as a whole, rather than just from one or more of its participants. I’ll look briefly at each of these ways of considering a cMOOC in what follows.

The four “process conditions” for a successful network are what Downes calls elsewhere a “semantic condition” that is required for a knowledge-generating network, a network that generates connective knowledge (for more on this, see longer articles here and here). This post discusses them succinctly yet with enough detail to give a sense of what they mean (the following list and quotes come from that post).

  • Autonomy: The individuals in the network should be autonomous. One could ask, e.g.: “do people make their own decisions about goals and objectives? Do they choose their own software, their own learning outcomes?” This is important in order that the participants and connections form a unique organization rather than one determined from one or a few individuals, in which knowledge is transferred in as uniform a way as possible to all (this point is made more explicitly in the longer post attached here).
  • Diversity: There must be a significant degree of diversity in the network for it to generate anything new. One could ask about the geographical locations of the individuals in the network, the languages spoken, etc., but also about whether they have different points of view on issues discussed, whether they have different connections to others (or does everyone tend to have similar connections), whether they use different tools and resources, and more.
  • Openness: A network needs to be open to allow new information to flow in and thereby produce new knowledge. Openness in a community like a cMOOC could include the ease with which people can move into and out of the community/course, the ability to participate in different ways and to different degrees, the ability to easily communicate with each other. [Update June 14, 2013: Here Downes adds that openness also includes sharing content, both that from within the course to those outside of it, and that gained from outside (or created by oneself inside the course?) back into the course.]
  • Interactivity: There should be interactivity in a network that allows for knowledge to emerge “from the communicative behaviour of the whole,” rather than from one or a few nodes.

To look at the success of a cMOOC from an “outcomes” perspective, you’d try to determine whether new knowledge emerged from the interactions in the community as a whole. This idea is a bit difficult for me to grasp, and I am having trouble understanding how I might determine if this sort of thing has occurred. I’ll look at one more thing here to try to figure this out.

Downes on the quality of MOOCs

Recently, Downes has written a post on the blog for the “MOOC quality project” that discusses how he thinks it might be possible to say whether a MOOC was successful or not, and in it he discusses the process conditions and outcomes further (to really get a good sense of his arguments, it’s best to read the longer version of this post, which is linked to the original).

Downes argues in the longer version that it doesn’t make sense to try to determine the purpose of MOOCs (qua MOOCs, by which I think he means as a category rather than as individual instances) based on “the reasons or motivations” of those offering or taking particular instances of them. This is because people may have varying reasons and motivations for creating and using MOOCs, which need not impinge on what makes for a good MOOC (just as people may use hammers in various ways–his example–that don’t impinge on whether a particular hammer is a good hammer). Instead, he argues that we should look at “what a successful MOOC ought to produce as output, without reference to existing … usage.”

And what MOOCs ought to produce as output is “emergent knowledge,” which is

constituted by the organization of the network, rather than the content of any individual node in the network. A person working within such a network, on perceiving, being immersed in, or, again, recognizing, knowledge in the network thereby acquires similar (but personal) knowledge in the self.

Downes then puts this point differently, focusing on MOOCs:

[A] MOOC is a way of gathering people and having them interact, each from their own individual perspective or point of view, in such a way that the structure of the interactions produces new knowledge, that is, knowledge that was not present in any of the individual communications, but is produced as a result of the totality of the communications, in such a way that participants can through participation and immersion in this environment develop in their selves new (and typically unexpected) knowledge relevant to the domain.

He then argues that the four process conditions discussed previously usually tend to produce this sort of emergent knowledge, in the ways suggested in the above list. But properties like diversity and openness are rather like abstract concepts such as love or justice, in that they are not easily “counted” but rather need to be “recognized”: “A variety of factors–not just number, but context, placement, relevance and salience–come into play (that is why we need neural networks (aka., people) to perceive them and can’t simply use machines to count them.”

So far, so good; one might think it possible to come up with a way to evaluate a MOOC by looking at these four process conditions, and then assume that if they are in place, emergent knowledge is at least more likely to result (though it may not always do so). It would not be easy to figure out how to determine if these conditions are met, but one could come up with some ways to do so that could be justified pretty well, I think (even though there might be multiple ways to do so).

MOOCs as a language

But Downes states that while such an exercise may be useful when designing a course, it is less so when evaluating one after the fact–I’m not sure why this should be the case, though. He states that looking at the various parts of a course in terms of these four conditions (such as the online platform, the content/guest speakers, and more) could easily become endless–one could look at many, many aspects of a MOOC this way. But I don’t see why that would be more problematic in evaluating a course than in designing one.

Instead, Downes suggests we take a different tack in measuring success of MOOCs. He suggests we think of MOOCs as a language, “and the course design (in all its aspects) therefore as an expression in that language.” This is meant to take us away from the idea of using the four process conditions above as a kind of rubric or checklist in a mechanical way. The point rather is for someone who is already fluent in either MOOC design or the topic(s) being addressed in a MOOC to be able to look at the MOOC and the four conditions and “recognize” whether it has been successful or not. Downes states that “the bulk of expertise in a language–or a trade, science or skill–isn’t in knowing the parts, but in fluency and recognition, cumulating in the (almost) intuitive understanding (‘expertise’, as Dreyfus and Dreyfus would argue)” (here Downes refers to: http://www.sld.demon.co.uk/dreyfus.pdf).

So I think the idea here is that once one is fluent in the language of MOOCs or the “domain or discipline” of the topics they are about, one should be able to read and understand the expression in that language that is the course design, and to determine the quality of the MOOC by using the four conditions as a kind of “aid” rather than “checklist”. But to be quite honest, I am still not sure what it means, exactly, to use them as an “aid.” And this process suggests relying on those who have developed some degree of expertise in MOOCs to be able to make the judgment, thereby making the decision of successful vs. unsuccessful MOOCs come only from a set of experts.

Perhaps this could make sense if we think of MOOCs as the product of some artisanal craft, like swordmaking–maybe it really is only the experts who can determine their quality, because perhaps there is no way to set out, in a list of necessary and sufficient conditions, what is needed for a successful MOOC, just as it’s difficult (or impossible) to do so for a high-quality sword (I’m just guessing on that one). Perhaps there are so many different possible ways of having a high-quality MOOC/sword, with some aspects linked to individual variations, that it’s impossible to describe each possible variation and what aspects of quality would be required for it. It may be that no one can possibly know in advance what all the possible variations of a successful MOOC/sword are, but that these can be recognized later.

But I’m not yet convinced that must be the case for MOOCs, at least not from this short essay. And I expect I would benefit from a closer reading of Downes’ other work, which might help me see why he’s going in this direction here. It would also help me see why he thinks the process conditions for a knowledge-generating network should be the ones he suggests.

Using Downes’ framework to evaluate the effectiveness of a cMOOC

This is a bit premature, as I admit I don’t understand it in its entirety, but I want to put out a few preliminary ideas. I’m leaving aside, for the moment, the idea of MOOCs as a language until I figure out more precisely why he thinks we should look at them that way, and then decide if I agree. I’m also leaving aside for the moment the question of whether I think the process conditions he suggests are really the right ones–I haven’t evaluated them or the reasons behind them and thus can’t say one way or the other at this point.

The four process conditions

One would have to figure out exactly how to define Autonomy, Diversity, and Openness, which is no easy task, but it seems possible to come to a justifiable (though not final or probably perfect) outline of what those mean, considering what might make for a knowledge-generating network. It might be a long and difficult process to do so, but at least possible, I think. Then, it would be fairly straightforward to devise a manageable (and only ever partial) list of things one could ask about, measure, humanly “recognize” (in the sense of not using a checklist mechanically…though again, I’m not entirely sure what that means) to see if a particular cMOOC fit these three criteria. Again, I have no idea how to do any of this right now, but I think it could be done.

But I am still unsure about the final one: interactivity. This is because it’s not just a matter of people interacting with each other; rather, Downes emphasizes that what is needed is interaction that allows for emergent knowledge. So to figure this one out, one already needs to understand what emergent knowledge looks like and how to recognize if it has happened. I understand the idea of emergent knowledge in an abstract sense, but it’s hard to know how I would figure out if some knowledge had emerged from the communicative interactions of a community rather than from a particular node or nodes. How would I tell if, as quoted above, “the structure of the interactions produce[d] new knowledge, that is, knowledge that was not present in any of the individual communications, but [was] produced as a result of the totality of the communications”? Or, to take another quote from the longer version of the post Downes did for the “MOOC quality project”, how would I know if “new learning occur[red] as a result of this connectedness and interactivity, it emerge[d] from the network as a whole, rather than being transmitted or distributed by one or a few more powerful members”?

I honestly am having a hard time figuring out where/how to look for knowledge that wasn’t present in any of the individual communications, but emerges from the totality of them. But I think part of the problem here is that I don’t understand enough about Downes’ view of connectivism and connectivist knowledge. I knew I should take a closer look at connectivism before trying to tackle the question of evaluating cMOOCs! Guess I’ll have to come back to this after doing a post or two on Downes’ view of connectivism & connective knowledge.

Conclusion

So clearly I have a long way to go to understand exactly what Downes is suggesting and why, before I can even decide if this would be a good framework for evaluating a cMOOC.

In a later post I will look at two research papers that look at cMOOCs through the lens of Downes’ four process conditions, to see how they have interpreted and used these.

I welcome comments on anything I’ve said here–anything I’ve gotten wrong, or any suggestions on what I’m still confused about?


Difficulties researching the effectiveness of cMOOCs

As noted in an earlier post, I have submitted some proposals for conference presentations on researching the effectiveness of connectivist MOOCs, or cMOOCs (see another one of my earlier posts for what a cMOOC is). I am using this post (and one or two later ones) to try to work through how one might go about doing so, and the problems I’ve considered only in a somewhat general way previously. I need to think things through by writing, so why not do that in the open?

I had wanted to think more carefully about connectivism before moving to some research questions about connectivist MOOCs, but for various reasons I need to get something worked out about possible research questions as soon as I can, so I’ll return to looking at connectivism in later posts.

The general topic I’m interested in (at the moment)

And I mean general. I want to know whether we can determine whether a cMOOC has been “effective” or “successful.” That’s so general as to mean almost nothing.

What might help is some specification of the purposes or goals of offering a particular cMOOC, so one could see if it has been effective in achieving those. This could be taken from any of a number of perspectives, such as:

  • If an institution is offering a cMOOC, what is the institution’s purpose in doing so? This is not something I’m terribly interested in at the moment.
  • What do those who are designing/planning/facilitating the cMOOC hope to get out of doing so, for themselves? This is also not what I’m particularly interested in for a research project.
  • What do those who are designing/planning/facilitating the cMOOC hope participants will get out of it? There are likely some reasons, articulated or not, why the designers thought a cMOOC would be effective for participants in some way, and thus decided to offer a cMOOC at all. This is closer to what I’m interested in, but there’s a complication.

The connectivist MOOC model as implemented so far by people such as Dave Cormier, Alec Couros, Stephen Downes and George Siemens encourages participants to set their own goals and purposes for participation, rather than determining what these are to be for all participants (see, e.g., McAuley, Stewart, Siemens, & Cormier, 2010 (pp. 4-5, 40); see The MOOC Guide for a history of cMOOC-type courses, and lists of more recent connectivist MOOCs here and here). As Stephen Downes puts it:

In the MOOCs we’ve offered, we have said very clearly that you (as a student) define what counts as success. There is no single metric, because people go into the course for many different purposes. That’s why we see many different levels of activity ….

Further, just what a cMOOC will be like, where it goes, what people talk about, depends largely on the participants–even though there are often pre-set topics and speakers in advance, the rest of what happens is mostly up to what is written, discussed, shared amongst the participants. The ETMOOC guide for participants emphasizes this:

What #etmooc eventually becomes, and what it will mean to you, will depend upon the ways in which you participate and the participation and activities of all of its members.

Thus, it’s hard to say in advance what participants might get out of a particular cMOOC, in part because it’s impossible to say in advance what the course will actually be like (beyond the scheduled presentations, which are only one of many parts of a cMOOC).

Some possible directions for research questions

Developing connections with other people

Photo Credit: Graylight via Compfight CC-BY

I at first thought that perhaps one could say cMOOCs should allow participants to, at the very least, develop a set of connections with other people that can be used for sharing advice and information, commenting on each other's work, collaborating, and more. As discussed in my blog post on George Siemens' writings on connectivism, what may be most important to a course that is run on connectivist principles is not the content that is provided, but the fostering of connections, along with the skills for developing new ones and maintaining those one has, for the sake of being able to learn continually into the future.

And even though I understand what Downes and others say about participants in cMOOCs determining their own goals and deciding for themselves whether the course has been a success, cMOOCs have been and continue to be designed in certain ways for certain reasons, at least some of which most likely have to do with what participants may get out of the courses. Some of those who have been involved in designing cMOOCs have emphasized the importance of forming connections between people, ideas and information.

Stephen Downes talks about this in “Creating the Connectivist Course” when he says that he and George Siemens tried to make the “Connectivism and Connective Knowledge” course in 2008 “as much like a network as possible.” In this video on how to succeed in a MOOC, Dave Cormier emphasizes the value of connecting with others in the course through commenting on their blog posts, participating in discussion fora, and other ways. The connections made in this way are, Cormier says, “what the course is all about.” Now, of course, Cormier states at the beginning and end of the video that MOOCs are open to different ways of success and this is just “his” way, but the tone of the video suggests that it would be useful for others as well. Cormier says something similar in this video on knowledge in a MOOC: participants in a MOOC “are [ideally?] going to come out with a knowledge network, a network of people and ideas that’s going to carry long past the end of [the] course date.”

So it made sense to me at first to consider asking about the effectiveness or success of a cMOOC through looking at whether and how participants made connections with each other, and especially whether those continue beyond the end of the course. But again, there are some complications, besides the important questions of just how to define “connections” so as to decide what data to gather, and then the technical issues regarding how to get that data.

Would we want to say that the course succeeded more if more people made connections to others, rather than fewer? And how many people should each participant ideally connect with? I don't think more is necessarily better, but where do we draw the line to say that x number of people made y number of connections with others, so the course has been a success?

This is getting pedantic, but I'm trying to express the point that when you really dig in and try to design a research project, you would have to address this kind of question, and it quickly starts to feel ridiculous. It feels ridiculous because there are so many different ways that connecting with other people could be valuable: for one person, a single connection may end up being far more valuable than 50 connections are for another. So much depends on the nature and context of those connections, and those are going to be highly individual and likely impossible to specify in any general way.

Further, what if some participants are happy to watch a few presentations, read blogs, and lurk in Twitter chats, but don't participate and therefore don't "connect" in any deeper sense than reading and listening to others' work and words? Should we say that if there are a lot of such persons in a cMOOC, the course has not been successful? I don't think so, if we're really sticking to the idea that participants can be engaged in the course to the degree and for the reasons they wish.

One possibility would be to ask participants to reflect on the connections they’ve made and whether/why/how they are valuable. One might be able to get some kind of useful qualitative data out of this, and maybe even find some patterns to what allows for valuable connections. In other words, rather than decide in advance what sorts of connections, and how many, are required for a successful cMOOC, one could just gather data about what connections were made and why/how people found them valuable. If done over lots of cMOOCs, one might be able to devise some sort of general idea of what makes for valuable connections in cMOOCs.

But would it be possible to say, on the basis of such data, whether a particular cMOOC has been successful? If many people made some connections they found valuable, would that be more successful than if only a few did? Again, this leads to the problems noted above–it runs up against the point that in cMOOCs participants are free to act and participate how they wish, and if they wish not to make connections, that doesn’t necessarily have to mean the course hasn’t been “successful” for them.

Looking at participation rates

photo credit: danielmoyle via photopin CC-BY

One might consider looking at participation rates in a cMOOC, given that much of such a course involves discussions and sharing of resources amongst participants (rather than transferral of knowledge mainly from one or a few experts to participants). As this video by Dave Cormier demonstrates so well, cMOOCs are distributed on the web rather than taking place in one central “space” (though there may be a central hub where people go for easy access to such distributed information and discussions, such as a blog hub), and this means that a large part of the course is happening on people’s blogs, on Twitter, on lists of shared links, and elsewhere. So it would seem reasonable to consider the degree to which participants engage in discussions through these means. How many people are active in the sense of writing blog posts, commenting on others’ blog posts, participating in Twitter chats and posting things to the course Twitter hashtag, participating in discussion forums (if there are any; there were none in ETMOOC) or in social media spaces like Google+, etc?

This makes sense given the nature of cMOOCs, since if there were no participation in these ways then there would be little left of the course but a set of presentations by experts that could be downloaded and watched. Perhaps one could say that even if we can’t decide exactly how much participation (or connection, for that matter) is needed for “success,” an increase in participation (or connection) over time might indicate some degree of success.

But again, we run up against the emphasis on participants being encouraged to participate only when, where and how they wish, meaning that it’s hard to justify saying that a cMOOC with greater participation amongst a larger number of people was somehow more effective than one in which fewer people participated.  Or that a cMOOC in which participation and connections increased over time was more successful than one in which these stayed the same or decreased (especially since the evidence I’ve seen so far suggests that a drop off in participation over time may be common).
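
Just to make concrete what "counting participation" might even involve, here is a minimal sketch in Python. It assumes a hypothetical, already-assembled activity log (one row per blog post, comment, or tweet, with made-up column names and file name); it illustrates the mechanics of tallying, not how such data would actually be gathered from a course distributed across blogs, Twitter, and other spaces, which is a much harder problem.

```python
# A minimal sketch only: tally weekly "active participants" from a hypothetical
# activity log CSV (one row per blog post, comment, or tweet). The file name and
# the column names ("participant", "date") are made up for illustration.
import csv
from collections import defaultdict
from datetime import datetime

def weekly_active_participants(log_path):
    """Return {(year, week): number of distinct participants active that week}."""
    active = defaultdict(set)
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            when = datetime.strptime(row["date"], "%Y-%m-%d")
            year, week, _ = when.isocalendar()
            active[(year, week)].add(row["participant"])
    return {week: len(people) for week, people in sorted(active.items())}

if __name__ == "__main__":
    for (year, week), count in weekly_active_participants("cmooc_activity_log.csv").items():
        print(f"{year}, week {week}: {count} active participants")
```

Even with a tidy table like that, though, the interpretive problem remains: a week, or a course, with fewer active participants is not obviously a less successful one, for the reasons above.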

Determining your own purposes for participating in a cMOOC and judging whether you’ve reached them

Another option could be to ask participants who agree to be part of the research project to state early on what their goals for participating in the cMOOC are, and then towards the end (and perhaps in the middle as well) ask them to reflect on whether they're meeting or have met those goals.

Sounds reasonable, but then there are those people–like me taking ETMOOC–who don’t have a clear set of goals for taking an open online course. I honestly didn’t know exactly what I was getting into, nor what I wanted to get out of it because I didn’t understand what would happen in it. And as noted above, even though there may be some predetermined topics and presentations, what you end up focusing on/writing about/commenting on in discussion forums or others’ blogs/Tweeting about develops over time, as the course progresses. So some people may recognize this and be open to whatever transpires, not having any clear goals in advance or even partway through.

For those who do set out some goals for themselves at the beginning, it could easily be the case that many don't end up fulfilling those particular goals by the end, but instead go in a different direction than they could have envisioned at the start. In fact, one might even argue that that would be ideal: that people end up going in directions they could not have imagined at the outset might mean that the course was transformative for them in some way.

Thus, again, it’s difficult to see just how to make an argument about the effectiveness of a cMOOC by asking participants to set their goals out in advance and reflect on whether or not they’ve met them. Perhaps we could leave this open to people not having any goals but being able to reflect later on what they’ve gotten out of the course, and open to those who end up not meeting their original goals but go off in other valuable directions.
This would mean gathering qualitative data from things such as surveys, interviews or focus groups. I think it would be good to ask people to reflect on this partway through the course, at the end of the course, and again a few months or even a year later. Sometimes what people “get out of” a course doesn’t really crystallize for them until long after it’s finished.

Conclusions so far

It seems to me that there is a tension between the desire to have a course built in large part on the participation of individuals involved, and the desire to let them choose their level and type of participation. In some senses, cMOOCs appear to promote greater participation and connections amongst those involved, while also backing away from this at the same time. I understand the latter, and I appreciate it myself–that was one of the things that made ETMOOC so valuable for me. I was encouraged to choose what to focus on, what to write about, which conversations to participate in, based on what I found most important for my purposes (and based on how much time I had!). There are potential downsides to this, though, in that participants may not move far beyond their current beliefs, values and interests if they just look at what they find important based on those. But overall, I see the point and the value. I expect there are some good arguments in the educational literature for this sort of strategy that I’m not aware of.

Still, this is in tension, to some degree, with the emphasis on connecting and participating in cMOOCs. Perhaps the idea is that it would be good for people to do some connecting and participating, but in their own ways and on their own time, and if they choose not to we shouldn't say they are not doing the course "correctly." Might it nevertheless be permissible to suggest that, given the other side of this "tension," participation or connection rates could be considered as one part of looking at the success of a cMOOC? Honestly, I'm torn here.

[Update June 7, 2013] I just came across this post by George Siemens, in which he doubts the value of lurking, at least in a personal learning network (PLN). There are likely differences of opinion amongst cMOOC proponents and those who offer them, on the value of letting learners decide exactly how much to participate.

It is, of course, possible that the whole approach I'm taking is misguided, namely trying to determine how one might measure whether a cMOOC has been successful or not. I'm open to that possibility, but haven't given up yet–not until I explore other avenues.

I had one other section to this post, but as it is already quite long, I moved that section to a new post, in which I discuss a suggestion by Stephen Downes as to how to evaluate the “success” of MOOCs. In that and/or perhaps another post I will also discuss some of the published literature so far on cMOOCs, and what the research questions and methods were in those studies.

 

Please comment/question/criticize as you see fit. As you can tell, I’m in early stages here and am happy for any help I can get.

 

Connectivism–Siemens’ arguments

I have submitted a proposal to two different conferences, for a session in which we would discuss the possibility and methods of researching the effectiveness of cMOOCs. The proposal was accepted for a poster presentation at one conference, and I'm still waiting to hear from the other; as soon as that decision is made I'll post the proposal itself here on my blog.

If the session gets accepted, I'll need to give some background on cMOOCs by talking about connectivism. And I want to dig my way through connectivist ideas anyway, so I'm going to do so here on the blog. That's the way I think through things best–writing about them (or teaching, but I'm on sabbatical at the moment and not teaching).

I have read a number of articles and blog posts by George Siemens on connectivism, and have bookmarked quite a few others by Stephen Downes. Here I’ll discuss Siemens’ arguments, at least those I’ve found so far. I will not address the question of whether this is really a “new” learning theory, or whether it’s a learning theory at all, which are some issues that have been discussed in the research literature. I’m also not going to comment on the relationships between connectivism and constructivism, behaviourism, and cognitivism, as I have a woeful lack of knowledge of such theories. I’m just going to try to figure out some of (not all of) the basic ideas/arguments in what Siemens has written, and give my comments.

Context for the view

Siemens argues that connectivism makes sense for a context in which people have relatively easy access to a very large amount of information (through, e.g., the world wide web–not saying that everyone does have such access, but for those who do, Siemens is claiming, connectivism makes sense), can use technology to store that information rather than needing to have it in their own heads, and in which what counts as "knowledge" changes rapidly such that it becomes obsolete relatively quickly compared to past centuries and even decades (Siemens, 2005a). He claims we need a new learning theory, a new way of understanding how learning and knowledge work, within this sort of context.

Learning as a process of forming connections

To me, this is one of the fundamental ideas in connectivism, and the one I’m most interested in. I want to pick apart some of what Siemens says about it, in order to understand it better.

Learning is a process of connecting specialized nodes or information sources. (Siemens, 2005a)

I perceive learning as a network formation process. (Siemens, 2006b)

To really get at what is going on here, one would probably need to know more about learning theory than I do (as in, something about it, which I don’t). But the general idea is that when one learns something, what happens is that one makes connections between…what? Nodes. What counts as a “node”?

photo credit: jared via photopin CC-BY

Siemens explains that networks have both nodes and connections, and "a node is any element that can be connected to any other element. A connection is any type of link between nodes" (Siemens, 2005b). He notes in Siemens (2005a) that nodes can be, e.g., "fields, ideas, communities," among other things. He also speaks of people as nodes. In this presentation posted by the Universitat Oberta de Catalunya (starting at around 7 minutes), Siemens describes teaching a course as a process of directing the formation of connections for students–when we give them particular course content, particular texts, particular theories to study and discuss, we are guiding how they form connections. The scope of "nodes" is very wide:

Virtually any element that we can scrutinize or experience can become a node. Thoughts, feelings, interactions with others, and new data and information can be seen as nodes. (Siemens, 2005b)

Thus, connections can be made between nodes as persons, as ideas, as sets of data, as texts (or other media, such as videos), as groups, and more. So if I learn something, I make a connection (in my mind?) between myself and, say, a text, and between ideas I already have and those I’m getting from the text. I think that’s right, but I’m not absolutely certain, especially about the location of the connections–can some connections be located in thoughts, others outside? Siemens is clear that “[l]earning may reside in non-human appliances” (Siemens, 2005a), so clearly he thinks the connections don’t have to be only internal. But can they be internal at all? I’ll return to this question below.

First, briefly: How can learning reside in non-human appliances? If learning is a process of making connections, then appliances such as computers, and the software that runs on them, as well as whatever that nebulous thing is that is sometimes called the "web," could be considered as facilitating the making of connections. I make connections between myself and other persons, between my ideas and theirs, between my ideas and new information, quite often these days through the medium of these non-human entities. I suppose it is in that sense that learning, as a process of connection formation, can "reside" in non-human appliances. In a blog post entitled "What is the Unique Idea in Connectivism?", Siemens explains the role of technology a bit further:

… technology plays a key role of 1) cognitive grunt work in creating and displaying patterns, 2) extending and enhancing our cognitive ability, 3) holding information in ready access form (for example, search engines, semantic structures, etc). (Siemens, 2006d)

Under (1), technological appliances like computers and software can create links and patterns, but also (2) extend our cognitive ability, which to me means that we can think through and understand many things more quickly and easily when we can quickly see and read and watch a number of resources about them. (3) is related to this too–the information is stored and readily accessible (well, sometimes readily…sometimes it’s quite hard to find) so that we don’t have to store it in our own, individual minds. The latter is true of textual technology like books, too. So if learning is a process of forming connections, then non-human entities can be and often are an important part of that process.

One might want to object that the appliances merely make possible these connections, that the connections themselves occur, somehow, mostly internally to individuals. One might think of the connection between some ideas I have already and some new ones I am introduced to in this way–the connection between these, however that might be characterized (and that’s a big question), seems to be localized in my own mind.

But what about the "connection" between myself and someone I communicate with entirely through the internet and the applications that allow me to do so? In what does it consist? Is it an abstraction in my mind? A feeling I have that I am linked to someone? Perhaps it makes more sense to think of this connection in terms of thoughts, feelings, plus tangible evidence of the connections in the form of emails, posts to social networks like Twitter, Google+, Facebook, video chats on Google Hangout or Skype, work that is collaboratively produced, and more. Some of these connections can be traced and visualized; connections on Twitter, for example, can be tracked and mapped through Martin Hawksey's Twitter Archiving Google Spreadsheet (TAGS). Here's a spreadsheet I made through TAGS for the #ds106zone Twitter hashtag, for May 23-31, 2013. And here's the visualization of the connections on that hashtag for that period.
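
As a rough illustration of the kind of analysis such an archive makes possible, here is a short Python sketch that pulls out who mentions whom on a course hashtag and counts how many distinct accounts each person is connected to. It is only a sketch: the file name and the column names ("from_user", "text") are assumptions about how one might export the spreadsheet as a CSV, so any real use would need to be adjusted to the actual export.

```python
# A rough, illustrative sketch: build an undirected "mention" network from a CSV
# export of a tweet archive and count each account's distinct connections.
# The file name and the column names ("from_user", "text") are assumptions.
import csv
import re

MENTION = re.compile(r"@(\w+)")

def mention_edges(csv_path):
    """Yield (sender, mentioned_account) pairs from the archived tweets."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            sender = row["from_user"].lower()
            for mentioned in MENTION.findall(row["text"]):
                if mentioned.lower() != sender:
                    yield sender, mentioned.lower()

def connection_counts(edges):
    """Count how many distinct accounts each participant mentioned or was mentioned by."""
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    return {account: len(links) for account, links in neighbours.items()}

if __name__ == "__main__":
    counts = connection_counts(mention_edges("ds106zone_archive.csv"))
    for account, n in sorted(counts.items(), key=lambda item: item[1], reverse=True)[:10]:
        print(f"{account}: connected to {n} other accounts")
```

Of course, as discussed above, a count like this says nothing on its own about whether those connections were valuable to anyone; it only makes some of them visible.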

Thus, connections could, I think, be both in an individual’s mind in some way (however one might understand connections between ideas), and also located outside an individual as well.

Learning and knowledge

In reading some of Siemens' articles and blog posts, I found myself getting confused as to the exact meanings of these two terms, so I want to explore them further here. Learning is discussed briefly above as the process of making connections between nodes. What about knowledge?

Knowledge is defined as a particular pattern of relationships and learning is defined as the creation of new connections and patterns as well as the ability to maneuver around existing networks/patterns.

Our knowledge resides in the connections we form – whether to other people or to information sources such as databases. (Siemens, 2006d; emphasis mine)

Siemens here suggests that while learning is a process of creating connections and patterns (as well as the ability to move around existing ones…though frankly I’m not quite sure what that means), knowledge is a particular pattern itself. In “Connectivism: Learning theory or pastime of the self-amused?” he states that knowledge “resides in a distributed manner across a network,” rather than being only in the mind of an individual (Siemens, 2006a, p. 10).

This “externalization of our knowledge,” he states in the same article, “is increasingly utilized as a means of coping with information overload” (p. 11):

Most learning needs today are becoming too complex to be addressed in “our heads”. We need to rely on a network of people (and increasingly, technology) to store, access, and retrieve knowledge and motivate its use. (Siemens, 2006c)

We have access to and can use so much information that we must externalize it through various means, such as storing it in handwritten notes, printed papers, or digital works such as texts, images, videos.

But what does it mean, exactly, to say that knowledge is a certain “pattern” of connections? The best way I can make sense of this for myself is with an example. Let’s say I know how to make an animated gif. We’ll ignore for the moment how it is I know that I know this, if I’m just trying to figure out the theory that would explain knowledge in the first place. How did I learn how to do this? I connected to a course called “ds106” (Digital Storytelling 106), which connected me to the instructor (Jim Groom), who connected me (through Twitter) to two tutorials on how to do it–a wiki page and a video. What/where is my “knowledge” of how to do this? Partly in my head, but partly not because I can’t (yet) remember each step. So partly it’s in the tutorials and the links I have to those on my computer, and partly in my link to the instructor whom I could ask questions of, and partly in my links to the other participants in the course who could help me as well. I can see, then, why one might say that the “knowledge” is not just what’s in my head, but also in some way “in” these connections. Of course, I could get to the point where I remember how to make an animated gif and so don’t need to access those connections for the basic procedure, but I would need to access the people and/or a web search if I wanted to do anything more advanced with gifs (which I did with the one linked above).

But what I’d like to see is some clearer and more detailed arguments on what counts as “knowledge” to justify why I should think of these connections I have to information on the web and to other people as part of my knowledge set. I am not an epistemologist, and haven’t studied epistemology since grad school, so I can’t go very far in criticizing this view from a philosophical perspective. I would, nevertheless, like to see a more fully worked-out argument for this view of knowledge.

In the connectivist view according to Siemens, then, it seems that learning is the process of creating patterns through developing connections, and knowledge is a resulting pattern within a network. Both, I think, can have internal and external elements (knowledge can be a pattern of connections among abstract ideas and their logical links, for example, as well as connections stored in a computer text or video file).
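
To help me keep those two senses apart, here is a toy model in Python. It is entirely my own illustration, not anything Siemens proposes: "learning" is modelled as the act of adding connections between nodes (people, ideas, resources), and "knowledge" as the pattern of connections that results, using the animated gif example from above.

```python
# A toy illustration of the distinction as I understand it (my own model, not Siemens'):
# "learning" adds connections between nodes; "knowledge" is the resulting pattern.
class LearningNetwork:
    def __init__(self):
        self.nodes = {}           # node name -> kind ("person", "idea", "resource", ...)
        self.connections = set()  # each connection is a frozenset of two node names

    def add_node(self, name, kind):
        self.nodes[name] = kind

    def learn(self, a, b):
        """Model learning as forming a new connection between two existing nodes."""
        if a in self.nodes and b in self.nodes:
            self.connections.add(frozenset((a, b)))

    def knowledge_around(self, name):
        """The 'pattern' of connections involving a given node."""
        return {tuple(sorted(c)) for c in self.connections if name in c}

# The animated gif example: the knowledge is spread across these connections.
net = LearningNetwork()
for node, kind in [("me", "person"), ("Jim Groom", "person"),
                   ("ds106", "resource"), ("gif tutorials", "resource"),
                   ("how to make an animated gif", "idea")]:
    net.add_node(node, kind)

net.learn("me", "ds106")
net.learn("me", "Jim Groom")
net.learn("me", "gif tutorials")
net.learn("gif tutorials", "how to make an animated gif")

print(net.knowledge_around("me"))  # the part of the pattern centred on "me"
```

On this picture, my "knowledge" of how to make a gif is not just what is stored in my head; it is partly the pattern of links to the course, the tutorial, and the people who could help me.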

But I get confused reading some of Siemens’ texts, because at times these words are defined slightly differently. For example, in Siemens (2006a), learning is stated to be the network itself: “The learning is the network” (p. 16), whereas I was thinking of learning as the process of making connections and then knowledge residing in the network thus created. He says the same thing in a blog post:  “The network itself becomes the learning” (Siemens, 2006c). Perhaps this ambiguity is just the result of working this view out over a series of writings, which would be completely understandable. Blog posts, after all, may be treated by their authors as drafts of ideas, working through one’s views over time (I certainly view mine that way). 

“The pipe is more important than the content within the pipe” (Siemens, 2005a)

This claim reflects the idea that since what counts as our "knowledge" is not only in our own heads but also outside of us, in a pattern of connections that is located partly in our own minds (and neurons!), partly in connections stored on computers, on paper, or in other technologies (as discussed above), and partly in the links we have to other people, then what is most important is not what we have in our minds at any given moment, but the nature of these connections. When we have a problem we need to solve, for example, we don't have to rely only on the knowledge stored in our brains, but can turn to a web search, to people we're connected with, to a course, or to other sources to get the information/skills we need. "As knowledge continues to grow and evolve, access to what is needed is more important than what the learner currently possesses," so "[n]urturing and maintaining connections is needed to facilitate continual learning" (Siemens, 2005a).

This, of course, has implications for teaching and learning: if we were to follow the connectivist view as teachers, we would not emphasize providing content to students. As many of us have already realized, at least some of the content we could provide is readily available to students on the web. (This depends on the course, of course; I would say that my own interpretations of philosophical texts are not readily available, even though students could find others' interpretations on the web pretty easily. Then again, if I post my interpretations as lecture notes on the web, they would already be available.) Instead, the instructor could spend time with the class discussing, criticizing, asking and answering questions, etc.

And we could also help them with their "connectivist" skills–for lack of a better term (my term). We could help them with finding and evaluating information and information sources, for example, and with forming a network of people that can help them (and that they can help) with regard to a particular topic/field. Siemens (2008) provides a summary of various activities and roles for "connectivist" educators.

My current thoughts

Besides some slipperiness in terminology, the basic idea here makes sense. We could think of learning as a process of making connections, and knowledge as the patterns of connections thus made (if, that is, I've got the view right, which I might not, of course). And in a context in which the internet makes information fairly easy to access (recognizing the problems with search engines filtering results in various ways) and connections to other people fairly easy to make (recognizing that people are most likely to connect to others who are connected to people they already know, and to those they tend to agree with most), I can see that the ability to make and access connections would be more important than what one "knows" in the sense of having information stored in one's brain. One could also think of all learning as a matter of connecting things: whether I make a connection to a book, a web page, a person, or a video, one might say that I am learning through making connections. I am also learning by adding the new information and skills I gain thereby to my existing set, making connections in that way as well.

I’m not convinced that this is the best way to think of learning and knowledge, at least not yet. The main problem is that I know nothing of learning theory, so I don’t know the other options. Another problem is, as noted above, I don’t think Siemens has a clearly worked-out, detailed epistemological view in the articles and blog posts I’ve read (as a philosopher I want many more specific and clear arguments supporting this view of knowledge). So while I think it makes some sense, I’m not convinced at the moment that I should accept Siemens’ view of connectivist understandings of learning and knowledge.

I think, however, that Stephen Downes has more arguments about connectivist epistemology in his writings, so that is who I’ll turn to next, in an upcoming post.

Your thoughts

Have I done justice to Siemens' arguments about connectivism? What do you think of them? Please let us know in the comments!

Works cited

Siemens, G. (2005a). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 1(2). Retrieved from http://itdl.org/journal/jan_05/article01.htm

Siemens, G. (2005b). Connectivism: Learning as network-creation. Retrieved from http://www.elearnspace.org/Articles/networks.htm

Siemens, G. (2006a). Connectivism: Learning theory or pastime for the self-amused? Retrieved from http://www.elearnspace.org/Articles/connectivism_self-amused.htm

Siemens, G. (2006b, April 6). Learning, assessment, outcomes, ecologies. [Web log post]. Retrieved from http://www.connectivism.ca/?p=57

Siemens, G. (2006c, June 21). Constructivism vs. connectivism [Web log post]. Retrieved from http://www.connectivism.ca/?p=65

Siemens, G. (2006d, Aug. 6). What is the unique idea in connectivism? [Web log post] Retrieved from http://www.connectivism.ca/?p=116

Siemens, G. (2008). Learning and knowing in networks: Changing roles for educators and designers. IT Forum. Retrieved from http://itforum.coe.uga.edu/Paper105/Siemens.pdf

Summary of research on modes of peer assessment

I have been doing quite a few “research reviews” of articles on peer assessment–where I summarize the articles and offer comments about them. Lately I’ve been reading articles on different modes of peer assessment: written, oral, online, face to face, etc. And here, I am going to try to put together what that research has said to see if anything can really be concluded about these issues from it.

In what follows, I link to the blog posts discussing each article. Links to the articles themselves can be found at the bottom of this post.

I created PDF tables to compare/contrast the articles under each heading. They end up being pretty small here on the blog, so I also have links to each one of them, below.

Peer feedback via asynchronous, written methods or synchronous, oral, face to face methods

This is the dichotomy I am most interested in: is there a difference between feedback given asynchronously, in written form, and feedback given synchronously, orally, face to face? Does the feedback itself differ? Might one form of feedback be more effective than another in terms of being taken up in later revisions of essays?

Do the comments differ in the two modes of peer feedback, and are they used differently in later drafts?

The PDF version of the table below can be downloaded here.

van den Berg, Admiraal and Pilot (2006) looked at differences in what was said in peer feedback on writing assignments when it was written (on standardized peer feedback forms, used for the whole class) and when it was given in oral, face to face discussions. They found that written feedback tended to be more focused on evaluating the essays, saying what was good or bad about them, and less on giving explanations for those evaluative comments or on providing suggestions for revision (though this result differed between the courses they analyzed). In the oral discussions, there was more of a balance between evaluating content, explaining that evaluation, and offering revisions. They also found that both written and oral feedback focused more on content and style than on structure, though there were more comments on structure in the written feedback than in the oral. The authors note, though, that in the courses in which peer feedback took place on early drafts or outlines, there was more feedback on structure than when it took place on later drafts. They conclude: “A combination of written and oral feedback is more profitable than written or oral feedback only” (146).

Hewett (2000) looked at differences in peer feedback between an oral, face to face environment and an electronic, text-based environment. She found that the talk in the oral communication was much more interactive, with students responding to each others’ comments, giving verbal cues that they were following along, and also working together to generate new ideas. The text-based, online feedback was much less like a conversation, with students commenting on the papers at hand but not interacting very much with each other. Perhaps unsurprisingly, then, while the feedback in the written environment was mostly focused on the content of the essay being evaluated, the discussion in the oral environment ranged more widely. Hewett also analyzed essay drafts and peer comments from both environments to see if the peer discussion and comments influenced later drafts of essays. She found that in the oral environment, there was more use in students’ work of ideas that came up in the peer discussion about others’ essays, or that one had oneself said. Hewett concludes that a combination of oral discussion and asynchronous, written comments would be good, using the former for earlier stages of writing–since in oral discussion there can be more talk in which students speculate about wider issues and work together to come up with new ideas–and the latter for revisions focused more on content.

What are students’ views of each mode?

A PDF version of the following table can be downloaded here.

Figl et al. (2006) surveyed students in a computer science course who had engaged in peer assessment of a software project both in the face to face mode and through an online, asynchronous system that allows criticisms to be recorded and comments to be added, much as in a discussion board. There wasn't a clear preference for one mode over another overall, except in one sense: about half of the students preferred using the face to face mode for discussion within their own teams, and with their partner teams (those they are giving feedback to and receiving feedback from). Students reported that there was not as much discussion of the feedback, whether within the team or with the partner teams, in the online format, and they valued the opportunity for that discussion. Figl et al. conclude that it would be best to combine online, asynchronous text reviews with face to face activities, perhaps even with synchronous chat or voice options.

The study reported in Guardado & Shi 2007 focused on asynchronous, written feedback for the most part; the authors recorded online, discussion-board feedback on essays and compared that with a later draft of each essay. They wanted to know if students used or ignored these peer comments, and what they thought of the experience of receiving the asynchronous, written feedback (they interviewed each student as well). All of the students had engaged in face to face peer feedback before the online mode, but the face to face sessions were not recorded so the nature of the comments in each mode was not compared. Thus, the results from this study that are most relevant to the present concern are those that come from interviews, in which the students compared their experiences of face to face peer feedback with the online, written, asynchronous exchange of feedback. Results were mixed, as noted in the table, but quite a few students said they felt more comfortable giving feedback without their names attached, while a significant number of students preferred the face-to-face mode because it made interacting with the reviewer/reviewee easier. The authors conclude that "online peer feedback is not a simple alternative to face-to-face feedback and needs to be organized carefully to maximize its positive effect" (458).

Cartney 2010 held a focus group of ten first-year students in a social work course who had engaged in a peer feedback exercise in which essays, comments on essays, and follow-up discussion were to take place over email. Relevant to the present concern is that the focus group discussion revealed that several groups did not exchange feedback forms via email but decided to meet up in person instead in order to have a more interactive discussion. Some groups did exchange written, asynchronous, online feedback, citing discomfort with giving feedback to others to their "faces." The author concludes that there may be a need to use more e-learning in curricula in order for students to become more accustomed to using it for dialogue rather than one-way communication. But I also see this as an indication that some students recognized a value in face to face, oral, synchronous communication.

Peer feedback via electronic, synchronous text-based chat vs. oral, face to face methods

This dichotomy contrasts two sorts of synchronous methods for peer feedback and assessment: those taking place online, through text-based systems such as “chats,” and those taking place face to face, orally.

Do comments given synchronously through text-based chats differ from those given orally, face to face? And do these two modes of commenting affect students’ revisions of work differently?

A PDF version of both of the tables below can be downloaded here.

Sullivan & Pratt 1996 looked at two writing classes: in one class all discussions and peer feedback took place through a synchronous, electronic, text-based chat system, and in the other, discussions and peer feedback took place face to face, orally. They found that writing ability increased slightly more for the computer-assisted class than for the traditional class, and that there were differences in how the students spoke to each other in the electronic, text-based chats versus face to face, orally. The authors stated that the face to face discussion was less focused on the essay being reviewed than the online chats were (but see my criticisms of this interpretation here). They also found that the electronic chats were more egalitarian, in that the author did not dominate the conversation in them in the same way as happened with the face to face chats. The authors conclude (among other things) that discussions through online chats may be beneficial for peer assessments, since their study "showed that students in the computer-assisted class gave more suggestions for revision than students in the oral class" (500), and since there was at least some evidence for greater writing improvement in the "chat" class.

Braine 2001 (I haven’t done an earlier summary of this article in my blog) looked at students in two different types of writing classes in Hong Kong (in English), similar to those discussed in Sullivan & Pratt (1996), in which one class has all discussions and peer assessment taking place orally, and the other has these taking place on a “Local Area Network” that allows for synchronous, electronic, text-based chats. He looked at improvement in writing between a draft of an essay and a revision of that essay (final version) after peer assessment. Braine was testing students’ ability to write in English only, through the “Test of Written English.” He found that students’ English writing ability improved a bit more for the face-to-face class than the computer-mediated class, and that there were significant differences in the nature of discussions in the two modes. He concluded that oral, face-to-face discussions are more effective for peer assessment.

Liu & Sadler 2003  contrasted two modes of peer feedback in two composition classes, one of which wrote comments on essays by hand and engaged in peer feedback orally, face to face, and the other wrote comments on essays digitally, through MS Word, and then engaged in peer discussion through an electronic, synchronous, text-based chat during class time. The authors asked about differences in these  modes of commenting, and whether they had a differential impact on later essay revisions. Liu & Sadler were not focused on comparing the asynchronous commenting modes with the synchronous ones, but their results show that there was a higher percentage of “global” comments in both of the synchronous modes, and a higher percentage of “local” comments in the asynchronous ones. They also found that there was a significantly higher percentage of “revision-oriented” comments in the oral discussion than in the electronic chat. Finally, students acted more often on the revision-oriented comments given in the “traditional” mode (handwritten, asynchronous comments plus oral discussion) than in the computer-mediated mode (digital, asynchronous comments plus electronic, text-based chat). They conclude that for asynchronous modes of commenting, using digital tools is more effective than handwriting (for reasons not discussed here), and for synchronous modes of commenting, face to face discussions are more effective than text-based, electronic chats (219-221). They suggest combining these two methods for peer assessment.

Jones et al 2006  studied interactions between peer tutors in an English writing centre in Hong Kong and their clients, both in face to face meetings and in online, text-based chats. This is different from the other studies, which were looking more directly at peer assessment in courses, but the results here may be relevant to what we usually think of as peer assessment. The authors were looking at interactional dynamics between tutors and clients, and found that in the face-to-face mode, the relationship between tutors and clients tended to be more hierarchical than in the electronic, online chat mode. They also found that the subjects of discussion were different between the two modes: the face-to-face mode was used most often for “text-based” issues, such as grammar and word choice, while in the electronic chats the tutors and clients spoke more about wider issues such as content of essays and process of writing. They conclude that since the two modes differ and both serve important purposes, it would be best to use both modes.

Implications/discussion

This set of studies is not the result of a systematic review of the literature; I did not follow up on all the other studies that cited these, for example. A systematic review of the literature might add more studies to the mix. In addition, there are more variables that should be considered (e.g., whether the students in the study underwent peer assessment training, how much/what kind; whether peer assessment was done using a standardized sheet or not in each study, and more).

Nevertheless, I would like to consider briefly whether these studies provide any clear direction regarding written peer assessment vs. oral, face-to-face peer assessment.

For written, asynchronous modes of peer assessment (e.g., writing on essays themselves, writing on peer assessment forms) vs. oral, face-to-face modes, the studies noted here (van den Berg, Admiraal and Pilot (2006) and Hewett (2000)) suggest that in these two modes students give different sorts of comments, and for a fuller picture peer assessment should probably be conducted in both modes. Regarding student views of both modes (Figl et al. (2006), Guardado & Shi (2007), Cartney (2010)), evidence is mixed, but there are at least a significant number of students who prefer face-to-face, oral discussions if they have to choose between those and asynchronous, written peer assessment.

For written, synchronous modes of peer assessment (e.g., electronic, text-based chats) vs. oral, face-to-face, the evidence here is all from students for whom English is a foreign language, but some of the results might still be applicable to other students (to determine this would require further discussion than I can engage in now). All that can be said here is that the results are mixed. Sullivan & Pratt (1996) found some, but not a lot of evidence that students using e-chats improved their writing more than those using oral peer assessment, but Braine (2001) found the opposite. However, they were using different measures of writing quality. Sullivan & Pratt also concluded that the face-to-face discussions were less focused and effective than the e-chat discussions, while Braine concluded the opposite. This probably comes down in part to interpretation of what “focused” and “effective” mean.

Liu & Sadler (2003) argued that face-to-face modes of synchronous discussion are better than text-based, electronic, synchronous chats–opposing Sullivan & Pratt–because there was a higher percentage of “revision-oriented” conversational turns (as a % of total turns) in the face-to-face mode, and because students acted on the revision-oriented comments more in the traditional class (both writing comments on paper and oral, face-to-face peer discussion) than in the computer-mediated class (digital comments in MS Word and e-chat discussions). Jones et al. (2006) found that students and peer tutors talked about different types of things, generally, in the two modes and thus concluded that both should be used. But that study was about peer tutors and clients, which is a different situation than peer assessment in courses.

So really, little can be concluded, I think, from looking at all these studies, except that students do seem to say different types of things in different modes of communication (written/asynchronous, written/synchronous, oral/face-to-face/synchronous), and that those things are all valuable; perhaps, then, what we can say is that using a combination of modes is probably best.

Gaps in the literature

Besides more studies to see if clearer patterns emerge (and perhaps they are out there–as noted above, my literature search has not been systematic), one gap is that none of the studies I have found so far has considered video chats, such as Google Hangouts, for peer assessment. Perhaps the differences between those and face-to-face meetings might not be as great as between face-to-face meetings and text-based modes (whether synchronous chats or asynchronous, written comments). And this sort of evidence might be useful for courses that are distributed geographically, so students could have a kind of face-to-face peer assessment interaction rather than just giving each other written comments and carrying on a discussion over email or an online discussion board. Of course, the problem there would be that face-to-face interactions are best if supervised, even indirectly, so as to reduce the risk of people treating each other disrespectfully, or offering criticisms that are not constructive.

So, after all this work, I’ve found what I had guessed before starting: it’s probably best to use both written, asynchronous comments and oral, face-to-face comments for peer assessment.

 

Works Cited

Braine, G. (2001) A study of English as a foreign language (EFL) writers on a local-area network (LAN) and in traditional classes, Computers and Composition 18,  275–292. DOI: http://dx.doi.org/10.1016/S8755-4615(01)00056-1

Cartney, P. (2010) Exploring the use of peer assessment as a vehicle for closing the gap between feedback given and feedback used, Assessment & Evaluation in Higher Education, 35:5, 551-564. DOI: http://dx.doi.org/10.1080/02602931003632381

Figl, K., Bauer, C., Mangler, J., Motschnig, R. (2006) Online versus Face-to-Face Peer Team Reviews, Proceedings of Frontiers in Education Conference (FIE). San Diego: IEEE. See here for online version (behind a paywall).

Guardado, M., Shi, L. (2007) ESL students’ experiences of online peer feedback, Computers and Composition 24, 443–461. Doi: http://dx.doi.org/10.1016/j.compcom.2007.03.002

Hewett, B. (2000) Characteristics of Interactive Oral and Computer-Mediated Peer Group Talk and Its Influence on Revision, Computers and Composition 17, 265-288. DOI: http://dx.doi.org/10.1016/S8755-4615(00)00035-9

Jones, R.H., Garralda, A., Li, D.C.S. & Lock, G. (2006) Interactional dynamics in on-line and face-to-face peer-tutoring sessions for second language writers, Journal of Second Language Writing 15,  1–23. DOI: http://dx.doi.org/10.1016/j.jslw.2005.12.001

Liu, J. & Sadler, R.W. (2003) The effect and affect of peer review in electronic versus traditional modes on L2 writing, Journal of English for Academic Purposes 2, 193–227. DOI: http://dx.doi.org/10.1016/S1475-1585(03)00025-0

Sullivan, S. & Pratt, E. (1996) A comparative study of two ESL writing environments: A computer-assisted classroom and a traditional oral classroom, System 29, 491-501. DOI: http://dx.doi.org/10.1016/S0346-251X(96)00044-9

Van den Berg, I., Admiraal, W.,  & Pilot, A. (2006) Designing student peer assessment in higher education: analysis of written and oral peer feedback, Teaching in Higher Education, 11:2, 135-147.  DOI: http://dx.doi.org/10.1080/13562510500527685