Tag Archives: cMOOCs

The “open” in MOOCs

I was part of a debate on the value of MOOCs for higher education during UBC’s Open Access Week, on Oct. 29, 2014.

Here is the description of the event and speaker bios, from the Open UBC 2014 website (I’m not sure how long the link will remain active, so I’ve copied the description here). (The following text is licensed CC-BY)

Debate: Are MOOCs Good for Higher Education?


Massive Open Online Courses (MOOCs) are subject to both hype and criticism. In 2012, the New York Times declared it was the year of the MOOC, while critics branded 2013 as the year of the anti-MOOC. Today, the debate about the impact that MOOCs are having, and will have, on higher education continues, and the topic of MOOCs often dominates conversations and questions about how changes in technologies, pedagogies, learning analytics, economics, student demographics, and open education will impact student learning. Many universities, including UBC, are experimenting with MOOCs in different ways – from trying to understand how to scale learning to how to best use MOOC resources on campus.
This session will explore different types of MOOCs, the possible role for MOOCs in higher education, and their benefits and drawbacks.

Speaker Bios.

Angela Redish (moderator) is the University of British Columbia’s Vice Provost and Associate Vice President for Enrollment and Academic Facilities. Dr. Redish served as a professor in the Department of Economics in the Faculty of Arts at UBC for nearly 30 years. She received her PhD in Economics from the University of Western Ontario, and her subsequent research studied the evolution of the European and North American monetary and banking systems. She served as Special Adviser at the Bank of Canada in 2000-2001, and continues to be active in monetary policy debates. Her teaching has been mainly in the areas of economic history, monetary economics and macroeconomics.

Jon Beasley-Murray is an Associate Professor of Latin American Studies at the University of British Columbia. He has taught a wide range of courses, from Spanish Language to Latin American literature surveys and seminars on topics ranging from “The Latin American Dictator Novel” to “Mexican Film.” His use of Wikipedia in the classroom has led to press coverage in multiple languages across the globe.

Jon is a vocal critic of the current model of learning and assessment common in Massive Open Online Courses (MOOCs), especially for the Humanities. He blogs at Posthegemony and is the author of Posthegemony: Political Theory and Latin America. His current book projects include “American Ruins,” on the significance of six ruined sites from Alberta, Canada, to Santiago de Chile. He is also working on a project on “The Latin American Multitude,” which traces the relationships between Caribbean piracy and the Spanish state, and indigenous insurgency and the discourse of Latin American independence.

Gregor Kiczales is a Professor of Computer Science at the University of British Columbia. Most of his research has focused on programming language design and implementation. He is best known for his work on aspect-oriented programming, and he led the Xerox PARC team that developed aspect-oriented programming and AspectJ. He is a co-author of “The Art of the Metaobject Protocol” and was one of the designers of the Common Lisp Object System (CLOS).  He is also the instructor for the Introduction to Systematic Program Design MOOC at Coursera. His discussion of the benefits of MOOCs can be found on the Digital Learning blog.

Christina Hendricks is a Senior Instructor in Philosophy and Arts One at the University of British Columbia. While on sabbatical during the 2012-2013 academic year, she participated in a number of MOOCs, of different types. Ever since then she has used her MOOC participation as a form of professional development and a way to make connections with other teachers and researchers around the world. She has also been one of the co-facilitators for an open online course (not massive) at Peer 2 Peer University called “Why Open?”, and is a part of a project called Arts One Open that is opening up the Arts One program as much as possible to the public.


For my portion of the debate, I wanted to talk about openness (duh…it was Open Access Week!) and the degree to which what many people think of as MOOCs are open (some of them, not very). I talked a bit about OERs (open educational resources) and open textbooks as ways to make MOOCs more open, and also about opening up the curriculum and content to co-creation by participants. This led me to cMOOCs, which could be described as having a more open pedagogy. I briefly touched on the value of cMOOCs for higher education, partly as professional development for faculty and partly as lifelong learning for students.

Jon Beasley-Murray has posted a copy of what he said during this debate, on his blog.

I’m told this session was recorded and the recording will be posted on YouTube, but I don’t think it’s there yet. In the meantime, here are my slides from the debate. I just had 12 minutes max, though I expect I went over time a bit!


Tweets about my OpenEd13 presentation

I gave a presentation at the Open Education Conference 2013 in Park City, Utah on Nov. 8, 2013. See my previous post for video, slides and bibliography.

I also wanted to see what people were saying about it during the presentation, in case there were some ideas there that are useful for my continuing research into this issue (and there were!). So I made a Storify story. Here’s the link to it on Storify if you’d rather see it there.

Open Education Conference 2013 Presentation

Difficulties Evaluating cMOOCs: Navigating Autonomy and Participation


Given Nov. 8, 2013, at the Open Education Conference 2013 in Park City, Utah.

Here is the video recording. I had only 25 minutes to present, and I was late starting because I was messing with my computer, trying to get it to show me “presenter mode” while it showed the slides on the screen so I could see my notes. Then I tried to see my notes on my phone. Then I gave up on my notes and just winged it! (I was using Keynote rather than PowerPoint, and I’d never tried to use presenter mode before…the problem was that I couldn’t print out my notes because the printer in the “business centre” of the hotel was out of order!)

Here are the slides, which are licensed CC-BY so you can use any part of them if you want. Again, these were in Apple Keynote, and when I exported to PowerPoint some of the colours, fonts and alignments got messed up a bit.


When I get a free half a day (probably in December) I’ll write up a post in which I explain my argument in this presentation, including the slides at the end I didn’t get to!

Update Feb. 2015: Well, obviously I never wrote this up. Which is too bad, because now it’s been quite a while and it would take me a long time to try to do so. I do plan to return to this research at some point (perhaps in the Summer of 2015), and see what else has been published in the meantime. And who knows what kind of open online course models there will be by then?!




Things either cited on the slides or quoted from in the presentation (at least, the original version as I wrote it, not the shortened one given in the video!)


Ahn, J., Weng, C., & Butler, B. S. (2013). The Dynamics of Open, Peer-to-Peer Learning: What Factors Influence Participation in the P2P University? In 2013 46th Hawaii International Conference on System Sciences (pp. 3098–3107). IEEE. doi: 10.1109/HICSS.2013.515


Cormier, D. (2010). Knowledge in a MOOC [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=bWKdhzSAAG0


Cormier, D., & Siemens, G. (2010). Through the Open Door: Open Courses as Research, Learning, and Engagement. Educause Review, 45(4), 30–39. Retrieved from http://www.educause.edu/ero/article/through-open-door-open-courses-research-learning-and-engagement


Downes, S. (2007, February 3). What Connectivism Is. Half an Hour. Retrieved from http://halfanhour.blogspot.com.au/2007/02/what-connectivism-is.html


Downes, S. (2009, February 24). Connectivist Dynamics in Communities. Half an Hour. Retrieved from http://halfanhour.blogspot.co.uk/2009/02/connectivist-dynamics-in-communities.html


Downes, S. (2013a). Supporting a Distributed Online Course. Presented at Information Technology Based Higher Education and Training (ITHET) 2013, Antalya, Turkey. Retrieved from http://www.downes.ca/presentation/327


Downes, S. (2013b). The Quality of Massive Open Online Courses. MOOC Quality Project. Retrieved from http://mooc.efquel.org/week-2-the-quality-of-massive-open-online-courses-by-stephen-downes/  A longer version of this post can be found here: http://cdn.efquel.org/wp-content/blogs.dir/7/files/2013/05/week2-The-quality-of-massive-open-online-courses-StephenDownes.pdf


Fournier, H., Kop, R., & Sitlia, H. (2011). The Value of Learning Analytics to Networked Learning on a Personal Learning Environment. Presented at the 1st International Conference Learning Analytics and Knowledge, Banff, Alberta. Retrieved from http://nparc.cisti.nrc.ca/npsi/ctrl?action=shwart&index=an&req=18150452&lang=en


Kop, R. (2011). The challenges to connectivist learning on open online networks: Learning experiences during a massive open online course. The International Review of Research in Open and Distance Learning, 12(3), 19–38. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/882


Kop, R., Fournier, H., & Mak, J. S. F. (2011). A pedagogy of abundance or a pedagogy to support human beings? Participant support on massive open online courses. The International Review of Research in Open and Distance Learning, 12(7), 74–93. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/1041


Lane, L. M. (2013). An Open, Online Class to Prepare Faculty to Teach Online. Journal of Educators Online, 10(1). Retrieved from http://www.thejeo.com/Archives/Volume10Number1/Lane.pdf


Mackness, J., Mak, S., & Williams, R. (2010). The ideals and reality of participating in a MOOC. In Proceedings of the 7th International Conference on Networked Learning 2010 (pp. 266–275). University of Lancaster. Retrieved from http://eprints.port.ac.uk/5605/


McAuley, A., Stewart, B., Siemens, G., & Cormier, D. (2010). The MOOC model for digital practice. SSHRC Knowledge Synthesis Grant on the Digital Economy. Retrieved from http://www.edukwest.com/wp-content/uploads/2011/07/MOOC_Final.pdf


Milligan, C., Littlejohn, A., & Margaryan, A. (2013). Patterns of Engagement in Connectivist MOOCs. MERLOT Journal of Online Learning and Teaching, 9(2). Retrieved from http://jolt.merlot.org/vol9no2/milligan_0613.htm


Siemens, G. (2006, November 12). Connectivism: Learning Theory or Pastime for the Self-Amused? elearnspace. Retrieved from http://www.elearnspace.org/Articles/connectivism_self-amused.htm


Siemens, G. (2008, August 6). What is the unique idea in Connectivism? Connectivism. Retrieved from http://www.connectivism.ca/?p=116


Siemens, G. (2012, June 3). What is the theory that underpins our moocs? elearnspace. Retrieved from http://www.elearnspace.org/blog/2012/06/03/what-is-the-theory-that-underpins-our-moocs/


Waite, M., Mackness, J., Roberts, G., & Lovegrove, E. (2013). Liminal Participants and Skilled Orienteers: Learner Participation in a MOOC for New Lecturers. MERLOT Journal of Online Learning and Teaching, 9(2). Retrieved from http://jolt.merlot.org/vol9no2/waite_0613.htm


Williams, R., Karousou, R., & Mackness, J. (2011). Emergent learning and learning ecologies in Web 2.0. The International Review of Research in Open and Distance Learning, 12(3), 39–59. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/883


Williams, R. T., Mackness, J., & Gumtau, S. (2012). Footprints of emergence. The International Review of Research in Open and Distance Learning, 13(4), 49–90. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/1267

Evaluating a cMOOC using Downes’ four “process conditions”

This is the third in a series of posts on a research project I’m developing on evaluating cMOOCs. The first can be found here, and the second here. In this post I consider an article that uses Downes’ four “process conditions” for a knowledge-generating network to evaluate a cMOOC. In a later post I’ll consider another article that takes a somewhat critical look at these four conditions as applied to cMOOCs.

Mackness, J., Mak, S., & Williams, R. (2010). The ideals and reality of participating in a MOOC. In Proceedings of the 7th International Conference on Networked Learning 2010 (pp. 266–275). Retrieved from http://eprints.port.ac.uk/5605/

Connexion, Flickr photo by tangi_bertin, licensed CC-BY

In this article, Mackness et al. report findings from interviews of participants in the CCK08 MOOC (Connectivism and Connective Knowledge 2008; see here for a 2011 version of this course) insofar as these relate to Downes’ four process conditions for a knowledge-generating network: autonomy, diversity, openness, interactivity. In other words, they wanted to see if these conditions were met in CCK08, according to the participants. To best understand these results, if you’re not familiar with Downes’ work, it may be helpful to read an earlier post of mine that addresses and tries to explain these conditions.

Specifically, the researchers asked: “To what extent were autonomy, diversity, openness and connectedness/interactivity a reality for participants in the CCK08 MOOC and how much they were affected by the course design?” (271). They concluded that, in this particular course at least, there were difficulties with all of these factors.


Data for this study came from 22 responses by participants (including instructors) to email interview questions (out of 58 who had self-selected, on a previous survey sent to 301 participants, to be interviewed). Unfortunately, the interview questions are not provided in the paper, so it’s hard to tell what the respondents were responding to. I find it helpful to see the questions so as to better understand the responses given, and to be able to critically review how those responses are interpreted in an article.



The researchers note that most respondents valued autonomy in a learning environment: “Overall, 59% of interview respondents (13/22) rated the importance of learner autonomy at 9 or 10 on a scale of 1-10 (1 = low; 10 = high)” (269). Unfortunately, I can’t tell if this means they valued the kind of autonomy they experienced in that particular course, or whether they valued the general idea of learner autonomy in an abstract way (but how was it defined?). Here is one place, for example, where providing the question asked would help readers understand the results.

Mackness et al. then argue that, nevertheless, some participants (but how many out of the 22?) found the experience of autonomy in CCK08 to be problematic. The researchers provide quotes from two participants stating that they would have preferred more structure and guidance, and from one course instructor who reported that learner autonomy led to some frustration that what s/he was trying to say or do in the course was not always “resonating with participants” (269).

The authors also provide a quote from a course participant who said they loved being able to work outside of assessment guidelines, but then comment on that statement by saying that “autonomy was equated with lack of assessment”–perhaps, but not necessarily (maybe they could get good feedback from peers, for example? Or maybe the instructors could still assess something outside of the guidelines? I don’t know, but the statement doesn’t seem to mesh, by itself, with the interpretation).  Plus, the respondent saw this as a positive thing, whereas the rhetorical aspects of the interpretation suggest it was a negative, a difficulty with autonomy. I’m not seeing that.

The researchers conclude that the degree of learner autonomy in the course was affected by the following:

“levels of fluency in English, the ‘expertise divide’, assessment for credit participants, personal learning styles, personal sense of identity and the power exerted, either implicitly or explicitly, by instructors through their communications, status and reputation, or by participants themselves….” (271)

In addition, there were reports of some “trolling” behaviour on the forums, which led some participants to “retreat to their blogs, effectively reducing their autonomy” (271). The authors point out that some constraint on autonomy in the forums through discouraging or shutting down such behaviour may have actually promoted autonomy amongst more learners.


The researchers note that learner diversity was certainly present in the course, including diversity in geography, language, age, and background. They give examples of diversity “reflected in the learning preferences, individual needs and choices expressed by interview respondents” (269).

However, diversity was also a problem in at least one respect, namely that not all learners had the “skills or disposition needed to learn successfully, or to become autonomous learners in a MOOC” (271). This is not so much of a problem if there is significant scaffolding, such as support for participants’ “wayfinding in large online networks,” but CCK08 was instead designed to have “minimal instructor intervention” (271). In addition, the authors point out, in order to promote sharing in a network like a cMOOC, a certain amount of trust needs to be built up; and the larger and more diverse the network, the more work may need to be done to help participants build that trust.


CCK08 was available, for free, to anyone who wanted to participate (without receiving any university or other credits), so long as they had a reliable web connection. The interview data suggests that participants interpreted “openness” differently: some felt they should (and did) share their work with others (thus interpreting openness as involving sharing one’s work), while some worked mostly alone and did little or no sharing–thereby interpreting openness, the authors suggest, merely as the idea that the course was open for anyone with a reliable web connection to participate in. The authors seem to be arguing here that these differing conceptions of openness are problematic because the “implicit assumption in the course was that participants would be willing or ready to give and receive information, knowledge, opinions and ideas; in other words to share freely” (270), but that not everyone got that message. They point to a low rate of active participation: only 14% of the total enrolled participants (270).

They also note that amongst participants there was no “common understanding of openness as a characteristic of connectivism” (270), implying that there should have been. But I wonder whether a conscious understanding of openness, and the ability to express it as a clear concept, is necessary for a successful connectivist course. This is just a question at this point–I haven’t thought it through carefully. I would at least have liked to see more on why this should be considered a problem, and to know whether the respondents were asked specifically for their views on openness. The responses given in this section of the paper don’t refer to openness at all, making me think the researchers may have inferred understandings of openness from other things respondents said. That’s not a problem by itself, of course, but asking participants directly for their views on openness might have yielded different answers–answers more relevant to deciding whether or not participants shared a common understanding of openness.

Finally, Mackness et al. argue that some of the barriers noted above also led to problems in regard to participants’ willingness to openly communicate and share work with others: this can be “compromised by lack of clarity about the purpose and nature of the course, lack of moderation in the discussion forums, which would be expected on a traditional course, and the constraints (already discussed in relation to autonomy and diversity) under which participants worked” (272).


There were significant opportunities for interaction, for connecting with others, but the authors note that what matters most is not so much whether (and how much) people connected with others as what these connections made possible. Respondents noted some important barriers to connecting, as well as problems that meant some of the interactions did not yield useful benefits. As noted above, some participants pointed to “trolling” behaviour on the forums, and one said there were some “patronising” posts as well–which, the respondent said, likely led some participants to disengage from that mode of connection. Another respondent noted differences in expertise levels that led him/her to disengage when s/he could no longer “understand the issues being discussed” (271).

The researchers conclude that connectivity alone is not sufficient for effective interactivity–which of course makes sense–and that the degree of effective interactivity in CCK08 was not as great as it might have been with more moderation by instructors. However, the size of the course made this unfeasible (272).

One thing I would have liked to have seen in this analysis of “interactivity” is what Downes focuses on for this condition, namely the idea that the kind of interactivity needed is that which promotes emergent knowledge–knowledge that emerges from the interactions of the network as a whole, rather than from individual nodes (explained by Downes here and here, the first of which the authors themselves cite). This is partly because if they used Downes’ framework, it would make sense to evaluate the course with the specifics of what he means by “interactivity.” It’s also partly because I just really want to see how one might try to evaluate that form of interactivity.


Mackness et al. conclude that

some constraints and moderation exercised by instructors and/or learners may be necessary for effective learning in a course such as CCK08. These constraints might include light touch moderation to reduce confusion, or firm intervention to prevent negative behaviours which impede learning in the network, and explicit communication of what is unacceptable, to ensure the ‘safety’ of learners. (272)

Though, at the same time, they point to the small size of their sample, and the need for further studies of these sorts of courses to validate their findings.

That makes sense to me, from my unstudied perspective as someone who has participated in a few large open online courses and one small-ish one; of these, ETMOOC seemed modeled to some degree along connectivist lines. There was some significant scaffolding in ETMOOC, through starting off with discussions of connected learning and help with blogging and commenting on blogs. There wasn’t clear evidence of the course collaborators moderating discussions (several people collaborated on each two-week topic, acting as “instructors” for a brief time), except insofar as some of them were very actively present on Twitter and in commenting on others’ blogs, making sure to tweet, retweet, bookmark to Diigo, or post to Google+ especially helpful or thought-provoking things. We didn’t have any trolling behaviour that I was aware of, and we also didn’t have a discussion forum. But if there had been problems in the Google+ groups or in Twitter chats, I would have hoped one or more of the collaborators would have actively worked to address them (and I think they would have, though since it didn’t happen, to my knowledge, I can’t be certain).

Some further thoughts 

If one decides that Downes’ framework is the right one to use for evaluating an open online course like a cMOOC (which I haven’t decided yet; I still need to look more carefully at his arguments for it), it would make sense to unpack the four conditions more carefully and collect participants’ views on whether those specific ways of thinking about autonomy, diversity, openness and interactivity were manifested in the course. The discussion of these four conditions is at times rather vague here. What, more specifically, does learner “autonomy” mean, for example? Even if they don’t want to use Downes’ own views of autonomy, it would be helpful to specify what conception of autonomy they’re working with. I’ve also noted a similar point about interactivity, about which the discussion in the paper is also somewhat vague–what sort of interactivity would have indicated success, exactly, beyond just participants communicating with each other on blogs or forums?

I find it interesting that in his most recent writing on the topic of evaluating cMOOCs (see the longer version attached to this post, and my discussion of this point here, along with the helpful comments I’ve gotten on that post!), Downes argues that it should be some kind of expert in cMOOCs, or in one of the fields/topics they cover, who evaluates their quality, while here the authors looked to the participants’ experiences. Interesting, because it makes more sense to me to focus on the experiences of the participants than to ask someone who may or may not have taken the course–that is, if one wants to find out whether the course was effective for participants.

Still, I can see how some aspects of these conditions might be measured without looking at what participants experienced, or at least in other ways in addition to gathering participants’ subjective evaluations. The degree to which the course is “open,” for example, might have some elements that could be measured beyond or in addition to what participants themselves thought. Insofar as openness involves the course being open to anyone with a reliable internet connection to participate, without cost, and the ability to move into and out of the course easily as participants choose, that could be partly a matter of looking at the design and platform of the course itself, as well as participants’ evaluations of how easy it was to get into and out of the course. If openness also involves the sharing of one’s work, one could look to see how much of that was actually done, as well as ask participants about what they shared, why, and how (and what they did not, and why).

I just find it puzzling that in that recent post Downes doesn’t talk about asking participants about their experiences in a cMOOC at all. I’m not sure why.

[I just read a recent comment on an earlier post, which I haven’t replied to yet, which discusses exactly this point–it makes no sense to leave out student experiences. Should have read and replied to that before finalizing this post!]



Downes on evaluating cMOOCs

In my previous post I considered some difficulties I’m having in trying to figure out how to evaluate the effectiveness of cMOOCs. In this one I look at some of the things Stephen Downes has to say about this issue, and one research paper that uses his ideas as a lens through which to consider data from a cMOOC.

Stephen Downes on the properties of successful networks

This post by Stephen Downes (which was a response to a question I asked him and others via email) describes two ways of evaluating the success of a cMOOC through asking whether it fulfills the properties of successful networks. One could look at the “process conditions,” of which for Downes there are four: autonomy, diversity, openness, and interactivity. And/or, one could look at the outcomes of a cMOOC, which for Downes means looking at whether knowledge emerges from the MOOC as a whole, rather than just from one or more of its participants. I’ll look briefly at each of these ways of considering a cMOOC in what follows.

The four “process conditions” for a successful network together make up what Downes elsewhere calls a “semantic condition” required for a knowledge-generating network, a network that generates connective knowledge (for more on this, see longer articles here and here). This post discusses them succinctly yet with enough detail to give a sense of what they mean (the following list and quotes come from that post).

  • Autonomy: The individuals in the network should be autonomous. One could ask, e.g.: “do people make their own decisions about goals and objectives? Do they choose their own software, their own learning outcomes?” This is important so that the participants and connections form a unique organization, rather than one determined by one or a few individuals in which knowledge is transferred in as uniform a way as possible to all (this point is made more explicitly in the longer post attached here).
  • Diversity: There must be a significant degree of diversity in the network for it to generate anything new. One could ask about the geographical locations of the individuals in the network, the languages spoken, etc., but also about whether they have different points of view on issues discussed, whether they have different connections to others (or does everyone tend to have similar connections), whether they use different tools and resources, and more.
  • Openness: A network needs to be open to allow new information to flow in and thereby produce new knowledge. Openness in a community like a cMOOC could include the ease with which people can move into and out of the community/course, the ability to participate in different ways and to different degrees, the ability to easily communicate with each other. [Update June 14, 2013: Here Downes adds that openness also includes sharing content, both that from within the course to those outside of it, and that gained from outside (or created by oneself inside the course?) back into the course.]
  • Interactivity: There should be interactivity in a network that allows for knowledge to emerge “from the communicative behaviour of the whole,” rather than from one or a few nodes.

To look at the success of a cMOOC from an “outcomes” perspective, you’d try to determine whether new knowledge emerged from the interactions in the community as a whole. This idea is a bit difficult for me to grasp, and I am having trouble understanding how I might determine if this sort of thing has occurred. I’ll look at one more thing here to try to figure this out.

Downes on the quality of MOOCs

Recently, Downes has written a post on the blog for the “MOOC quality project” that discusses how he thinks it might be possible to say whether a MOOC was successful or not, and in it he discusses the process conditions and outcomes further (to really get a good sense of his arguments, it’s best to read the longer version of this post, which is linked to the original).

Downes argues in the longer version that it doesn’t make sense to try to determine the purpose of MOOCs (qua MOOCs, by which I think he means as a category rather than as individual instances) based on “the reasons or motivations” of those offering or taking particular instances of them. This is because people may have varying reasons and motivations for creating and using MOOCs, which need not impinge on what makes for a good MOOC (just as people may use hammers in various ways–his example–without those uses impinging on whether a particular hammer is a good hammer). Instead, he argues that we should look at “what a successful MOOC ought to produce as output, without reference to existing … usage.”

And what MOOCs ought to produce as output is “emergent knowledge,” which is

constituted by the organization of the network, rather than the content of any individual node in the network. A person working within such a network, on perceiving, being immersed in, or, again, recognizing, knowledge in the network thereby acquires similar (but personal) knowledge in the self.

Downes then puts this point differently, focusing on MOOCs:

[A] MOOC is a way of gathering people and having them interact, each from their own individual perspective or point of view, in such a way that the structure of the interactions produces new knowledge, that is, knowledge that was not present in any of the individual communications, but is produced as a result of the totality of the communications, in such a way that participants can through participation and immersion in this environment develop in their selves new (and typically unexpected) knowledge relevant to the domain.

He then argues that the four process conditions discussed previously usually tend to produce this sort of emergent knowledge, in the ways suggested in the above list. But properties like diversity and openness are rather like abstract concepts such as love or justice in that they are not easily “counted” but rather need to be “recognized”: “A variety of factors–not just number, but context, placement, relevance and salience–come into play (that is why we need neural networks (aka., people) to perceive them and can’t simply use machines to count them).”

So far, so good; one might think it possible to come up with a way to evaluate a MOOC by looking at these four process conditions, and then assume that if they are in place, emergent knowledge is at least more likely to result (though it may not always do so). It would not be easy to figure out how to determine if these conditions are met, but one could come up with some ways to do so that could be justified pretty well, I think (even though there might be multiple ways to do so).

MOOCs as a language

But Downes states that while such an exercise may be useful when designing a course, it is less so when evaluating one after the fact–I’m not sure why this should be the case, though. He states that looking at the various parts of a course in terms of these four conditions (such as the online platform, the content/guest speakers, and more) could easily become endless–one could look at many, many aspects of a MOOC this way. But I don’t see why that would be more problematic in evaluating a course than in designing one.

Instead, Downes suggests we take a different tack in measuring success of MOOCs. He suggests we think of MOOCs as a language, “and the course design (in all its aspects) therefore as an expression in that language.” This is meant to take us away from the idea of using the four process conditions above as a kind of rubric or checklist in a mechanical way. The point rather is for someone who is already fluent in either MOOC design or the topic(s) being addressed in a MOOC to be able to look at the MOOC and the four conditions and “recognize” whether it has been successful or not. Downes states that “the bulk of expertise in a language–or a trade, science or skill–isn’t in knowing the parts, but in fluency and recognition, cumulating in the (almost) intuitive understanding (‘expertise’, as Dreyfus and Dreyfus would argue)” (here Downes refers to: http://www.sld.demon.co.uk/dreyfus.pdf).

So I think the idea here is that once one is fluent in the language of MOOCs or the “domain or discipline” of the topics they are about, one should be able to read and understand the expression in that language that is the course design, and to determine the quality of the MOOC by using the four conditions as a kind of “aid” rather than “checklist”. But to be quite honest, I am still not sure what it means, exactly, to use them as an “aid.” And this process suggests relying on those who have developed some degree of expertise in MOOCs to be able to make the judgment, thereby making the decision of successful vs. unsuccessful MOOCs come only from a set of experts.

Perhaps this could make sense, if we think of MOOCs like the product of some artisanal craft, like swordmaking–maybe it really is only the experts who can determine their quality, because perhaps there is no way to set out in a list of necessary and sufficient conditions what is needed for a successful MOOC, like it’s difficult (or impossible) to do for a high-quality sword (I’m just guessing on that one). Perhaps there are so many different possible ways of having a high quality MOOC/sword, with some aspects being linked to individual variations such that it’s impossible to describe each possible variation and what aspects of quality would be required for that particular variation. It may be that no one can possibly know in advance what all the possible variations of a successful MOOC/sword are, but that these can be recognized later.

But I’m not yet convinced that must be the case for MOOCs, at least not from this short essay. And I expect I would benefit from a closer reading of Downes’ other work, which might help me see why he’s going in this direction here. It would also help me see why he thinks the process conditions for a knowledge-generating network should be the ones he suggests.

Using Downes’ framework to evaluate the effectiveness of a cMOOC

This is a bit premature, as I admit I don’t understand it in its entirety, but I want to put out a few preliminary ideas. I’m leaving aside, for the moment, the idea of MOOCs as a language until I figure out more precisely why he thinks we should look at them that way, and then decide if I agree. I’m also leaving aside for the moment the question of whether I think the process conditions he suggests are really the right ones–I haven’t evaluated them or the reasons behind them and thus can’t say one way or the other at this point.

The four process conditions

One would have to figure out exactly how to define Autonomy, Diversity, and Openness, which is no easy task, but it seems possible to come to a justifiable (though not final or probably perfect) outline of what those mean, considering what might make for a knowledge-generating network. It might be a long and difficult process to do so, but at least possible, I think. Then, it would be fairly straightforward to devise a manageable (and only ever partial) list of things one could ask about, measure, humanly “recognize” (in the sense of not using a checklist mechanically…though again, I’m not entirely sure what that means) to see if a particular cMOOC fit these three criteria. Again, I have no idea how to do any of this right now, but I think it could be done.

But I am still unsure about the final one: interactivity. This is because it’s not just a matter of people interacting with each other; rather, Downes emphasizes that what is needed is interaction that allows for emergent knowledge. So to figure this one out, one already needs to understand what emergent knowledge looks like and how to recognize if it has happened. I understand the idea of emergent knowledge in an abstract sense, but it’s hard to know how I would figure out if some knowledge had emerged from the communicative interactions of a community rather than from a particular node or nodes. How would I tell if, as quoted above, “the structure of the interactions produce[d] new knowledge, that is, knowledge that was not present in any of the individual communications, but [was] produced as a result of the totality of the communications”? Or, to take another quote from the longer version of the post Downes did for the “MOOC quality project”, how would I know if “new learning occur[red] as a result of this connectedness and interactivity, it emerge[d] from the network as a whole, rather than being transmitted or distributed by one or a few more powerful members”?

I honestly am having a hard time figuring out where/how to look for knowledge that wasn’t present in any of the individual communications, but emerges from the totality of them. But I think part of the problem here is that I don’t understand enough about Downes’ view of connectivism and connectivist knowledge. I knew I should take a closer look at connectivism before trying to tackle the question of evaluating cMOOCs! Guess I’ll have to come back to this after doing a post or two on Downes’ view of connectivism & connective knowledge.


So clearly I have a long way to go to understand exactly what Downes is suggesting and why, before I can even decide if this would be a good framework for evaluating a cMOOC.

In a later post I will look at two research papers that look at cMOOCs through the lens of Downes’ four process conditions, to see how they have interpreted and used these.

I welcome comments on anything I’ve said here: anything I’ve gotten wrong, or any suggestions about what I’m still confused about.



Difficulties researching the effectiveness of cMOOCs

As noted in an earlier post, I have submitted some proposals for conference presentations on researching the effectiveness of connectivist MOOCs, or cMOOCs (see another one of my earlier posts for what a cMOOC is). I am using this post (and one or two later ones) to try to work through how one might go about doing so, and the problems I’ve considered only in a somewhat general way previously. I need to think things through by writing, so why not do that in the open?

I had wanted to think more carefully about connectivism before moving to some research questions about connectivist MOOCs, but for various reasons I need to get something worked out about possible research questions as soon as I can, so I’ll return to looking at connectivism in later posts.

The general topic I’m interested in (at the moment)

And I mean general. I want to know whether we can determine whether a cMOOC has been “effective” or “successful.” That’s so general as to mean almost nothing.

What might help is some specification of the purposes or goals of offering a particular cMOOC, so one could see if it has been effective in achieving those. This could be taken from any of a number of perspectives, such as:

  • If an institution is offering a cMOOC, what is the institution’s purpose in doing so? This is not something I’m terribly interested in at the moment.
  • What do those who are designing/planning/facilitating the cMOOC hope to get out of doing so, for themselves? This is also not what I’m particularly interested in for a research project.
  • What do those who are designing/planning/facilitating the cMOOC hope participants will get out of it? There are likely some reasons, articulated or not, why the designers thought a cMOOC would be effective for participants in some way; otherwise they wouldn’t have decided to offer one at all. This is closer to what I’m interested in, but there’s a complication.

The connectivist MOOC model as implemented so far by people such as Dave Cormier, Alec Couros, Stephen Downes and George Siemens encourages participants to set their own goals and purposes for participation, rather than determining what these are to be for all participants (see, e.g., McAuley, Stewart, Siemens, & Cormier, 2010 (pp. 4-5, 40); see The MOOC Guide for a history of cMOOC-type courses, and lists of more recent connectivist MOOCs here and here). As Stephen Downes puts it:

In the MOOCs we’ve offered, we have said very clearly that you (as a student) define what counts as success. There is no single metric, because people go into the course for many different purposes. That’s why we see many different levels of activity ….

Further, just what a cMOOC will be like, where it goes, what people talk about, depends largely on the participants–even though there are often pre-set topics and speakers in advance, the rest of what happens is mostly up to what is written, discussed, shared amongst the participants. The ETMOOC guide for participants emphasizes this:

What #etmooc eventually becomes, and what it will mean to you, will depend upon the ways in which you participate and the participation and activities of all of its members.

Thus, it’s hard to say in advance what participants might get out of a particular cMOOC, in part because it’s impossible to say in advance what the course will actually be like (beyond the scheduled presentations, which are only one of many parts of a cMOOC).

Some possible directions for research questions

Developing connections with other people

Photo Credit: Graylight via Compfight CC-BY

I at first thought that perhaps one could say cMOOCs should allow participants to, at the very least, develop a set of connections with other people that can be used for sharing advice and information, commenting on each other’s work, collaborating, and more. As discussed in my blog post on George Siemens’ writings on connectivism, what may be most important to a course run on connectivist principles is not the content that is provided, but the fostering of connections, and of skills for developing new ones and maintaining those one has, for the sake of being able to learn continually into the future.

And even though I understand what Downes and others say about participants in cMOOCs determining their own goals and deciding for themselves whether the course has been a success, cMOOCs have been and continue to be designed in certain ways for certain reasons, at least some of which most likely have to do with what participants may get out of the courses. Some of those who have been involved in designing cMOOCs have emphasized the importance of forming connections between people, ideas and information.

Stephen Downes talks about this in “Creating the Connectivist Course” when he says that he and George Siemens tried to make the “Connectivism and Connective Knowledge” course in 2008 “as much like a network as possible.” In this video on how to succeed in a MOOC, Dave Cormier emphasizes the value of connecting with others in the course through commenting on their blog posts, participating in discussion fora, and other ways. The connections made in this way are, Cormier says, “what the course is all about.” Now, of course, Cormier states at the beginning and end of the video that MOOCs are open to different ways of success and this is just “his” way, but the tone of the video suggests that it would be useful for others as well. Cormier says something similar in this video on knowledge in a MOOC: participants in a MOOC “are [ideally?] going to come out with a knowledge network, a network of people and ideas that’s going to carry long past the end of [the] course date.”

So it made sense to me at first to consider asking about the effectiveness or success of a cMOOC through looking at whether and how participants made connections with each other, and especially whether those continue beyond the end of the course. But again, there are some complications, besides the important questions of just how to define “connections” so as to decide what data to gather, and then the technical issues regarding how to get that data.
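To make the data question concrete, here is a minimal sketch (in Python, with an entirely invented interaction log; the participant names and channel labels are hypothetical) of one way “connections” might be operationalized: treat each blog comment, Twitter mention, or forum reply as a directed link between participants and count each person’s distinct contacts.

```python
from collections import defaultdict

# Hypothetical interaction log: (actor, target, channel).
# In a real study these events might be gathered from blog comments,
# mentions on the course Twitter hashtag, forum replies, etc.
interactions = [
    ("alice", "bob", "blog_comment"),
    ("alice", "carol", "twitter_mention"),
    ("bob", "alice", "blog_comment"),
    ("dana", "alice", "forum_reply"),
    ("alice", "bob", "twitter_mention"),  # repeat contact with the same person
]

def distinct_contacts(events):
    """Count, per participant, how many distinct people they reached out to."""
    contacts = defaultdict(set)
    for actor, target, _channel in events:
        contacts[actor].add(target)
    return {person: len(targets) for person, targets in contacts.items()}

print(distinct_contacts(interactions))
# alice reached 2 distinct people; bob and dana reached 1 each
```

Even this toy version forces the definitional choices just mentioned: does a repeated exchange count once or many times, do all channels count equally, and do incoming contacts matter as much as outgoing ones? And such counts, however easy to produce, say little on their own about the value of the connections.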

Would we want to say that the course succeeded more if more people made connections to others, rather than fewer? Or how about the question of how many people each participant should ideally connect with–I don’t think more is necessarily better, but where do we draw the line to say that x number of people made y number of connections with others, so the course has been a success?

This is getting pedantic, but I’m trying to express the point that when you really dig into this kind of question and try to design a research project, you have to address it, and it starts to look ridiculous. It’s ridiculous because there are so many different ways that connecting with other people can be valuable: a single connection may end up being far more valuable to one person than 50 connections are to another. So much depends on the nature and context of those connections, and those are going to be highly individual and likely impossible to specify in any general way.

Further, what if some participants are happy to watch a few presentations, read blogs, and lurk in Twitter chats, but don’t participate and therefore don’t “connect” in any deeper sense than reading and listening to others’ words and work? Should we say that if there are a lot of such persons in a cMOOC, the course has not been successful? I don’t think so, if we’re really sticking to the idea that participants can be engaged in the course to the degree and for the reasons they wish.

One possibility would be to ask participants to reflect on the connections they’ve made and whether/why/how they are valuable. One might be able to get some kind of useful qualitative data out of this, and maybe even find some patterns to what allows for valuable connections. In other words, rather than decide in advance what sorts of connections, and how many, are required for a successful cMOOC, one could just gather data about what connections were made and why/how people found them valuable. If done over lots of cMOOCs, one might be able to devise some sort of general idea of what makes for valuable connections in cMOOCs.

But would it be possible to say, on the basis of such data, whether a particular cMOOC has been successful? If many people made some connections they found valuable, would that be more successful than if only a few did? Again, this leads to the problems noted above–it runs up against the point that in cMOOCs participants are free to act and participate how they wish, and if they wish not to make connections, that doesn’t necessarily have to mean the course hasn’t been “successful” for them.

Looking at participation rates

photo credit: danielmoyle via photopin CC-BY

One might consider looking at participation rates in a cMOOC, given that much of such a course involves discussions and sharing of resources amongst participants (rather than transferral of knowledge mainly from one or a few experts to participants). As this video by Dave Cormier demonstrates so well, cMOOCs are distributed on the web rather than taking place in one central “space” (though there may be a central hub where people go for easy access to such distributed information and discussions, such as a blog hub), and this means that a large part of the course is happening on people’s blogs, on Twitter, on lists of shared links, and elsewhere. So it would seem reasonable to consider the degree to which participants engage in discussions through these means. How many people are active in the sense of writing blog posts, commenting on others’ blog posts, participating in Twitter chats and posting things to the course Twitter hashtag, or participating in discussion forums (if there are any; there were none in ETMOOC) or in social media spaces like Google+?

This makes sense given the nature of cMOOCs, since if there were no participation in these ways then there would be little left of the course but a set of presentations by experts that could be downloaded and watched. Perhaps one could say that even if we can’t decide exactly how much participation (or connection, for that matter) is needed for “success,” an increase in participation (or connection) over time might indicate some degree of success.
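As a toy illustration of that last idea (the weekly numbers here are invented), checking the direction of participation over time could look something like this: compare average activity in the first and second halves of a course, without committing to any absolute threshold for “success.”

```python
# Hypothetical counts of participants active in each week of a cMOOC
weekly_active = [180, 140, 95, 90, 88, 85]

def trend(counts):
    """Crude direction-of-change check: compare the average activity of the
    first half of the course with that of the second half."""
    mid = len(counts) // 2
    first = sum(counts[:mid]) / mid
    second = sum(counts[mid:]) / (len(counts) - mid)
    if second > first:
        return "increasing"
    if second < first:
        return "decreasing"
    return "flat"

print(trend(weekly_active))  # with these made-up numbers, the common drop-off
```

The point of the sketch is only that direction of change is measurable even where an absolute threshold isn’t justifiable; whether the direction tells us anything about success is exactly what’s in question.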

But again, we run up against the emphasis on participants being encouraged to participate only when, where and how they wish, meaning that it’s hard to justify saying that a cMOOC with greater participation amongst a larger number of people was somehow more effective than one in which fewer people participated. Or that a cMOOC in which participation and connections increased over time was more successful than one in which these stayed the same or decreased (especially since the evidence I’ve seen so far suggests that a drop off in participation over time may be common).

Determining your own purposes for participating in a cMOOC and judging whether you’ve reached them

Another option could be to ask participants who agree to be part of the research project to state early on what their goals for participating in the cMOOC are, and then towards the end (and perhaps even in the middle) ask them to reflect on whether they’re meeting/have met them.

Sounds reasonable, but then there are those people–like me taking ETMOOC–who don’t have a clear set of goals for taking an open online course. I honestly didn’t know exactly what I was getting into, nor what I wanted to get out of it because I didn’t understand what would happen in it. And as noted above, even though there may be some predetermined topics and presentations, what you end up focusing on/writing about/commenting on in discussion forums or others’ blogs/Tweeting about develops over time, as the course progresses. So some people may recognize this and be open to whatever transpires, not having any clear goals in advance or even partway through.

For those who do set out some goals for themselves at the beginning, it could easily be the case that many don’t end up fulfilling those particular goals by the end, instead going in a different direction than they could have envisioned at the beginning. In fact, one might even argue that this would be ideal: that people end up going in very different directions than they could have imagined might mean that the course was transformative for them in some way.

Thus, again, it’s difficult to see just how to make an argument about the effectiveness of a cMOOC by asking participants to set their goals out in advance and reflect on whether or not they’ve met them. Perhaps we could leave this open to people not having any goals but being able to reflect later on what they’ve gotten out of the course, and open to those who end up not meeting their original goals but go off in other valuable directions.
This would mean gathering qualitative data from things such as surveys, interviews or focus groups. I think it would be good to ask people to reflect on this partway through the course, at the end of the course, and again a few months or even a year later. Sometimes what people “get out of” a course doesn’t really crystallize for them until long after it’s finished.

Conclusions so far

It seems to me that there is a tension between the desire to have a course built in large part on the participation of individuals involved, and the desire to let them choose their level and type of participation. In some senses, cMOOCs appear to promote greater participation and connections amongst those involved, while also backing away from this at the same time. I understand the latter, and I appreciate it myself–that was one of the things that made ETMOOC so valuable for me. I was encouraged to choose what to focus on, what to write about, which conversations to participate in, based on what I found most important for my purposes (and based on how much time I had!). There are potential downsides to this, though, in that participants may not move far beyond their current beliefs, values and interests if they just look at what they find important based on those. But overall, I see the point and the value. I expect there are some good arguments in the educational literature for this sort of strategy that I’m not aware of.

Still, this is in tension, to some degree, with the emphasis on connecting and participating in cMOOCs. Perhaps the idea is that it would be good for people to do some connecting and participating, but in their own ways and on their own time, and if they choose not to, we shouldn’t say they are not doing the course “correctly.” Might it nevertheless be possible, given the other side of this “tension,” to consider participation or connection rates as part of evaluating the success of a cMOOC? Honestly, I’m torn here.

[Update June 7, 2013] I just came across this post by George Siemens, in which he doubts the value of lurking, at least in a personal learning network (PLN). There are likely differences of opinion amongst cMOOC proponents and those who offer them, on the value of letting learners decide exactly how much to participate.

It is, of course, possible that the whole approach I’m taking is misguided, namely trying to determine how one might measure whether a cMOOC has been successful or not. I’m open to that possibility, but haven’t given up yet–not until I explore other avenues.

I had one other section to this post, but as it is already quite long, I moved that section to a new post, in which I discuss a suggestion by Stephen Downes as to how to evaluate the “success” of MOOCs. In that and/or perhaps another post I will also discuss some of the published literature so far on cMOOCs, and what the research questions and methods were in those studies.


Please comment/question/criticize as you see fit. As you can tell, I’m in early stages here and am happy for any help I can get.


MOOC engagement and disengagement

Recently I contrasted ds106 with a course in statistics from Udacity, as part of my participation in a course on Open Education from the Open University. I got very frustrated writing that post because I felt constrained by the script, by the instructions. It wasn’t that I had other things to say that didn’t fit the script; it was more that following the explicit instructions seemed to keep me from thinking of other things to say. I was busy saying what I was supposed to, and therefore didn’t leave myself mental space to consider much of anything else.

Usually I only write blog posts when I have something I want to reflect on, to share with others, to get feedback about. It’s self-generated, and I care about what I’m doing. That hasn’t been the case for many of the posts I’ve done for the Open Education course, and writing them has just felt far too forced and meaningless.

I decided to stop.

Apparently the post was actually useful to some, as some Twitter conversations & retweets indicated, but it still felt dull to me because I wasn’t the one deciding what to write, or whether to write at all. Okay, yes, ultimately I was the one, of course, since I didn’t need to (a) do this particular activity for the course, or (b) do it in the scripted way, or (c) join the course at all in the first place. So yes, I decided. But my point is more subtle. And it affects how I approach face-to-face teaching as well. 

In my previous post, I listed some of the major differences between ETMOOC and the OU course, and talked a bit about why I preferred the former. Here I want to focus on one particular downside to the OU course.

The directed assignment

There is probably a better word or phrase for this–I just mean an assignment or activity in which one is told exactly what to do. This is what we had, each week, several times a week, in the OU course. It is not what we had in ETMOOC.

In ETMOOC we had a few suggestions here and there for blog topics, things one could write about if one wanted. During some of the bimonthly topics there were lists of activities we might do if we wished, including reading/watching outside materials and writing about them. But there was a strong emphasis that one should choose one or just a few of these, or none at all (see, e.g., the post for the digital storytelling topic in ETMOOC). The activities were clearly suggestions, and participants could (and many did) blog about anything that caught their attention and interest in relation to the topics at hand, whether from the suggested activities, the presentations, the Twitter chats, or others’ blog posts.

My experience with the OU course was much different. The activities were written as directives rather than suggestions. Here, for example, is an activity about “connectivism” that I decided not to do (other examples of directions can be found by clicking on the #h817open tag to the right). I am going to blog about connectivism and how it informs the structure of cMOOCs, as it’s something I’m interested in, but that’s just the point. The way the activities in the course are written, one gets the strong message that directions should be followed. The rhetoric is clear. You may be interested in writing about something else, but then you’re not participating in the course.

Sometimes I followed the instructions; sometimes not. My choice, yes, but something else happens too.

Follow the path

Follow the path, CC-BY licensed flickr photo shared by Miguel Mendez

There could easily be, and for me at times there was, a strong enough feeling that I ought to follow directions that, well, I did. It’s just a sense that that’s what you do in a “course.” And the fact that this was an “open boundary” course–meaning it had students officially registered for credit as well as outside participants–probably contributed to it having a more traditional structure. But that structure suggested, implicitly, that one should do what the instructor says.

Incidentally, this was another difference from ETMOOC–in the OU course, there was clearly one instructor in the “expert” or “authority” role. In ETMOOC there were many people involved in both planning and facilitating, and unless they were giving one of the synchronous presentations, they acted just like every other participant in the course. The information about each week’s topic seemed to come from some anonymous source, without a clear authorial voice, even though it had a list of people at the end who were involved in working on that topic. It felt less hierarchical, more like a collective group of people learning together than a set of instructors vs. learners.

I’m not concerned about having specific, assigned readings, videos, or other materials; some of those for the OU course I found very helpful, and when one is faced with something unfamiliar, having a few common guideposts on the way is helpful when learning with others. What led me to disengage was being explicitly directed as to what to do with those materials, exactly what to write about. And even though I knew that was optional, the rhetorical thrust of both the wording and the structure of the course indicated otherwise.

I had a bit of a discussion with Inger-Marie Christensen in comments on one of her blog posts, here, about this issue. She rightly pointed out the danger of just skipping things in a MOOC that don’t seem immediately interesting to you, and I agree. I also see that by following directions I might end up finding new things that I’m interested in, engaged with, that I might not otherwise.

Still, I think that a balance can be struck: encouragement to at least engage with most or all of the topics, read or watch at least one or two things, and then choose from a variety of suggested topics to write about or activities to do (while also providing freedom to do something else related if one chooses). I think the greater engagement and more meaningful work that such flexibility can offer participants can outweigh the loss of perhaps missing some aspects of a topic.

Face-to-face courses

I felt this way earlier in the OU course, but continued on for a while anyway:

And another implication struck me then, too:

But in Uni the students either just do what you ask or drop the course. And suddenly it’s hitting me that when I provide clear, detailed instructions on what to write for essays, my students may respond the way I did. How did I not see this before?

I often give very detailed essay assignments, saying exactly what should be written about. I have thought I’m doing students a favour by providing clear directives. And for some, that’s probably the case. But I’m also:

  • doing the hard work for them–wouldn’t it be better to ask them to find the important aspects of texts and arguments for themselves, based on what they want to talk about? 
  • leading their essays to be as rigid as my instructions, and so
  • likely preventing the excitement that comes when you really want to figure something out and work with a text (or something else) to do so, as well as
  • discouraging deep creativity in responding to the texts and issues we’re discussing.

Now, I actually do give students in third- and fourth-year courses more freedom, but I tend to be more directive in first- and second-year courses. And I’m wondering if I can strike more of a balance between specificity and flexibility. I realize that people new to philosophy can use clear guidance on how to write philosophy essays well, and sometimes that could mean telling them exactly what to write about. But does it have to? At the very least, I could make it clearer that the provided essay topics are suggestions rather than directives, and emphasize that there is room to experiment.

I could, thereby, open up students to the significant possibility of writing essays that are deeply problematic because I gave them the freedom to fail. But if I also give them detailed feedback and the chance to revise without penalty, then, well, that seems to me a good way to learn. And maybe they’ll be excited to do so in the process. Okay, at least some of them.

The bigger issue

But this doesn’t address the problem noted above: even if one says, explicitly, that directives are optional, one’s other words and course structure may indicate that, after all, they really should be followed. And/or, the learning experience for many has for so long been such that when the instructor gives suggestions for what to do, many students may do that rather than come up with something on their own, because after all, the instructor is in the position of authority/expertise.

Even in ETMOOC, I recall several participants expressing how they felt “behind,” and needed to “catch up”; some even said they dropped out because they felt so behind. The message of flexibility may not have gotten through.

So I am left with two problems for my face-to-face teaching:

1. How to balance promoting flexibility and creativity, and thereby hopefully greater engagement, with the danger of learners only focusing on what they want and not going beyond their comfort zones (hmmm…seems to me I’ve visited this issue before).

2. Once I solve problem number 1, how to communicate that flexibility really means…flexibility?


MOOCs I have known

So far in 2013, while on sabbatical, I’ve actively participated in two MOOCs (Massive Open Online Courses): the OU course on Open Education, and ETMOOC (Educational Technology and Media MOOC). The latter was one of the best educational and professional development experiences I have ever had. The former…well…was just okay. Not bad, but not transformative like ETMOOC was.

I want to use this blog post to try to figure out why this might have been the case, and in the next one I’ll focus on one particular difference and discuss it in more depth.

I don’t think it was just the most obvious difference, that the OU course was an “open boundary” course, meaning it was a face-to-face course that invited outside participants as well, and ETMOOC was not–though ultimately, this may have been an important part of why the two differed so much.

A heated discussion

A heated discussion, CC-BY licensed flickr photo shared by ktylerconk

1. Synchronous presentations/discussions

ETMOOC had 1-2 synchronous presentations weekly, some by the “co-conspirators” (the group that planned and facilitated the course), and some by people outside the course. These were mostly held on a platform that allowed interactivity between the presenter and participants, including a whiteboard that participants could write on synchronously, and a backchannel chat that presenters often watched and responded to.

Instead of synchronous presentations, the OU course had assigned readings and/or videos for each week. ETMOOC had no such assigned materials, just the synchronous sessions. The two are somewhat similar, though a live presentation gives you more of a sense of connection to the presenter than reading a static text or watching a recorded video does. There is at least the chance of asking live questions.

The OU course had one synchronous presentation and two synchronous discussions–the last one a discussion of how the course went & thoughts for the future. I could only attend one of these because of time zone issues, and there was much less interactivity–the chat was much less active, for example.

2. Twitter

ETMOOC had a weekly Twitter chat that was, most weeks, very lively. I met numerous people through these chats whom I followed and who followed me back, and I still interact with them after the course. The Twitter stream for the #etmooc hashtag was quite busy most of the time, and still has a good number of posts on it. The OU course had no synchronous Twitter chat, and most days saw maybe 2-3 tweets on the #h817open hashtag. Few participants used Twitter, and those who did didn’t use it very much. Mostly they announced their own blog posts/activities for the course, though some shared outside resources that were relevant.

3. Discussion boards vs. Google+ groups

OU had discussion boards where, I imagine, much of the discussion took place (instead, e.g., of being on Twitter). ETMOOC had no discussion boards, only blogs, Twitter, and a Google+ group.

I went to the OU boards a couple of times, and remembered that I really don’t like discussion boards. I am still not sure why. Partly because they feel closed even if they are available for anyone to view, and partly because I don’t feel like I’m really connecting to people when all I’m getting are their discussion board posts. Unlike on Twitter or Google+, I can’t look at their other posts, their other interests and concerns. I stopped looking at the boards after the first week or so.

Fortunately, some of the members of the OU group set up their own Google+ group, so I did most of my discussion there (and on others’ blogs). There was a small group of active participants on G+ who frequently commented on each other’s blogs–a much smaller group than the ETMOOC Google+ group.


Linked, CC-BY licensed flickr photo shared by cali4beach

4. Building connections

ETMOOC started off with some presentations and discussions on the sorts of activities needed to become a more connected learner (unsurprisingly, as this was a connectivist MOOC), such as introductions to Twitter, to social curation, and to blogging (one of the two blogging sessions stressed the importance of commenting on others’ blogs and how to do it well; see the archive of presentations here). Many of us are still connecting after the course has finished–through a blog reading group, through Twitter and G+, and through collaborative projects we developed later.

OU had no such introduction to things that might help us connect with each other–again, unsurprisingly, as it wasn’t really designed as a cMOOC, it seems. There was a blog hub, and there were suggestions in the weekly emails to read some of the blog posts and comment on them, but it wasn’t emphasized nearly as much as in ETMOOC.

I don’t see myself continuing to connect with any people from the OU course; or maybe I will with just a couple. I didn’t really feel linked to them, even though we read and commented on each other’s blogs a bit. I think the lack of synchronous sessions, including Twitter chats, contributed to this–even in the ETMOOC presentations we talked with each other over the backchannel chat. Of course, things might have been different if I had participated in the online discussion forums in the OU course; but I still think those are not a very good method for connecting with others, for reasons noted above.

5. Learning objectives

The OU course had explicit learning objectives/outcomes for the course as a whole, and for each topic in the course. ETMOOC, by contrast, explicitly did not–see this set of Tweets for a discussion about why. The quick answer is that ETMOOC was designed to be a space in which participants could formulate their own goals and do what they felt necessary to meet them.

6. Dipping vs. completing

ETMOOC had about five topics, each of which ran for two weeks. They were more or less separate in that you didn’t have to have gone through the earlier ones to participate in the later ones. There was an explicit message given out by the co-conspirators, picked up and repeated by participants, that it was perfectly fine to start anytime and drop out whenever one needed/wanted, coming back later if desired. There was no “getting behind” in ETMOOC–that was the message we kept hearing and telling each other. And after a while, it worked, at least for me; I missed a few synchronous sessions and didn’t feel pressure to go back and watch them. I just moved on to things I was more interested in.

The OU course seemed more a “course” in the sense of suggesting, implicitly, through its structure, that it was something one should “complete”–one should start at the beginning and go through all the sections, in order. Some of the later activities built directly on the earlier ones. Now, clearly, this makes sense in the context of having a set of course objectives that are the same for all–participants can’t meet those if there isn’t a series of things to read/watch/do to get to the point where they can fulfill them.


So, clearly, two very different MOOCs, doing different things, for different purposes. Obviously, for some people in some contexts and for some purposes, each one is going to have upsides and downsides. In the next post I focus on one particular downside, for me, of the OU course (though, as you can tell from my tone in the above list, I found ETMOOC more engaging). I also appreciated the flexibility, which the next post addresses.



Contrasting the xMOOC and the … ds106 (#h817open, Activity 14)

For week four of the Open University course on Open Education, we were asked to compare MOOC models: either ds106 or the Change MOOC with something from Coursera or Udacity, focusing on “technology, pedagogy, and general approach and philosophy.”

I decided to go ahead and do this activity (though I’m not doing all of them for the course) because I really want to get a better sense of ds106. Plus, though I’ve explored Coursera a fair bit, and even signed up for one of their courses to see what it’s like being a participant, I haven’t looked at Udacity at all. While I kind of don’t care if I look at Udacity, this activity is a good excuse to look at ds106, which I do care about, and, well, I’ll at least know a bit more about Udacity in case that ever comes in handy.


ds106

“DS” stands for digital storytelling, and this course began in 2010, started by Jim Groom at the University of Mary Washington. It still has students registered officially at UMW, and there are sections at other campuses as well (see “other Spring 2013 courses” at the top of the ds106 site). In addition, it has, well, I have no idea how many other online participants who are participating in parts or all of the course. (There are over 150 blogs listed in the “open online participants” section, but that may not be the same as the number of people who are actually participating. And that doesn’t count the on-campus students.)

One thing that stands out about ds106, among many others, is that while it’s a course that has specific beginning and end times for on-campus participants, it explicitly invites anyone to drop in anytime they like and stay for as long (or as short) as they like. Some people may be participating in a fairly in-depth way, by setting up blogs that are syndicated on the site, while others may just do a few assignments here and there (thus, the near-impossibility of figuring out how many people are actually “participating” at any given time).

Ways of participating in ds106 (for open online participants)

1. The daily create: a low-key, low-commitment, super fun way to participate. Every day there is a new suggestion to create something, and anyone can do one or more of these and add them to the collection. The daily create site explains:

The daily create provides a space for regular practice of spontaneous creativity through challenges published every day. Each assignment should take no more than 15-20 minutes. There are no registrations, no prizes, just a community of people producing art daily.

For example, today’s daily create (April 21, 2013) is: “Take a photograph of something you must see everyday. Make it look like something else!” Once it’s done you simply upload it to Flickr with some specific tags, and voilà, they show up on the daily create site (well, barring some technical hiccups and such). You can also search Flickr for the specific tag for today and find all the creations. Utterly cool.

I decided to do the Daily Creates for April 21 and 22, and had much fun with them. You can see my photos here and here. (I’ve got a lot of work to do on the “creative” end of things.)

2. Do some assignments from the “open assignment bank.” According to the “about” page for the assignments, they are all created by ds106 students. Those who are taking the course in a formal sense on a campus don’t need to do all the same assignments–they can pick and choose in order to put together those that will equal a certain number of “points” for a topic in the course. And anyone can do any one or more of the assignments, anytime they like. One can either do them on one’s blog and register it with the blog aggregator, or upload it to the site directly.

3. Don’t just do the assignments, write about them in a blog. Tell a story about why you chose that assignment, the context of what you created, and how you did it so others can see the process. Then, connect your blog to the ds106 hub so it shows up here. Further, read some posts from others’ blogs and comment. Build community.

4. Follow along with an on-campus course. You could look at the posts from a particular on-campus course (see top menu of ds106 site) and do similar topics as they are, and comment on their blogs/assignments.

This is all in addition to following ds106 on Twitter through the #ds106 hashtag.

And really, what other “course” has its own radio station? The most amazing thing about it is that it’s open to anyone to broadcast on, so far as I can tell. Well, anyone who can figure out how to do it. Find out what’s on by following @ds106radio or the #ds106radio hashtag on Twitter.

And there’s a “tv” station too, though I’m not sure how it works. I just know I got a tweet about an upcoming presentation, and when I clicked on the tv station site I could watch the presentation. Seems to be an option for live chat, too. You can follow @ds106tv or the #ds106tv hashtag on Twitter.
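The blog-hub mechanics in item 3–register your feed, and posts tagged for the course get pulled into the central site–can be sketched in a few lines. This is a minimal illustration of the general idea of tag-based syndication, not ds106’s actual setup (the feed content and tag names here are invented):

```python
import xml.etree.ElementTree as ET

# A tiny hand-made RSS feed standing in for one participant's blog.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example participant blog</title>
  <item>
    <title>My first Daily Create</title>
    <link>http://example.com/daily-create-1</link>
    <category>ds106</category>
  </item>
  <item>
    <title>Unrelated post about gardening</title>
    <link>http://example.com/gardening</link>
    <category>garden</category>
  </item>
</channel></rss>"""

def syndicate(feed_xml, wanted_tag="ds106"):
    """Return (title, link) pairs for items carrying wanted_tag --
    roughly what a blog hub does with each registered feed."""
    root = ET.fromstring(feed_xml)
    posts = []
    for item in root.iter("item"):
        tags = [c.text for c in item.findall("category")]
        if wanted_tag in tags:
            posts.append((item.findtext("title"), item.findtext("link")))
    return posts

print(syndicate(SAMPLE_FEED))
```

Only the course-tagged post is pulled into the hub; everything else on the participant’s blog stays private to that blog, which is part of what makes this kind of open syndication appealing.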

Udacity: “Elementary Statistics”

At some point I need to learn some statistics for my work in the Scholarship of Teaching and Learning. So I decided to take a look at Udacity’s “Elementary Statistics” course, for possibly doing it later. 

Hertzsprung-Russell Diagram, CC-BY licensed flickr photo shared by Arenamontanus

General observations

Starting off with the main Udacity “How it works” page, I find something suspicious:

The lecture is dead
Bite-sized videos make learning fun

My experience with Coursera was that the traditional, hour-or-so-long lecture format seemed to just be cut up into shorter pieces, with a talking head talking for, if I remember correctly, 10-15 minutes at a time, interspersed with quizzes or other activities. We were still supposed to watch all the pieces. That’s not what I’d call killing the lecture: a lecture is still a lecture, no matter how short it is. This point has been made countless times before (here is just one example, from the excellent “More or Less Bunk” blog by Jonathan Rees). The lecture is dead. Long live the (mini) lecture. So I’m right away wondering whether Udacity is going to be any different on this point.

And I really, really don’t like the “branding” they do: they call us “Udacians” (Coursera calls participants “Courserians”), and they have their own new word–see, e.g., here. Yuck. It really puts me off. I don’t mind the sense of identity I got through doing ETMOOC, a sense of community, of belonging to something. I think it’s because the latter was developed over time, rather than foisted upon people when they start; with Udacity I feel like I’m being told I’m part of a community in order to get me to care about the company, rather than letting that feeling develop over time (if at all).

About the course front page: I hate the fact that I have to actually enroll to see how the course works (unlike ds106, in which all elements are out there for anyone to see and start doing). No wonder these kinds of MOOCs have such large enrollment numbers. You have to enroll just to see the thing in the first place.

Why do they require a registration before you can get a real sense of a course? At the very least, they can keep track of people that way to send them marketing materials. And they can gather a bunch of data about participants–all one’s courses, all one’s work inside those courses, can be tracked if they can attach work in the course to specific people. Which makes me wonder: what is that data being used for, exactly? The privacy policy doesn’t answer that question fully:

We use the Personally Identifiable Information that we collect from you when you participate in an online course through the Website for managing and processing purposes, including but not limited to tracking attendance, progress and completion of an online course.

But what do they do with the information about progress in courses, besides store it so you can go back to the course later and see how much you’ve done, or use it to issue certificates? Well, here’s one answer: it’s being used to make money. Udacity and similar companies can identify students who might be good matches for employers, and the employers can pay for the service.

But I wonder if any of this data could be used to provide useful information on online teaching and learning. Maybe, maybe not, but we may never know unless researchers can get access to the data. (Mike Caulfield explains here that institutions that are partnered with Coursera can get at least some data, but I don’t know what Udacity’s policies are in this regard.)

Nor do I expect that I, as a participant, will have detailed access to my data, because I don’t own it; they do–a problem discussed by Audrey Watters, here (and in a great presentation for ETMOOC, linked here).

I decide to register for an account and take a deeper look at the course–because really, I want to see how they killed the lecture.

Starting the course

The course goes right away into a short (under 2 minutes) introductory video, and I pretty quickly get the hang of how this course works: very short videos (0-2 mins, some 30-45 secs long) followed by quick quiz questions (multiple-choice, fill in the blank, that sort of machine-gradable thing), back and forth for each “lesson” (though some video segments don’t have quiz questions attached). At the end of each lesson there is a problem set. And so it goes, for 12 lessons.

One nice thing is that there is a link to forum questions connected to each of the short videos, because if you go to the main forum page, you just get a bunch of discussions that aren’t clearly organized by topic or lesson. You can organize them by tags, but you have to know what the tags are to do a search on them. Another nice thing is that for each video you can click on the “ask a question” button, and it automatically adds the right tags for you for that particular video segment.

I skipped ahead to the first problem set and tried to do some of them, just to see what they’re like. All multiple choice, and like the “quizzes” in the lessons, you are told right away if your answer is right or wrong. In the quizzes you can just skip ahead to the answer if you can’t figure it out; not so in the problem sets. You have to keep trying until you get it right (a process of elimination, in many cases) or just skip the question. Or, you can always take a look at the discussion forums, where I found that sometimes someone had helpfully posted the answers.

Apparently there will be a final exam, but it won’t be ready online until May (not all the lessons are ready yet, either).

Is the lecture dead?

Yes and no.

The course does a great job of mixing lecture with participant activities, such as short quizzes to apply what’s just been said, or sending you to third-party sites to do activities there. In the first lesson, they sent us to do a face memory test from the BBC, and then asked us to put our scores into a Google form. Much of the rest of the first lesson referred back to this test and how one might think about the data generated by it. That’s a nice way to use an example for a stats lesson.

I didn’t make it all the way to the end of the first lesson, but if I had, I might see what they are actually doing with the data generated by student participants who take the test and upload their scores into the Google form. What’s it being used for? I think it’s uploaded anonymously, but I’m not sure because you access the form through the course interface itself. Hmmmmm.

[And if my BBC face test data was connected to my personally identifiable information, then I should have had to fill out a consent form for it to be used, right? Might they have gotten ethics approval to collect such data? Or maybe they don’t need to? The important thing here is that none of these questions are answered, even the question of whether my Google form data had identifiable information on it. I just don’t know.]

The videos still contain lectures, but they are so short as to hardly seem such; often there is a quiz every 30 secs to 1 minute (sometimes longer, but not much). So there is a good deal of participant activity going on as well (one might even call it a form of “active learning”). And the videos for this course are (mostly) not face shots of instructors talking, but rather some kind of digital whiteboard with text and diagrams.

One could say these aren’t like lectures because they are so interspersed with participants having to do something. But the pedagogical approach that underpins lecturing is still in evidence, namely the knowledge/information transmission approach (more on this, below). So in some sense, there are still lectures here; they are just very, very short.

I tend to think there’s nothing wrong with having some lecturing going on here and there, though I’m also rather drawn to Jacques Rancière’s The Ignorant Schoolmaster, which can be read as suggesting that one ought not to act as an expert and engage in explaining things to learners at all (see, e.g., the section on “Emancipatory Method” here, and the nice summary by my colleague Jon Beasley-Murray here, along with a critique I have to think about further).

I expect Udacity means that “the hour-long lecture, without  participant activities to break it up” is dead (which, of course, it’s not, but that’s another matter). But the “expert” as transmitter of knowledge to be grasped, and the “learner” as taking on that knowledge in exactly the same way as the expert, is not.


Technology

The most striking difference in terms of technology is this. For the Udacity course, there is some pretty heavy technological investment going into the production of the course. The videos are not just recordings of professors talking, but often of a digital board that one of the instructors writes on with a stylus, in different colours. The video switches fairly seamlessly into a quiz: the quiz looks just like what was last seen in the video, but when you move to it, click boxes appear and suddenly you’re in interactive mode. The technological structure of the course may not be terribly complicated (what do I know about such things? pretty much nothing), but my point is that the main technological investment is happening on the “course” side.

What’s different about ds106 is that the participants themselves create things with technology, with software and applications, rather than being consumers of such products produced by those in charge of the course. Instead of just passively interacting with things made by others, ds106 participants learn how to use technology to create their own artifacts. Just a quick glance at the Assignment Bank or The Daily Create shows that course participation is heavily focused on making things rather than (only) taking in knowledge from others. As does the fact that all the assignments (and at least some, or many, of The Daily Creates) are created by course participants.

Pedagogy and philosophy

Making and replicating

The above point about different uses of technology in the Udacity course vs. ds106 reminds me of some things George Siemens said about the difference between “xMOOCs,” like those from Udacity and Coursera, and “cMOOCs,” or connectivist MOOCs, like ETMOOC and Change 11 (I discuss some of the differences in an earlier blog post). He states here that

Our MOOC model [cMOOC] emphasizes creation, creativity, autonomy, and social networked learning. The Coursera model emphasizes a more traditional learning approach through video presentations and short quizzes and testing. Put another way, cMOOCs focus on knowledge creation and generation whereas xMOOCs focus on knowledge duplication.

Is ds106 a cMOOC? It does have the focus on creating over duplication. Alan Levine argues that it’s not a MOOC at all:

To me, all other MOOCs, be they x or c type, sis [sic] to create the same content/curriculum for everyone in the course- they all do the same tasks. And to be honest, the framing points are actually weekly lectures, be they videos spawned out of xMOOCs or webinars. The instruction in these modes are teacher centric (even if people can banter in chat boxes).

Should we say that’s the definitive answer to the question? I don’t know, and really, it doesn’t matter in the end. But Levine has a point about other open online courses being more focused on weekly presentations (ETMOOC was like this) and having the same general topics for all each week, even if there aren’t always common assignments given to everyone (there weren’t in ETMOOC). ETMOOC was also more of a set “event” happening at a certain time (though, thankfully, many of us are continuing to think and discuss and work together afterwards on a “blog reading group”). ds106 is even less structured than that, being something one can participate in anytime, an ongoing community more than a course–except for those who are taking it as part of an official educational program, that is.

The Udacity course on statistics definitely holds to a model of knowledge duplication, in which participants learn things from experts and duplicate that knowledge on quizzes and problem sets. This is not surprising, given the topic, and not really a problem, given the topic. I found it more problematic when looking at a Coursera course on critical thinking and argumentation.

For all that, though, the Udacity course doesn’t encourage passivity in participants; one is continually doing things with the information being presented, instead of mainly watching or listening. It’s just that one isn’t really making or creating new artifacts, new knowledge in these activities, things to be contributed back to the community of learners. Except, of course, on the discussion forums, which are not really integral to the course. You can go to them if you have a question, or want to see answers to others’ questions, or want to answer others’ questions, but I think you can do the whole course without ever going to the forums.


I’m not familiar enough with educational theories to be able to say much of anything scholarly here, so I’ll just make a couple of quick observations that risk being so general as to caricature the approaches in these two online experiences.

In the Udacity course, the philosophical approach has already been stated above: a kind of expert-transmission model. The instructors are experts who should explain the topics in a way that will work for the most participants possible. There can be no adjustment in the instruction for different participants, as it must necessarily be the same for all in the main presentations and quizzes (though it can be altered over time, if evidence suggests a need for it). The assumption has to be that there can be a way to reach at least a good portion of a mass audience of learners, through clarity of presentation and testing of understanding along the way. If this doesn’t work for some people, they can hopefully get help through the forums (which have contributions from both participants and, at times, the instructors).

The learning experience is, from what I experienced in the first lesson and problem set, entirely instructor-directed, with the participants going through an already-set and -structured path through the course. It is possible to earn a certificate for a course, according to the FAQ page, if you complete a certain number of “mastery questions” correctly and thus achieve at least a certain “mastery level.” In this case, “mastery” means being able to replicate the knowledge one has ingested.
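The certificate rule the FAQ describes–answer enough “mastery questions” correctly to reach a certain “mastery level”–amounts to a simple threshold check. A rough sketch of what such a rule might look like (the 70% cutoff and the numbers below are invented for illustration; Udacity doesn’t publish its actual formula):

```python
def mastery_level(correct, total):
    """Fraction of mastery questions answered correctly."""
    return correct / total if total else 0.0

def earns_certificate(correct, total, cutoff=0.7):
    """Hypothetical certificate rule: mastery level must meet the cutoff.
    The 0.7 cutoff is an invented placeholder, not Udacity's actual bar."""
    return mastery_level(correct, total) >= cutoff

print(earns_certificate(18, 24))  # 18/24 = 0.75, which clears a 0.7 cutoff
```

Whatever the real numbers, the point stands: “mastery” here is measured by how faithfully one can replicate the presented material, not by what one creates with it.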

ds106, by contrast, (at least for open, online participants) is participant-directed rather than instructor-directed. Participants decide what they want to do, and when. There is no indication that one ought to follow a pre-set path through the course, nor that one should try to work through most or all of the topics.

The instructors in ds106 are not acting as “experts” for the open participants. There is nothing in the way of information being given to participants that they must somehow give back in the same form. There is only the ds106 handbook, which provides advice and tips for using digital tools as well as for blogging about one’s artifacts, but participants then create their own artifacts and knowledge with those tools. Indeed, the “experts” in ds106 are not the instructors, at least for the open participants–it’s really the other students. They are the ones producing the artifacts, creating the assignments, and commenting on each other’s work and blogs.


It’s no secret on this blog that I prefer the “cMOOC” structure to the xMOOC one. Generally I prefer providing students with more freedom to investigate things they find to be engaging and valuable than to tell them exactly what they should do in order to “learn.” (Though my reservations about rhizomatic learning are also relevant here).

So it would probably seem that I’d prefer ds106 to the Udacity course. Which I do. I really appreciate that the “course” is about what participants can create rather than about what experts have to tell them.

But some things do lend themselves fairly well to the expert, knowledge-dissemination model, like basic statistics. That’s not to say that I don’t think participants can add important critical and creative knowledge to the field of stats, but at the start, one has to grasp some of the basic concepts in order to understand the field well enough to do so. Or at least, to talk to others in the field about one’s ideas. And Udacity does a fairly good job of that, from what I’ve seen.

I expect I’d have a different response to a Udacity-type course in philosophy, however.


MOOCs and humanities, revisited

In the last post I discussed how I have come to learn about the different kinds of MOOCs through my participation in etmooc. I also said that through learning about a new kind of MOOC, the cMOOC or “network-based” MOOC, I was reconsidering my earlier concerns with MOOCs. Might the cMOOC do better for humanities than the xMOOC?

A humanities cMOOC

“Roman Ondák”, cc licensed ( BY ) flickr photo shared by Marc Wathieu

I haven’t yet decided whether or not one could do a full humanities course, such as a philosophy course, through a cMOOC structure. Brainstorming a little, though, I suppose that one could have a philosophy course in which:

  • Common readings are assigned
  • Presentations are given by course facilitators and/or guests, just as in etmooc
  • Participants are encouraged to blog about the readings and presentations and comment on each other’s blogs (through a course blog hub, like etmooc and ds106 have)
  • Dedicated Twitter hashtag, plus a group on a social network like Google+, and a group on a social bookmarking site like Diigo (see etmooc’s group site on Diigo)
  • Possibly a YouTube channel, for people to do vlogs instead of blogs if they want, or share other videos relevant to the course

Would this sort of structure be more likely to allow for teaching and practice of critical thinking, reading, and writing skills, as I discussed in my earlier criticism of MOOCs (which was pretty much a criticism of xMOOCs)? I suppose it depends, in part, on what is discussed in the presentations. The instructors/facilitators could model critical reading and thinking by explaining how they are interpreting texts and pointing out potential criticisms of the arguments. They could talk about recognizing, criticizing, and creating arguments, so that participants would be encouraged to present their own arguments in blogs as clearly and strongly as possible, and to offer constructive criticisms of the works being read as well as of each other’s arguments (though the latter has to be undertaken carefully, just as in a face-to-face course).

This would involve, effectively, peer feedback on participants’ written work. Rough guidelines for blog posts (at least some of them) could be given, so that in addition to reflective pieces (which are very important!) there could also be some blog posts that are focused on criticizing arguments in the texts, some on creating one’s own arguments about what’s being discussed, etc.

What you wouldn’t be able to do well with this structure are writing assignments in the form of argumentative essays. These take a long time to learn how to do well, and ideally should have more direct instructor/facilitator feedback rather than only peer feedback, in my view. Peer feedback is important too, but could lead to problems being perpetuated if the participants in a peer group share misconceptions.

Another thing you can’t do well with a cMOOC is require that everyone learn and be assessed on a particular set of facts, or content. A cMOOC is better for creating connections between people so that they can pursue their own interests, what they want to focus on. Each person’s path through a cMOOC can be very different. Thus, as noted in my previous post, there is not a common set of learning objectives; rather, participants decide what they want to get out of the course and focus on that.

One would need to have a certain critical mass of dedicated and engaged participants for this to work. If it’s a free and open course, then people will participate when they can, and can flit in and out of the topics as their time and interest allows. That’s fantastic, I think, though if there are few participants that might mean that for some sections of the course little is happening. So having a decent sized participant base is important. (How many? No idea.)

I envision this sort of possibility as a non-credit course for people who want to learn something about philosophy and discuss it with others. Why not give credit? There would have to be more focus on content and/or more formal assessments, I think (at least in the current climate of higher education).

A cMOOC as supplement to an on-campus course

Even if a full cMOOC course in philosophy or another humanities subject may not work, I can see a kind of cMOOC component to philosophy courses, or Arts One. In addition to the campus-based, in-person course, one could have an open course going alongside it. This is what ds106 is like. One could have readings and lectures posted online (or at least, links to buy the books if the readings aren’t readily available online), and then have a platform for students who are off campus to engage in a cMOOC kind of way.

Then, those off campus can participate in the course through their blog posts and through discussions and resource sharing on the other platforms, like we do in etmooc. Discussion questions used in class could be posted for all online participants. Students who are on campus could be blogging and tweeting and discussing with others outside the course as well as inside it.

Frankenstein engraved

Frontispiece to Mary Shelley’s Frankenstein (1831), by Theodor von Holst [Public domain], via Wikimedia Commons. One of the texts on Arts One Digital.

Discussions would expand to include many more people with many more backgrounds and things to contribute, which is likely to enrich the learning experience. The volume might become too much for any individual to follow, but then one just has to learn to pick and choose what to read and comment on (more on this below). All participants could make connections and continue discussions beyond the course itself.

Arts One has already started to move in this direction, with a new initiative called Arts One Digital. So far, it has some posted lectures, links to online versions of some texts, a Twitter feed, and blog posts. This is a work in progress, and we’re still figuring out where it should go. I think extending the Arts One course in the way described above might be a good idea.

Again, the main problem with this idea (beyond the fact that yes, it will require more personnel to design and run the off-campus version of the course) is getting a high number of participants. It won’t work well if there aren’t very many people involved–a critical mass is needed to allow people to find others they want to connect with in smaller groups, to engage in deeper discussions, to help build their own personal learning network.

Looking back at previous concerns with (x)MOOCs

Besides general worries about their ability to help students develop critical skills, I was also concerned in my earlier post with the following:

  • In the Coursera course on reasoning and argumentation (“Think Again”) that I sat in on briefly, I found myself utterly overwhelmed by the volume of the discussion board. I complained that I had to scroll and scroll just to get through the comments on one post before reaching the next, and then repeat that for each of the thousands of posts. Even a single topic had far too many posts.
  • I felt that the asynchronous discussion opportunities weren’t as good as synchronous ones, which allow for groups to be in the same mind space at the same time, feeding off each others’ ideas and coming up with new ideas. With asynchronous discussions, one might not get a response to one’s idea or comment until long after one has been actively thinking about it, and then at that point one may not be as interested in discussing it anymore (or at the very least, the enthusiasm level may be different).
  • The synchronous option of Google Hangouts seems to be a promising way to address the previous point, but I noted in my earlier post that there had been some reports of disrespectful behaviour in one or two of those in the “Think Again” course. I said I thought a moderator would be needed for such discussions, just as we have in face to face courses to ensure students treat each other respectfully.

Can a cMOOC address these concerns?

  1. From my experience with etmooc, the discussion does not have to get overwhelming. The thing is, each person focuses on what they want to focus on from the presentations, or from what others have said in their blogs, or from resources shared by others. There is no single “curriculum” that we all have to follow, so it’s not the case that everything posted by each person is relevant to everyone else’s interests and purposes for the course. This could be true of a philosophy or Arts One cMOOC as well, so it could be easier to pick and choose what, amongst the huge stream of things to read and think about, one wants to focus on.
  2. Synchronous discussions are difficult in a large group. In etmooc we have some opportunities for them in the presentations, which allow for people to write on the whiteboard, engage in a backchannel “chat,” and also take the mic and ask questions/offer comments. One could have the presentations have more time for discussion, perhaps, which could take place in part on the chat and in part via audio. It’s not as good as face to face discussions, though–much more fragmented.
  3. Google Hangouts are an alternative, though I haven’t tried doing one in etmooc. Some participants have, though, and reported success. However, the people taking etmooc are mostly professionals (teachers and businesspeople) who are highly motivated, responsible, and respectful. Having Google Hangouts where anyone in the world can show up could be inviting trouble. I don’t see a cMOOC addressing this problem.

cMOOCs in humanities–what’s not to love?

What other problems might there be with trying to do a cMOOC in humanities, whether on its own or as a supplement to another course? Or, do you love the idea? Let us know in the comments.

UPDATE: I just found, in that wonderfully synergistic way that etmooc seems to work, this blog post by Joe Dillon, which considers how well a cMOOC like etmooc stacks up against a face to face course. It’s just one example, but it can provoke some further thought on whether a cMOOC for humanities might be a good thing.