This is the third in a series of posts on a research project I’m developing on evaluating cMOOCs. The first can be found here, and the second here. In this post I consider an article that uses Downes’ four “process conditions” for a knowledge-generating network to evaluate a cMOOC. In a later post I’ll consider another article that takes a somewhat critical look at these four conditions as applied to cMOOCs.
Mackness, J., Mak, S., & Williams, R. (2010). The ideals and reality of participating in a MOOC. In Proceedings of the 7th International Conference on Networked Learning 2010 (pp. 266–275). Retrieved from http://eprints.port.ac.uk/5605/
In this article, Mackness et al. report findings from interviews of participants in the CCK08 MOOC (Connectivism and Connective Knowledge 2008; see here for a 2011 version of this course) insofar as these relate to Downes’ four process conditions for a knowledge-generating network: autonomy, diversity, openness, and interactivity. In other words, they wanted to see whether these conditions were met in CCK08, according to the participants. If you’re not familiar with Downes’ work, it may be helpful to read an earlier post of mine that tries to explain these conditions before going through these results.
Specifically, the researchers asked: “To what extent were autonomy, diversity, openness and connectedness/interactivity a reality for participants in the CCK08 MOOC and how much they were affected by the course design?” (271). They concluded that, in this particular course at least, there were difficulties with all of these factors.
Data
Data for this study came from email interview responses by 22 participants (including instructors), out of 58 who had self-selected to be interviewed on a previous survey sent to 301 participants. Unfortunately, the interview questions are not provided in the paper, so it’s hard to tell what the respondents were responding to. I find it helpful to see the questions in order to better understand the responses given, and to be able to critically review an article’s interpretation of those responses.
Results
Autonomy
The researchers note that most respondents valued autonomy in a learning environment: “Overall, 59% of interview respondents (13/22) rated the importance of learner autonomy at 9 or 10 on a scale of 1-10 (1 = low; 10 = high)” (269). Unfortunately, I can’t tell whether this means they valued the kind of autonomy they experienced in that particular course, or the general idea of learner autonomy in the abstract (and if the latter, how was it defined?). This is one place, for example, where providing the question asked would help readers understand the results.
Mackness et al. then argue that some participants (though how many of the 22 is unclear) nevertheless found the experience of autonomy in CCK08 to be problematic. The researchers provide quotes from two participants stating that they would have preferred more structure and guidance, and from one course instructor who reported that learner autonomy led to some frustration because what s/he was trying to say or do in the course was not always “resonating with participants” (269).
The authors also provide a quote from a course participant who said they loved being able to work outside of assessment guidelines, but then comment on that statement by saying that “autonomy was equated with lack of assessment.” Perhaps, but not necessarily: maybe they could still get good feedback from peers, for example, or maybe the instructors could still assess something outside the guidelines. I don’t know, but the statement doesn’t seem to mesh, by itself, with the interpretation. Moreover, the respondent saw this as a positive thing, whereas the framing of the authors’ interpretation suggests it was a negative, a difficulty with autonomy. I’m not seeing that.
The researchers conclude that the degree of learner autonomy in the course was affected by the following:
levels of fluency in English, the ‘expertise divide’, assessment for credit participants, personal learning styles, personal sense of identity and the power exerted, either implicitly or explicitly, by instructors through their communications, status and reputation, or by participants themselves…. (271)
In addition, there were reports of some “trolling” behaviour on the forums, which led some participants to “retreat to their blogs, effectively reducing their autonomy” (271). The authors point out that constraining autonomy in the forums by discouraging or shutting down such behaviour may actually have promoted autonomy amongst more learners.
Diversity
The researchers note that learner diversity was certainly present in the course, including diversity in geography, language, age, and background. They give examples of diversity “reflected in the learning preferences, individual needs and choices expressed by interview respondents” (269).
However, diversity was also a problem in at least one respect: not all learners had the “skills or disposition needed to learn successfully, or to become autonomous learners in a MOOC” (271). This is less of a problem when there is significant scaffolding, such as support for participants’ “wayfinding in large online networks,” but CCK08 was instead designed to have “minimal instructor intervention” (271). In addition, the authors point out, promoting sharing in a network like a cMOOC requires building up a certain amount of trust; and the larger and more diverse the network, the more work may be needed to help participants build that trust.
Openness
CCK08 was available, for free, to anyone who wanted to participate (without receiving any university or other credit), so long as they had a reliable web connection. The interview data suggest that participants interpreted “openness” differently: some felt they should (and did) share their work with others, thus interpreting openness as involving sharing one’s work, while others worked mostly alone and did little or no sharing, thereby interpreting openness, the authors suggest, merely as the idea that the course was open for anyone with a reliable web connection to participate in. The authors seem to be arguing here that these differing conceptions of openness are problematic: the “implicit assumption in the course was that participants would be willing or ready to give and receive information, knowledge, opinions and ideas; in other words to share freely” (270), but not everyone got that message. They point to a low rate of active participation: only 14% of those enrolled (270).
They also note that amongst participants there was no “common understanding of openness as a characteristic of connectivism” (270), implying that there should have been. But I wonder whether a conscious understanding of openness, and the ability to express it as a clear concept, is necessary for a successful connectivist course. This is just a question at this point; I haven’t thought it through carefully. I would at least have liked to see more on why this should be considered a problem, and on whether the respondents were asked specifically for their views of openness. The responses quoted in this section of the paper don’t refer to openness at all, which makes me think the researchers may have inferred participants’ understandings of openness from other things they said. That’s not a problem in itself, of course, but asking participants directly for their views of openness might have produced different answers, and answers more relevant to concluding whether or not participants shared a common understanding of openness.
Finally, Mackness et al. argue that some of the barriers noted above also limited participants’ willingness to communicate openly and share work with others: this willingness can be “compromised by lack of clarity about the purpose and nature of the course, lack of moderation in the discussion forums, which would be expected on a traditional course, and the constraints (already discussed in relation to autonomy and diversity) under which participants worked” (272).
Interactivity
There were significant opportunities for interaction and for connecting with others, but the authors note that what is most important is not whether (and how much) people connected with others, but what those connections made possible. Respondents noted some important barriers to connecting, as well as problems that kept some interactions from yielding useful benefits. As noted above, some participants pointed to “trolling” behaviour on the forums, and one said there were some “patronising” posts as well, which, the respondent said, likely led some participants to disengage from that mode of connection. Another respondent noted differences in expertise levels that led him/her to disengage when s/he could no longer “understand the issues being discussed” (271).
The researchers conclude that connectivity alone is not sufficient for effective interactivity–which of course makes sense–and that the degree of effective interactivity in CCK08 was not as great as it might have been with more moderation by instructors. However, the size of the course made this unfeasible (272).
One thing I would have liked to see in this analysis of “interactivity” is what Downes focuses on for this condition: the idea that the kind of interactivity needed is that which promotes emergent knowledge, knowledge that emerges from the interactions of the network as a whole rather than from individual nodes (explained by Downes here and here, the first of which the authors themselves cite). This is partly because, if one is using Downes’ framework, it makes sense to evaluate the course against the specifics of what he means by “interactivity.” It’s also partly because I just really want to see how one might try to evaluate that form of interactivity.
Conclusion
Mackness et al. conclude that
some constraints and moderation exercised by instructors and/or learners may be necessary for effective learning in a course such as CCK08. These constraints might include light touch moderation to reduce confusion, or firm intervention to prevent negative behaviours which impede learning in the network, and explicit communication of what is unacceptable, to ensure the ‘safety’ of learners. (272)
At the same time, they point to the small size of their sample and the need for further studies of these sorts of courses to validate their findings.
That makes sense to me, from the unstudied perspective of someone who has participated in a few large open online courses and one small-ish one, one of which seemed modeled to some degree along connectivist lines (ETMOOC). There was significant scaffolding in ETMOOC: it started off with discussions of connected learning and help with blogging and commenting on blogs. There wasn’t much visible moderation of discussions by the course collaborators (several people collaborated on each two-week topic, acting in the role of “instructors” for a brief time), except insofar as some of the collaborators were very actively present on Twitter and in commenting on others’ blogs, making sure to tweet, retweet, bookmark to Diigo, or post to Google+ especially helpful or thought-provoking things. We didn’t have any trolling behaviour that I was aware of, and we also didn’t have a discussion forum. But IF there had been problems in the Google+ groups or in Twitter chats, I would have hoped one or more of the collaborators would have actively worked to address them; I think they would have, though since it didn’t happen (to my knowledge) I can’t be certain.
Some further thoughts
If one decides that Downes’ framework is the right one for evaluating an open online course like a cMOOC (which I haven’t decided yet; I still need to look more carefully at his arguments for it), it would make sense to unpack the four conditions more carefully and collect participants’ views on whether those specific ways of thinking about autonomy, diversity, openness and interactivity were manifested in the course. The discussion of these four conditions here is at times rather vague. What, more specifically, does learner “autonomy” mean, for example? Even if the authors don’t want to use Downes’ own view of autonomy, it would be helpful to specify what conception of autonomy they’re working with. I noted a similar point about interactivity above, where the paper’s discussion is also somewhat vague: what sort of interactivity would have indicated success, exactly, beyond participants simply communicating with each other on blogs or forums?
I find it interesting that in his most recent writing on evaluating cMOOCs (see the longer version attached to this post, and my discussion of this point here, along with the helpful comments I’ve gotten on that post!), Downes argues that the quality of cMOOCs should be evaluated by some kind of expert in cMOOCs or in one of the fields/topics they cover, while here the authors looked to participants’ experiences. Interesting, because it makes more sense to me to focus on the experiences of the participants than to ask someone who may or may not have taken the course, at least if one wants to find out whether the course was effective for participants.
Still, I can see how some aspects of these conditions might be measured without looking at what participants experienced, or at least in other ways in addition to gathering participants’ subjective evaluations. The degree to which a course is “open,” for example, might have elements that could be measured beyond or in addition to what participants themselves thought. Insofar as openness means that anyone with a reliable internet connection can participate without cost, and can move into and out of the course easily, that could be assessed partly by looking at the design and platform of the course itself, as well as at participants’ evaluations of how easy it was to get into and out of it. If openness also involves sharing one’s work, one could look at how much sharing was actually done, as well as ask participants what they shared, why, and how (and what they did not share, and why).
I just find it puzzling that in that recent post Downes doesn’t talk about asking participants about their experiences in a cMOOC at all. I’m not sure why.
[I just read a recent comment on an earlier post (which I haven’t replied to yet) that discusses exactly this point: it makes no sense to leave out student experiences. I should have read and replied to that before finalizing this post!]