This is the third in a series of posts on a research project I’m developing on evaluating cMOOCs. The first can be found here, and the second here. In this post I consider an article that uses Downes’ four “process conditions” for a knowledge-generating network to evaluate a cMOOC. In a later post I’ll consider another article that takes a somewhat critical look at these four conditions as applied to cMOOCs.
Mackness, J., Mak, S., & Williams, R. (2010). The ideals and reality of participating in a MOOC. In Proceedings of the 7th International Conference on Networked Learning 2010 (pp. 266–275). Retrieved from http://eprints.port.ac.uk/5605/
In this article, Mackness et al. report findings from interviews of participants in the CCK08 MOOC (Connectivism and Connective Knowledge 2008; see here for a 2011 version of this course) insofar as these relate to Downes’ four process conditions for a knowledge-generating network: autonomy, diversity, openness, and interactivity. In other words, they wanted to see if these conditions were met in CCK08, according to the participants. If you’re not familiar with Downes’ work, it may be helpful to read an earlier post of mine that tries to explain these conditions before looking at these results.
Specifically, the researchers asked: “To what extent were autonomy, diversity, openness and connectedness/interactivity a reality for participants in the CCK08 MOOC and how much they were affected by the course design?” (271). They concluded that, in this particular course at least, there were difficulties with all of these factors.
Data
Data for this study came from 22 participants (including instructors) who responded to email interview questions, out of 58 who had self-selected to be interviewed on a previous survey sent to 301 participants. Unfortunately, the interview questions are not provided in the paper, so it’s hard to tell what the respondents were responding to. I find it helpful to see the questions so as to better understand the responses, and to be able to undertake a critical review of an article’s interpretation of those responses.
Results
Autonomy
The researchers note that most respondents valued autonomy in a learning environment: “Overall, 59% of interview respondents (13/22) rated the importance of learner autonomy at 9 or 10 on a scale of 1-10 (1 = low; 10 = high)” (269). Unfortunately, I can’t tell if this means they valued the kind of autonomy they experienced in that particular course, or whether they valued the general idea of learner autonomy in an abstract way (but how was it defined?). Here is one place, for example, where providing the question asked would help readers understand the results.
Mackness et al. then argue that nevertheless, some participants (but how many out of the 22?) found the experience of autonomy in CCK08 to be problematic. The researchers provide quotes from two participants stating that they would have preferred more structure and guidance, and from one course instructor who reported that learner autonomy led to some frustration that what s/he was trying to say or do in the course was not always “resonating with participants” (269).
The authors also provide a quote from a course participant who said they loved being able to work outside of assessment guidelines, but then comment on that statement by saying that “autonomy was equated with lack of assessment.” Perhaps, but not necessarily: maybe such participants could still get good feedback from peers, for example, or maybe the instructors could still assess work done outside the guidelines. I don’t know, but the statement doesn’t seem to mesh, by itself, with the interpretation. Plus, the respondent saw this as a positive thing, whereas the rhetorical framing of the interpretation suggests it was a negative, a difficulty with autonomy. I’m not seeing that.
The researchers conclude that the degree of learner autonomy in the course was affected by the following:
levels of fluency in English, the ‘expertise divide’, assessment for credit participants, personal learning styles, personal sense of identity and the power exerted, either implicitly or explicitly, by instructors through their communications, status and reputation, or by participants themselves…. (271)
In addition, there were reports of some “trolling” behaviour on the forums, which led some participants to “retreat to their blogs, effectively reducing their autonomy” (271). The authors point out that some constraint on autonomy in the forums through discouraging or shutting down such behaviour may have actually promoted autonomy amongst more learners.
Diversity
The researchers note that learner diversity was certainly present in the course, including diversity in geography, language, age, and background. They give examples of diversity “reflected in the learning preferences, individual needs and choices expressed by interview respondents” (269).
However, diversity was also a problem in at least one respect, namely that not all learners had the “skills or disposition needed to learn successfully, or to become autonomous learners in a MOOC” (271). This is not so much of a problem if there is significant scaffolding, such as support for participants’ “wayfinding in large online networks,” but CCK08 was instead designed to have “minimal instructor intervention” (271). In addition, the authors point out, in order to promote sharing in a network like a cMOOC, a certain amount of trust needs to be built up; and the larger and more diverse the network, the more work may need to be done to help participants build that trust.
Openness
CCK08 was available, for free, to anyone who wanted to participate (without receiving any university or other credits), so long as they had a reliable web connection. The interview data suggests that participants interpreted “openness” differently: some felt they should (and did) share their work with others (thus interpreting openness as involving sharing one’s work), while some worked mostly alone and did not do much or any sharing, thereby interpreting openness, the authors suggest, merely as the idea that the course was open for anyone with a reliable web connection to participate in. The authors seem to be arguing here that these differing conceptions of openness are problematic because the “implicit assumption in the course was that participants would be willing or ready to give and receive information, knowledge, opinions and ideas; in other words to share freely” (270), but that not everyone got that message. They point to a low rate of active participation: only 14% of the total enrolled participants were active (270).
They also note that amongst participants there was no “common understanding of openness as a characteristic of connectivism” (270), implying that there should have been. But I wonder if a conscious understanding of openness, and the ability to express it as a clear concept, is necessary for a successful connectivist course. This is just a question at this point; I haven’t thought it through carefully. I would at least have liked to see more on why this should be considered a problem, as well as whether the respondents were asked specifically for their views of openness. The responses given in this section of the paper don’t refer to openness at all, making me think perhaps the researchers inferred understandings of openness from one or more of the other things respondents said. That’s not a problem by itself, of course, but one might have gotten different answers by asking participants directly for their views of openness, and those answers might therefore have been more relevant to concluding whether or not participants shared a common understanding of it.
Finally, Mackness et al. argue that some of the barriers noted above also led to problems in regard to participants’ willingness to openly communicate and share work with others: this can be “compromised by lack of clarity about the purpose and nature of the course, lack of moderation in the discussion forums, which would be expected on a traditional course, and the constraints (already discussed in relation to autonomy and diversity) under which participants worked” (272).
Interactivity
There were significant opportunities for interaction, for connecting with others, but the authors note that what is most important is not whether people did connect with others (and how much), but what these connections made possible. Respondents noted some important barriers to connecting, as well as problems that meant some of the interactions did not yield useful benefits. As noted above, some participants pointed to “trolling” behaviour on the forums, and one said there were some “patronising” posts as well, which, the respondent said, likely led some participants to disengage from that mode of connection. Another respondent noted differences in expertise levels that led him/her to disengage when s/he could no longer “understand the issues being discussed” (271).
The researchers conclude that connectivity alone is not sufficient for effective interactivity–which of course makes sense–and that the degree of effective interactivity in CCK08 was not as great as it might have been with more moderation by instructors. However, the size of the course made this unfeasible (272).
One thing I would have liked to have seen in this analysis of “interactivity” is what Downes focuses on for this condition, namely the idea that the kind of interactivity needed is that which promotes emergent knowledge: knowledge that emerges from the interactions of the network as a whole, rather than from individual nodes (explained by Downes here and here, the first of which the authors themselves cite). This is partly because, since they are using Downes’ framework, it would make sense to evaluate the course against the specifics of what he means by “interactivity.” It’s also partly because I just really want to see how one might try to evaluate that form of interactivity.
Conclusion
Mackness et al. conclude that
some constraints and moderation exercised by instructors and/or learners may be necessary for effective learning in a course such as CCK08. These constraints might include light touch moderation to reduce confusion, or firm intervention to prevent negative behaviours which impede learning in the network, and explicit communication of what is unacceptable, to ensure the ‘safety’ of learners. (272)
At the same time, though, they point to the small size of their sample and the need for further studies of these sorts of courses to validate their findings.
That makes sense to me, from the unstudied perspective of someone who has participated in a few large open online courses and one small-ish one, one of which seemed modeled to some degree along connectivist lines (ETMOOC). There was some significant scaffolding in ETMOOC, through starting off with discussions of connected learning and help with blogging and commenting on blogs. There wasn’t clear evidence of the course collaborators moderating discussions (several people collaborated on each two-week topic, acting in the role of “instructors” for a brief time), except insofar as some of them were very actively present on Twitter and in commenting on others’ blogs, being sure to tweet or retweet or bookmark to Diigo or post to Google+ especially helpful or thought-provoking things. We didn’t have any trolling behaviour that I was aware of, and we also didn’t have a discussion forum. But IF there had been problems in the Google+ groups or in Twitter chats, I would have hoped one or more of the collaborators would have actively worked to address them. I think they would have, though since it didn’t happen (to my knowledge) I can’t be certain.
Some further thoughts
If one decides that Downes’ framework is the right one to use for evaluating an open online course like a cMOOC (which I haven’t decided yet; I still need to look more carefully at his arguments for it), it would make sense to unpack the four conditions more carefully and collect participants’ views on whether those specific ways of thinking about autonomy, diversity, openness and interactivity were manifested in the course. The discussion of these four conditions is at times rather vague here. What, more specifically, does learner “autonomy” mean, for example? Even if they don’t want to use Downes’ own views of autonomy, it would be helpful to specify what conception of autonomy they’re working with. I’ve also noted a similar point about interactivity, about which the discussion in the paper is also somewhat vague–what sort of interactivity would have indicated success, exactly, beyond just participants communicating with each other on blogs or forums?
I find it interesting that in his most recent writing on the topic of evaluating cMOOCs (see the longer version attached to this post, and my discussion of this point here (and the helpful comments I’ve gotten on that post!)), Downes argues that it should be some kind of expert in cMOOCs or in one of the fields/topics they cover that evaluates their quality, while here the authors looked to the participants’ experiences. Interesting, because it makes sense to me to actually focus on the experiences of the participants rather than to ask someone who may or may not have taken the course. That is, if one wants to find out if the course was effective for participants.
Still, I can see how some aspects of these conditions might be measured without looking at what participants experienced, or at least in other ways in addition to gathering participants’ subjective evaluations. The degree to which the course is “open,” for example, might have some elements that could be measured beyond or in addition to what participants themselves thought. Insofar as openness involves the course being open, without cost, to anyone with a reliable internet connection, along with the ability to move into and out of the course easily as participants choose, that could be partly a matter of looking at the design and platform of the course itself, as well as at participants’ evaluations of how easy it was to get into and out of the course. If openness also involves the sharing of one’s work, one could look to see how much of that was actually done, as well as ask participants about what they shared, why, and how (and what they did not, and why).
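To make that last point concrete, here is a minimal sketch, in Python, of how one might compute a simple “sharing rate” from course activity records. Everything in it is hypothetical on my part (the record fields, the threshold of “at least one public post,” the toy numbers); the paper doesn’t propose any such measure, and a real study would have to justify its own definitions.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    """Hypothetical activity record for one enrolled participant."""
    name: str
    blog_posts: int   # posts shared publicly during the course
    forum_posts: int  # contributions to the course forums

def sharing_rate(participants: list[Participant]) -> float:
    """Fraction of participants who shared any work publicly.

    "Shared" here is a stand-in definition: at least one blog or forum
    post. Other definitions (comments, tweets, bookmarks) would give
    different numbers.
    """
    if not participants:
        return 0.0
    sharers = [p for p in participants
               if p.blog_posts + p.forum_posts > 0]
    return len(sharers) / len(participants)

# Toy data: 3 of 4 participants shared something.
course = [
    Participant("A", blog_posts=5, forum_posts=2),
    Participant("B", blog_posts=0, forum_posts=1),
    Participant("C", blog_posts=0, forum_posts=0),
    Participant("D", blog_posts=2, forum_posts=0),
]
print(f"Sharing rate: {sharing_rate(course):.0%}")  # -> 75%
```

Of course, such a count would complement rather than replace asking participants what they shared and why; the number by itself says nothing about why someone chose not to share.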
I just find it puzzling that in that recent post Downes doesn’t talk about asking participants about their experiences in a cMOOC at all. I’m not sure why.
[I just read a recent comment on an earlier post, which I haven’t replied to yet, which discusses exactly this point–it makes no sense to leave out student experiences. Should have read and replied to that before finalizing this post!]
Hi Christina – it’s great to see this post :-) I do need to read all the posts again with more time – but just to say that 2008, the date of CCK08 on which this research paper was based, seems rather a long time ago now.
As you have written, autonomy, diversity, interactivity and openness are all complex, open to interpretation and can be difficult to understand. My co-author Carmen Tschofen and I tried to explore this further in our paper
Tschofen, C., & Mackness, J. (2011). Connectivism and dimensions of individual experience. International Review of Research in Open and Distance Learning. Retrieved from http://www.irrodl.org/index.php/irrodl
and since then I have become particularly interested in the relationship between openness and individual identity.
I feel as though I have pushed a lot of my own work at you – sorry! – but it’s great to find someone who is interested in exactly the same topics as me.
I will come back to your posts and read them more carefully – and all the comments.
Thanks for an interesting series of posts.
Jenny
Hi Jenny:
Don’t feel bad at all about giving me citations for more of your work–it is right along the lines of things I’m interested in, so I’m glad to know about it! And I agree–it IS great to find someone interested in and working on the same topics. We will be interacting more in the future, I bet!
I actually knew about your paper with Carmen Tschofen, and it’s next on my list to blog about (in this post I say I’m going to look at another paper that considers these four process conditions from Downes more carefully & critically, and the Tschofen & Mackness 2011 paper is the one!). So I’ll be saying more about that soon (not sure exactly when, as I’m about to go on holiday for a couple of weeks).
I’m working on these issues partly out of interest, as part of a new research project, and partly because I submitted a proposal for the Open Education 2013 conference about evaluating cMOOCs, which got accepted. I now need to work through these issues carefully before preparing my presentation! I would love to contact you with questions or to ask for comments on some things related to your research or evaluating cMOOCs generally, if you don’t mind! I’ve now followed the RSS feed for your blog, so will be keeping up with your work more closely.
Hi Christina – thanks for all the responses. It would be great to talk to you some time. You have my email address now, so I’d be happy to be contacted any time. Hope your paper writing goes well.
Jenny
Hi again Christina,
I’m picking through your blog for additions to my reading list. As always. Again, an interesting and insightful post. I haven’t too much to add here (I feel, frankly, like the last year’s reading has largely, at this stage, cauterized the interior of my skull…) but just a quick note. I can’t remember the reference, but the stats on MOOCs seem to indicate that geographical diversity is typically limited in MOOCs.
Lots of first world students, typically in the US, Europe, and then Australia. Second and third world representation amongst active participants is typically tiny (though it would be interesting to see figures for seminar downloads, which can be ten times larger than seminar participant numbers, to see if there’s a difference here).
From memory, the percentages of third and second world participants are fractions of a percent, or single-digit percentages.
Language is one suggested barrier. Reliable access to terminals and reliable net access are others.
I’m wondering if the diversity claims MOOCs sometimes make need to be heavily qualified, or contextualised. Ultimately, I’m wondering what the diversity claims actually mean.
I’m guessing, from my reading, that the huge bulk of participants are, with few exceptions, typically American, with the remainder overwhelmingly native English speakers (for English-language MOOCs), and that the percentage representation amongst active participants not from the US, Canada, Europe, Australia or New Zealand is typically in single digits, or fractions of a percent. That said, I’d need to take a closer look at that.
Re the lack of assessment quote from the study, perhaps that’s just phrasing. Downes, for example, seems quite open about the lack of tutor assessment, and the tutor-to-participant ratio makes this, I assume, a fairly standard stance amongst organisers.
Hi Keith:
I have heard that too, about the overwhelming number of participants in many English-language MOOCs being from the US, Europe and Australia/NZ. I don’t remember where I saw that, and I expect it differs depending on the topic of the course. Of course, it is not surprising that most participants would be from places where English is a commonly spoken language, if the MOOC is conducted in English. So yes, I’d agree with you that the diversity of participants may not be terribly great. And it would be best, I think, if it could be increased: if there could be more participants from different parts of the world, different cultures, different backgrounds. But language and technological differences may make this difficult.
You’re probably right about the quote just being a matter of phrasing. But I couldn’t figure out what their point was supposed to be, which was a problem. And yes, many cMOOCs don’t have formal assessment, because the point is not to teach a certain amount of content or skills and then assess if people have learned those. I didn’t understand if the authors thought that the lack of assessment was a problem or not, or if they thought some of the participants were suggesting it as such.
Going back to diversity, it’s not obvious what sorts of diversity it would be best to have in a MOOC; that depends on what purpose one thinks diversity should serve. For Downes, from what I read so far, it’s a matter of diversity being necessary to generate new knowledge. But it seems many different kinds of diversity could do that, in different ways. I still need to read/write more about what Downes thinks about how new knowledge is generated in networks to figure out whether some kinds of diversity would be better at that than others.
Hi Christina and Keith – I’m interested in your discussion about diversity. I agree with Keith that diversity needs to be qualified. Yes – there is a dominance of participants from the US, Europe and Australia – but it’s not so long ago that even that would have been a real achievement – so whilst we will hopefully be moving towards a more globally represented participant group in MOOCs – and MOOC conveners should be making every effort to support this – I think we should not forget the enormous progress that has already been made.
Another interesting point about diversity is that it is likely to drop off in every respect (i.e. participants, resources, discussion perspectives etc.) as the MOOC progresses and engaged participant numbers decrease – which seems to be the current pattern in every MOOC. It would be interesting to compare diversity at the beginning of a MOOC with diversity at the end of a MOOC.
Re assessment and autonomy (and trying to remember back as far as 2008 and what I was thinking then), I think the point we were trying to make was that assessment is necessarily a constraint on autonomy, because the participant has to follow a given path, i.e. the assessment requirements, if they want to receive the qualification. It’s a rare student who will value their autonomy over the qualification, but I do remember one in my career who, when I told him that his marks would be much higher if he wrote his assignments in line with the assessment criteria, said that he wasn’t interested in getting high marks, he was more interested in following his learning interests – or words to that effect. That was a real wake-up call for me!
Jenny
Hi Jenny:
Good point about diversity dropping off as a MOOC goes on, along with participants. After a while it’s mainly going to be the same kinds of people, probably, at least in the respect that they are the kinds who enjoy learning and connecting to others in that kind of environment. I remember there were people in etmooc who mentioned early on that they felt there was not a lot of space for disagreement, or for people with different views, and even though I tried to say that I hoped they would continue to speak because I wanted to hear those different views, the ones I’m thinking of dropped away. Feeling like you’re alone in a group of people who all agree on certain things that you disagree on is likely alienating and not terribly engaging (except for a few who enjoy continuing to try to make their case). Most of us who made it all the way through etmooc are real cheerleaders for it, but every once in a while someone will tweet about how etmooc just didn’t work for them. And, of course, they didn’t stay in past the first couple of weeks. Not that staying in has to be the desideratum; etmooc was clear that people were welcome to drop in and out as they wished. The organizers tried to structure it specifically so that it would be easy to come in when the topic changed every two weeks, and not feel like you had missed anything crucial. But of course, connections that last past the course are built up over time, so people who came in for two weeks and left, for example, won’t have as many lasting connections.
Thanks for the clarification about assessment and autonomy! This interpretation makes complete sense, though I didn’t get it when I first read the article. I feel exactly the same way myself, as an adult learner who needs neither grades nor credits (a very different situation than my students’!). I took a MOOC after etmooc that was much more structured in terms of assignments and instructions as to exactly what should be done. There were no assessments (except a way to earn three possible badges, but that was just by writing blog posts of a certain length on certain topics), but I still felt constrained, even though I KNEW I didn’t have to follow the directions. That they were there, and that others in the course were following them, was enough to make me feel pressure to do the same. And I was less engaged in that course, in large part because I was writing blog posts based on what someone else told me to write, rather than on what I was most interested in. So for me, the structure was a barrier not only to autonomy but also to engagement!
The point I would make about diversity – and I agree that the MOOCs we hosted were not ideally diverse – is that diversity is a criterion for assessment, and thus, our lack of diversity is a point of criticism, an element in our MOOCs that needs to be improved.
I would never claim that the CCK08-type MOOCs were perfect; my major claim is that these four criteria – autonomy, diversity, openness and interactivity – were the design criteria, the ideals we sought to instantiate, however imperfectly.
Thanks for your comment, Stephen! I’m not actually sure how to improve the diversity of cMOOCs from what exists so far. Perhaps by raising awareness of the MOOC in places/groups/disciplines/fields that are not often represented in MOOCs so far?