Contrasting the xMOOC and the … ds106 (#h817open, Activity 14)

For week four of the Open University course on Open Education, we were asked to compare MOOC models: either ds106 or the Change MOOC with something from Coursera or Udacity, focusing on “technology, pedagogy, and general approach and philosophy.”

I decided to go ahead and do this activity (though I’m not doing all of them for the course) because I really want to get a better sense of ds106. Plus, though I’ve explored Coursera a fair bit, and even signed up for one of their courses to see what it’s like being a participant, I haven’t looked at Udacity at all. While I kind of don’t care if I look at Udacity, this activity is a good excuse to look at ds106, which I do care about, and, well, I’ll at least know a bit more about Udacity in case that ever comes in handy.


“DS” stands for digital storytelling, and this course began in 2010, started by Jim Groom at the University of Mary Washington. It still has students registered officially at UMW, and there are sections at other campuses as well (see “other Spring 2013 courses” at the top of the ds106 site). In addition, it has, well, I have no idea how many, other online participants taking part in some or all of the course. (There are over 150 blogs listed in the “open online participants” section, but that may not be the same as the number of people who are actually participating. And that doesn’t count the on-campus students.)

One thing that stands out about ds106, among many others, is that while it’s a course that has specific beginning and end times for on-campus participants, it explicitly invites anyone to drop in anytime they like and stay for as long (or as short) as they like. Some people may be participating in a fairly in-depth way, by setting up blogs that are syndicated on the site, while others may just do a few assignments here and there (thus, the near-impossibility of figuring out how many people are actually “participating” at any given time).

Ways of participating in ds106 (for open online participants)

1. The daily create: a low-key, low-commitment, super fun way to participate. Every day there is a new suggestion to create something, and anyone can do one or more of these and add them to the collection. The daily create site explains:

The daily create provides a space for regular practice of spontaneous creativity through challenges published every day. Each assignment should take no more than 15-20 minutes. There are no registrations, no prizes, just a community of people producing art daily.

For example, today’s daily create (April 21, 2013) is: “Take a photograph of something you must see everyday. Make it look like something else!” Once it’s done you simply upload it to Flickr with some specific tags, and voilà, it shows up on the daily create site (well, barring some technical hiccups and such). You can also search Flickr for the specific tag for today and find all the creations. Utterly cool.

I decided to do the Daily Creates for April 21 and 22, and had much fun with them. You can see my photos here and here. (I’ve got a lot of work to do on the “creative” end of things.)

2. Do some assignments from the “open assignment bank.” According to the “about” page for the assignments, they are all created by ds106 students. Those who are taking the course in a formal sense on a campus don’t need to do all the same assignments–they can pick and choose in order to put together those that will equal a certain number of “points” for a topic in the course. And anyone can do any one or more of the assignments, anytime they like. One can either do them on one’s blog and register it with the blog aggregator, or upload them to the site directly.

3. Don’t just do the assignments, write about them in a blog. Tell a story about why you chose that assignment, the context of what you created, and how you did it so others can see the process. Then, connect your blog to the ds106 hub so it shows up here. Further, read some posts from others’ blogs and comment. Build community.

4. Follow along with an on-campus course. You could look at the posts from a particular on-campus course (see top menu of ds106 site) and do similar topics as they are, and comment on their blogs/assignments.

This is all in addition to following ds106 on Twitter through the #ds106 hashtag.

And really, what other “course” has its own radio station? The most amazing thing about it is that it’s open to anyone to broadcast on, so far as I can tell. Well, anyone who can figure out how to do it. Find out what’s on by following @ds106radio or the #ds106radio hashtag on Twitter.

And there’s a “tv” station too, though I’m not sure how it works. I just know I got a tweet about an upcoming presentation, and when I clicked on the tv station site I could watch the presentation. Seems to be an option for live chat, too. You can follow @ds106tv or the #ds106tv hashtag on Twitter.

Udacity: “Elementary Statistics”

At some point I need to learn some statistics for my work in the Scholarship of Teaching and Learning. So I decided to take a look at Udacity’s “Elementary Statistics” course, for possibly doing it later. 

[Image: Hertzsprung-Russell Diagram, flickr photo licensed CC-BY, shared by Arenamontanus]

General observations

Starting off with the main Udacity “How it works” page, I find something suspicious:

The lecture is dead
Bite-sized videos make learning fun

My experience with Coursera was that the traditional, hour-or-so-long lecture format seemed to just be cut up into shorter pieces, with a talking head talking for, if I remember correctly, 10-15 minutes at a time, interspersed with quizzes or other activities. We were still supposed to watch all the pieces. That’s not what I’d call killing the lecture: a lecture is still a lecture, no matter how short it is. This point has been made countless times before (here is just one example, from the excellent “More or Less Bunk” blog by Jonathan Rees). The lecture is dead. Long live the (mini) lecture. So I’m right away wondering whether Udacity is going to be any different on this point.

And I really, really don’t like the “branding” they do: they call us “Udacians” (Coursera calls participants “Courserians”), and they have their own new word–see, e.g., here. Yuck. It really puts me off. I don’t mind the sense of identity I got through doing ETMOOC, a sense of community, of belonging to something. I think it’s because the latter was developed over time, rather than foisted upon people when they start; with Udacity I feel like I’m being told I’m part of a community in order to put me into a feeling of caring about the company, rather than letting that feeling develop over time (if at all).

About the course front page: I hate the fact that I have to actually enroll to see how the course works (unlike ds106, in which all elements are out there for anyone to see and start doing). No wonder these kinds of MOOCs have such large enrollment numbers. You have to enroll just to see the thing in the first place.

Why do they require a registration before you can get a real sense of a course? At the very least, they can keep track of people that way to send them marketing materials. And they can gather a bunch of data about participants–all one’s courses, all one’s work inside those courses, can be tracked if they can attach work in the course to specific people. Which makes me wonder: what is that data being used for, exactly? The privacy policy doesn’t answer that question fully:

We use the Personally Identifiable Information that we collect from you when you participate in an online course through the Website for managing and processing purposes, including but not limited to tracking attendance, progress and completion of an online course.

But what do they do with the information about progress in courses, besides store it so you can go back to the course later and see how much you’ve done, or use it to issue certificates? Well, here’s one answer: it’s being used to make money. Udacity and similar companies can identify students who might be good matches for employers, and the employers can pay for the service.

But I wonder if any of this data could be used to provide useful information on online teaching and learning. Maybe, maybe not, but we may never know unless researchers can get access to the data. (Mike Caulfield explains here that institutions that are partnered with Coursera can get at least some data, but I don’t know what Udacity’s policies are in this regard.)

Nor do I expect that I, as a participant, will have detailed access to my data, because I don’t own it; they do–a problem discussed by Audrey Watters, here (and in a great presentation for ETMOOC, linked here).

I decide to register for an account and take a deeper look at the course–because really, I want to see how they killed the lecture.

Starting the course

The course goes right away into a short (under 2 min) introductory video, and I pretty quickly get the hang of how this course works: very short videos (0-2 mins, some 30-45 secs long) followed by quick quiz questions (multiple-choice, fill in the blank, that sort of machine-gradable thing), back and forth for each “lesson” (though some video segments don’t have quiz questions attached). At the end of each lesson there is a problem set. And so it goes, for 12 lessons.

One nice thing is that there is a link to forum questions connected to each of the short videos, because if you go to the main forum page, you just get a bunch of discussions that aren’t clearly organized by topic or lesson. You can organize them by tags, but you have to know what the tags are to do a search on them. Another nice thing is that for each video you can click on the “ask a question” button, and it automatically adds the right tags for you for that particular video segment.

I skipped ahead to the first problem set and tried to do some of them, just to see what they’re like. All multiple choice, and like the “quizzes” in the lessons, you are told right away if your answer is right or wrong. In the quizzes you can just skip ahead to the answer if you can’t figure it out; not so in the problem sets. You have to keep trying until you get it right (a process of elimination, in many cases) or just skip the question. Or, you can always take a look at the discussion forums, where I found that sometimes someone had helpfully posted the answers.

Apparently there will be a final exam, but it won’t be ready online until May (not all the lessons are ready yet, either).

Is the lecture dead?

Yes and no.

The course does a great job of mixing lecture with participant activities, such as short quizzes to apply what’s just been said, or sending you to third-party sites to do activities there. In the first lesson, they sent us to do a face memory test from the BBC, and then asked us to put our scores into a Google form. Much of the rest of the first lesson referred back to this test and how one might think about the data generated by it. That’s a nice way to use an example for a stats lesson.

I didn’t make it all the way to the end of the first lesson, but if I had, I might see what they are actually doing with the data generated by student participants who take the test and upload their scores into the Google form. What’s it being used for? I think it’s uploaded anonymously, but I’m not sure because you access the form through the course interface itself. Hmmmmm.

[And if my BBC face test data was connected to my personally identifiable information, then I should have had to fill out a consent form for it to be used, right? Might they have gotten ethics approval to collect such data? Or maybe they don’t need to? The important thing here is that none of these questions are answered, even the question of whether my Google form data had identifiable information on it. I just don’t know.]

The videos still contain lectures, but they are so short as to hardly seem such; often there is a quiz every 30 secs to 1 minute (sometimes longer, but not much). So there is a good deal of participant activity going on as well (one might even call it a form of “active learning”). And the videos for this course are (mostly) not face shots of instructors talking, but rather some kind of digital whiteboard with text and diagrams.

One could say these aren’t like lectures because they are so interspersed with participants having to do something. But the pedagogical approach that underpins lecturing is still in evidence, namely the knowledge/information transmission approach (more on this, below). So in some sense, there are still lectures here; they are just very, very short.

I tend to think there’s nothing wrong with having some lecturing going on here and there, though I’m also rather drawn to Jacques Rancière’s The Ignorant Schoolmaster, which can be read as suggesting that one ought not to act as an expert and engage in explaining things to learners at all (see, e.g., the section on “Emancipatory Method” here, and the nice summary by my colleague Jon Beasley-Murray here, along with a critique I have to think about further).

I expect Udacity means that “the hour-long lecture, without participant activities to break it up” is dead (which, of course, it’s not, but that’s another matter). But the “expert” as transmitter of knowledge to be grasped, and the “learner” as taking on that knowledge in exactly the same way as the expert, is not.


The most striking difference in terms of technology is this. For the Udacity course, there is some pretty heavy technological investment going into the production of the course. The videos are not just recordings of professors talking, but often of a digital board that one of the instructors writes on with a stylus, in different colours. The video switches fairly seamlessly into a quiz: the quiz looks just like what was last seen on the video, but when you move to it, click boxes suddenly appear and you’re in interactive mode. The technological structure of the course may not be terribly complicated (what do I know about such things? pretty much nothing), but my point is that the main technological investment is happening at the “course” side.

What’s different about ds106 is that the participants themselves create things with technology, with software and applications, rather than being consumers of such products produced by those in charge of the course. Instead of just passively interacting with things made by others, ds106 participants learn how to use technology to create their own artifacts. Just a quick glance at the Assignment Bank or The Daily Create shows that course participation is heavily focused on making things rather than (only) taking in knowledge from others. As does the fact that all the assignments (and at least some, or many, of The Daily Creates) are created by course participants.

Pedagogy and philosophy

Making and replicating

The above point about different uses of technology in the Udacity course vs. ds106 reminds me of some things George Siemens said about the difference between “xMOOCs,” like those from Udacity and Coursera, and “cMOOCs,” or connectivist MOOCs, like ETMOOC and Change 11 (I discuss some of the differences in an earlier blog post). He states here that

Our MOOC model [cMOOC] emphasizes creation, creativity, autonomy, and social networked learning. The Coursera model emphasizes a more traditional learning approach through video presentations and short quizzes and testing. Put another way, cMOOCs focus on knowledge creation and generation whereas xMOOCs focus on knowledge duplication.

Is ds106 a cMOOC? It does have the focus on creating over duplication. Alan Levine argues that it’s not a MOOC at all:

To me, all other MOOCs, be they x or c type, sis [sic] to create the same content/curriculum for everyone in the course- they all do the same tasks. And to be honest, the framing points are actually weekly lectures, be they videos spawned out of xMOOCs or webinars. The instruction in these modes are teacher centric (even if people can banter in chat boxes).

Should we say that’s the definitive answer to the question? I don’t know, and really, it doesn’t matter in the end. But Levine has a point about other open online courses being more focused on weekly presentations (ETMOOC was like this) and having the same general topics for all each week, even if there aren’t always common assignments given to everyone (there weren’t in ETMOOC). ETMOOC was also more of a set “event” happening at a certain time (though, thankfully, many of us are continuing to think and discuss and work together afterwards on a “blog reading group”). ds106 is even less structured than that, being something one can participate in anytime, an ongoing community more than a course–except for those who are taking it as part of an official educational program, that is.

The Udacity course on statistics definitely holds to a model of knowledge duplication, in which participants learn things from experts and duplicate that knowledge on quizzes and problem sets. This is not surprising, given the topic, and not really a problem, given the topic. I found it more problematic when looking at a Coursera course on critical thinking and argumentation.

For all that, though, the Udacity course doesn’t encourage passivity in participants; one is continually doing things with the information being presented, instead of mainly watching or listening. It’s just that one isn’t really making or creating new artifacts, new knowledge in these activities, things to be contributed back to the community of learners. Except, of course, on the discussion forums, which are not really integral to the course. You can go to them if you have a question, or want to see answers to others’ questions, or want to answer others’ questions, but I think you can do the whole course without ever going to the forums.


I’m not familiar enough with educational theories to be able to say much of anything scholarly here, so I’ll just make a couple of quick observations that risk being so general as to caricature the approaches in these two online experiences.

In the Udacity course, the philosophical approach has already been stated above: a kind of expert-transmission model. The instructors are experts who should explain the topics in a way that will work for the most participants possible. There can be no adjustment in the instruction for different participants, as it must necessarily be the same for all in the main presentations and quizzes (though it can be altered over time, if evidence suggests a need for it). The assumption has to be that there can be a way to reach at least a good portion of a mass audience of learners, through clarity of presentation and testing of understanding along the way. If this doesn’t work for some people, they can hopefully get help through the forums (which have contributions from both participants and, at times, the instructors).

The learning experience is, from what I experienced in the first lesson and problem set, entirely instructor-directed, with the participants going through an already-set and -structured path through the course. It is possible to earn a certificate for a course, according to the FAQ page, if you complete a certain number of “mastery questions” correctly and thus achieve at least a certain “mastery level.” In this case, “mastery” means being able to replicate the knowledge one has ingested.

ds106, by contrast, (at least for open, online participants) is participant-directed rather than instructor-directed. Participants decide what they want to do, and when. There is no indication that one ought to follow a pre-set path through the course, nor that one should try to work through most or all of the topics.

The instructors in ds106 are not acting as “experts” for the open participants. There is nothing in the way of information being given to participants that they must somehow return back in the same form. There is only the ds106 handbook, which provides advice and tips for using digital tools as well as for blogging about one’s artifacts, but participants then create their own artifacts and knowledge with those tools. Indeed, the “experts” in ds106 are not the instructors, at least for the open participants–it’s really the other students. They are the ones producing the artifacts, creating the assignments, and commenting on each others’ work and blogs.


It’s no secret on this blog that I prefer the “cMOOC” structure to the xMOOC one. Generally I prefer providing students with more freedom to investigate things they find to be engaging and valuable than to tell them exactly what they should do in order to “learn.” (Though my reservations about rhizomatic learning are also relevant here).

So it would probably seem that I’d prefer ds106 to the Udacity course. Which I do. I really appreciate that the “course” is about what participants can create rather than about what experts have to tell them.

But there are just some things that lend themselves reasonably well to the expert, knowledge-dissemination model, like basic statistics. That’s not to say that I don’t think participants can add important critical and creative knowledge to the field of stats, but at the start, one has to just grasp some of the basic concepts in order to understand the field well enough to do so. Or at least, to talk to others in the field about one’s ideas. And Udacity does a fairly good job of that, from what I’ve seen.

I expect I’d have a different response to a Udacity-type course in philosophy, however.



  1. Worth the ‘lengthy’ read, Christina ;-)
    Seeing the instructivist approach so deftly used (in a Coursera course or two) with frequent learner activities interwoven challenged my view of this style too so it was good to see a more nuanced response from yourself as well. Within more participant-constructed cMOOC style learning, do you feel you always get sufficient feedback that challenges your learning and pushes the quality and depth of your work?

    1. Hi Paige:

      I have only had experience with one cMOOC so far (ETMOOC), so my evidence is pretty limited. But there, I did feel I got feedback that challenged me. The feedback consisted mostly of comments on my blog, or responses to my comments on others’ blogs, but many of those were really excellent. There was some of what I’d call feedback in Twitter chats, too–I have been amazed at how in-depth one can get during a chat on Twitter. It has to have a lot of back and forth, of course, since each Tweet is so short, but I’ve found myself pushed and challenged a fair bit by some people on Twitter. Certainly it depends on who is part of the cMOOC, how active they are, how committed to thinking through the issues & engaging others in conversation. But in my limited experience so far, it has worked.

  2. Thanks for taking the detailed look at ds106, Christina, I noted your enthusiasm in the Daily Creates. While it is a completely wide open experience, I do realize there is not a whole lot of general direction for an open participant, and it is much less of a course experience. I had some hopes of creating a sort of “Build Your Own Syllabus” experience out of all the resources, but teaching my class was all consuming, and I am really not being paid to build out more resources (except as a labor of joy).

    It’s less of a pull-off-the-shelf experience, and hopefully one more full of ideas that are outside the lockstep of many of the xMOOCs.

    1. Hi Alan:

      Enthusiasm, for certain. In many respects–I love the fact that people can just jump in whenever they want and do as much as they want. I have been doing a few daily creates, and already connected with some ds106 people on twitter, which is fantastic. I see your point about maybe wanting to add in more direction in case anyone might want it, and the “build your own syllabus” idea is a nice one. Of course, you know, it doesn’t have to be you that does the resource-building; anyone who thinks this is a good idea could do it! Which I do. But I, too, am up to my ears in so many projects I said yes to already.

      I like the fact that it’s not a pull off the shelf experience, and more of an experiment. That’s what’s drawing me in. So glad I’ve heard about it (through your participation in ETMOOC, if I remember correctly), and taken a closer look!

  3. Lots to think about here.


    Courses which inform me that x is dead, and that anything bite sized is fun are automatically going to evoke suspicion in me. “Bite sized learning is fun!” Eurgh.

    I get to decide what I think is fun. In my case, a semi-obsessive, otaku-like engagement with a topic where I eat, sleep, drink, think, walk, talk and academically stalk the subject is fun. Bite sized learning is a waste of my time. Being told I will enjoy it smacks of those holiday camps where attendants’ faces were frozen into a rictus grin of enthusiasm.

    I agree with a huge amount of what you are posting here. The etmooc community-building aspect is so key to motivation, support, resources, and individual and group learning success, and it gives concrete yields in terms of how much effort people put in, for themselves and for others. It’s organic, it’s a function of authentic engagement, and it’s an expression of, and discovery of, authentic participant identity.

    Absolutely key to aspects of motivation, and the Udacian version sounds, like, well, we’re back staring at the soulless rictus grin of the holiday camp attendants.

    The lecture isn’t dead. And the Udacity cheerleading really sounds like “The lecture is dead. Now, here’s a video of an expert talking. Long live the lecture!”

    All that said, good MCQs can be useful in environments where resources are stretched. They work well as Advance Organisers, and they can work extremely well where the focus is not on “right or wrong” but on providing opportunities for further learning. A golden rule of MCQ feedback is never have a response that doesn’t contain meaningful information, and design both question and feedback to provide learning opportunities. MCQs are not a terminal point, but they can provide useful waypoints, especially in resource-thin environments. But a “try until you get it right” MCQ ticks none of those boxes. It encourages random pressing, and is profoundly discouraging and demotivational.

    When someone gets something wrong in an MCQ, they should know why, and they should probably have to do something…feedback should be formative, the question should have…well. I could go on and on.

    Not providing access to the overall course picture is just bizarre, and a little perverse. Course descriptors are necessary things. A person’s prior knowledge is key to their assimilation of new knowledge, and their sense of course achievability – informed by knowing what’s in the course, and what they’ll need to do – is key to their success.

    That said, etmooc had shortfalls here. Good course descriptors give information on what will be covered, but also on the base skills required. I think etmooc had good transparency, but they missed a part of the descriptor that is key. The part that often begins “participants will be expected to…” and goes on to detail specific detailed skills requirements, as well as options to upskill in advance. Telling your students, in detail, what you expect of them allows them to prepare, and be equal to those expectations – a key point that helps avoid people drowning in the early days.

    I think your end point is key. The course type needs to be responsive, and situated in the topic, level and students that are taking it. And maybe that brings us to the elephant in the room regarding all MOOCs. Participation and completion rates – which neither cMOOCs nor xMOOCs generally do a good job of engaging meaningfully with.

    You mention the low completion rate in xMOOCs, and there may be multiple reasons for that. Some of those reasons may even be quite good. Some work has been done, but more needs to be done before we can have a meaningful conversation about it.

    But cMOOCs have extremely low participation rates. Active, core participants numbering 50 or so from a cohort of 1500 seems not unusual. That’s about 3.5%. Admittedly, measuring aspects of this in cMOOCs is more difficult than in xMOOCs on an LMS, but where it can be measured – blog posts, tweets, Google+ posts – it seems that participation runs at a rate similar to xMOOC completion rates, or, at times, a little lower.

    Re George Siemens’s quote on knowledge creation. I’m not sure that cMOOCs, innately, do focus on knowledge creation. Connectivist theory is full of holes (behaviourism, cognitivism, and constructivism are outmoded; chaos theory; pedagogically inflexible in the face of novice learners), some of them patched by excellent organisers, but, at its heart, it’s patchy at best. The unstated requirements – levels of digital literacy, established strategies for autonomous learning, technical and social nous, information sifting and aggregation strategies – work well for some people, but are probably problematic for others. A quick rule of thumb, backed by evidence, is that you should give your novices a set of train tracks to follow, and your experts should probably be free.

    As with so many things, an either/or solution is probably not where progress lies. An xMOOC approach can help target specific learners who need structure, a cMOOC approach can allow people who have mastery to express their own learning and exploration. It seems likely an educational experience could benefit from both.

    1. Hi Keith:
      Thank you for another very thoughtful comment. At first I puzzled over MCQs, but the internet saved me: Multiple Choice Questions, for anyone else who might not know what it means.

      I like your points about MCQs…I haven’t thought about using them for that sort of purpose. In fact, I have pretty much ignored using them altogether. Too many bad experiences. Could you explain more or give an example of how they could be used as wayfinders, or providing opportunities for further learning? And yes, random clicking until I got it right was extremely not useful and demotivating. Exactly.

      I agree that etmooc could have used some improvement in the way you suggest, perhaps by making the “advice and tutorials” page more prominent. It had a fair bit of what you’re talking about, but I think many people missed it. It didn’t do all of what you suggest, though.

      I agree that it would be good to do more research on completion rates, esp. in cMOOCs, which are less prominent and so might be less researched than xMOOCs. But one thing to consider is the value people are getting out of low participation. There’s a good discussion to be had about the potential value of lurking, at least for the participant him/herself (can’t contribute much to others if you’re only or mostly lurking). And etmooc, and maybe other cMOOCs (I don’t know b/c haven’t looked at this) was explicitly designed to allow for people to drop in and out, so that there wasn’t an expectation of completion, necessarily. So maybe looking at participation rates within certain topics would be better, rather than over a whole course.

      I am going to look more carefully at connectivism soon. I have read a lot about it, but haven’t digested it yet, which means, for me, writing about it. So a blog post is coming on that at some point. My initial reaction is similar to yours, except I do think that knowledge creation may be a goal of cMOOCs, even if it’s not met. And how to judge whether that’s met or not is probably very, very difficult, which is another concern.

      Ultimately, I agree with you on both cMOOCs & xMOOCs being useful, for different contexts. For stats, an xMOOC will probably work just fine. Maybe I’ll even work my way through the Udacity one, though it would be entirely on my own rather than with a set course schedule, synchronously with others. Which doesn’t work quite as well for me, as I’m not internally motivated to learn it, but rather just feel I must for my other work. So having the push of a synchronous course would help. The stats course at Carnegie Mellon is asynchronous too. Maybe I’ll look at one at Coursera.

      1. Michael Seery used MCQs as Advance Organisers for students on his course.

        He teaches chemistry, and noticed first years with no prior knowledge of chemistry were testing on average 28% lower than ones with prior knowledge on his course.

        So he set up an Advance Organiser resource (something which invokes prior knowledge, or adds to it in advance of instruction – a reading list, homework that feeds into the next class, research, a short discussion on the topic before class…). Basically, a set of short lessons, tailored to introduce the basic ideas and jargon of a lecture in advance of the lecture. Students engaged with the resource, usually a presentation, and answered an MCQ at the end.

        Each MCQ had meaningful feedback, and incorrect feedback was designed so it meaningfully addressed the reason why students had got it wrong. Not used as a test, but as a further teaching opportunity.

        Students accessed the resource regularly, and the lectures tied directly in, sometimes referencing the resource, sometimes springboarding into it via student discussions.

        The gap narrowed to 6%.

        He invokes cognitive load theory as the reason – novice learners had too much to take in during lectures, and the Advance Organiser lowered their cognitive load so they could learn more efficiently.

        The paper isn’t available yet, I think…but I have a copy if you would like to read it.

        1. That sounds like an excellent use of MCQs–thanks for explaining. I suppose that sort of thing could be used in advance of any class lecture/discussion, really. My concern would be adding to students’ workload, though. Would have to be careful not to make it too much, or reduce other things we’re asking them to do outside of class.
