A feed through a feed led me to this interesting blog post. It caught my attention because: 1) lecture capture is a hot topic these days (UBC included), and 2) it makes some pretty strong claims, including:
6. Improves pass rate
7. Recording lectures improves bad lectures!
BTW I don’t disagree with or contest these (or the other) findings in the report. But what I do find problematic is that the presentation in question (available here on SlideShare) doesn’t include the sort of critical research data that tells us how credible the claims are. Sample size? A prospective (or at least retrospective) power calculation? Statistical measures for the claims (more specifically, the claim that “They would prefer for all courses to be recorded”)?
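For readers unfamiliar with the term, a power calculation answers a concrete question: how many students would a study need before a claim like “improves pass rate” can be distinguished from noise? Here is a minimal sketch in Python using statsmodels, with entirely hypothetical pass rates (none of these numbers come from the presentation):

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical pass rates: 75% without recorded lectures, 85% with.
# Illustrative numbers only, not figures from the presentation.
effect = proportion_effectsize(0.85, 0.75)  # Cohen's h for two proportions

# Two-sample test of proportions: students needed per group to detect
# that difference at the conventional alpha = 0.05 with 80% power.
n_per_group = NormalIndPower().solve_power(effect_size=effect,
                                           alpha=0.05, power=0.8)
print(f"Students needed per group: {n_per_group:.0f}")  # ~124
```

Even a rough figure like this would let readers judge whether the cohorts behind the slides were large enough to support the conclusions drawn from them.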
I suspect that some, probably all, of these questions can be answered, and in a manner that confirms the reliability of the findings. Which is why they should’ve been included in the original presentation. But already someone else has fed the results through their own filter, and the “they” is now generalized to “all students” rather than the students at one particular institution (a technical one, which may account for “no major technical problems”).
This example is a rather benign one… but what about similar reports that make specious claims? We can and should do better. Our work is of a high enough standard to stand up to scrutiny. Or at least it should be…