Quant. Creep.

We live in a world that values stories, yet gives more credibility to statistics. This is not my preference; it is my experience. Government, in particular, likes information packaged in consumable, numeric bundles. Some pretty graphics help too.

One of the side effects of this is a discursive skew towards positivistic language when working within a qualitative (rather than quantitative) research paradigm. In fact, a surprising amount of the qualitative research that is published couches its findings in this sort of (ostensibly) scientific language, presumably to give the results more credibility.

In fact, though, the opposite can result, particularly among those who understand the difference. About 87 per cent of the time, 19 times out of 20. 😉

Paradigmatic

Social research falls into two broad paradigms: quantitative research (using statistical analyses) and qualitative research (using narrative accounts: stories, in other words). To drill down a bit deeper, quantitative research falls into two common categories: correlational design (examining relationships between many variables, usually collected via surveys) or quasi-experimental design (testing hypotheses made prior to data collection in a quasi-controlled setting). For quantitative research findings to be considered reliable, the methodology has to be sound, and the sample size large. These samples can be random (from an entire population…hard to do) or purposeful (as many persons as you can get from a population, but not random). The nature of the sample determines what statistical measures are valid. But once you’ve got your data, you click your computer to run your stats and get a very quick answer (well, it’s rarely that tidy, but that’s the gist of it).
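To make the “click your computer” bit concrete, here is a minimal sketch of a correlational analysis in Python. The variable names (hours_of_homework, test_score) and the numbers are invented for illustration only; they are not drawn from any real study.

    # Hypothetical example: correlating two survey variables.
    # All data below are made up for illustration.
    from scipy.stats import pearsonr

    hours_of_homework = [2, 5, 1, 4, 3, 6, 2, 5]          # self-reported hours per week
    test_score = [61, 78, 55, 70, 66, 85, 60, 80]         # per cent on a common test

    # Pearson correlation: strength of the linear relationship, plus a p-value.
    r, p = pearsonr(hours_of_homework, test_score)
    print(f"r = {r:.2f}, p = {p:.4f}")

The quick answer arrives in seconds; whether the sample was large enough, and drawn carefully enough, for that answer to mean anything is the part the computer can’t tell you.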

If anyone claims to be doing social research using an experimental design, they mean quasi-experimental: humans, unlike lab animals, cannot be raised in isolation from their peers in the pursuit of science. We are shaped by our surroundings: any claims to “control”, say, two “identical” classrooms in two “identical” schools are nonsense. We don’t leave our experiences in life at the door; they come with us. They are us.

Within qualitative research there is ethnography (observational field work and interviews), semi-structured interviewing, and action research (practitioners researching their own practices, individually or collectively). Sample sizes, in terms of conducting interviews, tend to be small. Interviewing continues either until one starts getting similar responses (saturation, or convergence), for as long and as frequently as time and resources allow, or until it becomes clear saturation won’t happen. Qualitative data are analyzed iteratively and (ideally) collaboratively. Because of the relatively small sample size, it’s important to triangulate one’s findings via one or more additional sources. Interviewing experts in the field or comparing one’s findings against census or public health data are two ways to triangulate.

If one elects to combine correlational design and semi-structured interviewing, it’s called mixed methods.

All social research is informed by solid research training; much of it is also informed by social theories (some call them “grand” theories) that endeavour to explain at a macro level how humans live. Some of the heavy hitters in social theory include Linda Tuhiwai Smith, Edward Said, Pierre Bourdieu, Max Weber, Judith Butler and Etienne Wenger.

Quant. Creep

Qualitative research endeavours to explain a local phenomenon, make sense of it, and use that analysis to advance our understanding of the human experience. Quantitative research examines precise aspects of human experience, looking for relationships (predictive ones) that help us understand the dynamics (personal, interpersonal, societal) that impact how we live our lives. When data are analyzed for a quantitative study, a researcher can make strong, generalizable claims from her findings. Qualitative researchers instead describe the context of the study and identify ways in which the findings might be transferable to other contexts.

Mass media, particularly advertising, muddies the waters even further. “Studies have shown,” “four out of five dentists,” and “clinically proven” have become catchphrases rather than the precise language of research. How many studies? 80 per cent of dentists, or four out of the five you spoke to? In what sort of clinical setting, involving how many patients located at how many sites? These phrases sound substantive. Impressive. Meaningful.

And hence the quantitative creep. We are socialized into believing these are the sorts of words to use when we want to describe something as important. An action research study by five physical education teachers in a rural Canadian public school district could provide findings that are of value to many, many PE teachers. But it’s not a study of all PE teachers, or even a large sample of PE teachers. So it cannot tell us anything about how “PE teachers,” in a general sense, do, feel, or experience anything.

Seriously, it’s true. r = .92, p < .0001. 😉
