Neuromyths in Education: Why do they persist?

In a post last year we discussed the lack of evidence-based practice in education, and during some recent professional development sessions and conversations at my university this came to mind again, as some of the ideas taken for granted in higher education have very little supporting evidence. Indeed, education seems to be an area where some of the so-called "neuromyths" persist, and are even championed.

I came up against this when challenging some of the constructivist and postmodern educational ideas being discussed by a PhD student and a senior member of faculty in education. I was told "I didn't realize people actually still thought that sort of thing", as if I were some sort of Luddite dinosaur. Sadly, I have found that members of nursing faculty are often patronized in this way by academics from other disciplines, which usually signals to me an inability to make any useful counter-argument to a point, and possibly also a rather closed mind!

Neuromyths are ideas about neurological or cognitive processes that have been repeated often enough to be taken as fact. Unfortunately, some misconceptions about the brain persist in the classroom and beyond. Let us consider a few of these established ideas that pervade higher education, most of which arose from dubious educational psychology and persist as contemporary wisdom.

Left Brain – Right Brain

It is often suggested that people are predominantly left- or right-brained in terms of their skills and aptitudes: left-brain predominant = logical, mathematically skilled, organized and systematic, whilst right-brain predominant = artistic and creative. Just google "left-brain right brain" for many examples. Current research suggests that, regardless of personality or skill set, you use both hemispheres of your brain together to perform everyday tasks. Although certain functions, such as speech production, handedness, and facial recognition, tend to be dominated by one side of the brain in the great majority of people, most tasks require parallel processing from both hemispheres. This integration of input is made possible by the bundle of fibre connections between the right and left sides of the brain called the corpus callosum. Unless an entire hemisphere is removed or damaged, no one should really be considered to be "right"- or "left"-brained.

The Utility of Learning and Teaching Styles 

As educator James Atherton notes, most teachers would not argue with the proposition that people learn or teach in different ways. This has given rise to a whole host of theories of learning (and teaching) styles; at least 71 different learning-styles inventories have been published. However, the assumption of the "styles" adherents in education is that it is possible to develop a relatively simple typology of learning or teaching styles, develop test instruments to ascertain where individuals fit, teach to address them, and (more worryingly) assess the quality of teaching with reference to this.

The evidence to support this is weak at best. The research does not support the notion that there are hard-wired styles, and many of the theories conflate learning styles with learning strategies, cognitive theories, or personality-type theories. Certainly, students may well have learning preferences, but they are not as clear-cut as these various inventories suggest, and motivation would appear to over-ride them every time (Pashler et al., 2008; Scott, 2010). Nevertheless, if you look at many university education and professional development sites, learning styles continue to be taught as sage wisdom, and many commercial enterprises are happy to sell you a test.

The Learning Pyramid

The following diagram (or versions of it) appears on around 15,000 websites (do a reverse image search on Google, or simply search "learning pyramid"), and yet the evidence that supports it is very vague. It purportedly depicts the degree of retention of material achieved with various teaching methods.

[Figure: the "learning pyramid" of claimed retention rates for different teaching methods, as attributed to the NTL Institute]

It may derive from early work by Dale (1946/1969), but even the US-based National Training Laboratories Institute for Applied Behavioural Science (who cite it) admit that "NTL believes it to be accurate" but that they "can no longer trace the original research that supports the numbers" (Magennis and Farrell, 2005: 48). It is also often conflated with Dale's "cone of experience", and the Washington Post ran a nice article on its flaws in 2013. Again, there is probably some use and truth in the notion that some teaching methods will work better for some subjects and in some situations. However, the idea that this is a strongly validated theoretical model with clearly defined categories is far from the truth.

Multiple Intelligences and Thinking Hats

Howard Gardner's multiple intelligences model and Edward de Bono's thinking hats are other good examples of theories I often hear discussed or quoted to support pedagogic approaches, yet both are also good examples of modern neuromyths. Gardner first proposed his theory of different types of intelligence in 1983. Since then it has undergone incremental changes, including the addition of an eighth intelligence. These different forms of intelligence have been advocated as a basis for changing the way we teach, but repeated research and meta-analyses have found no evidence that individuals actually conform to Gardner's theoretical categories. Moreover, a 2006 study (Visser, Ashton & Vernon) found that many of Gardner's "intelligences" correlate with the g factor, supporting the idea of a single dominant type of intelligence.

Indeed, even intelligence quotient (IQ) theory itself is commonly misinterpreted. The first IQ test was devised by French psychologist Alfred Binet in 1905, and since then the IQ test has become the most recognized tool for predicting academic and professional success. However, although well validated as a psychometric measure, a number of myths about it persist, such as:

  • It measures intelligence
  • IQ can't change
  • IQ is genetic

Lastly, as a predictive factor for success it is rather simplistic: although generally a good predictor of performance, it does not explain the many confounding examples of successful people who have lower IQ scores than those less successful.

The thinking hats method was originally proposed by Edward de Bono in 1985, and the Thinking Hats site www.debonoforschools.com reads rather like a satire on the subject. The premise of the method is that the human brain thinks in a number of distinct ways which can be deliberately engaged, and hence planned for use in a structured way, allowing one to develop tactics for thinking about particular issues. However, there is virtually no empirical evidence supporting the model, and it has often been parodied.

In the end, Gardner's theory and de Bono's thinking hats are interesting ideas, but probably not all that helpful for adoption in formal education.

You Only Use 10% of Your Brain

Again this is a widespread belief, but recent neuro-imaging has conclusively debunked it. While not all of the brain is active at once, functional magnetic resonance imaging (fMRI) shows that several brain areas are at work for any given activity, depending on what function is needed, and that we use the majority of our brain matter daily.

Lack of Theoretical Development and Testing?

Overall, I fear part of the problem here is the trend towards accepting postmodern constructivist epistemologies over thorough scientific investigation, or what I might call the "it's all good" syndrome. I worry that this ambivalence towards good evidence in academic inquiry is actually gathering steam rather than diminishing, with key examples being the current rise of so-called integrative science and quackademic medicine. Good scientific practice involves developing ideas into theories and testing them repeatedly to identify the best of a set of competing hypotheses or explanations. That does not mean we have found the truth, but the best explanation given our current understanding. An approach that accepts all explanations as equally valid offers little practical value, apart from the ongoing generation of even more unsubstantiated theory.

Enough Already!

The call for more research into these theories is often made, but we should also recognize that there comes a point when it is reasonable to say we have enough evidence and move on to something new. It is not so much that these neuromyths are wrong, but that the evidence base and/or research methodology behind them is flaky at best, they have often been misinterpreted and generalized beyond their legitimate use, and they make little sense in the real world of education. So, it is time to move educational theory on towards more productive areas where student performance can actually be shown to improve, such as the use of improved feedback and formative assessment strategies.

There is an excellent, balanced chapter on neuromyths in a recent book by the co-ordinator of the Neuroeducational.net site, Howard-Jones (2009), which is well worth a look.

Onwards and Upwards

Bernie

References

Atherton J. Read more on misrepresentation, myths and misleading ideas on James Atherton's site at: http://www.learningandteaching.info/learning/myths.htm#ixzz33zAJEO7S

Dale, E. (1969) Audiovisual methods in teaching, third edition.  New York: The Dryden Press; Holt, Rinehart and Winston.

Doidge N. (2007) The Brain That Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science. Penguin Books.

Howard-Jones P. (2009) Introducing Neuroeducational Research. London: Routledge.

Jarrett C. (2012) Why the Left-Brain Right-Brain Myth Will Probably Never Die. Psychology Today, June 27, 2012.

Magennis S. and Farrell A. (2005) "Teaching and Learning Activities: expanding the repertoire to support student learning" in G. O'Neill, S. Moore and B. McMullin (eds) Emerging Issues in the Practice of University Learning and Teaching. Dublin: All Ireland Society for Higher Education/Higher Education Authority.

Pashler H., McDaniel M., Rohrer D. and Bjork R. (2008) "Learning Styles: Concepts and Evidence" Psychological Science in the Public Interest, vol. 9, no. 3; available on-line at http://www.psychologicalscience.org/journals/pspi/PSPI_9_3.pdf accessed 21 May 2014.

Scott C. (2010) The Enduring Appeal of 'Learning Styles'. Australian Journal of Education 54: 5. DOI: 10.1177/000494411005400102

Visser B.A., Ashton M.C. and Vernon P.A. (2006) "g and the measurement of Multiple Intelligences: A response to Gardner". Intelligence 34(5): 507–510.

 

Sampling and Probability: probably…

Hello all,

A belated Happy New Year to all (note the logical form: year is NOT plural, except apparently in North America)!

We thought we would kick off the year with a quick discussion of sampling theory, as it seems a subject fraught with confusion. To illustrate this point, I note a section from the Statistics Canada website which was cited to me last year by a graduate student (postgraduate, for readers in Blighty). The Stats Canada site notes the following about non-probability sampling:

[Image: excerpt from the Statistics Canada page on non-probability sampling]

Now, I certainly don't claim to be a statistical expert, as my expertise with inferential statistics is fairly limited. But I have a bit of a logic background from programming, so I do know a little about logical clauses. The problematic part for me is:

“in non-probability sampling, there is an assumption that there is an even distribution of characteristics within the population. This is what makes the researcher believe that any sample would be representative and because of that, results will be accurate.” 

Something about that didn’t seem quite right, and it seems inconsistent with the later statement:

"in non-probability sampling, since elements are chosen arbitrarily, there is no way to estimate the probability of any one element being included in the sample."

Logically, if it is a non-probability sample, then the sample will not be representative of the probability of a phenomenon being present in a population. If a phenomenon is equally evident in all members of the population, then any sample is effectively a probability sample, as it is subject to probabilistic inference: in the case of everyone demonstrating the phenomenon, the probability of finding it in your sample would be 100%. In effect, if the first statement is true then the second cannot be, as they are mutually exclusive. I believe what they are trying to suggest is that a non-probability sample is a targeted sample, selected from a frame (the set of people from whom the sample is drawn) whose members all exhibit the same characteristic, or have experienced the same phenomenon. Technically, though, this is not the same as an "even distribution".

The way I was taught, and the way I understand, the difference between probability and non-probability samples is as follows (and it is consistent with the second clause).

Non-Probability Sampling

Non-probability sampling does not depend upon the rationale of probability theory, and with it there is no way to estimate the likelihood of any particular element being included in a sample. Researchers may use this approach when a representative sample is unnecessary (for example, to explore the existence of a phenomenon or to examine personal experience), or when a probability sample is unobtainable. Even with samples that are not representative of a population, we can still explore the elements to describe phenomena or to identify whether a particular phenomenon exists.

Non-probability sampling may be useful in qualitative work, or for practical reasons such as focus group selection. It is also useful where the population is small, as with very small frames the statistical properties required to support a probability sample do not hold (e.g., surveying the 20 users of a new tool in a specialty clinic). It may also be a useful technique where the frame parameters are uncertain (e.g., sampling street drug users). Techniques for non-probability sampling are summarized as follows:

[Figure: summary of non-probability sampling techniques]
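To make this concrete, here is a minimal sketch in Python; the 20-user clinic frame and the volunteer selection are entirely hypothetical, invented to echo the specialty-clinic example above.

```python
# A minimal sketch (hypothetical data): a convenience sample drawn from a
# small frame of 20 clinic users. Because selection is not governed by a
# random mechanism, no inclusion probability can be attached to any element,
# so the results describe only those sampled, not the wider population.

clinic_users = [f"user_{i:02d}" for i in range(1, 21)]  # the whole frame (N = 20)

# Convenience/volunteer selection: simply the first eight who agree to take part.
volunteers = clinic_users[:8]

print(f"Frame size: {len(clinic_users)}, convenience sample: {volunteers}")
# We can describe the volunteers' experiences, but we cannot weight them or
# estimate sampling error, because P(selection) is unknown for every element.
```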

Probability Sampling

Probability sampling is more commonly used in quantitative research and aims to use samples that are representative of the whole. It is based on probability theory and accepted statistical principles, which allow the prediction that findings observed in the sample will occur in the whole population. It requires that every element has a known, non-zero chance of being selected, ideally (but not necessarily) an equal chance. In this type of sampling the probability of selection of an element can be calculated, so a sample element can be weighted as necessary to give it unbiased representation. It also requires that selection be determined by random chance. In the case of random samples, mathematical theory is available to assess the sampling error, so estimates obtained from random samples can be accompanied by measures of the uncertainty associated with the estimate, e.g., standard errors or confidence intervals. Examples of probability sampling techniques are summarized as follows:

[Figure: summary of probability sampling techniques]
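By way of contrast, here is a minimal sketch in Python using an entirely made-up population of 5,000 measurements; it shows how, with a simple random sample, every element has a known inclusion probability and the estimate can be reported with a standard error and an approximate 95% confidence interval (the finite-population correction is ignored for simplicity).

```python
# A minimal sketch (hypothetical data): a simple random sample, where every
# element has a known inclusion probability (n/N) and the estimate carries
# a standard error and an approximate 95% confidence interval.
import random
import statistics

random.seed(42)                                            # for reproducibility
population = [random.gauss(50, 10) for _ in range(5000)]   # made-up measurements (N = 5000)

n = 100
sample = random.sample(population, n)          # simple random sample without replacement
inclusion_prob = n / len(population)           # known for every element: 0.02

mean = statistics.mean(sample)
se = statistics.stdev(sample) / n ** 0.5       # estimated standard error of the mean
ci = (mean - 1.96 * se, mean + 1.96 * se)      # approximate 95% confidence interval

print(f"P(selection) = {inclusion_prob:.3f}")
print(f"Sample mean = {mean:.2f}, SE = {se:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```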

This seems consistent with the literature I have looked at on the subject over the years (such as Lenth, 2001; Campbell, Machin & Walters, 2007; Polit & Beck, 2014). The advantages and disadvantages of both approaches can be summarized as:

[Table: advantages and disadvantages of probability and non-probability sampling]

I did write to Stats Canada asking for clarification, and even politely suggested a possible correction that would make their description consistent. However, I never heard back, so I guess they don't have time to answer the blathering of an inquisitive nursing professor.

I then asked a couple of stats-savvy colleagues if they could explain the apparent inconsistency. One said, "Err, that doesn't seem right to me", and another, "Well, if Stats Canada say so it must be right!" So I am none the wiser as to their rationale. All I can say, from the good-science perspective, is: never take for granted anything you read (from whatever source, and especially on the web)!

If any stats wizards ever read this blog please do pitch in and give us your thoughts.

Onwards and Upwards

Bernie

References

Campbell M.J., Machin D. & Walters S. (2013) Medical Statistics: A Textbook for the Health Sciences. Chichester: John Wiley.

Lenth, R.V. (2001). Some practical guidelines for effective sample size determination. The American Statistician, 55, 187-193.

Polit D.F. & Beck C.T. (2014) Essentials of Nursing Research: Appraising Evidence for Nursing Practice. New York: Wolters Kluwer.

 

Any Colour You Like: defining the terms of modern science

This week a joint post from us!

Recently I have noticed an increasing trend towards generalization in much student work, at both undergraduate and postgraduate levels, and I have some concerns that this represents a gradual shift in the level of scholarship and academic discrimination. Overall this has become more evident over the last 10 years or so, with the advent of postmodern approaches in my discipline. In the worst instances, initial proposals for thesis work basically take the form of "This is a problem, so I am going to talk to a bunch of people to see what they think and find out some stuff." The latest version of this I am seeing in student writing is "To explain this I call upon …", and I have to admit I have struggled to resist the temptation to add "…the power of Grey Skull."

I was always a Thundercats man myself! However, in a similar vein, one thing that is increasingly happening here is that students are 'retrofitting' their work to theory. They carry out their research and then come up with statements such as "Vygotsky agrees with this", to which I normally write something like "That's a bit of luck then!" (we're far more brutal with our feedback comments here in the UK…)

This isn't really a fault on our students' side, as we seem to have got across the idea that different theoretical perspectives must be acknowledged and that no perspective is value-free, but also the principle that they are all equally valid and you can choose any one that suits you. Students have been indoctrinated to always identify a specific theoretical perspective. Indeed, faculty incessantly ask them, "yes, but what theoretical perspective are you going to use?" However, students often write "I am going to use the XYZ lens" simply to satisfy their professor, and then proceed without any attempt to explain why this is a useful approach, to give any justification, or to consider alternatives. It actually reminds me a bit of the old BBC Play School TV show of my childhood, where the storyteller would say "today, children, we are going to look through the round window." Now, I know many postmodern scholars would nod sagely and say "yes, exactly so!", but I must admit that from an epistemological stance I find this somewhat exasperating. In taking this approach, what we are effectively doing is dumbing down the nature of scientific enquiry into a generalized descriptive melange, rather than a consideration of competing explanations and discriminating arguments.

Allied to this, particularly at doctoral level, I have noticed a trend for students to write biographical pieces about themselves so you can see 'where they are coming from'. This may have some validity if it actually relates to an adopted theoretical stance, e.g. "Growing up in a working-class area of the inner city forged my radicalism", but it rarely seems to. I agree with Bernie: you can't just put ideas out like vegetables on a market stall. Sometimes I get the distinct impression that we have made students afraid to nail their colours to the mast, either because they don't feel sufficiently confident with their approach or because they don't have the depth of understanding to defend it. Increasingly, students fall back on citing another study that used their selected approach, or argue that the results justify the means; ironically, this is even harder to defend academically.

Now in the dark and distant past of our undergraduate studies in Portsmouth, Roger and I had rather an eccentric lecturer who used to wear academic dress to lecture in (most of us thought he had either been sent down from Oxford/Cambridge, or was a big Batman fan, as no one else in the institution did so).

Yes, I remember him describing some environmental issue on the Yellow River, but doing so in Cantonese as he felt the local concerns didn't translate well into English! We all thought he was, well, a little more than eccentric, Bern, but looking back perhaps he was just ahead of the curve – sorry, I digress…

Anyhow, I recall him once reprimanding me when I asked a question, saying "Define your terms young man, define your terms!" Well, he did have a point, as if we are not specific we run the risk of obfuscating our meaning. Let's take the specific example of the use of the terms concept, construct and variable. These are not interchangeable terms that we can choose at will to describe phenomena; they have specific meanings in the process of theory development.

Concepts are mental representations of things that allow us to share experiences and draw conclusions about the world; they are sometimes construed as abstract entities, expressions of an abstract form derived by generalization from particulars. For example, the concept of pain can be inferred from the observation of specific instances and records, using inductive and abductive reasoning. Pain is a good example, as it remains a highly active area of research today.

To develop our concept into a form that can be explored further, we need to describe it in terms that can be analysed in detail. This leads us to the development of a construct: a representative framework that describes the phenomenon in measurable terms. In science a construct is really a concept that has been deliberately adopted for a special scientific purpose. It has identified elements that can be measured (as the theoretical entity itself cannot be directly observed or measured). The neuromatrix theory of pain and intelligence are both good examples of constructs. The actual elements of the construct are defined in specific terms that can be measured, and these elements are known as variables, e.g. nociceptor potentials, or intelligence quotient (IQ). Once we have a construct with variables, our theory can be tested through hypothesis generation and deductive reasoning, to develop a theory that is substantiated by evidence.
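To illustrate that chain from construct to variable to testable hypothesis, here is a small, purely hypothetical sketch in Python; the pain scores and the "new analgesic" scenario are invented for illustration only.

```python
# A purely illustrative sketch (made-up numbers): the construct "pain" is
# operationalized as a measurable variable -- a 0-10 numeric rating scale --
# so that a hypothesis ("the new analgesic lowers reported pain") can be
# tested deductively against data.
import math
import statistics

control   = [7, 6, 8, 7, 5, 6, 7, 8, 6, 7]   # hypothetical pain scores, usual care
treatment = [5, 4, 6, 5, 3, 5, 4, 6, 5, 4]   # hypothetical pain scores, new analgesic

m1, m2 = statistics.mean(control), statistics.mean(treatment)
v1, v2 = statistics.variance(control), statistics.variance(treatment)
n1, n2 = len(control), len(treatment)

# Welch's t statistic: the difference in means scaled by its standard error.
t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
print(f"Mean difference = {m1 - m2:.2f}, t = {t:.2f}")
# A large |t| (checked against the t distribution) would count as evidence
# against the null hypothesis of no difference -- the construct is only
# testable because its variables were defined in measurable terms.
```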

Indeed, I would agree with that: certainly distinguishing the terms 'concept', 'construct' and 'variable', and recognizing that these lead to generating hypotheses and then to testing (in whatever way is deemed appropriate).

In this way we can see that the focus of empirical scientific work is really to generate and establish theories that can explain phenomena and be used to support predictions of future events, or to do other useful things. If we don't define our terms carefully or consider the arguments for the best explanation, and simply choose a theoretical framework we find appealing or fashionable without considering alternatives or offering justification, my concern is that we move away from doing anything practically useful and into the realm of intellectualization for its own sake.

I agree: justifying your stance and defining your terms carefully supports systematic and rigorous interrogation of the collected data. Data are only as robust as the methods used to collect them, and the conclusions of any work are only as strong as the analytical processes used; every step in the chain of rationale should hold. If we employ an "I'm going to talk to a few people and find stuff out" approach, as Bernie called it earlier, we really throw out any justification for how we might practically use those data, and for what meaningful conclusions can be drawn. Also, foraging through a mass of interview transcripts and pulling out quotes to support a preconceived view is not systematic analysis. I fear that students often opt for such methods not from any deep allegiance to postmodernism, or to any specific qualitative approach, but rather because it is perceived by some as an easier option, primarily because they don't consider the complexity of the analytical methods it requires. When students come to me with such project ideas I always ask them, "How are you going to analyse this?", and most times the response is a blank expression. Perhaps in future I'll add "…by the power of Grey Skull?"

To be fair to our esteemed students, the blame lies with us: it is our fault, as academics, if we have taught them that this sort of thing is acceptable. We only have ourselves to blame; after all, we shape our students' behaviours through our own practices. Define your terms (although preferably not in a foreign language)!

Bernie and Roger