Neuromyths in Education: Why do they persist?

In a post last year we discussed the lack of evidence-based education, and this came to mind again during some recent professional development sessions and conversations at my university, as several ideas that are taken for granted in higher education have very little supporting evidence. Indeed, education seems to be an area where some of the so-called “neuromyths” persist, and are even championed.

I came up against this when challenging some of the constructivist and postmodern educational ideas being discussed by a PhD student and a senior member of faculty in education. I was told “I didn’t realize people actually still thought that sort of thing”, as if I were some sort of Luddite dinosaur. Sadly, I have found that members of nursing faculty are often patronized in this way by academics from other disciplines, which usually signals to me an inability to make any useful counter-argument, and possibly also a rather closed mind!

Neuromyths are ideas about neurological or cognitive processes that have been repeated often enough to be considered fact. Unfortunately, some misconceptions about the brain persist in the classroom and beyond. Let us consider a few of these established ideas that pervade higher education, have mainly arisen from dubious educational psychology, and persist as contemporary wisdom.

Left Brain – Right Brain

It is often suggested that people are predominantly left- or right-brained in terms of their skills and aptitudes: left-brain dominant = logical, mathematically skilled, organized and systematic, whilst right-brain dominant = artistic and creative. Just Google “left-brain right brain” for many examples. Current research suggests that, regardless of personality or skill set, we use both hemispheres of the brain together to perform everyday tasks. Although certain functions, such as speech production, handedness, and facial recognition, tend to be dominated by one side of the brain in the great majority of people, most tasks require parallel processing from both hemispheres. This integration of input is made possible by the bundle of fibre connections between the right and left sides of the brain called the corpus callosum. Unless an entire hemisphere is removed or damaged, no one should really be considered “right”- or “left”-brained.

The Utility of Learning and Teaching Styles 

As educator James Atherton notes, most teachers would not argue with the proposition that people learn or teach in different ways. This has given rise to a whole host of theories of learning (and teaching) styles; at least 71 different learning-styles inventories have been published. The assumption of the “styles” adherents in education is that it is possible to develop a relatively simple typology of learning or teaching styles, develop test instruments to ascertain where individuals fit, teach to address them, and (more worryingly) assess the quality of teaching with reference to this.

The evidence to support this is unfortunately weak at best. The research does not support the notion that there are hard-wired styles, and many of the theories conflate learning styles with learning strategies, cognitive theories, or personality-type theories. Certainly, students may well have learning preferences, but they are not as clear-cut as these various inventories suggest, and motivation would appear to over-ride them every time (Pashler et al., 2008; Scott, 2010). Nevertheless, many university education and professional development sites continue to teach them as sage wisdom, and many commercial enterprises are happy to sell you a test.

The Learning Pyramid

The following diagram (or a version of it) appears on around 15,000 websites (do a reverse image search on Google, or simply search “learning pyramid”), and yet the evidence that supports it is very vague. It purportedly depicts the degree of retention of material achieved with various teaching methods.

[Image: the NTL “learning pyramid” diagram]

It may come from early work by Dale (1946/1969), but even the US-based National Training Laboratories Institute for Applied Behavioural Science (who cite it) admit that while “NTL believes it to be accurate”, they “can no longer trace the original research that supports the numbers” (Magennis and Farrell, 2005: 48). It is also often conflated with the notion of the “cone of experience” in education, and the Washington Post ran a nice article on its flaws in 2013. Again, there is probably some use and truth in the notion that some teaching methods will work better for some subjects and in some situations. However, the idea that this is a strongly validated theoretical model with clearly defined categories is far from the truth.

Multiple Intelligences and Thinking Hats

Howard Gardner’s multiple intelligences model and Edward de Bono’s thinking hats are other good examples of theories I often hear discussed or quoted to support pedagogic approaches. Yet both are also good examples of modern neuromyths. Gardner first proposed his theory of different types of intelligence in 1983. Since then it has undergone incremental changes, including the addition of a further intelligence (bringing the total to eight). These different forms of intelligence have been advocated as a basis for changing the way in which we teach. But repeated research and meta-analysis have found no evidence that individuals actually conform to Gardner’s theoretical categories. Moreover, a 2006 study found that many of Gardner’s “intelligences” correlate with the g factor, supporting the idea of a single dominant type of intelligence (Visser et al., 2006).

Indeed, even intelligence quotient (IQ) theory itself is commonly misinterpreted. The first IQ test was devised by French psychologist Alfred Binet in 1905, and since then the IQ test has become the most recognized tool for predicting academic and professional success. However, although it is well validated as a psychometric measure, a number of myths about it persist, such as:

  • It measures intelligence
  • IQ can’t change
  • IQ is genetic

Lastly, as a predictive factor for success it would seem rather simplistic: although IQ is generally a good predictor of performance, it does not explain the many confounding examples of successful people who have lower IQ scores than their less successful peers.

The thinking hats method was originally proposed by Edward de Bono in 1985 (and the Thinking Hats site, www.debonoforschools.com, reads rather like a satire on the subject). The premise of the method is that the human brain thinks in a number of distinct ways which can be deliberately challenged, and hence planned for use in a structured way, allowing one to develop tactics for thinking about particular issues. However, there is virtually no empirical evidence supporting the model, and it has often been parodied.

In the end, Gardner’s theory and de Bono’s thinking hats are interesting ideas, but probably not all that helpful for adoption in formal education.

You Only Use 10% of Your Brain

Again, this is a widespread belief, but recent neuroimaging has conclusively debunked the 10-percent myth. While not all of the brain is active at once, functional magnetic resonance imaging (fMRI) shows that several brain areas are at work for any given activity, depending on what function is needed, and that we use the majority of our brain matter daily.

Lack of Theoretical Development and Testing?

Overall, I fear part of the problem here is the trend towards accepting postmodern constructivist epistemologies over thorough scientific investigation, or what I might call the “it’s all good” syndrome. I worry that this ambivalence towards good evidence in academic inquiry is actually gathering steam rather than diminishing, with key examples being the current rise of so-called integrative science and quackademic medicine. Good scientific practice involves developing ideas into theories and testing them repeatedly to identify the best of a set of competing hypotheses or explanations. That does not mean we have found the truth, but the best explanation given our current understanding. An approach that accepts all explanations as equally valid accounts of the world offers little practical value, apart from the ongoing generation of even more unsubstantiated theory.

Enough Already!

The call for more research into these theories is often made, but we should also recognize that there comes a point when it is reasonable to say we have enough evidence and move on to something new. It is not so much that these neuromyths are wrong, but that the evidence base and/or research methodology behind them is flaky at best, they have often been misinterpreted and generalized beyond their legitimate use, and they make little sense in the real world of education. So, it is time to move educational theory on towards more productive areas where student performance can actually be shown to improve, such as the use of improved feedback and formative assessment strategies.

There is an excellent, balanced chapter on “neuromyths” in a recent book by Howard-Jones, the co-ordinator of the Neuroeducational.net site, which is well worth a look.

Onwards and Upwards

Bernie

References

Atherton J. Read more on misrepresentation, myths and misleading ideas on James Atherton’s site at: http://www.learningandteaching.info/learning/myths.htm#ixzz33zAJEO7S

Dale E (1969) Audiovisual Methods in Teaching, third edition. New York: The Dryden Press; Holt, Rinehart and Winston.

Doidge N (2007) The Brain That Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science. Penguin Books.

Howard-Jones P (2009) Introducing Neuroeducational Research. London: Routledge.

Jarrett C (2012) Why the Left-Brain Right-Brain Myth Will Probably Never Die. Psychology Today, June 27, 2012.

Magennis S and Farrell A (2005) “Teaching and Learning Activities: expanding the repertoire to support student learning” in G O’Neill, S Moore and B McMullin, Emerging Issues in the Practice of University Learning and Teaching. Dublin: All Ireland Society for Higher Education/Higher Education Authority.

Pashler H, McDaniel M, Rohrer D and Bjork R (2008) “Learning Styles: concepts and evidence” Psychological Science in the Public Interest 9(3); available online at http://www.psychologicalscience.org/journals/pspi/PSPI_9_3.pdf accessed 21 May 2014.

Scott C (2010) The Enduring Appeal of ‘Learning Styles’. Australian Journal of Education 54: 5. DOI: 10.1177/000494411005400102

Visser BA, Ashton MC and Vernon PA (2006) “g and the measurement of Multiple Intelligences: A response to Gardner”. Intelligence 34(5): 507–510.

Do You Understand Lupine Ways of Knowing? The value of reductio ad absurdum in scientific debate.

This week I thought I would raise the rather contentious issue of the reductio ad absurdum argument (also known as argumentum ad absurdum). This is an ancient form of logical argument that seeks to demonstrate that an argument or idea is nonsense by showing that a false, ludicrous, or absurd result follows from its acceptance, or alternatively that an argument is sound because a false, untenable, or absurd result follows from its denial.
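
For anyone who wants to see the bare skeleton of the argument, here is the form written out in LaTeX, together with the stock textbook example (the irrationality of the square root of two); this is offered purely as a generic illustration of the reductio form, not as anything drawn from our Nursing Inquiry paper.

% A minimal sketch of the logical form: to establish \neg P,
% assume P and derive a contradiction.
\[
  \bigl(P \Rightarrow (Q \wedge \neg Q)\bigr) \;\Rightarrow\; \neg P
\]
% The classic worked example: suppose (for contradiction) that
% \sqrt{2} = a/b, with a and b integers sharing no common factor. Then:
\[
  \sqrt{2} = \frac{a}{b}
  \;\Rightarrow\; a^{2} = 2b^{2}
  \;\Rightarrow\; a = 2k
  \;\Rightarrow\; b^{2} = 2k^{2},
\]
% so both a and b are even, contradicting "no common factor".
% The assumption is untenable, hence \sqrt{2} is irrational.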

This form of argument has venerable roots and is well documented as a form of logic in ancient Greece, used by such luminaries as Xenophanes, Socrates, and Plato. However, in modern academia there seem to be rather polarized views on it: 1) that it trivializes an argument and belittles the person taking a particular position, or 2) that it is a valid and reasonable way of demonstrating that an idea is unsound. There also seems to be a cultural aspect, in that I have found it used more frequently in Europe, whereas in North America it is somewhat frowned upon in many academic circles.

Naturally, as Rog and I are somewhat subversive and agitative academics (I use the term loosely), we are in full support of it, and to this end have just published a paper in Nursing Inquiry using exactly this form of argument to challenge the established wisdom of a specific postmodern argument for alternative ways of knowing. The paper was based on an earlier post on this very blog. Here, we use the ad absurdum argument to note that the principles used to support Carper’s four ways of knowing can equally well be used to support a more creative typology (in this case including arcane knowing and lupine knowing).

Naturally, as with any form of intellectual rationale, the argument is only as good as the fundamental data and facts it is based upon. An ad absurdum argument can therefore be misused or poorly constructed, and it is also often used erroneously as a straw-man argument.

Considering what is absurd and what isn’t is a tricky thing for anyone, and particularly problematic in science. For example, many Victorian scientists scoffed at the thought of powered flight, and even Einstein had issues with the notion of black holes. Identifying absurdity is therefore not easily undertaken, as it may simply be that the ideas presented are highly original or unconventional. The suggestion that the bacterium Helicobacter pylori causes gastric ulceration is a good example, as this theory was not readily accepted by the medical community for several years, despite good evidence.

Also, this is not the same as absurdity as used in common parlance. Commonly, absurd positions are seen as ridiculous or foolhardy, but an argument ad absurdum does not suggest the person making the argument should be ridiculed or lampooned. After all, we have all believed ridiculous things at one time or another; for western children, the notion that Santa Claus brings all the children in the world toys on one night a year is a case in point! For the purposes of scientific thinking, for something to be demonstrated as absurd we really need to see inconsistency in the arguments presented. An absurd position may be considered one that is contrary to reason, irrational, or ludicrous to follow because of the practical implications of believing it. To complicate matters, several concepts now accepted and used in modern science arose in exactly this fashion: quantum physics, for example. However, repeated scientific observation and empirical data have proved quantum theory correct. So, paradigms change with time, and we should be cautious about suggesting any position is ridiculous.

From a pragmatic position, I would argue that if an argument can be shown to be fallacious by analysing its components and demonstrating inconsistencies, or by showing that accepting it commits you to associated positions that make no sense and have no practical value, then an ad absurdum argument can be used effectively to demonstrate these weaknesses.

At the end of the day, the sensitivities invoked by this form of argument are worth considering, and it is a form of rationale that is not easy to develop effectively. However, as long as it involves demonstrating the nonsense an idea or position presents, rather than attacking the person making the argument, I would suggest it is a useful form of analysis. As a scientist, if you are prepared to make any case, hypothesis or argument, you should be prepared to have it challenged and debated, and to defend it. If the position is sound it will survive this critique and win through. That is what good science is all about, but to make sound ad absurdum arguments you have to have a good working knowledge of the logical fallacies to start with. They can also be a lot of fun, and if this form was good enough for Socrates…

Bernie

References

Carper B.A. (1978), “Fundamental Patterns of Knowing in Nursing”, Advances in Nursing Science 1(1), 13–24

Garrett B.M. & Cutting R.L. (2014) Ways of knowing: realism, non-realism, nominalism and a typology revisited with a counter perspective for nursing science. Nursing Inquiry. Retrieved 21 May 2014.

Rescher N. (2009) Reductio ad absurdum. The Internet Encyclopedia of Philosophy. Retrieved 21 May 2014.

Rum and Academic Pressure: What drives us to academic dishonesty?

Hi! I’m back from the horror of book writing and have also finally navigated around the issue that the IT Help desk is 6000 miles and 8 time zones away from where I am!

Due to both my joyous enthusiasm at being back and Bernie’s increasingly bleak emails appearing like Marley’s Ghost and just saying “Blog!”, I have decided to. So (a bit like London buses, where nothing happens and then five come along at once!) you’ve got two blogs this week.

A couple of things have converged recently, firstly a conversation with some students and secondly, finishing reading an interesting, but rather sad book.

The conversation with the students centred around their third-year dissertations, in other words their first go at real research, in that they design a study, then collect and analyse data without really being told what to do.

This is somewhat disconcerting for some students, as they’ve never ventured very far from a sort of preferred expository form of learning. In other words, they’ve been told how and what to do by their teachers. Being ‘out on their own’ tends to freak them a bit and leads to queues of students outside my office asking for ‘tutorials’ which becomes something of a euphemism for ‘help’ at this time of year.

The conversation I had with a few of them this week centred on their write-ups. We were talking about the presentation of data, and I told them that their projects did not need to include any raw data, only data that had been ‘processed’ in some way, in other words graphed or statistically treated. I didn’t want pages and pages of numbers. I think I added something droll about having better things to do with my life and that I’d have more fun reading a telephone directory. This seemed to prompt genuine surprise in the tutorial group. “What?!” they chorused, “No primary data?!” That’s right, I replied.

After a brief (stunned) silence, one of them then said “But, Rog, we might make it all up!” I was equally knocked back by this and after another stunned silence (on my part) I replied somewhat hesitantly, “But why on earth would you?”

We then entered into an interesting discussion about ‘making up’ data. I argued that to do so was to prove that you didn’t really ‘get’ science, because ‘finding out stuff’ is the best bit. They countered by implying that I was naive in the extreme and didn’t understand the pressure students were under. They may have a point, I suppose.

The second and related event was that I have just re-read an interesting book that came out a few years ago called ‘A Rum Affair: How Botany’s “Piltdown Man” was Unmasked’ by Karl Sabbagh. It’s a very good book and I recommend it, although be warned I am about to rather ‘plot-spoil’ it if you haven’t read it.

It concerns the work of two academics in the UK in the 1950s. One was John Heslop-Harrison, a professor of botany at Newcastle University, and the other was John Raven, a fellow at King’s College, Cambridge, and a classical scholar. Raven worked primarily on pre-Socratic philosophy but was also a keen amateur botanist.

Heslop-Harrison had at this time (essentially) proposed the idea that certain plants may have survived successive ice ages in so-called ecological refugia: sheltered, south-facing locations where micro-climates may have established the required conditions for their survival. This meant that plants normally associated with distributions much further south could well be found at more northerly latitudes. It was, and still is, a popular idea, but at the time it needed evidence to really verify it.

Heslop-Harrison ran regular field trips to the then privately owned Hebridean island of Rum, off Scotland’s west coast. He was the only academic to have permission from the landowners to do so, and the island was off-limits to all others. It was during these trips that he and his students began to find species of plant that would normally be associated with southern Europe. These finds appeared to provide the definitive evidence to support the theory. His work was published in leading journals and the plants went on to be officially included in the British flora. It was a good example of an idea being tested in the field and then becoming a validated theory.

However, as more finds were made each year, some were becoming sceptical, particularly at Cambridge. To cut (rather unfairly) a long but intriguing story short, the aforementioned John Raven managed, with some degree of subterfuge, to ‘negotiate’ a place on one of the field trips. Unknown to Heslop-Harrison, he had gone to verify the finds.

After a few days on the site, a new ‘southern’ plant was indeed discovered. Raven visited the ‘find site’ a day or so later and did indeed find the plant growing in the location. He dug up one of the specimens to authenticate its identification later, back in the lab at Cambridge.

On his return he confirmed that the plant was indeed the described species, but then he noticed something strange about the soil around the root bole: it appeared to be different from the rest of the soil in the bag. He sent it to a pedologist (soil scientist) colleague, who ran an analysis of the soil around the root and found specific minerals that could not be associated with the island of Rum; they simply did not occur there. When checked, these particular minerals turned out to have an almost geographical ‘fingerprint’, in that they were nearly exclusively found in the soils of North East England, more specifically around Newcastle. The only conclusion was that the plants had been grown in Newcastle, then transported and planted on Rum.

Of course this is a summation of a long book, and all sorts of twists and turns ensued, including the status of Raven as an amateur botanist against Heslop-Harrison’s professorship, the role and status of the two institutions, Raven’s subterfuge, and so on. It makes a fascinating read.

However, the point here is: what makes an emeritus professor, in a leading department at a top university, fabricate data? Why should anybody at any level do that? Certainly, today academics are under terrific pressure to publish. I noticed recently that Peter Higgs (he of the Higgs boson) said that he’d never get a job now, as he’d only published a couple of papers in his career. However, making up data to fit the theory remains a curious step to take. Interestingly, the very limited research into this area suggests that once somebody does it, even at a fairly early career stage, they continue to do so later in their careers! It starts with just cutting the odd corner: let’s just say that n=100 rather than n=10 (the mean won’t shift much); we repeated the process how many times?; let’s not include those data as they throw the mean out; if we ignore those data we get a significant result, etc., etc.
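
To make that parenthetical point concrete, here is a minimal sketch (in Python, with entirely made-up example numbers, nothing to do with the Rum case): quietly claiming n=100 when the real n=10 barely touches the mean, but it shrinks the reported standard error by a factor of √10, so the result looks far more precise, and more ‘significant’, than it really is.

# A minimal sketch with hypothetical measurements: inflating the reported n
# leaves the mean untouched but shrinks the standard error.
from statistics import mean, stdev
from math import sqrt

measurements = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 4.9, 5.1, 5.0]  # the real sample, n = 10

m = mean(measurements)    # the sample mean: unchanged whatever n we claim
sd = stdev(measurements)  # the sample standard deviation

for n_claimed in (10, 100):
    se = sd / sqrt(n_claimed)  # the standard error we would report for a claimed n
    print(f"claimed n = {n_claimed:3d}: mean = {m:.2f}, standard error = {se:.3f}")

# With a claimed n = 100 the standard error is about 3.2 times smaller, so
# confidence intervals look tighter and p-values smaller; the fabrication
# hides in the apparent precision, not in the mean itself.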

There was some work a few years ago that suggested a link between academic dishonesty at university and unethical behaviour in later years. If we want science to ‘do what it says on the tin’ in relation to understanding the world around us, what’s the point of making stuff up? If it’s just to get on with some sort of career, well, that might explain some of the unethical behaviours we’ve seen over the years.

At the end of the tutorial I said to the students, “Another thing: don’t come here next week and say ‘my experiment didn’t work’; experiments only ‘don’t work’ when you don’t get any data. If your results don’t fit the theory, well, that’s a good thing isn’t it? It challenges orthodoxy, questions the established, confronts convention.”

They looked back at me with the sort of resigned expressions that showed they were wishing they’d got someone else as supervisor. “Whatever you say,” said one of them, spectacularly missing the point.

Reference

Sabbagh K (1999) A Rum Affair: How Botany’s ‘Piltdown Man’ was Unmasked. London: Allen Lane.