Open Access Science and the Rise of Predatory Publishing

Following our summer hiatus from the blog, Roger and I thought we would kick off this academic year with a piece on the changing nature of peer review and science publishing.

Like most professors, we are finding our mailboxes more and more full of academic spam. If it is not companies trying to sell me transgenic rats, monogrammed lab coats in stylish colours, or bioassay systems, it's now invitations to present papers at conferences in exotic locations or to publish with new journals I have never heard of (and usually well outside my discipline). It's enough to make me long for the days when I simply won the Nigerian state lottery a couple of times a week.

New technologies have arisen that allow academics to publish more directly than ever before. Traditionally, scientific work has been presented to the world for consideration of its merits and for challenge, the principle being that ideas and claims are independently examined and refined, and bad ones rejected. This is a central part of the skeptical nature of scientific inquiry and remains a firm part of academic training, with PhD candidates being required to defend their theses in robust discussion with their peers.

The peer-review process for scientific papers prior to publication is an embodiment of this principle. Its value is that the process is self-critical, and the conventional wisdom is consistently challenged. Modern science is pragmatic in that it presents ideas for peer review and openly invites anyone to challenge the dominant theory if they can come up with alternative results or better explanations supported by evidence. Nevertheless, we know this culture is not value free, as we have discussed previously, and the costs associated with the dissemination of scientific information are traditionally passed on to the end-user. Hence, the large publishers tend to be driven more by circulation and sales goals than by altruistic motivations, especially with books. For example, you can have an innovative, well-written book but still get it rejected by major publishers because it goes against the flow and is considered unlikely to sell in any volume. Despite their expertise and peer-review systems, publishers often still get this spectacularly wrong (J.K. Rowling being a case in point in the world of fiction)! If we look at the number of retractions in scientific journals, we also see some evidence of flaws in this traditional system.

So, with the advent of desktop publishing and mass circulation using the web, alternative models for dissemination have arisen in all forms of academic publishing. Open access (OA) is one such innovation, and supports unrestricted online access to peer-reviewed scholarly research. A declaration on the principles behind OA was made at the 2003 Berlin Conference. Although primarily intended for scholarly journal articles, OA now also encompasses a growing number of theses and book chapters. This is hardly new, though: John Brockman noted a culture of scientists communicating directly with the public about their work in the media back in 1995 (Brockman, 1995). Nevertheless, the ideas behind OA publication reflect a desire to provide faster and more open access to scientific work.

Basically, OA comes in two flavours: gratis open access, which is completely free online access, and libre open access, which is free online access plus some additional usage rights, often granted through licensing agreements such as Creative Commons. Authors have two options with OA publication. They can either self-archive their papers in an open access repository, known as "green" OA, or publish in an established OA journal, known as "gold" OA. Central repositories such as PubMed Central are examples of green OA, whilst gold OA journals usually use a fee-for-service model that tends to range from $1000 to $3000 per paper (depending on the journal). This is justified on the basis of the editorial support and peer-review services involved in the publishing process.

Struggling academics looking to raise the profile of their work are often encouraged to use OA services to increase their citation rates (I have been advised of this on several occasions in my career), and some granting agencies require OA publication in any proposals submitted. So, overall there is growing pressure on academics to use these new publishing models, and gold OA seems to offer a robust peer-review process on a par with established traditional journals.

However, what is becoming more concerning is that the fee-for-service model has become a boom industry, and entrepreneurs have recognized that good money can be made here. This has led to the rise of the predatory publishing tactics we are now seeing. The term was coined by University of Colorado Denver librarian and researcher Jeffrey Beall upon noticing the large number of emails inviting him to submit articles or join the editorial boards of previously unknown journals. Here is a classic example from my mailbox last week:

Pharmacology and Alternative Medicine Therapeutics

Dear Dr. Garrett,

Scholoxy Publication‘s journals are International Journal of Education and Research welcomes and acknowledges high quality theoretical and empirical original research papers, case studies, review papers, literature reviews, and technical note from researchers, academicians, professional, practitioners and students from all over the world.

We coordinately invite you to submit your papers to Pharmacology and Alternative Medicine Therapeutics an Open Access (Gold OA), peer reviewed, international online publishing journal which aims to publish premier papers on all the related areas of advanced research carried in its field.

The Journal has strong Editorial Board with eminent persons in the field and carries stringent peer review process.

It all sounds very genuine and scholarly, apart from the fact that I don't know anyone in the pharmacology department who has heard of them (in a positive way), and I am not even a pharmacology or alternative medicine professor! This unsolicited invite is actually from a pay-per-publication service whose peer-review process is completely unverified, and I am certainly a little suspicious as to how "stringent" the peer review is when each publication is accompanied by a cheque.
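Purely as an illustration of the pattern-spotting above, the stock phrases in these invitations are so formulaic that one could sketch a crude keyword screen for them. This is not any published or validated detector; the phrase list and threshold below are invented for this sketch:

```python
# Illustrative red-flag screen for unsolicited journal invitations.
# The phrases and threshold are invented for this sketch; they are
# crude heuristics, not a validated classifier of predatory publishers.

RED_FLAGS = [
    "we coordinately invite",   # mangled boilerplate English
    "eminent persons",          # vague appeals to authority
    "stringent peer review",    # unverifiable quality claims
    "premier papers",
    "all the related areas",
]

def red_flag_score(email_text: str) -> int:
    """Count how many stock spam phrases appear in the email."""
    text = email_text.lower()
    return sum(1 for phrase in RED_FLAGS if phrase in text)

def looks_predatory(email_text: str, threshold: int = 2) -> bool:
    """Flag an invitation when several stock phrases co-occur."""
    return red_flag_score(email_text) >= threshold
```

Of course, no keyword list substitutes for checking the journal itself against the resources discussed below; this merely shows how little information these emails actually carry.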

More subtly, these new publishers are also engaging academics to join their editorial boards, or to become reviewers for their "prestigious" journals. Another tactic is the use of what seem like personal invitations to present at conferences (in reality, mail-merged bulk mailings to spam lists). To highlight these issues, I see the Canadian Association of Witch Doctors recently submitted a spoof paper and had it approved at one such peer-reviewed OMICS conference. Again, for academics beginning their careers (or even established academics), these may seem like great opportunities to develop their profiles or get their work to a broader audience.

So how can we discern the predatory publishers from genuine scholarly OA providers? Luckily, there are some resources that can help. Jeffrey Beall provides an extensive list of dubious OA outfits on Scholarly Open Access. Worth a look, as it's amazing how many there are!

There are also several sites that provide journal rankings, so academics can check out the status of their chosen journal (several such rankings exist for my own discipline, nursing).

Many disciplines also have lists of established journals in their field, such as INANE's list of nursing journals. However, even these lists are not foolproof in terms of establishing the academic credibility of journals. For example, Nursing Science Quarterly (incredibly, still published by Sage; note my earlier comments on publishers' motives) makes several of these lists and, although not a predatory publication, is hardly a paragon of scientific excellence, with notable self-citation and hardly rigorous peer-review practices. I think Roger would see this one fitting his "isn't that like asking your mum to review your papers?" category.

Overall, the rise of predatory publishing, and how it will impact the broader scientific community and influence the public understanding of science, is something of a concern. It seems the best advice for scientists everywhere is buyer beware. There is nothing wrong with traditional journals, and we should remember there are a good many reputable OA journals. However, the usual practice is that you send them a paper, not that you receive an invitation from them. Sometimes good journal editors do solicit work from established researchers and theorists in the field. But if an offer comes your way to join an editorial board, present at a conference, or publish in a venerable new journal, and it seems too good to be true, it probably is.

Onwards and upwards

Bernie

References

Beall J. (2012) On Predatory Publishers. http://chronicle.com/blogs/brainstorm/on-predatory-publishers-a-qa-with-jeffrey-beall/47667

Brockman, J. (1995). The third culture. New York: Simon & Schuster.

 

Neuromyths in Education: Why do they persist?

In a post last year we discussed the lack of evidence-based education, and during some recent professional development sessions and conversations at my university this came to mind again, as some of the ideas taken for granted in higher education seem to have very little supporting evidence. Indeed, education does seem to be an area where some of the so-called "neuromyths" persist, and are even championed.

I came up against this when challenging some of the constructivist and postmodern educational ideas being discussed by a PhD student and a senior member of faculty in education. I was told "I didn't realize people actually still thought that sort of thing", as if I were some sort of Luddite dinosaur. Sadly, I have found members of nursing faculty are often patronized in such ways by academics from other disciplines, which usually signals to me an inability to make any useful counter-argument, and possibly also a rather closed mind!

Neuromyths are ideas about neurological or cognitive processes that have been repeated often enough to be considered fact. Unfortunately, some misconceptions about the brain persist in the classroom and beyond. Let us consider a few of these established ideas, which pervade higher education, have mainly arisen from dubious educational psychology, and persist as contemporary wisdom.

Left Brain – Right Brain

It is often suggested that people are predominantly left- or right-brained in terms of their skills and aptitudes: left-brain predominant = logical, mathematically skilled, more organized and systematic, whilst right-brain predominant = artistic and creative. Just google "left-brain right brain" for many examples. Current research suggests that, regardless of personality or skill set, you use both the right and left hemispheres of your brain together to perform everyday tasks. Although certain functions, such as speech production, handedness, and facial recognition, tend to be dominated by one side of the brain in the great majority of people, most tasks require parallel processing from both hemispheres. The integration of input is made possible by the fibre connections between the right and left sides of the brain called the corpus callosum. Unless an entire hemisphere is completely removed or damaged, no one should really be considered "right"- or "left"-brained.

The Utility of Learning and Teaching Styles 

As educator James Atherton notes, most teachers would not argue with the proposition that people learn or teach in different ways. This has given rise to a whole host of theories of learning (and teaching) styles; at least 71 different learning-styles inventories have been published. The assumption of the "styles" adherents in education, however, is that it is possible to develop a relatively simple typology of learning or teaching styles, develop test instruments to ascertain where individuals fit, teach to address these styles, and (more worryingly) assess the quality of teaching with reference to them.

The evidence to support this is unfortunately weak at best. The research does not support the notion that there are hard-wired styles, and many of the theories conflate learning styles with learning strategies, cognitive theories, or personality-type theories. Certainly, students may well have learning preferences, but these are not as clear-cut as the various inventories suggest, and motivation would appear to override them every time (Pashler et al., 2008; Scott, 2010). Nevertheless, if you look at many university education and professional development sites, they continue to be taught as sage wisdom, and many commercial enterprises are happy to sell you a test.

The Learning Pyramid

The following diagram (or a version of it) appears on around 15,000 websites (do a reverse image search on Google, or simply search "learning pyramid"), and yet the evidence that supports it is very vague. It purportedly depicts the degree of retention of material with various teaching methods.

[Figure: the NTL "learning pyramid", showing purported retention rates for various teaching methods]

It may come from early work by Dale (1946/1969), but even the US-based National Training Laboratories Institute for Applied Behavioural Science (who cite it) admit that while "NTL believes it to be accurate", they "can no longer trace the original research that supports the numbers" (Magennis and Farrell, 2005: 48). It is also often conflated with the notion of the "cone of experience" in education, and the Washington Post ran a nice article on its flaws in 2013. Again, there is probably some use and truth in the notion that some teaching methods will work better for some subjects and in some situations. However, the idea that there is a strong, validated theoretical model with clearly defined categories is far from the truth.

Multiple Intelligences and Thinking Hats

Howard Gardner’s multiple intelligences model and Edward de Bono’s thinking hats are other good examples of theories I often hear discussed or quoted to support pedagogic approaches. Yet both are also good examples of modern neuromyths. Gardner first proposed his theory of different types of intelligence in 1983. Since then, it has undergone incremental changes, including the addition of one further intelligence (bringing the total to eight). These different forms of intelligence have been advocated as a basis for changing the way in which we teach. But repeated research and meta-analyses have found no evidence that individuals actually conform to Gardner’s theoretical categories. Also, according to a 2006 study, many of Gardner’s “intelligences” correlate with the g factor, supporting the idea of a single dominant type of intelligence (Visser et al., 2006).

Indeed, even intelligence quotient (IQ) theory itself is commonly misinterpreted. The first IQ test was devised by French psychologist Alfred Binet in 1905, and since then the IQ test has become the most recognized tool for predicting academic and professional success. However, although well validated as a psychometric measure, a number of myths about it persist, such as:

  • It measures intelligence
  • IQ can’t change
  • IQ is genetic

Lastly, as a predictive factor for success it would seem rather simplistic; although generally a good predictor of performance, IQ does not explain the many confounding examples of successful people who have lower IQ scores than those less successful.

The Thinking Hats site (www.debonoforschools.com) reads rather like a satire on the subject. The method was originally proposed by Edward de Bono in 1985. Its premise is that the human brain thinks in a number of distinct ways which can be deliberately challenged, and hence planned for use in a structured way, allowing one to develop tactics for thinking about particular issues. However, there is virtually no empirical evidence supporting the model, and it has often been parodied.

In the end, Gardner’s theory and de Bono’s thinking hats are interesting ideas, but probably not all that helpful for adoption in formal education.

You Only Use 10% of Your Brain

Again, this is a widespread belief, but recent neuroimaging has conclusively debunked it. While not all of the brain is active at once, functional magnetic resonance imaging (fMRI) shows that several brain areas are at work in any given activity, depending on what function is needed, and that we use the majority of our brain matter daily.

Lack of Theoretical Development and Testing?

Overall, I fear part of the problem here is the trend towards accepting postmodern constructivist epistemologies over thorough scientific investigation, or what I might call the “it’s all good” syndrome. I worry that this ambivalence towards good evidence in academic inquiry is actually gathering steam rather than diminishing, key examples being the current rise of so-called integrative science and quackademic medicine. Good scientific practice involves developing ideas into theories and testing them repeatedly to identify the best of a set of competing hypotheses or explanations. That does not mean we have found the truth, but the best explanation given our current understanding. An approach that accepts all theories as equally valid explanations of the world offers little practical value, apart from the ongoing generation of even more unsubstantiated theory.

Enough Already!

The call for more research into these theories is often made, but we should also recognize there comes a point when it is reasonable to say we have enough evidence and move on to something new. It is not so much that these neuromyths are wrong, but that the evidence base and/or research methodology behind them is flaky at best; they have often been misinterpreted and generalized beyond their legitimate use, and make little sense in the real world of education. So, it is time to move educational theory on towards more productive areas where student performance can actually be shown to improve, such as the use of improved feedback and formative assessment strategies.

There is an excellent, balanced chapter on “neuromyths” in a recent book by Howard-Jones, co-ordinator of the Neuroeducational.net site, that is well worth a look.

Onwards and Upwards

Bernie

References

Atherton J. Read more on misrepresentation, myths and misleading ideas on James Atherton’s site at: http://www.learningandteaching.info/learning/myths.htm#ixzz33zAJEO7S

Dale, E. (1969) Audiovisual methods in teaching, third edition.  New York: The Dryden Press; Holt, Rinehart and Winston.

Doidge, N. The Brain That Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science. Penguin Books, 2007

Howard-Jones P (2009) Introducing Neuroeducational Research. London: Routledge.

Jarrett C. Why the Left-Brain Right-Brain Myth Will Probably Never Die. Psychology Today, June 27, 2012.

Magennis S and Farrell A (2005) “Teaching and Learning Activities: expanding the repertoire to support student learning” in G O’Neill, S Moore and B McMullin, Emerging Issues in the Practice of University Learning and Teaching. Dublin: All Ireland Society for Higher Education/Higher Education Authority.

Pashler H, McDaniel M, Rohrer D and Bjork R (2008) “Learning Styles: concepts and evidence” Psychological Science in the Public Interest vol. 9, no. 3; available online at http://www.psychologicalscience.org/journals/pspi/PSPI_9_3.pdf accessed 21 May 2014.

Scott, C. (2010) The Enduring Appeal of ‘Learning Styles’. Australian Journal of Education 54: 5. DOI: 10.1177/000494411005400102.

Visser, Beth A.; Ashton, Michael C.; Vernon, Philip A. (2006), “g and the measurement of Multiple Intelligences: A response to Gardner”, Intelligence 34(5): 507–510.

 

Do You Understand Lupine Ways of Knowing? The value of reductio ad absurdum in scientific debate.

This week I thought I would raise the rather contentious issue of the reductio ad absurdum argument (also known as argumentum ad absurdum). This is an ancient form of logical argument that seeks to demonstrate that an argument or idea is nonsense by showing that a false, ludicrous, or absurd result follows from its acceptance, or alternatively that an argument is sound because a false, untenable, or absurd result follows from its denial.

The argument has venerable roots and is well documented as a form of logic in ancient Greece, used by such luminaries as Xenophanes, Socrates, and Plato. However, in modern academia there seem to be rather polarized views on it: 1) that it trivializes an argument and belittles the person taking a particular position, or 2) that it is a valid and reasonable way of demonstrating that an idea is unsound. There also seems to be a cultural aspect, in that I have found it used more frequently in Europe, whereas in North America it is somewhat frowned upon in many academic circles.

Naturally, as Rog and I are somewhat subversive and agitative academics (I use the term loosely), we are in full support of it, and to this end have just published a paper in Nursing Inquiry using exactly this form of argument to challenge the established wisdom of a specific postmodern argument for alternative ways of knowing. The paper was based on an earlier post on this very blog. In it, we use the ad absurdum argument to note that the principles used to support Carper’s four ways of knowing can equally well be used to support a more creative typology (in this case including arcane knowing and lupine knowing).

Naturally, as with any form of intellectual rationale, the argument is only as good as the fundamental data and facts it is based upon. An ad absurdum argument can therefore be misused or poorly constructed, and it is also often deployed erroneously as a straw-man argument.

Considering what is absurd and what isn’t is a tricky thing for anyone, and particularly problematic in science. For example, many Victorian scientists scoffed at the thought of powered flight, and even Einstein had issues with the notion of black holes. Identifying absurdity is therefore not easily undertaken, as it may simply be that the ideas presented are highly original or unconventional. The suggestion that the bacterium Helicobacter pylori causes gastric ulceration is a good example, as this theory was not readily accepted by the medical community for several years, despite good evidence.

Also, this is not the same as absurdity as used in common parlance. Commonly, absurd positions are seen as ridiculous or foolhardy, but an argument ad absurdum does not suggest the person making the argument should be ridiculed or lampooned. After all, we have all believed ridiculous things at one time or another; for western children, the notion that Santa Claus brings all the children in the world toys on one night a year is a case in point! For the purposes of scientific thinking, for something to be demonstrated as absurd we really need to see inconsistency in the arguments presented. An absurd position may be considered one that is contrary to reason, irrational, or ludicrous to follow given the practical implications of believing it. Yet several concepts now accepted and used in modern science arose in exactly this fashion, quantum physics for example; repeated scientific observation and empirical data have since proved quantum theory correct. So, paradigms change with time, and we should be cautious about suggesting any position is ridiculous.

From a pragmatic position, I would argue that if an argument can be shown to be fallacious by analysing its components and demonstrating inconsistencies, or by showing that accepting it commits you to associated positions that make no sense and have no practical value, then an ad absurdum argument can be used effectively to expose these weaknesses.

At the end of the day, the sensitivities invoked by this form of argument are worth considering, and it is a form of rationale that is not easy to develop effectively. However, as long as its use involves demonstrating the nonsense an idea or position presents, rather than attacking the person making the argument, I would suggest it is a useful form of analysis. As a scientist, if you are prepared to make any case, hypothesis, or argument, you should be prepared to have it challenged and debated, and to defend it. If the position is sound, it will survive this critique and win through. That is what good science is all about, but to make sound ad absurdum arguments you have to have a good working knowledge of the logical fallacies to start with. They can be a lot of fun too, and if this form was good enough for Socrates…

Bernie

References

Carper B.A. (1978), “Fundamental Patterns of Knowing in Nursing”, Advances in Nursing Science 1(1), 13–24

Garrett B.M. & Cutting R.L. (2014) Ways of knowing: realism, non-realism, nominalism and a typology revisited with a counter perspective for nursing science. Nursing Inquiry. Retrieved 21 May 2014.

Rescher N. (2009) Reductio ad absurdum. The Internet Encyclopedia of Philosophy. Retrieved 21 May 2014.