Impact & Peer Review: the problem with journal rankings as a source of evidence weighting

Last week, Roger Watson, editor supremo of JAN, published a blog piece on the importance of impact factors for academic journals, in which he noted: “The world of editing seems to be divided into two camps: those who agree with the use of impact factors and those who don’t.”

To be honest, I am not sure I would agree with that. Apart from the fact that this seems a thinly disguised excluded-middle/false-dichotomy argument (along the lines of “you are either with us or with the terrorists”), I think there are both benefits and pitfalls to journal rankings.

Firstly, we should clarify what they are. A citation index is an index of citations between publications, allowing the user to easily establish citation links between documents. The first of these were created for the legal profession, such as Shepard’s Citations (1873). The development of these indexes (and particularly the advent of modern computer systems and databases) has supported the development of impact factor rankings. The impact factor (IF) of an academic journal is a measure reflecting the average number of citations received in a given year by articles published in that journal during the two preceding years (although it can be calculated for any specified period). It is frequently used as a proxy for the relative importance of a journal within its field. It was devised by Eugene Garfield, and IFs are calculated yearly for those journals indexed in the Journal Citation Reports database. For example, if a journal has an IF of 2 for 2012, then its papers published in 2010–2011 received, on average, 2 citations each in papers published in 2012 in journals covered by the index. This sounds straightforward and objective enough, although if you have read this blog for any length of time I expect you hear a “but” coming…
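For the programmatically inclined, the two-year calculation above can be sketched in a few lines. This is a minimal illustration of the arithmetic only; the figures (300 papers, 600 citations) are hypothetical, and the real Journal Citation Reports calculation involves decisions about what counts as a “citable item” that this sketch glosses over.

```python
def impact_factor(citations_in_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year impact factor for year Y: citations received in Y to
    items published in Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    if citable_items_prev_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: 300 papers published across 2010-2011,
# cited 600 times in 2012 by journals covered in the index.
print(impact_factor(600, 300))  # 2.0 -- an IF of 2 for 2012
```

The division is trivial; as the rest of this post argues, the contestable part is what the numerator and denominator actually capture.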

1. It is a bit of a blunt tool. Good science is not always published in the peer-reviewed literature covered in the database. Especially today, much activity at the forefront of research is reported first in blogs, websites and at conferences.

2. The peer review process that results in publication is not exactly objective. Editors have a vested interest in circulation (despite claims that good science trumps circulation). I have been on the editorial boards of a couple of journals for over 10 years and can attest to this wholeheartedly. Editorial boards generally support research and papers that fit within the current disciplinary paradigm, even when papers criticizing the current wisdom arrive supported by good science or argument. Also, as publication is a competitive business, papers by established authors in the field will generally receive stronger support for publication than those of less well-known researchers. Naturally, anything written by Stephen Hawking is more likely to get published than a paper by an unknown physics prof. This tends to lead to a bit of a beauty-contest mentality, and we inevitably end up with top-ranked IF journals publishing the same sorts of papers from the same range of well-known authors. Many would argue that in an academic meritocracy this is how it should be, but from the perspective of heterodoxy it is more problematic.

However, there are other problems with the peer review process. In an effort to establish whether my own peer reviewing was within the normal range, I once asked a journal editor what their rejection rate was. I got no response, and few editors will divulge this statistic openly (for most well-known journals it is well above 70%). Reviewers may take a position against a paper because it is bad, or because of their own methodological bias, a dissenting view, or simply because they were too busy to really read it but didn’t like the abstract. In my own academic career I have had a paper rejected by one journal (on the grounds that it was “not of interest” to readers), only to get it published in another and see it enter their top 20 most-downloaded papers.

Some journals will accept (subject to amendments) a paper that has excellent new findings and methods but requires a few grammatical corrections. Other high-IF journals that have their pick of established authors may not, and will reject anything not in A1 publication-ready form on submission. I have had colleagues whose papers were rejected from journals following two excellent reviews, with only minor typographical changes requested. Yet these same journals have accepted and published pretty awful examples of the English language from well-known people in the field! I have frequently received two contradictory peer reviews for a paper, one saying it was rubbish while the other claimed it was the greatest thing since sliced bread. A single negative review from an established reviewer who holds some authority and power with the editorial board will likely sink any paper.

My experience is that for the majority of researchers, even with sound work, your chances of getting a good peer review in a journal depend very much on the luck of the draw. Reviewers are unpaid, busy academics, and if your paper is sent to one who is pressed for time, they may not give it their best consideration. In essence, getting a paper into print in the top journals is competitive, but the rules of the competition are something of a dark art. So much for the academic meritocracy.

3. The impact factor measures citations, not good science. If I manage to get an erroneous or controversial piece published, it may well get widely cited. For example, if I managed to get a paper published noting that Florence Nightingale had a little-known punk period in her adolescence, in which she dyed her hair green to infuriate her parents, it might attract a huge number of citations despite being completely fictitious (as far as I am aware). The Davenas et al. (1988) paper proposing the memory-of-water theory (published in Nature, no less), which was widely cited but subsequently discredited, is an excellent example.

4. Lastly, the impact factor seems to have little to do with the professional impact of the journal (at least in my field). I make a point of asking new nurses I meet, “In all honesty, what professional/academic journals do you actually read?” To date, fewer than 3% have ever cited the Journal of Advanced Nursing or other high-impact journals. So what impact are we really measuring here: impact among the nursing academics who make up a minority of our profession?

At the end of the day, both the peer review process and the impact factor, although flawed, are probably the best tools we currently have to promote good-quality science. The fact that some duffers like Nursing Science Quarterly got taken off the citation index (in 2009) shows that there is at least some evidence of rigour in the process. However, in the new age of Web 2.0, things could change. It will also be interesting to see what impact the huge shift to electronic publishing and pay-for-publication open access journals (usually around $2000 a paper) will have on the traditional journals.

Happy Canada Day to all; time to hit the BBQ and pass out the maple cookies!

Bernie

Reference

Davenas, E., Beauvais, F., Amara, J., Oberbaum, M., Robinzon, B., Miadonna, A., Belon, P. (1988). Human basophil degranulation triggered by very dilute antiserum against IgE. Nature, 333(6176), 816-818.

