2008 Interview with Sir Iain Chalmers
********************************
Thank you so much for taking the time to speak to my readers, many of whom are physicians and health librarians who perform comprehensive searches of the literature. In 2006, the last time we spoke, we discussed the “systematic integration of primary research” in medicine.
********************************
1. Have we made progress in cumulating the evidence?
“Yes, there has been progress. There has been an explosion in the number of reports of systematic reviews, and in the use of this form of research by those preparing evidence summaries and clinical guidelines. In particular, it is encouraging that the editors of some important journals – PLoS Medicine and the Lancet are examples – have made clear that they value systematic reviews. Even the aloof New England Journal of Medicine has quietly come round to the realisation that it needs to help its readers by publishing reports of this kind of research.
However, there remains a long way to go, as evidenced in:
Clarke M, Hopewell S, Chalmers I. Reports of clinical trials should begin and end with up-to-date systematic reviews of other relevant evidence: a status report. Journal of the Royal Society of Medicine 2007;100:187-190; and Cooper NJ, Jones DR, Sutton AJ. The use of systematic reviews when designing studies. Clinical Trials 2005;2:260-264.
“Until every new piece of research begins by referring to one or more systematic reviews of relevant existing evidence it should be assumed that the research has not been designed as well as it could have been, and that there will be inappropriate duplication (as distinct from appropriate, planned replication). See: Chalmers I. The lethal consequences of failing to make use of all relevant evidence about the effects of medical treatments: the need for systematic reviews. In: Rothwell P, ed. Treating individuals. London: Lancet, 2007, pp 37-58.”
********************************
2. A pernicious problem in evidence cumulation is the fragmentation of the medical literature. Isn’t the best evidence in MEDLINE and Cochrane anyway?
“It’s certainly very tedious, but it’s necessary to try to deal with various forms of reporting bias. For example, only two thirds of the clinical trials reported in conference abstracts go on to appear in the kind of full reports that are included in Medline, and those that do are more likely to have positive results [Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database of Systematic Reviews 2007, Issue 2].
No-one who is concerned to obtain unbiased estimates of the effects of treatments can afford to acquiesce in this situation and rely only on what happens to get into Medline. Indeed, had researchers at Johns Hopkins not relied solely on Medline in designing one of their studies, a young lab assistant might not have lost her life in a physiological experiment [McLellan F. 1966 and all that – when is a literature search done? Lancet 2001;358:646].”
********************************
3. Another problem is that of “underpowered” trials. Aren’t the most highly powered trials found in the high-impact journals?
“It’s very rare to hear someone suggest that a trial is over-powered. All trials are underpowered to detect some important effects (for example, effect modification in subgroups, or rare side effects). Tom Chalmers once said that the most damaging paper he had ever coauthored was: Freiman JA, Chalmers TC, Smith H Jr, Kuebler RR. The importance of beta, the type II error and sample size in the design and interpretation of the randomized control trial. Survey of 71 “negative” trials. NEJM 1978;299:690-694.
This was because, when he spoke with clinicians about the importance of evaluating their treatments, they used this paper as an excuse not to evaluate their own practice, on the grounds that they would never achieve adequate sample sizes. So the question should be posed in terms of alternatives, namely: “Is it more ethical to continue in ignorance about the effects of one’s practice, or to participate in trials that endeavour to reduce that ignorance, even though the trials may be small?”
Schulz and Grimes have a very good discussion of this issue in: Schulz KF, Grimes DA. Sample size calculations in randomised trials: mandatory and mystical. Lancet 2005;365:1348-53. The key issue is that all well-designed trials should be reported publicly, whatever their statistical power or assumed applicability.
Physicians who allow their practice to be guided by the results of individual trials in high impact journals, rather than by systematic reviews of all relevant trials, risk avoidable harm to their patients, either by withholding treatments that are useful, or by prescribing treatments that are very unlikely to be useful and may be harmful. I’ve given illustrations of this in Chalmers I. The lethal consequences of failing to make use of all relevant evidence about the effects of medical treatments: the need for systematic reviews. In: Rothwell P, ed. Treating individuals. London: Lancet, 2007, pp 37-58.”
********************************
For more information about Dr. Chalmers, see his Wikipedia entry. – Dean