Research studies rely heavily on standard statistical techniques to analyze data. From tests of new medications to performance trials of devices, research conclusions depend on statistical outcomes. Before a paper is published, it goes through numerous rounds of review and revision intended to ensure its accuracy. But can statistics alone guarantee that a paper's conclusions are sound? In 2011, a paper published in the Journal of Personality and Social Psychology claimed strong evidence for the existence of extrasensory perception (ESP), the so-called ‘sixth sense’. Clearly, humans cannot actually predict the future.
So what went wrong? How did this paper get published at all? Watch the video below for an explanation and analysis of the ESP controversy.
Many loopholes still exist in the formal peer-review process. Dr. Daryl Bem, the author of the paper, used a number of small techniques to increase the likelihood that his results would appear significant and his paper would be published. No single flaw in Bem’s paper is fatal on its own, but the little problems collectively add up to make his conclusions unreliable. And Bem is not solely responsible: journals and publishers also have a duty to prevent poor science from being published merely because it is controversial. These journals aim to grab your attention because they know readers naturally gravitate toward controversial topics over dull ones.
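One way to see how small problems add up is the multiple-comparisons effect: a researcher who runs many hypothesis tests will eventually find a “significant” result by chance alone. The short simulation below is a generic sketch of this effect (it does not use Bem’s actual data or methods). It assumes that, when there is no real effect, a well-calibrated test produces a p-value uniformly distributed between 0 and 1, and asks how often a study running 20 such tests sees at least one p-value below 0.05.

```python
import random

random.seed(1)

alpha = 0.05     # conventional significance threshold
n_tests = 20     # number of independent hypothesis tests per simulated "study"
n_sims = 10000   # number of simulated studies

# Under the null hypothesis (no real effect), each test's p-value
# is uniformly distributed on [0, 1].
studies_with_false_positive = 0
for _ in range(n_sims):
    p_values = [random.random() for _ in range(n_tests)]
    if any(p < alpha for p in p_values):
        studies_with_false_positive += 1

rate = studies_with_false_positive / n_sims
# Analytically: 1 - (1 - 0.05)**20 is roughly 0.64
print(f"Fraction of null studies with at least one 'significant' result: {rate:.2f}")
```

Even though each individual test has only a 5% false-positive rate, roughly two out of three such studies find something “significant” in pure noise, which is why many small analytic choices can collectively produce a flawed but publishable result.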
In an interview, Dr. Ed Kroc, a statistics professor at the University of British Columbia (UBC), gives his two cents on controversial science and the reliance on statistics to justify significance. According to Kroc, science news tends to play up the sensational side of a story rather than communicate the science accurately. All in all, he concludes that people generally have only a superficial understanding of what the scientific method actually means.
This unfamiliarity leads to misunderstanding of the role statistical analysis plays in everyday research. As Kroc states, “We need to move away from the idea that statistics provides an automated machinery for making decisions. It doesn’t.” Simply put, researchers often lean on statistics alone to draw conclusions from their studies, failing to account for personal bias and for similar studies already done on the same subject. As a result of this misunderstanding, false findings can be published and controversial science can spread to the public.
The following podcast further describes issues in statistical methods used in current research.
As discussed in our podcast, the field of research is not faultless. Researchers need a better grasp of statistics and the scientific method to publish scientifically sound findings, and the public shares responsibility for learning enough about the formal research process to recognize untrue science marketed as scientifically accurate. Unfortunately, no matter how vigilant researchers may be, poorly conducted studies like Bem’s can still get published and spread false conclusions.
We would like to give special thanks to Dr. Ed Kroc for his time and to Dr. Bruce Dunham, our instructor, for his guidance on this project.
By Braydan Pastucha, Andrew Ting, Lisa Liang, Florence Ng
SCIE 300 211 SO Project Group 3