On Quality Scientists

Humaira recently shared this interesting BenchFly post on “The 7 Requirements of All Effective Scientists”. In short, Mark Changizi states that the following are essential:

  1. You need an idea notebook
  2. Do not aim to solve some specific problem
  3. You may need 10 to 100 ideas before you find a good one
  4. Avoid feeling part of any specific academic community
  5. Avoid working for anyone, and that includes a granting agency
  6. Don’t publish without data
  7. Avoid all but the simplest experiments, and avoid building complex tools

Interesting Observations About Disgust

I found David Pizarro’s TED talk on disgust quite interesting. It explores both the function of disgust and its implications, and I was struck by how disgust ties into politics, social stigma, and even simple perception.

[Disgust] works through association. … This makes it very useful as a strategy if you want to convince somebody that an object or an individual or an entire social group is disgusting.

–David Pizarro

Perception and Science

Now, if perception is grounded in our history, it means that we’re only ever responding according to what we’ve done before. But that creates a tremendous problem because how can we ever see differently?

–Beau Lotto

I thought this was a fascinating talk on science and the nature of perception. If you recall the bee study published a while ago in which most of the authors were children, this talk is given by two of those authors.

Creative Advertising

I just saw this cool post about the creative advertising campaign that Science World is running right now. Thanks to Dave for sharing!

On that note… Science World is free this weekend! Be sure to check it out!

Vancouver Community Science Celebration: Free Weekend!
Saturday, October 13 & Sunday, October 14

Join Science World in celebrating the science all around us by visiting on this free weekend presented by BC Hydro and Genome British Columbia. Visitors will get to interact with a Canadian Space Agency astronaut; explore Science World’s new spaces at TELUS World of Science (including parts of its new outdoor Ken Spencer Science Park); take a virtual tour of the ATLAS control room at CERN’s Large Hadron Collider; and much, much more. This weekend only!

This is the first event of its kind at TELUS World of Science, and we want you to be there. Let’s celebrate the science all around us at the Vancouver Community Science Celebration at TELUS World of Science!

This Community Science Celebration is part of Around the Dome in 30 Days: A Month-long Science Extravaganza. Come celebrate the cool science that happens all around us with an eye-popping marathon science show, live demonstrations and exhibits presented by real scientists from our community, hands-on activities for the whole family, and much, much more!

TED on Science Communication

Melissa Marshall’s tips for science communication?

Be sure to state the “So What?” of your work. However, I don’t like her biomedical example; not all research has an anthropocentric focus, nor do I think it should.

Minimize jargon.

Make everything as simple as possible, but no simpler.

–Albert Einstein

Don’t use bullet points; bullets kill.

And finally…

Yes, her closing equation makes about as much sense to me as it probably does to you; it should probably divide by the inverse of relevance instead…
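
To spell out the algebra behind that quip (my gloss, not Marshall’s slide): if relevance \(R\) sits in the denominator of the “understanding” formula, swapping it for its inverse moves it to the numerator, since for any quantity \(X\)

\[
\frac{X}{1/R} = X \cdot R,
\]

so understanding would scale up with relevance, as it intuitively should.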

Overall, I like the points she brings up about science communication. (I think too many people on YouTube are slamming her just because the end equation is wrong.) Scientists are still, for the most part, struggling to engage the public, and I think these are important things to consider in doing so effectively.

Re: Publication Bias and the Need to Publish Negative Data?

Recently, Alex posted a response to one of my previous blog posts in which I noted Ben Goldacre’s TED talk on publication bias and its implications for science. Contrary to my original claim, Alex doesn’t believe that “publishing negative data is necessary in basic science research”. He says that it may not be very time-effective for scientists to publish negative results because “other researchers likely trust their own hands more than those of people they don’t know” and thus “they would try the experiment themselves anyway”. His second point is that “if they can’t replicate the result or cannot build from it, then that result is likely not true”. These seem to underlie his conclusion that “publishing negative results is important for clinical trials, but not needed for basic science research”. His argument makes sense to me, but I think it overlooks a few things worth considering.

To start, I’d like to address the point that researchers are unlikely to look up failed experiments because they would rather run their own experiments anyway. First, at least in my experience, the first step researchers take in tackling a research question is to survey what has already been done: both to see whether the question has already been asked, and to find prior work their experiment can build on. When negative results aren’t published, it becomes difficult to judge whether an experiment is worth a researcher’s time and effort. So even if negative results aren’t strictly necessary for basic science research, I think they are important for the efficiency of science.

Second, I believe it is becoming harder to justify allocating resources to experiments that merely (un-)verify other researchers’ findings rather than to experiments that will produce publishable results, simply because funding for science keeps shrinking. Even at Harvard, the effects of funding cuts are being felt (in fact, Alex is pictured in this recent article about the effects of funding cuts on Boston). Under this financial pressure, I believe researchers will be especially focused on producing results efficiently with the funding they have.

On Alex’s second point, that results are likely not true if they cannot be replicated or if future experiments cannot be built from them, I’d like to look at a few examples from the past to address some of my concerns with this claim.

First, as an examination of human nature, consider a case involving Nobel Laureate Robert Andrews Millikan. Millikan devised and performed an experiment that used oil droplets to determine the charge of the electron, and it was (at least in part) for this work that he received his Nobel Prize. Unfortunately, when other scientists replicated his experiment, a very human deference to authority shone through. When they obtained results in line with Millikan’s, they didn’t question them at all; when they obtained results that disagreed, they looked for mistakes in their own work rather than for fundamental problems in Millikan’s experiment itself. (There was such a problem: Millikan had used a slightly inaccurate value for the viscosity of air, so his value for the electron’s charge came out a bit too low, and later measurements crept toward the true value only gradually.) Thus, even when replications failed to match Millikan’s result, nothing was said for quite some time because of the assumptions the scientists made.
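
For context (a simplified sketch of the method on my part, not from the original exchange, and ignoring air buoyancy): a droplet’s radius is inferred from its terminal fall speed via Stokes’ law, and its charge from the electric field needed to hold it still:

\[
6\pi\eta r v = \tfrac{4}{3}\pi r^{3}\rho g
\;\Rightarrow\;
r = \sqrt{\frac{9\eta v}{2\rho g}},
\qquad
qE = mg = \tfrac{4}{3}\pi r^{3}\rho g .
\]

Since the inferred mass scales as \(\eta^{3/2}\), any error in the assumed air viscosity \(\eta\) feeds directly into the measured charge, which is exactly where Millikan’s small systematic error entered.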

My second example demonstrates a potential problem with reporting results that are not reproducible. I use it because I believe the people most likely to try to reproduce an experiment are the people in the lab that produced the original research: labs are highly specialized (a lab investigating hantavirus is more likely to reproduce a hantavirus experiment than a lab investigating malaria), funding is limited (as mentioned above), and labs generally like to be sure of their science.

In a case back in the 1980s, David Baltimore and Thereza Imanishi-Kari were implicated in just such an event. After they published an immunology paper, a postdoctoral fellow in Imanishi-Kari’s lab, Margot O’Toole, discovered that she simply could not reproduce Imanishi-Kari’s results. After much trying, she brought the case to Baltimore’s attention, and although he refused to retract the paper, the case eventually escalated into a massive investigation, and the paper was retracted amid findings of scientific misconduct (findings that were later overturned on appeal). Yet despite reporting her concerns in an effort to better the scientific community, O’Toole lost her job during the incident. To this day, there is a general fear of, and negative connotation attached to, being a whistleblower. The upshot is that results that are not reproducible may not come to light promptly.

In short, I still believe that publishing negative results, while perhaps not strictly necessary, is an important step toward improving the quality of basic science research. I agree with Alex that publication bias has a more significant effect in the pharmaceutical world, but I do not believe basic science is immune. We need to be aware that true objectivity in any scientist (or any human, for that matter) is a myth; publishing negative results can help paint a full picture of the natural world instead of leaving it to faith that scientists will clue in when a piece of the puzzle has been inaccurately identified.