For Faster Reading

I’m posting this (by request) for Bridget and Kelvin.

readfa.st is a website that trains you to read faster. It does so by revealing only a couple of words at a time (which disappear again shortly after, as the next words are revealed), forcing you to read at a speed that you set. It keeps track of your reading speed and helps motivate you to read faster over time.

It doesn’t just give you random readings, though; the neat thing about readfa.st is that you can upload PDFs to have it train you on them. You can also use the bookmarklet to have it train you on websites, news articles, blog posts, and so on. Some people may even find it useful for getting through reading material for school or work!
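If you’re curious how the pacing trick works, it’s easy to sketch. Here’s a minimal Python version of the idea (this is not readfa.st’s actual code; the chunk size, the timing math, and the terminal-print display are my own assumptions):

```python
import time

def chunks(words, size):
    """Group a list of words into runs of `size` words each."""
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def rsvp(text, wpm=300, size=2):
    """Flash `size` words at a time, paced so the overall rate is `wpm`."""
    delay = size * 60.0 / wpm  # seconds each chunk stays "on screen"
    for piece in chunks(text.split(), size):
        print(piece)
        time.sleep(delay)
```

Because each chunk vanishes as the next appears, you can’t reread — which is exactly what forces you up to the speed you set.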

Let me know what you think :)

Midterm Prep: The Campbell Method

Last night I had the dreaded CHEM 233 midterm. If you’re in science at UBC, you have probably heard the rumours about this evil course. So what did I do to prepare? Here’s a rundown of my week leading up to the midterm:

5 days before: Did some textbook problems at Blenz in between Longboat races. Blenz Belgian milk hot chocolate helps soothe the pain.

4 days before: Initiated hardcore study mode. Killed a small forest with the amount of paper I used for practice problems. Completed online acid/base assignment. Aced it.

3 days before: Switched my Monday workout to the morning so I could use my midday break for work. Studied in the Harry Potter room while the presidents of UBC looked down on me in approval. Had a zombie apocalypse social with the rest of the Totem RAs in the evening.

48 hours before: Took a study break to watch talented Totem residents rock the Totem Coffee House. Highlights included QLXN’s Liam playing the hits of the 90s on the bassoon.

36 hours before: Visited my chem prof’s office hours. Spent so much time in the Law Library that people are beginning to wonder if I live there. Bernoulli’s Bagels and coffee are my only forms of sustenance.

24 hours before: Study session in Swing with fellow science student and generally awesome dude, Aaron. Spent most of the time jamming to Kanye and speaking to each other in German accents.

12 hours before: Crammed for a forgotten biology unit test while shoveling eggs into my mouth at breakfast.

8 hours before: Did some practice midterms. Reassured myself that I do, in fact, kind of know what I’m doing.

5 hours before: Chemistry class time. Tried to ignore the looks of intense panic on my classmates’ faces.

3 hours before: Realized that I am incapable of cramming any more knowledge into my brain. Went running up and down the Wreck Beach stairs instead.

90 minutes before: Headed to the Totem caf with fellow RAs and CHEM 233 students. Ate a grilled cheese sandwich and sweet potato soup (comfort food is a must). Made science puns to lighten the mood.

30 minutes before: Began the trek to the Chemistry building. Listened to pre-exam pump up music (“Til I Collapse” by Eminem always gets me in the zone).

10 minutes before: Descended into the toasty warm dungeon of CHEM B150. Found a spot in the middle of the room right next to Melinda for moral support.

5 minutes before: Started to bubble in my information on the Scantron. Watched the clock creep closer to 7 PM. Tried not to be freaked out by how thick the midterm felt.

1 minute before: Deep breath. Let’s do this.

After: Breathed a sigh of relief. Shook off the feelings that it didn’t go as well as I’d hoped. Headed to a friend’s place in Dunbar for celebratory margaritas.

Could I have done more to prepare? Definitely. But while I may not have gotten a perfect score, I still had a pretty good week. I managed to exercise, fulfill my extracurricular responsibilities, spend time with friends, and paddle around Jericho Beach while still studying my butt off. Balance is the key to making the most out of university (although we’ll see if I am singing a different tune once I get my score back). Happy studying!

Proactivity

Some day, in the years to come, you will be wrestling with the great temptation, or trembling under the great sorrow of your life. But the real struggle is here, now… Now it is being decided whether, in the day of your supreme sorrow or temptation, you shall miserably fail or gloriously conquer. Character cannot be made except by a steady, long continued process.

–Phillips Brooks

Re: Publication Bias and the Need to Publish Negative Data?

Recently, Alex posted a response to one of my previous blog posts, in which I had noted Ben Goldacre’s TED talk on publication bias and its implications for science. Contrary to my original claim, Alex doesn’t believe that “publishing negative data is necessary in basic science research”. He argues that it may not be very time-effective for scientists to publish negative results because “other researchers likely trust their own hands more than those of people they don’t know” and thus “they would try the experiment themselves anyway”. His second point is that “if they can’t replicate the result or cannot build from it, then that result is likely not true”. To me, this appears to be his underlying argument that “publishing negative results is important for clinical trials, but not needed for basic science research”. Although his argument makes sense to me, I think it overlooks a few things worth considering.

To start, I’d like to address the point that researchers are unlikely to look up failed experiments because they would rather run their own experiments anyway. First, at least from what I’ve experienced, the first step researchers take in tackling a research question is to see what has already been done: both to check whether the question has already been asked, and to find established results the new experiment can build on. When negative results aren’t published, it is difficult to establish whether an experiment is worth the researcher’s time and effort. So although negative results may not be strictly necessary for basic science research, I think they are important for the efficiency of science. Second, I believe it is becoming harder to justify allocating resources to experiments that simply verify (or fail to verify) other researchers’ findings rather than to experiments that will produce publishable results, simply because funding for science is shrinking. Even at Harvard, the effects of funding cuts are being felt (in fact, Alex is pictured in this recent article about the effects of funding cuts on Boston). Under this financial pressure, I believe researchers’ focus will be tuned ever more sharply to producing results efficiently with the funding they have.

On Alex’s second point, that results are likely not true if they cannot be replicated or built upon, I’d like to look at a few examples from the past to explain my concerns with this claim.

First, as an examination of human nature, consider a case involving Nobel Laureate Robert Andrews Millikan. Millikan devised and performed an experiment that used oil droplets to determine the charge of the electron; it was for this work (at least in part) that he received his Nobel Prize. Unfortunately, when other scientists replicated his experiment, their deference to authority began to show. When their results agreed with Millikan’s, they didn’t question them at all. When their results disagreed, they often looked for excuses about what they were doing wrong instead of considering potential fundamental problems in Millikan’s experiment itself. Thus, despite failures to replicate Millikan’s results, nothing was said immediately because of the assumptions those scientists made.

The second example I’d like to share demonstrates potential problems with reporting results that are not reproducible. I use this example specifically because I believe the people most likely to try to reproduce an experiment are those in the lab that produced the original research. This is because labs are often highly specialized (a lab investigating hantavirus is more likely to reproduce an experiment about hantavirus than a lab investigating malaria is), funding is limited (as mentioned above), and labs generally like to be sure of their science. In a case back in the 1980s, David Baltimore and Thereza Imanishi-Kari were implicated in exactly such an event. After they published an immunology paper, Margot O’Toole, a postdoctoral fellow in Imanishi-Kari’s lab, discovered that she simply could not reproduce Imanishi-Kari’s results. After much trying, she brought the matter to Baltimore’s attention; although he refused to retract the paper, the case eventually escalated into a massive investigation, and the paper was retracted over findings of scientific misconduct. Yet despite reporting this misconduct in an effort to better the scientific community, O’Toole lost her job during the incident. In fact, to this day there is a general fear of, and negative connotation attached to, being a whistleblower. A potential consequence is that irreproducible results may not come to light immediately.

In short, I still believe that publishing negative results, while perhaps not strictly necessary, is an important step toward improving the quality of basic science research. I agree with Alex that publication bias has a more significant effect in the pharmaceutical world, but I do not believe basic science is immune. We need to be aware that true objectivity in any scientist (or any human, for that matter) is a myth; publishing negative results can help paint a full picture of the natural world instead of leaving it to faith that our scientists will clue in when a piece of the puzzle has been inaccurately identified.

Harp Covers

This is just a little project I’ve been working on for the last couple of months. I’ve been arranging popular music for the harp, and then recording it and putting it on YouTube. (So far I seem to be on a bit of a Zelda kick…)

My YouTube channel is here, and here’s my latest video:


(How do I have time for this? I don’t know! Probably by not practicing enough for band rehearsals…)

The Value of Conflict

Alice and George were very good at conflict. They saw it as thinking.

Margaret Heffernan presents an interesting case here on the value of conflict. Too often today, conflict is seen as a bad thing. In her TED talk below, Heffernan illustrates what conflict can bring to the table (sometimes uniquely).

I agree that society needs to learn how to better handle, tolerate, and engage in arguments. After all, learning is simply coming to understand something that previously went against you.