Category Archives: Science in the News

A New Dawn of Early Detection of Sepsis

A new method developed by researchers at the University of British Columbia could help clinicians predict sepsis within an hour using an endotoxin tolerance signature (endotoxin tolerance is a reduced responsiveness to lipopolysaccharide, or LPS, after a first encounter with endotoxin).

Sepsis is one of the deadliest diseases worldwide, and people should be more aware of it. This image shows some symptoms of sepsis as well as its causes. (Image credit: Medical Device)

Sepsis is an inflammatory disease triggered by bacterial infections, with roughly 18 million cases every year around the world. Diagnosing sepsis is a race against time: for every hour of delay in diagnosis, the risk of death increases by about eight percent. Yet sepsis is difficult to diagnose, and a standard diagnosis takes 24 to 36 hours. With the method proposed by Professor Bob Hancock’s research group, clinicians could start therapy almost immediately.
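To put that figure in perspective, here is a minimal sketch of how the risk compounds with diagnostic delay. It assumes, purely for illustration, that the reported eight percent increase applies multiplicatively for each hour of delay; the underlying statistical model is not specified here:

```python
# Illustrative only: assumes the reported 8% increase in risk of death
# compounds multiplicatively with each hour of diagnostic delay.
HOURLY_RISK_INCREASE = 0.08

def relative_risk(hours_delayed: int) -> float:
    """Relative risk of death after a diagnostic delay, versus no delay."""
    return (1 + HOURLY_RISK_INCREASE) ** hours_delayed

for hours in (1, 12, 24, 36):
    print(f"{hours:>2} h delay -> {relative_risk(hours):.1f}x baseline risk")
```

Under that assumption, a 24-to-36-hour diagnostic window corresponds to roughly 6 to 16 times the baseline risk, which is exactly the gap a one-hour prediction aims to close.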

Check out the following podcast for background information about sepsis.


The new method defines a gene expression signature characteristic of endotoxin tolerance. The researchers correlated this signature with early sepsis and determined whether it was associated with the development of confirmed sepsis and organ dysfunction. Overall, they found that both the subsequent development of confirmed sepsis and new organ dysfunction in suspected-sepsis patients were significantly associated with the endotoxin tolerance gene signature. All 593 sepsis patients presented an expression profile strongly associated with the signature (p<0.01). “We could differentiate between guys who are sick but went on to sepsis and guys who did not go on to sepsis,” says Hancock, “and also could differentiate guys who could go on to organ failure and guys who would not go on to organ failure.”
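The authors’ exact statistical pipeline is not described here, but a common way to turn a multi-gene expression signature into a single patient-level score is to average the expression of the signature’s up-regulated genes and subtract the average of its down-regulated genes. A minimal sketch of that general approach, using invented gene names and values:

```python
# Illustrative sketch only: NOT the authors' actual pipeline.
# Gene names and expression values are invented for demonstration.
SIGNATURE_UP = ("GENE_A", "GENE_B")    # genes elevated in endotoxin tolerance
SIGNATURE_DOWN = ("GENE_C", "GENE_D")  # genes suppressed in endotoxin tolerance

def signature_score(expression: dict) -> float:
    """Mean expression of up-regulated genes minus mean of down-regulated genes."""
    up = [expression[g] for g in SIGNATURE_UP]
    down = [expression[g] for g in SIGNATURE_DOWN]
    return sum(up) / len(up) - sum(down) / len(down)

# One (invented) patient's normalized expression values:
patient = {"GENE_A": 2.1, "GENE_B": 1.8, "GENE_C": -0.5, "GENE_D": -1.2}
print(f"Endotoxin tolerance score: {signature_score(patient):.2f}")  # 2.80
```

A higher score would then flag a patient whose expression profile resembles the endotoxin tolerance signature, prompting earlier intervention.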

Equipment Professor Hancock and his research team used in the research. Photo credit: Xindi Wang


In the following video, Professor Hancock demonstrates techniques his research team uses in its work on the early detection of sepsis.

The study also points to a common misunderstanding about sepsis. Sepsis has long been treated as an inflammatory disease, yet many anti-inflammatory drugs have failed to treat it. The gene signature used in this new method relates to cellular reprogramming, a special type of immune suppression. Hancock emphasizes, “If we can reverse that immune-suppression then we have a really good chance of a new therapy”.

For future research, Professor Hancock suggests that larger clinical trials should be done to confirm these findings. He also hopes to extend the test’s functionality into a fast and accurate diagnostic for early-stage sepsis.



Reference

Cavaillon, J.-M., & Adib-Conquy, M. (2006). Bench-to-bedside review: Endotoxin tolerance as a model of leukocyte reprogramming in sepsis. Critical Care, 10(5), 233.



By Group 2

Harsheen Chawla, Erik Johnson, Lincoln Li and Xindi Wang

To Bee or Not to Bee

Can you imagine a world without honeybees? At first glance, bees can be quite frightening, especially if you are allergic to them. However, honeybee populations are in steady decline, and the loss of these insects could have serious effects on our society. These small pollinators have a hand in providing roughly a third of what you see in produce departments, not to mention the delicious honey they provide us.


A group of honey bees. Image credit: Pixabay.com

Pollination is an incredibly important step in producing new, healthy plants, some of which are used as food by humans and other species. There are two types of pollination: cross-pollination and self-pollination. Cross-pollination is the process of plant reproduction that requires an external mechanism, such as insects or wind, to transfer pollen from one plant to another plant of the same species. In self-pollination, by contrast, a plant reproduces using its own pollen.

If pollinators such as bees are not in abundance, plants come under pressure to self-pollinate, which can lower their genetic diversity. A decrease in genetic diversity in one species can lead to a decrease in biodiversity among species, which can be very harmful to an ecosystem. Some plants can self-pollinate without penalty, but for others, pollinators are crucial to maintaining the genetic diversity of the species.


A bumblebee and Rhinanthus minor. Image credit: Dr. Hargreaves

In research led by Dr. Anna Hargreaves, a herb called Rhinanthus minor was studied in the Rocky Mountains of Alberta. Interestingly, this herb can self-pollinate successfully, but it also produces flowers that attract bees to promote cross-pollination. Dr. Hargreaves investigated how a reduction in bee visits might affect the distribution of the plant. Watch the video below for more details on her research.

YouTube Preview Image

As mentioned earlier, honeybee populations are declining quickly. This is a problem because honeybees are considered the primary pollinator of the majority of human food crops. Several factors are thought to affect honeybee populations, including chemicals contained in agricultural pesticides. For this reason, it is important to take immediate action to prevent a further decline in the honeybee population.

YouTube Preview Image

Image credit (from podcast): www.flickr.com

Next time you’re eating peaches or honey, or receiving flowers as a gift, think about how bees have contributed to your life, and what you can do to make sure these products, and the bees, are available in the future.

Stay buzzy as a bee,

Group 3
Candace Chang, Dixon Leroux, Dorothy Ordogh, & Rafael Alfaro

3-D Printer Changes Building Industry

Have you heard about the innovative new technology, the 3-D printer? The 3-D printer is a recently developed device presumed to be the “ultimate builder” of practically anything. From medical equipment to bones and body parts, the 3-D printer is capable of making countless things.

Mojo 3D Printer. Photo credit: Wikimedia Commons

Until recently, the 3-D printer was only able to print small objects. It has now been adapted for larger-scale objects, such as houses and other infrastructure. A Chinese company called Winsun claims to have built 10 houses in 24 hours (see the video below). Elsewhere, in Amsterdam, the company DUS Architects is replacing cement and mortar with bio-based renewable resources to build houses using 3-D printers. (A link to the DUS Architects video story)

YouTube Preview Image

Video credit: New China TV

The new technology promises to be both safer and cheaper than conventional building methods. Alternative materials can be used instead of cement and other common building materials, which can be environmentally threatening. The idea of a machine replacing thousands of construction workers may not sound appealing, but there are places in the world where there are not enough workers. In developing countries, for example, working conditions are harsh and workers may not have the right tools to build a safe house. If a house can be built in 24 hours, people can have a much sturdier and safer home, built with minimal labour and at a far more affordable price.

Photo credit: Wikipedia

3-D printers may be just the starting point of a completely different lifestyle in the future. With their countless uses, 3-D printers could replace many machines and potentially reduce harms such as pollution.

-Tommy Kim

Moving Beyond Silicon (Part Three): The Holy Grail, Quantum Computing

“This is a revolution not unlike the early days of computing. It is a transformation in the way computers are thought about.”

– Ray Johnson, Lockheed Martin

In Part One of this series, we discussed how photonics could extend Moore’s Law by allowing conventional computers to send information at light speed. In Part Two, we discussed how graphene could extend Moore’s Law by enabling computers that operate thousands of times faster while running cheaper, cooler, and greener. But what if the solution isn’t harnessing a new technology or implementing some new material? What if the only way to make Moore’s Law obsolete is to go back to the drawing board and rethink how information is computed? Welcome to the world of Quantum Computing.


A chip constructed by D-Wave Systems, a company in Burnaby, B.C., designed to operate as a 128-qubit quantum optimization processor. Credit: D-Wave Systems (Wikimedia Commons)

In order to appreciate the impact of quantum computing, it will first be necessary to understand how it differs from classical computing. To get a decent overview, please watch the following short explanation by Isaac McAuley.

YouTube Preview Image

Now, with a better understanding of Quantum Computing and how it differs from classical computing, we can ask, “Why is this development so important?”

To answer this, consider that Quantum Computers can solve certain problems much more efficiently than our fastest classical computers can. For instance, suppose you have a budget for groceries and want to work out which items at the store give you the best value for your money; a quantum computer can solve this task much faster than a classical one. Now take a less trivial version of the same problem: you are a hydro company with a limited amount of electricity, and you want to find the best way to supply all the people in your city at all hours of the day. Further still, you might be a doctor who wants to destroy as much of a patient’s cancer as possible while using the smallest amount of radioisotopes and compromising the least of their immune system. All of these are optimization problems that a quantum computer could solve at breakneck speed. Think about how much time and money is spent trying to solve such problems, and how much scientific progress could be made if they could all be solved exponentially faster. For further consideration, check out the following video by Lockheed Martin (one of the first buyers of a Quantum Computer):

YouTube Preview Image
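To make the grocery-budget example concrete: it is essentially a knapsack-style optimization problem. A classical brute-force solver has to consider every subset of items, so its work grows as 2^n with the number of items n. A small sketch of that exhaustive approach, with invented items and prices:

```python
from itertools import combinations

# Invented example data: (item, price, value to the shopper)
ITEMS = [("bread", 3, 5), ("cheese", 7, 9), ("apples", 4, 6),
         ("coffee", 9, 11), ("eggs", 5, 7)]
BUDGET = 15

# Classical brute force: examine all 2^n subsets of the items.
best_value, best_basket = 0, ()
for r in range(len(ITEMS) + 1):
    for basket in combinations(ITEMS, r):
        cost = sum(price for _, price, _ in basket)
        value = sum(val for _, _, val in basket)
        if cost <= BUDGET and value > best_value:
            best_value, best_basket = value, basket

print([name for name, _, _ in best_basket], "-> value", best_value)
```

Five items mean only 32 subsets to check, but 50 items mean about 10^15, which is why such optimization problems are prime targets for machines like D-Wave’s.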

Now that we are familiar with how Quantum Computing differs from classical computing, and what it could do for scientific research, one might ask, “Why do we not have Quantum Computers yet?” The simplest answer is that while some Quantum Computers are for sale at exorbitant prices (the D-Wave One 128-qubit computer remains a costly $10,000,000 USD), Quantum Computers are still highly prone to errors.

Recently, researchers at the Martinis Lab at the University of California, Santa Barbara developed a new technique for Quantum Computers that allows the computer to check itself for errors without compromising how the system operates. One of the fundamental obstacles when working with Quantum Computers is that measuring a Qubit changes its inherent state. Therefore, any operation performed on a Qubit, such as checking that the Qubit stores the information you want, will defeat the purpose of the system altogether.

Why? Well, because Quantum Physics, that’s why.

This new system lets Qubits work together to ensure that the information within them is preserved, by storing information across several Qubits which back up their neighbouring Qubits. According to researcher Julian Kelly, this new development allows Quantum Computers to

“pull out just enough information to detect errors, but not enough to peek under the hood and destroy the quantum-ness”
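As a loose classical analogy (emphatically not the lab’s actual quantum scheme): a repetition code stores one logical bit across three physical bits, and parity checks between neighbouring bits reveal whether, and where, an error occurred, without ever reading the logical bit directly:

```python
# Classical analogy only; real quantum error correction is far more subtle.
def encode(bit: int) -> list:
    return [bit, bit, bit]          # one logical bit -> three physical bits

def syndrome(bits: list) -> tuple:
    # Each parity check XORs two neighbouring bits; it never reads the data.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits: list) -> list:
    # Map each syndrome to the single bit it implicates, if any.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

word = encode(1)
word[1] ^= 1                        # a stray error flips the middle bit
print(syndrome(word))               # (1, 1): error located at bit 1
print(correct(word))                # [1, 1, 1]: data restored
```

The quantum version has to achieve the same trick without collapsing the Qubits’ states, which is what makes the Martinis Lab result significant.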

This development could give Quantum Computers the reliability needed not only to ensure that they work as intended, but also to decrease their price, since most of the money spent on a Quantum Computer goes into the environmental controls that surround the machine to prevent errors from occurring.

If you are interested in learning more about Quantum Computing, I highly recommend the following articles as introductions to what will surely be a revolution in Computer Science:

1. Quantum Computing for Everyone by Michael Nielsen (co-author of the standard text on Quantum Computing)
2. The Limits of Quantum by Scott Aaronson (an MIT Professor of Computer Science) in Scientific American
3. The Revolutionary Quantum Computer That May Not Be Quantum at All by Wired Science

If you have any questions, please feel free to comment. I hope you all enjoyed this three-part series on what the future of computation holds as we try to surpass Moore’s Law. Whichever way you look at it, the future looks bright indeed!

– Corey Wilson

The Cure to Cancer May Only Be a Sip Away

The Oral Cancer Foundation reports that oral cancer is responsible for over 8,000 deaths per year in the United States alone. That is an average of one death per hour. Oral cancer can target many areas of the mouth and neck, such as the tongue, lips, and lymph nodes (oval-shaped organs). Fortunately, researchers have been studying the effects of a very popular drink that could lead to promising treatments for oral cancer.


Mitochondria in a cell. Source: Flickr Commons

The article “Green tea ingredient may target protein to kill oral cancer cells,” published in January 2015, states that a compound in green tea may be able to treat patients with oral cancer. Researchers at Penn State’s Center for Plant and Mushroom Foods for Health studied epigallocatechin-3-gallate (EGCG), a compound found in green tea. They compared the effects of EGCG on normal human oral cells versus human oral cancer cells, growing both in petri dishes and exposing them to the compound. Surprisingly, they found that EGCG damages the mitochondria only in oral cancer cells. Mitochondria are vital parts of the cell that provide its energy, but once damaged, they can no longer function correctly. This disruption to the mitochondria causes the oral cancer cells to undergo programmed cell death.


A cup of green tea. Source: Wikimedia Commons

Dr. Lambert, co-director of Penn State’s Center for Plant and Mushroom Foods for Health, argues that EGCG’s selective attack on oral cancer cells, but not normal cells, may apply to other types of cancer as well. He also notes the benefits of consuming green tea compared with current cancer treatments. Chemotherapy drugs, for instance, target rapidly dividing cells but cannot differentiate between fast-growing cancer cells and normal dividing cells in hair follicles and the intestines. As a result, these drugs can cause harmful and unpleasant side effects such as hair loss, nausea, and vomiting. The selective nature of green tea, however, may make it possible to treat cancer patients without these terrible side effects. Overall, consuming green tea would be less harmful and also a lot cheaper than existing cancer treatments.

So, can we state with certainty that you will be able to drink your way to a cancer cure in the future? The current research looks promising, but only through further work, such as clinical trials, can we determine whether a sip of green tea will in fact become a new anti-cancer treatment.

Check out the video below uploaded by iHealthTube.com for more information on green tea!

YouTube Preview Image


By: Navjit Moore

A Step Closer to Nuclear Fusion Reactors

When people hear the term ‘nuclear’, they often view it as a negative and dangerous field of technology, one that can create large problems through radiation and the improper disposal of radioactive waste. Most non-military applications of nuclear technology centre on generating electricity through nuclear fission; the other method, using nuclear fusion to generate electricity, is still very much experimental, but that might change with new fusion reactor designs.


The sun produces its energy through nuclear fusion. Image courtesy of Wikimedia Commons


While generating electricity from nuclear power plants is a topic of debate because of safety concerns (the Chernobyl reactor meltdown being a very prominent disaster) and waste products, it is important to understand that most nuclear facilities use nuclear fission, the splitting of a large atom, to generate their power. Nuclear fusion, on the other hand, works in the opposite way: it fuses smaller atoms into a larger atom, and it is much safer because of the different starting materials and products involved.


A diagram showing the fusion reaction of two common fusion reactants, deuterium and tritium. Image courtesy of Wikimedia Commons


When comparing nuclear fusion and fission, we see that although both involve working with atoms, their energy consumption and production are vastly different, as are the by-products created. Nuclear fusion creates less radioactive waste and produces more energy than nuclear fission; the catch with fusion, however, is that a large amount of energy is required to start the reaction, so energy production from fusion reactors is still at an experimental stage.
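As a back-of-the-envelope illustration of that energy difference, the energy released by the deuterium-tritium reaction shown above can be computed from the mass defect via E = mc², using standard atomic mass values:

```python
# Energy released by D + T -> He-4 + n, from the mass defect (E = mc^2).
# Masses in atomic mass units (u); 1 u = 931.494 MeV/c^2 (standard values).
M_DEUTERIUM = 2.014102
M_TRITIUM   = 3.016049
M_HELIUM4   = 4.002602
M_NEUTRON   = 1.008665
MEV_PER_U   = 931.494

mass_defect = (M_DEUTERIUM + M_TRITIUM) - (M_HELIUM4 + M_NEUTRON)
energy_mev = mass_defect * MEV_PER_U
print(f"Mass defect: {mass_defect:.6f} u -> {energy_mev:.1f} MeV per reaction")
```

That works out to roughly 17.6 MeV per reaction; per unit of fuel mass, that is several times what uranium-235 fission releases (about 200 MeV spread over a nucleus nearly fifty times heavier).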


Currently, in an effort to push nuclear fusion energy forward, the International Thermonuclear Experimental Reactor (ITER) is being built in France, based on the tokamak design. ITER is expected to work, but it is a very large reactor; for fusion power to become the future, smaller reactors will need to be possible.

Fortunately, Lockheed Martin has released more details on an experimental fusion reactor prototype that could make fusion power a common reality. The company is designing a compact fusion reactor (CFR) expected to be ten times smaller than a tokamak-style reactor like ITER while producing the same power output. The project is still in its early stages, but the designers are hopeful that they can produce an early prototype within 5 years. While 5 years might seem like a long time, the main point is that fusion reactors are much closer to becoming a reality than they were before.

YouTube Preview Image

The ability to pack the massive energy output of fusion into a compact design means that energy-intensive processes such as desalination would become much more affordable, and the CFR could even be installed in ships or used to provide power to cities. The CFR could open doors to sustainable energy for the world.

– Matthew Leupold