Air Pollution and its Effect on the Brain


I grew up in Beijing, China, where air pollution is a fact of life. So, upon reading this article about nano air pollutants and their potential effects on the brain, I was intrigued, but also quite concerned. I am well aware of how damaging air pollution can be, especially to those who endure it for days, weeks or years on end. I lived in Beijing for 14 years, so throughout my time there I no doubt inhaled my share of nasty particles, none of which I want in my system. Luckily, I haven't experienced any pollution-related health problems to this day. But what could still come? That is a constant worry of mine, so reading this article really hit home for me.

In this article, Alison Pearce Stevens discusses two main ways in which pollutants gain access to the brain. The article describes nano-particles, which are dangerous precisely because of their size: they are so small that they can slip into cells despite the protection of the cell membrane. Luckily for us, the brain is quite well protected. There are three routes by which the brain communicates with the rest of the body. The first is the blood. The blood in the brain is separated from that of the body by the blood-brain barrier, a selectively permeable membrane that only lets in what the brain needs. The second route is the cerebrospinal fluid (CSF), which flows around the brain and extends into the spinal column. CSF serves no communication purpose; its main use is protection from trauma, so we will ignore it here. The final route is via nerves.

Unfortunately for us, due to their size, these nano-particles are able to reach the brain by both the blood and the nerves. The blood-brain barrier can only filter out particles down to a certain size, so nano-particles are able to slip through. The olfactory nerve (the nerve through which you sense smell), located in your nose, has a direct connection to your brain. Again, because of their size, nano-particles are able to enter the olfactory nerve and move from there straight to the brain. But what do these nano-particles do in the brain once they've gotten there? That's the really scary part.

One fantastic thing about our body is our immune system, which is designed to fight off intruders such as these nano-particles. Unfortunately, this immune response is often accompanied by swelling, which is terrible for the brain. The brain can only get so big because of its confinement within the skull; if the swelling is severe enough, serious damage can result from the brain pressing against the skull. Additional problems resulting from these nano-particles can include stroke, damaged brain signaling, and an increased presence of free radicals, all of which are very damaging.

If this isn't enough to encourage people in some countries to cut down on driving and live a little greener, I don't know what is! One can only hope that, with the green technology market in its current state, more and more effort will be put into preventing health concerns such as those from nano-particles in air pollution.

Works Cited:

Pearce Stevens, Alison. “Nano Air Pollutants Strike a Blow to the Brain.” Science News for Students. 17 Dec. 2014. Web. 28 Dec. 2014.


Brain Plasticity and Smartphone Use


As human beings, we are consumed by technology. This consumption obviously has many positives, but also many negatives. Many psychologists take a negative view of smartphone use, as it takes away from reality and from the kinds of interactions we evolved to have. After considering these negatives, I wondered what kind of brain changes were occurring that were causing some of these behaviours. I stumbled upon this article today, which discussed the effect of smartphone use on the brain, primarily as it relates to the representation of the thumb in the cerebral cortex. This article doesn't necessarily highlight smartphone use as a positive; it merely presents the fact that brain changes are present.

The article, by Gindrat et al., is called "Use-Dependent Cortical Processing from Fingertips in Touchscreen Phone Users" (link here), and highlights a study conducted using electroencephalography (EEG), in which the researchers measured the cortical potentials in response to mechanical touch on the thumb, index, and middle fingertips of touchscreen phone users and nonusers. This study was conducted because the researchers wanted to see whether other known types of plasticity were transferable to smartphone use. For example, in people who learn a very skill-intensive instrument (such as the violin or piano), the cerebral cortex changes, allowing the brain to create more connections where they are needed. In that case, those connections need to be made between areas involved in finger dexterity. Therefore, using EEG, you should be able to identify increased activation of the areas related to the fingers, in comparison to those who don't have such skilled use of the fingers.

This is exactly what Gindrat et al. discovered when it comes to the use of smartphones. In their study, small mechanical touches were applied to various fingertips, and the cortical activation measured. Interestingly, in the smartphone users there was greater cortical activation for all fingers (not just the primarily used thumb) when compared to users of non-smartphone cellular devices.
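For readers who like to see the logic of such a group comparison spelled out, here is a toy sketch of how one might compare average cortical response amplitudes between users and nonusers. This is not the authors' actual analysis pipeline, and every number in it is invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical peak cortical potentials (microvolts) at a somatosensory
# electrode after touching the thumb, one value per participant.
smartphone_users = rng.normal(loc=4.5, scale=1.0, size=20)  # invented group mean
nonusers = rng.normal(loc=3.5, scale=1.0, size=20)          # invented group mean

# A two-sample t-test asks whether users show reliably larger cortical
# responses than nonusers, the kind of group difference the study reports.
t, p = stats.ttest_ind(smartphone_users, nonusers)
print(f"users: {smartphone_users.mean():.2f} uV, "
      f"nonusers: {nonusers.mean():.2f} uV, t = {t:.2f}, p = {p:.4f}")
```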

I am starting to realize more and more that the old saying "practice makes perfect" has a solid root in neuroscience. The more you do something, the more efficiently your brain wants to be able to perform that movement, and so more connections form. In the Gindrat study, even after 10 days of greater smartphone use there was already enhanced cortical activation. From this, we can see how fast the brain adapts to the environment around it. If we can take information like this and transfer it into our own lives, we may be able to function faster and more efficiently in the tasks that each of us cares about!

Works Cited:

Gindrat, Anne-Dominique, Magali Chytiris, Myriam Balerna, Eric M. Rouiller, and Arko Ghosh. "Use-Dependent Cortical Processing from Fingertips in Touchscreen Phone Users." Current Biology 25 (2015): 1-8. Web. 28 Dec. 2014.


Video Games and the Brain


My brother has been spending a lot of his holiday playing video games. The other day I questioned him about how good all of this playing time could be for him. We argued for a few minutes, as I claimed it had to be bad for his brain, and he, of course, claimed it to be beneficial. I did some research in an attempt to prove him wrong. After looking around for a while, I found a perspectives article by Bavelier et al., featuring numerous authors, each with their own perspective on how video games affect the brain.

Bavelier first highlights that we must be careful when making sweeping generalizations about "video games", because there are not only numerous games, but also many ways in which they can be played (whether on a console, a computer, or a cellular device or tablet). In the studies Bavelier refers to, the researchers specifically focus on "action" games, and the research indicates that there are in fact benefits that go along with these, such as low-level vision enhancements, increased processing speed and better visual attention. Such games can even be used in rehabilitation for those with a "lazy eye", Bavelier claims.

While my brother was thrilled by these findings, he wasn't as pleased with what Merzenich had to say in his perspective piece. Merzenich mentioned that there is a direct negative correlation between the number of hours a day spent playing video games and one's academic achievement. This obviously doesn't prove that playing video games lowers intelligence; it could simply be that the increased screen time takes time away from one's studies, but I thought it was interesting nonetheless. Another study highlighted by Bavelier indicated that even if video games aren't bad for the brain, they also may not be good for it. Although experienced video gamers are generally better at spatial navigation in computer-mediated tasks than those who are less experienced, this experience didn't actually translate to the same spatial navigation skills in the real world. So, being an avid gamer may only make one better at the specific game one is playing, without conferring any transferable skills.

I find these findings interesting because in one of my classes this term, where we learned a great deal about neuro-rehabilitation, this seemed to be a general finding across many different therapies: despite consistent effort to make a task or therapy transferable to real life, it is often difficult to do so.

After showing my brother some of this research, he did agree that maybe his gaming was a little excessive, “but what’s a kid to do?”.


Works Cited:

Bavelier, Daphne, et al. "Brains on Video Games." Nature Reviews Neuroscience 12.12 (2011): 763.



Severe consequences of recurring all-nighters?


After enduring a rough batch of finals, where I was constantly baffled by the number of people who just don't sleep for two weeks straight, I decided to do a little research on just how bad disrupting your normal sleep cycles is. I have always been aware that consistent sleeping hours are very important to our daily function. I was surprised, however, upon reading an article by Fernandez et al., to learn that chronic disruption of our circadian cycles can lead to impaired declarative memory. The sad irony being that those students who pull frequent all-nighters in an attempt to achieve their academic goals might actually be setting themselves back.

In a few other studies, it seems as though chronic alterations of sleep-wake cycles can lead to mild cognitive impairment (MCI) or even dementia (Schlosser Covell et al.). In their article, Schlosser Covell et al. highlight that "optimal cognitive function depends upon the appropriate timing of sleep, wakefulness, and synchronization of brain clocks in the cortex, hippocampus, and cerebellum." Therefore, by hindering the appropriate timing of sleep, we are decreasing our optimal cognitive functioning. Additionally, Schlosser Covell et al. go on to discuss the importance of the suprachiasmatic nucleus in sleep regulation. This area naturally becomes less active with age. It is therefore hypothesized that decreased activation of the suprachiasmatic nucleus is correlated with both sleep disturbances and MCI or dementia. While this is not a direct cause-effect relationship, studies have shown there to be a significant correlation.

This therefore begs the question: are people with current sleep disturbances (whether naturally occurring, stress-induced, or students pulling all-nighters) setting themselves up for MCI in the future? This could be a serious consequence of trying to get straight A's in university. If this is in fact the case, high schools and universities need to be doing more to ensure the wellbeing of their students, so that long-term, irreparable damage is not done during these few short years of studying.

This is also a question for professionals to consider. By working shift-work, especially alternating frequently between days and nights, one is also disrupting the body's natural sleep-wake cycles. The same case can be made for people who travel a lot for their jobs and are constantly dealing with time changes. How should this be handled? We can't force people into positions where their sleep-wake cycle won't be disturbed. One alternative, however, is to make people more aware of these findings and to continue research in this area, so that people can be more mindful and try their best to maintain a "proper" sleep schedule.


Works Cited:

Fernandez, Fabian, et al. “Dysrhythmia in the Suprachiasmatic Nucleus Inhibits Memory Processing.” Science 346.6211 (2014): 854-7.


Schlosser Covell, Gretchen E., et al. "Disrupted Daytime Activity and Altered Sleep-Wake Patterns may Predict Transition to Mild Cognitive Impairment or Dementia: A Critically Appraised Topic." The Neurologist 18.6 (2012): 426-9.


Woman born without cerebellum, brain plasticity at its finest


In September, I read an article about a Chinese woman who was born without a cerebellum. The cerebellum is the area of the brain responsible for many important functions, such as coordinating movement and balance. Damage to the cerebellum once it has developed is typically permanent and irreversible. This woman represents only the ninth case of complete cerebellar agenesis ever reported. According to her mother, she wasn't able to walk until the age of 4, and couldn't speak properly until the age of 6. Clearly, the absence of her cerebellum caused her issues, but not nearly as significant as one might expect.

After going back and reading the newspaper article, I went and found the original research paper (a letter to the editor of Brain, an Oxford journal of neurology) published by the Chinese doctors who treated this woman (see below for the article by Yu et al.). Her condition was discovered after she came in to the hospital complaining of dizziness that had lasted over a month. After performing a CT scan, neurologists discovered that the area where the cerebellum should have been was filled with nothing but cerebrospinal fluid (CSF). Given that the cerebellum is estimated to contain over half of the neurons in a normal healthy brain (even though it accounts for only about 10% of its volume), it is considered one of the most functionally important areas of the brain. The fact that this woman is still alive, despite having some developmental difficulties, is amazing. It is clear evidence of how resilient the brain is to damage, even more so when the defects are congenital.

Scientific discoveries such as this one will never cease to amaze me. The mere fact that the brain is so capable of adapting to different conditions baffles me. It fascinates me that the brain is not only able to recognize that there is a deficit in a specific area, but is also able to act on that deficit and alter connections so that it can resume somewhat normal functioning. Sure, this woman was never able to walk properly, and had severe balance issues. However, she was able to walk, talk and move around relatively easily in comparison to how someone would fare who had a cerebellum until her age and then had it removed.


Works Cited:

Yu, Feng, Qing-jun Jiang, Xi-yan Sun, and Rong-wei Zhang. “A New Case of Complete Primary Cerebellar Agenesis: Clinical and Imaging Findings in a Living Patient.” Brain: A Journal of Neurology (2014): 1-5. Web. 19 Dec. 2014. <http://brain.oxfordjournals.org.ezproxy.library.ubc.ca/content/early/2014/08/22/brain.awu239#ref-list-1>.

Targeted Temperature Management for Acute Brain Injury?


Today I came across an article by Choi et al. highlighting the use of Targeted Temperature Management (TTM), the therapeutic cooling of the body (induced hypothermia), as a way of treating acute brain injury and preventing extensive neural damage. While I was already aware of its use in the acute treatment phase in hospitals, I was unaware of its effects on different aspects of brain injury and exactly how it was implemented. TTM has been deemed the "most powerful mechanism of neuroprotection currently available".

TTM seems to work by decreasing the brain's metabolism and limiting its oxygen demand, thereby preventing the failure of many neural mechanisms involved in cellular metabolism. Cooling reduces the activity of the sodium-potassium pump (Na+/K+-ATPase), which decreases the need for ATP (which is made using oxygen via cellular metabolism). Oftentimes, traumatic brain injury (TBI) causes bleeding in the brain, which can lead to decreased oxygen delivery and neural damage. TTM is therefore an amazing tool, enabling an injured brain to keep functioning even if there is a brain bleed (depending on severity). There are serious risks in conducting such a procedure, such as shivering, kidney dysfunction, and immune impairments, to name a few. However, despite these dangers, it seems as though the benefits far outweigh the negatives, as TTM can prevent death in some cases.
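To put rough numbers on this: the hypothermia literature commonly cites a drop in cerebral metabolic rate of roughly 6-7% per degree Celsius of cooling. Assuming that ballpark figure (my assumption for illustration, not a value taken from Choi et al.), a quick calculation shows why even modest cooling helps:

```python
def metabolic_fraction(temp_c, baseline_c=37.0, drop_per_degree=0.07):
    """Remaining cerebral metabolic rate as a fraction of baseline, assuming
    a compounding ~7% reduction per degree Celsius (an assumed ballpark)."""
    return (1.0 - drop_per_degree) ** (baseline_c - temp_c)

for temp in (36.0, 34.0, 33.0, 32.0):
    print(f"{temp:.0f} C -> ~{metabolic_fraction(temp):.0%} of baseline demand")
# Cooling to 33 C leaves roughly 75% of baseline demand, i.e. the injured
# brain needs about a quarter less oxygen while it recovers.
```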

Some conditions where TTM is most commonly used include ischemic stroke, TBI and cardiac arrest. Choi outlines these conditions and highlights TTM's effectiveness in each.

Cardiac arrest results in a stoppage of blood flow to the brain, leading to widespread damage. When someone's heart starts beating again, you get a rapid reperfusion of the brain, which, despite bringing back much-needed oxygen, can itself cause additional damage. It does seem that TTM down to anywhere between 32-34 degrees Celsius for 24 hours is significantly beneficial here. For TBI, TTM seems to be ineffective; however, Choi hypothesizes some reasons why such studies have been unsuccessful and proposes a few ideas for how they could be improved. In the case of ischemic stroke, where there is a blockage of blood flow to the brain (due either to an embolism or to plaque build-up), TTM is a feasible approach. By decreasing the amount of oxygen required by the brain, it reduces the damage until the embolism or plaque can be cleared.

Despite not being effective for certain conditions, it seems as though TTM being implemented into emergency and critical care medicine is beneficial, and should obviously continue to be used.


Works Cited:

Choi, H. Alex, Neeraj Badjatia, and Stephan A. Mayer. "Hypothermia for Acute Brain Injury–Mechanisms and Practical Aspects." Nature Reviews Neurology 8.4 (2012): 214.

URL: http://www.nature.com.ezproxy.library.ubc.ca/nrneurol/journal/v8/n4/full/nrneurol.2012.21.html

Biomarkers for Neurodegenerative Disorders


One of the things that makes diagnosing neurodegenerative diseases so difficult is the lack of symptoms before the disease has progressed past the point of no return. For example, in Parkinson's disease, symptoms only start appearing once the large majority of the substantia nigra (an area in the brain which produces the neurotransmitter dopamine) has degenerated. Symptoms of other diseases such as Alzheimer's and Huntington's likewise only appear after significant damage has been done. If we were able to give drugs such as L-Dopa (a dopamine precursor) or dopamine agonists to a person with Parkinson's earlier, we might be able to manage the disease substantially earlier and improve quality of life significantly. How can we do this? I was determined to find alternative ways of diagnosing these disorders, and biomarkers seem to be the most promising route to early diagnosis.

A biomarker is a measurable indicator of a biological condition, usually a substance whose presence can be determined by examining one's body fluid. Aylward indicates that a biomarker must meet three key conditions to be classified as such: 1) it must be objectively measurable; 2) it must be able to predict known endpoints; and 3) it must be associated with known mechanisms of the disease's pathology. There are many promising biomarkers for neurodegenerative diseases that we may be able to use to identify a disease at a substantially earlier stage. While these haven't yet been implemented into routine medical checks, they might be in the near future. This way, one would be able to track how one's biomarkers are changing over time, and hence be aware of any changes that may be indicative of disease.

In an article I recently read by Aylward, both a neuroimaging technique and a biomarker for diagnosing Huntington's disease are discussed in detail. Aylward first discusses the possibility of using MRI to measure striatal volume (the striatum being the area degraded in Huntington's, leading to excessive movement) as a biomarker. The idea is that since an area of the brain is being degraded, you should be able to see structural changes that appear only in patients with Huntington's. This biomarker thus meets all three of Aylward's conditions. The advantage with Huntington's is that it has a clear genetic link, so genetic tests can also reveal the disease. However, there are many other neurodegenerative diseases, such as Alzheimer's or Parkinson's, where the genetic link is not as clear-cut.

In another article, Craig-Schapiro et al. examine some of the biomarkers for Alzheimer's disease. Two types of proteins accumulate in Alzheimer's disease. The first is amyloid-beta (A-B). In a normal, healthy brain, A-B proteins are cleared from the brain via the cerebrospinal fluid (CSF). In Alzheimer's, this doesn't occur, and the A-B proteins stick together, forming plaques. The second type of protein is tau. Tau proteins in Alzheimer's have been altered, and thus clump together forming "tangles". When a cell dies, its proteins are released into the CSF. Thus, if we take a sample of the CSF and see a drop in A-B protein or an accumulation of this altered tau protein, it indicates the possibility of Alzheimer's disease. There are numerous other biomarkers being investigated for Alzheimer's, but these two have the most direct link to the disease process.
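As a toy illustration of how these two CSF measures could be combined into a screening rule, here is a minimal sketch. The cutoff values are hypothetical placeholders I made up; they are not clinical thresholds from Craig-Schapiro et al.

```python
def flag_possible_ad(abeta42_pg_ml: float, tau_pg_ml: float,
                     abeta_cutoff: float = 500.0,
                     tau_cutoff: float = 350.0) -> bool:
    """Flag a CSF sample as suspicious for Alzheimer's-like pathology:
    low amyloid-beta (trapped in plaques instead of cleared into CSF)
    combined with elevated tau (released by dying cells).
    Cutoffs are hypothetical, for illustration only."""
    return abeta42_pg_ml < abeta_cutoff and tau_pg_ml > tau_cutoff

print(flag_possible_ad(abeta42_pg_ml=420.0, tau_pg_ml=610.0))  # True: both markers abnormal
print(flag_possible_ad(abeta42_pg_ml=800.0, tau_pg_ml=200.0))  # False: both in assumed normal range
```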

One of the main problems with all of these technologies is that we can never be sure whether there is a difference between the current state and a previous state without having tested the person at that previous stage. Testing controls to establish a normal baseline is the best we currently have, but there is no way this baseline applies to everyone, as every brain is significantly different. If we were to test people at certain stages of their lives, giving us valid, comparable samples, we would greatly improve our ability to diagnose early. If only the cost of such technologies allowed us to implement such testing earlier in life. Hopefully some type of baseline testing will be implemented in the near future to catch disease earlier.
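A toy example makes the value of a personal baseline concrete. Suppose (hypothetically) we had a striatal-volume measurement from the same person years earlier: a within-person change can stand out even when the new value still looks ordinary against the population norm. All numbers below are invented.

```python
population_mean, population_sd = 10.0, 1.5  # striatal volume, arbitrary units
personal_baseline = 12.0                    # this person's earlier (hypothetical) scan
followup = 10.2                             # this person's current scan

# Against the population norm the follow-up looks unremarkable...
pop_z = (followup - population_mean) / population_sd
# ...but against the person's own baseline it is a substantial loss.
personal_change = (followup - personal_baseline) / personal_baseline

print(f"population z-score: {pop_z:+.2f} (well within the normal range)")
print(f"change from own baseline: {personal_change:+.1%}")  # -15.0%
```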

Works Cited:

Aylward, Elizabeth H. "Magnetic Resonance Imaging Striatal Volumes: A Biomarker for Clinical Trials in Huntington's Disease." Movement Disorders: Official Journal of the Movement Disorder Society 29.11 (2014): 1429-33.

URL: http://onlinelibrary.wiley.com.ezproxy.library.ubc.ca/doi/10.1002/mds.26013/full

Craig-Schapiro, Rebecca, Anne M. Fagan, and David M. Holtzman. “Biomarkers of Alzheimer’s Disease.” Neurobiology of Disease 35.2 (2009): 128-40.

URL: http://www.sciencedirect.com.ezproxy.library.ubc.ca/science/article/pii/S0969996108002544#


Brain Tumors and Disconnection Syndromes


A few weeks ago, I came across a research article outlining a case study of a man who had been diagnosed with a brain tumor. This really got me thinking about what we currently know about the brain's function, and how that can be generalized to the entire population. How do we know which areas of the brain are involved in which functions? How can we figure this out? The more I thought about it, the more I realized that brain tumors are a perfect example of all that we don't know.

In the article by Burns et al., they highlight the fact that this man's brain tumor (which was in the orbitofrontal cortex) presented with symptoms "typical" of that area, but additionally presented with a few symptoms typical of temporal lobe injury. Now, how can a symptom appear if there is no direct damage to that area by tumor displacement? The article didn't answer this question for me; it merely presented the facts of the symptoms and treatment. This caused me to search a little further for an answer.

In an article called "What is a disconnection syndrome?", Catani et al. clarified many of the unclear points from the Burns et al. article. Alexander Luria was a neuropsychologist who did the majority of his work in the latter half of the 20th century. Luria did not see the brain as separate areas functioning independently; alternatively, he saw it as a series of connections. This does prove to be true, as is evident from the white matter tracts in the brain seen through neuroimaging. Catani presents the idea of higher-order cognitive functioning being a result of the "associative convergence (or integration) of information from multiple sources". If the brain is a series of connections, and all of our output requires interactions between many brain areas, then it is easy to see how the patient with the orbitofrontal tumor was lacking some of the functions that lie in the temporal lobe. If the tumor displaced or crushed one of the connections between the two areas, any of the functions relying on that connection would thereby be damaged.

It is quite interesting that damage to one area of the brain can look like dysfunction in another area. Hopefully, with information like this and more to come, we can gain an appreciation for how amazing an organ we carry around with us on a daily basis.

Works Cited:

Burns, Jeffrey M., and Russell H. Swerdlow. “Right Orbitofrontal Tumor with Pedophilia Symptom and Constructional Apraxia Sign.” Archives of Neurology 60.3 (2003): 437-40.

Catani, Marco, and Marsel Mesulam. “What is a Disconnection Syndrome?” Cortex 44.8 (2008): 911-3.

Cocaine’s Influence on Alcohol Seeking Behaviour


On November 6th, 2014, I had the pleasure of listening to a lecture by Andrew Haack from the University of Utah on dopaminergic pathways in ethanol-seeking behaviour in rats. While his lecture was quite intriguing, it left me looking for more information about how these pathways are modulated, and what other factors influence these reward pathways. Additionally, I wanted to learn how the described alcohol-seeking behavioural pathway differs under the influence of other drugs.

I stumbled upon this interesting article, which discussed the influence cocaine has on alcohol-seeking behaviour in rats. The researchers were interested in seeing how exposure to one drug during recovery from addiction to another drug could actually increase the addiction to both drugs.

To set up the experiment, the researchers began by introducing ethanol into the rats' everyday lives, in a similar manner to that described by Haack in Thursday's lecture. This method involved slowly introducing ethanol to the rats via lever-pressing behaviours; after 'addiction' had occurred, the rats willingly lever-pressed to receive ethanol. In the authors' words, "alcohol-seeking was assessed through the use of the Pavlovian Spontaneous Recovery (PSR) model, while alcohol-relapse drinking was assessed through the use of the alcohol-deprivation effect". The PSR condition involved depriving the rats of their ethanol levers for the 60-minute session. For the relapse condition, rats were deprived of their ethanol levers for 7 days before being returned to a cage with ethanol levers for testing.

Cocaine HCl (0, 1, or 10 mg/kg) was injected either immediately, 30 minutes, or 4 hours before the testing sessions. In the PSR condition, the researchers found that cocaine HCl increased the rats' responding on the ethanol lever compared to saline controls.

In the relapse condition, cocaine HCl (1 or 10 mg/kg) was injected. When rats were given cocaine HCl immediately before being returned to their cage with an ethanol lever, there was no effect on their responding. However, when they were given cocaine HCl injections 4 hours prior to relapse testing, their ethanol responding was markedly enhanced in comparison to the saline controls.
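To keep the key comparison straight, here is a small sketch of the kind of cocaine-versus-saline contrast the authors report for the 4-hour relapse condition. The lever-press counts are simulated placeholders, not data from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated ethanol-lever presses during relapse testing (invented numbers):
# saline controls vs. rats injected with cocaine HCl 4 hours before testing.
saline_4h = rng.poisson(lam=20, size=10)   # hypothetical baseline responding
cocaine_4h = rng.poisson(lam=35, size=10)  # hypothetical enhanced responding

t, p = stats.ttest_ind(cocaine_4h, saline_4h)
print(f"saline: {saline_4h.mean():.1f} presses, "
      f"cocaine (4 h): {cocaine_4h.mean():.1f} presses, t = {t:.2f}, p = {p:.4f}")
```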

What we can determine from this study is that cocaine HCl can affect both ethanol-seeking and relapse behaviours. The effect of cocaine HCl on ethanol-relapse drinking could be indicative of a more complex interaction occurring between multiple drugs of abuse, one that primes relapse with a four-hour delay.

If we are able to generalize some of these animal studies to humans, it seems that people with severe drug abuse issues may be especially prone to relapse, and the neuronal connections involved could be targeted in therapies to help prevent relapse.

Works Cited:

Hauser, Sheketha R., et al. "Cocaine Influences Alcohol-Seeking Behavior and Relapse Drinking in Alcohol-Preferring (P) Rats." Alcoholism: Clinical and Experimental Research 38.10 (2014): 2678-86.

Neurodegenerative Diseases: Deep Brain Stimulation a Treatment for Alzheimer’s?


In a few of my classes recently, I have been learning about Alzheimer's disease, and it's fascinating. A few years ago, my grandmother was diagnosed with Alzheimer's, and I have been reading research in this area ever since. I really want to understand the treatments neuroscientists are currently developing for this disease. Recently, I took it upon myself to find out where we are with treatment plans for those with Alzheimer's, and I found that there are actually far more treatments than I had originally thought.

If you have no idea what Alzheimer’s is, or need a refresher, take a peek here.

First things first: there are more proposed treatments for Alzheimer's than I can count. I could create a huge list, along with their pros and cons, but that would just be tedious and boring to read, so I'm just going to tell you about the most exciting treatment: Deep Brain Stimulation (DBS).

Deep Brain Stimulation is a common practice in patients with both Parkinson's and depression. DBS involves surgically inserting electrodes into certain areas of the brain and applying an electric current to alter the normal firing of neurons in those specific areas. In Alzheimer's, the electrodes are placed at the fornix, an output pathway of the hippocampus, the brain's memory processor. It is important to note that DBS is currently only being used in clinical trials in Alzheimer's patients; it hasn't yet been implemented into common treatment practice.
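For a sense of what "applying an electric current" involves in practice, here is a sketch of the kind of parameters DBS devices are programmed with. The values are ballpark figures from the DBS literature in general, not the settings used in the trial discussed below.

```python
from dataclasses import dataclass

@dataclass
class DBSSettings:
    """Illustrative DBS programming parameters (ballpark values, not the
    settings from the Laxton/Lozano trial)."""
    frequency_hz: float = 130.0   # typical high-frequency stimulation
    pulse_width_us: float = 90.0  # duration of each electrical pulse
    amplitude_v: float = 3.0      # stimulation amplitude

    def pulses_per_day(self) -> float:
        # Continuous stimulation: pulses per second x seconds per day.
        return self.frequency_hz * 60 * 60 * 24

print(f"{DBSSettings().pulses_per_day():,.0f} pulses per day")  # ~11.2 million
```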

In a clinical trial conducted by Laxton, Lozano and colleagues at the University of Toronto, DBS was performed on six patients with Alzheimer's disease. The researchers implanted the electrodes bilaterally (in both brain hemispheres) and used a series of neuroimaging techniques to determine the functional and structural changes resulting from DBS.

After 12 months of this treatment, scans showed that the patients had significant increases in glucose metabolism in both the parietal and temporal cortices. Decreased glucose metabolism indicates lower activity, and thus greater impairment; this increase therefore suggests that areas which had not been functioning well improved. The researchers also found evidence of this through cognitive testing: these patients seemed to be in a better cognitive state than one would expect without treatment. However, the study didn't have a control group, so this is an area that needs further investigation.

Currently, there are numerous clinical trials being conducted comparing DBS patients with control patients who receive electrode stimulation only on the surface of the brain, acting as the placebo group. Hopefully, with a few more years of research, the exact benefit of DBS can be determined and, if promising, DBS can be implemented into standard Alzheimer's treatment.

Works Cited:

Kaplan, Arline. “Deep brain stimulation: new promise in Alzheimer disease and depression?” Psychiatric Times Dec. 2012: 1. Health Reference Center Academic. Web. 6 Nov. 2014.