Friday, December 9, 2011

HIV and Burkitt's Lymphoma in the Brain

Chemotherapy typically follows radiosurgery when treating small cancerous tumors in the human brain. When the tumors are the result of Burkitt’s lymphoma, however, the disease is far more serious and requires many more treatments. Burkitt’s lymphoma is a fast-growing cancer of the lymphatic system, specifically of B cells (when it spreads into the blood and bone marrow, it can present as a leukemia). Burkitt’s is strongly associated with immunodeficiency, and people with HIV/AIDS face a markedly higher risk of developing it than the general population.


In the types of Burkitt's encountered in North America, the cancer usually starts in the belly area (abdomen). The disease can also start in the ovaries, testes, brain, and spinal fluid. Swelling of the lymph nodes is the primary symptom of the condition.

http://lymphoma.about.com/od/nonhodgkinlymphoma/p/burkitts.htm


When diagnosed early, chemo can be extremely effective against Burkitt’s, ironically because of its naturally fast progression (chemotherapy drugs target rapidly dividing cells, so a fast-growing cancer also responds quickly). Patients treated with HAART (Highly Active Antiretroviral Therapy) for HIV also typically have a better chance of surviving Burkitt’s lymphoma. These patients are essentially being treated with a drug cocktail, since HAART is defined as treatment with at least three active antiretroviral medications (ARVs).


The chemotherapy treatments are administered through a surgically inserted ventricular catheter. It is a short surgery and a straightforward insertion into one of the brain’s ventricles once the tumor has been located via MRI scan. Surgeons implant what is called an Ommaya reservoir for easy drug administration:


http://www.cw.bc.ca/library/pamphlets/search_view.asp?keyword=373


If, however, the patient has already developed widespread disease, with cancerous lesions throughout the body (as is often the case with lymphoma), the treatments are unfortunately less effective.

Thursday, December 8, 2011

Clinical Applications of Hallucinogens

“Turn on, tune in, drop out.” Those are the famous words of the late psychologist Timothy Leary, a figurehead of the sixties known for his experiments on the psychological effects of hallucinogens, LSD in particular. Scientific interest in hallucinogen research more or less died with Leary’s dismissal from Harvard University in 1963, but, according to an article in Bloomberg Businessweek, it’s making a comeback.

This time, “magic mushrooms” are the focus of attention. Specifically, researchers are interested in psilocybin, the compound responsible for the hallucinations mushroom users experience. According to a very recent study, psilocybin accomplishes this by acting as a “super agonist” at serotonin receptors; serotonin is a neurotransmitter associated with everything from depression to learning to hallucination. By binding these receptors, psilocybin is thought to alter serotonin signaling and inhibit glutamate uptake, and the resulting G-protein signaling cascade is believed to trigger hallucinations, although the exact mechanisms are unknown.

Claimed clinical applications of the psychedelic compound range from anti-smoking therapy to relief of chronic “suicide headaches.” The most promising area of research concerns the effect of psilocybin on the psyche of patients diagnosed with terminal illness. In a recent study, Roland Griffiths, a neuroscientist at Johns Hopkins University, administered psilocybin to 36 subjects, none of whom had taken it before. The subjects were then observed during their “trip,” an experience some described as “one of the five most meaningful experiences of their lives.”

But that was expected. Psilocybin is known to produce mystical, sometimes even spiritual short-term experiences. What is remarkable is that, 14 months later, almost all of the volunteers reported viewing life through a better, more positive lens. According to a Wired.com article on the study, “over half [of the volunteers] reported substantial increases in life satisfaction and positive behavior, while no long-term negative effects were reported.”

Although the implications are immense for improving the quality of life of patients with depression, or of terminally ill patients suffering severe anxiety, I still do not know how I feel about the production of a drug derived from psilocybin. There certainly are cases where neurological intervention is necessary to improve quality of life, but a pill so closely related to a well-known hallucinogenic drug scares me. Is happiness caused by the neurological tweaking of a drug actually happiness?

Wednesday, November 30, 2011

The Joker


When I hear the word psychopath, I think of the Joker. For those of you who do not know who the Joker is, you’re clearly illiterate and describing him wouldn’t help. But, on the off chance that someone exists out there who is capable of reading, and has somehow managed to stay oblivious to the magic, mystery, and magnificence that is the Batman comic series, I’ll elucidate:

The Joker is Batman’s arch nemesis. Since his introduction in Batman #1, he has cared about nothing other than the destruction of Batman and Gotham City. He wears a purple jacket, dyes his hair green, and sports a permanent smile drawn on his face in blood-red makeup.

He is INSANE. But, is he a psychopath? A new study by neuroscientists at the University of Wisconsin-Madison suggests no.

In the study, prisoners of the Wisconsin Department of Corrections were analyzed using two methods: Diffusion Tensor Imaging (DTI), which depicts brain structural integrity, and fMRI scanning, which utilizes blood flow to measure neural activity. Half of the prisoners had been previously diagnosed as psychopaths, and the other half had been diagnosed as “normal.”

Compared to the “normal” prisoners, the brains of the psychopathic prisoners showed significantly less neural communication (fMRI) and fewer white matter connections (DTI) between the ventromedial prefrontal cortex, the brain area typically associated with empathy and guilt, and the amygdala, which is typically associated with fear and anxiety. As the authors of the study put it: “Those two structures in the brain [ventromedial prefrontal cortex and amygdala], which are believed to regulate emotion and social behavior, seem to not be communicating as they should.”

But this finding illustrates only part of the psychopathic mind. Lacking the ability to think and feel in a socially acceptable manner explains why psychopaths feel no remorse for their actions, but some other neurological phenomenon must explain what motivates their criminal actions. That something appears to be dopamine. Another recent study, by the National Institutes of Health, found psychopathic brains to possess a hyperactive dopamine reward system, which is thought to be the reason a psychopath will “keep seeking a reward at any cost…” even if that means indulging in “…violent crime, recidivism, and substance abuse.”

So, is the Joker a psychopath? My answer is no. The Joker only possesses half the ingredients. He’s impulsive and he’ll commit and recommit atrocities regardless of the consequences; he clearly has a hyperactive dopamine reward system. The Joker is out to prove a point, however. With every crime he commits, he tries to turn the population of Gotham City on one another. The Joker needs to prove to Gotham City, the world, and himself that all of humankind is as evil as he is. A psychopath commits a crime and then doesn’t see what he/she did wrong or why he/she should feel guilty. The Joker knows he did something wrong; he just has an agenda. At the end of the day, when the Joker looks at himself in the mirror, he feels guilty, but he knows what he did was necessary to prove his point.

The Joker is not a psychopath; he’s just a crazily determined murderer/robber/arsonist.

Wednesday, November 16, 2011

Disease and the Brain

“In human newborns, the brain demands 87 per cent of the body’s metabolic budget.”

This is the opening line of a paper recently published in the Proceedings of the Royal Society B: Biological Sciences, and it is on this line that the paper’s argument hinges. In the paper, the authors propose a new theory for differences in cognitive ability (as measured by IQ) across the globe. According to their “parasite-stress” hypothesis, there is a strong inverse relationship between the prevalence of infectious disease in a given region and that region’s average IQ scores.

Previous theories explaining IQ distributions have ranged from a country’s health and education systems to its gross domestic product. My favorite theory (it’s hard to type with sarcasm, but just know I’m currently doing it) claims there is a direct correlation between colder temperatures and increased IQs. According to the theory, only individuals with relatively high intelligence can survive in colder regions, as they must overcome more severe obstacles. As much as I want to believe that people from colder regions are smarter than those from hotter ones (I’m from Minnesota), I just don’t believe that colder regions present more of a challenge to survival than hotter ones. For example, aren’t hotter regions more likely to harbor prevalent infectious diseases?

Infectious diseases sap enormous amounts of energy from their hosts through various means, including tissue destruction, diarrhea, and immune system exhaustion. For this reason, the new paper argues, contracting a disease early in life can be extremely threatening to brain development. According to the study, disease prevalence correlated with inhibited cognitive development more strongly than any other factor examined (e.g., temperature, distance from Africa, GDP).

If disease is completely eradicated, then will everyone be a genius? I would guess no. Early exposure to diseases robs the developing brain of the energy it needs to reach its full potential. That being said, not everyone has the same potential for brain development. Energy availability is not the only factor in play. There are a variety of other genetic and epigenetic factors at work in and on all of us.

Wednesday, November 9, 2011

What Are You Thinking?





We have finally merged the fields of science and magic. Scientists can now read our minds…kind of.

Scientists at U.C. Berkeley have recently succeeded in reconstructing a subject’s visual experiences using nothing more than an fMRI machine, a computer, and some YouTube clips (18 million to be exact).

In their study, a subject was instructed to lie in an fMRI machine for a few hours and watch a stream of YouTube trailers while his or her brain activity (specifically blood flow to the visual cortex) was monitored and analyzed for specific reactions to each trailer. Eighteen million YouTube clips (not including any footage from the movie trailers) were then fed into the computer, which analyzed each clip, predicted how the human brain would react to it, and reconstructed a stream of videos as similar as possible to the one viewed by the subject. At the bottom of this post, I’ve posted a link to the two video streams (viewed and reconstructed) side by side. It’s pretty eerie how close they are.
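The reconstruction works, roughly, by running an “encoding model” in reverse: first learn to predict brain activity from video features, then search an enormous clip library for the clips whose predicted activity best matches the measured activity, and average them together. Below is a minimal Python sketch of that idea. It is not the Berkeley group’s actual pipeline; the feature representation, array shapes, and random stand-in data are all assumptions made purely for illustration.

```python
# Minimal sketch of encoding-model-based reconstruction (NOT the actual pipeline).
# Idea: (1) learn to predict voxel activity from clip features, (2) for a new
# brain response, rank library clips by how well their *predicted* activity
# matches it, (3) average the top-ranked clips into one blurry reconstruction.
import numpy as np

def fit_encoding_model(features, responses):
    """Least-squares map from clip features to measured voxel responses."""
    weights, *_ = np.linalg.lstsq(features, responses, rcond=None)
    return weights

def reconstruct(measured_response, library_features, library_frames, weights, top_k=100):
    predicted = library_features @ weights                # predicted brain response per clip
    errors = np.linalg.norm(predicted - measured_response, axis=1)
    best = np.argsort(errors)[:top_k]                     # clips that best explain the data
    return library_frames[best].mean(axis=0)              # average them into one image

# Hypothetical shapes: the 18-million-clip library is shrunk to 10,000 random
# stand-ins here just so the sketch runs quickly.
rng = np.random.default_rng(0)
train_feats, train_resp = rng.normal(size=(500, 64)), rng.normal(size=(500, 200))
lib_feats = rng.normal(size=(10_000, 64))
lib_frames = rng.random(size=(10_000, 32, 32))            # stand-ins for video frames
W = fit_encoding_model(train_feats, train_resp)
blurry = reconstruct(rng.normal(size=200), lib_feats, lib_frames, W)
```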

Although this isn’t exactly mind reading, it’s still pretty cool, and it is paving the way for new developments in “mind reading” technology. If we can decipher visual stimuli in a subject’s brain, how long until we can reproduce dreams? How about thoughts? Would it be possible to read the brain activity of a mute individual, decode what they are thinking, and translate it into machine assisted speech?

The possible benefits of this type of technology are endless, but they come with some interesting ethical dilemmas to discuss. Science can be an amazing force for good in an individual’s life, but there could come a point at which technology has developed too far and begins to encroach on basic human rights such as privacy and independence. “Mind reading” technology could potentially be used to extract from individuals information they wish to keep secret (this is exactly why our friends at DARPA are funding research similar to that of the U.C. Berkeley scientists), or even to influence a person’s thought processes.

So, it’s crazily cool and potentially extremely beneficial. But along with the good must come some bad. I, however, look forward to the day that technology has developed to the point where we must discuss some of these dilemmas.

http://www.youtube.com/watch?v=nsjDnYxJ0bo

Sunday, November 6, 2011

This post will begin a series of entries relating to research in Penn’s Stellar-Chance Laboratories under the supervision of Dr. Jean Bennett, M.D., Ph.D. I have been fortunate enough to acquire an internship in Dr. Bennett’s laboratory that has allowed, and will continue to allow, me to observe and aid in both the clinical and laboratory research aspects of her clinical trials of gene therapy for inherited retinal degenerations.

Dr. Bennett is currently the F.M. Kirby Professor of Ophthalmology. She earned a B.S. in biology with honors from Yale, a Ph.D. in zoology from Berkeley, and an M.D. from Harvard Medical School. Her current, and best-known, work consists of identifying and characterizing genes that are defective in blinding and currently untreatable hereditary retinal degenerations such as retinitis pigmentosa, macular degeneration, and choroideremia. To better understand the genetic disorders behind such conditions, Dr. Bennett began with a study of inherited blindness in dogs. The dogs in the experiment lacked a specific protein that supplies a nutrient the retina needs to carry out the electrical process that results in vision. The corresponding human disease is called Leber congenital amaurosis (LCA), and it often presents as dramatically worsening vision over the patient’s first twenty years of life; by age eighteen, patients are typically considered legally blind. After successfully developing a gene therapy using a viral vector (in which a working copy of the missing gene, packaged in a virus, is surgically injected into the host’s retina) that effectively cured the lab dogs of blindness, Dr. Bennett led her team in clinical trials on humans, even children.

I was lucky enough to have the opportunity to meet Dr. Bennett’s first young patient, Corey Haas. Corey, who is currently eleven years old and has been seeing dramatically better out of his left eye since his first injection with the viral vector two years ago, is currently at CHOP awaiting his second injection, this time for his right eye. This past week, Corey underwent retinal imaging with one of the most complex pieces of imaging machinery I have ever set my eyes on: UPenn is home to one of only four adaptive optics scanning laser ophthalmoscopy (AOSLO) machines in existence. At 32 frames per second, this machine can take microscopic images of individual retinal cells (the rods and cones). The camera focuses in two stages. Corey, or any patient, rests his head on a level surface and stares at a target through a set of lenses. During the first focusing stage, the computer digitally corrects for any imperfections in Corey’s cornea, such as those created by astigmatism. A second focusing stage then adjusts for axial length (the distance between Corey’s cornea and fovea) so as to get the clearest possible image of his retina. The target Corey looks at can be moved around so that different parts of the retina can be photographed. The images yield cone density data and can be read alongside fMRI images to identify areas of high cone activity. Cone photoreceptors appear as white dots on the black canvas of the retina, as shown in the image below.
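Counting those white dots is essentially how a cone density figure comes out of such an image. Here is a minimal sketch, not the lab’s actual analysis software, of one way the estimate could be made: smooth the frame, threshold it, count the bright blobs, and divide by the imaged area. The function name, the microns-per-pixel scale, and the synthetic test frame are all hypothetical.

```python
# Rough cone-density estimate from an AOSLO-style frame (illustrative sketch only).
import numpy as np
from scipy import ndimage

def estimate_cone_density(frame: np.ndarray, microns_per_pixel: float) -> float:
    """Return an approximate cone count per square millimeter of retina."""
    smoothed = ndimage.gaussian_filter(frame.astype(float), sigma=1.0)  # suppress noise
    threshold = smoothed.mean() + 2 * smoothed.std()                    # bright-spot cutoff
    _, n_cones = ndimage.label(smoothed > threshold)                    # count bright blobs
    area_mm2 = frame.shape[0] * frame.shape[1] * (microns_per_pixel / 1000.0) ** 2
    return n_cones / area_mm2

# Usage with a synthetic frame standing in for real AOSLO data:
rng = np.random.default_rng(0)
fake_frame = rng.poisson(5, size=(512, 512)).astype(float)
print(f"~{estimate_cone_density(fake_frame, microns_per_pixel=0.5):.0f} cones/mm^2")
```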

While LCA is a relatively rare condition, occurring in only about 1 out of every 50,000 newborns according to the US National Library of Medicine (http://ghr.nlm.nih.gov/condition/leber-congenital-amaurosis), the dramatic progress that Corey and his fellow patients are making under the care of Dr. Bennett and her team may expedite the development of gene therapy for more common retinal diseases, such as age-related macular degeneration.

Watch Corey’s Story Below!
http://www.youtube.com/watch?v=Z-VY64rSYr0

Saturday, November 5, 2011

Money or Sleep?


According to a new study from Cornell University, you probably chose money. When given the choice between an $80,000 salary that allows 7.5 hours of sleep and a $140,000 salary that allows only 6 hours, the majority of participants chose the $140,000. I’m not sure who has a choice like this in the real world, but even if they did, does this result make sense for people with souls (aka not Whartonites)?

According to the 99%, money doesn’t buy you happiness. According to Johnny Depp, “money doesn’t buy you happiness, but it buys you a big enough yacht to sail right up to it.” Germany seems to disagree. Despite their economic stability, only 41.1% of Germans report that they are “thriving,” compared with 52.9% in the economically “unstable” US. The numbers are reversed for “struggling,” with Germany at 53.1% and the US at 43.5%. How can this be when Germany has one of the lowest unemployment rates, the lowest average annual hours worked, and universal health care? Can Johnny Depp actually be wrong?

A recent Gallup poll showed that the economy and health care are two of the top concerns for Americans. Financial insecurity is a major cause of anxiety, depression, and even of seeking therapy in the US, so why do Americans seem happier than Germans? Unfortunately, not enough research has been done comparing German sleep patterns with American ones. However, the Great Land of the Free is known for its optimism, which can always make things look good even when they aren’t! We’re also known for our “laziness” (don’t kill the messenger), so maybe that’s all the sleep study we need. Either way, a good mood and a few extra hours of sleep do not sound like bad advice for anyone. And if you feel really bad about choosing money over sleep, or vice versa, then move to Germany, where you can have 7.5 hours of sleep and $140,000 too! Go Deutschland!

Thursday, November 3, 2011

The Power of Loss

In the 2009 NFC championship, the Minnesota Vikings faced off against the New Orleans Saints for a berth in the Super Bowl and a fairy-tale ending to the career of Minnesota’s then newly acquired veteran quarterback, Brett Favre. The Saints were the clear favorites in the matchup – they were younger, they were faster, they were stronger. But, to the surprise of everyone watching, as the clock ticked down the remaining seconds of the game, Favre and his Vikings were tied with the Saints and on the forty-yard line, on the edge of field goal range. One field goal, that was all the Vikings needed, and with Ryan Longwell, one of the league’s best kickers, waiting on the sideline, it looked like Minnesota fans were finally going to have something to celebrate. Then Favre did something unusual. Instead of calling a conservative play, essentially something to use up a down so that Longwell could come onto the field and propel the Vikings into history, he threw an interception, and the Vikings went on to lose the game.

In order to understand the thought process behind this now infamous stain on the illustrious career of Brett Favre, we must try to put ourselves in Favre’s shoes. On third down, on the forty-yard line, we have two choices: run the ball or throw the ball. Now, let’s analyze the potential outcomes of these two choices from an economic perspective. The first course of action, running the ball, would likely move the ball only an insignificant amount closer to the end zone, slightly increasing Longwell’s chances of making a field goal. Running, however, also has a very low potential for devastating losses, as fumbling the ball is a rare occurrence. The second course of action, passing the ball, could benefit the team greatly, in that it would push the line of scrimmage significantly closer to the end zone and greatly increase Longwell’s probability of making a field goal, but it also has a relatively high potential for significant loss, in the form of an interception, which would yield the ball, and as it turned out the game, to the Saints. Presented with this scenario, low risk for low gains vs. high risk for high gains, most people would choose the former, as I will discuss later. Favre, however, did the opposite, and was forced to live with what we all fear: extreme loss.
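To make that trade-off concrete, here is a toy expected-value comparison of the two play calls. Every probability and payoff below is a number I made up purely to illustrate “low risk / low gain” versus “high risk / high gain”; none of it comes from the actual game.

```python
# Toy expected-value comparison of "run" vs. "pass" on that third down.
# All probabilities and win-probability payoffs are hypothetical.
def expected_win_prob(outcomes):
    """outcomes: list of (probability, win_probability_after_outcome)."""
    return sum(p * win for p, win in outcomes)

run = [(0.97, 0.55),       # small gain, long field goal attempt
       (0.03, 0.10)]       # rare fumble
passing = [(0.55, 0.75),   # completion, much easier field goal
           (0.35, 0.50),   # incompletion, same long attempt
           (0.10, 0.05)]   # interception: ball (and likely game) to the Saints

print(f"run:  {expected_win_prob(run):.2f}")
print(f"pass: {expected_win_prob(passing):.2f}")
# Even when the pass has the higher expected value, the salient 10% chance of a
# catastrophic loss is what loss-averse fans (and coaches) react to.
```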

It has long been accepted that, as stated in the famous paper “Prospect Theory: An Analysis of Decision under Risk” by the psychologists Kahneman and Tversky, human beings are averse to losses. Don’t believe me? Then take, for example, this survey question posed by Daniel Kahneman in his new book Thinking, Fast and Slow:

The U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. If program A is adopted, 200 people will be saved. If program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. Which of the two programs would you favor?

The first program is the seemingly obvious correct choice, as it guarantees a gain. Now here’s the second scenario:

The U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. If program C is adopted, 400 people will die. If program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die. Which of the two programs would you favor?

Although this is exactly the same scenario, we suddenly have to stop and think. Framed in relation to loss rather than gain, an easy decision has suddenly become a complex dilemma that gets jammed somewhere in the frontal cortex.
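For anyone who wants to see the equivalence spelled out, here is a quick back-of-the-envelope check; it is only a minimal sketch, and the numbers come straight from the survey question above.

```python
# Check that the "saved" and "die" framings describe the same outcomes.
# Expected number of survivors out of 600 under each program:
programs = {
    "A (200 saved for sure)":       200,
    "B (1/3 chance all 600 saved)": (1/3) * 600 + (2/3) * 0,
    "C (400 die for sure)":         600 - 400,
    "D (2/3 chance all 600 die)":   (1/3) * 600 + (2/3) * (600 - 600),
}
for name, survivors in programs.items():
    print(f"Program {name}: {survivors:.0f} expected survivors")
# Every program works out to 200 expected survivors; only the framing differs
# (A and C are certain outcomes, B and D are gambles).
```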

So, humans are averse to loss, more so than we are attracted to gains. Think about it: even though there was a one-third probability in program D that everyone would live, was that enough to counterbalance the chance that everyone could die? Even in situations where gains clearly outweigh potential losses, Kahneman and Tversky argue in their paper, humans will more than likely act irrationally, reacting not to a reasonable evaluation of costs and benefits, as economics would like us to, but to the ominous threat of a loss, always lurking around the corner. But if we know we act irrationally in the face of potential loss, can we not use that knowledge as a tool to fight our own instincts and act in a rational manner?
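Kahneman and Tversky later formalized this asymmetry with a value function that is concave for gains, convex for losses, and roughly twice as steep for losses. Here is a minimal sketch using the parameter estimates commonly cited from their 1992 follow-up work; treat the exact numbers as illustrative rather than as the original paper’s fit.

```python
# Prospect-theory value function: losses loom larger than equal-sized gains.
# Parameters are the commonly cited Tversky & Kahneman (1992) median estimates.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(x: float) -> float:
    """Subjective value of a gain or loss of size x, relative to a reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

print(value(100))                # subjective value of gaining 100
print(value(-100))               # losing 100 feels roughly 2.25x as intense
print(value(100) + value(-100))  # so a 50/50 gamble over +/-100 feels like a net loss
```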

In his new book, Kahneman analyzes the idea that self-knowledge should enable humans to correct their irrational actions and behave just as economists would like. Kahneman claims that such self-knowledge is, well, worthless. I’ll offer myself as an example. I’ve taken economics, and I understand the basics of cost-benefit analysis. Yet, when constructing my second-semester schedule this past week, did I sign up for a writing seminar, as would be the smart thing to do? Nope. I’m scared of the time I’ll lose to writing essay after essay in that class, and I don’t care that the benefits of getting it out of the way early may outweigh those costs. Maybe you think differently than I do. Maybe you are able to analyze costs and benefits and make a rational choice, completely independent of loss aversion. Brett Favre did, and look where it got him.