Saturday, December 1, 2012

’Tis the Season of Spending


By: Maura Weber

Rather than venture out into the endless abyss of long lines and packed stores, I have always spent Black Friday putting up lights with my dad while blasting Christmas music through the suburbs of Philadelphia.  As many shoppers search the web for bargains, we check the Weather Channel, hoping that Friday’s forecast will be mild and rain-free.

In many ways, Black Friday is social psychology at its finest, as shoppers scramble for the best bargains to kick off the holiday season.  I consider myself a frugal shopper, and yet in the years I have gone out shopping on Black Friday, I have always found myself dropping much more cash than usual.  A recent Huffington Post article explored the psychology behind Black Friday shopping and splurging.

The article cites two main principles responsible for retailers’ success on Black Friday.  The first is the scarcity principle: the idea that if something is rare, we should try to get our hands on it as soon as possible.  This leads to impulsive purchases, as consumers focus on buying the latest trends rather than taking time to consider whether an item is really something they or a loved one would want.  Retailers enhance this effect with the time crunch of “one-day-only sales.”  I remember calling my mom freshman year about Ann Taylor’s 40% off sale that was supposedly running for only one day… the sale ended up lasting over a week.  Advertisements like these push our brains toward emotional responses and bypass the rational decision-making processes needed for frugal, smart shopping.

The second principle shoppers often fall under the spell of is social proof: the idea of justifying your behavior based on what others around you are doing.  For instance, people might justify standing outside in the cold for hours because “if others are doing it, it must be worth it!”  In this way, many people rationalize the negative aspects of Black Friday, such as long lines and large crowds, because they have already committed to going shopping and want their experience to match their expectations.

Whatever a person’s reason for Black Friday shopping, it is important to remember that while stores may advertise “unbeatable bargains,” a retailer’s main goal is to make money, so the deals are probably not as unbeatable as they seem.  Still, there is something to be said for the excitement Black Friday brings for the upcoming holidays.  The trick is not to get so wrapped up in the commotion of holiday spending that you miss the more important aspects of the season.  Simple habits, such as setting a budget and deciding which gifts to buy before aimlessly venturing into the malls, have been shown to help shoppers keep their spending in check.

Wednesday, November 28, 2012

Neuroethics


By: Jenna Hebert

Two recent New York Times articles reflect on the ethical ramifications of the latest findings in neuroscience research. In his provocative piece “Can Neuroscience Challenge Roe v. Wade?,” William Egginton questions neuroscience’s place in a case that could potentially overturn Roe v. Wade. An Idaho statute, the “Pain-Capable Unborn Child Protection Act,” and others like it cite recent findings of pain sentience in fetuses as grounds for outlawing abortion. Egginton criticizes the government’s use of research from the natural sciences in general as evidence for expanding or contracting citizens’ rights. He refers to Immanuel Kant’s argument that while science can tell us much about the world we live in, it can tell us nothing about the existence of God, the immortality of the soul, or the origin of human freedom; should science try to reach conclusions about these questions, it would necessarily fall into error. Egginton explains that regardless of whether neuroscience can tell us something about fetuses and their ability to feel pain, it cannot answer the big, fundamental questions about what counts as a full-fledged human deserving of all of his or her constitutional rights. Science should complement thinking, not replace it.
David Duncan touches on a related controversial issue in his article “How Science Can Build a Better You,” published a few days later. He begins the piece by asking, “If a brain implant were safe and available and allowed you to operate your iPad or car using only thought, would you want one? What about an embedded device that gently bathed your brain in electrons and boosted memory and attention? Would you order one for your children?” He explores the possibility that in two to three decades, technology could not only improve life for the impaired but also enhance life for the healthy. Some scientists oppose the use of such technology for the non-impaired, noting that college students around the country, for example, are already abusing Adderall and Provigil to stay up late studying for exams. Nevertheless, the “Age of Enhancement” seems inevitable. Neuroscientists, for instance, have developed a pill that might improve the memory of patients with dementia; perhaps the same pill will one day be used to enhance the memory of healthy people. A brain implant has already partially restored hearing in more than 200,000 deaf people. Who is to say that anybody who can afford it will not someday use such a device to hear better? Some neuroscientists even believe that we will be able to create drugs that alter enzymes connected with genes in the brain that control dopamine levels.
Like Egginton, however, Duncan is cautious about how we use findings in neuroscience, and in science in general. Specifically, he fears that these expensive technologies would widen the already large gap between the rich and lower-income families. He even proposes that such artificial enhancements may challenge what it fundamentally means to be human. As we advance in science, particularly in a field progressing as rapidly as neuroscience, we need to take a step back and make sure we are constantly considering the ethical implications of new findings.

Thursday, November 22, 2012

It might not take a concussion...

...to induce brain changes related to head impacts. A recent study showed that professional soccer players who had not gotten a concussion still showed signs of traumatic brain injury, most likely from frequent, unprotected headers. (Source: http://healthland.time.com/2012/11/13/study-soccer-players-without-concussions-still-have-brain-changes/#ixzz2CA6pIuz7)

What exactly are these "changes" in the brain? Are they necessarily bad?

The study found differences in the white matter of the brain, which includes nerve fibers and their myelin coatings. These components are crucial to forming the networks necessary for cognition.

Scans were done on 10 professional soccer players who had never had a concussion and compared to those of 10 competitive swimmers. The researchers used diffusion tensor imaging (DTI), which can reveal microscopic white matter changes that conventional MRI misses.

The article reminds us that the study has limitations: concussion still lacks a precise clinical definition, more studies are needed in specific age groups, and professional athletes may not report having had a concussion since doing so is not in their career interest.

But how would more knowledge of the extent of traumatic brain injury caused by frequent head impacts affect the sports world? It seems like a pain for both refs and athletes to try to put safety limitations in place, such as mandatory time out of a game after some number of hits.

And would regulations or limitations cause concussions or less severe impacts to be underreported, with players risking their health for the sake of preserving their eligibility? (For an analogy, consider how alcohol poisoning reports at Penn might change without the alcohol amnesty policy...)

For now, I suppose the best we can do is try to figure out just how extensive and how harmful these white matter changes actually are.



Thursday, November 15, 2012

Right to Die


By Jenny Brodsky

I lost my best friend to suicide a little over three years ago, and to this day I still wonder whether he could have gotten better or whether choosing death was the right thing for him.  The right to die has been debated for many years, but it is not common knowledge that assisted suicide is legal in Washington and Oregon, or that there are groups around the nation illegally helping people end their lives.  The article “In ‘The Suicide Plan,’ Frontline Explores Hidden World of Assisted Suicide” discusses a PBS program, aired on November 13, 2012, that presents the right-to-die debate from the perspective of those choosing to end their lives.
The article starts by introducing a woman who, after 50 years of marriage, watched her husband die a slow and painful death from lung cancer.  After this experience she decided she would die on her own terms.  She found a group called Compassion and Choices and ordered 60 pills from them, which she would take over the course of 15 minutes to end her life.  This end-of-life group has been working underground in a world of assisted suicide that many don’t even know exists.
Aside from the obvious controversy, we must consider whether anyone of sound mind would want to commit suicide.  Suicide has been investigated for many years, and there is strong evidence that the brain’s serotonergic system is involved in suicidal behavior.  Serotonin has been linked to psychiatric disorders such as depression and schizophrenia, and when serotonin levels are low, the brain has been observed to show behavioral disinhibition, including impulsivity.
So we must ask ourselves: can someone who is completely psychologically sound want to die?  And if not, wouldn’t the right thing be to treat such people rather than assist their impulsive desires?  Furthermore, if assisted suicide becomes legal in some cases, where do we draw the line?  This issue may never be fully settled, but it is clear that scientists will need more neurological evidence to know whether suicide is in fact the result of a psychological imbalance or, for some, a rational decision.


Thursday, November 8, 2012

Unfinished Business, the Zeigarnik Effect, and Why You're Addicted to Tetris.

By: Veena Krish


Tetris is undoubtedly one of the most successful game franchises in history. One hundred million copies of Tetris have been downloaded to cell phones, a special edition has sold for over $15,000, and a man in England was once jailed for four months for playing the cell phone version on a flight, "endangering the safety of the aircraft" (Guinness World Records 2011, 2010, 2008).

Why the addiction to Tetris? Standard models in psychology and in game theory have proposed reasons for this fascination, and Tom Stafford of "Neurohacks" recently compiled them in an article, "The Psychology of Tetris".

One of these reasons first drew attention in the early 20th century. In the 1930s, the Russian psychologist Bluma Zeigarnik noticed that busy waiters had near-perfect memory of orders, right up until the food was delivered. Once the orders were complete, they were, for the most part, instantly forgotten. Zeigarnik hypothesized that the waiters could remember the orders only while serving because of the potency of incomplete transactions, and she theorized that unfinished tasks hold our attention by clinging to active memory until they are completed.

Zeigarnik followed up her observations by measuring such retention in a lab setting: she asked participants to complete small tasks and afterwards asked which ones they remembered doing. Some participants were interrupted while completing their tasks and others were not; ultimately, those who were interrupted remembered more of the activities. This disparity supported her theory: incomplete tasks, no matter how trivial, tend to nag at our subconscious and cling to our active memories until they are resolved, after which the memories quickly fade. The Zeigarnik Effect is now used to describe how the tension created by incomplete tasks affects our memory and attention.

The Zeigarnik Effect can be used to explain why the game holds our attention so well. Combined with our natural proclivity to clean up messes (Stafford quips, "Many human games are basically ritualized tidying up"), the Zeigarnik Effect suggests that a falling Tetris block bothers us until it has found its place on the ground. As soon as that one task is complete, the game presents us with another challenge that picks at our brain until it, too, is complete, and we can't escape.

Since then, the Zeigarnik Effect has been studied in numerous contexts. Stafford describes quiz shows commanding attention because of the irritation caused by not knowing the answers to the questions posed. The effect may also suggest that study breaks, which offer interruptions in long sessions of memorization, are beneficial to learning. Other experiments have shown that the "nagging" on our active memory helps us avoid procrastination.

The effect has also been studied as a way to understand traumatic memory. Many believe the tension created by an unresolved traumatic experience can provoke unwanted memories; until this tension is relieved by talking about the experience or writing the story down, victims are more likely to relive the events.

Controversy has arisen concerning the validity of Zeigarnik's conclusions, and the reason for the effect remains a mystery. One explanation proposes that certain systems in the brain encourage attentiveness and goal-orientation; while such attentiveness may be essential for completing tasks, it can also have unintended effects. In Stafford's words, "Like a clever parasite, Tetris takes advantage of the mind's basic pleasure in getting things done and uses it against us".


Tuesday, October 30, 2012

New Therapies for Stroke Victims Increase Neuroplasticity


By: Kevin O'Sullivan

Twenty years ago, scientists believed most brain development occurred during a child’s first two years.  After a child turned two, the prevailing theory held that no new brain cells developed, and that any increase in neural function resulted from greater efficiency of existing neurological pathways. Scientists also believed that during adulthood the physiological structure of the brain remained mostly unchanged; brain structure would only change after head trauma, poor health habits, or other outside stimuli that resulted in the loss of brain cells. In short, popular medical theory held that the adult brain could only physiologically change for the worse.
Today, most neurologists support the theory of neuroplasticity. By definition, neuroplasticity is the ability of the human brain to physiologically alter itself in response to stimuli; simply put, the theory suggests that our brains are physically shaped by our experiences. On this view, our brain function can not only deteriorate from our experiences, as scientists suggested twenty years ago, but can also improve.
In the medical community, some doctors are trying to apply the theory of neuroplasticity to stroke patients.  After a stroke, one of the most common side effects is loss of motor function, and upon examining an affected patient’s nervous system, doctors have found that neural re-organization has almost always occurred in patients who have lost some motor function. If this neural re-organization were corrected, the patient could be expected to regain most, if not all, of the lost motor function.
One effective treatment doctors have found is nervous system stimulation. By administering low-voltage electrical stimuli to both the brain and the peripheral nervous system, doctors have been able to increase the plasticity of neurons and improve some motor function. Why these electrical stimuli increase plasticity is not fully understood, but some scientists believe the improvements are associated with changes in synaptic activity, gene expression, and neurotransmitter levels. With an increased understanding of neuroplasticity, non-invasive treatments based on electrical stimulation could become extremely effective in the near future.

Thursday, October 25, 2012

The synesthesia gene: why did it survive?



By: Jenna Hebert

Most of the blog’s readers have probably heard of synesthesia, a phenomenon in which “stimuli presented through one modality will spontaneously evoke sensations in an unrelated modality.” In other words, some people can see music, taste colors, and associate numbers or letters with colors. This raises some interesting questions: where did this trait come from, and why has it been conserved in the population? While it would certainly make listening to music or reading a vivid experience, synesthesia has no evolutionary advantages...right? To address these questions, V. S. Ramachandran, a renowned neurologist at UC San Diego, investigated the neural basis of the condition. He proposed that synesthesia results from an excess of neural connections between different modules in the brain: regions that are interconnected in the fetus do not completely separate, leading to cross-wiring. While there is no definitive proof that synesthesia has a genetic basis, the trait does tend to run in families, suggesting that it is transmitted from parent to offspring.
Ramachandran considers several explanations for why the synesthesia gene was conserved. Since the trait is neither deleterious nor advantageous, perhaps natural selection never selected for or against the gene. It is also possible that everyone falls somewhere along a synesthesia spectrum, and those we identify as true synesthetes are simply at the tail end. A more interesting explanation considers the possibility that synesthesia might in fact be advantageous. For example, the gene is frequent among artists, musicians, and other people who spend a significant amount of time on creative activities. Ramachandran suggests that because synesthesia results from cross-wiring between different modules in the brain, it is conducive to creativity and innovation; creativity is, after all, combining ideas or things in novel ways. Another possible advantage of synesthesia is a prodigious memory. Because synesthetes associate things with more than one sense, numbers or letters become more salient; Daniel Tammet, for example, was able to memorize pi to 22,514 digits using synesthetic associations. Ramachandran also proposes that synesthetes have enhanced sensory processing: depending on the type of synesthesia, they perform better than control subjects at discriminating between similar colors and demonstrate increased tactile acuity. Synesthesia and its origins clearly present a fascinating mystery to neuroscientists, and I am eager to see what future research will find.

Source: David Brang and V. S. Ramachandran, “Survival of the Synesthesia Gene: Why Do People Hear Colors and Taste Words?”

Tuesday, October 23, 2012

Why Doctors Should Read More Books


By: Beatriz Gadala-Maria

A not-so-recent article in the New York Times addressed a question that I think runs through the mind of most science majors: “Why the humanities?” In our fast-paced academic environment, we are quick to dismiss classes that seem pointless for our future careers (e.g., the infamous Writing Seminar) and tempted to fill sector requirements with easy classes that boost our GPAs but do little for our academic growth. After all, when will a doctor incorporate classic literature into her career?

It turns out that reading novels (and even watching movies) is more beneficial than we might have imagined. Research has shown that “individuals who frequently read fiction seem to be better able to understand other people, empathize with them and see the world from their perspective.” Movies, but not television, have a similar effect on our brains. This phenomenon is explained by an overlap between the brain networks used to understand stories and those used in interactions with other individuals, especially interactions involving the thoughts and feelings of others. Stories and dramas ultimately act as simulations that help us understand the complexities of real life. This understanding can lead to greater empathy in human interactions, an important skill in any future career; for students who want to be doctors or psychiatrists, such empathy can be particularly critical in interactions with future patients.

Literature has many other beneficial and interesting effects on our brain. Besides stimulating the areas commonly associated with speech and language, such as Broca’s Area and Wernike’s Area, similes and metaphors have the power to stimulate areas in our brain associated with scent and taste, depending on what they describe. In a Spanish study conducted in 2006, when participants read words such as “coffee” and “perfume” their primary olfactory cortex (the area in our brains associated with smell) lit up in an fMRI. In another study, when participants read metaphors dealing with sensation, their sensory cortex became activated. Similarly, phrases regarding motion lead to activation of motor cortices. For our brain, these neurological events are undistinguishable from those that occur when we actually experience what we read about. Neuroimaging technology has proven that literature and fiction are more powerful than we previously could have ever imagined, making the humanities more relative to our everyday lives and future science careers than we would have previously considered.