Tag Archive: Cognition

Why Are Elderly Duped? Area in Brain Where Doubt Arises Changes With Age (Science Daily)

ScienceDaily (Aug. 16, 2012) — Everyone knows the adage: “If something sounds too good to be true, then it probably is.” Why, then, do some people fall for scams and why are older folks especially prone to being duped?

The answer, it seems, is that a specific area of the brain has deteriorated or been damaged, according to researchers at the University of Iowa. By examining patients with various forms of brain damage, the researchers report they’ve pinpointed the precise location in the human brain, called the ventromedial prefrontal cortex, that controls belief and doubt, and which explains why some of us are more gullible than others.

“The current study provides the first direct evidence beyond anecdotal reports that damage to the vmPFC (ventromedial prefrontal cortex) increases credulity. Indeed, this specific deficit may explain why highly intelligent vmPFC patients can fall victim to seemingly obvious fraud schemes,” the researchers wrote in the paper published in a special issue of the journal Frontiers in Neuroscience.

A study conducted for the National Institute of Justice in 2009 concluded that nearly 12 percent of Americans 60 and older had been exploited financially by a family member or a stranger. And, a report last year by insurer MetLife Inc. estimated the annual loss by victims of elder financial abuse at $2.9 billion.

The authors point out their research can explain why the elderly are vulnerable.

“In our theory, the more effortful process of disbelief (to items initially believed) is mediated by the vmPFC, which, in old age, tends to disproportionately lose structural integrity and associated functionality,” they wrote. “Thus, we suggest that vulnerability to misleading information, outright deception and fraud in older adults is the specific result of a deficit in the doubt process that is mediated by the vmPFC.”

The ventromedial prefrontal cortex is an oval-shaped lobe about the size of a softball lodged in the front of the human head, right above the eyes. It’s part of a larger area known to scientists since the extraordinary case of Phineas Gage that controls a range of emotions and behaviors, from impulsivity to poor planning. But brain scientists have struggled to identify which regions of the prefrontal cortex govern specific emotions and behaviors, including the cognitive seesaw between belief and doubt.

The UI team drew from its Neurological Patient Registry, which was established in 1982 and has more than 500 active members with various forms of damage to one or more regions in the brain. From that pool, the researchers chose 18 patients with damage to the ventromedial prefrontal cortex and 21 patients with damage outside the prefrontal cortex. Those patients, along with people with no brain damage, were shown advertisements mimicking ones flagged as misleading by the Federal Trade Commission to test how much they believed or doubted the ads. The deception in the ads was subtle; for example, an ad for “Legacy Luggage” that trumpets the gear as “American Quality” turned on the consumer’s ability to distinguish whether the luggage was manufactured in the United States versus inspected in the country.

Each participant was asked to gauge how much he or she believed the deceptive ad and how likely he or she would buy the item if it were available. The researchers found that the patients with damage to the ventromedial prefrontal cortex were roughly twice as likely to believe a given ad, even when given disclaimer information pointing out it was misleading. And, they were more likely to buy the item, regardless of whether misleading information had been corrected.

“Behaviorally, they fail the test to the greatest extent,” says Natalie Denburg, assistant professor in neurology who devised the ad tests. “They believe the ads the most, and they demonstrate the highest purchase intention. Taken together, it makes them the most vulnerable to being deceived.” She added the sample size is small and further studies are warranted.

Apart from being damaged, the ventromedial prefrontal cortex begins to deteriorate as people reach age 60 and older, although the onset and the pace of deterioration varies, says Daniel Tranel, neurology and psychology professor at the UI and corresponding author on the paper. He thinks the finding will enable doctors, caregivers, and relatives to be more understanding of decision making by the elderly.

“And maybe protective,” Tranel adds. “Instead of saying, ‘How would you do something silly and transparently stupid,’ people may have a better appreciation of the fact that older people have lost the biological mechanism that allows them to see the disadvantageous nature of their decisions.”

The finding corroborates an idea studied by the paper’s first author, Erik Asp, who wondered why damage to the prefrontal cortex would impair the ability to doubt but not the initial belief as well. Asp created a model, which he called the False Tagging Theory, to separate the two notions and confirm that doubt is housed in the prefrontal cortex.

“This study is strong empirical evidence suggesting that the False Tagging Theory is correct,” says Asp, who earned his doctorate in neuroscience from the UI in May and is now at the University of Chicago.

Kenneth Manzel, Bryan Koestner, and Catherine Cole from the UI are contributing authors on the paper. The National Institute on Aging and the National Institute of Neurological Disorders and Stroke funded the research.

Interest in Arts Predicts Social Responsibility (Science Daily)

ScienceDaily (Aug. 16, 2012) — If you sing, dance, draw, or act — and especially if you watch others do so — you probably have an altruistic streak, according to a study by researchers at the University of Illinois at Chicago.

People with an active interest in the arts contribute more to society than those with little or no such interest, the researchers found. They analyzed arts exposure, defined as attendance at museums and dance, music, opera and theater events; and arts expression, defined as making or performing art.

“Even after controlling for age, race and education, we found that participation in the arts, especially as audience, predicted civic engagement, tolerance and altruism,” said Kelly LeRoux, assistant professor of public administration at UIC and principal investigator on the study.

In contrast to earlier studies, Generation X respondents were found to be more civically engaged than older people.

LeRoux’s data came from the General Social Survey, conducted since 1972 by the National Data Program for the Sciences, known by its original initials, NORC. A national sample of 2,765 randomly selected adults participated.

“We correlated survey responses to arts-related questions to responses on altruistic actions — like donating blood, donating money, giving directions, or doing favors for a neighbor — that place the interests of others over the interests of self,” LeRoux said. “We looked at ‘norms of civility.’ Previous studies have established norms for volunteering and being active in organizations.”

The researchers measured participation in neighborhood associations, church and religious organizations, civic and fraternal organizations, sports groups, charitable organizations, political parties, professional associations and trade unions.

They measured social tolerance by two variables:

  • Gender-orientation tolerance, measured by whether respondents would agree to having gay persons speak in their community or teach in public schools, and whether they would oppose having homosexually themed books in the library.
  • Racial tolerance, measured by responses regarding various racial and ethnic groups, including African-Americans, Hispanics, and Asian Americans. Eighty percent of the study respondents were Caucasian, LeRoux said.

The researchers measured altruistic behavior by whether respondents said they had allowed a stranger to go ahead of them in line, carried a stranger’s belongings, donated blood, given directions to a stranger, lent someone an item of value, returned money to a cashier who had given too much change, or looked after a neighbor’s pets, plants or mail.

“If policymakers are concerned about a decline in community life, the arts shouldn’t be disregarded as a means to promote an active citizenry,” LeRoux said. “Our positive findings could strengthen the case for government support for the arts.”

The study was based on data from 2002, the most recent year in which the General Social Survey covered arts participation. LeRoux plans to repeat the study with results from the 2012 survey, which will include arts data.

Irony Seen Through the Eye of MRI (Science Daily)

ScienceDaily (Aug. 3, 2012) — In the cognitive sciences, the capacity to interpret the intentions of others is called “Theory of Mind” (ToM). This faculty is involved in the understanding of language, in particular by bridging the gap between the meaning of the words that make up a statement and the meaning of the statement as a whole.

In recent years, researchers have identified the neural network dedicated to ToM, but no one had yet demonstrated that this set of neurons is specifically activated by the process of understanding an utterance. This has now been accomplished: a team from L2C2 (Laboratoire sur le Langage, le Cerveau et la Cognition, Laboratory on Language, the Brain and Cognition, CNRS / Université Claude Bernard-Lyon 1) has shown that activation of the ToM neural network increases when an individual is reacting to ironic statements.

Published in Neuroimage, these findings represent an important breakthrough in the study of Theory of Mind and linguistics, shedding light on the mechanisms involved in interpersonal communication.

In our communications with others, we are constantly thinking beyond the basic meaning of words. For example, if asked, “Do you have the time?” one would not simply reply, “Yes.” The gap between what is said and what it means is the focus of a branch of linguistics called pragmatics. In this field, “Theory of Mind” (ToM) gives listeners the capacity to fill this gap. In order to decipher the meaning and intentions hidden behind what is said, even in the most casual conversation, ToM relies on a variety of verbal and non-verbal elements: the words used, their context, intonation, “body language,” etc.

Within the past 10 years, researchers in cognitive neuroscience have identified a neural network dedicated to ToM that includes specific areas of the brain: the right and left temporal parietal junctions, the medial prefrontal cortex and the precuneus. To identify this network, the researchers relied primarily on non-verbal tasks based on the observation of others’ behavior[1]. Today, researchers at L2C2 (Laboratoire sur le Langage, le Cerveau et la Cognition, Laboratory on Language, the Brain and Cognition, CNRS / Université Claude Bernard-Lyon 1) have established, for the first time, the link between this neural network and the processing of implicit meanings.

To identify this link, the team focused their attention on irony. An ironic statement usually means the opposite of what is said. In order to detect irony in a statement, the mechanisms of ToM must be brought into play. In their experiment, the researchers prepared 20 short narratives in two versions, one literal and one ironic. Each story contained a key sentence that, depending on the version, yielded an ironic or literal meaning. For example, in one of the stories an opera singer exclaims after a premiere, “Tonight we gave a superb performance.” Depending on whether the performance was in fact very bad or very good, the statement is or is not ironic.

The team then carried out functional magnetic resonance imaging (fMRI) analyses on 20 participants who were asked to read 18 of the stories, chosen at random, in either their ironic or literal version. The participants were not aware that the test concerned the perception of irony. The researchers had predicted that the participants’ ToM neural networks would show increased activity in reaction to the ironic sentences, and that was precisely what they observed: as each key sentence was read, the network activity was greater when the statement was ironic. This shows that this network is directly involved in the processes of understanding irony, and, more generally, in the comprehension of language.

Next, the L2C2 researchers hope to expand their research on the ToM network in order to determine, for example, whether test participants would be able to perceive irony if this network were artificially inactivated.

Note:

[1] For example, Grèzes, Frith & Passingham (J. Neuroscience, 2004) showed a series of short (3.5 second) films in which actors came into a room and lifted boxes. Some of the actors were instructed to act as though the boxes were heavier (or lighter) than they actually were. Having thus set up deceptive situations, the experimenters asked the participants to determine if they had or had not been deceived by the actors in the films. The films containing feigned actions elicited increased activity in the rTPJ (right temporal parietal junction) compared with those containing unfeigned actions.

Journal Reference:

Nicola Spotorno, Eric Koun, Jérôme Prado, Jean-Baptiste Van Der Henst, Ira A. Noveck. Neural evidence that utterance-processing entails mentalizing: The case of irony. NeuroImage, 2012; 63 (1): 25. DOI: 10.1016/j.neuroimage.2012.06.046

Teen Survival Expectations Predict Later Risk-Taking Behavior (Science Daily)

ScienceDaily (Aug. 1, 2012) — Some young people’s expectations that they will not live long, healthy lives may actually foreshadow such outcomes.

New research published August 1 in the open access journal PLOS ONE reports that, for American teens, the expectation of death before the age of 35 predicted increased risk behaviors including substance abuse and suicide attempts later in life and a doubling to tripling of mortality rates in young adulthood.

The researchers, led by Quynh Nguyen of Northeastern University in Boston, found that one in seven participants in grades 7 to 12 reported perceiving a 50-50 chance or less of surviving to age 35. Upon follow-up interviews over a decade later, the researchers found that low expectations of longevity at young ages predicted increased suicide attempts and suicidal thoughts as well as heavy drinking, smoking, and use of illicit substances later in life relative to their peers who were almost certain they would live to age 35.

“The association between early survival expectations and detrimental outcomes suggests that monitoring survival expectations may be useful for identifying at-risk youth,” the authors state.

The study compared data collected from 19,000 adolescents in 1994-1995 to follow-up data collected from the same respondents 13-14 years later. The cohort was part of the National Longitudinal Study of Adolescent Health (Add Health), conducted by the Carolina Population Center and funded by the National Institutes of Health and 23 other federal agencies and foundations.

Journal Reference:

Quynh C. Nguyen, Andres Villaveces, Stephen W. Marshall, Jon M. Hussey, Carolyn T. Halpern, Charles Poole. Adolescent Expectations of Early Death Predict Adult Risk Behaviors. PLoS ONE, 2012; 7 (8): e41905. DOI: 10.1371/journal.pone.0041905

Brain Imaging Can Predict How Intelligent You Are: ‘Global Brain Connectivity’ Explains 10 Percent of Variance in Individual Intelligence (Science Daily)

ScienceDaily (Aug. 1, 2012) — When it comes to intelligence, what factors distinguish the brains of exceptionally smart humans from those of average humans?

New research suggests as much as 10 percent of individual variances in human intelligence can be predicted based on the strength of neural connections between the lateral prefrontal cortex and other regions of the brain. (Credit: WUSTL Image / Michael Cole)

As science has long suspected, overall brain size matters somewhat, accounting for about 6.7 percent of individual variation in intelligence. More recent research has pinpointed the brain’s lateral prefrontal cortex, a region just behind the temple, as a critical hub for high-level mental processing, with activity levels there predicting another 5 percent of variation in individual intelligence.

Now, new research from Washington University in St. Louis suggests that another 10 percent of individual differences in intelligence can be explained by the strength of neural pathways connecting the left lateral prefrontal cortex to the rest of the brain.
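Because “explains X percent of variance” means a squared correlation, the percentages above can be restated as correlation strengths. A quick sketch using only the figures quoted in the article (the dictionary keys are my own labels, and nothing here assumes the three predictors are independent):

```python
import math

# Variance-explained (R^2) figures as reported in the article.
r2 = {
    "overall brain size": 0.067,
    "lateral PFC activity": 0.05,
    "global connectivity with left lateral PFC": 0.10,
}

# R^2 = r^2, so explaining 10% of variance corresponds to a
# correlation of roughly sqrt(0.10) ~= 0.32 with intelligence.
for name, v in r2.items():
    print(f"{name}: R^2 = {v:.3f}, |r| ~= {math.sqrt(v):.2f}")
```

Seen this way, each predictor is a real but modest correlate: even the strongest, global connectivity, corresponds to a correlation of only about 0.32.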

Published in the Journal of Neuroscience, the findings establish “global brain connectivity” as a new approach for understanding human intelligence.

“Our research shows that connectivity with a particular part of the prefrontal cortex can predict how intelligent someone is,” suggests lead author Michael W. Cole, PhD, a postdoctoral research fellow in cognitive neuroscience at Washington University.

The study is the first to provide compelling evidence that neural connections between the lateral prefrontal cortex and the rest of the brain make a unique and powerful contribution to the cognitive processing underlying human intelligence, says Cole, whose research focuses on discovering the cognitive and neural mechanisms that make human behavior uniquely flexible and intelligent.

“This study suggests that part of what it means to be intelligent is having a lateral prefrontal cortex that does its job well; and part of what that means is that it can effectively communicate with the rest of the brain,” says study co-author Todd Braver, PhD, professor of psychology in Arts & Sciences and of neuroscience and radiology in the School of Medicine. Braver is a co-director of the Cognitive Control and Psychopathology Lab at Washington University, in which the research was conducted.

One possible explanation of the findings, the research team suggests, is that the lateral prefrontal region is a “flexible hub” that uses its extensive brain-wide connectivity to monitor and influence other brain regions in a goal-directed manner.

“There is evidence that the lateral prefrontal cortex is the brain region that ‘remembers’ (maintains) the goals and instructions that help you keep doing what is needed when you’re working on a task,” Cole says. “So it makes sense that having this region communicating effectively with other regions (the ‘perceivers’ and ‘doers’ of the brain) would help you to accomplish tasks intelligently.”

While other regions of the brain make their own special contribution to cognitive processing, it is the lateral prefrontal cortex that helps coordinate these processes and maintain focus on the task at hand, in much the same way that the conductor of a symphony monitors and tweaks the real-time performance of an orchestra.

“We’re suggesting that the lateral prefrontal cortex functions like a feedback control system that is used often in engineering, that it helps implement cognitive control (which supports fluid intelligence), and that it doesn’t do this alone,” Cole says.

The findings are based on an analysis of functional magnetic resonance brain images captured as study participants rested passively and also when they were engaged in a series of mentally challenging tasks associated with fluid intelligence, such as indicating whether a currently displayed image was the same as one displayed three images ago.

The analysis supported previous findings relating lateral prefrontal cortex activity to challenging task performance. Connectivity was then assessed while participants rested, and the researchers related it to performance on additional tests of fluid intelligence and cognitive control collected outside the brain scanner.

Results indicate that levels of global brain connectivity with a part of the left lateral prefrontal cortex serve as a strong predictor of both fluid intelligence and cognitive control abilities.

Although much remains to be learned about how these neural connections contribute to fluid intelligence, new models of brain function suggested by this research could have important implications for the future understanding — and perhaps augmentation — of human intelligence.

The findings also may offer new avenues for understanding how breakdowns in global brain connectivity contribute to the profound cognitive control deficits seen in schizophrenia and other mental illnesses, Cole suggests.

Other co-authors include Tal Yarkoni, PhD, a postdoctoral fellow in the Department of Psychology and Neuroscience at the University of Colorado at Boulder; Grega Repovs, PhD, professor of psychology at the University of Ljubljana, Slovenia; and Alan Anticevic, an associate research scientist in psychiatry at Yale University School of Medicine.

Funding from the National Institute of Mental Health supported the study (National Institutes of Health grants MH66088, NR012081, MH66078, MH66078-06A1W1, and 1K99MH096801).

Local Weather Patterns Affect Beliefs About Global Warming (Science Daily)


ScienceDaily (July 25, 2012) — Local weather patterns temporarily influence people’s beliefs about evidence for global warming, according to research by political scientists at New York University and Temple University. Their study, which appears in the Journal of Politics, found that those living in places experiencing warmer-than-normal temperatures at the time they were surveyed were significantly more likely than others to say there is evidence for global warming.

“Global climate change is one of the most important public policy challenges of our time, but it is a complex issue with which Americans have little direct experience,” wrote the study’s co-authors, Patrick Egan of New York University and Megan Mullin of Temple University. “As they try to make sense of this difficult issue, many people use fluctuations in local temperature to reassess their beliefs about the existence of global warming.”

Their study examined five national surveys of American adults sponsored by the Pew Research Center: June, July, and August 2006, January 2007, and April 2008. In each survey, respondents were asked the following question: “From what you’ve read and heard, is there solid evidence that the average temperature on earth has been getting warmer over the past few decades, or not?” On average over the five surveys, 73 percent of respondents agreed that Earth is getting warmer.

Egan and Mullin wondered about variation in attitudes among the survey’s respondents, and hypothesized that local temperatures could influence perceptions. To measure the potential impact of temperature on individuals’ opinions, they looked at zip codes from respondents in the Pew surveys and matched weather data to each person surveyed at the time of each poll. They used local weather data to determine if the temperature in the location of each respondent was significantly higher or lower than normal for that area at that time of year.

Their results showed that an abnormal shift in local temperature is associated with a significant shift in beliefs about evidence for global warming. Specifically, for every three degrees Fahrenheit that local temperatures in the past week have risen above normal, Americans become one percentage point more likely to agree that there is “solid evidence” that Earth is getting warmer. The researchers found cooler-than-normal temperatures have similar effects on attitudes — but in the opposite direction.
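As a toy restatement of that effect size (this encodes only the headline coefficient reported above, not the authors’ full regression model with its controls):

```python
def belief_shift_pct_points(temp_anomaly_f: float) -> float:
    """Shift, in percentage points, in the share agreeing there is
    'solid evidence' of warming, given how far (in degrees F) the
    past week's local temperature ran above or below normal.
    Encodes the reported linear effect: +1 point per +3 F, with
    the symmetric effect for cooler-than-normal weather."""
    return temp_anomaly_f / 3.0

print(belief_shift_pct_points(6.0))   # a week 6 F above normal: +2.0
print(belief_shift_pct_points(-3.0))  # 3 F below normal: -1.0
```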

The study took into account other variables that may explain the results — such as existing political attitudes and geography — and found the results still held.

The researchers also wondered whether heat waves (prolonged stretches of higher-than-normal temperatures) intensified this effect. To test this, they looked at respondents living in areas that had experienced at least seven days of temperatures 10 degrees or more above normal in the three weeks prior to the interview, and compared their views with those of respondents who had experienced the same number of hot days but no sustained heat wave.

Their estimates showed that the effect of a heat wave on opinion is even greater, increasing the share of Americans believing in global warming by 5.0 to 5.9 percentage points.

However, Egan and Mullin found the effects of temperature changes to be short-lived, even in the wake of heat waves. Americans interviewed once 12 or more days had elapsed since a heat wave were estimated to have attitudes no different from those of people who had not been exposed to one.

“Under typical circumstances, the effects of temperature fluctuations on opinion are swiftly wiped out by new weather patterns,” they wrote. “More sustained periods of unusual weather cause attitudes to change both to a greater extent and for a longer period of time. However, even these effects eventually decay, leaving no long-term impact of weather on public opinion.”

The findings make an important contribution to the political science research on the relationship between personal experience and opinion on a larger issue, which has long been studied with varying results.

“On issues such as crime, the economy, education, health care, public infrastructure, and taxation, large shares of the public are exposed to experiences that could logically be linked to attitude formation,” the researchers wrote. “But findings from research examining how these experiences affect opinion have been mixed. Although direct experience — whether it be as a victim of crime, a worker who has lost a job or health insurance, or a parent with children in public schools — can influence attitudes, the impact of these experiences tends to be weak or nonexistent after accounting for typical predictors such as party identification and liberal-conservative ideology.”

“Our research suggests that personal experience has substantial effects on political attitudes,” Egan and Mullin concluded. “Rich discoveries await those who can explore these questions in ways that permit clean identification of these effects.”

Egan is an assistant professor in the Wilf Family Department of Politics at NYU, and Mullin is an associate professor in the Department of Political Science at Temple University.

Scientists Read Monkeys’ Inner Thoughts: Brain Activity Decoded While Monkeys Avoid Obstacle to Touch Target (Science Daily)

ScienceDaily (July 19, 2012) — By decoding brain activity, scientists were able to “see” that two monkeys were planning to approach the same reaching task differently — even before they moved a muscle.

The obstacle-avoidance task is a variation on the center-out reaching task in which an obstacle sometimes prevents the monkey from moving directly to the target. The monkey must first place a cursor (yellow) on the central target (purple). This was the starting position. After the first hold, a second target appeared (green). After the second hold an obstacle appeared (red box). After the third hold, the center target disappeared, indicating a “go” for the monkey, which then moved the cursor out and around the obstacle to the target. (Credit: Moran/Pearce)

Anyone who has looked at the jagged recording of the electrical activity of a single neuron in the brain must have wondered how any useful information could be extracted from such a frazzled signal.

But over the past 30 years, researchers have discovered that clear information can be obtained by decoding the activity of large populations of neurons.
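A classic example of such population decoding, in the spirit of the reaching studies this article describes, is the population vector: each neuron votes for its preferred movement direction, weighted by its firing rate. A minimal sketch with made-up numbers (not the study’s data or code):

```python
import math

def population_vector(rates, preferred_dirs_rad):
    """Sum each neuron's preferred-direction unit vector, weighted
    by its (baseline-subtracted) firing rate. The summed vector
    points in the decoded movement direction."""
    x = sum(r * math.cos(d) for r, d in zip(rates, preferred_dirs_rad))
    y = sum(r * math.sin(d) for r, d in zip(rates, preferred_dirs_rad))
    return x, y

# Three toy neurons tuned to 0, 90, and 180 degrees; the 0-degree
# neuron fires hardest, so the decoded direction leans its way.
dirs = [0.0, math.pi / 2, math.pi]
rates = [5.0, 1.0, 0.5]
x, y = population_vector(rates, dirs)
print(round(math.degrees(math.atan2(y, x)), 1))  # ~12.5 degrees
```

No single neuron’s jagged trace is readable on its own, but the weighted sum over a large population yields a clean, low-noise estimate of the planned movement.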

Now, scientists at Washington University in St. Louis, who were decoding brain activity while monkeys reached around an obstacle to touch a target, have come up with two remarkable results.

Their first result was one they had designed their experiment to achieve: they demonstrated that multiple parameters can be embedded in the firing rate of a single neuron and that certain types of parameters are encoded only if they are needed to solve the task at hand.

Their second result, however, was a complete surprise. They discovered that the population vectors could reveal different planning strategies, allowing the scientists, in effect, to read the monkeys’ minds.

By chance, the two monkeys chosen for the study had completely different cognitive styles. One, the scientists said, was a hyperactive type, who kept jumping the gun, and the other was a smooth operator, who waited for the entire setup to be revealed before planning his next move. The difference is clearly visible in their decoded brain activity.

The study was published in the July 19th advance online edition of the journal Science.

All in the task

The standard task for studying voluntary motor control is the “center-out task,” in which a monkey or other subject must move its hand from a central location to targets placed on a circle surrounding the starting position.

To plan the movement, says Daniel Moran, PhD, associate professor of biomedical engineering in the School of Engineering & Applied Science and of neurobiology in the School of Medicine at Washington University in St. Louis, the monkey needs three pieces of information: current hand and target position and the velocity vector that the hand will follow.

In other words, the monkey needs to know where his hand is, what direction it is headed and where he eventually wants it to go.

A variation of the center-out task with multiple starting positions allows the neural coding for position to be separated from the neural coding for velocity.

By themselves, however, the straight-path, unimpeded reaches in this task don’t allow the neural coding for velocity to be distinguished from the neural coding for target position, because the two parameters are always correlated: the initial velocity of the hand and the target always lie in the same direction.

To solve this problem and isolate target position from movement direction, doctoral student Thomas Pearce designed a novel obstacle-avoidance task to be done in addition to the center-out task.

Crucially, in one-third of the obstacle-avoidance trials, either no obstacle appeared or the obstacle didn’t block the monkey’s path. In either case, the monkey could move directly to the target once he got the “go” cue.

The population vector corresponding to target position showed up during the third hold of the novel task, but only if there was an obstacle. If an obstacle appeared and the monkey had to move its hand in a curved trajectory to reach the target, the population vector lengthened and pointed at the target. If no obstacle appeared and the monkey could move directly to the target, the population vector was insignificant.

In other words, the monkeys were encoding the position of the target only when it did not lie along a direct path from the starting position and they had to keep its position “in mind” as they initially moved in the “wrong” direction.

“It’s all,” Moran says, “in the design of the task.”

And then some magic happens

Pearce’s initial approach to analyzing the data from the experiment was the standard one of combining the data from the two monkeys to get a cleaner picture.

“It wasn’t working,” Pearce says, “and I was frustrated because I couldn’t figure out why the data looked so inconsistent. So I separated the data by monkey, and then I could see, wow, they’re very different. They’re approaching this task differently and that’s kind of cool.”

The difference between the monkeys’ styles showed up during the second hold. At this point in the task, the target was visible, but the obstacle had not yet appeared.

The hyperactive monkey, called monkey H, couldn’t wait. His population vector during that hold showed that he was poised for a direct reach to the target. When the obstacle was then revealed, the population vector shortened and rotated to the direction he would need to move to avoid the obstacle.

The smooth operator, monkey G, in the meantime, idled through the second hold, waiting patiently for the obstacle to appear. Only when it was revealed did he begin to plan the direction he would move to avoid the obstacle.

Because he didn’t have to correct course, monkey G’s strategy was faster. So what advantage did monkey H gain by jumping the gun? In the minority of trials where no obstacle appeared, monkey H approached the target more accurately than monkey G. Maybe monkey H is just cognitively adapted to a Whac-A-Mole world, while monkey G, when caught without a plan, was at a disadvantage.

Working with the monkeys, the scientists had been aware that they had very different personalities, but they had no idea this difference would show up in their neural recordings.

“That’s what makes this really interesting,” Moran says.

Social Identification, Not Obedience, Might Motivate Unspeakable Acts (Science Daily)

ScienceDaily (July 18, 2012) — What makes soldiers abuse prisoners? How could Nazi officials condemn thousands of Jews to gas chamber deaths? What’s going on when underlings help cover up a financial swindle? For years, researchers have tried to identify the factors that drive people to commit cruel and brutal acts and perhaps no one has contributed more to this knowledge than psychological scientist Stanley Milgram.

Just over 50 years ago, Milgram embarked on what were to become some of the most famous studies in psychology. In these studies, which ostensibly examined the effects of punishment on learning, participants were assigned the role of “teacher” and were required to administer shocks to a “learner” that increased in intensity each time the learner gave an incorrect answer. As Milgram famously found, participants were willing to deliver supposedly lethal shocks to a stranger, just because they were asked to do so.

Researchers have offered many possible explanations for the participants’ behavior, and the take-home conclusion that seems to have emerged is that people cannot help but obey the orders of those in authority, even when those orders go to extremes.

This obedience explanation, however, fails to account for a very important aspect of the studies: why, and under what conditions, people did not obey the experimenter.

In a new article published in Perspectives on Psychological Science, a journal of the Association for Psychological Science, researchers Stephen Reicher of the University of St. Andrews and Alexander Haslam and Joanne Smith of the University of Exeter propose a new way of looking at Milgram’s findings.

The researchers hypothesized that, rather than obedience to authority, the participants’ behavior might be better explained by their patterns of social identification. They surmised that conditions that encouraged identification with the experimenter (and, by extension, the scientific community) led participants to follow the experimenters’ orders, while conditions that encouraged identification with the learner (and the general community) led participants to defy the experimenters’ orders.

As the researchers explain, this suggests that participants’ willingness to engage in destructive behavior is “a reflection not of simple obedience, but of active identification with the experimenter and his mission.”

Reicher, Haslam, and Smith wanted to examine whether participants’ willingness to administer shocks across variants of the Milgram paradigm could be predicted by the extent to which the variant emphasized identification with the experimenter and identification with the learner.

For their study, the researchers recruited two different groups of participants. The expert group included 32 academic social psychologists from two British universities and one Australian university. The nonexpert group included 96 first-year psychology students who had not yet learned about the Milgram studies.

All participants were read a short description of Milgram’s baseline study and they were then given details about 15 variants of the study. For each variant, they were asked to indicate the extent to which that variant would lead participants to identify with the experimenter and the scientific community and the extent to which it would lead them to identify with the learner and the general community.

The results of the study confirmed the researchers’ hypotheses. Identification with the experimenter was a very strong positive predictor of the level of obedience displayed in each variant. On the other hand, identification with the learner was a strong negative predictor of the level of obedience. The relative identification score (identification with experimenter minus identification with learner) was also a very strong predictor of the level of obedience.
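The three predictors described above can be illustrated with a few lines of code. The ratings and obedience levels below are invented for the example and are not the study’s data:

```python
import numpy as np

# Hypothetical ratings for five Milgram variants (invented numbers):
# identification with the experimenter, identification with the
# learner, and the obedience level observed in that variant (0-100%).
id_exp     = np.array([7.1, 6.4, 5.2, 3.8, 2.5])
id_learner = np.array([2.0, 2.8, 4.1, 5.5, 6.9])
obedience  = np.array([85., 72., 50., 30., 10.])

# The paper's relative identification score.
relative_id = id_exp - id_learner

# Pearson correlation of each predictor with obedience.
r_exp      = np.corrcoef(id_exp, obedience)[0, 1]      # strongly positive
r_learner  = np.corrcoef(id_learner, obedience)[0, 1]  # strongly negative
r_relative = np.corrcoef(relative_id, obedience)[0, 1] # strongly positive
print(r_exp, r_learner, r_relative)
```

With data patterned like the study’s findings, identification with the experimenter correlates positively with obedience, identification with the learner negatively, and the difference score captures both at once.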

According to the authors, these new findings suggest that we need to rethink obedience as the standard explanation for why people engage in cruel and brutal behavior. This new research “moves us away from a dominant viewpoint that has prevailed within and beyond the academic world for nearly half a century — a viewpoint suggesting that people engage in barbaric acts because they have little insight into what they are doing and conform slavishly to the will of authority,” they write.

These new findings suggest that social identification provides participants with a moral compass and motivates them to act as followers. This followership, as the authors point out, is not thoughtless — “it is the endeavor of committed subjects.”

Looking at the findings this way has several advantages, Reicher, Haslam, and Smith argue. First, it mirrors recent historical assessments suggesting that functionaries in brutalizing regimes — like the Nazi bureaucrat Adolf Eichmann — do much more than merely follow orders. And it simultaneously accounts for why participants are more likely to follow orders under certain conditions than others.

The researchers acknowledge that the methodology used in this research is somewhat unorthodox — the most direct way to examine the question of social identification would involve recreating the Milgram paradigm and varying different aspects of the paradigm to manipulate social identification with both experimenter and learner. But this kind of research involves considerable ethical challenges. The purpose of the article, the authors say, is to provide a strong theoretical case for such research, “so that work to address the critical question of why (and not just whether) people still prove willing to participate in brutalizing acts can move forward.”

*   *   *

Most People Will Administer Shocks When Prodded By ‘Authority Figure’

ScienceDaily (Dec. 22, 2008) — Nearly 50 years after one of the most controversial behavioral experiments in history, a social psychologist has found that people are still just as willing to administer what they believe are painful electric shocks to others when urged on by an authority figure.

Jerry M. Burger, PhD, replicated one of the famous obedience experiments of the late Stanley Milgram, PhD, and found that compliance rates in the replication were only slightly lower than those found by Milgram. And, like Milgram, he found no difference in the rates of obedience between men and women.

Burger’s findings are reported in the January issue of American Psychologist. The issue includes a special section reflecting on Milgram’s work 24 years after his death on Dec. 20, 1984, and analyzing Burger’s study.

“People learning about Milgram’s work often wonder whether results would be any different today,” said Burger, a professor at Santa Clara University. “Many point to the lessons of the Holocaust and argue that there is greater societal awareness of the dangers of blind obedience. But what I found is the same situational factors that affected obedience in Milgram’s experiments still operate today.”

Stanley Milgram was an assistant professor at Yale University in 1961 when he conducted the first in a series of experiments in which subjects – thinking they were testing the effect of punishment on learning – administered what they believed were increasingly powerful electric shocks to another person in a separate room. An authority figure conducting the experiment prodded the first person, who was assigned the role of “teacher,” to continue shocking the other person, who was playing the role of “learner.” In reality, both the authority figure and the learner were in on the real intent of the experiment, and the imposing-looking shock generator machine was a fake.

Milgram found that, after hearing the learner’s first cries of pain at 150 volts, 82.5 percent of participants continued administering shocks; of those, 79 percent continued to the shock generator’s end, at 450 volts. In Burger’s replication, 70 percent of the participants had to be stopped as they continued past 150 volts – a difference that was not statistically significant.

“Nearly four out of five of Milgram’s participants who continued after 150 volts went all the way to the end of the shock generator,” Burger said. “Because of this pattern, knowing how participants react at the 150-volt juncture allows us to make a reasonable guess about what they would have done if we had continued with the complete procedure.”
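The “reasonable guess” rests on simple arithmetic with the figures quoted above; this back-of-envelope projection is a sketch, not part of Burger’s published analysis:

```python
# Figures reported above for Milgram's baseline study.
milgram_past_150 = 0.825            # continued after the cries at 150 V
went_to_450_given_past_150 = 0.79   # of those, share reaching 450 V

# Implied share of Milgram's participants who went all the way.
milgram_full = milgram_past_150 * went_to_450_given_past_150
print(round(milgram_full, 3))  # ~0.652

# Applying the same conditional rate to Burger's 70% at 150 V gives
# the projection his 150-volt stopping point is meant to license.
burger_past_150 = 0.70
burger_projected_full = burger_past_150 * went_to_450_given_past_150
print(round(burger_projected_full, 3))  # ~0.553
```

In other words, roughly two-thirds of Milgram’s participants went to 450 volts, and Burger’s 150-volt data imply a comparable, only slightly lower, full-compliance rate.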

Milgram’s techniques have been debated ever since his research was first published. As a result, there is now an ethics code for psychologists, and other controls have been placed on experimental research that have effectively prevented any precise replication of Milgram’s work. “No study using procedures similar to Milgram’s has been published in more than three decades,” according to Burger.

Burger implemented a number of safeguards that enabled him to win approval for the work from his university’s institutional review board. First, he determined that while Milgram allowed his subjects to administer “shocks” of up to 450 volts in 15-volt increments, 150 volts appeared to be the critical point where nearly every participant paused and indicated reluctance to continue. Thus, 150 volts was the top range in Burger’s study.

In addition, Burger screened out any potential subjects who had taken more than two psychology courses in college or who indicated familiarity with Milgram’s research. A clinical psychologist also interviewed potential subjects and eliminated anyone who might have a negative reaction to the study procedure.

In Burger’s study, participants were told at least three times that they could withdraw from the study at any time and still receive the $50 payment. Also, these participants were given a lower-voltage sample shock to show the generator was real – 15 volts, as compared to 45 volts administered by Milgram.

Several of the psychologists writing in the same issue of American Psychologist questioned whether Burger’s study is truly comparable to Milgram’s, although they acknowledge its usefulness.

“…there are simply too many differences between this study and the earlier obedience research to permit conceptually precise and useful comparisons,” wrote Arthur G. Miller, PhD, of Miami University in Oxford, Ohio.

“Though direct comparisons of absolute levels of obedience cannot be made between the 150-volt maximum of Burger’s research design and Milgram’s 450-volt maximum, Burger’s ‘obedience lite’ procedures can be used to explore further some of the situational variables studied by Milgram, as well as look at additional variables,” wrote Alan C. Elms, PhD, of the University of California, Davis. Elms assisted Milgram in the summer of 1961.

What was he thinking? Study turns to ape intellect (AP)

By SETH BORENSTEIN-Associated Press Sunday, June 24, 2012

WASHINGTON (AP) – The more we study animals, the less special we seem.

Baboons can distinguish between written words and gibberish. Monkeys seem to be able to do multiplication. Apes can delay instant gratification longer than a human child can. They plan ahead. They make war and peace. They show empathy. They share.

“It’s not a question of whether they think; it’s how they think,” says Duke University scientist Brian Hare. Now scientists wonder if apes are capable of thinking about what other apes are thinking.

The evidence that animals are more intelligent and more social than we thought seems to grow each year, especially when it comes to primates. It’s an increasingly hot scientific field with the number of ape and monkey cognition studies doubling in recent years, often with better technology and neuroscience paving the way to unusual discoveries.

This month scientists mapping the DNA of the bonobo ape found that, like the chimp, bonobos are only 1.3 percent different from humans.

Says Josep Call, director of the primate research center at the Max Planck Institute in Germany: “Every year we discover things that we thought they could not do.”

Call says one of his recent more surprising studies showed that apes can set goals and follow through with them.

Orangutans and bonobos in a zoo were offered eight possible tools _ two of which would help them get at some food. At times when they chose the proper tool, researchers moved the apes to a different area before they could get the food, and then kept them waiting as much as 14 hours. In nearly every case, when the apes realized they were being moved, they took their tool with them so they could use it to get food the next day, remembering that even after sleeping. The goal and series of tasks didn’t leave the apes’ minds.

Call says this is similar to a person packing luggage a day before a trip: “For humans it’s such a central ability, it’s so important.”

For a few years, scientists have watched chimpanzees in zoos collect and store rocks as weapons for later use. In May, a study found they even add deception to the mix. They created haystacks to conceal their stash of stones from opponents, just like nations do with bombs.

Hare points to studies where competing chimpanzees enter an arena where one bit of food is hidden from view for only one chimp. The chimp that can see the hidden food quickly learns that his foe can’t see it and uses that to his advantage, displaying the ability to perceive another ape’s situation. That’s a trait humans develop as toddlers, but one we thought other animals never acquired, Hare said.

And then there is the amazing monkey memory.

At the National Zoo in Washington, humans who try to match their recall skills with an orangutan’s are humbled. Zoo associate director Don Moore says: “I’ve got a Ph.D., for God’s sake, you would think I could out-think an orang and I can’t.”

In French research, at least two baboons kept memorizing so many pictures (several thousand) that after three years researchers ran out of time before the baboons reached their limit. Researcher Joel Fagot at the French National Center for Scientific Research figured they could memorize at least 10,000 and probably more.

And a chimp in Japan named Ayumu, who sees strings of numbers flash on a screen for a split second, regularly beats humans at accurately duplicating the lineup. He’s a YouTube sensation, along with orangutans in a Miami zoo that use iPads.

Nature or nurture? It may depend on where you live (AAAS)

12-Jun-2012

By Craig Brierley

The extent to which our development is affected by nature or nurture – our genetic make-up or our environment – may differ depending on where we live, according to research funded by the Medical Research Council and the Wellcome Trust.

In a study published today in the journal Molecular Psychiatry, researchers from the Twins Early Development Study at King’s College London’s Institute of Psychiatry studied data from over 6,700 families relating to 45 childhood characteristics, from IQ and hyperactivity through to height and weight. They found that genetic and environmental contributions to these characteristics vary geographically in the United Kingdom, and published their results online as a series of nature-nurture maps.

Our development, health and behaviour are determined by complex interactions between our genetic make-up and the environment in which we live. For example, we may carry genes that increase our risk of developing type 2 diabetes, but if we eat a healthy diet and get sufficient exercise, we may not develop the disease. Similarly, someone may carry genes that reduce his or her risk of developing lung cancer, but heavy smoking may still lead to the disease.

The UK-based Twins Early Development Study follows over 13,000 pairs of twins, both identical and non-identical, born between 1994 and 1996. When the twins were age 12, the researchers carried out a broad survey to assess a wide range of cognitive abilities, behavioural (and other) traits, environments and academic achievement in 6,759 twin pairs. The researchers then designed an analysis that reveals the UK’s genetic and environmental hotspots, something which had never been done before.
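Twin designs estimate genetic and environmental contributions by comparing how strongly identical (MZ) and non-identical (DZ) twin pairs correlate on a trait. The classic back-of-envelope version of that logic is Falconer’s formula; the study itself uses more sophisticated modelling, so the sketch below is purely illustrative:

```python
def falconer_estimates(r_mz, r_dz):
    """Classic ACE back-of-envelope estimates from twin correlations.
    r_mz: trait correlation in identical (MZ) twin pairs
    r_dz: trait correlation in non-identical (DZ) twin pairs
    Returns (genetic, shared-environment, unique-environment) shares."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic component
    c2 = r_mz - a2           # shared-environment component
    e2 = 1 - r_mz            # unique environment (plus measurement error)
    return a2, c2, e2

# Toy example: MZ twins correlate 0.8 on a trait, DZ twins 0.5.
print(falconer_estimates(0.8, 0.5))  # roughly (0.6, 0.2, 0.2)
```

Computing such estimates separately for twin pairs in each geographic region is what makes a map of how the nature–nurture balance varies across the UK possible.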

“These days we’re used to the idea that it’s not a question of nature or nurture; everything, including our behaviour, is a little of both,” explains Dr Oliver Davis, a Sir Henry Wellcome Postdoctoral Fellow at King’s College London’s Institute of Psychiatry. “But when we saw the maps, the first thing that struck us was how much the balance of genes and environments can vary from region to region.”

“Take a trait like classroom behaviour problems. From our maps we can tell that in most of the UK around 60% of the difference between people is explained by genes. However, in the South East genes aren’t as important: they explain less than half of the variation. For classroom behaviour, London is an ‘environmental hotspot’.”

The maps give the researchers a global overview of how the environment interacts with our genomes, without homing in on particular genes or environments. However, the patterns have given them important clues about which environments to explore in more detail.

“The nature-nurture maps help us to spot patterns in the complex data, and to try to work out what’s causing these patterns,” says Dr Davis. “For our classroom behaviour example, we realised that one thing that varies more in London is household income. When we compare maps of income inequality to our nature-nurture map for classroom behaviour, we find income inequality may account for some of the pattern.

“Of course, this is just one example. There are any number of environments that vary geographically in the UK, from social environments like health care or education provision to physical environments like altitude, the weather or pollution. Our approach is all about tracking down those environments that you wouldn’t necessarily think of at first.”

It may be relatively easy to explain environmental hotspots, but what about the genetic hotspots that appear on the maps: do people’s genomes vary more in those regions? The researchers believe this is not the case; rather, genetic hotspots are areas where the environment exposes the effects of genetic variation.

For example, researchers searching for gene variants that increase the risk of hay fever may study populations from two regions. In the first region people live among fields of wind-pollinated crops, whereas the second region is miles away from those fields. In this second region, where no one is exposed to pollen, no one develops hay fever; hence any genetic differences between people living in this region would be invisible.

On the other hand, in the first region, where people live among the fields of crops, they will all be exposed to pollen and differences between the people with a genetic susceptibility to hay fever and the people without will stand out. That would make the region a genetic hotspot for hay fever.

“The message that these maps really drive home is that your genes aren’t your destiny. There are plenty of things that can affect how your particular human genome expresses itself, and one of those things is where you grow up,” says Dr Davis.

Resilient People More Satisfied With Life (Science Daily)

ScienceDaily (May 23, 2012) — When confronted with adverse situations such as the loss of a loved one, some people never fully recover from the pain. Others, the majority, pull through and experience how the intensity of negative emotions (e.g. anxiety, depression) grows dimmer with time until they adapt to the new situation. A third group is made up of individuals whose adversities have made them grow personally and whose life takes on new meaning, making them feel stronger than before.

Researchers at the Basic Psychology Unit at Universitat Autònoma de Barcelona analyzed the responses of 254 students from the Faculty of Psychology in different questionnaires. The purpose was to evaluate their level of satisfaction with life and find connections between their resilience and their capacity for emotional recovery, one of the components of emotional intelligence, which consists of the ability to regulate one’s emotions and those of others.

Research data shows that students who are more resilient, 20% of those surveyed, are more satisfied with their lives and are also those who believe they have control over their emotions and their state of mind. Resilience therefore has a positive predictive effect on one’s level of satisfaction with life.

“Some of the characteristics of being resilient can be worked on and improved, such as self-esteem and being able to regulate one’s emotions. Learning these techniques can offer people the resources needed to help them adapt and improve their quality of life”, explains Dr Joaquín T Limonero, professor of the UAB Research Group on Stress and Health at UAB and coordinator of the research.

Published recently in Behavioral Psychology, the study included the participation of UAB researcher Jordi Fernández Castro; professors of the Gimbernat School of Nursing (a UAB-affiliated centre) Joaquín Tomás-Sábado and Amor Aradilla Herrera; and psychologist and researcher of Egarsat, M. José Gómez-Romero.

Wearing Two Different Hats: Moral Decisions May Depend On the Situation (Science Daily)

ScienceDaily (May 23, 2012) — An individual’s sense of right or wrong may change depending on their activities at the time — and they may not be aware of their own shifting moral integrity — according to a new study looking at why people make ethical or unethical decisions.

Focusing on dual-occupation professionals, the researchers found that engineers had one perspective on ethical issues, yet when those same individuals were in management roles, their moral compass shifted. Likewise, medic/soldiers in the U.S. Army had different views of civilian casualties depending on whether they most recently had been acting as soldiers or medics.

In the study, to be published in a future issue of The Academy of Management Journal, lead author Keith Leavitt of Oregon State University found that workers who tend to have dual roles in their jobs would change their moral judgments based on what they thought was expected of them at the moment.

“When people switch hats, they often switch moral compasses,” Leavitt said. “People like to think they are inherently moral creatures — you either have character or you don’t. But our studies show that the same person may make a completely different decision based on what hat they may be wearing at the time, often without even realizing it.”

Leavitt, an assistant professor of management in the College of Business at OSU, is an expert on non-conscious decision making and business ethics. He studies how people make decisions and moral judgments, often based on non-conscious cues.

He said recent high-profile business scandals, from the collapse of Enron to the Ponzi scheme of Bernie Madoff, have called into question the ethics of professionals. Leavitt said professional organizations, employers and academic institutions may want to train and prepare their members for practical moral tensions they may face when asked to serve in multiple roles.

“What we consider to be moral sometimes depends on what constituency we are answering to at that moment,” Leavitt said. “For a physician, a human life is priceless. But if that same physician is a managed-care administrator, some degree of moral flexibility becomes necessary to meet their obligations to stockholders.”

Leavitt said subtle cues — such as signage and motivation materials around the office — should be considered, along with more direct training that helps employees who juggle multiple roles that could conflict with one another.

“Organizations and businesses need to recognize that even very subtle images and icons can give employees non-conscious clues as to what the firm values,” he said. “Whether they know it or not, people are often taking in messages about what their role is and what is expected of them, and this may conflict with what they know to be the moral or correct decision.”

The researchers conducted three different studies with employees who had dual roles. In one case, 128 U.S. Army medics were asked to complete a series of problem-solving tests, which included subliminal cues that hinted they might be acting as either a medic or a soldier. No participant said the cues had any bearing on their behavior — but apparently they did. A much larger percentage of those in the medic category than in the soldier category were unwilling to put a price on human life.

In another test, a group of engineer-managers were asked to write about a time they behaved as a typical manager, engineer, or both. Then they were asked whether U.S. firms should engage in “gifting” to gain a foothold in a new market. Despite the fact that such a practice would violate federal laws, more than 50 percent of those who fell into the “manager” category said such a practice might be acceptable, compared to 13 percent of those in the engineer category.

“We find that people tend to make decisions that may conflict with their morals when they are overwhelmed, or when they are just doing routine tasks without thinking of the consequences,” Leavitt said. “We tend to play out a script as if our role has already been written. So the bottom line is, slow down and think about the consequences when making an ethical decision.”

Heart Rules the Head When We Make Financial Decisions (Science Daily)

ScienceDaily (May 21, 2012) — Our ‘gut feelings’ influence our decisions, overriding ‘rational’ thought, when we are faced with financial offers that we deem to be unfair, according to a new study. Even when we are set to benefit, our physical response can make us more likely to reject a financial proposition we consider to be unjust.

Conducted by a team from the University of Exeter, Medical Research Council Cognition and Brain Sciences Unit and University of Cambridge, the research is published in the journal Cognitive, Affective, & Behavioural Neuroscience.

The research adds to growing evidence that our bodies can sometimes govern how we think and feel, rather than the other way round. It also reveals that those people who are more in tune with their bodies are more likely to be led by their ‘gut feelings’.

The study was based on a well-known psychological test, the Ultimatum Game. Fifty-one participants were presented with a series of financial offers based on different ways of dividing £10. Players frequently reject unfair offers in this game even though doing so leads to personal financial loss — an ‘irrational’ decision from an economic perspective.

The researchers measured participants’ physical responses to each offer by recording how much they sweated through the fingertips and how much their heart rate changed. How accurately participants could ‘listen’ to their bodies was measured on a different task by asking them to count their heartbeats and comparing their accuracy to their actual heart rate recording. Those people who showed a bigger physical response to unfair offers were more likely to reject them, but this was only the case if individuals were also able to accurately ‘listen’ to what their bodies were telling them.
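Heartbeat-counting accuracy of the kind described above is commonly scored by comparing counted to recorded beats over several intervals (a Schandry-style index). The exact formula used in this study may differ, so treat this as an illustrative sketch:

```python
def heartbeat_accuracy(counted, recorded):
    """Common heartbeat-counting accuracy index: 1.0 means perfect
    counting; lower values mean poorer interoceptive accuracy.
    Illustrative only -- the study's exact scoring may differ."""
    scores = [1 - abs(r - c) / r for c, r in zip(counted, recorded)]
    return sum(scores) / len(scores)

# Toy example: three counting intervals.
counted  = [28, 40, 55]   # beats the participant reported
recorded = [30, 42, 60]   # beats actually recorded
print(round(heartbeat_accuracy(counted, recorded), 3))  # around 0.93
```

On this kind of index, a participant who undercounts or overcounts badly scores low, and in the study it was only the high scorers whose skin-conductance and heart-rate responses predicted rejection of unfair offers.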

The findings show that individuals who have a strong ‘gut-reaction’ and are in tune with their own physical responses are more likely to reject unfair financial offers, even if this decision results in personal losses.

Lead researcher Dr Barney Dunn of Psychology at the University of Exeter said: “This research supports the idea that what happens in our bodies can sometimes shape how we think and feel in our minds. Everyday phrases like ‘following your heart’ and ‘trusting your gut’ can often, it seems, be accurate.”

“Humans are highly attuned to unfairness and we are sometimes required to weigh up the demands of maintaining justice with preserving our own economic self-interest. At a time when ideas of fairness in the financial sector — from bankers’ bonuses to changes to pension schemes — are being widely debated, it is important to recognise why some individuals rebel against perceived unfairness, whereas other people are prepared to accept the status quo.”

Educational Games to Train Middle Schoolers’ Attention, Empathy (Science Daily)

ScienceDaily (May 21, 2012) — Two years ago, at a meeting on science and education, Richard Davidson challenged video game manufacturers to develop games that emphasize kindness and compassion instead of violence and aggression.

With a grant from the Bill & Melinda Gates Foundation, the University of Wisconsin-Madison professor is now answering his own call. With Kurt Squire, an associate professor in the School of Education and director of the Games Learning Society Initiative, Davidson received a $1.39 million grant this spring to design and rigorously test two educational games to help eighth graders develop beneficial social and emotional skills — empathy, cooperation, mental focus, and self-regulation.

“By the time they reach the eighth grade, virtually every middle-class child in the Western world is playing smartphone apps, video games, computer games,” says Davidson, the William James and Vilas Research Professor of Psychology and Psychiatry at UW-Madison. “Our hope is that we can use some of that time for constructive purposes and take advantage of the natural inclination of children of that age to want to spend time with this kind of technology.”

The project grew from the intersection of Davidson’s research on the brain bases of emotion, Squire’s expertise in educational game design, and the Gates Foundation’s interest in preparing U.S. students for college readiness: possessing the skills and knowledge to go on to post-secondary education without the need for remediation.

“Skills of mindfulness and kindness are very important for college readiness,” Davidson explains. “Mindfulness, because it cultivates the capacity to regulate attention, which is the building block for all kinds of learning; and kindness, because the ability to cooperate is important for everything that has to do with success in life, team-building, leadership, and so forth.”

He adds that social, emotional, and interpersonal factors influence how students use and apply their cognitive abilities.

Building on research from the Center for Investigating Healthy Minds at UW-Madison’s Waisman Center, the initial stage of the project will focus on designing prototypes of two games. The first game will focus on improving attention and mental focus, likely through breath awareness.

“Breathing has two important characteristics. One is that it’s very boring, so if you’re able to attend to that, you can attend to most other things,” Davidson says. “The second is that we’re always breathing as long as we’re alive, and so it’s an internal cue that we can learn to come back to. This is something a child can carry with him or her all the time.”

The second game will focus on social behaviors such as kindness, compassion, and altruism. One approach may be to help students detect and interpret emotions in others by reading non-verbal cues such as facial expressions, tone of voice, and body posture.

“We’ll use insights gleaned from our neuroscience research to design the games and will look at changes in the brain during the performance of these games to see how the brain is actually affected by them,” says Davidson. “Direct feedback from monitoring the brain while students are playing the games will help us iteratively adjust the game design as this work goes forward.”

Their analyses will include neural imaging and behavioral testing before, during, and after students play the games, as well as looking at general academic performance.

The results will help the researchers determine how the games impact students and whether educational games are a useful medium for teaching these behaviors and skills, as well as evaluate whether certain groups of kids benefit more than others.

“Our hope is that we can begin to address these questions with the use of digital games in a way that can be very easily scaled and, if we are successful, to potentially reach an extraordinarily large number of youth,” says Davidson.

New issue of the journal Ephemera – Theory and Politics in Organization, on “The atmosphere business”

volume 12, number 1/2 (May 2012)
editorial
Steffen Böhm, Anna-Maria Murtola and Sverre Spoelstra The atmosphere business
notes
Mike Childs Privatising the atmosphere: A solution or dangerous con?
Oscar Reyes Carbon markets after Durban
Gökçe Günel A dark art: Field notes on carbon capture and storage policy negotiations at COP17
Patrick Bond Durban’s conference of polluters, market failure and critic failure
Tadzio Mueller The people’s climate summit in Cochabamba: A tragedy in three acts
interview
Larry Lohmann and Steffen Böhm Critiquing carbon markets: A conversation
articles
Robert Fletcher Capitalizing on chaos: Climate change and disaster capitalism
Jerome Whitington The prey of uncertainty: Climate change as opportunity
Ingmar Lippert Carbon classified? Unpacking heterogenous relations inscribed into corporate carbon emissions
Joanna Cabello and Tamra Gilbertson A colonial mechanism to enclose lands: A critical review of two REDD+-focused special issues
Rebecca Pearse Mapping REDD in the Asia-Pacific: Governance, marketisation and contention
Esteve Corbera and Charlotte Friedli Planting trees through the Clean Development Mechanism: A critical assessment
reviews
Siddhartha Dabhi The ‘third way’ for climate action
Peter Newell Carbon trading in South Africa: Plus ça change?
David L. Levy Can capitalism survive climate change?

Television Has Less Effect On Education About Climate Change Than Other Forms Of Media (Science Daily)

ScienceDaily (Oct. 16, 2009) — Worried about climate change and want to learn more? You probably aren’t watching television then. A new study by George Mason University Communication Professor Xiaoquan Zhao suggests that watching television has no significant impact on viewers’ knowledge about the issue of climate change. Reading newspapers and using the web, however, seem to contribute to people’s knowledge about this issue.

The study, “Media Use and Global Warming Perceptions: A Snapshot of the Reinforcing Spirals,” looked at the relationship between media use and people’s perceptions of global warming. The study asked participants how often they watch TV, surf the Web, and read newspapers. They were also asked about their concern and knowledge of global warming and specifically its impact on the polar regions.

“Unlike many other social issues with which the public may have first-hand experience, global warming is an issue that many come to learn about through the media,” says Zhao. “The primary source of mediated information about global warming is the news.”

The results showed that people who read newspapers and use the Internet more often are more likely to be concerned about global warming and believe they are better educated about the subject. Watching more television, however, did not seem to help.

He also found that individuals concerned about global warming are more likely to seek out information on this issue from a variety of media and nonmedia sources. Other forms of media, such as the Oscar-winning documentary “An Inconvenient Truth” and the blockbuster thriller “The Day After Tomorrow,” have played important roles in advancing the public’s interest in this domain.

Politics also seemed to have an influence on people’s perceptions about the science of global warming. Republicans are more likely to believe that scientists are still debating the existence and human causes of global warming, whereas Democrats are more likely to believe that a scientific consensus has already been achieved on these matters.

“Some media forms have clear influence on people’s perceived knowledge of global warming, and most of it seems positive,” says Zhao. “Future research should focus on how to harness this powerful educational function.”

Increased Knowledge About Global Warming Leads To Apathy, Study Shows (Science Daily)

ScienceDaily (Mar. 27, 2008) — The more you know the less you care — at least that seems to be the case with global warming. A telephone survey of 1,093 Americans by two Texas A&M University political scientists and a former colleague indicates that trend, as explained in their recent article in the peer-reviewed journal Risk Analysis.

“More informed respondents both feel less personally responsible for global warming, and also show less concern for global warming,” states the article, titled “Personal Efficacy, the Information Environment, and Attitudes toward Global Warming and Climate Change in the USA.”

The study showed high levels of confidence in scientists among Americans led to a decreased sense of responsibility for global warming.

The diminished concern and sense of responsibility flies in the face of awareness campaigns about climate change, such as in the movies An Inconvenient Truth and Ice Age: The Meltdown and in the mainstream media’s escalating emphasis on the trend.

The research was conducted by Paul M. Kellstedt, a political science associate professor at Texas A&M; Arnold Vedlitz, Bob Bullock Chair in Government and Public Policy at Texas A&M’s George Bush School of Government and Public Service; and Sammy Zahran, formerly of Texas A&M and now an assistant professor of sociology at Colorado State University.

Kellstedt says the findings were a bit unexpected. The focus of the study, he says, was not to measure how informed or how uninformed Americans are about global warming, but to understand why some individuals who are more or less informed about it showed more or less concern.

“In that sense, we didn’t really have expectations about how aware or unaware people were of global warming,” he says.

But, he adds, “The findings that the more informed respondents were less concerned about global warming, and that they felt less personally responsible for it, did surprise us. We expected just the opposite.

“The findings, while rather modest in magnitude — there are other variables we measured which had much larger effects on concern for global warming — were statistically quite robust, which is to say that they continued to appear regardless of how we modeled the data.”

Measuring knowledge about global warming is a tricky business, Kellstedt adds.

“That’s true of many other things we would like to measure in surveys, of course, especially things that might embarrass people (like ignorance) or that they might feel social pressure to avoid revealing (like prejudice),” he says.

“There are no industry standards, so to speak, for measuring knowledge about global warming. We opted for this straightforward measure and realize that other measures might produce different results.”

Now, for better or worse, scientists have to deal with the public’s abundant confidence in them. “But it cannot be comforting to the researchers in the scientific community that the more trust people have in them as scientists, the less concerned they are about their findings,” the researchers conclude in their study.

Lead Dust Is Linked to Violence, Study Suggests (Science Daily)

ScienceDaily (Apr. 17, 2012) — Childhood exposure to lead dust has been linked to lasting physical and behavioral effects, and now lead dust from vehicles using leaded gasoline has been linked to instances of aggravated assault two decades after exposure, says Tulane toxicologist Howard W. Mielke.

Lead dust from vehicles that burned leaded gasoline, which contaminated cities’ air decades ago, has been linked to increased aggravated assault in urban areas, researchers say.

The new findings are published in the journal Environment International by Mielke, a research professor in the Department of Pharmacology at the Tulane University School of Medicine, and demographer Sammy Zahran at the Center for Disaster and Risk Analysis at Colorado State University.

The researchers compared the amount of lead released between 1950 and 1985 in six cities: Atlanta, Chicago, Indianapolis, Minneapolis, New Orleans and San Diego. This period saw an increase in airborne lead dust exposure due to the use of leaded gasoline. There were correlating spikes in the rates of aggravated assault approximately two decades later, after the exposed children grew up.

After controlling for other possible causes such as community and household income, education, policing effort and incarceration rates, Mielke and Zahran found that for every one percent increase in tonnages of environmental lead released 22 years earlier, the present rate of aggravated assault was raised by 0.46 percent.
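The 0.46-percent figure is an elasticity: the percent change in the outcome per one-percent change in the exposure. A minimal sketch of how such a figure is typically estimated, using a log-log regression on synthetic numbers (my own illustration, not the authors’ data or code):

```python
# Hypothetical illustration: an elasticity like "0.46% more assaults per
# 1% more lead" is the slope of a regression of log(outcome) on log(exposure).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: tons of lead released (22 years earlier) per city-year,
# and an assault rate generated with a known elasticity plus noise.
lead_tons = rng.uniform(50, 5000, size=200)
true_elasticity = 0.46
assault_rate = 10 * lead_tons**true_elasticity * rng.lognormal(0, 0.1, 200)

# Fit log(assault) = a + b*log(lead); the slope b estimates the elasticity.
b, a = np.polyfit(np.log(lead_tons), np.log(assault_rate), 1)
print(f"estimated elasticity: {b:.2f}")
```

On this synthetic sample the recovered slope lands close to the 0.46 that generated it; in the real study, of course, the controls listed above enter the model as additional regressors.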

“Children are extremely sensitive to lead dust, and lead exposure has latent neuroanatomical effects that severely impact future societal behavior and welfare,” says Mielke. “Up to 90 percent of the variation in aggravated assault across the cities is explained by the amount of lead dust released 22 years earlier.” Tons of lead dust were released between 1950 and 1985 in urban areas by vehicles using leaded gasoline, and improper handling of lead-based paint also has contributed to contamination.

Violence in Men Caused by Unequal Wealth and Competition, Study Suggests (Science Daily)

ScienceDaily (Apr. 17, 2012) — Violence in men can be explained by traditional theories of sexual selection. In a review of the literature, Professor John Archer from the University of Central Lancashire, a Fellow of the British Psychological Society, points to a range of evidence that suggests that high rates of physical aggression and assaults in men are rooted in inter-male competition.

These findings are presented April 18 at the British Psychological Society Annual Conference held at the Grand Connaught Rooms, London (18-20 April).

Professor Archer describes evidence showing that differences between men and women in the use of physical aggression peak when men and women are in their twenties. In their twenties, men are more likely than at any other age to report themselves as high in physical aggression and to be arrested for assaults and weapons offences. They also engage in these activities at a far higher rate than women.

Professor Archer highlights that sex differences in aggression are not observed in relation to indirect forms of aggression but become larger with the severity of violence. Indeed, at the extreme end of violence, female-female homicides are rare while male-male homicide rates are high. Interestingly, men are also much more likely to engage in risky behaviour in the presence of other men.

Professor Archer says that a range of male features that develop during adolescence, driven by rising testosterone, accentuate aggressive behaviour. Examples include the growth of facial hair, a deepening voice, and facial changes such as brow ridge and chin size. He points to height, weight and strength differences between men and women as further evidence of male adaptation for fighting.

How does the environment influence aggression and violence? Professor Archer suggests there are two key principles — unequal wealth and a high ratio of sexually active men to women — that may increase physical aggression and violence in young men.

Professor Archer says: “The research evidence highlights that societal issues such as inequality of wealth and competition between males may contribute to the violence we see in today’s society.”

See Dan read: Baboons can learn to spot real words (Guardian)

AP foreign, Saturday April 14 2012 (The Guardian)

SETH BORENSTEIN

AP Science Writer

WASHINGTON (AP) — Dan the baboon sits in front of a computer screen. The letters BRRU pop up. With a quick and almost dismissive tap, the monkey signals it’s not a word. Correct. Next comes ITCS. Again, not a word. Finally KITE comes up.

He pauses and hits a green oval to show it’s a word. In the space of just a few seconds, Dan has demonstrated a mastery of what some experts say is a form of pre-reading and walks away rewarded with a treat of dried wheat.

Dan is part of new research that shows baboons are able to pick up the first step in reading — identifying recurring patterns and determining which four-letter combinations are words and which are just gobbledygook.

The study shows that reading’s early steps are far more instinctive than scientists first thought and it also indicates that non-human primates may be smarter than we give them credit for.

“They’ve got the hang of this thing,” said Jonathan Grainger, a French scientist and lead author of the research.

Baboons and other monkeys are good pattern finders, and what they are doing may be what we first do in recognizing words.

It’s still a far cry from real reading. They don’t understand what these words mean, and are just breaking them down into parts, said Grainger, a cognitive psychologist at the Aix-Marseille University in France.

In 300,000 tests, the six baboons distinguished between real and fake words about three out of four times, according to the study published Thursday in the journal Science.

Dan, the 4-year-old star of the bunch and roughly the equivalent of a human teenager in age, got 80 percent of the words right and learned 308 four-letter words.

The baboons are rewarded with food when they press the right spot on the screen: a blue plus sign for bogus combos or a green oval for real words.

Even though the experiments were done in France, the researchers used English words because it is the language of science, Grainger said.

The key is that these animals not only learned by trial and error which letter combinations were correct, but they also noticed which letters tend to go together to form real words, such as SH but not FX, said Grainger. So even when new words were sprung on them, they did a better job at figuring out which were real.
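The cue Grainger describes — that some letter pairs, like SH, are common in real words while others, like FX, are not — can be sketched with a toy bigram model. The word list and scoring rule below are my own illustration, not the study’s materials:

```python
# Toy bigram model: count how often each letter pair occurs in a small
# vocabulary, then score new strings by the average frequency of their pairs.
# Strings built from common pairs score high; strings like "BRRU" score zero.
from collections import Counter

words = ["KITE", "SHIP", "WASH", "FISH", "SHOT", "THIN", "THIS", "HINT",
         "MASH", "PITH", "SHIN", "WISH"]  # made-up training vocabulary

bigrams = Counter(pair for w in words for pair in zip(w, w[1:]))
total = sum(bigrams.values())

def score(s):
    # Average relative frequency of the string's adjacent letter pairs.
    pairs = list(zip(s, s[1:]))
    return sum(bigrams[p] for p in pairs) / (total * len(pairs))

print(score("SHIM"), score("BRRU"))
```

“SHIM” never appears in the vocabulary, yet it scores well above “BRRU” because its pairs (SH, HI) are frequent — the same generalization to unseen strings the baboons showed.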

Grainger said a pre-existing capacity in the brain may allow them to recognize patterns and objects, and perhaps that’s how we humans also first learn to read.

The study’s results were called “extraordinarily exciting” by another language researcher, psychology professor Stanislas Dehaene at the College of France, who wasn’t part of this study. He said Grainger’s finding makes sense. Dehaene’s earlier work says a distinct part of the brain visually recognizes the forms of words. The new work indicates this is also likely in a non-human primate.

This new study also tells us a lot about our distant primate relatives.

“They have shown repeatedly amazing cognitive abilities,” said study co-author Joel Fagot, a researcher at the French National Center for Scientific Research.

Bill Hopkins, a professor of psychology at the Yerkes Primate Center in Atlanta, isn’t surprised.

“We tend to underestimate what their capacities are,” said Hopkins, who wasn’t part of the French research team. “Non-human primates are really specialized in the visual domain and this is an example of that.”

This raises interesting questions about how the complex primate mind works without language or what we think of as language, Hopkins said. While we use language to solve problems in our heads, such as deciphering words, it seems that baboons use a “remarkably sophisticated” method to attack problems without language, he said.

Key to the success of the experiment was a change in the testing technique, the researchers said. The baboons weren’t put in the computer stations and forced to take the test. Instead, they could choose when they wanted to work, going to one of the 10 computer booths at any time, even in the middle of the night.

The most ambitious baboons test 3,000 times a day; the laziest only 400.

The advantage of this type of experiment setup, which can be considered more humane, is that researchers get far more trials in a shorter time period, he said.

“They come because they want to,” Fagot said. “What do they want? They want some food. They want to solve some task.”

The languages of psychosis (Revista Fapesp)

A mathematical approach reveals the differences between the speech of people with mania and those with schizophrenia

CARLOS FIORAVANTI | Issue 194 – April 2012

How the study was done: interviewees recounted a dream, and the interviewer converted the most important words into points and the sentences into arrows in order to examine the structure of their language

For psychiatrists, as for most people, it is relatively easy to distinguish a person with psychosis from someone with no previously diagnosed mental disorder: those in the first group report delusions and hallucinations, and sometimes present themselves as messiahs who will save the world. Distinguishing between the two types of psychosis, mania and schizophrenia, is not so simple, however, and demands considerable personal experience, knowledge and intuition from specialists. A mathematical approach developed at the Brain Institute of the Federal University of Rio Grande do Norte (UFRN) may facilitate this differentiation, which is essential for establishing the most appropriate treatment for each illness, by quantitatively assessing the differences in the verbal language structures used by people with mania and with schizophrenia.

The analysis strategy, based on graph theory, which represented words as points and the sequence between them in the sentences as arrows, indicated that people with mania are much more verbose and repetitive than those with schizophrenia, who are generally laconic and focused on a single subject, without letting their thoughts wander. “Recurrence is a hallmark of the speech of a patient with mania, who tells the same thing three or four times, whereas a patient with schizophrenia says objectively what he has to say, without digressing, and has speech that is poor in meaning,” says psychiatrist Natália Mota, a researcher at the institute. “In each group,” says Sidarta Ribeiro, the institute’s director, “the number of words, the structure of the language and other indicators are completely distinct.”

They believe they have taken the first steps toward an objective way of differentiating the two forms of psychosis, much as a blood count is used to confirm an infectious disease, provided that the next tests, with a larger sample of participants, confirm the consistency of the approach and that physicians agree to work with an aid of this kind. The comparative tests described in an article recently published in the journal PLoS One indicated that the new approach yields diagnostic accuracy of around 93%, whereas the psychometric scales currently in use, based on symptom-assessment questionnaires, reach only 67%. “They are complementary methods,” says Natália. “Psychometric scales and physicians’ experience remain indispensable.”

“The result is quite simple, even for someone who does not understand mathematics,” says physicist Mauro Copelli of the Federal University of Pernambuco (UFPE), who took part in the study. The speech of people with mania appears as a tangle of points and lines, while that of people with schizophrenia appears as a straight line with few points. Graph theory, which produced these diagrams, has been used for centuries to examine, for example, the routes by which a traveler could visit every city in a region. More recently it has been used to optimize air traffic, treating airports as a set of points, or nodes, connected to one another by airplanes.

“The first time I ran the graph program, the differences in language leapt out,” says Natália. In 2007, upon finishing medical school and beginning her psychiatric residency at the UFRN hospital, Natália noticed that many differential diagnoses of mania and schizophrenia depended on physicians’ personal experience and subjective judgments (those who worked more with schizophrenia patients tended to find more cases of schizophrenia and fewer of mania), and that there was often no consensus. It was already known that people with mania talk more and stray from the central topic much more easily than those with schizophrenia, but this seemed too generic to her.

At a scientific conference in Fortaleza in 2008 she spoke with Copelli, who already collaborated with Ribeiro and encouraged her to work with graphs. At first she resisted, because of her limited familiarity with mathematics, but soon the new theory came to seem simple and practical.

To carry the work forward, she recorded and, with the help of Nathália Lemos and Ana Cardina Pieretti, transcribed interviews with 24 people (eight with mania, eight with schizophrenia and eight with no diagnosed mental disorder), who were asked to recount a dream; any comment outside that topic was considered a flight of imagination, quite common among people with mania.

“Already in the transcripts, the reports of the patients with mania were clearly longer than those of the patients with schizophrenia,” she says. She then removed less important elements such as articles and prepositions, divided each sentence into subject, verb and objects, represented as points or nodes, with the sequence between them in the sentence represented by arrows joining two nodes, and flagged the sentences that did not refer to the central topic of the report (the recent dream she had asked the interviewees to recount) and thus marked a digression of thought, common among people with mania.
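The construction just described — content words become nodes, the order of words in the sentence becomes directed edges — can be sketched in a few lines. This is my own simplified illustration (made-up stopword list and example sentences); the study’s actual preprocessing and attributes are richer:

```python
# Minimal speech-graph sketch: each content word is a node, each transition
# between consecutive words a directed edge. Edges traversed more than once
# capture the recurrence reported as typical of manic speech.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "i", "was", "in"}

def speech_graph(text):
    tokens = [t for t in text.lower().split() if t not in STOPWORDS]
    edges = Counter(zip(tokens, tokens[1:]))
    return {
        "nodes": len(set(tokens)),
        "edges": len(edges),
        "repeated_edges": sum(1 for c in edges.values() if c > 1),
    }

manic = "i dreamed of house then house again then house and dog then dog"
laconic = "i dreamed of a dog"
print(speech_graph(manic))
print(speech_graph(laconic))
```

The verbose, circling report produces a tangled graph with repeated edges, while the terse one yields a short chain — the “tangle versus straight line” contrast the researchers describe.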

A graph program downloaded free from the internet identified the features relevant to the analysis, or attributes, and represented the main differences in speech among the participants, such as the number of nodes, the extent and density of the connections between points, recurrence, verbosity (or logorrhea) and deviation from the central topic. “It’s super simple,” Natália assures. In validating and analyzing the results she also collaborated with Osame Kinouchi, of the University of São Paulo (USP) in Ribeirão Preto, and Guillermo Cecchi, of IBM’s Computational Biology Center in the United States.

The result: people with mania scored higher than those with schizophrenia on almost every item assessed. “The logorrhea typical of patients with mania results not only from an excess of words but from speech that keeps returning to the same topic, compared with the schizophrenia group,” she observed. Curiously, the control-group participants, with no diagnosed mental disorder, showed speech structures of two kinds, sometimes redundant like the participants with mania, sometimes terse like those with schizophrenia, reflecting differences in their personalities or in their motivation, at that moment, to talk more or less. “That pathology shapes speech is nothing new,” she says. “Psychiatrists are trained to recognize these differences, but however experienced they are, they can hardly say that a mania patient’s recurrence is 28% lower.”

“The interdisciplinary environment of the institute was essential for this study, because every day I was exchanging ideas with people from other fields. Nivaldo Vasconcelos, a computer engineer, helped me a lot,” she says. The Brain Institute, in operation since 2007, currently has 13 professors, 22 undergraduate and 42 graduate students, 8 postdoctoral researchers and 30 technicians. “Once the initial difficulties were overcome, we managed to assemble a group of young, talented researchers,” Ribeiro celebrates. “The house we are in now has a large garden, and many nights we stay there until two or three in the morning, talking about science and drinking chimarrão.”

Scientific article
MOTA, N.B. et al. Speech graphs provide a quantitative measure of thought disorder in psychosis. PLoS ONE (in press).

You can’t do the math without the words (University of Miami Press Release)

University of Miami anthropological linguist studies the anumeric language of an Amazonian tribe; the findings add new perspective to the way people acquire knowledge, perception and reasoning

Marie Guma Diaz
University of Miami

CORAL GABLES, FL (February 20, 2012) – Most people learn to count when they are children. Yet surprisingly, not all languages have words for numbers. A recent study published in the journal Cognitive Science shows that a few tongues lack number words and, as a result, people in these cultures have a difficult time performing common quantitative tasks. The findings add new insight into the way people acquire knowledge, perception and reasoning.

The Piraha people of the Amazon are a group of about 700 semi-nomadic people living in small villages of about 10-15 adults, along the Maici River, a tributary of the Amazon. According to University of Miami (UM) anthropological linguist Caleb Everett, the Piraha are surprisingly unable to represent exact amounts. Their language contains just three imprecise words for quantities: Hòi means “small size or amount,” hoì means “somewhat larger amount,” and baàgiso means “to cause to come together,” or “many.” Linguists refer to languages that do not have number-specific words as anumeric.

“The Piraha are a really fascinating group because they are one of only one or two groups in the world that are totally anumeric,” says Everett, assistant professor in the Department of Anthropology at the UM College of Arts and Sciences. “This is maybe one of the most extreme cases of language actually restricting how people think.”

His study “Quantity Recognition Among Speakers of an Anumeric Language” demonstrates that number words are essential tools of thought, required to solve even the simplest quantitative problems, such as one-to-one correspondence.

“I’m interested in how the language you speak affects the way that you think,” says Everett. “The question here is what tools like number words really allow us to do and how they change the way we think about the world.”

The work was motivated by contradictory results on the numerical performance of the Piraha. An earlier article reported the people incapable of performing simple numeric tasks with quantities greater than three, while another showed they were capable of accomplishing such tasks.

Everett repeated all the field experiments of the two previous studies. The results indicated that the Piraha could not consistently perform simple mathematical tasks. For example, one test involved 14 adults in one village who were presented with lines of spools of thread and asked to create a matching line of empty rubber balloons. The people were not able to perform the one-to-one correspondence when the numbers were greater than two or three.

The study provides a simple explanation for the controversy. Unbeknownst to the other researchers, the villagers who participated in one of the previous studies had received basic numerical training from Keren Madora, an American missionary who has worked with the indigenous people of the Amazon for 33 years and is a co-author of this study. “Her knowledge of what had happened in that village was crucial. I understood then why they got the results that they did,” Everett says.

Madora used the Piraha language to create number words. For instance, she used the words “all the sons of the hand” to indicate the number four. The introduction of number words into the village provides a reasonable explanation for the disagreement between the previous studies.

The findings support the idea that language is a key component in processes of the mind. “When they’ve been introduced to those words, their performance improved, so it’s clearly a linguistic effect, rather than a generally cultural factor,” Everett says. The study highlights the unique insight we gain about people and society by studying mother languages.

“Preservation of mother tongues is important because languages can tell us about aspects of human history, human cognition, and human culture that we would not have access to if the languages are gone,” he says. “From a scientific perspective I think it’s important, but it’s most important from the perspective of the people, because they lose a lot of their cultural heritage when their languages die.”

Will one researcher’s discovery deep in the Amazon destroy the foundation of modern linguistics? (The Chronicle of Higher Education)

The Chronicle Review

By Tom Bartlett

March 20, 2012

Angry Words

A Christian missionary sets out to convert a remote Amazonian tribe. He lives with them for years in primitive conditions, learns their extremely difficult language, risks his life battling malaria, giant anacondas, and sometimes the tribe itself. In a plot twist, instead of converting them he loses his faith, morphing from an evangelist trying to translate the Bible into an academic determined to understand the people he’s come to respect and love.

Along the way, the former missionary discovers that the language these people speak doesn’t follow one of the fundamental tenets of linguistics, a finding that would seem to turn the field on its head, undermine basic assumptions about how children learn to communicate, and dethrone the discipline’s long-reigning king, who also happens to be among the most well-known and influential intellectuals of the 20th century.

It feels like a movie, and it may in fact turn into one—there’s a script and producers on board. It’s already a documentary that will air in May on the Smithsonian Channel. A play is in the works in London. And the man who lived the story, Daniel Everett, has written two books about it. His 2008 memoir Don’t Sleep, There Are Snakes, is filled with Joseph Conrad-esque drama. The new book, Language: The Cultural Tool, which is lighter on jungle anecdotes, instead takes square aim at Noam Chomsky, who has remained the pre-eminent figure in linguistics since the 1960s, thanks to the brilliance of his ideas and the force of his personality.

But before any Hollywood premiere, it’s worth asking whether Everett actually has it right. Answering that question is not straightforward, in part because it hinges on a bit of grammar that no one except linguists ever thinks about. It’s also made tricky by the fact that Everett is the foremost expert on this language, called Pirahã, and one of only a handful of outsiders who can speak it, making it tough for others to weigh in and leading his critics to wonder aloud if he has somehow rigged the results.

More than any of that, though, his claim is difficult to verify because linguistics is populated by a deeply factionalized group of scholars who can’t agree on what they’re arguing about and who tend to dismiss their opponents as morons or frauds or both. Such divisions exist, to varying degrees, in all disciplines, but linguists seem uncommonly hostile. The word “brutal” comes up again and again, as do “spiteful,” “ridiculous,” and “childish.”

With that in mind, why should anyone care about the answer? Because it might hold the key to understanding what separates us from the rest of the animals.

Imagine a linguist from Mars lands on Earth to survey the planet’s languages (presumably after obtaining the necessary interplanetary funding). The alien would reasonably conclude that the languages of the world are mostly similar with interesting but relatively minor variations.

As science-fiction premises go, it’s rather dull, but it roughly illustrates Chomsky’s view of linguistics, known as Universal Grammar, which has dominated the field for a half-century. Chomsky is fond of this hypothetical and has used it repeatedly for decades, including in a 1971 discussion with Michel Foucault, during which he added that “this Martian would, if he were rational, conclude that the structure of the knowledge that is acquired in the case of language is basically internal to the human mind.”

In his new book, Everett, now dean of arts and sciences at Bentley University, writes about hearing Chomsky bring up the Martian in a lecture he gave in the early 1990s. Everett noticed a group of graduate students in the back row laughing and exchanging money. After the talk, Everett asked them what was so funny, and they told him they had taken bets on precisely when Chomsky would once again cite the opinion of the linguist from Mars.

The somewhat unkind implication is that the distinguished scholar had become so predictable that his audiences had to search for ways to amuse themselves. Another Chomsky nugget is the way he responds when asked to give a definition of Universal Grammar. He will sometimes say that Universal Grammar is whatever made it possible for his granddaughter to learn to talk but left the world’s supply of kittens and rocks speechless—a less-than-precise answer. Say “kittens and rocks” to a cluster of linguists and eyes are likely to roll.

Chomsky’s detractors have said that Universal Grammar is whatever he needs it to be at that moment. By keeping it mysterious, they contend, he is able to dodge criticism and avoid those who are gunning for him. It’s hard to murder a phantom.

Everett’s book is an attempt to deliver, if not a fatal blow, then at least a solid right cross to Universal Grammar. He believes that the structure of language doesn’t spring from the mind but is instead largely formed by culture, and he points to the Amazonian tribe he studied for 30 years as evidence. It’s not that Everett thinks our brains don’t play a role—they obviously do. But he argues that just because we are capable of language does not mean it is necessarily prewired. As he writes in his book: “The discovery that humans are better at building human houses than porpoises tells us nothing about whether the architecture of human houses is innate.”

The language Everett has focused on, Pirahã, is spoken by just a few hundred members of a hunter-gatherer tribe in a remote part of Brazil. Everett got to know the Pirahã in the late 1970s as an American missionary. With his wife and kids, he lived among them for months at a time, learning their language from scratch. He would point to objects and ask their names. He would transcribe words that sounded identical to his ears but had completely different meanings. His progress was maddeningly slow, and he had to deal with the many challenges of jungle living. His story of taking his family, by boat, to get treatment for severe malaria is an epic in itself.

His initial goal was to translate the Bible. He got his Ph.D. in linguistics along the way and, in 1984, spent a year studying at the Massachusetts Institute of Technology in an office near Chomsky’s. He was a true-blue Chomskyan then, so much so that his kids grew up thinking Chomsky was more saint than professor. “All they ever heard about was how great Chomsky was,” he says. He was a linguist with a dual focus: studying the Pirahã language and trying to save the Pirahã from hell. The second part, he found, was tough because the Pirahã are rooted in the present. They don’t discuss the future or the distant past. They don’t have a belief in gods or an afterlife. And they have a strong cultural resistance to the influence of outsiders, dubbing all non-Pirahã “crooked heads.” They responded to Everett’s evangelism with indifference or ridicule.

As he puts it now, the Pirahã weren’t lost, and therefore they had no interest in being saved. They are a happy people. Living in the present has been an excellent strategy, and their lack of faith in the divine has not hindered them. Everett came to convert them, but over many years found that his own belief in God had melted away.

So did his belief in Chomsky, albeit for different reasons. The Pirahã language is remarkable in many respects. Entire conversations can be whistled, making it easier to communicate in the jungle while hunting. Also, the Pirahã don’t use numbers. They have words for amounts, like a lot or a little, but nothing for five or one hundred. Most significantly, for Everett’s argument, he says their language lacks what linguists call “recursion”—that is, the Pirahã don’t embed phrases in other phrases. They instead speak only in short, simple sentences.

In a recursive language, additional phrases and clauses can be inserted in a sentence, complicating the meaning, in theory indefinitely. For most of us, the lack of recursion in a little-known Brazilian language may not seem terribly interesting. But when Everett published a paper with that finding in 2005, the news created a stir. There were magazine articles and TV appearances. Fellow linguists weighed in, if only in some cases to scoff. Everett had put himself and the Pirahã on the map.
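For readers who want to see the mechanics, recursion here means a grammar rule that can invoke itself. The following sketch is illustrative only (the toy grammar and vocabulary are assumptions, not drawn from Pirahã or from Everett’s data); it shows how a single self-referencing rule generates arbitrarily deep possessives like “my brother’s mother’s house”:

```python
import random

# A toy context-free grammar. The possessive rule NP -> NP 's N refers
# to NP within its own expansion, so it can nest without limit.
GRAMMAR = {
    "NP": [["my", "N"], ["NP", "'s", "N"]],
    "N":  [["brother"], ["mother"], ["house"]],
}

def expand(symbol, depth=0, max_depth=3):
    """Expand a grammar symbol into words, capping recursion depth
    so that generation always terminates."""
    if symbol not in GRAMMAR:
        return [symbol]  # a terminal word, emitted as-is
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        # At the depth cap, allow only non-recursive expansions.
        rules = [r for r in rules if symbol not in r]
    words = []
    for part in random.choice(rules):
        words.extend(expand(part, depth + 1, max_depth))
    return words

phrase = " ".join(expand("NP")).replace(" 's", "'s")
print(phrase)  # e.g. "my brother's mother's house"
```

A language without recursion, on Everett’s account, is one whose grammar has only the non-recursive kind of rule: every phrase bottoms out immediately, so sentences stay short and flat.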

His paper might have received a shrug if Chomsky had not recently co-written a paper, published in 2002, that said (or seemed to say) that recursion was the single most important feature of human language. “In particular, animal communication systems lack the rich expressive and open-ended power of human language (based on humans’ capacity for recursion),” the authors wrote. Elsewhere in the paper, the authors wrote that the faculty of human language “at minimum” contains recursion. They also deemed it the “only uniquely human component of the faculty of language.”

In other words, Chomsky had finally issued what seemed like a concrete, definitive statement about what made human language unique, exposing a possible vulnerability. Before Everett’s paper was published, there had already been back and forth between Chomsky and the authors of a response to the 2002 paper, Ray Jackendoff and Steven Pinker. In the wake of that public disagreement, Everett’s paper had extra punch.

It’s been said that if you want to make a name for yourself in modern linguistics, you have to either align yourself with Chomsky or seek to destroy him. Either you are desirous of his approval or his downfall. With his 2005 paper, Everett opted for the latter course.

Because the pace of academic debate is just this side of glacial, it wasn’t until June 2009 that the next major chapter in the saga was written. Three scholars who are generally allies of Chomsky published a lengthy paper in the journal Language dissecting Everett’s claims one by one. What he considered unique features of Pirahã weren’t unique. What he considered “gaps” in the language weren’t gaps. They argued this in part by comparing Everett’s recent paper to work he published in the 1980s, calling it, slightly snidely, his earlier “rich material.” Everett wasn’t arguing with Chomsky, they claimed; he was arguing with himself. Young Everett thought Pirahã had recursion. Old Everett did not.

Everett’s defense was, in so many words, to agree. Yes, his earlier work was contradictory, but that’s because he was still under Chomsky’s sway when he wrote it. It’s natural, he argued, even when doing basic field work, cataloging the words of a language and the stories of a people, to be biased by your theoretical assumptions. Everett was a Chomskyan through and through, so much so that he had written the MSN Encarta encyclopedia entry on him. But now, after more years with the Pirahã, the scales had fallen from his eyes, and he saw the language on its own terms rather than those he was trying to impose on it.

David Pesetsky, a linguistics professor at MIT and one of the authors of the critical Language paper, thinks Everett was trying to gin up a “Star Wars-level battle between himself and the forces of Universal Grammar,” presumably with Everett as Luke Skywalker and Chomsky as Darth Vader.

Contradicting Everett meant getting into the weeds of the Pirahã language, a language that Everett knew intimately and his critics did not. “Most people took the attitude that this wasn’t worth taking on,” Pesetsky says. “There’s a junior-high-school corridor, two kids are having a fight, and everyone else stands back.” Everett wrote a lengthy reply that Pesetsky and his co-authors found unsatisfying and evasive. “The response could have been ‘Yeah, we need to do this more carefully,'” says Pesetsky. “But he’s had seven years to do it more carefully and he hasn’t.”

Critics haven’t just accused Everett of inaccurate analysis. He’s the sole authority on a language that he says changes everything. If he wanted to, they suggest, he could lie about his findings without getting caught. Some were willing to declare him essentially a fraud. That’s what one of the authors of the 2009 paper, Andrew Nevins, now at University College London, seems to believe. When I requested an interview with Nevins, his reply read, “I may be being glib, but it seems you’ve already analyzed this kind of case!” Below his message was a link to an article I had written about a Dutch social psychologist who had admitted to fabricating results, including creating data from studies that were never conducted. In another e-mail, after declining to expand on his apparent accusation, Nevins wrote that the “world does not need another article about Dan Everett.”

In 2007, Everett heard reports of a letter accusing him of racism, signed by Cilene Rodrigues, who is Brazilian and who co-wrote the critical paper with Pesetsky and Nevins. According to Everett, he got a call from a source informing him that Rodrigues, an honorary research fellow at University College London, had sent the letter to the organization in Brazil that grants permission for researchers to visit indigenous groups like the Pirahã. He then discovered that the organization, called FUNAI, the National Indian Foundation, would no longer grant him permission to visit the Pirahã, whom he had known for most of his adult life and who remain the focus of his research.

He still hasn’t been able to return. Rodrigues would not respond directly to questions about whether she had signed such a letter, nor would Nevins. Rodrigues forwarded an e-mail from another linguist who has worked in Brazil, which speculates that Everett was denied access to the Pirahã because he did not obtain the proper permits and flouted the law, accusations Everett calls “completely false” and “amazingly nasty lies.”

Whatever the reason for his being blocked, the question remains: Is Everett’s work racist? The accusation goes that because Everett says that the Pirahã do not have recursion, and that all human languages supposedly have recursion, Everett is asserting that the Pirahã are less than human. Part of this claim is based on an online summary, written by a former graduate student of Everett’s, that quotes traders in Brazil saying the Pirahã “talk like chickens and act like monkeys,” something Everett himself never said and condemns. The issue is sensitive because the Pirahã, who eschew the trappings of modern civilization and live the way their forebears lived for thousands of years, are regularly denigrated by their neighbors in the region as less than human. The fact that Everett is American, not Brazilian, lends the charge added symbolic weight.

When you read Everett’s two books about the Pirahã, it is nearly impossible to think that he believes they are inferior. In fact, he goes to great lengths not to condescend and offers defenses of practices that outsiders would probably find repugnant. In one instance he describes, a Pirahã woman died, leaving behind a baby that the rest of the tribe thought was too sick to live. Everett cared for the infant. One day, while he was away, members of the tribe killed the baby, telling him that it was in pain and wanted to die. He cried, but didn’t condemn, instead defending in the book their seemingly cruel logic.

Likewise, the Pirahã’s aversion to learning agriculture or preserving meat, and their lack of interest in producing artwork, are portrayed by Everett not as shortcomings but as evidence of the Pirahã’s insistence on living in the present. Their nonhierarchical social system seems to Everett fair and sensible. He is critical of his own earlier attempts to convert the Pirahã to Christianity as a sort of “colonialism of the mind.” If anything, Everett is more open to a charge of romanticizing the Pirahã culture.

Other critics are more measured but equally suspicious. Mark Baker, a linguist at Rutgers University at New Brunswick, who considers himself part of Chomsky’s camp, mentions Everett’s “vested motive” in saying that the Pirahã don’t have recursion. “We always have to be a little careful when we have one person who has researched a language that isn’t accessible to other people,” Baker says. He is dubious of Everett’s claims. “I can’t believe it’s true as described,” he says.

Chomsky hasn’t exactly risen above the fray. He told a Brazilian newspaper that Everett was a “charlatan.” In the documentary about Everett, Chomsky raises the possibility, without saying he believes it, that Everett may have faked his results. Behind the scenes, he has been active as well. According to Pesetsky, Chomsky asked him to send an e-mail to David Papineau, a professor of philosophy at King’s College London, who had written a positive, or at least not negative, review of Don’t Sleep, There Are Snakes. The e-mail complained that Papineau had misunderstood recursion and was incorrectly siding with Everett. Papineau thought he had done nothing of the sort. “For people outside of linguistics, it’s rather surprising to find this kind of protection of orthodoxy,” Papineau says.

And what if the Pirahã don’t have recursion? Rather than ferreting out flaws in Everett’s work as Pesetsky did, Chomsky’s preferred response is to say that it doesn’t matter. In a lecture he gave last October at University College London, he referred to Everett’s work without mentioning his name, talking about those who believed that “exceptions to the generalizations are considered lethal.” He went on to say that a “rational reaction” to finding such exceptions “isn’t to say ‘Let’s throw out the field.'” Universal Grammar permits such exceptions. There is no problem. As Pesetsky puts it: “There’s nothing that says languages without subordinate clauses can’t exist.”

Except the 2002 paper on which Chomsky’s name appears. Pesetsky and others have backed away from that paper, arguing not that it was incorrect, but that it was “written in an unfortunate way” and that the authors were “trying to make certain things comprehensible about linguistics to a larger public, but they didn’t make it clear that they were simplifying.” Some say that Chomsky signed his name to the paper but that it was actually written by Marc Hauser, the former professor of psychology at Harvard University, who resigned after Harvard officials found him guilty of eight counts of research misconduct. (For the record, no one has suggested the alleged misconduct affected his work with Chomsky.)

Chomsky declined to grant me an interview. Those close to him say he sees Everett as seizing on a few stray, perhaps underexplained, lines from that 2002 paper and distorting them for his own purposes. And the truth, Chomsky has made clear, should be apparent to any rational person.

Ted Gibson has heard that one before. When Gibson, a professor of cognitive sciences at MIT, gave a paper on the topic at a January meeting of the Linguistic Society of America, held in Portland, Ore., Pesetsky stood up at the end to ask a question. “His first comment was that Chomsky never said that. I went back and found the slide,” he says. “Whenever I talk about this question in front of these people I have to put up the literal quote from Chomsky. Then I have to put it up again.”

Geoffrey Pullum, a professor of linguistics at the University of Edinburgh, is also vexed at how Chomsky and company have, in his view, played rhetorical sleight-of-hand to make their case. “They have retreated to such an extreme degree that it says really nothing,” he says. “If it has a sentence longer than three words then they’re claiming they were right. If that’s what they claim, then they weren’t claiming anything.” Pullum calls this move “grossly dishonest and deeply silly.”

Everett has been arguing about this for seven years. He says Pirahã undermines Universal Grammar. The other side says it doesn’t. In an effort to settle the dispute, Everett asked Gibson, who holds a joint appointment in linguistics at MIT, to look at the data and reach his own conclusions. He didn’t provide Gibson with data he had collected himself because he knows his critics suspect those data have been cooked. Instead he provided him with sentences and stories collected by his missionary predecessor. That way, no one could object that it was biased.

In the documentary about Everett, handing over the data to Gibson is given tremendous narrative importance. Everett is the bearded, safari-hatted field researcher boating down a river in the middle of nowhere, talking and eating with the natives. Meanwhile, Gibson is the nerd hunched over his keyboard back in Cambridge, crunching the data, examining it with his research assistants, to determine whether Everett really has discovered something. If you watch the documentary, you get the sense that what Gibson has found confirms Everett’s theory. And that’s the story you get from Everett, too. In our first interview, he encouraged me to call Gibson. “The evidence supports what I’m saying,” he told me, noting that he and Gibson had a few minor differences of interpretation.

But that’s not what Gibson thinks. Some of what he found does support Everett. For example, he’s confirmed that Pirahã lacks possessive recursion, phrases like “my brother’s mother’s house.” Also, there appear to be no conjunctions like “and” or “or.” In other instances, though, he’s found evidence that seems to undercut Everett’s claims—specifically, when it comes to noun phrases in sentences like “His mother, Itaha, spoke.”

That is a simple sentence, but inserting the mother’s name is a hallmark of recursion. Gibson’s paper, on which Everett is a co-author, states, “We have provided suggestive evidence that Pirahã may have sentences with recursive structures.”

If that turns out to be true, it would undermine the primary thesis of both of Everett’s books about the Pirahã. Rather than the hero who spent years in the Amazon emerging with evidence that demolished the field’s predominant theory, Everett would be the descriptive linguist who came back with a couple of books full of riveting anecdotes and cataloged a language that is remarkable, but hardly changes the game.

Everett only realized during the reporting of this article that Gibson disagreed with him so strongly. Until then, he had been saying that the results generally supported his theory. “I don’t know why he says that,” Gibson says. “Because it doesn’t. He wrote that our work corroborates it. A better word would be falsified. Suggestive evidence is against it right now and not for it.” Though, he points out, the verdict isn’t final. “It looks like it is recursive,” he says. “I wouldn’t bet my life on it.”

Another researcher, Ray Jackendoff, a linguist at Tufts University, was also provided the data and sees it slightly differently. “I think we decided there is some embedding but it is of limited depth,” he says. “It’s not recursive in the sense that you can have infinitely deep embedding.” Remember that in Chomsky’s paper, it was the idea that “open-ended” recursion was possible that separated human and animal communication. Whether the kind of limited recursion Gibson and Jackendoff have noted qualifies depends, like everything else in this debate, on the interpretation.

Everett thinks what Gibson has found is not recursion, but rather false starts, and he believes further research will back him up. “These are very short, extremely limited examples and they almost always are nouns clarifying other nouns,” he says. “You almost never see anything but that in these cases.” And he points out that there still doesn’t seem to be any evidence of infinite recursion. Says Everett: “There simply is no way, even if what I claim to be false starts are recursive instead, to say, ‘My mother, Susie, you know who I mean, you like her, is coming tonight.'”

The field has a history of theoretical disagreements that turn ugly. In the book The Linguistics Wars, published in 1995, Randy Allen Harris tells the story of another skirmish between Chomsky and a group of insurgent linguists called generative semanticists. Chomsky dismissed his opponents’ arguments as absurd. His opponents accused him of altering his theories when confronted and of general arrogance. “Chomsky has the impressive rhetorical talent of offering ideas which are at once tentative and fully endorsed, of appearing to take the if out of his arguments while nevertheless keeping it safely around,” writes Harris.

That rhetorical talent was on display in his lecture last October, in which he didn’t just disagree with other linguists, but treated their arguments as ridiculous and a mortal danger to the field. The style seems to be reflected in his political activism. Watch his 1969 debate on Firing Line against William F. Buckley Jr., available on YouTube, and witness Chomsky tie his famous interlocutor in knots. It is a thorough, measured evisceration. Chomsky is willing to deploy those formidable skills in linguistic arguments as well.

Everett is far from the only current Chomsky challenger. Recently there’s been a rise in so-called corpus linguistics, a data-driven method of evaluating a language, using computer software to analyze sentences and phrases. The method produces detailed information and, for scholars like Gibson, finally provides scientific rigor for a field he believes has been mired in never-ending theoretical disputes. That, along with the brain-scanning technology that linguists are increasingly making use of, may be able to help resolve questions about how much of the structure of language is innate and how much is shaped by culture.

But Chomsky has little use for that method. In his lecture, he deemed corpus linguistics nonscientific, comparing it to doing physics by describing the swirl of leaves on a windy day rather than performing experiments. This was “just statistical modeling,” he said, evidence of a “kind of pathology in the cognitive sciences.” Referring to brain scans, Chomsky joked that the only way to get a grant was to propose an fMRI.

As for Universal Grammar, some are already writing its obituary. Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology, has stated flatly that “Universal Grammar is dead.” Two linguists, Nicholas Evans and Stephen Levinson, published a paper in 2009 titled “The Myth of Language Universals,” arguing that the “claims of Universal Grammar … are either empirically false, unfalsifiable, or misleading in that they refer to tendencies rather than strict universals.” Pullum has a similar take: “There is no Universal Grammar now, not if you take Chomsky seriously about the things he says.”

Gibson puts it even more harshly. Just as Chomsky doesn’t think corpus linguistics is science, Gibson doesn’t think Universal Grammar is worthwhile. “The question is, ‘What is it?’ How much is built-in and what does it do? There are no details,” he says. “It’s crazy to say it’s dead. It was never alive.”

Such proclamations have been made before and Chomsky, now 83, has a history of outmaneuvering and outlasting his adversaries. Whether Everett will be yet another in a long line of would-be debunkers who turn into footnotes remains to be seen. “I probably do, despite my best intentions, hope that I turn out to be right,” he says. “I know that it is not scientific. But I would be a hypocrite if I didn’t admit it.”