Tag archive: Cognition

Scientists identify gene linking brain structure to intelligence (O Globo)

JC e-mail 4892, February 11, 2014

The discovery may have important implications for understanding psychiatric disorders such as schizophrenia and autism

Scientists at King’s College London have identified, for the first time, a gene that links the thickness of the brain’s grey matter to intelligence. The study, published this Tuesday in the journal “Molecular Psychiatry”, may help explain the biological mechanisms behind certain forms of intellectual impairment.

It was already known that grey matter plays an important role in memory, attention, thought, language and consciousness. Earlier studies had also shown that the thickness of the cerebral cortex is related to intellectual ability, but no gene had been identified.

An international team of scientists, led by King’s College, analysed DNA samples and magnetic resonance imaging scans from 1,583 healthy 14-year-old adolescents, who also took a series of tests to measure verbal and non-verbal intelligence.

“We wanted to find out how structural differences in the brain relate to differences in intellectual ability. We identified a genetic variation related to synaptic plasticity, to how neurons communicate,” explains Sylvane Desrivières of the Institute of Psychiatry at King’s College London, the study’s lead author. “This may help us understand what happens at the neuronal level in certain forms of intellectual impairment, in which the neurons’ ability to communicate is somehow compromised.”

She adds that it is important to point out that intelligence is influenced by many genetic and environmental factors. “The gene we identified only explains a small proportion of the differences in intellectual ability, and it is by no means ‘the intelligence gene’.”

The researchers examined 54,000 possible variants involved in brain development. On average, adolescents carrying one particular genetic variant had a thinner cortex in the left cerebral hemisphere, particularly in the frontal and temporal lobes, and performed less well on tests of intellectual ability. The variant affects the expression of the NPTN gene, which encodes a protein active at neuronal synapses and therefore affects how brain cells communicate.

To confirm their findings, the researchers studied the NPTN gene in mouse and human brain cells. They found that the gene was differently active in the left and right hemispheres of the brain, which may make the left hemisphere more sensitive to the effects of NPTN mutations. The results suggest that some differences in intellectual ability may stem from reduced NPTN function in particular regions of the left hemisphere.

The genetic variant identified in this study accounts for only an estimated 0.5% of the total variation in intelligence. Even so, the findings may have important implications for understanding the biological mechanisms underlying several psychiatric disorders, such as schizophrenia and autism, in which cognitive ability is a core feature of the condition.

http://oglobo.globo.com/ciencia/cientistas-identificam-gene-que-relaciona-estrutura-cerebral-inteligencia-11563313


Money makes people right-wing, inegalitarian, UK study finds (Science Daily)

Date: February 6, 2014

Source: University of Warwick

Summary: Lottery winners tend to switch towards support for a right-wing political party and to become less egalitarian, according to new research on UK data.

Figure: Evidence on switchers: the percentage of people who previously did not vote Conservative and who switched right (Conservative) after a lottery win. Source: BHPS data, waves 7-18; graph courtesy of the University of Warwick.

Lottery winners tend to switch towards support for a right-wing political party and to become less egalitarian, according to new research on UK data by Professor Andrew Oswald of the University of Warwick and Professor Nattavudh Powdthavee of the London School of Economics and the Melbourne Institute of Applied Economic and Social Research, University of Melbourne.

Their study, published as a new University of Warwick working paper under the title “Does Money Make People Right-Wing and Inegalitarian: A Longitudinal Study of Lottery Wins”, shows that the larger the win, the more people tilt to the right. The study uses information on thousands of people and on lottery wins up to 200,000 pounds sterling. The authors say it is the first research of its kind.

The authors believe their paper has wide implications for how democracy works. Professor Oswald said he had become doubtful of the view that morality was an objective choice. “In the voting booth, monetary self-interest casts a long shadow, despite people’s protestations that there are intellectual reasons for voting for low tax rates.”

“We are not sure exactly what goes on inside people’s brains”, said Nick Powdthavee, “but it seems that having money causes people to favour conservative right-wing ideas. Humans are creatures of flexible ethics.”


The authors’ paper comments: “The causes of people’s political attitudes are largely unknown. One possibility is that individuals’ attitudes towards politics and redistribution are motivated by deeply ethical views. Our study provides empirical evidence that voting choices are made out of self-interest.”

Using a nationally representative sample of lottery winners in the UK – the British Household Panel Survey – the researchers were able to compare the observed longitudinal changes in political allegiance of the bigger winners with those of the smaller winners. The effect is also sizeable: winning a few thousand pounds in the lottery shifts right-wingness by just under half as much as completing a good standard of high-school education (i.e. A-levels).

The lottery winning effect is far stronger for males than females. The authors are not sure why.

The study includes nobody who won many millions. “We’d certainly love to be able to track the views of the rare giant winners”, said Professor Oswald, “if any lottery company would like to work with our research team.”

Journal Reference:

  1. Andrew Oswald, Nattavudh Powdthavee. Does Money Make People Right-Wing and Inegalitarian: A Longitudinal Study of Lottery Wins. University of Warwick working paper, February 2014

Human memory can ‘rewrite’ the past with present-day experiences (O Globo)

JC e-mail 4889, February 6, 2014

Scientists find that our memories are not like videos that store information perfectly

Our memory travels in time, plucking fragments from the present and inserting them into the past. That is the finding of a new study from the “Northwestern Medicine Feinberg School of Medicine” in Chicago, in the United States. The work found that, in terms of accuracy, memory is far from resembling a video camera, which stores information perfectly. Instead, our memory rewrites the past with current information, updating its recollections with new experiences.

The study is the first to show specifically how strongly human memory ties the present to the past. It pinpoints the exact moment in time at which a piece of information is incorrectly implanted into an existing memory.

According to the study’s author, Donna Jo Bridge, a postdoctoral fellow in medical social sciences, memory adapts to a constantly changing environment to help us survive and to help us deal with what matters now.

“Our memory is not like a video camera. It reshapes and edits events to create a story that fits the current reality,” Bridge told BBC News. This “editing” happens in the hippocampus.

For the experiment, 17 people studied 168 objects placed on a computer screen against varied background images. The scientists monitored the participants’ brain activity as well as their eye movements.

The images showed scenes such as the ocean floor or an aerial view of farmland. The researchers then asked the participants to place each object in its original location, but on a new background. The objects were invariably placed in the wrong spot.

“They always chose the location they had selected in the previous stage. This shows that their original memory of the location was changed to reflect the location they remembered on the new background,” the scientist said.

Participants also underwent magnetic resonance imaging so their brain activity could be observed, and their eye movements were tracked.

“Everyone likes to think of memory as something that lets us vividly remember our childhood or what we did last week. But the notion of a perfect memory is a myth,” Joel Voss, the study’s senior author and an assistant professor of medical social sciences, told BBC News.

Bridge adds that the study may have implications for witness testimony, for example.

“Our memory is built to change, not just to record facts. So we are not very reliable witnesses,” she told the BBC.

http://oglobo.globo.com/saude/memoria-humana-capaz-de-reescrever-passado-com-experiencias-atuais-11511975

Can workshops on household water use impact consumer behavior? (Science Daily)

Date: January 31, 2014

Source: American Society for Horticultural Science

Summary: Researchers studied the effectiveness of workshops designed to focus on residential water conservation using a sample of irrigation water use data for 57 workshop participants and 43 nonparticipants. Results indicated that the 2-hour workshops were effective in reducing attendees’ irrigation water use; however, the effect was short lived. Results also showed that effects of workshop attendance depended on the household sample, and found that water use increased for some low-use workshop participants.

In Florida, where population growth, drought, and saltwater intrusion are affecting finite water sources, researchers are looking for effective ways to educate consumers about household water use habits. Despite an average annual rainfall of 55 inches, Florida was included on the Natural Resources Defense Council’s list of states with the greatest risk of water shortages in the coming years; the daily total state domestic water use in Florida is the fourth highest in the United States. A large proportion of Florida’s water is not used for human consumption, but is used for irrigating residential landscapes. In fact, a recent South Florida Water Management District study reported that outdoor water use in their area constitutes up to 50% of total household water consumption, and that up to 50% of the water applied to lawns is wasted through evaporation or overwatering.

Universities and municipalities are addressing this critical environmental concern through outreach and extension programs designed to educate the public about water conservation. But are these workshops effective in actually helping participants reduce their water use? Tatiana Borisova and Pilar Useche from the University of Florida conducted a study published in HortTechnology to determine the effectiveness of free, 2-hour irrigation management workshops conducted by the Florida Cooperative Extension Service in cooperation with a local water provider in order to find out if there were short- and long-term impacts of workshop participation. “Landscape management outreach programs have been implemented by regional and local agencies, Cooperative Extension Services, and other organizations to encourage more efficient irrigation water use and residential water conservation,” explained lead author Borisova. “However, limited information exists about the effectiveness of such programs.”

The team studied actual water use data for 12 months before and after workshops, and then compared water use data from workshop participants with the water use of households that did not participate in the workshop. They found “statistically significant reduction in water use” only in the month of the workshop. “Although the workshop has an impact on water use, this impact is very short-lived,” noted Borisova. “For workshop participants and nonparticipants, water use returns to the base level immediately in the months following the workshop.” The authors added that reinforcement of the educational message received during the workshop is probably required to sustain water-use reductions over time.

The team also found that the effect of workshop attendance depended on the sample of the households considered. For example, in the subsample of the low water-use households, water use tended to increase following the workshop. “The overall objective of the workshop was to improve the irrigation efficiency by reducing water wastes. However, households with low average water use may already be technically efficient, and workshop attendance cannot reduce their irrigation water use further without negatively affecting the yard aesthetics and plant health,” explained Borisova.

Borisova and Useche recommend development of a comprehensive evaluation approach for water use programs that includes evaluation of actual water use reductions in order to more accurately quantify program impact, design more effective educational programs, and better target the programs to consumers.

The complete study and abstract are available on the ASHS HortTechnology electronic journal web site: http://horttech.ashspublications.org/content/23/5/668.abstract

Journal Reference:

  1. Tatiana Borisova and Pilar Useche. Exploring the Effects of Extension Workshops on Household Water-use Behavior. HortTechnology, October 2013

Brain regions thought to be uniquely human share many similarities with monkeys (Science Daily)

Date: January 28, 2014

Source: Cell Press

Summary: New research suggests a surprising degree of similarity in the organization of regions of the brain that control language and complex thought processes in humans and monkeys. The study also revealed some key differences. The findings may provide valuable insights into the evolutionary processes that established our ties to other primates but also made us distinctly human.

Figure: (A) The right vlFC region of interest, bounded dorsally by the inferior frontal sulcus and PMv, anteriorly by the paracingulate sulcus, and ventrally by the lateral orbital sulcus and the border between the dorsal insula and the opercular cortex. (B) Schematic of the 12-cluster parcellation solution from an iterative parcellation approach, delineating PMv subdivisions (6v, 6r), the IFJ area, areas 44d and 44v in lateral pars opercularis, area 45 in the pars triangularis and adjacent operculum, IFS in the inferior frontal sulcus and dorsal pars triangularis, area 12/47 in the pars orbitalis, area Op in the deep frontal operculum, area 46, and lateral and medial frontal pole regions (FPl and FPm). Credit: Neuron, Neubert et al.

New research suggests a surprising degree of similarity in the organization of regions of the brain that control language and complex thought processes in humans and monkeys. The study, publishing online January 28 in the Cell Press journal Neuron, also revealed some key differences. The findings may provide valuable insights into the evolutionary processes that established our ties to other primates but also made us distinctly human.

The research concerns the ventrolateral frontal cortex, a region of the brain known for more than 150 years to be important for cognitive processes including language, cognitive flexibility, and decision-making. “It has been argued that to develop these abilities, humans had to evolve a completely new neural apparatus; however others have suggested precursors to these specialized brain systems might have existed in other primates,” explains lead author Dr. Franz-Xaver Neubert of the University of Oxford, in the UK.

By using non-invasive MRI techniques in 25 people and 25 macaques, Dr. Neubert and his team compared ventrolateral frontal cortex connectivity and architecture in humans and monkeys. The investigators were surprised to find many similarities in the connectivity of these regions. This suggests that some uniquely human cognitive traits may rely on an evolutionarily conserved neural apparatus that initially supported different functions. Additional research may reveal how slight changes in connectivity accompanied or facilitated the development of distinctly human abilities.

The researchers also noted some key differences between monkeys and humans. For example, ventrolateral frontal cortex circuits in the two species differ in the way that they interact with brain areas involved with hearing.

“This could explain why monkeys perform very poorly in some auditory tasks and might suggest that we humans use auditory information in a different way when making decisions and selecting actions,” says Dr. Neubert.

A region in the human ventrolateral frontal cortex — called the lateral frontal pole — does not seem to have an equivalent area in the monkey. This area is involved with strategic planning, decision-making, and multi-tasking abilities.

“This might relate to humans being particularly proficient in tasks that require strategic planning and decision making as well as ‘multi-tasking’,” says Dr. Neubert.

Interestingly, some of the ventrolateral frontal cortex regions that were similar in humans and monkeys are thought to play roles in psychiatric disorders such as attention deficit hyperactivity disorder, obsessive compulsive disorder, and substance abuse. A better understanding of the networks that are altered in these disorders might lead to therapeutic insights.

Journal Reference:

  1. Franz-Xaver Neubert et al. Comparison of human ventral frontal cortex areas for cognitive control and language with areas in monkey frontal cortex. Neuron, January 28, 2014

Kids Have Skewed View of Gender Segregation (Science Daily)

Jan. 9, 2014 — Children believe the world is far more segregated by gender than it actually is, suggests a new study led by a Michigan State University scholar.

Jennifer Watling Neal and colleagues examined classroom friendships in five U.S. elementary schools. Their findings, published in the journal Child Development, show that boys and girls had no problem being friends with one another, yet children perceived that only boys played with boys and only girls played with girls.

“Kids believe gender plays a larger role in friendship than it actually does,” said Neal, assistant professor of psychology.

Children who have more accurate perceptions of the social relationships around them may be better able to avoid conflict and have more positive interactions with their peers, Neal said.

The findings also have implications when the students grow up.

“In adulthood,” Neal said, “we know that people who have accurate perceptions of workplace relationships tend to be perceived as more powerful and have better reputations than their colleagues.”

The study of 426 second- through fourth-graders found gender is still important in the formation of friendships; children were nine times more likely to be friends if they were the same gender.

However, when asked about their friends’ friends, a child was 50 times more likely to believe two classmates were friends when they were the same gender.

“Thus, while gender does matter a great deal in the formation of children’s friendships, children think it is nearly the only relevant factor,” Neal said.

Journal Reference:

  1. Jennifer Watling Neal, Zachary P. Neal, Elise Cappella. I Know Who My Friends Are, but Do You? Predictors of Self-Reported and Peer-Inferred Relationships. Child Development, 2013; DOI: 10.1111/cdev.12194

Scientists: Americans are becoming weather wimps (AP)

By SETH BORENSTEIN

— Jan. 9, 2014 5:33 PM EST

FILE – In this Sunday, Jan. 5, 2014, file photo, a person struggles to cross a street in blowing and falling snow as the Gateway Arch appears in the distance, in St. Louis. The deep freeze that gripped much of the nation this week wasn’t unprecedented, but with global warming we’re getting far fewer bitter cold spells, and many of us have forgotten how frigid winter used to be. (AP Photo/Jeff Roberson, File)

WASHINGTON (AP) — We’ve become weather wimps.

As the world warms, the United States is getting fewer bitter cold spells like the one that gripped much of the nation this week. So when a deep freeze strikes, scientists say, it seems more unprecedented than it really is. An Associated Press analysis of the daily national winter temperature shows that cold extremes have happened about once every four years since 1900.

Until recently.

When computer models estimated that the national average daily temperature for the Lower 48 states dropped to 17.9 degrees on Monday, it was the first deep freeze of that magnitude in 17 years, according to Greg Carbin, warning meteorologist for the National Oceanic and Atmospheric Administration.

That stretch — from Jan. 13, 1997 to Monday — is by far the longest the U.S. has gone without the national average plunging below 18 degrees, according to a database of daytime winter temperatures starting in January 1900.

In the past 115 years, there have been 58 days when the national average temperature dropped below 18. Carbin said those occurrences often happen in periods that last several days so it makes more sense to talk about cold outbreaks instead of cold days. There have been 27 distinct cold snaps.
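Carbin’s point about counting outbreaks rather than individual cold days can be sketched in a few lines of Python. This is a minimal illustration with invented temperatures, not the AP’s actual 1900-onward database:

```python
# Group consecutive cold days (national average below 18 degrees F)
# into distinct "cold snaps". Temperatures here are made up for
# illustration; the AP worked from a daily national-average database.

def cold_snaps(daily_temps, threshold=18.0):
    """Return (start_index, length) pairs for runs of days below threshold."""
    snaps = []
    run_start = None
    for i, temp in enumerate(daily_temps):
        if temp < threshold:
            if run_start is None:
                run_start = i          # a new cold snap begins
        else:
            if run_start is not None:
                snaps.append((run_start, i - run_start))
                run_start = None       # the snap has ended
    if run_start is not None:          # series ends mid-snap
        snaps.append((run_start, len(daily_temps) - run_start))
    return snaps

temps = [25.0, 17.5, 16.9, 21.0, 33.1, 17.9, 22.4]
print(cold_snaps(temps))  # [(1, 2), (5, 1)]: 3 cold days, but only 2 snaps
```

Run-length grouping like this is why 58 sub-18-degree days collapse into only 27 distinct outbreaks.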

Between 1970 and 1989, a dozen such events occurred, but there were only two in the 1990s and then none until Monday.

“These types of events have actually become more infrequent than they were in the past,” said Carbin, who works at the Storm Prediction Center in Norman, Okla. “This is why there was such a big buzz because people have such short memories.”

Said Jeff Masters, meteorology director of the private firm Weather Underground: “It’s become a lot harder to get these extreme (cold) outbreaks in a planet that’s warming.”

And Monday’s breathtaking chill? It was merely the 55th coldest day — averaged for the continental United States — since 1900.

The coldest day for the Lower 48 since 1900 — as calculated by the computer models — was 12 degrees on Christmas Eve 1983, nearly 6 degrees chillier than Monday.

The average daytime winter temperature is about 33 degrees, according to Carbin’s database.

There have been far more unusually warm winter days in the U.S. than unusually cold ones.

Since Jan. 1, 2000, only two days have ranked in the top 100 coldest: Monday and Tuesday. But there have been 13 in the top 100 warmest winter days, including the warmest since 1900: Dec. 3, 2012. And that pattern is exactly what climate scientists have been saying for years, that the world will get more warm extremes and fewer cold extremes.

Nine of 11 outside climate scientists and meteorologists who reviewed the data for the AP said it showed that as the world warms from heat-trapping gas spewed by the burning of fossil fuels, winters are becoming milder. The world is getting more warm extremes and fewer cold extremes, they said.

“We expect to see a lengthening of time between cold air outbreaks due to a warming climate, but 17 years between outbreaks is probably partially due to an unusual amount of natural variability,” or luck, Masters said in an email. “I expect we’ll go far fewer than 17 years before seeing the next cold air outbreak of this intensity.”

And the scientists dismiss global warming skeptics who claim one or two cold days somehow disproves climate change.

“When your hands are freezing off trying to scrape the ice off your car, it can be all too tempting to say, ‘Where’s global warming now? I could use a little of that!’ But you know what? It’s not as cold as it used to be anymore,” Texas Tech University climate scientist Katharine Hayhoe said in an email.

The recent cold spell, which was triggered by a frigid air mass known as the polar vortex that wandered way south of normal, could also be related to a relatively new theory that may prove a weather wild card, said Rutgers University climate scientist Jennifer Francis. Her theory, which has divided mainstream climate scientists, says that melting Arctic sea ice is changing polar weather, moving the jet stream and causing “more weirdness.”

Ryan Maue, a meteorologist with the private firm Weather Bell Analytics who is skeptical about blaming global warming for weather extremes, dismisses Francis’ theory and said he has concerns about the accuracy of Carbin’s database. Maue has his own daily U.S. average temperature showing that Monday was colder than Carbin’s calculations.

Still, he acknowledged that cold nationwide temperatures “occurred with more regularity in the past.”

Many climate scientists say Americans are weather weenies who forgot what a truly cold winter is like.

“I think that people’s memory about climate is really terrible,” Texas A&M University climate scientist Andrew Dessler wrote in an email. “So I think this cold event feels more extreme than it actually is because we’re just not used to really cold winters anymore.”

Patient in ‘Vegetative State’ Not Just Aware, but Paying Attention, Study Suggests (Science Daily)

Oct. 31, 2013 — A patient in a seemingly vegetative state, unable to move or speak, showed signs of attentive awareness that had not been detected before, a new study reveals. This patient was able to focus on words signalled by the experimenters as auditory targets as successfully as healthy individuals. If this ability can be developed consistently in certain patients who are vegetative, it could open the door to specialised devices in the future and enable them to interact with the outside world.

This scan depicts patterns of the vegetative patient’s electrical activity over the head when they attended to the designated words, and when they were distracted by novel but irrelevant words. (Credit: Clinical Neurosciences)

The research, by scientists at the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) and the University of Cambridge, is published today, 31 October, in the journal Neuroimage: Clinical.

For the study, the researchers used electroencephalography (EEG), which non-invasively measures electrical activity over the scalp, to test 21 patients diagnosed as vegetative or minimally conscious, and eight healthy volunteers. Participants heard a series of different words — one word a second over 90 seconds at a time — while being asked to alternately attend to either the word ‘yes’ or the word ‘no’, each of which appeared 15% of the time. (Some examples of the words used include moss, moth, worm and toad.) This was repeated several times over a period of 30 minutes to detect whether the patients were able to attend to the correct target word.
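As a rough illustration of the stimulus statistics described above (not the authors’ actual experimental code), a 90-word stream in which ‘yes’ and ‘no’ each make up 15% of the words, padded with filler words, might be generated like this:

```python
import random

def make_word_stream(fillers, n_words=90, target_rate=0.15, seed=0):
    """Build a one-word-per-second stream in which the targets 'yes' and
    'no' each make up target_rate of the words; the rest are fillers."""
    rng = random.Random(seed)
    n_each = int(n_words * target_rate)  # 13 of 90, roughly 15% per target
    stream = (["yes"] * n_each + ["no"] * n_each +
              [rng.choice(fillers) for _ in range(n_words - 2 * n_each)])
    rng.shuffle(stream)                  # randomize target positions
    return stream

stream = make_word_stream(["moss", "moth", "worm", "toad"])
print(len(stream), stream.count("yes"), stream.count("no"))  # 90 13 13
```

The filler words and the exact randomization scheme here are assumptions for illustration; the paper specifies only the word rate, duration, and 15% target frequency.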

They found that one of the vegetative patients was able to filter out unimportant information and home in on relevant words they were being asked to pay attention to. Using brain imaging (fMRI), the scientists also discovered that this patient could follow simple commands to imagine playing tennis. They also found that three other minimally conscious patients reacted to novel but irrelevant words, but were unable to selectively pay attention to the target word.

These findings suggest that some patients in a vegetative or minimally conscious state might in fact be able to direct attention to the sounds in the world around them.

Dr Srivas Chennu at the University of Cambridge, said: “Not only did we find the patient had the ability to pay attention, we also found independent evidence of their ability to follow commands — information which could enable the development of future technology to help patients in a vegetative state communicate with the outside world.

“In order to try and assess the true level of brain function and awareness that survives in the vegetative and minimally conscious states, we are progressively building up a fuller picture of the sensory, perceptual and cognitive abilities in patients. This study has added a key piece to that puzzle, and provided a tremendous amount of insight into the ability of these patients to pay attention.”

Dr Tristan Bekinschtein at the MRC Cognition and Brain Sciences Unit said: “Our attention can be drawn to something by its strangeness or novelty, or we can consciously decide to pay attention to it. A lot of cognitive neuroscience research tells us that we have distinct patterns in the brain for both forms of attention, which we can measure even when the individual is unable to speak. These findings mean that, in certain cases of individuals who are vegetative, we might be able to enhance this ability and improve their level of communication with the outside world.”

This study builds on a joint programme of research at the University of Cambridge and MRC CBSU where a team of researchers have been developing a series of diagnostic and prognostic tools based on brain imaging techniques since 1998. Famously, in 2006 the group was able to use fMRI imaging techniques to establish that a patient in a vegetative state could respond to yes or no questions by indicating different, distinct patterns of brain activity.

Journal Reference:

  1. Srivas Chennu, Paola Finoia, Evelyn Kamau, Martin M. Monti, Judith Allanson, John D. Pickard, Adrian M. Owen, Tristan A. Bekinschtein. Dissociable endogenous and exogenous attention in disorders of consciousness. NeuroImage: Clinical, 2013; DOI: 10.1016/j.nicl.2013.10.008

Sociological explanations for climate change denial (resilience.org)

by Olga Bonfiglio, originally published by Energy Bulletin | March 17, 2012

Talk about climate change seems to be a taboo subject in America today.

Promises made three years ago by both major political parties to do something have gone by the wayside, while today’s Republican presidential candidates reject evidence that humans are responsible for the warming of the Earth.

The mainstream media routinely report on extreme weather, like this winter’s high temperatures and last summer’s droughts, but reporters and commentators typically veer away from connecting it to climate change.

Last October, the New York Times reported that the share of Americans who believed the Earth was warming had dropped from 79 percent in 2006 to 59 percent. A month later, however, polling experts blamed this decline on a collapse in media coverage and on pollsters’ deeply flawed questions.

It turns out that the United States is one of the few countries in the world still quibbling over climate change, and its influence is stymieing progress at environmental summits like Durban (2011), Cancun (2010), and Copenhagen (2009).

What’s going on?

As you may expect, it’s about money, politics, culture and media bias.

Ron Kramer, a sociologist at Western Michigan University, has been studying how sociological and cultural factors are preventing Americans from talking about or acting on climate change.  He drew on the research of sociologist Stanley Cohen, professor emeritus at the London School of Economics, who says that denial “refers to the maintenance of social worlds in which an undesirable situation (event, condition, phenomenon) is unrecognized, ignored or made to seem normal.”

He cites three categories of denial:

  • Literal denial is “the assertion that something did not happen or is not true.”
  • With interpretive denial, the basic facts are not denied; rather, “…they are given a different meaning from what seems apparent to others.” People recognize that something is happening but insist it is actually good for us.
  • Implicatory denial “covers the multitude of vocabularies, justifications, rationalizations, evasions that we use to deal with our awareness of so many images of unmitigated suffering.” Here, “knowledge itself is not an issue. The genuine challenge is doing the ‘right’ thing with this knowledge.”

Through literal and interpretive denial, climate change deniers declare that the earth is not warming even though 98 percent of our scientists have written thousands of peer-reviewed papers and reports concluding that climate change is real and caused by human activity.

Actually, deniers are organized by conservative think tanks funded by the fossil fuel industry that attempt to create doubt about climate science and block actions that would reduce greenhouse gas emissions and create clean energy alternatives.

To do this they use conspiracy theories and “fake” experts with no background in climate science.  They insist on absolute certainty, cherry-pick the data and ignore the larger body of evidence or misrepresent data and promote logical fallacies like “the climate has changed in the past, therefore current change is natural.”

“Creating doubt blocks any action,” said Kramer.  “This is the same tactic the tobacco industry used to deny that smoking was harmful to people’s health. And, some of the same people are now doing this with climate change.”

The “conservative climate change denial counter-movement,” as Kramer calls it, was led by the George W. Bush administration and congressional Republicans, who obstruct and manipulate the political system to promote their view.  They are complemented by right-wing media outlets and personalities such as Fox News and Rush Limbaugh.  The mainstream media’s “balancing norm” practice then allows the denialists to be placed on par with climate scientists.

The rationale behind this counter-movement is that corporations make money if Americans continue their materialistic attitudes and practices, so they lobby politicians to protect their interests and fund think tanks that accuse “eco-scientists” of destroying capitalism, as revealed by the New York Times.

Corporations provide research monies to university scientists, who rely on them as support from state legislatures and benefactors dwindles.  They advance narratives that people from other parts of the world are worse off than Americans, so we don’t have to change.  They make claims to virtue on TV ads about all they are doing for the environment (a.k.a. “greenwashing”).  And, they argue that doing something about global warming will cost too much and cause them to lose their competitive edge.

Research shows that conservative white males are more likely to espouse climate change denial than other groups, for two reasons, he said.  First, they tend to filter out any information that conflicts with their already-held worldview, because such information threatens the identity, status and esteem they derive from being part of their group.  Sociologists call this “Identity Protective Cognition.”

Second, conservative white males have a stronger need to justify the status quo and to resist attempts to change it.  Sociologists call this “System Justification.”

For example, successful conservative white males stridently defend the capitalist system because it has worked well for them. For anyone to imply the system is not functioning is an unfathomable impossibility akin to blasphemy. [Climatewire via Scientific American]

“Identity Protective Cognition” should also inform environmental activists that the information deficit model of activism is not always a good approach, warned Kramer.  Just providing more information may not change anyone’s views given their commitment to a particular cultural worldview.

In implicatory denial people recognize that something untoward is happening but they fail to act because they are emotionally uncomfortable or troubled about it.

For example, there are the people who are aware of climate change and have some information about it, but take no action, make no behavioral changes and remain apathetic.

This response occurs when people confront confusing and conflicting information from political leaders and the media.  Consequently, they have yet another reason for denial—or they believe the problem can be overcome with technology and they can go on with their lives.

“At some level people understand that climate change can alter human civilization, but they feel a sense of helplessness and powerlessness at the prospect,” said Kramer.  “Others feel guilty that they may have caused the problem.”

Several cultural factors also thwart any decisive action on climate change, said Kramer.

Americans have a tendency toward “anti-intellectualism,” so “nerdy” climate scientists are easily suspect.

Our strong sense of “individualism” helps us strive toward our individual goals, but it likewise keeps us from joining together to do something about climate change.  People ask:  “What good does it do to recycle or drive less when we have such a huge, complex problem as climate change?”

“American exceptionalism” celebrates the American way of life, which has given us a vast bounty of wealth and material goods.  We want to continue this way of life and feel that, in fact, we deserve it.  Nothing bad will happen to us.

Finally, “political alienation” keeps us from trusting our political system to tackle the problem.

“What we ultimately need is international agreement about what to do about climate change,” said Kramer.  “Nothing will happen, however, until the United States commits to doing something.”

What to do about climate change?

Kramer believes we should regard climate change as a matter of social justice and not just science because the people most affected by it are not the ones who created it.

North American and European lifestyles, which are based on easy and cheap access to fossil fuels for energy, agriculture and consumer products, have inadvertently caused much suffering, poverty and environmental degradation to the people of the Global South.

To illustrate, analysts at Maplecroft have produced a map that measures the risk of climate change impacts and the social and financial ability of communities and governments to cope with them.  The most vulnerable nations are Haiti, Bangladesh and Zimbabwe.

Second, Kramer emphasized the moral obligation we have to future generations and to other species.  Simply put, we must reduce our use of the fossil fuels that are largely responsible for emitting greenhouse gases into the atmosphere and driving extreme conditions like hurricanes, floods, droughts, heat waves, warm winters and melting polar ice caps.

Again, people’s lives, livelihoods and communities are affected, and it shouldn’t escape notice that human encroachment on animal habitats is contributing to massive species losses, what some call the Sixth Great Extinction.  Criminologists are grappling with the language to characterize this wanton disregard, using words like “ecocide” and “biocide.”

Kramer suggests that we shift our lifestyles from a culture based on materialistic consumption to a culture based on the caretaking of the Earth, as advocated by Scott Russell Sanders in A Conservationist Manifesto (2009).  Sanders asks questions such as:

  • What would a truly sustainable economy look like?
  • What responsibilities do we bear for the well-being of future generations?
  • What responsibilities do we bear toward Earth’s millions of other species?
  • In a time of ecological calamity and widespread human suffering, how should we imagine a good life?

Third, Kramer calls for a more “prophetic imagination,” as put forward by Walter Brueggemann, a theologian and professor emeritus at Columbia Theological Seminary: an approach in which we face the realities of climate change, confront the truth, “penetrate the numbness and despair,” and avoid drowning in the sense of loss and grief that paralyzes us from acting.

Such an approach can give voice to a “hope-filled responsibility” where people are empowered to act rather than left listless and inattentive.

“It’s not about someone being responsible, but all of us,” said Kramer, “because we are all affected by climate change.”

One major way we can do that is by reducing greenhouse gas emissions—and we have a way to measure our progress.

Bill McKibben, author of The End of Nature (1988), one of the first popularized books about global warming, contends that rising atmospheric concentrations of greenhouse gases are threatening our world.  Today we are at 400 parts per million (ppm) of carbon dioxide and heading toward 550–650 ppm, compared with a pre-industrial level of 275 ppm.  McKibben advocates a goal of 350 ppm and has encouraged people in 188 countries to reduce carbon emissions in their communities.
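The ppm figures quoted above can be put in relative terms with a little arithmetic; this is a minimal sketch using only the numbers stated in the article.

```python
# CO2 concentrations in parts per million (ppm), as quoted in the article.
PRE_INDUSTRIAL = 275
TODAY = 400
TARGET = 350  # the goal McKibben advocates

rise = (TODAY - PRE_INDUSTRIAL) / PRE_INDUSTRIAL
print(f"Rise since pre-industrial times: {rise:.0%}")  # 45%

# Even the 350 ppm target sits well above the pre-industrial baseline:
overshoot = (TARGET - PRE_INDUSTRIAL) / PRE_INDUSTRIAL
print(f"Target vs. pre-industrial baseline: +{overshoot:.0%}")  # +27%
```

In other words, concentrations have already risen about 45 percent above the pre-industrial baseline, and even meeting McKibben’s goal would leave them roughly 27 percent above it.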

However, can people really be motivated to act given the complexity of the problem?

Kramer harks back to Howard Zinn’s autobiographical book You Can’t Be Neutral on a Moving Train (1994), in which Zinn says that we have to “look to history” to see that people working at the grassroots level were able to make change despite tremendous, entrenched obstacles.  It was ordinary people who ended slavery and apartheid, liberated India, dismantled the Soviet Union and initiated the Arab Spring of 2011.

“Climate change is a political issue,” Kramer insisted.  “We know what to do.  We know that we need to mitigate the carbon emissions from fossil fuels.  What we lack is the political will and the mechanisms to move forward.”

Kramer insisted that climate change is not a party or ideological issue but rather a humanity issue.

“Planet Earth will survive,” he concluded, “but will human civilization?”

Disaster Relief Donations Track Number of People Killed, Not Number of Survivors (Science Daily)

Sep. 23, 2013 — People pay more attention to the number of people killed in a natural disaster than to the number of survivors when deciding how much money to donate to disaster relief efforts, according to new research published in Psychological Science, a journal of the Association for Psychological Science. The donation bias can be reversed, however, with a simple change in terminology.

“While fatalities have a severe impact on the afflicted community or country, disaster aid should be allocated to people affected by the disaster — those who are injured, homeless, or hungry,” says lead researcher Ioannis Evangelidis of Rotterdam School of Management, Erasmus University (RSM) in the Netherlands. “Our research shows that donors tend not to consider who really receives the aid.”

This discrepancy leads to a “humanitarian disaster,” say Evangelidis and colleague Bram Van den Bergh, where money is given disproportionately toward the natural disasters with the most deaths, instead of the ones with the most people in desperate need of help.

The researchers began by examining humanitarian relief data for natural disasters occurring between 2000 and 2010. As they expected, they found that the number of fatalities predicted the probability of donation, as well as the amount donated, by private donors in various disasters. Their model estimated that about $9,300 was donated per person killed in a given disaster. The number of people affected in the disasters, on the other hand, appeared to have no influence on the amount donated to relief efforts.
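The relationship described above, in which donations scale with fatalities but not with the number of people affected, can be illustrated with a toy regression. The disaster rows below are invented for illustration; only the roughly $9,300-per-fatality slope comes from the study.

```python
# Toy illustration of the finding that donations track fatalities, not the
# number of people affected. All disaster rows are invented; only the
# ~$9,300-per-fatality slope is taken from the study's model.

def ols_slope(xs, ys):
    """One-variable least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical disasters: (fatalities, people_affected, donations_usd)
disasters = [
    (100,  50_000,    930_000),
    (500,  20_000,  4_650_000),
    (1000, 80_000,  9_300_000),
    (2000, 10_000, 18_600_000),
]

deaths   = [d[0] for d in disasters]
affected = [d[1] for d in disasters]
dollars  = [d[2] for d in disasters]

print(f"$ per fatality: {ols_slope(deaths, dollars):,.0f}")    # ~9,300
print(f"$ per affected: {ols_slope(affected, dollars):,.0f}")  # no systematic relation
```

In this toy data the donations line up perfectly with the death toll, so the fatality slope recovers the $9,300 figure while the affected-count slope carries no signal, mirroring the pattern the researchers report.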

Evangelidis and Van den Bergh believe that donors are more likely to pay attention to a death toll when deciding how much to give because the term “affected” is ambiguous. In many cases, though, fatalities don’t correlate with the number of actual people in need.

To find a way to combat this donation bias, the researchers brought participants into the laboratory and presented them with several scenarios, involving various types of disasters and different numbers of people killed and affected.

Overall, participants allocated more money when a disaster resulted in a high death toll — even when the number of people affected was low — mirroring the data from the real natural disasters.

The bias was reversed, however, when participants had to compare two earthquakes — one that killed 4,500 and affected 7,500 versus one that claimed 7,500 and affected 4,500 — before allocating funds.

The act of comparing the two disasters seems to have forced the participants to think critically about which group actually needed the aid more. Notably, the effect carried over when the participants were asked to allocate funds for a third disaster.

But the easiest, and most realistic, way to reduce the donation bias may involve a simple change in terminology. When the researchers swapped the term “affected” with the much less ambiguous term “homeless,” participants believed that money should be allocated according to the number of homeless people following a disaster.

“Above all, attention should be diverted from the number of fatalities to the number of survivors in need,” Evangelidis and Van den Bergh conclude. “We are optimistic that these insights will enhance aid to victims of future disasters.”

Journal Reference:

  1. I. Evangelidis, B. Van den Bergh. The Number of Fatalities Drives Disaster Aid: Increasing Sensitivity to People in Need. Psychological Science, 2013; DOI: 10.1177/0956797613490748

Orangutans Plan Their Future Route and Communicate It to Others (Science Daily)

Sep. 11, 2013 — Male orangutans plan their travel route up to one day in advance and communicate it to other members of their species. In order to attract females and repel male rivals, they call in the direction in which they are going to travel. Anthropologists at the University of Zurich have found that not only captive, but also wild-living orangutans make use of their planning ability.

Male orangutans face the direction they plan to travel and emit ‘long calls’ in that direction. (Credit: UZH)

For a long time it was thought that only humans had the ability to anticipate future actions, whereas animals are caught in the here and now. But in recent years, clever experiments with great apes in zoos have shown that they do remember past events and can plan for their future needs. Anthropologists at the University of Zurich have now investigated whether wild apes also have this skill, following them for several years through the dense tropical swamplands of Sumatra.

Orangutans communicate their plans

Orangutans generally journey through the forest alone, but they also maintain social relationships. Adult males sometimes emit loud ‘long calls’ to attract females and repel rivals. Their cheek pads act as a funnel for amplifying the sound in the same way as a megaphone. Females that only hear a faint call come closer in order not to lose contact. Non-dominant males on the other hand hurry in the opposite direction if they hear the call coming loud and clear in their direction.

“To optimize the effect of these calls, it thus would make sense for the male to call in the direction of his future whereabouts, if he already knew about them,” explains Carel van Schaik. “We then actually observed that the males traveled for several hours in approximately the same direction as they had called.”

In extreme cases, long calls made around nesting time in the evening predicted the travel direction better than chance until the evening of the next day. Carel van Schaik and his team conclude that orangutans plan their route up to a day ahead. In addition, the males often announced changes in travel direction with a new, better-fitting long call. The researchers also found that in the morning, the other orangutans reacted correctly to the long call of the previous evening, even if no new long call was emitted.

“Our study makes it clear that wild orangutans do not simply live in the here and now, but can imagine a future and even announce their plans. In this sense, then, they have become a bit more like us,” concludes Carel van Schaik.

Journal Reference:

  1. Carel P. van Schaik, Laura Damerius, Karin Isler. Wild Orangutan Males Plan and Communicate Their Travel Direction One Day in Advance. PLoS ONE, 2013; 8 (9): e74896. DOI: 10.1371/journal.pone.0074896

Poor Concentration: Poverty Reduces Brainpower Needed for Navigating Other Areas of Life (Science Daily)

Aug. 29, 2013 — Poverty and all its related concerns require so much mental energy that the poor have less remaining brainpower to devote to other areas of life, according to research based at Princeton University. As a result, people of limited means are more likely to make mistakes and bad decisions that may be amplified by — and perpetuate — their financial woes. 

Research based at Princeton University found that poverty and all its related concerns require so much mental energy that the poor have less remaining brainpower to devote to other areas of life. Experiments showed that the impact of financial concerns on the cognitive function of low-income individuals was similar to a 13-point dip in IQ, or the loss of an entire night’s sleep. To gauge the influence of poverty in natural contexts, the researchers tested 464 sugarcane farmers in India who rely on the annual harvest for at least 60 percent of their income. Each farmer performed better on common fluid-intelligence and cognition tests post-harvest compared to pre-harvest. (Credit: Image courtesy of Princeton University)

Published in the journal Science, the study presents a unique perspective on the causes of persistent poverty. The researchers suggest that being poor may keep a person from concentrating on the very avenues that would lead them out of poverty. A person’s cognitive function is diminished by the constant and all-consuming effort of coping with the immediate effects of having little money, such as scrounging to pay bills and cut costs. Thus, a person is left with fewer “mental resources” to focus on complicated, indirectly related matters such as education, job training and even managing their time.

In a series of experiments, the researchers found that pressing financial concerns had an immediate impact on the ability of low-income individuals to perform on common cognitive and logic tests. On average, a person preoccupied with money problems exhibited a drop in cognitive function similar to a 13-point dip in IQ, or the loss of an entire night’s sleep.

But when their concerns were benign, low-income individuals performed competently, at a similar level to people who were well off, said corresponding author Jiaying Zhao, who conducted the study as a doctoral student in the lab of co-author Eldar Shafir, Princeton’s William Stewart Tod Professor of Psychology and Public Affairs. Zhao and Shafir worked with Anandi Mani, an associate professor of economics at the University of Warwick in Britain, and Sendhil Mullainathan, a Harvard University economics professor.

“These pressures create a salient concern in the mind and draw mental resources to the problem itself. That means we are unable to focus on other things in life that need our attention,” said Zhao, who is now an assistant professor of psychology at the University of British Columbia.

“Previous views of poverty have blamed poverty on personal failings, or an environment that is not conducive to success,” she said. “We’re arguing that the lack of financial resources itself can lead to impaired cognitive function. The very condition of not having enough can actually be a cause of poverty.”

The mental tax that poverty can put on the brain is distinct from stress, Shafir explained. Stress is a person’s response to various outside pressures that — according to studies of arousal and performance — can actually enhance a person’s functioning, he said. In the Science study, Shafir and his colleagues instead describe an immediate rather than chronic preoccupation with limited resources that can be a detriment to unrelated yet still important tasks.

“Stress itself doesn’t predict that people can’t perform well — they may do better up to a point,” Shafir said. “A person in poverty might be at the high part of the performance curve when it comes to a specific task and, in fact, we show that they do well on the problem at hand. But they don’t have leftover bandwidth to devote to other tasks. The poor are often highly effective at focusing on and dealing with pressing problems. It’s the other tasks where they perform poorly.”

The fallout of neglecting other areas of life may loom larger for a person just scraping by, Shafir said. Late fees tacked on to a forgotten rent payment, a job lost because of poor time management: these make a tight money situation worse. And as people get poorer, they tend to make difficult and often costly decisions that further perpetuate their hardship, Shafir said. He and Mullainathan were co-authors on a 2012 Science paper reporting that poor people are more likely to engage in behaviors that reinforce the conditions of poverty, such as excessive borrowing.

“They can make the same mistakes, but the outcomes of errors are more dear,” Shafir said. “So, if you live in poverty, you’re more error prone and errors cost you more dearly — it’s hard to find a way out.”

The first set of experiments took place in a New Jersey mall between 2010 and 2011 with roughly 400 subjects chosen at random. Their median annual income was around $70,000 and the lowest income was around $20,000. The researchers created scenarios wherein subjects had to ponder how they would solve financial problems, for example, whether they would handle a sudden car repair by paying in full, borrowing money or putting the repairs off. Participants were assigned either an “easy” or “hard” scenario in which the cost was low or high — such as $150 or $1,500 for the car repair. While participants pondered these scenarios, they performed common fluid-intelligence and cognition tests.

Subjects were divided into a “poor” group and a “rich” group based on their income. The study showed that when the scenarios were easy — the financial problems not too severe — the poor and rich performed equally well on the cognitive tests. But when they thought about the hard scenarios, people at the lower end of the income scale performed significantly worse on both cognitive tests, while the rich participants were unfazed.
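The result above is an income-by-scenario interaction: the gap between easy and hard scenarios opens up only for the lower-income group. A minimal sketch of that pattern follows; all scores are invented, and only the shape of the result follows the study.

```python
# Toy sketch of the mall experiment's result: mean cognitive-test scores by
# income group and scenario difficulty. All numbers are invented; only the
# interaction pattern (poor participants drop on hard scenarios while rich
# participants are unfazed) mirrors the study.

mean_score = {
    ("rich", "easy"): 0.80,
    ("rich", "hard"): 0.79,  # rich group: essentially unchanged
    ("poor", "easy"): 0.80,  # poor group on easy scenarios: on par with rich
    ("poor", "hard"): 0.62,  # poor group on hard scenarios: marked drop
}

def drop(group):
    """Score loss from easy to hard scenarios for one income group."""
    return mean_score[(group, "easy")] - mean_score[(group, "hard")]

interaction = drop("poor") - drop("rich")
print(f"Drop (poor): {drop('poor'):.2f}")
print(f"Drop (rich): {drop('rich'):.2f}")
print(f"Interaction effect: {interaction:.2f}")  # the signature of the finding
```

The interaction term, the extra score loss the poor group suffers relative to the rich group, is what distinguishes this result from a simple main effect of income.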

To better gauge the influence of poverty in natural contexts, between 2010 and 2011 the researchers also tested 464 sugarcane farmers in India who rely on the annual harvest for at least 60 percent of their income. Because sugarcane harvests occur once a year, these are farmers who find themselves rich after harvest and poor before it. Each farmer was given the same tests before and after the harvest, and performed better on both tests post-harvest compared to pre-harvest.

The cognitive effect of poverty the researchers found relates to the more general influence of “scarcity” on cognition, which is the larger focus of Shafir’s research group. Scarcity in this case relates to any deficit — be it in money, time, social ties or even calories — that people experience in trying to meet their needs. Scarcity consumes “mental bandwidth” that would otherwise go to other concerns in life, Zhao said.

“These findings fit in with our story of how scarcity captures attention. It consumes your mental bandwidth,” Zhao said. “Just asking a poor person to think about hypothetical financial problems reduces mental bandwidth. This is an acute, immediate impact, and has implications for scarcity of resources of any kind.”

“We documented similar effects among people who are not otherwise poor, but on whom we imposed scarce resources,” Shafir added. “It’s not about being a poor person — it’s about living in poverty.”

Many types of scarcity are temporary and often discretionary, said Shafir, who is co-author with Mullainathan of the book, “Scarcity: Why Having Too Little Means So Much,” to be published in September. For instance, a person pressed for time can reschedule appointments, cancel something or even decide to take on less.

“When you’re poor you can’t say, ‘I’ve had enough, I’m not going to be poor anymore.’ Or, ‘Forget it, I just won’t give my kids dinner, or pay rent this month.’ Poverty imposes a much stronger load that’s not optional and in very many cases is long lasting,” Shafir said. “It’s not a choice you’re making — you’re just reduced to few options. This is not something you see with many other types of scarcity.”

The researchers suggest that services for the poor should accommodate the dominance that poverty has on a person’s time and thinking. Such steps would include simpler aid forms and more guidance in receiving assistance, or training and educational programs structured to be more forgiving of unexpected absences, so that a person who has stumbled can more easily try again.

“You want to design a context that is more scarcity proof,” said Shafir, noting that better-off people have access to regular support in their daily lives, be it a computer reminder, a personal assistant, a housecleaner or a babysitter.

“There’s very little you can do with time to get more money, but a lot you can do with money to get more time,” Shafir said. “The poor, who our research suggests are bound to make more mistakes and pay more dearly for errors, inhabit contexts often not designed to help.”

Journal Reference:

  1. A. Mani, S. Mullainathan, E. Shafir, J. Zhao. Poverty Impedes Cognitive Function. Science, 2013; 341 (6149): 976. DOI: 10.1126/science.1238041

Language can reveal the invisible, study shows (University of Wisconsin-Madison)

Public release date: 26-Aug-2013

By Gary Lupyan, University of Wisconsin-Madison

MADISON, Wis. — It is natural to imagine that the sense of sight takes in the world as it is — simply passing on what the eyes collect from light reflected by the objects around us.

But the eyes do not work alone. What we see is a function not only of incoming visual information, but also how that information is interpreted in light of other visual experiences, and may even be influenced by language.

Words can play a powerful role in what we see, according to a study published this month by University of Wisconsin–Madison cognitive scientist and psychology professor Gary Lupyan, and Emily Ward, a Yale University graduate student, in the journal Proceedings of the National Academy of Sciences.

“Perceptual systems do the best they can with inherently ambiguous inputs by putting them in context of what we know, what we expect,” Lupyan says. “Studies like this are helping us show that language is a powerful tool for shaping perceptual systems, acting as a top-down signal to perceptual processes. In the case of vision, what we consciously perceive seems to be deeply shaped by our knowledge and expectations.”

And those expectations can be altered with a single word.

To show how deeply words can influence perception, Lupyan and Ward used a technique called continuous flash suppression to render a series of objects invisible for a group of volunteers.

Each person was shown a picture of a familiar object — such as a chair, a pumpkin or a kangaroo — in one eye. At the same time, their other eye saw a series of flashing, “squiggly” lines.

“Essentially, it’s visual noise,” Lupyan says. “Because the noise patterns are high-contrast and constantly moving, they dominate, and the input from the other eye is suppressed.”

Immediately before looking at the combination of the flashing lines and suppressed object, the study participants heard one of three things: the word for the suppressed object (“pumpkin,” when the object was a pumpkin), the word for a different object (“kangaroo,” when the object was actually a pumpkin), or just static.

Then researchers asked the participants to indicate whether they saw something or not. When the word they heard matched the object that was being wiped out by the visual noise, the subjects were more likely to report that they did indeed see something than in cases where the wrong word or no word at all was paired with the image.
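A simple way to summarize such an experiment is the hit rate (the share of trials on which participants reported seeing something) per cue condition. The counts below are invented for illustration; only the ordering of the conditions follows the study.

```python
# Toy tally of a continuous-flash-suppression experiment: how often
# participants reported seeing the suppressed object, split by what they
# heard just beforehand. All counts are invented; only the ordering
# (matching label > no label > mismatching label) follows the study.

trials = {                      # condition: (times reported seen, total trials)
    "matching label":    (65, 100),
    "no label (static)": (45, 100),
    "mismatching label": (30, 100),
}

rates = {condition: seen / total for condition, (seen, total) in trials.items()}
for condition, rate in rates.items():
    print(f"{condition:>18}: hit rate {rate:.0%}")

# The signature result: a matching label boosts detection, and a
# mismatching label suppresses it, relative to hearing only static.
assert rates["matching label"] > rates["no label (static)"] > rates["mismatching label"]
```

The middle condition (static) serves as the baseline, so both the boost from a matching word and the cost of a mismatching word show up as deviations from it.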

“Hearing the word for the object that was being suppressed boosted that object into their vision,” Lupyan says.

And hearing an unmatched word actually hurt study subjects’ chances of seeing an object.

“With the label, you’re expecting pumpkin-shaped things,” Lupyan says. “When you get a visual input consistent with that expectation, it boosts it into perception. When you get an incorrect label, it further suppresses that.”

Experiments have shown that continuous flash suppression interrupts sight so thoroughly that there are no signals in the brain to suggest the invisible objects are perceived, even implicitly.

“Unless they can tell us they saw it, there’s nothing to suggest the brain was taking it in at all,” Lupyan says. “If language affects performance on a test like this, it indicates that language is influencing vision at a pretty early stage. It’s getting really deep into the visual system.”

The study demonstrates a deeper connection between language and simple sensory perception than previously thought, and one that makes Lupyan wonder about the extent of language’s power. The influence of language may extend to other senses as well.

“A lot of previous work has focused on vision, and we have neglected to examine the role of knowledge and expectations on other modalities, especially smell and taste,” Lupyan says. “What I want to see is whether we can really alter threshold abilities,” he says. “Does expecting a particular taste for example, allow you to detect a substance at a lower concentration?”

If you’re drinking a glass of milk, but thinking about orange juice, he says, that may change the way you experience the milk.

“There’s no point in figuring out what some objective taste is,” Lupyan says. “What’s important is whether the milk is spoiled or not. If you expect it to be orange juice, and it tastes like orange juice, it’s fine. But if you expected it to be milk, you’d think something was wrong.”

The Battle Over Global Warming Is All in Your Head (Time)

Despite the fact that more people now acknowledge that climate change represents a significant threat to human well-being, this has yet to translate into any meaningful action. Psychologists may have an answer as to why this is

Aug. 19, 2013

ANDREY SMIRNOV/AFP/GETTY IMAGES. Climate campaigns, like this one from Greenpeace in Moscow, have failed to galvanize public support for strong climate action

Today the scientific community is in almost total agreement that the Earth’s climate is changing as a result of human activity, and that this represents a huge threat to the planet and to us. According to a Pew survey conducted in March, however, public opinion lags behind the scientific conclusion, with only 69% of those surveyed accepting the view that the Earth is warming, and only 1 in 4 Americans seeing global warming as a major threat. Still, 69% is a solid majority, which raises the question: Why aren’t we doing anything about it?

This political inertia in the face of unprecedented threat is the most fundamental challenge to tackling climate change. Climate scientists and campaigners have long debated how to better communicate the message to nonexperts so that climate science can be translated into action. According to Christopher Rapley, professor of climate science at University College London, the usual tactic of climate experts, providing the public with more information, isn’t enough because “it does not address key underlying causes.” We are all bombarded with evidence of climate change on an almost daily basis, from new studies and data to direct experiences of freakish weather events like last year’s epic drought in the U.S. The information is almost unavoidable.

If it’s not a data deficit that’s preventing people from doing more on global warming, what is it? Blame our brains. Renee Lertzman, an applied researcher who focuses on the psychological dimensions of sustainability, explains that the kind of systemic threat that climate change poses to humans is “unique both psychologically and socially.” We face a minefield of mental barriers and issues that prevent us from confronting the threat.

For some, the answer lies in cognitive science. Daniel Gilbert, a professor of psychology at Harvard, has written about why our inability to deal with climate change is due in part to the way our mind is wired. Gilbert describes four key reasons ranging from the fact that global warming doesn’t take a human form — making it difficult for us to think of it as an enemy — to our brains’ failure to accurately perceive gradual change as opposed to rapid shifts. Climate change has occurred slowly enough for our minds to normalize it, which is precisely what makes it a deadly threat, as Gilbert writes, “because it fails to trip the brain’s alarm, leaving us soundly asleep in a burning bed.”

Robert Gifford, a professor of psychology and environmental studies at the University of Victoria in Canada, also picks up on the point about our brains’ difficulty in grasping climate change as a threat. Gifford refers to this and other psychological barriers to mitigating climate change as “dragons of inaction.” Since authoring a paper on the subject in 2011 in which he outlined seven main barriers, or dragons, he has found many more. “We’re up to around 30,” he notes. “Now it’s time to think about how we can slay these dragons.” Gifford lists factors such as limited cognition or ignorance of the problem, ideologies or worldviews that may prevent action, social comparisons with other people and perceived inequity (the “Why should we change if X corporation or Y country won’t?”) and the perceived risks of changing our behavior.

Gifford is reluctant to pick out one barrier as being more powerful or limiting than another. “If I had to name one, I would nominate the lack of perceived behavioral control; ‘I’m only one person, what can I do?’ is certainly a big one.” For many, the first challenge will be in recognizing which dragons they have to deal with before they can overcome them. “If you don’t know what your problem is, you don’t know what the solution is,” says Gifford.

Yet this approach can only work if people are prepared to acknowledge that they have a problem. But for those of us who understand that climate change is a problem yet make little effort to cut the number of overseas trips we make or the amount of meat we consume, neither apathy nor denial really explains the dissonance between our actions and beliefs. Lertzman has come to the conclusion that this is not because of apathy — a lack of feeling — but because of the simple fact that we care an overwhelming amount about both the planet and our way of life, and we find that conflict too painful to bear. Our apparent apathy is just a defense mechanism in the face of this psychic pain.

“We’re reluctant to come to terms with the fact that what we love and enjoy and what gives us a sense of who we are is also now bound up with the most unimaginable devastation,” says Lertzman. “When we don’t process the pain of that, that’s when we get stuck and can’t move forward.” Lertzman refers to this inability to mourn as “environmental melancholia,” and points to South Africa’s postapartheid Truth and Reconciliation Commission as an example of how to effectively deal with this collective pain. “I’m not saying there should be one for climate or carbon, but there’s a lot to be said for providing a means for people to talk together about climate change, to make it socially acceptable to talk about it.”

Rosemary Randall, a trained psychotherapist, has organized something close to this. She runs the U.K.-based Carbon Conversations, a program that brings people together to talk in a group setting about ways of halving their personal carbon footprint. Writing in Aeon, an online magazine, Randall suggests that climate change is such a disturbing subject that “like death, it can raise fears and anxieties that people feel have no place in polite conversation.” Randall acknowledges that while psychology and psychoanalysis aren’t the sole solutions to tackling climate change, “they do offer an important way of thinking about the problem.”

Lertzman says the mainstream climate-change community has been slow to register the value of psychology and social analysis in addressing global warming. “I think there’s a spark of some interest, but also a wariness of what this means, what it might look like,” she notes. Gifford says otherwise, however, explaining that he has never collaborated with other disciplines as much as he does now. “I may be a little biased because I’m invested in working in it, but in my view, climate change, and not mental health, is the biggest psychological problem we face today because it affects 100% of the global population.”

Despite the pain, shame, difficulty and minefield of other psychological barriers that we face in fully addressing climate change, both Lertzman and Gifford are still upbeat about our ability to face up to the challenge. “It’s patronizing to say that climate change is too big or abstract an issue for people to deal with,” says Lertzman. “There can’t be something about the human mind that stops us grappling with these issues given that so many people already are — maybe that’s what we should be focusing on instead.”

Read more: http://science.time.com/2013/08/19/in-denial-about-the-climate-the-psychological-battle-over-global-warming/

The Science of Why We Don’t Believe Science (Mother Jones)

How our brains fool us on climate, creationism, and the end of the world.

Mon Apr. 18, 2011 3:00 AM PDT


“A MAN WITH A CONVICTION is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point.” So wrote the celebrated Stanford University psychologist Leon Festinger [1] (PDF), in a passage that might have been referring to climate change denial—the persistent rejection, on the part of so many Americans today, of what we know about global warming and its human causes. But it was too early for that—this was the 1950s—and Festinger was actually describing a famous case study [2] in psychology.

Festinger and several of his colleagues had infiltrated the Seekers, a small Chicago-area cult whose members thought they were communicating with aliens—including one, “Sananda,” who they believed was the astral incarnation of Jesus Christ. The group was led by Dorothy Martin, a Dianetics devotee who transcribed the interstellar messages through automatic writing.

Through her, the aliens had given the precise date of an Earth-rending cataclysm: December 21, 1954. Some of Martin’s followers quit their jobs and sold their property, expecting to be rescued by a flying saucer when the continent split asunder and a new sea swallowed much of the United States. The disciples even went so far as to remove brassieres and rip zippers out of their trousers—the metal, they believed, would pose a danger on the spacecraft.

Festinger and his team were with the cult when the prophecy failed. First, the “boys upstairs” (as the aliens were sometimes called) did not show up and rescue the Seekers. Then December 21 arrived without incident. It was the moment Festinger had been waiting for: How would people so emotionally invested in a belief system react, now that it had been soundly refuted?


At first, the group struggled for an explanation. But then rationalization set in. A new message arrived, announcing that they’d all been spared at the last minute. Festinger summarized the extraterrestrials’ new pronouncement: “The little group, sitting all night long, had spread so much light that God had saved the world from destruction.” Their willingness to believe in the prophecy had saved Earth from the prophecy!

From that day forward, the Seekers, previously shy of the press and indifferent toward evangelizing, began to proselytize. “Their sense of urgency was enormous,” wrote Festinger. The devastation of all they had believed had made them even more certain of their beliefs.

In the annals of denial, it doesn’t get much more extreme than the Seekers. They lost their jobs, the press mocked them, and there were efforts to keep them away from impressionable young minds. But while Martin’s space cult might lie on the far end of the spectrum of human self-delusion, there’s plenty to go around. And since Festinger’s day, an array of new discoveries in psychology and neuroscience has further demonstrated how our preexisting beliefs, far more than any new facts, can skew our thoughts and even color what we consider our most dispassionate and logical conclusions. This tendency toward so-called “motivated reasoning [5]” helps explain why we find groups so polarized over matters where the evidence is so unequivocal: climate change, vaccines, “death panels,” the birthplace and religion of the president [6] (PDF), and much else. It would seem that expecting people to be convinced by the facts flies in the face of, you know, the facts.

The theory of motivated reasoning builds on a key insight of modern neuroscience [7] (PDF): Reasoning is actually suffused with emotion (or what researchers often call “affect”). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we’re aware of it. That shouldn’t be surprising: Evolution required us to react very quickly to stimuli in our environment. It’s a “basic human survival skill,” explains political scientist Arthur Lupia [8] of the University of Michigan. We push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself.


We’re not driven only by emotions, of course—we also reason, deliberate. But reasoning comes later, works slower—and even then, it doesn’t take place in an emotional vacuum. Rather, our quick-fire emotions can set us on a course of thinking that’s highly biased, especially on topics we care a great deal about.

Consider a person who has heard about a scientific discovery that deeply challenges her belief in divine creation—a new hominid, say, that confirms our evolutionary origins. What happens next, explains political scientist Charles Taber [9] of Stony Brook University, is a subconscious negative response to the new information—and that response, in turn, guides the type of memories and associations formed in the conscious mind. “They retrieve thoughts that are consistent with their previous beliefs,” says Taber, “and that will lead them to build an argument and challenge what they’re hearing.”

In other words, when we think we’re reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt [10]: We may think we’re being scientists, but we’re actually being lawyers [11] (PDF). Our “reasoning” is a means to a predetermined end—winning our “case”—and is shot through with biases. They include “confirmation bias,” in which we give greater heed to evidence and arguments that bolster our beliefs, and “disconfirmation bias,” in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.

That’s a lot of jargon, but we all understand these mechanisms when it comes to interpersonal relationships. If I don’t want to believe that my spouse is being unfaithful, or that my child is a bully, I can go to great lengths to explain away behavior that seems obvious to everybody else—everybody who isn’t too emotionally invested to accept it, anyway. That’s not to suggest that we aren’t also motivated to perceive the world accurately—we are. Or that we never change our minds—we do. It’s just that we have other important goals besides accuracy—including identity affirmation and protecting one’s sense of self—and often those make us highly resistant to changing our beliefs when the facts say we should.

Modern science originated from an attempt to weed out such subjective lapses—what that great 17th century theorist of the scientific method, Francis Bacon, dubbed the “idols of the mind.” Even if individual researchers are prone to falling in love with their own theories, the broader processes of peer review and institutionalized skepticism are designed to ensure that, eventually, the best ideas prevail.


Our individual responses to the conclusions that science reaches, however, are quite another matter. Ironically, in part because researchers employ so much nuance and strive to disclose all remaining sources of uncertainty, scientific evidence is highly susceptible to selective reading and misinterpretation. Giving ideologues or partisans scientific data that’s relevant to their beliefs is like unleashing them in the motivated-reasoning equivalent of a candy store.

Sure enough, a large number of psychological studies have shown that people respond to scientific or technical evidence in ways that justify their preexisting beliefs. In a classic 1979 experiment [12] (PDF), pro- and anti-death penalty advocates were exposed to descriptions of two fake scientific studies: one supporting and one undermining the notion that capital punishment deters violent crime and, in particular, murder. They were also shown detailed methodological critiques of the fake studies—and in a scientific sense, neither study was stronger than the other. Yet in each case, advocates more heavily criticized the study whose conclusions disagreed with their own, while describing the study that was more ideologically congenial as more “convincing.”

Since then, similar results have been found for how people respond to “evidence” about affirmative action, gun control, the accuracy of gay stereotypes [13], and much else. Even when study subjects are explicitly instructed to be unbiased and even-handed about the evidence, they often fail.

And it’s not just that people twist or selectively read scientific evidence to support their preexisting views. According to research by Yale Law School professor Dan Kahan [14] and his colleagues, people’s deep-seated views about morality, and about the way society should be ordered, strongly predict whom they consider to be a legitimate scientific expert in the first place—and thus where they consider “scientific consensus” to lie on contested issues.

In Kahan’s research [15] (PDF), individuals are classified, based on their cultural values, as either “individualists” or “communitarians,” and as either “hierarchical” or “egalitarian” in outlook. (Somewhat oversimplifying, you can think of hierarchical individualists as akin to conservative Republicans, and egalitarian communitarians as liberal Democrats.) In one study, subjects in the different groups were asked to help a close friend determine the risks associated with climate change, sequestering nuclear waste, or concealed carry laws: “The friend tells you that he or she is planning to read a book about the issue but would like to get your opinion on whether the author seems like a knowledgeable and trustworthy expert.” A subject was then presented with the résumé of a fake expert “depicted as a member of the National Academy of Sciences who had earned a Ph.D. in a pertinent field from one elite university and who was now on the faculty of another.” The subject was then shown a book excerpt by that “expert,” in which the risk of the issue at hand was portrayed as high or low, well-founded or speculative. The results were stark: When the scientist’s position stated that global warming is real and human-caused, for instance, only 23 percent of hierarchical individualists agreed the person was a “trustworthy and knowledgeable expert.” Yet 88 percent of egalitarian communitarians accepted the same scientist’s expertise. Similar divides were observed on whether nuclear waste can be safely stored underground and whether letting people carry guns deters crime. (The alliances did not always hold. In another study [16] (PDF), hierarchs and communitarians were in favor of laws that would compel the mentally ill to accept treatment, whereas individualists and egalitarians were opposed.)


In other words, people rejected the validity of a scientific source because its conclusion contradicted their deeply held views—and thus the relative risks inherent in each scenario. A hierarchical individualist finds it difficult to believe that the things he prizes (commerce, industry, a man’s freedom to possess a gun to defend his family [16]) (PDF) could lead to outcomes deleterious to society. Whereas egalitarian communitarians tend to think that the free market causes harm, that patriarchal families mess up kids, and that people can’t handle their guns. The study subjects weren’t “anti-science”—not in their own minds, anyway. It’s just that “science” was whatever they wanted it to be. “We’ve come to a misadventure, a bad situation where diverse citizens, who rely on diverse systems of cultural certification, are in conflict,” says Kahan [17].

And that undercuts the standard notion that the way to persuade people is via evidence and argument. In fact, head-on attempts to persuade can sometimes trigger a backfire effect, where people not only fail to change their minds when confronted with the facts—they may hold their wrong views more tenaciously than ever.

Take, for instance, the question of whether Saddam Hussein possessed hidden weapons of mass destruction just before the US invasion of Iraq in 2003. When political scientists Brendan Nyhan and Jason Reifler showed subjects fake newspaper articles [18] (PDF) in which this was first suggested (in a 2004 quote from President Bush) and then refuted (with the findings of the Bush-commissioned Iraq Survey Group report, which found no evidence of active WMD programs in pre-invasion Iraq), they found that conservatives were more likely than before to believe the claim. (The researchers also tested how liberals responded when shown that Bush did not actually “ban” embryonic stem-cell research. Liberals weren’t particularly amenable to persuasion, either, but no backfire effect was observed.)

Another study gives some inkling of what may be going through people’s minds when they resist persuasion. Northwestern University sociologist Monica Prasad [19] and her colleagues wanted to test whether they could dislodge the notion that Saddam Hussein and Al Qaeda were secretly collaborating among those most likely to believe it—Republican partisans from highly GOP-friendly counties. So the researchers set up a study [20] (PDF) in which they discussed the topic with some of these Republicans in person. They would cite the findings of the 9/11 Commission, as well as a statement in which George W. Bush himself denied his administration had “said the 9/11 attacks were orchestrated between Saddam and Al Qaeda.”


As it turned out, not even Bush’s own words could change the minds of these Bush voters—just 1 of the 49 partisans who originally believed the Iraq-Al Qaeda claim changed his or her mind. Far more common was resisting the correction in a variety of ways, either by coming up with counterarguments or by simply being unmovable:

Interviewer: [T]he September 11 Commission found no link between Saddam and 9/11, and this is what President Bush said. Do you have any comments on either of those?

Respondent: Well, I bet they say that the Commission didn’t have any proof of it but I guess we still can have our opinions and feel that way even though they say that.

The same types of responses are already being documented on divisive topics facing the current administration. Take the “Ground Zero mosque.” Using information from the political myth-busting site FactCheck.org [21], a team at Ohio State presented subjects [22] (PDF) with a detailed rebuttal to the claim that “Feisal Abdul Rauf, the Imam backing the proposed Islamic cultural center and mosque, is a terrorist-sympathizer.” Yet among those who were aware of the rumor and believed it, fewer than a third changed their minds.

A key question—and one that’s difficult to answer—is how “irrational” all this is. On the one hand, it doesn’t make sense to discard an entire belief system, built up over a lifetime, because of some new snippet of information. “It is quite possible to say, ‘I reached this pro-capital-punishment decision based on real information that I arrived at over my life,'” explains Stanford social psychologist Jon Krosnick [23]. Indeed, there’s a sense in which science denial could be considered keenly “rational.” In certain conservative communities, explains Yale’s Kahan, “People who say, ‘I think there’s something to climate change,’ that’s going to mark them out as a certain kind of person, and their life is going to go less well.”

This may help explain a curious pattern Nyhan and his colleagues found when they tried to test the fallacy [6] (PDF) that President Obama is a Muslim. When a nonwhite researcher was administering their study, research subjects were amenable to changing their minds about the president’s religion and updating incorrect views. But when only white researchers were present, GOP survey subjects in particular were more likely to believe the Obama Muslim myth than before. The subjects were using “social desirability” to tailor their beliefs (or stated beliefs, anyway) to whoever was listening.

Which leads us to the media. When people grow polarized over a body of evidence, or a resolvable matter of fact, the cause may be some form of biased reasoning, but they could also be receiving skewed information to begin with—or a complicated combination of both. In the Ground Zero mosque case, for instance, a follow-up study [24] (PDF) showed that survey respondents who watched Fox News were more likely to believe the Rauf rumor and three related ones—and they believed them more strongly than non-Fox watchers.

Okay, so people gravitate toward information that confirms what they believe, and they select sources that deliver it. Same as it ever was, right? Maybe, but the problem is arguably growing more acute, given the way we now consume information—through the Facebook links of friends, or tweets that lack nuance or context, or “narrowcast [25]” and often highly ideological media that have relatively small, like-minded audiences. Those basic human survival skills of ours, says Michigan’s Arthur Lupia, are “not well-adapted to our information age.”


If you wanted to show how and why fact is ditched in favor of motivated reasoning, you could find no better test case than climate change. After all, it’s an issue where you have highly technical information on one hand and very strong beliefs on the other. And sure enough, one key predictor of whether you accept the science of global warming is whether you’re a Republican or a Democrat. The two groups have been growing more divided in their views about the topic, even as the science becomes more unequivocal.

So perhaps it should come as no surprise that more education doesn’t budge Republican views. On the contrary: In a 2008 Pew survey [26], for instance, only 19 percent of college-educated Republicans agreed that the planet is warming due to human actions, versus 31 percent of non-college educated Republicans. In other words, a higher education correlated with an increased likelihood of denying the science on the issue. Meanwhile, among Democrats and independents, more education correlated with greater acceptance of the science.

Other studies have shown a similar effect: Republicans who think they understand the global warming issue best are least concerned about it; and among Republicans and those with higher levels of distrust of science in general, learning more about the issue doesn’t increase one’s concern about it. What’s going on here? Well, according to Charles Taber and Milton Lodge of Stony Brook, one insidious aspect of motivated reasoning is that political sophisticates are prone to be more biased than those who know less about the issues. “People who have a dislike of some policy—for example, abortion—if they’re unsophisticated they can just reject it out of hand,” says Lodge. “But if they’re sophisticated, they can go one step further and start coming up with counterarguments.” These individuals are just as emotionally driven and biased as the rest of us, but they’re able to generate more and better reasons to explain why they’re right—and so their minds become harder to change.

That may be why the selectively quoted emails of Climategate were so quickly and easily seized upon by partisans as evidence of scandal. Cherry-picking is precisely the sort of behavior you would expect motivated reasoners to engage in to bolster their views—and whatever you may think about Climategate, the emails were a rich trove of new information upon which to impose one’s ideology.

Climategate had a substantial impact on public opinion, according to Anthony Leiserowitz [27], director of the Yale Project on Climate Change Communication [28]. It contributed to an overall drop in public concern about climate change and a significant loss of trust in scientists. But—as we should expect by now—these declines were concentrated among particular groups of Americans: Republicans, conservatives, and those with “individualistic” values. Liberals and those with “egalitarian” values didn’t lose much trust in climate science or scientists at all. “In some ways, Climategate was like a Rorschach test,” Leiserowitz says, “with different groups interpreting ambiguous facts in very different ways.”


So is there a case study of science denial that largely occupies the political left? Yes: the claim that childhood vaccines are causing an epidemic of autism. Its most famous proponents are an environmentalist (Robert F. Kennedy Jr. [29]) and numerous Hollywood celebrities (most notably Jenny McCarthy [30] and Jim Carrey). The Huffington Post gives a very large megaphone to denialists. And Seth Mnookin [31], author of the new book The Panic Virus [32], notes that if you want to find vaccine deniers, all you need to do is go hang out at Whole Foods.

Vaccine denial has all the hallmarks of a belief system that’s not amenable to refutation. Over the past decade, the assertion that childhood vaccines are driving autism rates has been undermined [33] by multiple epidemiological studies—as well as the simple fact that autism rates continue to rise, even though the alleged offending agent in vaccines (a mercury-based preservative called thimerosal) has long since been removed.

Yet the true believers persist—critiquing each new study that challenges their views, and even rallying to the defense of vaccine-autism researcher Andrew Wakefield, after his 1998 Lancet paper [34]—which originated the current vaccine scare—was retracted and he subsequently lost his license [35] (PDF) to practice medicine. But then, why should we be surprised? Vaccine deniers created their own partisan media, such as the website Age of Autism, that instantly blast out critiques and counterarguments whenever any new development casts further doubt on anti-vaccine views.

It all raises the question: Do left and right differ in any meaningful way when it comes to biases in processing information, or are we all equally susceptible?

There are some clear differences. Science denial today is considerably more prominent on the political right—once you survey climate and related environmental issues, anti-evolutionism, attacks on reproductive health science by the Christian right, and stem-cell and biomedical matters. More tellingly, anti-vaccine positions are virtually nonexistent among Democratic officeholders today—whereas anti-climate-science views are becoming monolithic among Republican elected officials.

Some researchers have suggested that there are psychological differences between the left and the right that might impact responses to new information—that conservatives are more rigid and authoritarian, and liberals more tolerant of ambiguity. Psychologist John Jost of New York University has further argued that conservatives are “system justifiers”: They engage in motivated reasoning to defend the status quo.

This is a contested area, however, because as soon as one tries to psychoanalyze inherent political differences, a battery of counterarguments emerges: What about dogmatic and militant communists? What about how the parties have differed through history? After all, the most canonical case of ideologically driven science denial is probably the rejection of genetics in the Soviet Union, where researchers disagreeing with the anti-Mendelian scientist (and Stalin stooge) Trofim Lysenko were executed, and genetics itself was denounced as a “bourgeois” science and officially banned.

The upshot: All we can currently bank on is the fact that we all have blinders in some situations. The question then becomes: What can be done to counteract human nature itself?


Given the power of our prior beliefs to skew how we respond to new information, one thing is becoming clear: If you want someone to accept new evidence, make sure to present it to them in a context that doesn’t trigger a defensive, emotional reaction.

This theory is gaining traction in part because of Kahan’s work at Yale. In one study [36], he and his colleagues packaged the basic science of climate change into fake newspaper articles bearing two very different headlines—”Scientific Panel Recommends Anti-Pollution Solution to Global Warming” and “Scientific Panel Recommends Nuclear Solution to Global Warming”—and then tested how citizens with different values responded. Sure enough, the latter framing made hierarchical individualists much more open to accepting the fact that humans are causing global warming. Kahan infers that the effect occurred because the science had been written into an alternative narrative that appealed to their pro-industry worldview.

You can follow the logic to its conclusion: Conservatives are more likely to embrace climate science if it comes to them via a business or religious leader, who can set the issue in the context of different values than those from which environmentalists or scientists often argue. Doing so is, effectively, to signal a détente in what Kahan has called a “culture war of fact.” In other words, paradoxically, you don’t lead with the facts in order to convince. You lead with the values—so as to give the facts a fighting chance.


Links:
[1] https://motherjones.com/files/lfestinger.pdf
[2] http://www.powells.com/biblio/61-9781617202803-1
[3] http://motherjones.com/environment/2011/04/history-of-climategate
[4] http://motherjones.com/environment/2011/04/field-guide-climate-change-skeptics
[5] http://www.ncbi.nlm.nih.gov/pubmed/2270237
[6] http://www-personal.umich.edu/~bnyhan/obama-muslim.pdf
[7] https://motherjones.com/files/descartes.pdf
[8] http://www-personal.umich.edu/~lupia/
[9] http://www.stonybrook.edu/polsci/ctaber/
[10] http://people.virginia.edu/~jdh6n/
[11] https://motherjones.com/files/emotional_dog_and_rational_tail.pdf
[12] http://synapse.princeton.edu/~sam/lord_ross_lepper79_JPSP_biased-assimilation-and-attitude-polarization.pdf
[13] http://psp.sagepub.com/content/23/6/636.abstract
[14] http://www.law.yale.edu/faculty/DKahan.htm
[15] https://motherjones.com/files/kahan_paper_cultural_cognition_of_scientific_consesus.pdf
[16] http://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=1095&context=fss_papers
[17] http://seagrant.oregonstate.edu/blogs/communicatingclimate/transcripts/Episode_10b_Dan_Kahan.html
[18] http://www-personal.umich.edu/~bnyhan/nyhan-reifler.pdf
[19] http://www.sociology.northwestern.edu/faculty/prasad/home.html
[20] http://sociology.buffalo.edu/documents/hoffmansocinquiryarticle_000.pdf
[21] http://www.factcheck.org/
[22] http://www.comm.ohio-state.edu/kgarrett/FactcheckMosqueRumors.pdf
[23] http://communication.stanford.edu/faculty/krosnick/
[24] http://www.comm.ohio-state.edu/kgarrett/MediaMosqueRumors.pdf
[25] http://en.wikipedia.org/wiki/Narrowcasting
[26] http://people-press.org/report/417/a-deeper-partisan-divide-over-global-warming
[27] http://environment.yale.edu/profile/leiserowitz/
[28] http://environment.yale.edu/climate/
[29] http://www.huffingtonpost.com/robert-f-kennedy-jr-and-david-kirby/vaccine-court-autism-deba_b_169673.html
[30] http://www.huffingtonpost.com/jenny-mccarthy/vaccine-autism-debate_b_806857.html
[31] http://sethmnookin.com/
[32] http://www.powells.com/biblio/1-9781439158647-0
[33] http://discovermagazine.com/2009/jun/06-why-does-vaccine-autism-controversy-live-on/article_print
[34] http://www.thelancet.com/journals/lancet/article/PIIS0140673697110960/fulltext
[35] http://www.gmc-uk.org/Wakefield_SPM_and_SANCTION.pdf_32595267.pdf
[36] http://www.scribd.com/doc/3446682/The-Second-National-Risk-and-Culture-Study-Making-Sense-of-and-Making-Progress-In-The-American-Culture-War-of-Fact

When Will My Computer Understand Me? (Science Daily)

June 10, 2013 — It’s not hard to tell the difference between the “charge” of a battery and criminal “charges.” But for computers, distinguishing between the various meanings of a word is difficult.

A “charge” can be a criminal charge, an accusation, a battery charge, or a person in your care. Some of those meanings are closer together, others further apart. (Credit: Image courtesy of University of Texas at Austin, Texas Advanced Computing Center)

For more than 50 years, linguists and computer scientists have tried to get computers to understand human language by programming semantics as software. Driven initially by efforts to translate Russian scientific texts during the Cold War (and more recently by the value of information retrieval and data analysis tools), these efforts have met with mixed success. IBM’s Jeopardy-winning Watson system and Google Translate are high-profile, successful applications of language technologies, but the humorous answers and mistranslations they sometimes produce are evidence of the continuing difficulty of the problem.

Our ability to easily distinguish between multiple word meanings is rooted in a lifetime of experience. Using the context in which a word is used, an intrinsic understanding of syntax and logic, and a sense of the speaker’s intention, we intuit what another person is telling us.

“In the past, people have tried to hand-code all of this knowledge,” explained Katrin Erk, a professor of linguistics at The University of Texas at Austin focusing on lexical semantics. “I think it’s fair to say that this hasn’t been successful. There are just too many little things that humans know.”

Other efforts have tried to use dictionary meanings to train computers to better understand language, but these attempts have also faced obstacles. Dictionaries have their own sense distinctions, which are crystal clear to the dictionary-maker but murky to the dictionary reader. Moreover, no two dictionaries provide the same set of meanings.

Watching annotators struggle to make sense of conflicting definitions led Erk to try a different tactic. Instead of hard-coding human logic or deciphering dictionaries, why not mine a vast body of texts (which are a reflection of human knowledge) and use the implicit connections between the words to create a weighted map of relationships — a dictionary without a dictionary?

“An intuition for me was that you could visualize the different meanings of a word as points in space,” she said. “You could think of them as sometimes far apart, like a battery charge and criminal charges, and sometimes close together, like criminal charges and accusations (“the newspaper published charges…”). The meaning of a word in a particular context is a point in this space. Then we don’t have to say how many senses a word has. Instead we say: ‘This use of the word is close to this usage in another sentence, but far away from the third use.'”

To create a model that can accurately recreate the intuitive ability to distinguish word meaning requires a lot of text and a lot of analytical horsepower.

“The lower end for this kind of research is a text collection of 100 million words,” she explained. “If you can give me a few billion words, I’d be much happier. But how can we process all of that information? That’s where supercomputers and Hadoop come in.”

Applying Computational Horsepower

Erk initially conducted her research on desktop computers, but around 2009, she began using the parallel computing systems at the Texas Advanced Computing Center (TACC). Access to a special Hadoop-optimized subsystem on TACC’s Longhorn supercomputer allowed Erk and her collaborators to expand the scope of their research. Hadoop is a software architecture well suited to text analysis and the data mining of unstructured data that can also take advantage of large computer clusters. Computational models that take weeks to run on a desktop computer can run in hours on Longhorn. This opened up new possibilities.

“In a simple case we count how often a word occurs in close proximity to other words. If you’re doing this with one billion words, do you have a couple of days to wait to do the computation? It’s no fun,” Erk said. “With Hadoop on Longhorn, we could get the kind of data that we need to do language processing much faster. That enabled us to use larger amounts of data and develop better models.”
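The “simple case” Erk describes, counting how often each word occurs close to other words, can be sketched in a few lines. The toy corpus and window size below are illustrative assumptions, not the study’s setup; a real run would sweep billions of words on a cluster:

```python
from collections import Counter, defaultdict

def cooccurrence_counts(sentences, window=2):
    """Count how often each word appears within `window` positions of another."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        tokens = sentence.lower().split()
        for i, word in enumerate(tokens):
            # Look at neighbors within the window, skipping the word itself.
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    counts[word][tokens[j]] += 1
    return counts

corpus = [
    "the battery charge ran low",
    "the criminal charge was dismissed",
]
counts = cooccurrence_counts(corpus)
# counts["charge"] now records neighbors like "battery" and "criminal",
# the raw material for the vector-space representation of meaning.
```

In a distributed setting this inner loop becomes the map step of a Hadoop job, with the reduce step summing counts across documents.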

Treating words in a relational, non-fixed way corresponds to emerging psychological notions of how the mind deals with language and concepts in general, according to Erk. Instead of rigid definitions, concepts have “fuzzy boundaries” where the meaning, value and limits of the idea can vary considerably according to the context or conditions. Erk takes this idea of language and recreates a model of it from hundreds of thousands of documents.

Say That Another Way

So how can we describe word meanings without a dictionary? One way is to use paraphrases. A good paraphrase is one that is “close to” the word meaning in that high-dimensional space that Erk described.

“We use a gigantic 10,000-dimensional space with all these different points for each word to predict paraphrases,” Erk explained. “If I give you a sentence such as, ‘This is a bright child,’ the model can tell you automatically what are good paraphrases (‘an intelligent child’) and what are bad paraphrases (‘a glaring child’). This is quite useful in language technology.”
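The idea of ranking paraphrases by distance in that space can be sketched with cosine similarity. The four-dimensional vectors below are invented stand-ins for the study’s 10,000-dimensional co-occurrence vectors, chosen only to show the mechanics:

```python
import math

def cosine(u, v):
    """Cosine similarity: close-to-1 means the words occur in similar contexts."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical context vectors; imagine the dimensions as co-occurrence
# strengths with "child", "light", "smart" and "stare".
vectors = {
    "bright":      [5.0, 4.0, 5.0, 1.0],
    "intelligent": [5.0, 0.5, 5.0, 0.5],
    "glaring":     [0.5, 4.0, 0.5, 5.0],
}

# Rank candidate paraphrases of "bright" by similarity in context space.
candidates = ["intelligent", "glaring"]
ranked = sorted(candidates,
                key=lambda w: cosine(vectors["bright"], vectors[w]),
                reverse=True)
# "intelligent" outranks "glaring" as a paraphrase of "bright".
```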

Language technology already helps millions of people perform practical and valuable tasks every day via web searches and question-answer systems, but it is poised for even more widespread applications.

Automatic information extraction is an application where Erk’s paraphrasing research may be critical. Say, for instance, you want to extract a list of diseases, their causes, symptoms and cures from millions of pages of medical information on the web.

“Researchers use slightly different formulations when they talk about diseases, so knowing good paraphrases would help,” Erk said.

In a paper to appear in ACM Transactions on Intelligent Systems and Technology, Erk and her collaborators illustrated they could achieve state-of-the-art results with their automatic paraphrasing approach.

Recently, Erk and Ray Mooney, a computer science professor also at The University of Texas at Austin, were awarded a grant from the Defense Advanced Research Projects Agency to combine Erk’s distributional, high dimensional space representation of word meanings with a method of determining the structure of sentences based on Markov logic networks.

“Language is messy,” said Mooney. “There is almost nothing that is true all the time. When we ask, ‘How similar is this sentence to another sentence?’ our system turns that question into a probabilistic theorem-proving task, and that task can be very computationally complex.”

In their paper, “Montague Meets Markov: Deep Semantics with Probabilistic Logical Form,” presented at the Second Joint Conference on Lexical and Computational Semantics (STARSEM2013) in June, Erk, Mooney and colleagues announced their results on a number of challenge problems from the field of artificial intelligence.

In one problem, Longhorn was given a sentence and had to infer whether another sentence was true based on the first. Using an ensemble of different sentence parsers, word meaning models and Markov logic implementations, Mooney and Erk’s system predicted the correct answer with 85% accuracy, which places it among the top results for this challenge. They continue to work to improve the system.

There is a common saying in the machine-learning world that goes: “There’s no data like more data.” While more data helps, taking advantage of that data is key.

“We want to get to a point where we don’t have to learn a computer language to communicate with a computer. We’ll just tell it what to do in natural language,” Mooney said. “We’re still a long way from having a computer that can understand language as well as a human being does, but we’ve made definite progress toward that goal.”

People Are Overly Confident in Their Own Knowledge, Despite Errors (Science Daily)

June 10, 2013 — Overprecision — excessive confidence in the accuracy of our beliefs — can have profound consequences, inflating investors’ valuation of their investments, leading physicians to gravitate too quickly to a diagnosis, even making people intolerant of dissenting views. Now, new research confirms that overprecision is a common and robust form of overconfidence driven, at least in part, by excessive certainty in the accuracy of our judgments.

New research confirms that overprecision is a common and robust form of overconfidence driven, at least in part, by excessive certainty in the accuracy of our judgments. (Credit: © pressmaster / Fotolia)

The research, conducted by researchers Albert Mannes of The Wharton School of the University of Pennsylvania and Don Moore of the Haas School of Business at the University of California, Berkeley, revealed that the more confident participants were about their estimates of an uncertain quantity, the less they adjusted their estimates in response to feedback about their accuracy and to the costs of being wrong.

“The findings suggest that people are too confident in what they know and underestimate what they don’t know,” says Mannes.

The new findings are published in Psychological Science, a journal of the Association for Psychological Science.

Research investigating overprecision typically involves asking people to come up with a 90% confidence interval around a numerical estimate — such as the length of the Nile River — but this doesn’t always faithfully reflect the judgments we have to make in everyday life. We know, for example, that arriving 15 minutes late for a business meeting is not the same as arriving 15 minutes early, and that we ought to err on the side of arriving early.

Mannes and Moore designed three studies to account for the asymmetric nature of many everyday judgments. Participants estimated the local high temperature on randomly selected days and their accuracy was rewarded in the form of lottery tickets toward a prize. For some trials, they earned tickets if their estimates were correct or close to the actual temperature (above or below); in other trials, they earned tickets for correct guesses or overestimates; and in some trials they earned tickets for correct guesses or underestimates.

The results showed that participants adjusted their estimates in the direction of the anticipated payoff after receiving feedback about their accuracy, just as Mannes and Moore expected.

But they didn’t adjust their estimates as much as they should have given their actual knowledge of local temperatures, suggesting that they were overly confident in their own powers of estimation.
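The logic of that adjustment can be made concrete with a small decision-theoretic sketch. The numbers and the 90th-percentile rule below are illustrative assumptions, not the paper’s model: when only overestimates pay off, the payoff-maximizing guess moves toward a high quantile of your subjective belief, and an overprecise judge, whose belief distribution is too narrow, shifts less than an honestly uncertain one:

```python
from statistics import NormalDist

def shifted_guess(mean, sd, quantile=0.90):
    """Payoff-maximizing guess when overshooting is rewarded: a high
    quantile of the judge's subjective belief distribution."""
    return NormalDist(mean, sd).inv_cdf(quantile)

# Both judges believe tomorrow's high is around 25 degrees, but the
# overprecise judge understates their own uncertainty (sd too small).
well_calibrated = shifted_guess(25, 4.0)   # honest uncertainty
overprecise     = shifted_guess(25, 1.5)   # overconfident

# Both shift upward, but the overprecise judge adjusts less in the
# direction of the payoff -- the pattern Mannes and Moore observed.
```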

Only when the researchers provided exaggerated feedback, with errors inflated by 2.5 times, did they manage to counteract participants’ tendency toward overprecision.

The new findings, which show that overprecision is a common and robust phenomenon, urge caution:

“People frequently cut things too close — arriving late, missing planes, bouncing checks, or falling off one of the many ‘cliffs’ that present themselves in daily life,” observe Mannes and Moore.

“These studies tell us that you shouldn’t be too certain about what’s going to happen, especially when being wrong could be dangerous. You should plan to protect yourself in case you aren’t as right as you think you are.”

Journal Reference:

  1. A. E. Mannes, D. A. Moore. A Behavioral Demonstration of Overconfidence in Judgment. Psychological Science, 2013; DOI: 10.1177/0956797612470700

You’re So Vain: Study Links Social Media Use and Narcissism (Science Daily)

June 11, 2013 — Facebook is a mirror and Twitter is a megaphone, according to a new University of Michigan study exploring how social media reflect and amplify the culture’s growing levels of narcissism.

New research shows that narcissistic college students and their adult counterparts use social media in different ways to boost their egos and control others’ perceptions of them. (Credit: © mtkang / Fotolia)

The study, published online in Computers in Human Behavior, was conducted by U-M researchers Elliot Panek, Yioryos Nardis and Sara Konrath.

“Among young adult college students, we found that those who scored higher in certain types of narcissism posted more often on Twitter,” said Panek, who recently received his doctorate in communication studies from U-M and will join Drexel University this fall as a visiting fellow.

“But among middle-aged adults from the general population, narcissists posted more frequent status updates on Facebook.”

According to Panek, Facebook serves narcissistic adults as a mirror.

“It’s about curating your own image, how you are seen, and also checking on how others respond to this image,” he said. “Middle-aged adults usually have already formed their social selves, and they use social media to gain approval from those who are already in their social circles.”

For narcissistic college students, the social media tool of choice is the megaphone of Twitter.

“Young people may overvalue the importance of their own opinions,” Panek said. “Through Twitter, they’re trying to broaden their social circles and broadcast their views about a wide range of topics and issues.”

The researchers examined whether narcissism was related to the amount of daily Facebook and Twitter posting and to the amount of time spent on each social media site, including reading the posts and comments of others.

For one part of the study, the researchers recruited 486 college undergraduates. Three-quarters were female and the median age was 19. Participants answered questions about the extent of their social media use, and also took a personality assessment measuring different aspects of narcissism, including exhibitionism, exploitativeness, superiority, authority and self-sufficiency.

For the second part of the study, the researchers asked 93 adults, mostly white females, with an average age of 35, to complete an online survey.

According to Panek, the study shows that narcissistic college students and their adult counterparts use social media in different ways to boost their egos and control others’ perceptions of them.

“It’s important to analyze how often social media users actually post updates on sites, along with how much time they spend reading the posts and comments of others,” he said.

The researchers were unable to determine whether narcissism leads to increased use of social media, or whether social media use promotes narcissism, or whether some other factors explain the relationship. But the study is among the first to compare the relationship between narcissism and different kinds of social media in different age groups.

Funding for the study comes in part from The Character Project, sponsored by Wake Forest University via the John Templeton Foundation.

Journal Reference:

  1. Elliot T. Panek, Yioryos Nardis, Sara Konrath. Mirror or Megaphone?: How relationships between narcissism and social networking site use differ on Facebook and Twitter. Computers in Human Behavior, 2013; 29 (5): 2004. DOI: 10.1016/j.chb.2013.04.012

Chimpanzees Have Five Universal Personality Dimensions (Science Daily)

June 3, 2013 — While psychologists have long debated the core personality dimensions that define humanity, primate researchers have been working to uncover the defining personality traits for humankind’s closest living relative, the chimpanzee. New research, published in the June 3 issue of American Journal of Primatology, provides strong support for the universal existence of five personality dimensions in chimpanzees: reactivity/undependability, dominance, openness, extraversion and agreeableness, with a possible sixth factor, methodical, needing further investigation.

Chimpanzee. New research provides strong support for the universal existence of five personality dimensions in chimpanzees: reactivity/undependability, dominance, openness, extraversion and agreeableness with a possible sixth factor, methodical, needing further investigation. (Credit: © anekoho / Fotolia)

“Understanding chimpanzee personality has important theoretical and practical implications,” explained lead author Hani Freeman, postdoctoral fellow with the Lester E. Fisher Center for the Study and Conservation of Apes at Lincoln Park Zoo. “From an academic standpoint, the findings can inform investigations into the evolution of personality. From a practical standpoint, caretakers of chimpanzees living in zoos or elsewhere can now tailor individualized care based on each animal’s personality thereby improving animal welfare.”

The study of chimpanzee personality is not novel; however, according to the authors, previous instruments designed to measure personality left a number of vital questions unanswered.

“Some personality scales used for chimpanzees were originally designed for another species. These ‘top-down’ approaches are susceptible to including traits that are not relevant for chimps, or fail to include all the relevant aspects of chimpanzee personality,” explained Freeman. “Another tactic, called a ‘bottom-up’ approach, derives traits specifically for chimpanzees without taking into account information from previous scales. This approach also has limitations as it impedes comparisons with findings in other studies and other species, which is essential if you want to use research on chimpanzees to better understand the evolution of human personality traits.”

To address the limitations of each approach and gain a better understanding of chimpanzee personality, the authors developed a new personality rating scale that incorporated the strengths of both types of scales. This new scale consisted of 41 behavioral descriptors including boldness, jealousy, friendliness and stinginess amongst others. Seventeen raters who work closely and directly with chimpanzees used the scale to assess 99 chimpanzees in their care at the Michale E. Keeling Center for Comparative Medicine and Research, UT MD Anderson Cancer Center in Bastrop, Texas.

The chimpanzees rated were aged 8 to 48, a majority had been captive born and mother-raised, and all had lived at the facility for at least two years.

To validate their findings, the researchers used two years’ worth of behavioral data collected on the chimpanzees. As the authors expected, the findings showed the personality ratings were associated with differences in how the chimpanzees behaved. The researchers also showed the raters tended to agree in their independent judgments of chimpanzees’ personalities, suggesting the raters were not merely projecting traits onto the chimpanzees.
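Agreement between independent raters, of the kind the authors checked, can be illustrated with one simple index, the mean pairwise Pearson correlation between raters. The ratings below are invented for illustration and are not the study’s data:

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical boldness ratings of five chimpanzees by three raters.
ratings = [
    [6, 2, 5, 3, 7],  # rater A
    [5, 3, 5, 2, 6],  # rater B
    [7, 1, 4, 3, 7],  # rater C
]

pairwise = [pearson(a, b) for a, b in combinations(ratings, 2)]
agreement = sum(pairwise) / len(pairwise)
# A high mean correlation suggests raters rank the animals similarly,
# rather than each projecting traits onto them.
```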

Researchers suggest that one benefit to having the chimpanzees rated on the five core personality dimensions is that this information can now be used to make predictions that will help in their management, such as how individual chimpanzees will behave in various social situations. This type of information will help zoos better anticipate certain behaviors from various individuals, and will assist them in providing individualized care.

Journal Reference:

  1. Hani D. Freeman, Sarah F. Brosnan, Lydia M. Hopper, Susan P. Lambeth, Steven J. Schapiro, Samuel D. Gosling. Developing a Comprehensive and Comparative Questionnaire for Measuring Personality in Chimpanzees Using a Simultaneous Top-Down/Bottom-Up Design. American Journal of Primatology, 2013; DOI: 10.1002/ajp.22168

‘Belief in Science’ Increases in Stressful Situations (Science Daily)

June 5, 2013 — A faith in the explanatory and revealing power of science increases in the face of stress or anxiety, a study by Oxford University psychologists suggests.

The researchers argue that a ‘belief in science’ may help non-religious people deal with adversity by offering comfort and reassurance, as has been reported previously for religious belief.

‘We found that being in a more stressful or anxiety-inducing situation increased participants’ “belief in science”,’ says Dr Miguel Farias, who led the study in the Department of Experimental Psychology at Oxford University. ‘This belief in science we looked at says nothing of the legitimacy of science itself. Rather we were interested in the values individuals hold about science.’

He explains: ‘While most people accept science as a reliable source of knowledge about the world, some may hold science as a superior method for gathering knowledge, the only way to explain the world, or as having some unique and fundamental value in itself. This is a view of science that some atheists endorse.’

As well as stressing that investigating a belief in science carries no judgement on the value of science as a method, the researchers point out that drawing a parallel between the psychological benefits of religious faith and belief in science doesn’t necessarily mean that scientific practice and religion are also similar in their basis.

Instead, the researchers suggest that their findings may highlight a basic human motivation to believe.

‘It’s not just believing in God that is important for gaining these psychological benefits, it is belief in general,’ says Dr Farias. ‘It may be that we as humans are just prone to have belief, and even atheists will hold non-supernatural beliefs that are reassuring and comforting.’

The researchers report their findings in the Journal of Experimental Social Psychology.

There is evidence from previous studies that suggests religious belief helps individuals cope with stress and anxiety. The Oxford University group wondered if this was specific to religious belief, or was a more general function of holding belief.

The researchers developed a scale measuring a ‘belief in science’ in which people are asked how much they agree or disagree with a series of 10 statements, including:

  • ‘Science tells us everything there is to know about what reality consists of.’
  • ‘All the tasks human beings face are soluble by science.’
  • ‘The scientific method is the only reliable path to knowledge.’

This scale was used first with a group of 100 rowers, of whom 52 were about to compete in a rowing regatta and the other 48 were about to do a normal training session. Those about to row in competition would be expected to be at a higher stress level.

Those who were competing in the regatta returned scores showing greater belief in science than those in the training group. The difference was statistically significant.
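A group comparison like this is typically tested with a two-sample t statistic. The scores below are synthetic stand-ins for the rowers’ scale responses (the article does not publish the data), so the sketch shows only the mechanics of such a test:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical belief-in-science scale scores (1-5 agreement averages).
competition = [4.2, 4.5, 3.9, 4.8, 4.4, 4.1, 4.6, 4.3]  # pre-regatta group
training    = [3.6, 3.9, 3.4, 4.0, 3.7, 3.5, 3.8, 3.3]  # normal session group

t = welch_t(competition, training)
# A t statistic well above ~2 would, with adequate degrees of freedom,
# indicate a statistically significant group difference.
```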

Both groups of rowers reported a low degree of commitment to religion and as expected, those rowers about to compete did say they were experiencing more stress.

In a second experiment, a different set of 60 people were randomly assigned to two groups. One group was asked to write about the feelings aroused by thinking about their own death, while the other was asked to write about dental pain. A number of studies have used an exercise on thinking about your own death to induce a certain amount of ‘existential anxiety’.

The participants who had been asked to think about their own death scored higher in the belief in science scale.

The researchers say their findings are consistent with the idea that belief in science increases when secular individuals are placed in threatening situations. They go on to suggest that a belief in science may help non-religious people deal with adverse conditions.

Dr Farias acknowledges however that they have only shown this in one direction — that stress or anxiety increases belief in science. They suggest other experiments should be done to examine whether affirming a belief in science might then reduce subsequent experience of stress or anxiety.

Journal Reference:

  1. Miguel Farias, Anna-Kaisa Newheiser, Guy Kahane, Zoe de Toledo. Scientific faith: Belief in science increases in the face of stress and existential anxiety. Journal of Experimental Social Psychology, 2013; DOI: 10.1016/j.jesp.2013.05.008

The woman who shrank the human brain (O Globo)

Suzana Herculano is the first Brazilian to speak at the prestigious TED conference

She will discuss the brain of 86 billion neurons (not 100 billion, as was long believed) and how humans came to differ from other primates

Published: 24/05/13, 7:00 a.m.; Updated: 24/05/13, 11:41 a.m.

Suzana Herculano-Houzel, professor at the Institute of Biomedical Sciences of UFRJ. Photo: Guito Moreto

A neuroscientist at UFRJ, Suzana Herculano-Houzel is the first Brazilian to take part in TED (Technology, Entertainment and Design), the prestigious conference series that brings together leading names from a wide range of fields to debate new ideas. She will speak on June 12, under the theme “Listen to nature,” and will highlight her singular discoveries about the human brain.

What will you talk about at TED?

I will talk about the human brain and show that it is not a special brain, an exception to the rule. Our research revealed that it is simply a large primate brain. What is remarkable is that we came to have an enormous brain, of a size no other primate has, not even the largest, because we invented the cooking of food and, with that, came to have an enormous number of neurons.

Was cooking fundamental to our becoming human?

Yes, we circumvented the energetic limitation imposed by a raw diet. And the neat, ironic implication is that, with this, we freed up the brain’s time for other things (besides finding food), such as creating agriculture, civilizations, the refrigerator and electricity. Up to the point where getting cooked food and excess calories became so easy that we now have the opposite problem: we are eating too much. That is why we are back to eating salads.

If we fed orangutans and gorillas cooked food, would they become as intelligent as we are?

Yes, because they would no longer be limited by the reduced number of calories they can get from raw food. Of course, we made a cultural innovation when we invented cooking; there is a difference between giving cooked food to an animal and the animal developing cooking as a cultural practice. Even so, if they had access to cooked food at every meal, in 200,000 or 300,000 years they would have larger brains. With the diet they have today, a larger brain is not possible given their large bodies. It is one or the other.
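Herculano-Houzel’s energetic argument can be put in back-of-the-envelope numbers. Her published estimates put the running cost of neurons at roughly 6 kcal per billion neurons per day; that figure and the gorilla neuron count below are approximations, so treat this as a rough illustration rather than her exact calculation:

```python
# Approximate metabolic cost of neurons, after Herculano-Houzel's estimates
# (~6 kcal per billion neurons per day); all figures are rough illustrations.
KCAL_PER_BILLION_NEURONS_PER_DAY = 6.0

def daily_brain_cost_kcal(billions_of_neurons):
    return billions_of_neurons * KCAL_PER_BILLION_NEURONS_PER_DAY

human = daily_brain_cost_kcal(86)    # ~516 kcal/day for 86 billion neurons
gorilla = daily_brain_cost_kcal(33)  # ~200 kcal/day for ~33 billion neurons

# A raw diet caps how many calories a great ape can gather per day;
# cooking raised the energy budget enough to pay for tens of billions
# of extra neurons on top of a large body.
```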

Are we special?

We are not special at all. We are just a primate that circumvented the energetic rules and managed to pack more neurons into the brain in a way no other animal has. That is why we study the other animals, and not the other way around.

Do myths about the brain persist, such as the 100 billion neurons that your studies showed are in fact 86 billion?

Yes, they live on, even within neuroscience. Our work is already widely cited as the reference, and things are changing. The best part is that this is thanks to homegrown Brazilian science, which I find wonderful. But we can see it is a process; many people still insist on the old number.

The new US manual for diagnosing mental disorders (a reference worldwide, including for the WHO) was released last week amid controversy. Experts argue that it lists so many disorders that practically no room is left for normality. What is your view?

I think this discussion is very necessary, precisely so that we can recognize what counts as variation around the normal and which extremes are genuinely problematic and pathological. So the debate is important, welcome at any time. But I also think a great deal of wrong, sensationalist information is circulating, above all about attention deficit. The statistics vary widely from country to country, sometimes because the number of physicians who recognize a child as having the disorder varies. And I think there is still an enormous problem, an enormous fear of the stereotype of mental illness. To this day there is a wild resistance to seeing a psychiatrist. I believe, on the contrary, that we gain a great deal by recognizing that disorders exist and that they can be treated.

Is there still much stigma?

The biggest problem nowadays is that having a brain disorder is seen as shameful. Notice that I am not even talking about mental illness: needing medication for the brain is considered terrible. Yet we have so much to gain by recognizing problems and making diagnoses. The brain is so complex, with so much that can go wrong, that the astonishing thing is that it does not go wrong in everyone, all the time. So I find it normal that a good part of the population has some problem; it does not surprise me in the least. And once the problem is recognized and diagnosed, there is the option of treating it. If a treatment is available, why not use it?

O presidente dos EUA, Barack Obama, recentemente anunciou uma inédita iniciativa de reunir pesquisadores dos mais diversos centros para estudar exclusivamente o cérebro. O que podemos esperar de tamanho esforço científico?

Não só o cérebro, mas o cérebro em atividade. Obama quer ir além do que já tinham feito — estudar a função de diferentes áreas — e entender como se conectam, como falam umas com as outras, ter ideia desse funcionamento integrado, dessa interação. Essa é uma das grandes lacunas do conhecimento: entender como as várias partes do cérebro funcionam ao mesmo tempo. Não sabemos como o cérebro funciona como um todo; é uma das fronteiras finais do conhecimento.

We don't know how the brain works?

As a whole, no. We know what the parts do, but we do not know how the conversation between them takes place. We do not know the origin of consciousness, of the feeling of "I am here now." Which areas are fundamental for that? That is the kind of knowledge being sought: the brain working live and in color, in real time.

So the goal is not to study diseases?

No, the main goal is to study consciousness and memory, to understand how the brain brings together emotion and logic, things that are the product of the coordinated action of several parts. Of course, all this knowledge may turn out to have implications for Alzheimer's and other diseases. But in fact, talking about diseases is a framing used in the program's publicity so that the public can relate to it better. There is this prejudice that science is only worthwhile when it solves a disease.

Read more on this subject at http://oglobo.globo.com/ciencia/a-mulher-que-encolheu-cerebro-humano-8482825 (O Globo)

Clouds in the Head: New Model of Brain’s Thought Processes (Science Daily)

May 21, 2013 — A new model of the brain’s thought processes explains the apparently chaotic activity patterns of individual neurons. They do not correspond to a simple stimulus/response linkage, but arise from the networking of different neural circuits. Scientists funded by the Swiss National Science Foundation (SNSF) propose that the field of brain research should expand its focus.


Many brain researchers cannot see the forest for the trees. When they use electrodes to record the activity patterns of individual neurons, the patterns often appear chaotic and difficult to interpret. “But when you zoom out from looking at individual cells, and observe a large number of neurons instead, their global activity is very informative,” says Mattia Rigotti, a scientist at Columbia University and New York University who is supported by the SNSF and the Janggen-Pöhn-Stiftung. Publishing in Nature together with colleagues from the United States, he has shown that it is precisely these difficult-to-interpret patterns that are especially important for complex brain functions.

What goes on in the heads of monkeys

The researchers focussed their attention on the activity patterns of 237 neurons that had been recorded some years previously using electrodes implanted in the frontal lobes of two rhesus monkeys. At that time, the monkeys had been taught to recognise images of different objects on a screen. Around one third of the observed neurons demonstrated activity that Rigotti describes as “mixed selectivity.” A mixed-selective neuron does not always respond to the same stimulus (the flowers or the sailing boat on the screen) in the same way. Rather, its response differs because it also takes account of the activity of other neurons. The cell adapts its response according to what else is going on in the monkey’s brain.
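
The computational point of mixed selectivity can be sketched with a toy model (my illustration of the general idea, not the paper's analysis; the ±1 coding and the conjunction task are assumptions). Two "pure" neurons respond to a stimulus and a context separately; a mixed-selective neuron responds to their conjunction, which is what lets a downstream readout solve a task that no weighted sum of the pure responses can:

```python
# Four task conditions: (stimulus, context), each coded as +1 or -1.
conditions = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def pure_stim(s, c):  return s      # responds to the stimulus only
def pure_ctx(s, c):   return c      # responds to the context only
def mixed(s, c):      return s * c  # nonlinear: responds to the conjunction

# Target behavior: respond only when stimulus and context agree (an XOR-like rule).
target = [1 if s == c else -1 for s, c in conditions]

# No weighted sum w1*s + w2*c of the two pure neurons can reproduce the target:
# summed over the conditions (-1,-1) and (1,1) it gives zero, and summed over
# (-1,1) and (1,-1) it also gives zero, yet the targets of those pairs differ.
# The mixed-selective neuron, by contrast, reads the rule out directly:
readout = [mixed(s, c) for s, c in conditions]
print(readout == target)  # True
```

The same logic generalizes: nonlinearly mixed responses expand the space of condition patterns a simple linear readout can separate.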

Chaotic patterns revealed in context

Just as individual computers are networked to pool processing and storage capacity in cloud computing, the networking of neural circuits plays a key role in the complex cognitive processes that take place in the prefrontal cortex. The denser the network in the brain, in other words the greater the proportion of mixed selectivity in the activity patterns of the neurons, the better the monkeys were able to recall the images on the screen, as Rigotti demonstrated in his analysis. Given that the brain and cognitive capabilities of rhesus monkeys are similar to those of humans, mixed-selective neurons should also be important in our own brains. For him, this is reason enough for brain research to stop being satisfied with just the simple activity patterns, and to also consider the apparently chaotic patterns that reveal themselves only in context.

Journal Reference:

  1. Mattia Rigotti, Omri Barak, Melissa R. Warden, Xiao-Jing Wang, Nathaniel D. Daw, Earl K. Miller, Stefano Fusi. The importance of mixed selectivity in complex cognitive tasks. Nature, 2013; DOI: 10.1038/nature12160

Playing for All Kinds of Possibilities (N.Y.Times)

Buckets of Blickets: Children and Logic: A game developed by researchers at the University of California, Berkeley, aims to show how imaginative play in children may influence the development of abstract thought.

By DAVID DOBBS

Published: April 22, 2013

When it comes to play, humans don’t play around.

Alison Gopnik and the Gopnik Lab/University of California, Berkeley. Esther and Benny, both 4, play Blickets with Sophie Bridgers in a lab at the University of California, Berkeley. Children, lacking prior biases, excel in the game, based on associations, but adults flunk it.

Other species play, but none play for as much of their lives as humans do, or as imaginatively, or with as much protection from the family circle. Human children are unique in using play to explore hypothetical situations rather than to rehearse actual challenges they’ll face later. Kittens may pretend to be cats fighting, but they will not pretend to be children; children, by contrast, will readily pretend to be cats or kittens — and then to be Hannah Montana, followed by Spider-Man saving the day.

And in doing so, they develop some of humanity’s most consequential faculties. They learn the art, pleasure and power of hypothesis — of imagining new possibilities. And serious students of play believe that this helps make the species great.

The idea that play contributes to human success goes back at least a century. But in the last 25 years or so, researchers like Elizabeth S. Spelke, Brian Sutton-Smith, Jaak Panksepp and Alison Gopnik have developed this notion more richly and tied it more closely to both neuroscience and human evolution. They see play as essential not just to individual development, but to humanity’s unusual ability to inhabit, exploit and change the environment.

Dr. Gopnik, author of “The Scientist in the Crib” and “The Philosophical Baby,” and a professor of psychology at the University of California, Berkeley, has been studying the ways that children learn to assess their environment through play. Lately she has focused on the distinction between “exploring” new environments and “exploiting” them. When we’re quite young, we are more willing to explore, she finds; adults are more inclined to exploit.

To exploit, one leans heavily on lessons (and often unconscious rules) learned earlier — so-called prior biases. These biases are useful to adults because they save time and reduce error: By going to the restaurant you know is good, instead of the new place across town, you increase the chance that you’ll enjoy the evening.

Most adults are slow to set such biases aside; young children fling them away like bad fruit.

Dr. Gopnik shows this brilliantly with a game she invented with the psychologist David Sobel (her student, now a professor at Brown). In the game, which has the fetching name Blickets, players try to figure out what it is that makes an otherwise undistinguished clay figure a blicket. In some scenarios you can win even if you’re applying a prior bias. In others you can’t.

Last summer I joined Dr. Gopnik behind a wall of one-way glass to watch her lab manager, Sophie Bridgers, play the game with an extremely alert 4-year-old, Esther.

Seated at a child-size table, Esther leaned forward on her elbows to watch as Ms. Bridgers brought out a small bin of clay shapes and told her that some of them were blickets but most were not.

“You cannot tell which ones are blickets by looking at them. But the ones that are blickets have blicketness inside. And luckily,” Ms. Bridgers went on, holding up a box with a red plastic top, “I have my machine. Blicketness makes my machine turn on and play music.”

It’s a ruse, of course. The box responds not to the clay shapes but to a switch under the table controlled by Ms. Bridgers.

Now came the challenge. The game can be played by either of two rules, called “and” and “or.” The “or” version is easier: When a blicket is placed atop the machine, it will light the machine up whether placed there by itself or with other pieces. It is either a blicket or it isn’t; it doesn’t depend on the presence of any other object.

In the “and” trial, however, a blicket reveals its blicketness only if both it and another blicket are placed on the machine; and it will light up the box even if it and the other blicket are accompanied by a non-blicket. It can be harder than it sounds, and this is the game that Esther played.

First, Ms. Bridgers put each of three clay shapes on the box individually — rectangle, then triangle, then a bridge. None activated the machine. Then she put them on the box in three successive combinations.

1. Rectangle and triangle: No response.

2. Rectangle and bridge: Machine lighted up and played a tune!

3. Triangle and bridge: No response.
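
Esther's inference can be sketched as a search over hypotheses (a toy model of the logic, not Dr. Gopnik's actual analysis): under the "and" rule, a combination activates the machine only when it contains at least two blickets, and exactly one candidate set of blickets is consistent with all six demonstrations above.

```python
from itertools import combinations

SHAPES = ["rectangle", "triangle", "bridge"]

# The six demonstrations Esther saw: (pieces placed on the box, did it activate?)
observations = [
    ({"rectangle"}, False),
    ({"triangle"}, False),
    ({"bridge"}, False),
    ({"rectangle", "triangle"}, False),
    ({"rectangle", "bridge"}, True),
    ({"triangle", "bridge"}, False),
]

def activates(pieces, blickets):
    """The 'and' rule: the machine lights up iff at least two blickets are on it."""
    return len(pieces & blickets) >= 2

# Test every candidate set of blickets against all observations.
consistent = [
    set(combo)
    for r in range(len(SHAPES) + 1)
    for combo in combinations(SHAPES, r)
    if all(activates(pieces, set(combo)) == lit for pieces, lit in observations)
]

print(consistent)  # the single surviving hypothesis: rectangle and bridge are blickets
```

The search confirms the rule Esther discerned in seconds: only {rectangle, bridge} explains why that one pairing, and nothing else, lit the machine.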

Ms. Bridgers then picked up each piece in turn and asked Esther whether it was a blicket. I had been indulging my adult (and journalistic) prior bias for recorded observation by filling several pages with notes and diagrams, and I started flipping frantically through my notebook.

I was still looking when Esther, having given maybe three seconds’ thought to the matter, correctly identified all three. The rectangle? “A blicket,” she said. Triangle? A shake of the head: No. Bridge? “A blicket.” A 4-year-old had instantly discerned a rule that I recognized only after Dr. Gopnik explained it to me.

Esther, along with most other 4- and 5-year-olds tested, bested not just me but most of 88 California undergraduates who took the “and” test. We educated grown-ups failed because our prior biases dictated that we play the game by the more common and efficient “or” rule.

“Or” rules apply far more often in actual life, when a thing’s essence seldom depends on another object’s presence. An arrow’s utility may depend on a bow, but its identity as an arrow does not. Since the “or” rule is more likely correct and simpler to use, I grabbed it and clung.

Esther, however, quickly ditched the “or” rule and hit upon the far less likely “and” rule. Such low-probability hypotheses often fail. But children, like adventurous scientists in a lab, will try these wild ideas anyway, because even if they fail, they often produce interesting results.

Esther and her twin brother, Benny (who played another version of the game), generated low-probability hypotheses as fast as I could breathe. “Maybe if you turn it over and put it on the other end!” “Let’s put all three on!” They were hypothesis machines. Their mother, Wendy Wolfson (who is a science writer), told me they’re like this all the time. “It’s like living with a pair of especially inquisitive otters.”

Alas, Dr. Gopnik said, this trait peaks around 4 or 5. After that, we gradually take less interest in seeing what happens and more in getting it right.

Yet this playlike spirit of speculation and exploration does stay with us, both as individuals and as a species. Studies suggest that free, self-directed play in safe environments enhances resilience, creativity, flexibility, social understanding, emotional and cognitive control, and resistance to stress, depression and anxiety. And we continue to explore as adults, even if not so freely. That’s how we got to the Internet, the moon, and Dr. Gopnik’s lab.

Finally, in the long game of evolution, Dr. Gopnik and some of her fellow scientists hypothesize that humans’ extended period of imaginative play, along with the traits it develops, has helped select for the big brain and rich neural networks that characterize Homo sapiens. This may strike you either as a low-probability or a high-probability hypothesis. But it certainly seems worth playing with.

The Tangle of the Sexes (N.Y.Times)

GRAY MATTER

By BOBBI CAROTHERS and HARRY REIS

Published: April 20, 2013

MEN and women are so different they might as well be from separate planets, or so says the theory of the sexes famously explicated in John Gray’s 1992 best seller, “Men Are From Mars, Women Are From Venus.”


Indeed, sex differences are a perennially popular topic in behavioral science; since 2000, scientific journals have published more than 30,000 articles on them.

That men and women differ in certain respects is unassailable. Unfortunately, the continuing belief in “categorical differences” — men are aggressive, women are caring — reinforces traditional stereotypes by treating certain behaviors as immutable. And, it turns out, this belief is based on a scientifically indefensible model of human behavior.

As the psychologist Cordelia Fine explains in her book “Delusions of Gender,” the influence of one kind of categorical thinking, neurosexism — justifying differential treatment by citing differences in neural anatomy or function — spills over to educational and employment disparities, family relations and arguments about same-sex institutions.

Consider a marital spat in which she accuses him of being emotionally withdrawn while he indicts her for being demanding. In a gender-categorical world, the argument can quickly devolve to “You’re acting like a typical (man/woman)!” Asking a partner to change, in this binary world, is expecting him or her to go against the natural tendency of his or her category — a very tall order.

The alternative, a dimensional perspective, ascribes behavior to individuals, as one of their various personal qualities. It is much easier to imagine how change might take place.

But what of all those published studies, many of which claim to find differences between the sexes? In our research, published recently in The Journal of Personality and Social Psychology, we shed an empirical light on this question by using a method called taxometric analysis.

This method asks whether data from two groups are likely to be taxonic — a classification that distinguishes one group from another in a nonarbitrary, fundamental manner, called a “taxon” — or whether they are more likely to be dimensional, with individuals’ scores dispersed along a single continuum.

The existence of a taxon implies a fundamental distinction, akin to the difference between species. As the clinical psychologist Paul Meehl famously put it, “There are gophers, there are chipmunks, but there are no gophmunks.”

A dimensional model, in contrast, indicates that men and women come from the same general pool, differing relatively, trait by trait, much as any two individuals from the same group might differ.
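
The intuition behind one common taxometric procedure, MAMBAC ("mean above minus below a cut"), can be sketched in a toy simulation (my illustration of the method's core idea, not the authors' actual analysis; the group separations and sample sizes are assumptions). Cases are sorted on one indicator, and for a series of cut points one computes the mean of a second indicator above the cut minus the mean below it. Taxonic data produce a curve that peaks near the group boundary; dimensional data produce a dish-shaped curve that is lowest in the middle:

```python
import random

def mambac_curve(xy, n_cuts=9):
    """Sort cases on x; for each cut, return mean(y above) - mean(y below)."""
    ys = [y for _, y in sorted(xy)]
    n = len(ys)
    curve = []
    for k in range(1, n_cuts + 1):
        cut = k * n // (n_cuts + 1)
        curve.append(sum(ys[cut:]) / (n - cut) - sum(ys[:cut]) / cut)
    return curve

random.seed(0)
n = 5000

# Taxonic data: two latent groups whose means differ by ~3 SD on both indicators.
taxonic = [(g * 3 + random.gauss(0, 1), g * 3 + random.gauss(0, 1))
           for g in (random.choice([0, 1]) for _ in range(n))]

# Dimensional data: a single continuous trait drives both indicators.
dimensional = []
for _ in range(n):
    t = random.gauss(0, 1)
    dimensional.append((t + random.gauss(0, 1), t + random.gauss(0, 1)))

tax = mambac_curve(taxonic)
dim = mambac_curve(dimensional)

print(tax[4] > tax[0] and tax[4] > tax[8])  # taxon: curve peaked in the middle
print(dim[4] < dim[0] and dim[4] < dim[8])  # dimension: curve lowest in the middle
```

In the study itself a battery of such procedures was applied to each attribute; the point of the sketch is simply that the shape of the curve, not the mere existence of a mean difference, is what distinguishes a taxon from a dimension.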

We applied such techniques to the data from 13 studies, conducted earlier by other researchers. In each, significant differences had been found. We then looked more closely at these differences to ask whether they were more likely to be of degree (a dimension) or kind (a taxon).

The studies looked at diverse attributes, including sexual attitudes and behavior, desired mate characteristics, interest in and ease of learning science, and intimacy, empathy, social support and caregiving in relationships.

Across analyses spanning 122 attributes from more than 13,000 individuals, one conclusion stood out: instead of dividing into two groups, men and women overlapped considerably on attributes like the frequency of science-related activities, interest in casual sex, or the allure of a potential mate’s virginity.

Even stereotypical traits, like assertiveness or valuing close friendships, fell along a continuum. In other words, we found little or no evidence of categorical distinctions based on sex.

To some, this is no surprise; the psychologist Janet Hyde has argued repeatedly that men and women are far more similar than different. Yet to many others, the idea that men and women are fundamentally different beings persists. The Mars/Venus binary aside, it is all too easy to reify observed behavioral differences by associating them with the categories of the people doing the behaving, be it their sex, race or occupation.

It is important to keep in mind what we did not study. We looked only at psychological characteristics, qualities often associated with the behavior of women and men. We did not look at abilities or skills, and we did not directly observe behavior.

Just to be safe, we repeated our analyses on several dimensions where we did expect categorical differences: physical size, athletic ability and sex-stereotyped hobbies like playing video games and scrapbooking. On these we did find evidence for categories based on sex.

The Mars/Venus view describes a world that does not exist, at least here on earth. Our work shows that sex does not define qualitatively distinct categories of psychological characteristics. We need to look at individuals as individuals.

Bobbi Carothers is a senior data analyst at Washington University in St. Louis. Harry Reis is a professor of psychology at the University of Rochester.