Category archive: cognition

>The Science of Why We Don’t Believe Science (Mother Jones)


Illustration: Jonathon Rosen
How our brains fool us on climate, creationism, and the vaccine-autism link.

— By Chris Mooney
Mon Apr. 18, 2011 3:00 AM PDT

“A MAN WITH A CONVICTION is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point.” So wrote the celebrated Stanford University psychologist Leon Festinger, in a passage that might have been referring to climate change denial—the persistent rejection, on the part of so many Americans today, of what we know about global warming and its human causes. But it was too early for that—this was the 1950s—and Festinger was actually describing a famous case study in psychology.

Festinger and several of his colleagues had infiltrated the Seekers, a small Chicago-area cult whose members thought they were communicating with aliens—including one, “Sananda,” who they believed was the astral incarnation of Jesus Christ. The group was led by Dorothy Martin, a Dianetics devotee who transcribed the interstellar messages through automatic writing.

Through her, the aliens had given the precise date of an Earth-rending cataclysm: December 21, 1954. Some of Martin’s followers quit their jobs and sold their property, expecting to be rescued by a flying saucer when the continent split asunder and a new sea swallowed much of the United States. The disciples even went so far as to remove brassieres and rip zippers out of their trousers—the metal, they believed, would pose a danger on the spacecraft.

Festinger and his team were with the cult when the prophecy failed. First, the “boys upstairs” (as the aliens were sometimes called) did not show up and rescue the Seekers. Then December 21 arrived without incident. It was the moment Festinger had been waiting for: How would people so emotionally invested in a belief system react, now that it had been soundly refuted?

At first, the group struggled for an explanation. But then rationalization set in. A new message arrived, announcing that they’d all been spared at the last minute. Festinger summarized the extraterrestrials’ new pronouncement: “The little group, sitting all night long, had spread so much light that God had saved the world from destruction.” Their willingness to believe in the prophecy had saved Earth from the prophecy!

From that day forward, the Seekers, previously shy of the press and indifferent toward evangelizing, began to proselytize. “Their sense of urgency was enormous,” wrote Festinger. The devastation of all they had believed had made them even more certain of their beliefs.

In the annals of denial, it doesn’t get much more extreme than the Seekers. They lost their jobs, the press mocked them, and there were efforts to keep them away from impressionable young minds. But while Martin’s space cult might lie at the far end of the spectrum of human self-delusion, there’s plenty to go around. And since Festinger’s day, an array of new discoveries in psychology and neuroscience has further demonstrated how our preexisting beliefs, far more than any new facts, can skew our thoughts and even color what we consider our most dispassionate and logical conclusions. This tendency toward so-called “motivated reasoning” helps explain why we find groups so polarized over matters where the evidence is so unequivocal: climate change, vaccines, “death panels,” the birthplace and religion of the president, and much else. It would seem that expecting people to be convinced by the facts flies in the face of, you know, the facts.

The theory of motivated reasoning builds on a key insight of modern neuroscience: Reasoning is actually suffused with emotion (or what researchers often call “affect”). Not only are the two inseparable, but our positive or negative feelings about people, things, and ideas arise much more rapidly than our conscious thoughts, in a matter of milliseconds—fast enough to detect with an EEG device, but long before we’re aware of it. That shouldn’t be surprising: Evolution required us to react very quickly to stimuli in our environment. It’s a “basic human survival skill,” explains political scientist Arthur Lupia of the University of Michigan. We push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself.


We’re not driven only by emotions, of course—we also reason, deliberate. But reasoning comes later, works slower—and even then, it doesn’t take place in an emotional vacuum. Rather, our quick-fire emotions can set us on a course of thinking that’s highly biased, especially on topics we care a great deal about.

Consider a person who has heard about a scientific discovery that deeply challenges her belief in divine creation—a new hominid, say, that confirms our evolutionary origins. What happens next, explains political scientist Charles Taber of Stony Brook University, is a subconscious negative response to the new information—and that response, in turn, guides the type of memories and associations formed in the conscious mind. “They retrieve thoughts that are consistent with their previous beliefs,” says Taber, “and that will lead them to build an argument and challenge what they’re hearing.”

In other words, when we think we’re reasoning, we may instead be rationalizing. Or to use an analogy offered by University of Virginia psychologist Jonathan Haidt: We may think we’re being scientists, but we’re actually being lawyers. Our “reasoning” is a means to a predetermined end—winning our “case”—and is shot through with biases. They include “confirmation bias,” in which we give greater heed to evidence and arguments that bolster our beliefs, and “disconfirmation bias,” in which we expend disproportionate energy trying to debunk or refute views and arguments that we find uncongenial.

That’s a lot of jargon, but we all understand these mechanisms when it comes to interpersonal relationships. If I don’t want to believe that my spouse is being unfaithful, or that my child is a bully, I can go to great lengths to explain away behavior that seems obvious to everybody else—everybody who isn’t too emotionally invested to accept it, anyway. That’s not to suggest that we aren’t also motivated to perceive the world accurately—we are. Or that we never change our minds—we do. It’s just that we have other important goals besides accuracy—including identity affirmation and protecting one’s sense of self—and often those make us highly resistant to changing our beliefs when the facts say we should.

Modern science originated from an attempt to weed out such subjective lapses—what that great 17th-century theorist of the scientific method, Francis Bacon, dubbed the “idols of the mind.” Even if individual researchers are prone to falling in love with their own theories, the broader processes of peer review and institutionalized skepticism are designed to ensure that, eventually, the best ideas prevail.


Our individual responses to the conclusions that science reaches, however, are quite another matter. Ironically, in part because researchers employ so much nuance and strive to disclose all remaining sources of uncertainty, scientific evidence is highly susceptible to selective reading and misinterpretation. Giving ideologues or partisans scientific data that’s relevant to their beliefs is like unleashing them in the motivated-reasoning equivalent of a candy store.

Sure enough, a large number of psychological studies have shown that people respond to scientific or technical evidence in ways that justify their preexisting beliefs. In a classic 1979 experiment, pro- and anti-death penalty advocates were exposed to descriptions of two fake scientific studies: one supporting and one undermining the notion that capital punishment deters violent crime and, in particular, murder. They were also shown detailed methodological critiques of the fake studies—and in a scientific sense, neither study was stronger than the other. Yet in each case, advocates more heavily criticized the study whose conclusions disagreed with their own, while describing the study that was more ideologically congenial as more “convincing.”

Since then, similar results have been found for how people respond to “evidence” about affirmative action, gun control, the accuracy of gay stereotypes, and much else. Even when study subjects are explicitly instructed to be unbiased and even-handed about the evidence, they often fail.

And it’s not just that people twist or selectively read scientific evidence to support their preexisting views. According to research by Yale Law School professor Dan Kahan and his colleagues, people’s deep-seated views about morality, and about the way society should be ordered, strongly predict whom they consider to be a legitimate scientific expert in the first place—and thus where they consider “scientific consensus” to lie on contested issues.

In Kahan’s research, individuals are classified, based on their cultural values, as either “individualists” or “communitarians,” and as either “hierarchical” or “egalitarian” in outlook. (Somewhat oversimplifying, you can think of hierarchical individualists as akin to conservative Republicans, and egalitarian communitarians as liberal Democrats.) In one study, subjects in the different groups were asked to help a close friend determine the risks associated with climate change, sequestering nuclear waste, or concealed carry laws: “The friend tells you that he or she is planning to read a book about the issue but would like to get your opinion on whether the author seems like a knowledgeable and trustworthy expert.” A subject was then presented with the résumé of a fake expert “depicted as a member of the National Academy of Sciences who had earned a Ph.D. in a pertinent field from one elite university and who was now on the faculty of another.” The subject was then shown a book excerpt by that “expert,” in which the risk of the issue at hand was portrayed as high or low, well-founded or speculative. The results were stark: When the scientist’s position stated that global warming is real and human-caused, for instance, only 23 percent of hierarchical individualists agreed the person was a “trustworthy and knowledgeable expert.” Yet 88 percent of egalitarian communitarians accepted the same scientist’s expertise. Similar divides were observed on whether nuclear waste can be safely stored underground and whether letting people carry guns deters crime. (The alliances did not always hold. In another study, hierarchs and communitarians were in favor of laws that would compel the mentally ill to accept treatment, whereas individualists and egalitarians were opposed.)


In other words, people rejected the validity of a scientific source because its conclusion contradicted their deeply held views—and thus the relative risks inherent in each scenario. A hierarchical individualist finds it difficult to believe that the things he prizes (commerce, industry, a man’s freedom to possess a gun to defend his family) could lead to outcomes deleterious to society. Whereas egalitarian communitarians tend to think that the free market causes harm, that patriarchal families mess up kids, and that people can’t handle their guns. The study subjects weren’t “anti-science”—not in their own minds, anyway. It’s just that “science” was whatever they wanted it to be. “We’ve come to a misadventure, a bad situation where diverse citizens, who rely on diverse systems of cultural certification, are in conflict,” says Kahan.

And that undercuts the standard notion that the way to persuade people is via evidence and argument. In fact, head-on attempts to persuade can sometimes trigger a backfire effect, where people not only fail to change their minds when confronted with the facts—they may hold their wrong views more tenaciously than ever.

Take, for instance, the question of whether Saddam Hussein possessed hidden weapons of mass destruction just before the US invasion of Iraq in 2003. When political scientists Brendan Nyhan and Jason Reifler showed subjects fake newspaper articles in which this was first suggested (in a 2004 quote from President Bush) and then refuted (with the findings of the Bush-commissioned Iraq Survey Group report, which found no evidence of active WMD programs in pre-invasion Iraq), they found that conservatives were more likely than before to believe the claim. (The researchers also tested how liberals responded when shown that Bush did not actually “ban” embryonic stem-cell research. Liberals weren’t particularly amenable to persuasion, either, but no backfire effect was observed.)

Another study gives some inkling of what may be going through people’s minds when they resist persuasion. Northwestern University sociologist Monica Prasad and her colleagues wanted to test whether they could dislodge the notion that Saddam Hussein and Al Qaeda were secretly collaborating among those most likely to believe it—Republican partisans from highly GOP-friendly counties. So the researchers set up a study in which they discussed the topic with some of these Republicans in person. They would cite the findings of the 9/11 Commission, as well as a statement in which George W. Bush himself denied his administration had “said the 9/11 attacks were orchestrated between Saddam and Al Qaeda.”


As it turned out, not even Bush’s own words could change the minds of these Bush voters—just 1 of the 49 partisans who originally believed the Iraq-Al Qaeda claim changed his or her mind. Far more common was resisting the correction in a variety of ways, either by coming up with counterarguments or by simply being unmovable:

Interviewer: [T]he September 11 Commission found no link between Saddam and 9/11, and this is what President Bush said. Do you have any comments on either of those? 

Respondent: Well, I bet they say that the Commission didn’t have any proof of it but I guess we still can have our opinions and feel that way even though they say that.

The same types of responses are already being documented on divisive topics facing the current administration. Take the “Ground Zero mosque.” Using information from the political myth-busting site, a team at Ohio State presented subjects with a detailed rebuttal to the claim that “Feisal Abdul Rauf, the Imam backing the proposed Islamic cultural center and mosque, is a terrorist-sympathizer.” Yet among those who were aware of the rumor and believed it, fewer than a third changed their minds.

A key question—and one that’s difficult to answer—is how “irrational” all this is. On the one hand, it doesn’t make sense to discard an entire belief system, built up over a lifetime, because of some new snippet of information. “It is quite possible to say, ‘I reached this pro-capital-punishment decision based on real information that I arrived at over my life,'” explains Stanford social psychologist Jon Krosnick. Indeed, there’s a sense in which science denial could be considered keenly “rational.” In certain conservative communities, explains Yale’s Kahan, “People who say, ‘I think there’s something to climate change,’ that’s going to mark them out as a certain kind of person, and their life is going to go less well.”

This may help explain a curious pattern Nyhan and his colleagues found when they tried to test the fallacy that President Obama is a Muslim. When a nonwhite researcher was administering their study, research subjects were amenable to changing their minds about the president’s religion and updating incorrect views. But when only white researchers were present, GOP survey subjects in particular were more likely to believe the Obama Muslim myth than before. The subjects were using “social desirability” to tailor their beliefs (or stated beliefs, anyway) to whoever was listening.

Which leads us to the media. When people grow polarized over a body of evidence, or a resolvable matter of fact, the cause may be some form of biased reasoning, but they could also be receiving skewed information to begin with—or a complicated combination of both. In the Ground Zero mosque case, for instance, a follow-up study showed that survey respondents who watched Fox News were more likely to believe the Rauf rumor and three related ones—and they believed them more strongly than non-Fox watchers.

Okay, so people gravitate toward information that confirms what they believe, and they select sources that deliver it. Same as it ever was, right? Maybe, but the problem is arguably growing more acute, given the way we now consume information—through the Facebook links of friends, or tweets that lack nuance or context, or “narrowcast” and often highly ideological media that have relatively small, like-minded audiences. Those basic human survival skills of ours, says Michigan’s Arthur Lupia, are “not well-adapted to our information age.”


If you wanted to show how and why fact is ditched in favor of motivated reasoning, you could find no better test case than climate change. After all, it’s an issue where you have highly technical information on one hand and very strong beliefs on the other. And sure enough, one key predictor of whether you accept the science of global warming is whether you’re a Republican or a Democrat. The two groups have been growing more divided in their views about the topic, even as the science becomes more unequivocal.

So perhaps it should come as no surprise that more education doesn’t budge Republican views. On the contrary: In a 2008 Pew survey, for instance, only 19 percent of college-educated Republicans agreed that the planet is warming due to human actions, versus 31 percent of non-college educated Republicans. In other words, a higher education correlated with an increased likelihood of denying the science on the issue. Meanwhile, among Democrats and independents, more education correlated with greater acceptance of the science.

Other studies have shown a similar effect: Republicans who think they understand the global warming issue best are least concerned about it; and among Republicans and those with higher levels of distrust of science in general, learning more about the issue doesn’t increase one’s concern about it. What’s going on here? Well, according to Charles Taber and Milton Lodge of Stony Brook, one insidious aspect of motivated reasoning is that political sophisticates are prone to be more biased than those who know less about the issues. “People who have a dislike of some policy—for example, abortion—if they’re unsophisticated they can just reject it out of hand,” says Lodge. “But if they’re sophisticated, they can go one step further and start coming up with counterarguments.” These individuals are just as emotionally driven and biased as the rest of us, but they’re able to generate more and better reasons to explain why they’re right—and so their minds become harder to change.

That may be why the selectively quoted emails of Climategate were so quickly and easily seized upon by partisans as evidence of scandal. Cherry-picking is precisely the sort of behavior you would expect motivated reasoners to engage in to bolster their views—and whatever you may think about Climategate, the emails were a rich trove of new information upon which to impose one’s ideology.

Climategate had a substantial impact on public opinion, according to Anthony Leiserowitz, director of the Yale Project on Climate Change Communication. It contributed to an overall drop in public concern about climate change and a significant loss of trust in scientists. But—as we should expect by now—these declines were concentrated among particular groups of Americans: Republicans, conservatives, and those with “individualistic” values. Liberals and those with “egalitarian” values didn’t lose much trust in climate science or scientists at all. “In some ways, Climategate was like a Rorschach test,” Leiserowitz says, “with different groups interpreting ambiguous facts in very different ways.”


So is there a case study of science denial that largely occupies the political left? Yes: the claim that childhood vaccines are causing an epidemic of autism. Its most famous proponents are an environmentalist (Robert F. Kennedy Jr.) and numerous Hollywood celebrities (most notably Jenny McCarthy and Jim Carrey). The Huffington Post gives a very large megaphone to denialists. And Seth Mnookin, author of the new book The Panic Virus, notes that if you want to find vaccine deniers, all you need to do is go hang out at Whole Foods.

Vaccine denial has all the hallmarks of a belief system that’s not amenable to refutation. Over the past decade, the assertion that childhood vaccines are driving autism rates has been undermined by multiple epidemiological studies—as well as the simple fact that autism rates continue to rise, even though the alleged offending agent in vaccines (a mercury-based preservative called thimerosal) has long since been removed.

Yet the true believers persist—critiquing each new study that challenges their views, and even rallying to the defense of vaccine-autism researcher Andrew Wakefield, after his 1998 Lancet paper—which originated the current vaccine scare—was retracted and he subsequently lost his license to practice medicine. But then, why should we be surprised? Vaccine deniers created their own partisan media, such as the website Age of Autism, that instantly blast out critiques and counterarguments whenever any new development casts further doubt on anti-vaccine views.

It all raises the question: Do left and right differ in any meaningful way when it comes to biases in processing information, or are we all equally susceptible?

There are some clear differences. Science denial today is considerably more prominent on the political right—once you survey climate and related environmental issues, anti-evolutionism, attacks on reproductive health science by the Christian right, and stem-cell and biomedical matters. More tellingly, anti-vaccine positions are virtually nonexistent among Democratic officeholders today—whereas anti-climate-science views are becoming monolithic among Republican elected officials.

Some researchers have suggested that there are psychological differences between the left and the right that might impact responses to new information—that conservatives are more rigid and authoritarian, and liberals more tolerant of ambiguity. Psychologist John Jost of New York University has further argued that conservatives are “system justifiers”: They engage in motivated reasoning to defend the status quo.

This is a contested area, however, because as soon as one tries to psychoanalyze inherent political differences, a battery of counterarguments emerges: What about dogmatic and militant communists? What about how the parties have differed through history? After all, the most canonical case of ideologically driven science denial is probably the rejection of genetics in the Soviet Union, where researchers disagreeing with the anti-Mendelian scientist (and Stalin stooge) Trofim Lysenko were executed, and genetics itself was denounced as a “bourgeois” science and officially banned.

The upshot: All we can currently bank on is the fact that we all have blinders in some situations. The question then becomes: What can be done to counteract human nature itself?


Given the power of our prior beliefs to skew how we respond to new information, one thing is becoming clear: If you want someone to accept new evidence, make sure to present it to them in a context that doesn’t trigger a defensive, emotional reaction.

This theory is gaining traction in part because of Kahan’s work at Yale. In one study, he and his colleagues packaged the basic science of climate change into fake newspaper articles bearing two very different headlines—”Scientific Panel Recommends Anti-Pollution Solution to Global Warming” and “Scientific Panel Recommends Nuclear Solution to Global Warming”—and then tested how citizens with different values responded. Sure enough, the latter framing made hierarchical individualists much more open to accepting the fact that humans are causing global warming. Kahan infers that the effect occurred because the science had been written into an alternative narrative that appealed to their pro-industry worldview.

You can follow the logic to its conclusion: Conservatives are more likely to embrace climate science if it comes to them via a business or religious leader, who can set the issue in the context of different values than those from which environmentalists or scientists often argue. Doing so is, effectively, to signal a détente in what Kahan has called a “culture war of fact.” In other words, paradoxically, you don’t lead with the facts in order to convince. You lead with the values—so as to give the facts a fighting chance.



>Almost a winner

Agência FAPESP – 17/5/2010

Almost. So close. Next time it’s in the bag. According to a study conducted at the University of Cambridge, in the United Kingdom, the brain of the habitual gambler reacts differently when facing a loss.

For people who don’t gamble regularly, losing is normal and signals that it’s time to stop. But for those whose addiction is gambling, it’s not quite like that. According to the study, these gamblers’ brains react far more intensely to occasions when victory came very close than other people’s brains do.

This peculiarity could explain why compulsive gamblers keep betting even when they are losing again and again. In the study, the researchers examined the brains of 20 gamblers with functional magnetic resonance imaging while the subjects played a slot machine.

The researchers observed that the parts of the brain involved in processing rewards (the so-called dopamine centers) were more active in people with gambling problems than in people who gambled socially, the two groups into which the volunteers were divided.

During the experiment, participants played a machine with two wheels and won 50 pence each time the result was two identical icons. Two different icons counted as a loss, but when the outcome stopped one icon short of a pair (one position before or after, in the direction of the wheel’s movement), it was classified as a “near miss.”
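The two-wheel mechanic just described can be sketched in a few lines of code. This is only an illustration: the icon set, the wheel length, and the cyclic reading of “one position away” are my assumptions, not details taken from the paper.

```python
import random

# Hypothetical icon set; the paper does not specify the symbols used.
ICONS = ["cherry", "bell", "lemon", "star", "seven", "bar"]

def spin():
    """Stop each of the two wheels on a random icon position."""
    return random.randrange(len(ICONS)), random.randrange(len(ICONS))

def classify(left, right):
    """Classify an outcome in the spirit of the study: a match pays
    50 pence; stopping one position short of (or past) a match is a
    'near miss'; everything else is a plain loss."""
    if left == right:
        return "win"
    # One position before or after a match, wrapping around the wheel.
    if (right - left) % len(ICONS) in (1, len(ICONS) - 1):
        return "near_miss"
    return "loss"
```

The point of the study is exactly the distinction this classification draws: “near miss” outcomes pay nothing, yet in frequent gamblers they activated the same reward pathways as wins.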

The researchers observed that these near misses activated the same brain pathways as wins, even though there was no monetary reward. They also found that the reaction to this outcome was much stronger among habitual gamblers.

“The results are interesting because they suggest that near misses may stimulate a dopamine response in more frequent gamblers, even when no prize results. If these dopamine surges are driving addictive behavior, this could help explain why problem gamblers find it so hard to stop playing,” said Luke Clark, one of the authors of the study, which was published in the Journal of Neuroscience.

The article “Gambling severity predicts midbrain response to near-miss outcomes” (DOI: 10.1523/jneurosci.5758-09.2010), by Luke Clark and Henry Chase, can be read by subscribers of the Journal of Neuroscience at

>The biggest tomb of samba (that is, the cognitive sciences), taken to the point of absurdity

Next Big Thing in English: Knowing They Know That You Know

The New York Times, March 31, 2010

To illustrate what a growing number of literary scholars consider the most exciting area of new research, Lisa Zunshine, a professor of English at the University of Kentucky, refers to an episode from the TV series “Friends.”

(Follow closely now; this is about the science of English.) Phoebe and Rachel plot to play a joke on Monica and Chandler after they learn the two are secretly dating. The couple discover the prank and try to turn the tables, but Phoebe realizes this turnabout and once again tries to outwit them.

As Phoebe tells Rachel, “They don’t know that we know they know we know.”

This layered process of figuring out what someone else is thinking — of mind reading — is both a common literary device and an essential survival skill. Why human beings are equipped with this capacity and what particular brain functions enable them to do it are questions that have occupied primarily cognitive psychologists.

Now English professors and graduate students are asking them too. They say they’re convinced science not only offers unexpected insights into individual texts, but that it may help to answer fundamental questions about literature’s very existence: Why do we read fiction? Why do we care so passionately about nonexistent characters? What underlying mental processes are activated when we read?

Ms. Zunshine, whose specialty is 18th-century British literature, became familiar with the work of evolutionary psychologists while she was a graduate student at the University of California, Santa Barbara in the 1990s. “I thought this could be the most exciting thing I could ever learn,” she said.

At a time when university literature departments are confronting painful budget cuts, a moribund job market and pointed scrutiny about the purpose and value of an education in the humanities, the cross-pollination of English and psychology is providing a revitalizing lift.

Jonathan Gottschall, who has written extensively about using evolutionary theory to explain fiction, said “it’s a new moment of hope” in an era when everyone is talking about “the death of the humanities.” To Mr. Gottschall a scientific approach can rescue literature departments from the malaise that has embraced them over the last decade and a half. Zealous enthusiasm for the politically charged and frequently arcane theories that energized departments in the 1970s, ’80s and early ’90s — Marxism, structuralism, psychoanalysis — has faded. Since then a new generation of scholars has been casting about for The Next Big Thing.

The brain may be it. Getting to the root of people’s fascination with fiction and fantasy, Mr. Gottschall said, is like “mapping wonderland.”

Literature, like other fields including history and political science, has looked to the technology of brain imaging and the principles of evolution to provide empirical evidence for unprovable theories.

Interest has bloomed during the last decade. Elaine Scarry, a professor of English at Harvard, has since 2000 hosted a seminar on cognitive theory and the arts. Over the years participants have explored, for example, how the visual cortex works in order to explain why Impressionist paintings give the appearance of shimmering. In a few weeks Stephen Kosslyn, a psychologist at Harvard, will give a talk about mental imagery and memory, both of which are invoked while reading.

Ms. Zunshine said that in 1999 she and about 10 others won approval from the Modern Language Association to form a discussion group on cognitive approaches to literature. Last year their members numbered more than 1,200. Unlike Mr. Gottschall, however, Ms. Zunshine sees cognitive approaches as building on other literary theories rather than replacing them.

Ms. Zunshine is particularly interested in what cognitive scientists call the theory of mind, which involves one person’s ability to interpret another person’s mental state and to pinpoint the source of a particular piece of information in order to assess its validity.

Jane Austen’s novels are frequently constructed around mistaken interpretations. In “Emma” the eponymous heroine assumes Mr. Elton’s attentions signal a romantic interest in her friend Harriet, though he is actually intent on marrying Emma. She similarly misinterprets the behavior of Frank Churchill and Mr. Knightley, and misses the true objects of their affections.

Humans can comfortably keep track of three different mental states at a time, Ms. Zunshine said. For example, the proposition “Peter said that Paul believed that Mary liked chocolate” is not too hard to follow. Add a fourth level, though, and it’s suddenly more difficult. And experiments have shown that at the fifth level understanding drops off by 60 percent, Ms. Zunshine said. Modernist authors like Virginia Woolf are especially challenging because they ask readers to keep up with six different mental states, or what the scholars call levels of intentionality.

Perhaps the human facility with three levels is related to the intrigues of sexual mating, Ms. Zunshine suggested. Do I think he is attracted to her or me? Whatever the root cause, Ms. Zunshine argues, people find the interaction of three minds compelling. “If I have some ideological agenda,” she said, “I would try to construct a narrative that involved a triangularization of minds, because that is something we find particularly satisfying.”

Ms. Zunshine is part of a research team composed of literary scholars and cognitive psychologists who are using snapshots of the brain at work to explore the mechanics of reading. The project, funded by the Teagle Foundation and hosted by the Haskins Laboratory in New Haven, is aimed at improving college-level reading skills.

“We begin by assuming that there is a difference between the kind of reading that people do when they read Marcel Proust or Henry James and a newspaper, that there is a value added cognitively when we read complex literary texts,” said Michael Holquist, professor emeritus of comparative literature at Yale, who is leading the project.

The team spent nearly a year figuring out how one might test for complexity. What they came up with was mind reading — or how well an individual is able to track multiple sources. The pilot study, which he hopes will start later this spring, will involve 12 subjects. “Each will be put into the magnet” — an M.R.I. machine — “and given a set of texts of graduated complexity depending on the difficulty of source monitoring and we’ll watch what happens in the brain,” Mr. Holquist explained.

At the other end of the country Blakey Vermeule, an associate professor of English at Stanford, is examining theory of mind from a different perspective. She starts from the assumption that evolution had a hand in our love of fiction, and then goes on to examine the narrative technique known as “free indirect style,” which mingles the character’s voice with the narrator’s. Indirect style enables readers to inhabit two or even three mind-sets at a time.

This style, which became the hallmark of the novel beginning in the 19th century with Jane Austen, evolved because it satisfies our “intense interest in other people’s secret thoughts and motivations,” Ms. Vermeule said.

The road between the two cultures — science and literature — can go both ways. “Fiction provides a new perspective on what happens in evolution,” said William Flesch, a professor of English at Brandeis University.

To Mr. Flesch fictional accounts help explain how altruism evolved despite our selfish genes. Fictional heroes are what he calls “altruistic punishers,” people who right wrongs even if they personally have nothing to gain. “To give us an incentive to monitor and ensure cooperation, nature endows us with a pleasing sense of outrage” at cheaters, and delight when they are punished, Mr. Flesch argues. We enjoy fiction because it is teeming with altruistic punishers: Odysseus, Don Quixote, Hamlet, Hercule Poirot.

“It’s not that evolution gives us insight into fiction,” Mr. Flesch said, “but that fiction gives us insight into evolution.”

>Marcelo Gleiser: Imperfect Creation (Folha Mais!)

The notion that nature can be deciphered through reductionism must be abandoned.

March 14, 2010

Since time immemorial, when faced with nature’s immense complexity, humans have searched it for repeating patterns, for some kind of order. This makes a great deal of sense. After all, when we look at the sky we see organized patterns, periodic movements that repeat themselves, defining natural cycles to which we are deeply bound: sunrise and sunset, the phases of the Moon, the seasons of the year, the planetary orbits.

With Pythagoras, 2,500 years ago, the search for a natural order of things was transformed into a search for a mathematical order: the patterns we see in nature reflect the mathematics of creation. It falls to the philosopher to uncover those patterns and so reveal the secrets of the world.

Moreover, since the world is the work of a universal architect (not exactly the Judeo-Christian God, but a creator deity nonetheless), uncovering the secrets of the world amounts to uncovering the “mind of God.” I recently wrote about how this metaphor remains alive today and is used by physicists such as Stephen Hawking and many others.

This search for a mathematical order in nature has yielded, and continues to yield, many fruits. Nothing could be more natural than to look for a hidden order that explains the world’s complexity. This approach is the core of reductionism, a method of study based on the idea that an understanding of the whole can be reached through the study of its various parts.

The results of this order are expressed as laws, which we call the laws of nature. Laws are the highest expression of natural order. In reality, things are not so simple. For all its obvious usefulness, reductionism has its limitations. There are certain questions, or rather certain systems, that cannot be understood from their parts. The climate is one of them; the workings of the human mind are another.

The biochemical processes that define living beings cannot be understood from simple laws, or from the fact that molecules are made of atoms. Essentially, in complex systems the whole cannot be reduced to its parts.

Unpredictable behaviors emerge from the countless interactions among a system’s elements. For example, the function of molecules with many atoms, such as proteins, depends on how they “fold,” that is, on their spatial configuration. The functioning of the brain cannot be deduced from the functioning of 100 billion neurons.

Complex systems require different laws, laws that describe behaviors arising from the cooperation of many parts. The notion that nature is perfect and can be deciphered by the systematic application of the reductionist method must be abandoned. Far more in keeping with the findings of modern science is to adopt a plural approach: alongside reductionism we need other methods for dealing with more complex systems. All of this, of course, still within the bounds of the natural sciences, but accepting that nature is imperfect and that the order we so eagerly seek is, in truth, an expression of the order we seek in ourselves.

It is worth remembering that science builds models that describe reality; these models are not reality, only our representations of it. The “truths” we so admire are approximations of what actually happens.

Symmetries are never exact. What is surprising about nature is not its perfection, but the fact that matter, after billions of years, has evolved to the point of creating entities capable of questioning their own existence.

MARCELO GLEISER is a professor of theoretical physics at Dartmouth College, in Hanover (USA), and the author of the book “A Criação Imperfeita”

>Idealization and abstraction


Agência FAPESP, December 22, 2009
By Fabio Reynol

Distinguishing the false and suppressing the true is, in most cases, indispensable for doing good cognitive science. The provocative statement was made by John Woods, a professor at the University of British Columbia, in Canada.

The philosopher took part last week in the seminar “Model-Based Reasoning in Science and Technology,” at the State University of Campinas (Unicamp). The event was held within the Thematic Project Logical Consequence and Combinations of Logics – Fundaments and Efficient Applications, supported by FAPESP and coordinated by Walter Carnielli, a professor at Unicamp’s Institute of Philosophy and Human Sciences.

Woods was referring to two devices used by model-based reasoning in science: idealization and abstraction. According to him, both are distortions of reality that end up producing good results for scientific investigation. “It is these distortions that make them interesting,” he said.

While idealization over-represents a phenomenon by expressing aspects known to be false, abstraction under-represents it by eliminating some variables in favor of others in an attempt to simplify the problem. An example of idealization is the physics problem in which surface friction is ignored.

Woods’s ideas stand as a counterpoint to the instrumentalist picture of science as a tool for recording reality solely through precise and faithful measurements. For him, successful scientific models contain distortions. “Distortion is not incompatible with the acquisition of knowledge,” he stressed.

The basis would lie in the modeling system itself, which will never be identical to the phenomenon it represents. “If something an object resembles will never be the object itself, then saying what that object is amounts to stating what it is not,” Woods said.

The philosopher built a representative model of his own to explain how such models manage to succeed. According to him, the knowledge resulting from these modeled processes is obtained not by instruments but by cognition, and it even employs a counterintuitive technique in working with unreal, idealized, or outright false assumptions.

“Getting things wrong is a way of getting them right. It really is intriguing — and it works,” he said.

Alongside Carnielli, the seminar was chaired by Professor Lorenzo Magnani, of the University of Pavia, and Professor Claudio Pizzi, of the University of Siena. The two Italian institutions organized the event jointly with Unicamp.


From the publisher’s description of the book Biocommunication and Natural Genome Editing, by Günther Witzany (Springer, 2010):

Biocommunication occurs on three levels: (A) intraorganismic, i.e. intra- and intercellular; (B) interorganismic, between the same or related species; and (C) transorganismic, between organisms which are not related. The biocommunicative approach demonstrates both that cells, tissues, organs and organisms coordinate and organize themselves through communication processes, and that the genetic nucleotide sequence order in cellular and non-cellular genomes is structured language-like, i.e. follows combinatorial (syntactic), context-sensitive (pragmatic) and content-specific (semantic) rules. Without sign-mediated interactions no vital functions within and between organisms can be coordinated. Exactly this feature is absent in non-living matter.

Additionally, the biocommunicative approach investigates the natural genome-editing competences of viruses. Natural genome editing from a biocommunicative perspective is the competent, agent-driven generation and integration of meaningful nucleotide sequences into pre-existing genomic content arrangements, and the ability to (re)combine and (re)regulate them according to context-dependent (i.e. adaptational) purposes of the host organism.

>Chance, cognition, and everyday life

Beers and a bit of luck

The role of chance in life is more important than we think, and the brain does not handle probability well, says a physicist in a new book


Caderno Mais! – September 13, 2009

You are male, American, white, heterosexual, and you don’t use drugs. The year is 1989. You take a routine blood test and, a few days later, you get the news: HIV positive. The doctor is very sorry, but death is inevitable. If you like, you can take another test, but the chance that you are not infected is quite small: one in a thousand.

That is what happened to Leonard Mlodinow, a physicist at Caltech (the California Institute of Technology) who had already written “A Briefer History of Time” with Stephen Hawking. “It is hard to describe (…) how I spent that weekend; let’s just say I didn’t go to Disneyland,” he writes in “O Andar do Bêbado” (“The Drunkard’s Walk”), his new book, just released in Brazil.

Perhaps Mlodinow’s doctor was excellent. But he would make a poor statistician. In 1989, in the US, one in every 10,000 people fitting the description above was infected with HIV. Imagine those 10,000 people all taking the test. The one HIV-positive person would most likely get bad news. But since one in every thousand tests gives the wrong result, ten healthy people would also get that news.

In other words, under those conditions, of every 11 people handed the verdict “HIV positive,” only one was actually infected. The share of false positives is ten times larger than that of true positives. It would therefore have made more sense for Mlodinow to go to Disneyland after all. In the end, he learned he did not have HIV.
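The arithmetic behind this argument is Bayes’ theorem. A minimal sketch, using the figures from the text (1-in-10,000 prevalence, 1-in-1,000 false-positive rate) and assuming, for simplicity, that the test never misses a real infection:

```python
# Base rates from the 1989 example in the text.
prevalence = 1 / 10_000          # P(infected) in Mlodinow's demographic
false_positive_rate = 1 / 1_000  # P(positive | healthy)
sensitivity = 1.0                # P(positive | infected) -- assumed perfect here

# Bayes' theorem: P(infected | positive) = P(pos | inf) P(inf) / P(pos)
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_infected_given_positive = sensitivity * prevalence / p_positive

print(f"P(infected | positive) = {p_infected_given_positive:.3f}")
# About 0.091 -- roughly 1 chance in 11, matching the article's count.
```

The counterintuitive result falls out of the base rate: among 10,000 test-takers there is about one true positive but ten false positives, so a positive result alone is weak evidence.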

But trouble with probabilities is not exclusive to Mlodinow’s doctor. Humans underestimate the role of randomness in their lives: our brains are wired to find patterns, even where none exist.

This ranges from things that are plainly meaningless (like wearing the same old sock for every Brazil match) to more serious situations.

Clumsy donkey

In this vein, it is very common to assume that successes or failures are the exclusive result of our competence. To a large extent they are, of course, but how much randomness is involved?

Gamblers, salespeople, men chasing women at parties. Almost every human activity is subject to chance. Results are distributed around an average (a high one, for the competent), but there are good days (when the striker scores three goals and is crowned a hero) and bad days (when the “ladies’ man” goes home alone).

The same goes for the financial market, for instance. Do the investors who make millions on the stock exchange do so because they are competent, or because, over a given run of years, they were luckier — making guesses just as wild as those of many who were less successful?

As an example, Mlodinow tells the story of Daniel Kahneman, the psychologist who won the 2002 Nobel in economics. Since one doesn’t get to choose one’s work early in a career, in the 1960s he found himself teaching Israeli Air Force flight instructors that rewarding performance works better than punishing mistakes.

He was interrupted by one of the instructors in the audience. The man said that he had often praised a student’s maneuver only to see him do much worse the next time, and that when he yelled at the oaf who had just nearly destroyed the plane, the student improved on the next attempt. The other instructors agreed.

Were the psychologists wrong? Kahneman concluded they were not. The instructors’ experience simply had to do with probability.

His idea was that trainees improve their skill gradually, and that this improvement is not perceptible between one maneuver and the next. Any perfect flight, or any landing that takes half the airport with it, is a one-off, a deviation from the average. On the next attempt, the odds are high that the student will return to his central “standard” — neither fantastic nor disastrous.

So, says Mlodinow, when the instructors praised an impeccable performance, they got the impression that the student did worse afterwards. And if the student, say, forgot to lower the landing gear and got an earful of “you clumsy donkey,” he did better the next time.

The question behind the book is how far we let ourselves be fooled by deviations from the average, reading them as signs of supreme competence or of an utter unfitness for life. When an actor is suddenly discovered after years of failure, like Bruce Willis, or when someone makes a fortune in a few years, like Bill Gates, how much of it was being in the right place at the right time? The drunkard’s walk, with no conscious direction, turns out to be an excellent metaphor for the paths we take in life.

BOOK – “O Andar do Bêbado: Como o Acaso Determina Nossas Vidas” (“The Drunkard’s Walk: How Randomness Rules Our Lives”)
Leonard Mlodinow; translated by Diego Alfaro; Zahar, 261 pages, R$ 39