Tag archive: Cognition

How Do You Say ‘Disagreement’ in Pirahã? (N.Y.Times)

By JENNIFER SCHUESSLER. Published: March 21, 2012

Dan Everett. Essential Media & Entertainment/Smithsonian Channel

In his 2008 memoir, “Don’t Sleep, There Are Snakes,” the linguist Dan Everett recalled the night members of the Pirahã — the isolated Amazonian hunter-gatherers he first visited as a Christian missionary in the late 1970s — tried to kill him.

Dr. Everett survived, and his life among the Pirahã, a group of several hundred living in northwest Brazil, went on mostly peacefully as he established himself as a leading scholarly authority on the group and one of a handful of outsiders to master their difficult language.

His life among his fellow linguists, however, has been far less idyllic, and debate about his scholarship is poised to boil over anew, thanks to his ambitious new book, “Language: The Cultural Tool,” and a forthcoming television documentary that presents an admiring view of his research among the Pirahã along with a darkly conspiratorial view of some of his critics.

Members of the Pirahã people of Amazonian Brazil, who have an unusual language, as seen in “The Grammar of Happiness.” Essential Media & Entertainment/Smithsonian Channel

In 2005 Dr. Everett shot to international prominence with a paper claiming that he had identified some peculiar features of the Pirahã language that challenged Noam Chomsky’s influential theory, first proposed in the 1950s, that human language is governed by “universal grammar,” a genetically determined capacity that imposes the same fundamental shape on all the world’s tongues.

The paper, published in the journal Current Anthropology, turned him into something of a popular hero but a professional lightning rod, embraced in the press as a giant killer who had felled the mighty Chomsky but denounced by some fellow linguists as a fraud, an attention seeker or worse, promoting dubious ideas about a powerless indigenous group while refusing to release his data to skeptics.

The controversy has been simmering in journals and at conferences ever since, fed by a slow trickle of findings by researchers who have followed Dr. Everett’s path down to the Amazon. In a telephone interview Dr. Everett, 60, who is the dean of arts and sciences at Bentley University in Waltham, Mass., insisted that he’s not trying to pick a fresh fight, let alone present himself as a rival to the man he calls “the smartest person I’ve ever met.”

“I’m a small fish in the sea,” he said, adding, “I do not put myself at Chomsky’s level.”

Dan Everett in the Amazon region of Brazil with the Pirahã in 1981. Courtesy Daniel Everett

Still, he doesn’t shy from making big claims for “Language: The Cultural Tool,” published last week by Pantheon. “I am going beyond my work with Pirahã and systematically dismantling the evidence in favor of a language instinct,” he said. “I suspect it will be extremely controversial.”

Even some of Dr. Everett’s admirers fault him for representing himself as a lonely voice of truth against an all-powerful Chomskian orthodoxy bent on stopping his ideas dead. It’s certainly the view advanced in the documentary, “The Grammar of Happiness,” which accuses unnamed linguists of improperly influencing the Brazilian government to deny his request to return to Pirahã territory, either with the film crew or with a research team from M.I.T., led by Ted Gibson, a professor of cognitive science. (It’s scheduled to run on the Smithsonian Channel in May.)

A Pirahã man in the film “The Grammar of Happiness.” Essential Media & Entertainment/Smithsonian Channel

Dr. Everett acknowledged that he had no firsthand evidence of any intrigues against him. But Miguel Oliveira, an associate professor of linguistics at the Federal University of Alagoas and the M.I.T. expedition’s Brazilian sponsor, said in an interview that Dr. Everett is widely resented among scholars in Brazil for his missionary past, anti-Chomskian stance and ability to attract research money.

“This is politics, everybody knows that,” Dr. Oliveira said. “One of the arguments is that he’s stealing something from the indigenous people to become famous. It’s not said. But that’s the way they think.”

Claims of skullduggery certainly add juice to a debate that, to nonlinguists, can seem arcane. In a sense what Dr. Everett has taken from the Pirahã isn’t gold or rare medicinal plants but recursion, a property of language that allows speakers to embed phrases within phrases — for example, “The professor said Everett said Chomsky is wrong” — infinitely.
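To make the notion concrete for nonlinguists, here is a toy sketch (our illustration, not anything drawn from Dr. Everett's or Dr. Chomsky's work) of the single rule behind such embedding, written as a short Python function whose recursive structure mirrors the grammar rule S -> NP "said" S:

# Toy illustration of linguistic recursion: the rule S -> NP "said" S lets a
# sentence embed another sentence, to any depth. Names here are illustrative.
def embed(speakers, core="Chomsky is wrong"):
    # Each speaker adds one more level of embedding around the core clause.
    if not speakers:
        return core
    return speakers[0] + " said " + embed(speakers[1:], core)

print(embed(["The professor", "Everett"]))
# -> The professor said Everett said Chomsky is wrong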

In a much-cited 2002 paper Professor Chomsky, an emeritus professor of linguistics at M.I.T., writing with Marc D. Hauser and W. Tecumseh Fitch, declared recursion to be the crucial feature of universal grammar and the only thing separating human language from its evolutionary forerunners. But Dr. Everett, who had been publishing quietly on the Pirahã for two decades, announced in his 2005 paper that their language lacked recursion, along with color terms, number terms, and other common properties of language. The Pirahã, Dr. Everett wrote, showed these linguistic gaps not because they were simple-minded, but because their culture — which emphasized concrete matters in the here and now and also lacked creation myths and traditions of art making — did not require it.

To Dr. Everett, Pirahã was a clear case of culture shaping grammar — an impossibility according to the theory of universal grammar. But to some of his critics the paper was really just a case of Dr. Everett — who said he began questioning his own Chomskian ideas in the early 1990s, around the time he began questioning his faith — fixing the facts around his new theories.

In 2009 the linguists Andrew Nevins, Cilene Rodrigues and David Pesetsky, three of the fiercest early critics of Dr. Everett’s paper, published their own in the journal Language, disputing his linguistic claims and expressing “discomfort” with his overall account of the Pirahã’s simple culture. Their main source was Dr. Everett himself, whose 1982 doctoral dissertation, they argued, showed clear evidence of recursion in Pirahã.

“He was right the first time,” Dr. Pesetsky, an M.I.T. professor, said in an interview. “The first time he had reasons. The second time he had no reasons.”

Some scholars say the debate remains stymied by a lack of fresh, independently gathered data. Three different research teams, including one led by Dr. Gibson that traveled to the Pirahã in 2007, have published papers supporting Dr. Everett’s claim that there are no numbers in the Pirahã language. But efforts to go recursion hunting in the jungle — using techniques that range from eliciting sentences to having the Pirahã play specially designed video games — have so far yielded no published results.

Still, some have tried to figure out ways to press ahead, even without direct access to the Pirahã. After Dr. Gibson’s team was denied permission to return to Brazil in 2010, its members devised a method that minimized reliance on Dr. Everett’s data by analyzing instead a corpus of 1,000 sentences from Pirahã stories transcribed by another missionary in the region.

Their analysis, presented at the Linguistic Society of America’s annual meeting in January, found no embedded clauses but did uncover “suggestive evidence” of recursion in a more obscure grammatical corner. It’s a result that is hardly satisfying to Dr. Everett, who questions it. But his critics, oddly, seem no more pleased.

Dr. Pesetsky, who heard the presentation, dismissed the whole effort as biased from the start by its reliance on Dr. Everett’s grammatical classifications and basic assumptions. “They were taking for granted the correctness of the hypothesis they were trying to disconfirm,” he said.

But to Dr. Gibson, who said he does not find Dr. Everett’s cultural theory of language persuasive, such responses reflect the gap between theoretical linguists and data-driven cognitive scientists, not to mention the strangely calcified state of the recursion debate.

“Chomskians and non-Chomskians are weirdly illogical at times,” he said. “It’s like they just don’t want to have a cogent argument. They just want to contradict what the other guy is saying.”

Dr. Everett’s critics fault him for failing to release his field data, even seven years after the controversy erupted. He countered that he is currently working to translate his decades’ worth of material and hopes to post some transcriptions online “over the next several months.” The bigger outrage, he insisted, is what he characterized as other scholars’ efforts to accuse him of “racist research” and interfere with his access to the Pirahã.

Dr. Rodrigues, a professor of linguistics at the Pontifical Catholic University in Rio de Janeiro, acknowledged by e-mail that in 2007 she wrote a letter to Funai, the Brazilian government agency in charge of indigenous affairs, detailing her objections to Dr. Everett’s linguistic research and to his broader description of Pirahã culture.

She declined to elaborate on the contents of the letter, which she said was written at Funai’s request and did not recommend any particular course of action. But asked about her overall opinion of Dr. Everett’s research, she said, “It does not meet the standards of scientific evidence in our field.”

Whatever the reasons for Dr. Everett’s being denied access, he’s enlisting the help of the Pirahã themselves, who are shown at the end of “The Grammar of Happiness” recording an emotional plea to the Brazilian government.

“We love Dan,” one man says into the camera. “Dan speaks our language.”

Science, Journalism, and the Hype Cycle: My piece in tomorrow’s Wall Street Journal (Discover Magazine)

I think one of the biggest struggles a science writer faces is how to accurately describe the promise of new research. If we start promising that a preliminary experiment is going to lead to a cure for cancer, we are treating our readers cruelly–especially the readers who have cancer. On the other hand, scoffing at everything is not a sensible alternative, because sometimes preliminary experiments really do lead to great advances. In the 1950s, scientists discovered that bacteria can slice up virus DNA to avoid getting sick. That discovery led, some 30 years later, to biotechnology–to an industry that enabled, among other things, bacteria to produce human insulin.

This challenge was very much on my mind as I recently read two books, which I review in tomorrow’s Wall Street Journal. One is on gene therapy–a treatment that inspired wild expectations in the 1990s, then crashed, and now is coming back. The other is on epigenetics, which seems to me to be in the early stages of the hype cycle. You can read the essay in full here. [see post below]

March 9th, 2012 5:33 PM by Carl Zimmer

Hope, Hype and Genetic Breakthroughs (Wall Street Journal)

By CARL ZIMMER

I talk to scientists for a living, and one of my most memorable conversations took place a couple of years ago with an engineer who put electrodes in bird brains. The electrodes were implanted into the song-generating region of the brain, and he could control them with a wireless remote. When he pressed a button, a bird singing in a cage across the lab would fall silent. Press again, and it would resume its song.

I could instantly see a future in which this technology brought happiness to millions of people. Imagine a girl blind from birth. You could implant a future version of these wireless electrodes in the back of her brain and then feed it images from a video camera.

As a journalist, I tried to get the engineer to explore what seemed to me to be the inevitable benefits of his research. To his great credit, he wouldn’t. He wasn’t even sure his design would ever see the inside of a human skull. There were just too many ways for it to go wrong. He wanted to be very sure that I understood that and that I wouldn’t claim otherwise. “False hope,” he warned me, “is a sinful thing.”


Stephen Voss. Gene therapy allowed this once-blind dog to see again.

Over the past two centuries, medical research has yielded some awesome treatments: smallpox wiped out with vaccines, deadly bacteria thwarted by antibiotics, face transplants. But when we look back across history, we forget the many years of failure and struggle behind each of these advances.

This foreshortened view distorts our expectations for research taking place today. We want to believe that every successful experiment means that another grand victory is weeks away. Big stories appear in the press about the next big thing. And then, as the years pass, the next big thing often fails to materialize. We are left with false hope, and the next big thing gets a reputation as the next big lie.

In 1995, a business analyst named Jackie Fenn captured this intellectual whiplash in a simple graph. Again and again, she had seen new advances burst on the scene and generate ridiculous excitement. Eventually they would reach what she dubbed the Peak of Inflated Expectations. Unable to satisfy their promise fast enough, many of them plunged into the Trough of Disillusionment. Their fall didn’t necessarily mean that these technologies were failures. The successful ones slowly emerged again and climbed the Slope of Enlightenment.

When Ms. Fenn drew the Hype Cycle, she had in mind dot-com-bubble technologies like cellphones and broadband. Yet it’s a good model for medical advances too. I could point to many examples of the medical hype cycle, but it’s hard to think of a better one than the subject of Ricki Lewis’s well-researched new book, “The Forever Fix”: gene therapy.

The concept of gene therapy is beguilingly simple. Many devastating disorders are the result of mutant genes. The disease phenylketonuria, for example, is caused by a mutation to a gene involved in breaking down a molecule called phenylalanine. The phenylalanine builds up in the bloodstream, causing brain damage. One solution is to eat a low-phenylalanine diet for your entire life. A much more appealing alternative would be to somehow fix the broken gene, restoring a person’s metabolism to normal.

In “The Forever Fix,” Ms. Lewis chronicles gene therapy’s climb toward the Peak of Inflated Expectations over the course of the 1990s. A geneticist and the author of a widely used textbook, she demonstrates a mastery of the history, even if her narrative sometimes meanders and becomes burdened by clichés. She explains how scientists learned how to identify the particular genes behind genetic disorders. They figured out how to load genes into viruses and then to use those viruses to insert the genes into human cells.


Stephen Voss. Alisha Bacoccini is tested on her ability to read letters, at UPenn Hospital, in Philadelphia, PA on Monday, June 23, 2008. Bacoccini is undergoing an experimental gene therapy trial to improve her sight.

By 1999, scientists had enjoyed some promising successes treating people—removing white blood cells from leukemia patients, for example, inserting working genes, and then returning the cells to their bodies. Gene therapy seemed as if it was on the verge of becoming standard medical practice. “Within the next decade, there will be an exponential increase in the use of gene therapy,” Helen M. Blau, the then-director of the gene-therapy technology program at Stanford University, told Business Week.

Within a few weeks of Ms. Blau’s promise, however, gene therapy started falling straight into the Trough. An 18-year-old man named Jesse Gelsinger who suffered from a metabolic disorder had enrolled in a gene-therapy trial. University of Pennsylvania scientists loaded a virus with a working version of an enzyme he needed and injected it into his body. The virus triggered an overwhelming reaction from his immune system and within four days Gelsinger was dead.

Gene therapy nearly came to a halt after his death. An investigation revealed errors and oversights in the design of Gelsinger’s trial. The breathless articles disappeared. Fortunately, research did not stop altogether. Scientists developed new ways of delivering genes without triggering fatal side effects. And they directed their efforts at one part of the body in particular: the eye. The eye is so delicate that inflammation could destroy it. As a result, it has evolved physical barriers that keep the body’s regular immune cells out, as well as a separate battalion of immune cells that are more cautious in their handling of infection.

It occurred to a number of gene-therapy researchers that they could try to treat genetic vision disorders with a very low risk of triggering horrendous side effects of the sort that had claimed Gelsinger’s life. If they injected genes into the eye, they would be unlikely to produce a devastating immune reaction, and any harmful effects would not be able to spread to the rest of the body.

Their hunch paid off. In 2009 scientists reported their first success with gene therapy for a congenital disorder. They treated a rare form of blindness known as Leber’s congenital amaurosis. Children who were once blind can now see.

As “The Forever Fix” shows, gene therapy is now starting its climb up the Slope of Enlightenment. Hundreds of clinical trials are under way to see if gene therapy can treat other diseases, both in and beyond the eye. It still costs a million dollars a patient, but that cost is likely to fall. It’s not yet clear how many other diseases gene therapy will help or how much it will help them, but it is clearly not a false hope.

Gene therapy produced so much excitement because it appealed to the popular idea that genes are software for our bodies. The metaphor only goes so far, though. DNA does not float in isolation. It is intricately wound around spool-like proteins called histones. It is studded with caps made of carbon and hydrogen atoms, known as methyl groups. This coiling and capping of DNA allows individual genes to be turned on and off during our lifetimes.

The study of this extra layer of control on our genes is known as epigenetics. In “The Epigenetics Revolution,” molecular biologist Nessa Carey offers an enlightening introduction to what scientists have learned in the past decade about those caps and coils. While she delves into a fair amount of biological detail, she writes clearly and compellingly. As Ms. Carey explains, we depend for our very existence as functioning humans on epigenetics. We begin life as blobs of undifferentiated cells, but epigenetic changes allow some cells to become neurons, others muscle cells and so on.

Epigenetics also plays an important role in many diseases. In cancer cells, genes that are normally only active in embryos can reawaken after decades of slumber. A number of brain disorders, such as autism and schizophrenia, appear to involve the faulty epigenetic programming of genes in neurons.

Scientists got their first inklings about epigenetics decades ago, but in the past few years the field has become hot. In 2008 the National Institutes of Health pledged $190 million to map the epigenetic “marks” on the human genome. New biotech start-ups are trying to carry epigenetic discoveries into the doctor’s office. The FDA has approved cancer drugs that alter the pattern of caps on tumor-cell DNA. Some studies on mice hint that it may be possible to treat depression by taking a pill that adjusts the coils of DNA in neurons.

People seem to be getting giddy about the power of epigenetics in the same way they got giddy about gene therapy in the 1990s. No longer is our destiny written in our DNA: It can be completely overwritten with epigenetics. The excitement is moving far ahead of what the science warrants—or can ever deliver. Last June, an article on the Huffington Post eagerly seized on epigenetics, woefully mangling two biological facts: one, that experiences can alter the epigenetic patterns in the brain; and two, that sometimes epigenetic patterns can be passed down from parents to offspring. The article made a ridiculous leap to claim that we can use meditation to change our own brains and the brains of our children—and thereby alter the course of evolution: “We can jump-start evolution and leverage it on our own terms. We can literally rewire our brains toward greater compassion and cooperation.” You couldn’t ask for a better sign that epigenetics is climbing the Peak of Inflated Expectations at top speed.

The title “The Epigenetics Revolution” unfortunately adds to this unmoored excitement, but in Ms. Carey’s defense, the book itself is careful and measured. Still, epigenetics will probably be plunging soon into the Trough of Disillusionment. It will take years to see whether we can really improve our health with epigenetics or whether this hope will prove to be a false one.

The Forever Fix

By Ricki Lewis. St. Martin’s, 323 pages, $25.99

The Epigenetics Revolution

By Nessa Carey. Columbia, 339 pages, $26.95

—Mr. Zimmer’s books include “A Planet of Viruses” and “Evolution: Making Sense of Life,” co-authored with Doug Emlen, to be published in July.

Chimpanzees Have Police Officers, Too (Science Daily)

Mostly high-ranking males or females intervene in a conflict. (Credit: Claudia Rudolf von Rohr)

ScienceDaily (Mar. 7, 2012) — Chimpanzees are interested in social cohesion and have various strategies to guarantee the stability of their group. Anthropologists now reveal that chimpanzees mediate conflicts between other group members, not for their own direct benefit, but rather to preserve the peace within the group. Their impartial intervention in a conflict — so-called “policing” — can be regarded as an early evolutionary form of moral behavior.

Conflicts are inevitable wherever there is cohabitation. This is no different with our closest relatives, the chimpanzees. Sound conflict management is crucial for group cohesion. Individuals in chimpanzee communities also ensure that there is peace and order in their group. This form of conflict management is called “policing” — the impartial intervention of a third party in a conflict. Until now, this morally motivated behavior in chimpanzees was only ever documented anecdotally.

However, primatologists from the University of Zurich can now confirm that chimpanzees intervene impartially in a conflict to guarantee the stability of their group. They therefore exhibit prosocial behavior based on an interest in community concern.

The more parties to a conflict there are, the more policing there is

The willingness of the arbitrators to intervene impartially is greatest if several quarrelers are involved in a dispute as such conflicts particularly jeopardize group peace. The researchers observed and compared the behavior of four different captive chimpanzee groups. At Walter Zoo in Gossau, they encountered special circumstances: “We were lucky enough to be able to observe a group of chimpanzees into which new females had recently been introduced and in which the ranking of the males was also being redefined. The stability of the group began to waver. This also occurs in the wild,” explains Claudia Rudolf von Rohr, the lead author of the study.

High-ranking arbitrators

Not every chimpanzee makes a suitable arbitrator. It is primarily high-ranking males or females or animals that are highly respected in the group that intervene in a conflict. Otherwise, the arbitrators are unable to end the conflict successfully. As with humans, there are also authorities among chimpanzees. “The interest in community concern that is highly developed in us humans and forms the basis for our moral behavior is deeply rooted. It can also be observed in our closest relatives,” concludes Rudolf von Rohr.

The QWERTY Effect: The Keyboards Are Changing Our Language! (The Atlantic)

MAR 8 2012, 1:30 PM ET

Could the layout of letters on a keyboard be shaping how we feel about certain words?


It’s long been thought that how a word sounds — its very phonemes — can be related in some ways to what that word means. But language is no longer solely oral. Much of our word production happens not in our throats and mouths but on our keyboards. Could that process shape a word’s meaning as well?

That’s the contention of an intriguing new paper by linguists Kyle Jasmin and Daniel Casasanto. They argue that because of the QWERTY keyboard’s asymmetrical shape (more letters on the left than the right), words dominated by right-side letters “acquire more positive valences” — that is to say, they become more likable. Their argument is that because it’s easier for your fingers to find the correct letters for typing right-side-dominated words, the words subtly gain favor in your mind.
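A rough sketch of how such a score might be computed (the left/right letter split below follows the standard QWERTY layout; the exact metric in Jasmin and Casasanto’s paper may differ) is a “right-side advantage”: the number of right-hand letters in a word minus the number of left-hand letters.

# Sketch of a QWERTY "right-side advantage" score, assuming the standard layout split;
# the scoring details in the actual study may differ.
LEFT = set("qwertasdfgzxcvb")
RIGHT = set("yuiophjklnm")

def right_side_advantage(word):
    # Count right-hand letters minus left-hand letters.
    word = word.lower()
    return sum(ch in RIGHT for ch in word) - sum(ch in LEFT for ch in word)

print(right_side_advantage("lion"))  # 4: every letter is typed with the right hand
print(right_side_advantage("sad"))   # -3: every letter is typed with the left hand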

As Dave Mosher of Wired explains:

In their first experiment, the researchers analyzed 1,000-word indexes from English, Spanish and Dutch, comparing their perceived positivity with their location on the QWERTY keyboard. The effect was slight but significant: Right-sided words scored more positively than left-sided words.

With newer words, the correlation was stronger. When the researchers analyzed words coined after the QWERTY keyboard’s invention, they found that right-sided words had more positive associations than left-sided words.

In another experiment, 800 typists recruited through Amazon.com’s Mechanical Turk service rated whether made-up words felt positive or negative. A QWERTY effect also emerged in those words.

Jasmin cautioned that words’ literal meanings almost certainly outweigh their QWERTY-inflected associations, and said the study only shows a correlation rather than clear cause-and-effect. Also, while a typist’s left- or right-handedness didn’t seem to matter, Jasmin said there’s not yet enough data to be certain.

Jasmin and Casasanto leave open the question of whether the effect may also be the result of subtle cultural preferences for things on the right-hand side. Additionally, they say, “There is about a 90 percent chance that the QWERTY inventor was right-handed,” so it’s possible that biases he carried may have subconsciously placed more likable sounds on the right. However, they say, “such implicit associations would be based on the peculiar roles these letters play in English words or sounds. The finding of similar QWERTY effects across languages suggests that, even if English-based [biases] influenced QWERTY’s design, QWERTY has now ‘infected’ typers of other languages with similar associations.”

When It Comes to Accepting Evolution, Gut Feelings Trump Facts (Science Daily)

ScienceDaily (Jan. 19, 2012) — For students to accept the theory of evolution, an intuitive “gut feeling” may be just as important as understanding the facts, according to a new study.

In an analysis of the beliefs of biology teachers, researchers found that a quick intuitive notion of how right an idea feels was a powerful driver of whether or not students accepted evolution — often trumping factors such as knowledge level or religion.

“The whole idea behind acceptance of evolution has been the assumption that if people understood it — if they really knew it — they would see the logic and accept it,” said David Haury, co-author of the new study and associate professor of education at Ohio State University.

“But among all the scientific studies on the matter, the most consistent finding was inconsistency. One study would find a strong relationship between knowledge level and acceptance, and others would find no relationship. Some would find a strong relationship between religious identity and acceptance, and others would find less of a relationship.”

“So our notion was, there is clearly some factor that we’re not looking at,” he continued. “We’re assuming that people accept something or don’t accept it on a completely rational basis. Or, they’re part of a belief community that as a group accept or don’t accept. But the findings just made those simple answers untenable.”

Haury and his colleagues tapped into cognitive science research showing that our brains don’t just process ideas logically — we also rely on how true something feels when judging an idea.

“Research in neuroscience has shown that when there’s a conflict between facts and feeling in the brain, feeling wins,” he says.

The researchers framed a study to determine whether intuitive reasoning could help explain why some people are more accepting of evolution than others. The study, published in the Journal of Research in Science Teaching, included 124 pre-service biology teachers at different stages in a standard teacher preparation program at two Korean universities.

First, the students answered a standard set of questions designed to measure their overall acceptance of evolution. These questions probed whether students generally believed in the main concepts and scientific findings that underpin the theory.

Then the students took a test on the specific details of evolutionary science. To show their level of factual knowledge, students answered multiple-choice and free-response questions about processes such as natural selection. To gauge their “gut” feelings about these ideas, students wrote down how certain they felt that their factually correct answers were actually true.

The researchers then analyzed statistical correlations to see whether knowledge level or feeling of certainty best predicted students’ overall acceptance of evolution. They also considered factors such as academic year and religion as potential predictors.

“What we found is that intuitive cognition has a significant impact on what people end up accepting, no matter how much they know,” said Haury. The results show that even students with greater knowledge of evolutionary facts weren’t likelier to accept the theory, unless they also had a strong “gut” feeling about those facts.

When trying to explain the patterns of whether people believe in evolution or not, “the results show that if we consider both feeling and knowledge level, we can explain much more than with knowledge level alone,” said Minsu Ha, lead author on the paper and a Ph.D. candidate in the School of Teaching and Learning.

In particular, the research shows that it may not be accurate to portray religion and science education as competing factors in determining beliefs about evolution. For the subjects of this study, belonging to a religion had almost no additional impact on beliefs about evolution, beyond subjects’ feelings of certainty.

These results also provide a useful way of looking at the perceived conflict between religion and science when it comes to teaching evolution, according to Haury. “Intuitive cognition not only opens a new door to approach the issue,” he said, “it also gives us a way of addressing that issue without directly questioning religious views.”

When choosing a setting for their study, the team found that Korean teacher preparation programs were ideal. “In Korea, people all take the same classes over the same time period and are all about the same age, so it takes out a lot of extraneous factors,” said Haury. “We wouldn’t be able to find a sample group like this in the United States.”

Unlike in the U.S., about half of Koreans do not identify themselves as belonging to any particular religion. But according to Ha, who is from Korea, certain religious groups consider the topic of evolution just as controversial as in the U.S.

To ensure that their results were relevant to U.S. settings, the researchers compared how the Korean students did on the knowledge tests with previous studies of U.S. students. “We found that both groups were comparable in terms of overall performance,” said Haury.

For teaching evolution, the researchers suggest using exercises that allow students to become aware of their brains’ dual processing. Knowing that sometimes what their “gut” says is in conflict with what their “head” knows may help students judge ideas on their merits.

“Educationally, we think that’s a place to start,” said Haury. “It’s a concrete way to show them, look — you can be fooled and make a bad decision, because you just can’t deny your gut.”

Ha and Haury collaborated on this study with Ross Nehm, associate professor of education at the Ohio State University. The research was funded by the National Science Foundation.

The right’s stupidity spreads, enabled by a too-polite left (Guardian)

Conservatism may be the refuge of the dim. But the room for rightwing ideas is made by those too timid to properly object

by George Monbiot, The Guardian

Self-deprecating, too liberal for their own good, today’s progressives stand back and watch, hands over their mouths, as the social vivisectionists of the right slice up a living society to see if its component parts can survive in isolation. Tied up in knots of reticence and self-doubt, they will not shout stop. Doing so requires an act of interruption, of presumption, for which they no longer possess a vocabulary.

Perhaps it is in the same spirit of liberal constipation that, with the exception of Charlie Brooker, we have been too polite to mention the Canadian study published last month in the journal Psychological Science, which revealed that people with conservative beliefs are likely to be of low intelligence. Paradoxically it was the Daily Mail that brought it to the attention of British readers last week. It feels crude, illiberal to point out that the other side is, on average, more stupid than our own. But this, the study suggests, is not unfounded generalisation but empirical fact.

It is by no means the first such paper. There is plenty of research showing that low general intelligence in childhood predicts greater prejudice towards people of different ethnicity or sexuality in adulthood. Open-mindedness, flexibility, trust in other people: all these require certain cognitive abilities. Understanding and accepting others – particularly “different” others – requires an enhanced capacity for abstract thinking.

But, drawing on a sample size of several thousand, correcting for both education and socioeconomic status, the new study looks embarrassingly robust. Importantly, it shows that prejudice tends not to arise directly from low intelligence but from the conservative ideologies to which people of low intelligence are drawn. Conservative ideology is the “critical pathway” from low intelligence to racism. Those with low cognitive abilities are attracted to “rightwing ideologies that promote coherence and order” and “emphasise the maintenance of the status quo”. Even for someone not yet renowned for liberal reticence, this feels hard to write.

This is not to suggest that all conservatives are stupid. There are some very clever people in government, advising politicians, running thinktanks and writing for newspapers, who have acquired power and influence by promoting rightwing ideologies.

But what we now see among their parties – however intelligent their guiding spirits may be – is the abandonment of any pretence of high-minded conservatism. On both sides of the Atlantic, conservative strategists have discovered that there is no pool so shallow that several million people won’t drown in it. Whether they are promoting the idea that Barack Obama was not born in the US, that man-made climate change is an eco-fascist-communist-anarchist conspiracy, or that the deficit results from the greed of the poor, they now appeal to the basest, stupidest impulses, and find that it does them no harm in the polls.

Don’t take my word for it. Listen to what two former Republican ideologues, David Frum and Mike Lofgren, have been saying. Frum warns that “conservatives have built a whole alternative knowledge system, with its own facts, its own history, its own laws of economics”. The result is a “shift to ever more extreme, ever more fantasy-based ideology” which has “ominous real-world consequences for American society”.

Lofgren complains that “the crackpot outliers of two decades ago have become the vital centre today”. The Republican party, with its “prevailing anti-intellectualism and hostility to science” is appealing to what he calls the “low-information voter”, or the “misinformation voter”. While most office holders probably don’t believe the “reactionary and paranoid claptrap” they peddle, “they cynically feed the worst instincts of their fearful and angry low-information political base”.

The madness hasn’t gone as far in the UK, but the effects of the Conservative appeal to stupidity are making themselves felt. This week the Guardian reported that recipients of disability benefits, scapegoated by the government as scroungers, blamed for the deficit, now find themselves subject to a new level of hostility and threats from other people.

These are the perfect conditions for a billionaires’ feeding frenzy. Any party elected by misinformed, suggestible voters becomes a vehicle for undisclosed interests. A tax break for the 1% is dressed up as freedom for the 99%. The regulation that prevents big banks and corporations exploiting us becomes an assault on the working man and woman. Those of us who discuss man-made climate change are cast as elitists by people who happily embrace the claims of Lord Monckton, Lord Lawson or thinktanks funded by ExxonMobil or the Koch brothers: now the authentic voices of the working class.

But when I survey this wreckage I wonder who the real idiots are. Confronted with mass discontent, the once-progressive major parties, as Thomas Frank laments in his latest book Pity the Billionaire, triangulate and accommodate, hesitate and prevaricate, muzzled by what he calls “terminal niceness”. They fail to produce a coherent analysis of what has gone wrong and why, or to make an uncluttered case for social justice, redistribution and regulation. The conceptual stupidities of conservatism are matched by the strategic stupidities of liberalism.

Yes, conservatism thrives on low intelligence and poor information. But the liberals in politics on both sides of the Atlantic continue to back off, yielding to the supremacy of the stupid. It’s turkeys all the way down.

Twitter: @georgemonbiot

Strange History: Mass Hysteria Through the Years (Discovery)

Analysis by Benjamin Radford
Mon Feb 6, 2012 05:28 PM ET

The news media has been abuzz recently about a seemingly mysterious illness that has nearly two dozen students at LeRoy High School in western New York twitching and convulsing uncontrollably.

Most doctors and experts believe that the students are suffering from mass sociogenic illness, also known as mass hysteria. In these cases, psychological symptoms manifest as physical conditions.

Sociologist Robert Bartholomew, author of several books on mass hysteria including The Martians Have Landed: A History of Media-Driven Panics and Hoaxes, explained to Discovery News that “there are two main types of contagious conversion disorder. The most common in Western countries is triggered by extreme, sudden stress; usually a bad smell. Symptoms typically include dizziness, headaches, fainting and over-breathing, and resolve within about a day.”

In contrast, Bartholomew said, “The LeRoy students are experiencing the rarer, more serious type affecting muscle motor function and commonly involves twitching, shaking, facial tics, difficulty communicating and trance states. Symptoms appear slowly over weeks or months under exposure to longstanding stress, and typically take weeks or months to subside.”

Mass hysteria cases are more common than people realize and have been reported all over the world for centuries. Here’s a look at some famous — and bizarre — cases of mass hysteria in history.

The Mad Gasser of Mattoon

Many cases of mass hysteria are spawned by reports of strange or mysterious odors. One of the most famous cases occurred in 1944 when residents of Mattoon, Ill., reported that a “mad gasser” was loose in the small town.

It began with one woman named Aline Kearney, who smelled something odd outside her window. Soon she said her throat and lips were burning, and she began to panic when she felt her legs becoming paralyzed. She called police, and her symptoms soon subsided. Her husband, upon returning home later, reported glimpsing a shadowy figure lurking nearby. The “gas attack” (as it was assumed to be) on Mrs. Kearney was not only the gossip of the neighborhood but also reported in the local newspaper, and soon others in the small town reported odd odors and experiencing short-lived symptoms such as breathlessness, nausea, headache, dizziness and weakness. No “mad gasser” was ever found, and no trace of the mysterious gas was detected.

The French Meowing Nuns

Before 1900 many reports of mass hysteria occurred within the context of religious institutions. European convents in particular were often the settings for outbreaks. In one case the symptoms manifested in strange collective behavior; a source from 1844 reported that “a nun, in a very large convent in France, began to meow like a cat; shortly afterwards other nuns also meowed. At last all the nuns meowed together every day at a certain time for several hours together.”

The meowing went on until neighbors complained and soldiers were called, threatening to whip the nuns until they stopped meowing. During this era, belief in possession (such as by animals or demons, for example) was common, and cats in particular were suspected of being in league with Satan. These outbreaks of animal-like noises and behaviors usually lasted anywhere from a few days to a few months, though some came and went over the course of years.

The Pokémon Panic

A strange and seemingly inexplicable outbreak of bizarre behavior struck Japan in mid-December 1997, when thousands of Japanese schoolchildren experienced frightening seizures after watching an episode of the popular cartoon “Pokémon.” Intense flashes of light during the show triggered relatively harmless and brief seizures, nausea, and headaches. Doctors diagnosed some of the children with a rare, pre-existing condition called photosensitive epilepsy, in which bright flashing lights used in the cartoon can trigger the symptoms.

But experts were unable to explain what had happened to the remaining thousands of other children who reported symptoms; the vast majority of them did not have photosensitive epilepsy. Finally, the mystery was solved in 2001, when it was discovered that the symptoms found in most children were caused by mass hysteria, triggered by the initial wave of epileptic seizures.

The McMinnville School Poison Gas Episode

Nearly 200 students and teachers were hospitalized during a mysterious outbreak of illness at Warren County High School in McMinnville, Tenn., in November 1998. A local newspaper, the Southern Standard, ran the headline “Students Poisoned: Mysterious Fumes Sicken Almost 100 at High School.” It began when a teacher reported smelling a gasoline-like odor in her classroom that made her sick. A few of her students then also became sick, and the school was closed for testing.

No contamination was found, nor any medical or environmental cause for the symptoms, which included headache, dizziness, nausea and drowsiness. Following a clean bill of health, the school reopened, and soon a second cluster of students fell ill and closed down the school a second time. All recovered from the attack.

As these cases show, the LeRoy high school incident is only one of many strange episodes of mass sociogenic illness — and there will be more.

Into the mind of a Neanderthal (New Scientist)

18 January 2012
Magazine issue 2847

Neanderthals shared about 99.84 per cent of their DNA with us (Image: Action Press/Rex Features)

What would have made them laugh? Or cry? Did they love home more than we do? Meet the real Neanderthals

A NEANDERTHAL walks into a bar and says… well, not a lot, probably. Certainly he or she could never have delivered a full-blown joke of the type modern humans would recognise because a joke hinges on surprise juxtapositions of unexpected or impossible events. Cognitively, it requires quite an advanced theory of mind to put oneself in the position of one or more of the actors in that joke – and enough working memory (the ability to actively hold information in your mind and use it in various ways).

So does that mean our Neanderthal had no sense of humour? No: humans also recognise the physical humour used to mitigate painful episodes – tripping, hitting our heads and so on – which does not depend on language or symbols. So while we could have sat down with Neanderthals and enjoyed the slapstick of The Three Stooges or Lee Evans, the verbal complexities of Twelfth Night would have been lost on them.

Humour is just one aspect of Neanderthal life we have been plotting for some years in our mission to make sense of their cognitive life. So what was it like to be a Neanderthal? Did they feel the same way we do? Did they fall in love? Have a bad day? Palaeoanthropologists now know a great deal about these ice-age Europeans who flourished between 200,000 and 30,000 years ago. We know, for example, that Neanderthals shared about 99.84 per cent of their DNA with us, and that we and they evolved separately for several hundred thousand years. We also know Neanderthal brains were a bit larger than ours and were shaped a bit differently. And we know where they lived, what they ate and how they got it.

Skeletal evidence shows that Neanderthal men, women and children led very strenuous lives, preoccupied with hunting large mammals. They often made tactical use of terrain features to gain as much advantage as possible, but administered the coup de grace with thrusting spears. Based on their choice of stone for tools, we know they almost never travelled outside small home territories that were rarely over 1000 square kilometres.

The Neanderthal style of hunting often resulted in injuries, and the victims were often nursed back to health by others. But few would have survived serious lower body injuries, since individuals who could not walk might well have been abandoned. It looks as if Neanderthals had well-developed way-finding and tactical abilities, and empathy for group members, but also that they made pragmatic decisions when necessary.

Looking closely at the choices Neanderthals made when they manufactured and used tools shows that they organised their technical activities much as artisans, such as blacksmiths, organise their production. Like blacksmiths, they relied on “expert” cognition, a form of observational learning and practice acquired through apprenticeship that relies heavily on long-term procedural memory.

The only obvious difference between Neanderthal technical thinking and ours lay in innovation. Although Neanderthals invented the practice of hafting stone points onto spears, this was one of very few innovations over several hundred thousand years. Active invention relies on thinking by analogy and a good amount of working memory, implying they may have had a reduced capacity in these respects. Neanderthals may have relied more heavily than we do on well-learned procedures of expert cognition.

As for the neighbourhood, the size and distribution of archaeological sites shows that Neanderthals spent their lives mostly in small groups of five to 10 individuals. Several such groups would come together briefly after especially successful hunts, suggesting that Neanderthals also belonged to larger communities but that they seldom made contact with people outside those groupings.

Many Neanderthal sites have rare pieces of high-quality stone from more distant sources (more than 100 kilometres), but not enough to indicate trade or even regular contact with other communities. A more likely scenario is that an adolescent boy or girl carried the material with them when they attached themselves to a new community. The small size of Neanderthal territories would have made some form of “marrying out” essential.

We can also assume that Neanderthals had some form of marriage because pair-bonding between men and women, and joint provisioning for their offspring, had been a feature of hominin social life for over a million years. They also protected corpses by covering them with rocks or placing them in shallow pits, suggesting the kinds of intimate, embodied social and cognitive interaction typical of our own family life.

But the Neanderthals’ short lifespan – few lived past 35 – meant that other features of our more recent social past were absent: elders, for example, were rare. And they almost certainly lacked the cognitive abilities for dealing with strangers that evolved in modern humans, who lived in larger groups numbering in the scores and belonged to larger communities in the hundreds or more. They also established and maintained contacts with distant groups.

One cognitive ability that evolved in modern humans as a result was the “cheater detection” ability described by evolutionary psychologist Leda Cosmides, at the University of California, Santa Barbara. Another was an ability to judge the value of one commodity in terms of another, what anthropologist Alan Page Fiske at the University of California, Los Angeles, calls the “market pricing” ability. Both are key reasoning skills that evolved to allow interaction with acquaintances and strangers, neither of which was a regular feature of Neanderthal home life.

There are good circumstantial reasons for thinking that Neanderthals had language, with words and some kind of syntax; some of their technology and hunting tactics would have been difficult to learn and execute without it. Moreover, Neanderthal brains had a well-developed Broca’s area, and their DNA includes the FOXP2 gene carried by modern humans, which is involved in speech production. Unfortunately, none of this reveals anything specific about Neanderthal language. It could have been very different from ours or only slightly different; we just don’t know.

Having any sort of language could also have exposed Neanderthals to problems modern humans face, such as schizophrenia, says one theory which puts the disease down to coordination problems between the brain’s left and right hemispheres.

But while Neanderthals would have had a variety of personality types, just as we do, their way of life would have selected for an average profile quite different from ours. Jo or Joe Neanderthal would have been pragmatic, capable of leaving group members behind if necessary, and stoical, to deal with frequent injuries and lengthy convalescence. He or she had to be risk tolerant for hunting large beasts close up; they needed sympathy and empathy in their care of the injured and dead; and yet were neophobic, dogmatic and xenophobic.

So we could have recognised and interacted with Neanderthals, but we would have noticed these significant cognitive differences. They would have been better at well-learned, expert cognition than modern humans, but not as good at the development of novel solutions. They were adept at intimate, small-scale social cognition, but lacked the cognitive tools to interact with acquaintances and strangers, including the extensive use of symbols.

In the final count, when Neanderthals and modern humans found themselves competing across the European landscape 30,000 years ago, those cognitive differences may well have been decisive in seeing off the Neanderthals.

Profile
Thomas Wynn is a professor of anthropology and Frederick L. Coolidge is a professor of psychology at the University of Colorado, Colorado Springs. For the past decade they have worked on the evolution of cognition. Their new book is How to Think Like a Neandertal (Oxford University Press, 2012)

Great apes make sophisticated decisions (Max-Planck-Gesellschaft)

By Daniel Haun
Max-Planck-Gesellschaft

Chimpanzees, orangutans, gorillas and bonobos make more sophisticated decisions than was previously thought. Great apes weigh their chances of success based on what they know and on the likelihood of succeeding when guessing, according to a study by MPI researcher Daniel Haun, published on December 21 in the online journal PLoS ONE. The findings may provide insight into human decision-making as well.

The authors of the study, led by Daniel Haun of the Max Planck Institutes for Psycholinguistics (Nijmegen) and Evolutionary Anthropology (Leipzig), investigated the behaviour of all four non-human great ape species. The apes were presented with two banana pieces: a smaller one, which was always reliably in the same place, and a larger one, which was hidden under one of multiple cups, and therefore the riskier choice.

The researchers found that the apes’ choices were regulated by their uncertainty and by the probability of success for the risky choice, suggesting sophisticated decision-making. Apes chose the small piece more often when they were uncertain where the large piece was hidden. The lower their chances of guessing correctly, the more often they chose the small piece.
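One way to see why this counts as weighing the odds (an illustrative calculation of ours, not one reported in the study): with the large piece hidden under one of n cups, a blind guess succeeds with probability 1/n, so the gamble is only worth taking, on average, when the large piece is worth more than n times the small one.

# Illustrative expected-value comparison (not taken from the study): the risky
# choice pays off on average only when v_large / n_cups exceeds v_small.
def risky_is_better(v_large, v_small, n_cups):
    return v_large / n_cups > v_small

print(risky_is_better(v_large=4, v_small=1, n_cups=2))  # True: 4/2 = 2 > 1
print(risky_is_better(v_large=4, v_small=1, n_cups=6))  # False: 4/6 < 1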

Risky choices

The researchers also found that the apes went for the larger piece – and risked getting nothing at all – no less than 50% of the time. This risky decision-making increased to nearly 100% when the size difference between the two banana pieces was largest. While all four species demonstrated sophisticated decision-making strategies, chimpanzees and orangutans were overall more likely to make risky choices than gorillas and bonobos. The precise reason for this discrepancy remains unknown.

Haun concludes: “Our study adds to the growing evidence that the mental life of the other great apes is much more sophisticated than is often assumed.”

The role of trust in social decision-making (FAPESP)

December 8, 2011

By Mônica Pileggi

A study conducted at Mackenzie and published in The Journal of Neuroscience indicates that the brain does not register unfairness from friends in situations of economic decision-making (Wikimedia)

Agência FAPESP – In situations of economic decision-making, friendship is one of the variables that modulate our brain, leaving people unable to feel they have been treated unfairly. That is one of the conclusions of a study carried out at the Social and Cognitive Neuroscience Laboratory of Universidade Presbiteriana Mackenzie (UPM) and published in The Journal of Neuroscience.

The work, led by Professor Paulo Sérgio Boggio, research coordinator at UPM’s Center for Biological and Health Sciences, was carried out during the master’s project “Estudo preliminar sobre potenciais cognitivos em tarefa de tomada de decisão social” by the psychologist Camila Campanhã, who is now a doctoral student at UPM; both were supported by FAPESP fellowships.

According to Campanhã, the study set out to examine the role of trust in social decision-making and its neurobiological bases. To do so, she drew on game theory, the branch of applied mathematics that studies strategic situations in which players choose different actions in an attempt to improve their returns.

Initially developed as a tool for understanding economic behavior, and later used even to define nuclear strategy, game theory is now applied in many academic fields. It became a prominent branch of mathematics especially after the publication, in 1944, of The Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern.

Campanhã, whose study was carried out in collaboration with the researchers Ludovico Minati, of the Istituto Neurologico “Carlo Besta” (Italy), and Felipe Fregni, of Harvard University (United States), says the experiment used the Ultimatum Game, a game employed in neuroeconomics and by researchers of social behavior.

With participants aged 18 to 25, the game was divided into two blocks. In the first, the computer delivered fair and unfair monetary offers from friends (who were in different rooms). In the second, the offers were made by members of the laboratory who were strangers to the participants.

The offers were classified as fair (50:50), moderately fair (70:30) and very unfair (80:20 and 90:10). “Participants received the same number of fair and unfair offers, from the friend and from the stranger alike, delivered by the computer. We recorded all of the participants’ electroencephalographic activity during the experiment,” Campanhã told Agência FAPESP.

In this type of experiment, if the person accepts the offer, both players receive the agreed amounts. If the person refuses, neither receives anything. “Behaviorally, we observed that people rejected unfair offers from the stranger far more often than those made by the friend, in which the friend would come out ahead. These participants also rated their friends as fairer than the strangers,” she noted.
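For readers unfamiliar with the game, here is a minimal sketch of the payoff rule described above (the splits follow the article; the variable names are ours):

# Minimal sketch of the Ultimatum Game payoffs as described in the article.
# Splits are proposer:responder shares of the total; names are illustrative.
SPLITS = {
    "fair": (50, 50),
    "moderately_fair": (70, 30),
    "very_unfair_80_20": (80, 20),
    "very_unfair_90_10": (90, 10),
}

def payoff(split, accepted):
    # If the responder accepts, each side gets its share; if not, both get nothing.
    proposer_share, responder_share = split
    return (proposer_share, responder_share) if accepted else (0, 0)

print(payoff(SPLITS["very_unfair_90_10"], accepted=False))  # (0, 0): rejection costs both players
print(payoff(SPLITS["fair"], accepted=True))                # (50, 50)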

The study found a positive inversion in neuroelectrical activity for the friends' offers. "We expected the signal to be negative when unfair offers came from the friend. However, the participants did not perceive that unfairness," said Campanhã.

According to her, the positive polarity inversion is related to the satisfaction of receiving something good and fair, a reward above expectations; in that case, dopamine is released. With the negative signal there is a violation of expectation, the substance is inhibited, and anger results.

"When we ran the analysis to identify the brain area activated at that moment, we found that the electrical signal appeared in the anterior medial prefrontal cortex. This is an area related to the ability to imagine and to try to understand what the other person is thinking and feeling," she said.

"It does not mean that people do not process unfairness, but that processing is different when you trust someone. It is as if there were no need to work out what is going on with the other person or what they are feeling," she said.

The article Responding to Unfair Offers Made by a Friend: Neuroelectrical Activity Changes in the Anterior Medial Prefrontal Cortex (doi:10.1523/JNEUROSCI.1253-11.2011), by Camila Campanhã and others, is available to subscribers of The Journal of Neuroscience at www.jneurosci.org/content/31/43/15569.full.pdf+html?sid=94d0a3e8-79b9-47a8-89d8-24dcf41750e7.

Human brains unlikely to evolve into a ‘supermind’ as price to pay would be too high (University of Warwick)

University of Warwick

Human minds have hit an evolutionary “sweet spot” and – unlike computers – cannot continually get smarter without trade-offs elsewhere, according to research by the University of Warwick.

The researchers asked why, given the adaptive evolutionary process, we are not more intelligent than we are. Their conclusions show that you can have too much of a good thing when it comes to mental performance.

The evidence suggests that for every gain in cognitive functions, for example better memory, increased attention or improved intelligence, there is a price to pay elsewhere – meaning a highly-evolved “supermind” is the stuff of science fiction.

University of Warwick psychology researcher Thomas Hills and Ralph Hertwig of the University of Basel looked at a range of studies, including research into the use of drugs like Ritalin that help with attention, studies of people with autism, and a study of the Ashkenazi Jewish population.

For instance, individuals with enhanced cognitive abilities – such as savants, people with photographic memories, and even members of genetically segregated populations with above-average IQ – often suffer from related disorders, such as autism, debilitating synaesthesia and neural disorders linked with enhanced brain growth.

Similarly, drugs like Ritalin only help people with lower attention spans, whereas people who don’t have trouble focusing can actually perform worse when they take attention-enhancing drugs.

Dr Hills said: “These kinds of studies suggest there is an upper limit to how much people can or should improve their mental functions like attention, memory or intelligence.

“Take a complex task like driving, where the mind needs to be dynamically focused, attending to the right things such as the road ahead and other road users – which are changing all the time.

“If you enhance your ability to focus too much, and end up over-focusing on specific details, like the driver trying to hide in your blind spot, then you may fail to see another driver suddenly veering into your lane from the other direction.

“Or if you drink coffee to make yourself more alert, the trade-off is that it is likely to increase your anxiety levels and impair your fine motor control. There are always trade-offs.

“In other words, there is a ‘sweet spot’ in terms of enhancing our mental abilities – if you go beyond that spot – just like in the fairy-tales – you have to pay the price.”

The research, entitled ‘Why Aren’t We Smarter Already: Evolutionary Trade-Offs and Cognitive Enhancements,’ is published in Current Directions in Psychological Science, a journal of the Association for Psychological Science.

The Mental Time Travel Of Animals (NPR)

11:39 am

November 3, 2011

by BARBARA J KING

Arif Ali/AFP/Getty Images. Don’t underestimate the crow.

Without a trace of agitation, the male chimpanzee piles up stones in small caches within his enclosure. He does this in the morning, before zoo visitors arrive. Hours later, in an aroused state, the ape hurls the stones at people gathering to watch him.

A detailed report by Mathias Osvath concluded that the ape had planned ahead strategically for the future. It is exactly this feat of mental time travel that psychologist Michael C. Corballis, in his book The Recursive Mind: The Origins of Human Language, Thought, and Civilization, claims is beyond the reach of nonhuman animals. Last week, my review of Corballis’s book appeared in the Times Literary Supplement.

Corballis suggests that mental time travel is one of two human ways of thinking that propelled our species into a unique cognitive status. (The other, theory of mind, I won’t deal with here.)

During mental time travel, we insert into our present consciousness an experience that we’ve had in the past or that we imagine for ourselves in the future. Corballis calls this ability mental recursion, and he’s right that we humans do it effortlessly. When we daydream at work about last weekend’s happy times with family and friends, or anticipate tonight’s quiet evening with a book, we engage in mental time travel.

Our highly elaborated ability to insert the past or future recursively into our thinking may play a role in the evolution of human civilization, as Corballis claims. But Corballis’s argument is weakened because he dismisses other animals’ mental capacities far too readily.

It’s not only one chimpanzee in a Swedish zoo who makes me think so.

When our pets grieve, as I wrote about in this space recently, they hold in their mind some memory of the past that causes them to miss a companion.

New research on the pattern of food storage by Eurasian jays indicates that these birds think ahead about what specific foods they will want in the future.

When apes (chimpanzees) and corvids (crows and ravens) make tools to obtain food, they too think ahead to a goal, even as they fashion a tool to solve the problem before them.

In the NATURE documentary film A Murder of Crows, a New Caledonian crow solves a three-part tool-using problem totally new to him (or to any other crow). As one researcher put it, the bird thinks “three chess moves into the future” as he finds one tool that allows him to get another tool that he uses finally to procure food.

Have a look at this crow’s stunning problem-solving here. The experimental footage begins at 16:30, but starting at 13:00 offers good context. And the entire film is a delight.

People Rationalize Situations They’re Stuck With, but Rebel When They Think There’s an Out (Science Daily)

ScienceDaily (Nov. 1, 2011) — People who feel like they’re stuck with a rule or restriction are more likely to be content with it than people who think that the rule isn’t definite. The authors of a new study, which will be published in an upcoming issue of Psychological Science, a journal of the Association for Psychological Science, say this conclusion may help explain everything from unrequited love to the uprisings of the Arab Spring.

Psychological studies have found two contradictory results about how people respond to rules. Some research has found that, when there are new restrictions, you rationalize them; your brain comes up with a way to believe the restriction is a good idea. But other research has found that people react negatively against new restrictions, wanting the restricted thing more than ever.

Kristin Laurin of the University of Waterloo thought the difference might be absoluteness — how much the restriction is set in stone. “If it’s a restriction that I can’t really do anything about, then there’s really no point in hitting my head against the wall and trying to fight against it,” she says. “I’m better off if I just give up. But if there’s a chance I can beat it, then it makes sense for my brain to make me want the restricted thing even more, to motivate me to fight.” Laurin wrote the new paper with Aaron Kay and Gavan Fitzsimons of Duke University.

In an experiment in the new study, participants read that lowering speed limits in cities would make people safer. Some read that government leaders had decided to reduce speed limits. Of those people, some were told that this legislation would definitely come into effect, and others read that it would probably happen, but that there was still a small chance government officials could vote it down.

People who thought the speed limit was definitely being lowered supported the change more than control subjects, but people who thought there was still a chance it wouldn’t happen supported it less than these control subjects. Laurin says this confirms what she suspected about absoluteness; if a restriction is definite, people find a way to live with it.

This could help explain how uprisings spread across the Arab world earlier this year. When people were living under dictatorships with power that appeared to be absolute, Laurin says, they may have been comfortable with it. But once Tunisia’s president fled, citizens of neighboring countries realized that their governments weren’t as absolute as they seemed — and they could have dropped whatever rationalizations they were using to make it possible to live under an authoritarian regime. Even more, the now non-absolute restriction their governments represented could have exacerbated their reaction, fueling their anger and motivating them to take action.

And how does this relate to unrequited love? It confirms people’s intuitive sense that leading someone on can just make them fall for you more deeply, Laurin says. “If this person is telling me no, but I perceive that as not totally absolute, if I still think I have a shot, that’s just going to strengthen my desire and my feeling, that’s going to make me think I need to fight to win the person over,” she says. “If instead I believe no, I definitely don’t have a shot with this person, then I might rationalize it and decide that I don’t like them that much anyway.”

Forgetting Is Part of Remembering (Science Daily)

ScienceDaily (Oct. 18, 2011) — It’s time for forgetting to get some respect, says Ben Storm, author of a new article on memory in Current Directions in Psychological Science, a journal of the Association for Psychological Science. “We need to rethink how we’re talking about forgetting and realize that under some conditions it actually does play an important role in the function of memory,” says Storm, who is a professor at the University of Illinois at Chicago.

“Memory is difficult. Thinking is difficult,” Storm says. Memories and associations accumulate rapidly. “These things could completely overrun our life and make it impossible to learn and retrieve new things if they were left alone, and could just overpower the rest of memory,” he says.

But, fortunately, that isn’t what happens. “We’re able to get around these strong competing inappropriate memories to remember the ones we want to recall.” Storm and other psychological scientists are trying to understand how our minds select the right things to recall — if someone’s talking about beaches near Omaha, Nebraska, for example, you will naturally suppress any knowledge you’ve collected about Omaha Beach in Normandy.

In one kind of experiment, participants are given a list of words that have some sort of relation to each other. They might be asked to memorize a list of birds, for example. In the next part of the test, they have to do a task that requires remembering half the birds. “That’s going to make you forget the other half of the birds in that list,” Storm says. That might seem bad — it’s forgetting. “But what the research shows is that this forgetting is actually a good thing.”

People who are good at forgetting information they don’t need are also good at problem solving and at remembering something when they’re being distracted with other information. This shows that forgetting plays an important role in problem solving and memory, Storm says.

There are plenty of times when forgetting makes sense in daily life. “Say you get a new cell phone and you have to get a new phone number, do you really want to remember your old phone number every time someone asks what your number is?” Storm asks. Or where you parked your car this morning — it’s important information today, but you’d better forget it when it comes time to go get your car for tomorrow afternoon’s commute. “We need to be able to update our memory so we can remember and think about the things that are currently relevant.”

Brain Scans Support Findings That IQ Can Rise or Fall Significantly During Adolescence (Science Daily)

ScienceDaily (Oct. 20, 2011) — IQ, the standard measure of intelligence, can increase or fall significantly during our teenage years, according to research funded by the Wellcome Trust, and these changes are associated with changes to the structure of our brains. The findings may have implications for testing and streaming of children during their school years.

Across our lifetime, our intellectual ability is considered to be stable, with intelligence quotient (IQ) scores taken at one point in time used to predict educational achievement and employment prospects later in life. However, in a study published October 20 in the journal Nature, researchers at the Wellcome Trust Centre for Neuroimaging at UCL (University College London) and the Centre for Educational Neuroscience show for the first time that, in fact, our IQ is not constant.

The researchers, led by Professor Cathy Price, tested 33 healthy adolescents in 2004 when they were between the ages of 12 and 16 years. They then repeated the tests four years later when the same subjects were between 15 and 20 years old. On both occasions, the researchers took structural brain scans of the subjects using magnetic resonance imaging (MRI).

Professor Price and colleagues found significant changes in the IQ scores measured in 2008 compared to the 2004 scores. Some subjects had improved their performance relative to people of a similar age by as much as 20 points on the standardised IQ scale; in other cases, however, performance had fallen by a similar amount.

To test whether these changes were meaningful, the researchers analysed the MRI scans to see whether there was a correlation with changes in the structure of the subjects’ brains.

“We found a considerable amount of change in how our subjects performed on the IQ tests in 2008 compared to four years earlier,” explains Sue Ramsden, first author of the study. “Some subjects performed markedly better but some performed considerably worse. We found a clear correlation between this change in performance and changes in the structure of their brains and so can say with some certainty that these changes in IQ are real.”
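The check Ramsden describes is, at heart, a correlation between two per-subject change scores. The sketch below only illustrates that kind of analysis under assumed placeholder numbers; the arrays and variable names are not data or code from the study.

```python
# Illustrative only: correlate each subject's change in IQ score between the
# two test dates with the change in grey-matter density from the MRI scans.
# The values below are placeholders, not data from the study.
import numpy as np
from scipy.stats import pearsonr

delta_iq = np.array([12.0, -8.0, 3.0, 20.0, -15.0, 5.0])        # 2008 score minus 2004 score
delta_grey_matter = np.array([0.8, -0.5, 0.1, 1.4, -1.1, 0.3])  # change in density (arbitrary units)

r, p = pearsonr(delta_iq, delta_grey_matter)
print(f"correlation r = {r:.2f}, p = {p:.3f}")
```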

The researchers measured each subject’s verbal IQ, which includes measurements of language, arithmetic, general knowledge and memory, and their non-verbal IQ, such as identifying the missing elements of a picture or solving visual puzzles. They found a clear correlation with particular regions of the brain.

An increase in verbal IQ score correlated with an increase in the density of grey matter — the nerve cells where the processing takes place — in an area of the left motor cortex of the brain that is activated when articulating speech. Similarly, an increase in non-verbal IQ score correlated with an increase in the density of grey matter in the anterior cerebellum, which is associated with movements of the hand. However, an increase in verbal IQ did not necessarily go hand-in-hand with an increase in non-verbal IQ.

According to Professor Price, a Wellcome Trust Senior Research Fellow, it is not clear why IQ should have changed so much and why some people’s performance improved while others’ declined. It is possible that the differences are due to some of the subjects being early or late developers, but it is equally possible that education had a role in changing IQ, and this has implications for how schoolchildren are assessed.

“We have a tendency to assess children and determine their course of education relatively early in life, but here we have shown that their intelligence is likely to be still developing,” says Professor Price. “We have to be careful not to write off poorer performers at an early stage when in fact their IQ may improve significantly given a few more years.

“It’s analogous to fitness. A teenager who is athletically fit at 14 could be less fit at 18 if they stopped exercising. Conversely, an unfit teenager can become much fitter with exercise.”

Other studies from the Wellcome Trust Centre for Neuroimaging and other research groups have provided strong evidence that the structure of the brain remains ‘plastic’ even throughout adult life. For example, Professor Price showed recently that guerrillas in Colombia who had learned to read as adults had a higher density of grey matter in several areas of the left hemisphere of the brain than those who had not learned to read. Professor Eleanor Maguire, also from the Wellcome Trust Centre, showed that part of a brain structure called the hippocampus, which plays an important part in memory and navigation, has greater volume in licensed London taxi drivers.

“The question is, if our brain structure can change throughout our adult lives, can our IQ also change?” adds Professor Price. “My guess is yes. There is plenty of evidence to suggest that our brains can adapt and their structure changes, even in adulthood.”

“This interesting study highlights how ‘plastic’ the human brain is,” said Dr John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust. “It will be interesting to see whether structural changes as we grow and develop extend beyond IQ to other cognitive functions. This study challenges us to think about these observations and how they may be applied to gain insight into what might happen when individuals succumb to mental health disorders.”

Man’s best friends: How animals made us human (New Scientist)

31 May 2011 by Pat Shipman
Magazine issue 2814.

Video: How animals made us human

Our bond with animals goes far deeper than food and companionship: it drove our ancestors to develop tools and language

TRAVEL almost anywhere in the world and you will see something so common that it may not even catch your attention. Wherever there are people, there are animals: animals being walked, herded, fed, watered, bathed, brushed or cuddled. Many, such as dogs, cats and sheep, are domesticated but you will also find people living alongside wild and exotic creatures such as monkeys, wolves and binturongs. Close contact with animals is not confined to one particular culture, geographic region or ethnic group. It is a universal human trait, which suggests that our desire to be with animals is deeply embedded and very ancient.

On the face of it this makes little sense. In the wild, no other mammal adopts individuals from another species; badgers do not tend hares, deer do not nurture baby squirrels, lions do not care for giraffes. And there is a good reason why. Since the ultimate prize in evolution is perpetuating your genes in your offspring and their offspring, caring for an individual from another species is counterproductive and detrimental to your success. Every mouthful of food you give it, every bit of energy you expend keeping it warm (or cool) and safe, is food and energy that does not go to your own kin. Even if pets offer unconditional love, friendship, physical affection and joy, that cannot explain why or how our bond with other species arose in the first place. Who would bring a ferocious predator such as a wolf into their home in the hope that thousands of years later it would become a loving family pet?

I am fascinated by this puzzle and as a palaeoanthropologist have tried to understand it by looking to the deep past for the origins of our intimate link with animals. What I found was a long trail, an evolutionary trajectory that I call the animal connection. What’s more, this trail links to three of the most important developments in human evolution: tool-making, language and domestication. If I am correct, our affinity with other species is no mere curiosity. Instead, the animal connection is a hugely significant force that has shaped us and been instrumental in our global spread and success in the world.

The trail begins at least 2.6 million years ago. That is when the first flaked stone tools appear in the archaeological record, at Gona in the Afar region of Ethiopia (Nature, vol 385, p 333). Inventing stone tools is no trivial task. It requires the major intellectual breakthrough of understanding that the apparent properties of an object can be altered. But the prize was great. Those earliest flakes are found in conjunction with fossilised animal bones, some of which bear cut marks. It would appear that from the start our ancestors were using tools to gain access to animal carcasses. Up until then, they had been largely vegetarian, upright apes. Now, instead of evolving the features that make carnivores effective hunters – such as swift locomotion, grasping claws, sharp teeth, great bodily strength and improved senses for hunting – our ancestors created their own adaptation by learning how to turn heavy, blunt stones into small, sharp items equivalent to razor blades and knives. In other words, early humans devised an evolutionary shortcut to becoming a predator.

That had many consequences. On the plus side, eating more highly nutritious meat and fat was a prerequisite to the increase in relative brain size that marks the human lineage. Since meat tends to come in larger packages than leaves, fruits or roots, meat-eaters can spend less time finding and eating food and more on activities such as learning, social interaction, observation of others and inventing more tools. On the minus side, though, preying on animals put our ancestors into direct competition with the other predators that shared their ecosystem. To get the upper hand, they needed more than just tools and that, I believe, is where the animal connection comes in.

Two and a half million years ago, there were 11 species of true carnivore in Africa. These were the ancestors of today’s lions, cheetahs, leopards and three types of hyena, together with five now extinct species: a long-legged hyena, a wolf-like canid, two sabretooth cats and a “false” sabretooth cat. All but three of these outweighed early humans, so hanging around dead animals would have been a very risky business. The new predator on the savannah would have encountered ferocious competition for prizes such as freshly killed antelope. Still, by 1.7 million years ago, two carnivore species were extinct – perhaps because of the intense competition – and our ancestor had increased enough in size that it outweighed all but four of the remaining carnivores.

Why did our lineage survive when true carnivores were going extinct? Working in social groups certainly helped, but hyenas and lions do the same. Having tools enabled early humans to remove a piece of a dead carcass quickly and take it to safety, too. But I suspect that, above all, the behavioural adaptation that made it possible for our ancestors to compete successfully with true carnivores was the ability to pay very close attention to the habits of both potential prey and potential competitors. Knowledge was power, so we acquired a deep understanding of the minds of other animals.

Out of Africa

Another significant consequence of becoming more predatory was a pressing need to live at lower densities. Prey species are common and often live in large herds. Predators are not, and do not, because they require large territories in which to hunt or they soon exhaust their food supply. The record of the geographic distribution of our ancestors provides more support for my idea that the animal connection has shaped our evolution. From the first appearance of our lineage 6 or 7 million years ago until perhaps 2 million years ago, all hominins were in Africa and nowhere else. Then early humans underwent a dramatic territorial expansion, forced by the demands of their new way of living. They spread out of Africa into Eurasia with remarkable speed, arriving as far east as Indonesia and probably China by about 1.8 million years ago. This was no intentional migration but simply a gradual expansion into new hunting grounds. First, an insight into the minds of other species had secured our success as predators; now that success drove our expansion across Eurasia.

Throughout the period of these enormous changes in the lifestyle and ecology of our ancestors, gathering, recording and sharing knowledge became more and more advantageous. And the most crucial topic about which our ancestors amassed and exchanged information was animals.

How do I know this? No words or language remain from that time, so I cannot look for them. I can, however, look for symbols – since words are essentially symbolic – and that takes me to the wealth of prehistoric art that appears in Europe, Asia, Africa and Australia, starting about 50,000 years ago. Prehistoric art allows us to eavesdrop on the conversations of our ancestors and see the topic of discussion: animals, their colours, shapes, habits, postures, locomotion and social habits. This focus is even more striking when you consider what else might have been depicted. Pictures of people, social interactions and ceremonies are rare. Plants, water sources and geographic features are even scarcer, though they must have been key to survival. There are no images showing how to build shelters, make fires or create tools. Animal information mattered more than all of these.

The overwhelming predominance of animals in prehistoric art suggests that the animal connection – the evolutionary advantages of observing animals and collecting, compiling and sharing information about them – was a strong impetus to a second important development in human evolution: the development of language and enhanced communication. Of course, more was involved than simply coining words. Famously, vervet monkeys have different cries for eagles, leopards and snakes, but they cannot discuss dangerous-things-that-were-here-yesterday or ask “what ate my sibling?” or wonder if that danger might appear again tomorrow. They communicate with each other and share information, but they do not have language. The magical property of full language is that it is comprised of vocabulary and grammatical rules that can be combined and recombined in an infinite number of ways to convey fine shades of meaning.

Nobody doubts that language proved a major adaptive advantage to our ancestors in developing complex behaviours and sharing information. How it arose, however, remains a mystery. I believe I am the first to propose a continuity between the strong human-animal link that appeared 2.6 million years ago and the origin of language. The complexity and importance of animal-related information spurred early humans to move beyond what their primate cousins could achieve.

As our ancestors became ever more intimately involved with animals, the third and final product of the animal connection appeared. Domestication has long been linked with farming and the keeping of stock animals, an economic and social change from hunting and gathering that is often called the Neolithic revolution. Domestic animals are usually considered as commodities, “walking larders”, reflecting the idea that the basis of the Neolithic revolution was a drive for greater food security.

When I looked at the origins of domestication for clues to its underlying reasons, I found some fundamental flaws in this idea. Instead, my analysis suggests that domestication emerged as a natural progression of our close association with, and understanding of, other species. In other words, it was a product of the animal connection.

Man’s best friend

First, if domestication was about knowing where your next meal was coming from, then the first domesticate ought to have been a food source. It was not. According to a detailed analysis of fossil skulls carried out by Mietje Germonpré of the Royal Belgian Institute of Natural Sciences in Brussels and her colleagues, the earliest known dog skull is 32,000 years old (Journal of Archaeological Science, vol 36, p 473). The results have been greeted with some surprise, since other analyses have suggested dogs were domesticated around 17,000 years ago, but even that means they pre-date any other domesticated animal or plant by about 5000 years (see diagram). Yet dogs are not a good choice if you want a food animal: they are dangerous while being domesticated, being derived from wolves, and worst of all, they eat meat. If the objective of domestication was to have meat to eat, you would never select an animal that eats 2 kilograms of the stuff a day.

A sustainable relationship

My second objection to the idea that animals were domesticated simply for food turns on a paradox. Farming requires hungry people to set aside edible animals or seeds so as to have some to reproduce the following year. My Penn State colleague David Webster explores the idea in a paper due to appear in Current Anthropology. He concludes that it only becomes logical not to eat all you have if the species in question is already well on the way to being domesticated, because only then are you sufficiently familiar with it to know how to benefit from taking the long view. This means for an animal species to become a walking larder, our ancestors must have already spent generations living intimately with it, exerting some degree of control over breeding. Who plans that far in advance for dinner?

Then there’s the clincher. A domestic animal that is slaughtered for food yields little more meat than a wild one that has been hunted, yet requires more management and care. Such a system is not an improvement in food security. Instead, I believe domestication arose for a different reason, one that offsets the costs of husbandry. All domestic animals, and even semi-domesticated ones, offer a wealth of renewable resources that provide ongoing benefits as long as they are alive. They can provide power for hauling, transport and ploughing, wool or fur for warmth and weaving, milk for food, manure for fertiliser, fuel and building material, hunting assistance, protection for the family or home, and a disposal service for refuse and ordure. Domestic animals are also a mobile source of wealth, which can literally propagate itself.

Domestication, more than ever, drew upon our understanding of animals to keep them alive and well. It must have started accidentally and been a protracted reciprocal process of increasing communication that allowed us not just to tame other species but also to permanently change their genomes by selective breeding to enhance or diminish certain traits.

The great benefit for people of this caring relationship was a continuous supply of resources that enabled them to move into previously uninhabitable parts of the world. This next milestone in human evolution would have been impossible without the sort of close observation, accumulated knowledge and improved communication skills that the animal connection started selecting for when our ancestors began hunting at least 2.6 million years ago.

What does it matter if the animal connection is a fundamental and ancient influence on our species? I think it matters a great deal. The human-animal link offers a causal connection that makes sense of three of the most important leaps in our development: the invention of stone tools, the origin of language and the domestication of animals. That makes it a sort of grand unifying theory of human evolution.

And the link is as crucial today as it ever was. The fundamental importance of our relationship with animals explains why interacting with them offers various physical and mental health benefits – and why the annual expenditure on items related to pets and wild animals is so enormous.

Finally, if being with animals has been so instrumental in making humans human, we had best pay attention to this point as we plan for the future. If our species was born of a world rich with animals, can we continue to flourish in one where we have decimated biodiversity?

Pat Shipman is adjunct professor of biological anthropology at Penn State University. Her book The Animal Connection: A new perspective on what makes us human is published by W. W. Norton & Company on 13 June

Intuitions Regarding Geometry Are Universal, Study Suggests (ScienceDaily)

ScienceDaily (May 26, 2011) — All human beings may have the ability to understand elementary geometry, independently of their culture or their level of education.

A Mundurucu participant measuring an angle using a goniometer laid on a table. (Credit: © Pierre Pica / CNRS)

This is the conclusion of a study carried out by CNRS, Inserm, CEA, the Collège de France, Harvard University and Paris Descartes, Paris-Sud 11 and Paris 8 universities (1). It was conducted on Amazonian Indians living in an isolated area, who had not studied geometry at school and whose language contains little geometric vocabulary. Their intuitive understanding of elementary geometric concepts was compared with that of populations who, on the contrary, had been taught geometry at school. The researchers were able to demonstrate that all human beings may share an intuitive grasp of elementary geometry. This ability may, however, only emerge from the age of 6-7 years. It could be innate or instead acquired at an early age when children become aware of the space that surrounds them. This work is published in PNAS.

Euclidean geometry makes it possible to describe space using planes, spheres, straight lines, points, etc. Can geometric intuitions emerge in all human beings, even in the absence of geometric training?

To answer this question, the team of cognitive science researchers elaborated two experiments aimed at evaluating geometric performance, whatever the level of education. The first test consisted in answering questions on the abstract properties of straight lines, in particular their infinite character and their parallelism properties. The second test involved completing a triangle by indicating the position of its apex as well as the angle at this apex.

To carry out this study correctly, it was necessary to have participants that had never studied geometry at school, the objective being to compare their ability in these tests with others who had received training in this discipline. The researchers focused their study on Mundurucu Indians, living in an isolated part of the Amazon Basin: 22 adults and 8 children aged between 7 and 13. Some of the participants had never attended school, while others had been to school for several years, but none had received any training in geometry. In order to introduce geometry to the Mundurucu participants, the scientists asked them to imagine two worlds, one flat (plane) and the second round (sphere), on which were dotted villages (corresponding to the points in Euclidean geometry) and paths (straight lines). They then asked them a series of questions illustrated by geometric figures displayed on a computer screen.

Around thirty adults and children from France and the United States, who, unlike the Mundurucu, had studied geometry at school, were also subjected to the same tests.

The result was that the Mundurucu Indians proved to be fully capable of resolving geometric problems, particularly in terms of planar geometry. For example, to the question “Can two paths never cross?”, a very large majority answered “Yes”. Their responses to the second test, that of the triangle, highlight the intuitive character of an essential property of planar geometry, namely the fact that the sum of the angles at the apexes of a triangle is constant (equal to 180°).

And, in a spherical universe, it turns out that the Amazonian Indians gave better answers than the French or North American participants who, by virtue of learning geometry at school, acquire greater familiarity with planar geometry than with spherical geometry. Another interesting finding was that young North American children between 5 and 6 years old (who had not yet been taught geometry at school) had mixed test results, which could signify that a grasp of geometric notions is acquired from the age of 6-7 years.
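As a worked illustration of the contrast these tests exploit (standard geometry, not stated in the article itself): in the plane the three angles of a triangle always sum to 180°, while on a sphere of radius R the sum exceeds 180° by an amount proportional to the triangle's area A (Girard's theorem), which is why habits formed on planar geometry can mislead on spherical questions.

```latex
% Planar (Euclidean) triangle: the angle sum is constant.
\alpha + \beta + \gamma = 180^{\circ}
% Spherical triangle on a sphere of radius R, with area A: the sum exceeds
% 180 degrees by the spherical excess A/R^2 (expressed here in degrees).
\alpha + \beta + \gamma = 180^{\circ} + \frac{A}{R^{2}} \cdot \frac{180^{\circ}}{\pi}
```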

The researchers thus suggest that all human beings have an ability to understand Euclidean geometry, whatever their culture or level of education. People who have received little or no training could thus grasp notions of geometry such as points and parallel lines. These intuitions could be innate (in which case they would simply emerge from a certain age, here 6-7 years). If, on the other hand, these intuitions derive from learning (between birth and 6-7 years of age), they must be based on experiences common to all human beings.

(1) The two CNRS researchers involved in this study are Véronique Izard of the Laboratoire Psychologie de la Perception (CNRS / Université Paris Descartes) and Pierre Pica of the Unité “Structures Formelles du Langage” (CNRS / Université Paris 8). They conducted it in collaboration with Stanislas Dehaene, professor at the Collège de France and director of the Unité de Neuroimagerie Cognitive at NeuroSpin (Inserm / CEA / Université Paris-Sud 11), and Elizabeth Spelke, professor at Harvard University.

Journal Reference: Véronique Izard, Pierre Pica, Elizabeth S. Spelke, and Stanislas Dehaene. Flexible intuitions of Euclidean geometry in an Amazonian indigene group. Proceedings of the National Academy of Sciences, 23 May 2011.