Monthly archive: May 2011

Scientists Cry Foul Over Report Criticizing National Science Foundation (msnbc.com)

By Stephanie Pappas, LiveScience Senior Writer

http://www.msnbc.msn.com/id/43187678

A report released by the office of Sen. Tom Coburn (R-Okla.) distorts the goals and purposes of research funded by the National Science Foundation (NSF) in an effort to paint the agency as wasteful, scientists say.

Coburn released “The National Science Foundation: Under the Microscope” May 26, raising “serious questions regarding the agency’s management and priorities,” according to Coburn’s office. But scientists whose research is targeted in the report say Coburn has oversimplified or otherwise misrepresented their work.

“Good Lord!” Texas A&M psychologist Gerianne Alexander, whose work on hormones and infant development appears in the report, wrote in an email to LiveScience. “The summary of the funded research is very inaccurate.”

This isn’t the first time politicians have taken aim at the NSF in the name of deficit reduction. In December 2010, Rep. Adrian Smith (R-Neb.) called for citizens to review NSF grants and highlighted a few projects he viewed as wasteful, including research meant to evaluate productivity.

NSF’s entire budget of approximately $7 billion represents about one-half of 1 percent of the projected 2011 federal deficit.
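As a quick sanity check on that figure (assuming the roughly $1.4 trillion deficit projected for fiscal 2011, a number the article itself does not state):

$$\frac{\$7\times 10^{9}}{\$1.4\times 10^{12}} = 0.005 = \text{one-half of 1 percent.}$$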

Funding and review

The new report acknowledges that NSF has funded research leading to innovations ranging from the Internet to bar codes. NSF also runs a rigorous evaluation process when deciding which proposals to fund. Each year, the agency receives more than 45,000 competitive proposals, NSF spokesperson Maria Zacharias told LiveScience in December. NSF funds about 11,500 of those, Zacharias said.

However, based on a review by his staff, the senator remains unconvinced that NSF is making the right decisions.

“It is not the intent of this report to suggest that there is no utility associated with these research efforts,” the report reads. “The overarching question to ask, however, is simple. Are these projects the best possible use of our tax dollars, particularly in our current fiscal crisis?”

Science out of context

Scientists say Coburn’s office fails to put their research into context, often choosing silly-sounding projects to characterize entire research programs.

Alexander’s work, for example, is characterized as a $480,000 experiment meant to discover “if boys like trucks and girls like dolls.” According to the report, scientists could have saved their time by “talking to any new parent.”

In fact, Alexander said, the research project is more complicated.

“The grant supports research asking whether the postnatal surge in testosterone levels in early infancy contributes to the development of human behavior,” she said. “This is not a trivial issue.”

That’s because some preliminary evidence suggests that disruptions in hormones like testosterone can alter behavior, Alexander said, potentially contributing to the development of disorders such as attention deficit hyperactivity disorder (ADHD) and autism.

Toy choice is a way to measure sex differences in behavior, because babies tend to choose stereotyped boy-girl toys early on, Alexander said. She and her team measure infant hormone levels and look for effects on behavior, activity levels, temperament and verbal development.

Likewise, a much-ballyhooed project that put shrimp on a treadmill was part of research intended to find out how marine animals will cope with increased environmental stress.

Robot laundry?

Coburn focused much of the report on social science research. But the report also questions several robotics projects, including a robot that can fold laundry. The report mocks the research, noting that it takes the robot 25 minutes to fold a single towel.

In fact, the $1.5 million NSF grant was meant not to teach robots how to do slow-motion laundry, but to learn how to make robots that can interact with complex objects, said lead researcher Pieter Abbeel of UC Berkeley. The towel-folding, which came six months into a four-year project, was an ideal challenge, Abbeel said, because folding a soft, deformable towel is very different from the pick-up-this-bolt, screw-in-this-screw tasks that current robots can perform.

“Towel-folding is just a first, small step toward a new generation of robotic devices that could, for example, significantly increase the independence of elderly and sick people, protect our soldiers in combat, improve the delivery of government services and a host of other applications that would revolutionize our day-to-day lives,” Abbeel wrote in an email to LiveScience.

Overseeing basic science

“It’s legitimate to ask what kind of scientific research is important and what isn’t,” said John Hibbing, a professor of political science whose research on the genetics of political leanings appeared in Coburn’s report. However, Hibbing expressed doubt that Coburn’s nonscientific review process could meet that goal.

“I sympathize with the desire to identify things that are silly and not useful,” Hibbing told LiveScience. “But I’m not sure he’s identified a really practical strategy to distinguish between the two.”

Genes, germs and the origins of politics (New Scientist)

18 May 2011 by Jim Giles
Magazine issue 2813.

A controversial new theory claims fear of infection makes the difference between democracy and dictatorship

COMPARE these histories. In Britain, democracy evolved steadily over hundreds of years. During the same period, people living in what is now Somalia had many rulers, but almost all deprived them of the chance to vote. It’s easy to find other stark contrasts. Citizens of the United States can trace their right to vote back to the end of the 18th century. In Syria, many citizens cannot trace their democratic rights anywhere – they are still waiting for the chance to take part in a meaningful election.

Conventional explanations for the existence of such contrasting political regimes involve factors such as history, geography, and the economic circumstances and culture of the people concerned, to name just a few. But the evolutionary biologist Randy Thornhill has a different idea. He says that the nature of the political system that holds sway in a particular country – whether it is a repressive dictatorship or a liberal democracy – may be determined in large part by a single factor: the prevalence of infectious disease.

It’s an idea that many people will find outrageously simplistic. How can something as complex as political culture be explained by just one environmental factor? Yet nobody has managed to debunk it, and its proponents are coming up with a steady flow of evidence in its favour. “It’s rather astonishing, and it could be true,” says Carlos Navarrete, a psychologist at Michigan State University in East Lansing.

Thornhill is no stranger to controversy, having previously co-authored A Natural History of Rape, a book proposing an evolutionary basis for rape. His iconoclastic theory linking disease to politics was inspired in part by observations of how an animal’s development and behaviour can respond rapidly to physical dangers in a region, often in unexpected ways. Creatures at high risk of being eaten by predators, for example, often reach sexual maturity at a younger age than genetically similar creatures in a safer environment, and are more likely to breed earlier in their lives. Thornhill wondered whether threats to human lives might have similarly influential consequences for our psychology.

The result is a hypothesis known as the parasite-stress model, which Thornhill developed at the University of New Mexico, Albuquerque, with his colleague Corey Fincher.


Xenophobic instincts

The starting point for Thornhill and Fincher’s thinking is a basic human survival instinct: the desire to avoid illness. In a region where disease is rife, they argue, fear of contagion may cause people to avoid outsiders, who may be carrying a strain of infection to which they have no immunity. Such a mindset would tend to make a community as a whole xenophobic, and might also discourage interaction between the various groups within a society – the social classes, for instance – to prevent unnecessary contact that might spread disease. What is more, Thornhill and Fincher argue, it could encourage people to conform to social norms and to respect authority, since adventurous behaviour may flout rules of conduct set in place to prevent contamination.

Taken together, these attitudes would discourage the rich and influential from sharing their wealth and power with those around them, and inhibit the rest of the population from going against the status quo and questioning the authority of those above them. This is clearly not a situation conducive to democracy. When the threat of disease eases, however, these influences no longer hold sway, allowing forces that favour a more democratic social order to come to the fore.

That’s the idea, anyway. But where is the evidence?

The team had some initial support from earlier studies that had explored how a fear of disease affects individual attitudes. In 2006, for example, Navarrete found that when people are prompted to think about disgusting objects, such as spoilt food, they become more likely to express nationalistic values and show a greater distrust of foreigners (Evolution and Human Behavior, vol 27, p 270). More recently, a team from Arizona State University in Tempe found that reading about contagious illnesses made people less adventurous and open to new experiences, suggesting that they have become more inward looking and conformist (Psychological Science, vol 21, p 440).

Temporarily shifting individual opinions is one thing, but Thornhill and Fincher needed to show that these same biases could change the social outlook of a whole society. Their starting point for doing so was a description of cultural attitudes called the “collectivist-individualist” scale. At one end of this scale lies the collectivist outlook, in which people place the overall good of society ahead of the freedom of action of the individuals within it. Collectivist societies are often, though not exclusively, characterised by a greater respect for authority – if it’s seen as being necessary for the greater good. They also tend to be xenophobic and conformist. At the other end is the individualist viewpoint, which places more emphasis on openness and freedom for the individual.

Pathogen peril

In 2008, the duo teamed up with Damian Murray and Mark Schaller of the University of British Columbia in Vancouver, Canada, to test the idea that societies with more pathogens would be more collectivist. They rated people in 98 different nations and regions, from Estonia to Ecuador, on the collectivist-individualist scale, using data from questionnaires and studies of linguistic cues that can betray a social outlook. Sure enough, they saw a correlation: the greater the threat of disease in a region, the more collectivist people’s attitudes were (Proceedings of the Royal Society B, vol 275, p 1279). The correlation remained even when they controlled for potential confounding factors, such as wealth and urbanisation.
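That “controlled for potential confounding factors” step is essentially a partial-correlation analysis. Here is a minimal sketch of that kind of check, in Python on simulated data (none of these numbers come from the team’s 98-region dataset): regress the confounders out of both variables and correlate what is left.

```python
# Hypothetical illustration of correlating pathogen stress with
# collectivism while controlling for wealth and urbanisation.
# All data below are simulated, not the published dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 98  # one row per nation/region, as in the 2008 study
wealth = rng.normal(size=n)
urbanisation = 0.6 * wealth + rng.normal(scale=0.8, size=n)
pathogen = -0.5 * wealth + rng.normal(scale=0.9, size=n)
collectivism = 0.6 * pathogen - 0.3 * wealth + rng.normal(scale=0.7, size=n)

def residualise(y, confounds):
    """Remove the least-squares contribution of the confounds from y."""
    X = np.column_stack([np.ones(len(y))] + confounds)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_raw, _ = stats.pearsonr(pathogen, collectivism)
r_partial, _ = stats.pearsonr(
    residualise(pathogen, [wealth, urbanisation]),
    residualise(collectivism, [wealth, urbanisation]),
)
print(f"raw r = {r_raw:.2f}, partial r = {r_partial:.2f}")
```

Correlating residuals in this way is equivalent to computing a partial correlation: whatever association survives cannot be explained by a linear effect of wealth or urbanisation alone.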

A study soon followed showing similar patterns when comparing US states. In another paper, Murray and Schaller examined a different set of data and showed that cultural differences in one collectivist trait – conformity – correlate strongly with disease prevalence (Personality and Social Psychology Bulletin, vol 37, p 318).

Thornhill and Fincher’s next challenge was to find evidence linking disease prevalence not just with cultural attitudes but with the political systems they expected would go with them. To do so, they used a 66-point scale of pathogen prevalence, based on data assembled by the Global Infectious Diseases and Epidemiology Online Network. They then compared their data set with indicators that assess the politics of a country. Democracy is a tough concept to quantify, so the team looked at a few different measures, including the Freedom House Survey, which is based on the subjective judgements of a team of political scientists working for an independent American think tank, and the Index of Democratization, which is based on estimates of voter participation (measured by how much of a population cast their votes and the number of referendums offered to a population) and the amount of competition between political parties.

The team’s results, published in 2009, showed that each measure varied strongly with pathogen prevalence, just as their model predicted (Biological Reviews, vol 84, p 113). For example, when considering disease prevalence, Somalia is 22nd on the list of nations, while the UK comes in 177th. The two countries come out at opposite ends of the democratic scale (see “An infectious idea”).

Importantly, the relationship still holds when you look at historical records of pathogen prevalence. This, together with those early psychological studies of immediate reactions to disease, suggests it is a nation’s health driving its political landscape, and not the other way around, according to the team.

Last year, they published a second paper that used more detailed data of the diseases prevalent in each region. They again found that measures of collectivism and democracy correlate with the presence of diseases that are passed from human to human – though not with the prevalence of diseases transmitted directly from animals to humans, like rabies (Evolutionary Psychology, vol 8, p 151). Since collectivist behaviours would be less important for preventing such infections, this finding fits with Thornhill and Fincher’s hypothesis.

“Thornhill’s work strikes me as interesting and promising,” says Ronald Inglehart, a political scientist at the University of Michigan in Ann Arbor who was unaware of it before we contacted him. He notes that it is consistent with his own finding that a society needs to have a degree of economic security before democracy can develop. Perhaps this goes hand in hand with a reduction in disease prevalence to signal the move away from collectivism, he suggests.

Inglehart’s comments nevertheless highlight a weakness in the evidence so far assembled in support of the parasite-stress model. An association between disease prevalence and democracy does not necessarily mean that one drives the other. Some other factor may drive both the prevalence of disease in an area and its political system. In their 2009 paper, Thornhill and Fincher managed to eliminate some of the possible “confounders”. For example, they showed that neither a country’s overall wealth nor the way it is distributed can adequately explain the link between the prevalence of disease there and how democratic it is.

But many other possibilities remain. For example, pathogens tend to be more prevalent in the tropics, so perhaps warmer climates encourage collectivism. Also, many of the nations that score high for disease and low for democracy are in sub-Saharan Africa, and have a history of having been colonised, and of frequent conflict and foreign exploitation since independence. Might the near-constant threat of war better explain that region’s autocratic governments? There’s also the possibility that education and literacy would have an impact, since better educated people may be more likely to question authority and demand their democratic rights. Epidemiologist Valerie Curtis of the London School of Hygiene and Tropical Medicine thinks such factors might be the ones that count, and says the evidence so far does not make the parasite-stress theory any more persuasive than these explanations.

Furthermore, some nations buck the trend altogether. Take the US and Syria, for example: they have sharply contrasting political systems but an almost identical prevalence of disease. Though even the harshest critic of the theory would not expect a perfect correlation, such anomalies require some good explanations.

Also lacking so far in their analysis is a coherent account of how historical changes in the state of public health are linked to political change. If Thornhill’s theory is correct, improvements in a nation’s health should lead to noticeable changes in social outlook. Evidence consistent with this idea comes from the social revolution of the 1960s in much of western Europe and North America, which involved a shift from collectivist towards individualist thinking. This was preceded by improvements in public health in the years following the second world war – notably the introduction of penicillin, mass vaccination and better malaria control.

There are counter-examples, too. It is not clear whether the opening up of European society during the 18th century was preceded by any major improvements in people’s health, for example. Nor is there yet any clear evidence linking the current pro-democracy movements in the Middle East and north Africa to changes in disease prevalence. The theory also predicts that episodes such as the recent worldwide swine-flu epidemic should cause a shift away from democracy and towards authoritarian, collectivist attitudes. Yet as Holly Arrow, a psychologist at the University of Oregon in Eugene, points out, no effect has been recorded.

Mysterious mechanisms

To make the theory stick, Thornhill and his collaborators will also need to provide a mechanism for their proposed link between pathogens and politics. If cultural changes are responsible, young people might learn to avoid disease – and outsiders – from the behaviour of those around them. Alternatively, the reaction could be genetically hard-wired. So far, it has not proved possible to eliminate any of the possible mechanisms. “It’s an enormous set of unanswered questions. I expect it will take many years to explore,” Schaller says.

One possible genetic explanation involves 5-HTTLPR, a gene that regulates levels of the neurotransmitter serotonin. People carrying the short form of the gene are more likely to be anxious and to be fearful of health risks, relative to those with the long version. These behaviours could be a life-saver if they help people avoid situations that would put them at risk of infection, so it might be expected that the short version of the gene is favoured in parts of the world where disease risk is high. People with the longer version of 5-HTTLPR, on the other hand, tend to have higher levels of serotonin and are therefore more extrovert and more prone to risk-taking. This could bring advantages such as an increased capacity to innovate, so the long form of the gene should be more common in regions relatively free from illness.

That pattern is exactly what neuroscientists Joan Chiao and Katherine Blizinsky at Northwestern University in Evanston, Illinois, have reported in a paper published last year. Significantly, nations where the short version of the gene is more common also tend to have more collectivist attitudes (Proceedings of the Royal Society B, vol 277, p 529).

It is only tentative evidence, and some doubt that Chiao and Blizinsky’s findings are robust enough to support their conclusions (Proceedings of the Royal Society B, vol 278, p 329). But if the result pans out with further research, it suggests the behaviours involved in the parasite-stress model may be deeply ingrained in our genetic make-up, providing a hurdle to more rapid political change in certain areas. While no one is saying that groups with a higher proportion of short versions of the gene will never develop a democracy, the possibility that some societies are more genetically predisposed to it than others is nevertheless an uncomfortable idea to contemplate.

Should the biases turn out to be more temporary – if flexible psychological reactions to threat, or cultural learning, are the more important mechanisms – the debate might turn to potential implications of the theory. Projects aiming to improve medical care in poor countries might also prompt a move toward more democratic and open governments, for example, giving western governments another incentive to fund these schemes. “The way to develop a region is to emancipate it from parasites,” says Thornhill.

Remarks like that seem certain to attract flak. Curtis, for instance, bristled a little when New Scientist put the idea to her, pointing out that the immediate threat to human life is a pressing enough reason to be concerned about infectious disease.

Thornhill still has a huge amount of work ahead of him if he is to provide a convincing case that will assuage all of these doubts. In the meantime, his experience following publication of A Natural History of Rape has left him prepared for a hostile reception. “I had threats by email and phone,” he recalls. “You’re sometimes going to hurt people’s feelings. I consider it all in a day’s work.”

Jim Giles is a New Scientist correspondent based in San Francisco

Man’s best friends: How animals made us human (New Scientist)

31 May 2011 by Pat Shipman
Magazine issue 2814.


Our bond with animals goes far deeper than food and companionship: it drove our ancestors to develop tools and language

TRAVEL almost anywhere in the world and you will see something so common that it may not even catch your attention. Wherever there are people, there are animals: animals being walked, herded, fed, watered, bathed, brushed or cuddled. Many, such as dogs, cats and sheep, are domesticated but you will also find people living alongside wild and exotic creatures such as monkeys, wolves and binturongs. Close contact with animals is not confined to one particular culture, geographic region or ethnic group. It is a universal human trait, which suggests that our desire to be with animals is deeply embedded and very ancient.

On the face of it this makes little sense. In the wild, no other mammal adopts individuals from another species; badgers do not tend hares, deer do not nurture baby squirrels, lions do not care for giraffes. And there is a good reason why. Since the ultimate prize in evolution is perpetuating your genes in your offspring and their offspring, caring for an individual from another species is counterproductive and detrimental to your success. Every mouthful of food you give it, every bit of energy you expend keeping it warm (or cool) and safe, is food and energy that does not go to your own kin. Even if pets offer unconditional love, friendship, physical affection and joy, that cannot explain why or how our bond with other species arose in the first place. Who would bring a ferocious predator such as a wolf into their home in the hope that thousands of years later it would become a loving family pet?

I am fascinated by this puzzle and as a palaeoanthropologist have tried to understand it by looking to the deep past for the origins of our intimate link with animals. What I found was a long trail, an evolutionary trajectory that I call the animal connection. What’s more, this trail links to three of the most important developments in human evolution: tool-making, language and domestication. If I am correct, our affinity with other species is no mere curiosity. Instead, the animal connection is a hugely significant force that has shaped us and been instrumental in our global spread and success in the world.

The trail begins at least 2.6 million years ago. That is when the first flaked stone tools appear in the archaeological record, at Gona in the Afar region of Ethiopia (Nature, vol 385, p 333). Inventing stone tools is no trivial task. It requires the major intellectual breakthrough of understanding that the apparent properties of an object can be altered. But the prize was great. Those earliest flakes are found in conjunction with fossilised animal bones, some of which bear cut marks. It would appear that from the start our ancestors were using tools to gain access to animal carcasses. Up until then, they had been largely vegetarian, upright apes. Now, instead of evolving the features that make carnivores effective hunters – such as swift locomotion, grasping claws, sharp teeth, great bodily strength and improved senses for hunting – our ancestors created their own adaptation by learning how to turn heavy, blunt stones into small, sharp items equivalent to razor blades and knives. In other words, early humans devised an evolutionary shortcut to becoming a predator.

That had many consequences. On the plus side, eating more highly nutritious meat and fat was a prerequisite to the increase in relative brain size that marks the human lineage. Since meat tends to come in larger packages than leaves, fruits or roots, meat-eaters can spend less time finding and eating food and more on activities such as learning, social interaction, observation of others and inventing more tools. On the minus side, though, preying on animals put our ancestors into direct competition with the other predators that shared their ecosystem. To get the upper hand, they needed more than just tools and that, I believe, is where the animal connection comes in.

Two and a half million years ago, there were 11 true carnivores in Africa. These were the ancestors of today’s lions, cheetahs, leopards and three types of hyena, together with five now extinct species: a long-legged hyena, a wolf-like canid, two sabretooth cats and a “false” sabretooth cat. All but three of these outweighed early humans, so hanging around dead animals would have been a very risky business. The new predator on the savannah would have encountered ferocious competition for prizes such as freshly killed antelope. Still, by 1.7 million years ago, two carnivore species were extinct – perhaps because of the intense competition – and our ancestor had increased enough in size that it outweighed all but four of the remaining carnivores.

Why did our lineage survive when true carnivores were going extinct? Working in social groups certainly helped, but hyenas and lions do the same. Having tools enabled early humans to remove a piece of a dead carcass quickly and take it to safety, too. But I suspect that, above all, the behavioural adaptation that made it possible for our ancestors to compete successfully with true carnivores was the ability to pay very close attention to the habits of both potential prey and potential competitors. Knowledge was power, so we acquired a deep understanding of the minds of other animals.

Out of Africa

Another significant consequence of becoming more predatory was a pressing need to live at lower densities. Prey species are common and often live in large herds. Predators are not, and do not, because they require large territories in which to hunt or they soon exhaust their food supply. The record of the geographic distribution of our ancestors provides more support for my idea that the animal connection has shaped our evolution. From the first appearance of our lineage 6 or 7 million years ago until perhaps 2 million years ago, all hominins were in Africa and nowhere else. Then early humans underwent a dramatic territorial expansion, forced by the demands of their new way of living. They spread out of Africa into Eurasia with remarkable speed, arriving as far east as Indonesia and probably China by about 1.8 million years ago. This was no intentional migration but simply a gradual expansion into new hunting grounds. First, an insight into the minds of other species had secured our success as predators; now that success drove our expansion across Eurasia.

Throughout the period of these enormous changes in the lifestyle and ecology of our ancestors, gathering, recording and sharing knowledge became more and more advantageous. And the most crucial topic about which our ancestors amassed and exchanged information was animals.

How do I know this? No words or language remain from that time, so I cannot look for them. I can, however, look for symbols – since words are essentially symbolic – and that takes me to the wealth of prehistoric art that appears in Europe, Asia, Africa and Australia, starting about 50,000 years ago. Prehistoric art allows us to eavesdrop on the conversations of our ancestors and see the topic of discussion: animals, their colours, shapes, habits, postures, locomotion and social habits. This focus is even more striking when you consider what else might have been depicted. Pictures of people, social interactions and ceremonies are rare. Plants, water sources and geographic features are even scarcer, though they must have been key to survival. There are no images showing how to build shelters, make fires or create tools. Animal information mattered more than all of these.

The overwhelming predominance of animals in prehistoric art suggests that the animal connection – the evolutionary advantages of observing animals and collecting, compiling and sharing information about them – was a strong impetus to a second important development in human evolution: the development of language and enhanced communication. Of course, more was involved than simply coining words. Famously, vervet monkeys have different cries for eagles, leopards and snakes, but they cannot discuss dangerous-things-that-were-here-yesterday or ask “what ate my sibling?” or wonder if that danger might appear again tomorrow. They communicate with each other and share information, but they do not have language. The magical property of full language is that it comprises vocabulary and grammatical rules that can be combined and recombined in an infinite number of ways to convey fine shades of meaning.

Nobody doubts that language proved a major adaptive advantage to our ancestors in developing complex behaviours and sharing information. How it arose, however, remains a mystery. I believe I am the first to propose a continuity between the strong human-animal link that appeared 2.6 million years ago and the origin of language. The complexity and importance of animal-related information spurred early humans to move beyond what their primate cousins could achieve.

As our ancestors became ever more intimately involved with animals, the third and final product of the animal connection appeared. Domestication has long been linked with farming and the keeping of stock animals, an economic and social change from hunting and gathering that is often called the Neolithic revolution. Domestic animals are usually considered as commodities, “walking larders”, reflecting the idea that the basis of the Neolithic revolution was a drive for greater food security.

When I looked at the origins of domestication for clues to its underlying reasons, I found some fundamental flaws in this idea. Instead, my analysis suggests that domestication emerged as a natural progression of our close association with, and understanding of, other species. In other words, it was a product of the animal connection.

Man’s best friend

First, if domestication was about knowing where your next meal was coming from, then the first domesticate ought to have been a food source. It was not. According to a detailed analysis of fossil skulls carried out by Mietje Germonpré of the Royal Belgian Institute of Natural Sciences in Brussels and her colleagues, the earliest known dog skull is 32,000 years old (Journal of Archaeological Science, vol 36, p 473). The results have been greeted with some surprise, since other analyses have suggested dogs were domesticated around 17,000 years ago, but even that means they pre-date any other domesticated animal or plant by about 5000 years (see diagram). Yet dogs are not a good choice if you want a food animal: they are dangerous while being domesticated, being derived from wolves, and worst of all, they eat meat. If the objective of domestication was to have meat to eat, you would never select an animal that eats 2 kilograms of the stuff a day.

A sustainable relationship

My second objection to the idea that animals were domesticated simply for food turns on a paradox. Farming requires hungry people to set aside edible animals or seeds so as to have some to reproduce the following year. My Penn State colleague David Webster explores the idea in a paper due to appear in Current Anthropology. He concludes that it only becomes logical not to eat all you have if the species in question is already well on the way to being domesticated, because only then are you sufficiently familiar with it to know how to benefit from taking the long view. This means for an animal species to become a walking larder, our ancestors must have already spent generations living intimately with it, exerting some degree of control over breeding. Who plans that far in advance for dinner?

Then there’s the clincher. A domestic animal that is slaughtered for food yields little more meat than a wild one that has been hunted, yet requires more management and care. Such a system is not an improvement in food security. Instead, I believe domestication arose for a different reason, one that offsets the costs of husbandry. All domestic animals, and even semi-domesticated ones, offer a wealth of renewable resources that provide ongoing benefits as long as they are alive. They can provide power for hauling, transport and ploughing, wool or fur for warmth and weaving, milk for food, manure for fertiliser, fuel and building material, hunting assistance, protection for the family or home, and a disposal service for refuse and ordure. Domestic animals are also a mobile source of wealth, which can literally propagate itself.

Domestication, more than ever, drew upon our understanding of animals to keep them alive and well. It must have started accidentally and been a protracted reciprocal process of increasing communication that allowed us not just to tame other species but also to permanently change their genomes by selective breeding to enhance or diminish certain traits.

The great benefit for people of this caring relationship was a continuous supply of resources that enabled them to move into previously uninhabitable parts of the world. This next milestone in human evolution would have been impossible without the sort of close observation, accumulated knowledge and improved communication skills that the animal connection started selecting for when our ancestors began hunting at least 2.6 million years ago.

What does it matter if the animal connection is a fundamental and ancient influence on our species? I think it matters a great deal. The human-animal link offers a causal connection that makes sense of three of the most important leaps in our development: the invention of stone tools, the origin of language and the domestication of animals. That makes it a sort of grand unifying theory of human evolution.

And the link is as crucial today as it ever was. The fundamental importance of our relationship with animals explains why interacting with them offers various physical and mental health benefits – and why the annual expenditure on items related to pets and wild animals is so enormous.

Finally, if being with animals has been so instrumental in making humans human, we had best pay attention to this point as we plan for the future. If our species was born of a world rich with animals, can we continue to flourish in one where we have decimated biodiversity?

Pat Shipman is adjunct professor of biological anthropology at Penn State University. Her book The Animal Connection: A new perspective on what makes us human is published by W. W. Norton & Company on 13 June

The Controversy about Hypothesis Testing

From an interesting call for papers:

“Scientists spend a lot of time testing a hypothesis, and classifying experimental results as (in)significant evidence. But even after a century of hot debate, there is no consensus on what this concept of significance implies, how the results of hypothesis tests should be interpreted, and which practical pitfalls have to be avoided. Take the fierce criticisms of significance testing in economics, take the endless debate about statistical reform in psychology, take the foundational disagreement between frequentists and Bayesians about what constitutes statistical evidence.”
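To make the frequentist-Bayesian disagreement concrete, here is a minimal simulated sketch (not drawn from the call for papers): the same data are scored with a classical p-value and with a Bayes factor built from the standard BIC shortcut. With a large sample and a tiny true effect, the two can point in opposite directions, the so-called Jeffreys-Lindley paradox.

```python
# A minimal sketch of the frequentist/Bayesian tension described above.
# All numbers are simulated; the Bayes factor uses the common
# BIC-difference approximation, not any particular package's API.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 500
x = rng.normal(loc=0.1, scale=1.0, size=n)  # tiny true effect

# Frequentist answer: one-sample t-test of H0: mean = 0.
t, p = stats.ttest_1samp(x, popmean=0.0)

# Bayesian answer (BIC approximation): compare H0 (mean fixed at 0)
# against H1 (mean estimated from the data). The shared variance
# parameter is ignored, as in the usual shortcut.
rss0 = np.sum(x ** 2)                 # residual sum of squares under H0
rss1 = np.sum((x - x.mean()) ** 2)    # residual sum of squares under H1
bic0 = n * np.log(rss0 / n)                  # no free parameters
bic1 = n * np.log(rss1 / n) + np.log(n)      # one free parameter (the mean)
bf01 = np.exp((bic1 - bic0) / 2)      # evidence for H0 over H1

print(f"p-value: {p:.4f}")   # may dip below 0.05 ...
print(f"BF01:    {bf01:.2f}")  # ... while the Bayes factor still leans toward H0
```

The point of the sketch is not the particular numbers but the structure of the dispute: the p-value asks how surprising the data are under the null, while the Bayes factor weighs the null against a specific alternative, and the two questions can receive different answers from the same dataset.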


Tony Andersson on Khagram, Dams and Development (H-Water)

Sanjeev Khagram. Dams and Development: Transnational Struggles for Water and Power. Ithaca: Cornell University Press, 2004. 288 pp. $22.95 (paper), ISBN 978-0-8014-8907-5.

Reviewed by Tony Andersson (New York University)
Published on H-Water (May, 2011)
Commissioned by John Broich


The controversies over big dams, and the aggressive promotion of such development projects by multinational organizations like the World Bank, have produced an extensive literature written mostly by environmental and social justice activists reacting to the loss of wildlife, often violent human displacements, and the fiscal costs associated with big dams. A welcome addition to this field, Dams and Development is the first monograph published by Sanjeev Khagram, a political scientist at the University of Washington. Pulling back somewhat from the activist literature, Khagram assumes a more distant view in order to explain why, after the 1970s, big dams as a development model seemed to fall so precipitously out of favor among governments and development agencies. Khagram’s previous work on transnational social movements informs this study of anti-dam activism as he reconstructs the international networks of nongovernmental organizations (NGOs), local activists, and institutions that during the latter twentieth century acted to contest and reform development models that uncritically relied on big dams. Taking India as a case study, and in particular the series of damming schemes in the Narmada Valley, Khagram argues that transnational alliances of anti-dam activists have “dramatically altered the dynamics surrounding big dams from the local to the international levels,” affecting not only the scale but also the actual policies that guide large development projects (p. 3). Further, Khagram identifies two principal variables on which the success of anti-dam campaigns hinges: the extent to which local activists in developing countries are able to internationalize their campaigns, linking up with donors and lobbyists in the United States or Europe; and the degree of democratization in the country concerned. According to Khagram, successful anti-dam movements depended both on a robust network of international activists and on democratic domestic political systems.

Khagram begins the book by elaborating his theoretical framework and general argument. He reviews the rise of the “big dam regime” and its unexplained fall by the 1990s. After noting the inadequacy of technical or financial constraints in explaining the precipitous decline of dam construction worldwide after a century of enthusiastic growth, Khagram details how transnational alliances and democratic institutions facilitated a global shift in norms in relation to the environment, human rights, and indigenous peoples.

Chapters 2 through 4 constitute the heart of the book, exploring India’s infatuation and subsequent disillusionment with dams after the Second World War. In chapter 2, Khagram briefly recounts the rise of big dams as a development model and applies his theoretical arguments to the case of the Silent Valley–the world’s first successful transnational campaign to stop a major dam project, according to the author. He then proceeds to question why, despite an apparent lack of financial or technical constraints, dam building across India declined rapidly after the 1970s. Visiting a series of sites in the subcontinent, Khagram points to the alliances between local activists and international NGOs that, he says, were the motive force behind the decline in dam construction. He also enumerates a group of countervailing trends that worked against anti-dam campaigns, notably a revamped lobbying campaign by dam boosters, the emergence of neoliberal ideology among third world leaders, and a right-wing Hindu nationalist movement that quashed the voices of many anti-dam activists.

Chapter 3 ventures into the history of India’s monumental plans to dam the Narmada Valley. Khagram is keen to note that local resistance met virtually every proposed dam, but that it was ineffective without the support of international organizations that could pressure Western legislators and World Bank managers. He asserts the emergence of a global set of norms pertaining to environmental conservation, human rights, and the protection of indigenous peoples as an essential factor in the success of the anti-dam movements in reforming policies at the bank. Chapter 4 chronicles the major events that eventually led the World Bank to withdraw funding from the Narmada projects in 1993, highlighting the consolidation of the anti-dam coalition in the late 1980s after a momentary split. Here Khagram emphasizes the role that India’s democratic institutions–notably the judiciary–played in upholding settlements that favored the anti-dam coalitions within India’s borders.

The focus shifts in chapter 5 from India to a comparative analysis of dam building and resistance. The author reviews examples from Brazil, Indonesia, South Africa, and China. He evaluates the success of anti-dam movements in each of the five countries (India included), arguing that the outcome can be understood as a product of the two factors – international social mobilization and domestic democratization – that he identifies in the first chapter. According to Khagram, Brazil’s relatively democratic political system and the close ties between local activists and international NGOs successfully stopped the damming of the Xingu River. In South Africa and Indonesia, authoritarian regimes limited the strength of transnational anti-dam movements, even in spite of Indonesia’s relatively well-organized campaigns of resistance. China, lacking both democratic institutions and meaningful social mobilization, has yet to witness any effective resistance to dam building.

The final chapter again alters course, placing the rise of anti-dam movements in global perspective. Khagram locates the origins of the turn away from dams in the 1990s among environmental activism in the United States and Europe from the 1960s. While acknowledging that local resistance to dams has always been present, if ineffective, in the third world, Khagram emphasizes the role played by international NGOs in changing the discourse and policies surrounding dams. Of particular importance were the campaigns to reform dam policy at the World Bank, which were notable for their public visibility and effective coordination between local activists and operatives in a position to influence managers at the bank and their political backers in the United States and Europe. Khagram holds up a series of major declarations, internal reviews by the bank, and the reformist tone of the World Commission on Dams as evidence for the success of these anti-dam coalitions in bringing an end to the big dam regime. Khagram concludes with a review of alternative explanations of the global decline of dam construction and reaffirms his argument, allowing that the anti-dam movement probably contributed little toward the adoption of new sustainable development models that substantially reduced poverty.

The most valuable contribution of this book is its placement of the anti-dam movement within a framework of global changes in development praxis and international norms governing the rights of indigenous peoples. Critics of big dams often discuss the global reach of large organizations like the World Bank, but rarely are the bank’s antagonists given such geographical breadth. Too often, commentators present indigenous communities as passive, tragic victims of an inexorable modernizing state. Khagram instead demonstrates the agency of marginalized peoples, leveraged through international networks of NGOs, as well as the institutional and political obstacles that they face.

Given the valuable contribution just mentioned, a number of concerns ought to be raised with this book. The first is the author’s too easy dismissal of alternative explanations for the turn away from dams during the 1980s, especially the turn to austerity over stimulus at the World Bank and the International Monetary Fund. In Latin America, dams and their associated projects were a major contributor to the fiscal problems that boiled over into the debt crises starting in the late 1970s. Governments and lenders (public and private) were reluctant to undertake big dams at a time of economic uncertainty and shrinking budgets, even if dams retained their appeal as monuments to progress.

One might also like to see more direct evidence connecting the anti-dam movement to specific and transformative changes in World Bank policy or international norms vis-à-vis indigenous peoples and human rights. The relative absence of such evidence in the face of a global resurgence of big dam construction in the first decade of the twenty-first century (again funded by the World Bank) somewhat undermines the argument that transnational anti-dam networks did, in fact, effect real change in attitudes toward modernization, development, or the rights of indigenous peoples. Likewise, the author’s treatment of Brazil–especially its democratic credentials–glosses over important contradictions in that nation’s political history and the limited access to power by poor Brazilians. Brazil’s newly minted president–formerly a leftist guerrilla and once a dedicated opponent of the Xingu River dam–is now its most prominent booster and has been accused of suppressing the legal petitions brought against the dam by the indigenous communities it will displace. This suggests that the allure of big-ticket modernization projects like dams has overridden the democratic politics and international alliances that Khagram has proposed as its remedy. Reading this book in 2011, one is left with a sense that the author would have benefited from a more critical view of World Bank reports and the efficacy of UN declarations. At first glance, the argument is compelling and optimistic, but a skeptical look at the sources cited reveals some weak evidentiary foundations.

Citation: Tony Andersson. Review of Khagram, Sanjeev, _Dams and Development: Transnational Struggles for Water and Power_. H-Water, H-Net Reviews. May, 2011. URL: https://www.h-net.org/reviews/showrev.php?id=33220

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.

Forest Code as approved by the Chamber of Deputies could worsen climate change, IPCC scientists warn (Agência Brasil, JC)

JC e-mail 4268, 30 May 2011.

According to the researchers, the version of the Forest Code approved by the Chamber of Deputies jeopardizes the international targets Brazil has adopted for cutting greenhouse gas emissions.

Four of the Brazilian scientists who serve on the United Nations (UN) Intergovernmental Panel on Climate Change (IPCC) have warned that the climate situation could worsen if the current version of the Forest Code approved by the Chamber comes into force. According to them, increased pressure on forested areas will compromise the international commitments Brazil made at the 2009 Copenhagen Conference: to cut greenhouse gas (GHG) emissions by up to 38.9% and to reduce deforestation in the Amazon by 80% by 2020.

The scientists, who are affiliated with the Graduate Engineering Programs Coordination of the Federal University of Rio de Janeiro (Coppe-UFRJ), discussed the issue during a seminar on the conclusions of an IPCC report on renewable energy, held last Thursday (the 26th).

For scientist Suzana Kahn, the international positions Brazil has taken will be undermined if the Senate does not change the text approved by the Chamber, or if President Dilma Rousseff does not issue vetoes. “The impact of the code is very large, insofar as most of Brazil’s emissions-reduction commitment is tied to cutting deforestation. Any action that weakens that fight will make it much harder to meet Brazil’s targets,” she said.

She warned that there will be immediate climate changes in Brazil and South America as more forest is cleared to make room for farming and cattle ranching, as has been happening in the Cerrado and the Amazon. “With deforestation, more carbon is released into the atmosphere, affecting the microclimate, altering rainfall patterns and causing soil erosion, directly harming the population.”

Scientist Roberto Schaeffer, a professor of energy planning at Coppe, said that if the Forest Code enters into force as approved by the deputies, it could undermine the country’s investment in biofuels, especially sugarcane, as sources of clean energy. “Today biofuels are understood to be one of the alternatives for dealing with climate change. The moment Brazil loosens the rules and pardons deforesters, it creates distrust about the way biofuel is produced in the country and about whether it can cut [GHG] emissions the way we have always claimed,” he said.

Geographer Marcos Freitas, who is also a member of the IPCC, argued that the debate over the code should focus more on making better use of the land, especially on restoring degraded areas. “Brazil has 700,000 square kilometers of land in the Amazon that has already been deforested, and at least two thirds of it is degraded. If the code concentrated on that land, it would already be a gain, because it would keep the rest from being cleared. Standing forest is the bigger worry, because the trend in the Amazon is the expansion of low-profitability cattle ranching,” he said.

In his view, there will be impacts on the climate of the region and the country if the new code leads to greater forest destruction. “This is worrying, because historically Brazil’s largest [GHG] emissions, at the global level, have come from land use in the Amazon, which accounts for about 80% of our emissions. At the last [climate] conferences we came out looking good, presenting scenarios favorable to reduced deforestation in the region. Now there is concern that we may return to levels above 10,000 square kilometers per year.”

The possibility of an environmental setback, should the Chamber’s decision on the code stand, was also raised by engineer Segen Estefen, a specialist in impacts on the oceans. “Congress’s behavior was disappointing: an amnesty for those who deforested. That is impunity, and a terrible signal from the deputies about the seriousness of environmental protection. The view of those with an interest in deforestation prevailed. That is always very bad for Brazil’s image,” he said.

The director of Coppe, Luiz Pinguelli, sent a letter to President Dilma suggesting that she veto part of the code if there are no positive changes in the Senate. As executive secretary of the Brazilian Forum on Climate Change, Pinguelli warned of the country’s difficulty in meeting its international targets if environmental devastation is not curbed.

“The problem is the increase in deforestation in some states; that is a bad sign. With the approval of the code, we could be encouraging this situation. It would have been possible to negotiate in a way that benefited small farmers. But what passed is very bad,” said Pinguelli, who still hopes the Senate will debate the bill in greater depth and improve what was approved in the Chamber.

(Agência Brasil – 28/5)

Cartographic material reveals the Portuguese colonial imagination (FAPESP)

HUMANITIES | DIGITAL LIBRARY
The mine of maps
Márcio Ferrari
Print edition 183 – May 2011

A view of Brazil that reveals its exploitation. © DIVULGAÇÃO

A precious body of cartographic material has been gaining unrestricted visibility thanks to the work of the group of researchers at the University of São Paulo (USP) responsible for building the Digital Library of Historical Cartography. Online access is free. The fruit of a concept developed by the Laboratory for the Study of Historical Cartography (Lech), the site not only offers a collection of rare maps printed between the 16th and 19th centuries, but also makes possible a series of cross-references, comparisons and interpretive keys, with all the plurality and speed of the internet. After all, “one map alone does not make a summer,” as one of the project’s coordinators, Iris Kantor, a professor in USP’s History Department, puts it. The collection reveals much more than geographic information. It also makes it possible to trace the construction of an imaginary over time, revealed in visions of Brazil conceived outside the country. The work formed part of a large thematic project, called Dimensions of the Portuguese Empire, coordinated by professor Laura de Mello e Souza and supported by FAPESP.

So far the collection has had two main sources. The first was the set of annotations made over 60 years by Admiral Max Justo Lopes, one of Brazil’s leading specialists in cartography. The second was the private collection of Banco Santos, taken into state custody in 2005 during the court-ordered intervention in the assets of banker Edemar Cid Ferreira. A judicial decision transferred custody of the maps to USP’s Institute of Brazilian Studies (IEB), a laudable move, since the collection, according to Iris Kantor, “was being kept in very precarious conditions in a warehouse, with no concern for proper storage.” About 300 maps were recovered. The original collection is known to have been much larger, but the whereabouts of the remaining maps are unknown.

The first step was to recover and restore the retrieved items. They arrived at USP “totally naked,” requiring a full effort of identification, dating, attribution of authorship and so on. During 2007 and 2008, the IEB’s Digital Reproduction Laboratory researched, acquired and applied the technology needed to reproduce the map collection in high resolution. Several attempts were needed to achieve the desired precision of line and color. Next, the Informatics Center of USP’s São Carlos campus (Cisc/USP) developed dedicated software, making it possible to build a database capable of interacting with the general catalog of the USP library (Dedalus), as well as collecting and transferring data from other databases available on the internet. One source of inspiration for the researchers was the website of the English collector and graphic artist David Rumsey, which holds 17,000 maps. Another was the pioneering Virtual Library of Historical Cartography of Brazil’s National Library, which brings together 22,000 digitized documents. In the future, USP’s own cartographic holdings are expected to be incorporated into the Digital Library of Historical Cartography. Priority was given to the Banco Santos maps because they do not belong to the university and could at any moment be claimed in court to settle debts.

Today the Digital Library offers “carto-bibliographic and biographical information, data of a technical and editorial nature, as well as explanatory entries that seek to contextualize the process of production, circulation and appropriation of cartographic images.” “There is no such thing as a naive map,” says Iris Kantor, pointing to the need for this body of information in order to understand what lies hidden beneath the surface of geographic outlines and place names. “The historian’s premise is that all maps lie; manipulation is an important feature of any piece of cartography.”

That manipulation served the geopolitical and commercial interests of the period and of those who produced or commissioned a map. Historian Paulo Miceli of the State University of Campinas (Unicamp), who early in the last decade was called in by Banco Santos to advise on organizing the collection, recalls that the first cartographic record of what is now called Brazil was a map by the Spanish navigator Juan de la Cosa (1460-1510), dated 1506, which shows “the demarcation line of the Treaty of Tordesillas, Africa very well drawn and, to its left, a very small triangle indicating South America.” “Brazil gradually emerged from a kind of fog of documents, conditioned, among other things, by the Portuguese crown’s strict control over the work of its cartographers, who were subject even to the death penalty.” This gradual “apparition” of Brazil in the imperial geopolitical scheme is the subject of Miceli’s livre-docência (habilitation) thesis, appropriately titled O desenho do Brasil no mapa do mundo (The drawing of Brazil on the map of the world), to be published in book form later this year by the Unicamp press. The title alludes to the Theatrum orbis terrarum (Theatre of the world) by the Flemish geographer Abraham Ortelius (1527-1598), considered the first modern atlas.

Navigators – Contrary to what one might imagine, the main, practical function of old maps was not to guide explorers and navigators. Until the 19th century, these relied on written sailing directions, the “cartas de marear,” recorded on “parchments with neither beauty nor ambiguity, punctured by compasses and other instruments, which ended up as wrappers for document folders in cartographic archives,” according to Miceli. “Maps were objects of ostentation and prestige, with value for enjoyment and ornamentation, for nobles and scholars,” says Iris Kantor. “One of the Vatican's treasures was its cartographic collection.” Navigation rutters, by contrast, existed only as manuscripts, never in print, a circumstance that gave maps the status of privileged documents. The original metal plates, with alterations over time, lasted as long as 200 years, always in the hands of “families” of cartographers, publishers and booksellers. Sometimes these families really were blood-related groups with hereditary functions; at other times they were highly specialized workshops. The artists, with experience accumulated over decades, did not travel, and gathered their information from “often illiterate navigators,” according to Miceli. To give an idea of the prestige attached to cartography, he recalls that the Atlas maior by the Dutchman Willem Blaeu (1571-1638), painted with gold ink, was considered the most expensive book of the Renaissance.

One of the search criteria in the Biblioteca Digital de Cartografia Histórica is precisely by “schools” of cartographers, among them the Flemish, the French and the Venetian – always bearing in mind that the fundamental knowledge came from Portuguese navigators and cosmographers. Iris Kantor considers that these schools interpenetrate, and she plans, in the future, to replace the word “school” with “style.” The team also intends to reconstruct the genealogy of map production throughout the period covered. The study of these documents includes identifying those that contain deliberate errors as part of a counter-information effort, which Miceli calls “patriotic adulteration” – maps, for instance, that falsify the location of natural features such as rivers to favor the Portuguese or the Spanish in the division established by the Treaty of Tordesillas.

Evidence of cartography's quasi-propagandistic function can be seen in the map Brasil, of 1565, produced by the Venetian school, which illustrates the opening of this article. Geographic precision is not exactly what stands out in it. “The toponymy is not very dense, even though the entire coast had already been named by that time,” says Iris Kantor. “It is a work aimed at a lay public, perhaps above all at merchants, as indicated by the little ships bearing the coats of arms of the French and Portuguese crowns. We see the brazilwood trade, still with no indication of political sovereignty. It looks like a region of free access. The depiction of the indigenous people and their contact with foreigners conveys cordiality and reciprocity.”

“Deep down, maps serve as representations of ourselves,” the USP professor continues. “In studying post-independence Brazilian cartography, for example, what stands out is our vision of national identity based on a romantic, liberal and naturalist geographic culture, which represents the country as a geographic continuum between the Amazon and the River Plate. In the same period, the idea of the people was nowhere near as homogeneous. It is no accident that the men who achieved independence and built the country's legal framework were connected to the natural sciences, to cartography and so on. The geographic question was imperative in the creation of national identity.”

A very different example of the use of digital resources in map research is under way at Unicamp, an offshoot of the project Trabalhadores no Brasil: identidades, direitos e política, coordinated by professor Silvia Hunold Lara and supported by FAPESP. It is the study Mapas temáticos de Santana e Bexiga, on the daily life of urban workers between 1870 and 1930. According to the professor, it allows the daily life of the residents of these neighborhoods to be reconstructed, “not dissociated from their way of working and their demands for rights.”

Order in Chaos (FAPESP)

31/05/2011

By Elton Alisson

Researchers develop a theoretical model to explain and determine the conditions under which isochronal synchronization occurs in chaotic systems. The study could lead to improvements in systems such as telecommunications.

Agência FAPESP – In nature, swarms of fireflies send luminous signals to one another. At first they do so autonomously, individually and independently, but under certain circumstances this can give rise to a robust collective phenomenon called synchronization. As a result, thousands of fireflies blink in unison, rhythmically, emitting light signals in synchrony with the rest.

A little more than 20 years ago it was discovered that synchronization also occurs in chaotic systems – complex systems with unpredictable behavior found in the most varied areas, such as economics, climate or agriculture. Another, more recent discovery was that synchronization withstands delays in the propagation of the emitted signals.

In these situations, under certain circumstances, synchronization can emerge in its isochronal form, that is, with zero lag. This means that devices such as oscillators are perfectly synchronized in time even though each receives delayed signals from the others. Until now, however, the theoretical models developed to explain the phenomenon had not taken this fact into account.
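
For intuition, here is a minimal numerical sketch (our illustration, not the model in the paper): two chaotic logistic maps exchange states with a transmission delay tau, and each also feeds back its own tau-delayed state. Because the delayed self- and cross-coupling enter symmetrically, the delayed terms cancel out of the difference between the two units, so for sufficiently strong coupling the maps lock with zero lag even though each only ever sees stale information from the other.

```python
# Minimal sketch of zero-lag (isochronal) synchronization under delay.
# Assumed toy setup, not the authors' model: two logistic maps with
# mutual delayed coupling plus matched delayed self-feedback.
import numpy as np

def f(u):
    return 4.0 * u * (1.0 - u)          # fully chaotic logistic map

tau, eps, T = 5, 0.7, 2000              # delay, coupling strength, steps
rng = np.random.default_rng(1)
x = rng.uniform(0.2, 0.8, T)            # random initial histories
y = rng.uniform(0.2, 0.8, T)

for t in range(tau, T - 1):
    # Symmetric self + cross delayed feedback: the delayed terms cancel
    # in x - y, so the transverse dynamics are delay-free and contract
    # when |1 - eps| times the map's average stretching (about 2) is < 1.
    x[t+1] = (1-eps)*f(x[t]) + 0.5*eps*(f(x[t-tau]) + f(y[t-tau]))
    y[t+1] = (1-eps)*f(y[t]) + 0.5*eps*(f(y[t-tau]) + f(x[t-tau]))

# The zero-lag error collapses to machine precision despite the delay.
print("max |x - y| over last 200 steps:", np.max(np.abs(x[-200:] - y[-200:])))
```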

New research by scientists at the Instituto Tecnológico de Aeronáutica (ITA) and the Instituto Nacional de Pesquisas Espaciais (Inpe) has produced a theoretical model showing how synchronization occurs when there is a delay in the sending and receiving of information between chaotic oscillators.

The results of the study, which can be used to improve technological systems, were published in April in the Journal of Physics A: Mathematical and Theoretical.

In the study, the researchers sought to explain synchronization when there is a delay in the receipt of information between chaotic oscillators. The goal is to determine the conditions under which the phenomenon occurs in real systems.

“Using Lyapunov-Krasovskii stability theory, which deals with the problem of stability in dynamical systems, we established stability criteria that, from parameters such as the delay in the receipt of information between the oscillators, make it possible to determine whether the oscillators will enter a state of isochronal synchronization,” one of the authors of the paper, José Mario Vicensi Grzybowski, told Agência FAPESP.

“It was the first fully analytical demonstration of the stability of isochronal synchronization. There is nothing like it in the literature,” said Grzybowski, who is pursuing doctoral work in electronic and computer engineering at ITA with a FAPESP scholarship.

The study's findings could make it possible to improve technological systems based on synchronization, especially chaos-based telecommunication systems.

Possible applications also include satellites flying in formation, where each satellite needs to maintain an adequate distance relative to the others and, at the same time, establish a common reference (synchronization) that allows the exchange of information and the collection and electronic combination of images from the various satellites in the formation.

“In this case, the reference can be established by means of a phenomenon that emerges naturally as long as the appropriate conditions are provided, reducing or even eliminating the need for algorithms,” he said.

Natural complex networks

Unmanned aerial vehicles, which can explore a given region jointly, as well as robots and distributed control systems, which also need to work in a coordinated way within a network, could make use of the research results.

The study's authors also intend to make the synchronization phenomenon occur in technological systems without the need for a leader that dictates how the other oscillating agents should behave.

“We intend to eliminate the figure of the leader and make synchronization arise from the interaction among the agents, as happens with a species of firefly in Asia that synchronizes without any individual leading,” said Elbert Einstein Macau, a researcher at Inpe and another author of the study, in which Takashi Yoneyama, of ITA, also took part.

According to them, this research analyzed synchronization with a time delay in the transmission of information between two oscillators. In their current work, however, the results will be extended to a network of oscillators, scaling up both the problem and its solution.

That way, they say, it will be possible to model phenomena based on isochronal synchronization at network scale and to address natural phenomena whose complexity is many times greater.

“In principle, any real phenomenon based on isochronal synchronization can be treated with these theoretical elements, which can serve in the design of technological networks, or to analyze and understand emergent behavior in natural networks, even those we have no way of influencing directly,” said Grzybowski.

The article Stability of isochronal chaos synchronization (doi:10.1088/1751-8113/44/17/175103) can be read at http://iopscience.iop.org/1751-8121/44/17/175103/pdf/1751-8121_44_17_175103.pdf

World-Wide Assessment Determines Differences in Cultures (NSF)

[Despite the methodological problems of this kind of research (equating national borders with cultural borders, reification of the concept of culture, a synchronic approach, among many others), the results are provocative and thus invite an interesting debate. RT]

Press Release 11-106 – Video
Michele Gelfand discusses what makes cultures restrictive versus permissive.


University of Maryland Psychology Professor Michele Gelfand discusses recent research that investigates the “tightness” and “looseness” of 33 countries. “Tight” refers to nations that have strong social norms and low tolerance for deviation from those norms, whereas another term, “loose,” refers to nations with weak social norms and a high tolerance for deviation from those norms.

Credit: University of Maryland/National Science Foundation.

Press Release 11-106
World-Wide Assessment Determines Differences in Cultures

Ukraine, Israel, Brazil and the United States are “loose” cultures

Population density helps determine whether a country is tight or loose as this German street hints.

May 26, 2011

Conflicts and misunderstandings frequently arise between individuals from different cultures. But what makes cultures different? What makes one more restrictive and another less so?

A new international study led by the University of Maryland and supported by the National Science Foundation’s Division of Behavioral and Cognitive Sciences offers insights that may help explain such cultural differences and bridge the gaps between them.

Published in the May 27 issue of the journal Science, the study for the first time assesses the degree to which countries are restrictive versus permissive, and it all comes down to factors that shape societal norms.

The researchers found wide variation in the degree to which various societies impose social norms, enforce conformity and punish anti-social behavior. They also found that the more threats a society has experienced, the more likely it is to be restrictive.

“There is less public dissent in tight cultures,” said University of Maryland Psychology Professor Michele Gelfand, who led the study. “Tight societies require much stronger norms and are much less tolerant of behavior that violates norms.”

“Tight” refers to nations that have strong social norms and low tolerance for deviation from those norms, whereas another term, “loose,” refers to nations with weak social norms and a high tolerance for deviation from them.

Gelfand and colleagues found that countries such as Japan, Korea, Singapore and Pakistan are much tighter whereas countries such as the Ukraine, Israel, Brazil and the United States are looser.

“It is important, in our view, to be mindful that we don’t think that either culture is worse or better,” said Gelfand.

She and her colleagues examined cultural variation in both types of societies.

“We believe this knowledge about how tight or loose a country is and why it is that way can foster greater cross-cultural tolerance and understanding,” said Gelfand. “Such understanding is critical in a world where both global interdependence and global threats are increasing.”

The researchers surveyed 6,823 respondents in 33 nations. In each nation, individuals from a wide range of occupations, as well as university students, were included. Data on environmental and historical threats and on societal institutions were collected from numerous established databases. Historical data–population density in 1500, history of conflict over the last hundred years, historical prevalence of disease outbreaks–were included whenever possible, and data on a wide range of societal institutions, including government, media and criminal justice, were obtained.

“You can see tightness reflected in the response in Japan to the recent natural disasters,” said Gelfand, referring to the massive earthquake and tsunami that hit the country on March 11 of this year.

“The order and social coordination after the event, we believe, is a function of the tightness of the society,” Gelfand said, noting that tightness is needed in Japan to face these kinds of ecological vulnerabilities.

The research further showed that a nation’s tightness or looseness is in part determined by the environmental and human factors that have shaped a nation’s history–including wars, natural disasters, disease outbreaks, population density and scarcity of natural resources.

Tight and loose societies also vary in their institutions, with tight societies having more autocratic governments, more closed media and criminal justice systems that have more monitoring and greater deterrence of crime as compared to loose societies.

The study found that the situations that people encounter differ in tight and loose societies. For example, everyday situations–like being in a park, a classroom, the movies, a bus, at job interviews, restaurants and even one’s bedroom–constrain behavior much more in tight societies and afford a wider range of behavior in loose societies.

“We also found that the psychological makeup of individual citizens varies in tight and loose societies,” Gelfand said. “For example, individuals in tight societies are more prevention focused, have higher self-regulation strength and have higher needs for order and self-monitoring abilities than individuals in loose societies.”

These attributes, Gelfand said, help people to adapt to the level of constraint, or latitude, in their cultural context, and at the same time, reinforce it.

The research team combined all these measures in a multi-level model that shows how tight and loose systems are developed and maintained.

Gelfand said knowledge about these cultural differences can be invaluable to many people–from diplomats and global managers to military personnel, immigrants and travelers–who have to traverse the tight-loose divide.

“When we understand why cultures, and the individuals in those cultures, are the way they are, it helps us to become less judgmental. It helps us to understand and appreciate societal differences.”

-NSF-

Media Contacts
Bobbie Mixon, NSF (703) 292-8485 bmixon@nsf.gov
Lee Tune, University of Maryland (301) 405-4679 ltune@umd.edu

Principal Investigators
Michele Gelfand, University of Maryland (301) 405-6972 mgelfand@psyc.umd.edu

Witchy Town’s Worry: Do Too Many Psychics Spoil the Brew? (N.Y. Times)

Lorelei Stathopoulos is concerned Salem will lose its “quaint reputation.” Photo: Gretchen Ertl for The New York Times.
By KATIE ZEZIMA. Published: May 26, 2011
SALEM, Mass. — Like any good psychic, Barbara Szafranski claims she foresaw the problems coming.
Gretchen Ertl for The New York Times

Christian Day, who owns two shops, thinks competition is a good thing.

Gretchen Ertl for The New York Times

Debra Ann Freeman read a customer’s tarot cards in Salem, Mass.

Her prophecy came in 2007, as the City Council was easing its restrictions on the number of psychics allowed to practice in this seaside city, where self-proclaimed witches, angels, clairvoyants and healers still flock 319 years after the notorious Salem witch trials. Some hoped for added revenues from extra licenses and tourists. Others just wanted to bring underground psychics into the light.

Just as Ms. Szafranski predicted, the number of psychic licenses has drastically increased, to 75 today, up from a mere handful in 2007. And now Ms. Szafranski, some fellow psychics and city officials worry the city is on psychic overload.

“It’s like little ants running all over the place, trying to get a buck,” grumbled Ms. Szafranski, 75, who quit her job as an accountant in 1991 to open Angelica of the Angels, a store that sells angel figurines and crystals and provides psychic readings. She says she has lost business since the licensing change.

“Many of them are not trained,” she said of her rivals. “They don’t understand that when you do a reading you hold a person’s life in your hands.”

Christian Day, a warlock who calls himself the “Kathy Griffin of witchcraft,” thinks the competition is good for Salem.

“I want Salem to be the Las Vegas of psychics,” said Mr. Day, who used to work in advertising and helped draft the 2007 regulations. Since they went into effect, he has opened two stores, Hex and Omen.

But not everyone is sure that quantity can ensure quality. Lorelei Stathopoulos, formerly an exotic dancer known as Toppsey Curvey, has been doing psychic readings at her store, Crow Haven Corner, for 15 years. She thinks psychics should have years of experience to practice here.

“I want Salem to keep its wonderful quaint reputation,” said Ms. Stathopoulos, who was wearing a black tank top that read “Sexy witch.” “And with that you have to have wonderful people working.”

Under the 2007 regulations, psychics must have lived in the city for at least a year to obtain an individual license, and businesses must be open for at least a year to hire five psychics. License applicants are also subject to criminal background checks.

Ms. Stathopoulos says a garden-variety reader makes 40 percent of a $35 reading that lasts 15 minutes. She charges $90 and up for a half-hour of her services, and keeps all of that.

Now, talk has started about new regulations that would include a cap on the number of psychic businesses, but the grumbling has in no way reached the level of viciousness that occurred in 2007, when someone left the mutilated body of a raccoon outside Ms. Szafranski’s shop and Mr. Day and Ms. Stathopoulos got into a fight.

Ms. Szafranski says she plans to send the council an official complaint in June.

This time, she has no prediction how it will turn out.

Intuitions Regarding Geometry Are Universal, Study Suggests (ScienceDaily)

ScienceDaily (May 26, 2011) — All human beings may have the ability to understand elementary geometry, independently of their culture or their level of education.

A Mundurucu participant measuring an angle using a goniometer laid on a table. (Credit: © Pierre Pica / CNRS)

This is the conclusion of a study carried out by CNRS, Inserm, CEA, the Collège de France, Harvard University and the Paris Descartes, Paris-Sud 11 and Paris 8 universities (1). It was conducted among Amazonian Indians living in an isolated area, who had not studied geometry at school and whose language contains little geometric vocabulary. Their intuitive grasp of elementary geometric concepts was compared with that of populations who, on the contrary, had been taught geometry at school. The researchers were able to show that all human beings may share this geometric intuition. The ability, however, may only emerge from the age of 6-7 years. It could be innate or instead acquired at an early age as children become aware of the space around them. This work is published in PNAS.

Euclidean geometry makes it possible to describe space using planes, spheres, straight lines, points, etc. Can geometric intuitions emerge in all human beings, even in the absence of geometric training?

To answer this question, the team of cognitive science researchers devised two experiments to evaluate geometric performance regardless of education level. The first test consisted of answering questions on the abstract properties of straight lines, in particular their infinite character and their parallelism properties. The second test involved completing a triangle by indicating the position of its apex as well as the angle at that apex.

To carry out this study properly, it was necessary to have participants who had never studied geometry at school, the objective being to compare their performance on these tests with that of people trained in the discipline. The researchers focused their study on the Mundurucu Indians, living in an isolated part of the Amazon Basin: 22 adults and 8 children aged between 7 and 13. Some of the participants had never attended school, while others had been to school for several years, but none had received any training in geometry. To introduce geometry to the Mundurucu participants, the scientists asked them to imagine two worlds, one flat (a plane) and one round (a sphere), dotted with villages (corresponding to the points of Euclidean geometry) and paths (straight lines). They then asked a series of questions illustrated by geometric figures displayed on a computer screen.

Around thirty adults and children from France and the United States, who, unlike the Mundurucu, had studied geometry at school, were also subjected to the same tests.

The result was that the Mundurucu Indians proved fully capable of solving geometric problems, particularly in planar geometry. For example, to the question “Can two paths never cross?”, a very large majority answered “Yes.” Their responses to the second test, the triangle task, highlight the intuitive character of an essential property of planar geometry, namely that the sum of the angles of a triangle is constant (equal to 180°).

And, in a spherical universe, it turns out that the Amazonian Indians gave better answers than the French or North American participants who, by virtue of learning geometry at school, acquire greater familiarity with planar geometry than with spherical geometry. Another interesting finding was that young North American children between 5 and 6 years old (who had not yet been taught geometry at school) had mixed test results, which could signify that a grasp of geometric notions is acquired from the age of 6-7 years.
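
The planar-versus-spherical contrast behind these answers can be stated exactly, using two textbook formulas (standard geometry, not taken from the paper): in the Euclidean plane the angles of a triangle always sum to 180°, while on a sphere the sum exceeds 180° by the spherical excess, proportional to the triangle's area (Girard's theorem). This is why intuitions that serve well on a plane must be revised on a sphere.

```latex
% Euclidean plane:
\alpha + \beta + \gamma = 180^{\circ}
% Sphere of radius R, triangle of area A (Girard's theorem):
\alpha + \beta + \gamma = 180^{\circ} + \frac{A}{R^{2}} \cdot \frac{180^{\circ}}{\pi}
```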

The researchers thus suggest that all human beings have an ability to understand Euclidean geometry, whatever their culture or level of education. People who have received little or no training could thus grasp notions of geometry such as points and parallel lines. These intuitions could be innate (in which case they emerge only from a certain age, here 6-7 years). If, on the other hand, these intuitions derive from learning (between birth and 6-7 years of age), they must be based on experiences common to all human beings.

(1) The two CNRS researchers involved in this study are Véronique Izard of the Laboratoire Psychologie de la Perception (CNRS / Université Paris Descartes) and Pierre Pica of the Unité “Structures Formelles du Langage” (CNRS / Université Paris 8). They conducted it in collaboration with Stanislas Dehaene, professor at the Collège de France and director of the Unité de Neuroimagerie Cognitive at NeuroSpin (Inserm / CEA / Université Paris-Sud 11), and Elizabeth Spelke, professor at Harvard University.

Journal Reference: Véronique Izard, Pierre Pica, Elizabeth S. Spelke, and Stanislas Dehaene. Flexible intuitions of Euclidean geometry in an Amazonian indigene group. Proceedings of the National Academy of Sciences, 23 May 2011.

Why Are Spy Researchers Building a ‘Metaphor Program’? (The Atlantic)

MAY 25 2011, 4:19 PM ET

ALEXIS MADRIGAL – Alexis Madrigal is a senior editor at The Atlantic. He’s the author of Powering the Dream: The History and Promise of Green Technology.
A small research arm of the U.S. government’s intelligence establishment wants to understand how speakers of Farsi, Russian, English, and Spanish see the world by building software that automatically evaluates their use of metaphors. That’s right, metaphors, like Shakespeare’s famous line, “All the world’s a stage,” or more subtly, “The darkness pressed in on all sides.” Every speaker in every language in the world uses them effortlessly, and the Intelligence Advanced Research Projects Activity wants to know how what we say reflects our worldviews. They call it The Metaphor Program, and it is a unique effort within the government to probe how a people’s language reveals their mindset.

“The Metaphor Program will exploit the fact that metaphors are pervasive in everyday talk and reveal the underlying beliefs and worldviews of members of a culture,” declared an open solicitation for researchers released last week. A spokesperson for IARPA declined to comment at the time.

IARPA wants some computer scientists with experience in processing language in big chunks to come up with methods of pulling out a culture’s relationship with particular concepts. “They really are trying to get at what people think using how they talk,” Benjamin Bergen, a cognitive scientist at the University of California, San Diego, told me. Bergen is one of a dozen or so lead researchers who are expected to vie for a research grant that could be worth tens of millions of dollars over five years, if the teams can show progress toward automatically tagging and processing metaphors across languages.

“IARPA grants are big,” said Jennifer Carter of Applied Research Associates, a 1,600-strong research company that may throw its hat in the Metaphor ring after winning a lead research spot in a separate IARPA solicitation. While no one knows the precise value of the rewards of the IARPA grants and the contracts are believed to vary widely, they tend to support several large teams of multidisciplinary researchers, Carter said. The awards, which would initially go to several teams, could range into the five digits annually. “Generally what happens… there will be a ‘downselect’ each year, so maybe only one team will get money for the whole program,” she said.*

All this to say: The Metaphor Program may represent a nine-figure investment by the government in understanding how people use language. But that’s because metaphor studies aren’t light or frilly and IARPA isn’t afraid of taking on unusual sounding projects if they think they might help intelligence analysts sort through and decode the tremendous amounts of data pouring into their minds.

In a presentation to prospective research “performers,” as they’re known, The Metaphor Program’s manager, Heather McCallum-Bayliss, gave the following example of the power of metaphors in political discussions. Her slide reads:

Metaphors shape how people think about complex topics and can influence beliefs. A study presented participants with a report on crime in a city; they were asked how crime should be addressed in the city. The report contained statistics, including crime and murder rates, as well as one of two metaphors, CRIME AS A WILD BEAST or CRIME AS A VIRUS. The participants were influenced by the embedded metaphor…

McCallum-Bayliss appears to be referring to a 2011 paper published in PLoS ONE, “Metaphors We Think With: The Role of Metaphor in Reasoning,” lead-authored by Stanford’s Paul Thibodeau. In that case, if people were given the crime-as-a-virus framing, they were more likely to suggest social reform and less likely to suggest more law enforcement or harsher punishments for criminals. The differences generated by the metaphor alternatives “were larger than those that exist between Democrats and Republicans, or between men and women,” the study authors noted.

Every writer (and reader) knows that there are clues to how people think and ways to influence each other through our use of words. Metaphor researchers, of whom there are a surprising number and variety, have formalized many of these intuitions into whole branches of cognitive linguistics using studies like the one outlined above (more on that later). But what IARPA’s project calls for is the deployment of spy resources against an entire language. Where you or I might parse a sentence, this project wants to parse, say, all the pages in Farsi on the Internet looking for hidden levers into the consciousness of a people.

“The study of language offers a strategic opportunity for improved counterterrorist intelligence, in that it enables the possibility of understanding of the Other’s perceptions and motivations, be he friend or foe,” the two authors of Computational Methods for Counterterrorism wrote. “As we have seen, linguistic expressions have levels of meaning beyond the literal, which it is critical to address. This is true especially when dealing with texts from a high-context traditionalist culture such as those of Islamic terrorists and insurgents.”

In the first phase of the IARPA program, the researchers would simply try to map from the metaphors a language used to the general affect associated with a concept like “journey” or “struggle.” These metaphors would then be stored in the metaphor repository. In a later stage, the Metaphor Program scientists will be expected to help answer questions like, “What are the perspectives of Pakistan and India with respect to Kashmir?” by using their metaphorical probes into the cultures. Perhaps, a slide from IARPA suggests, metaphors can tell us something about the way Indians and Pakistanis view the role of Britain or the concept of the “nation” or “government.”

The assumption is that common turns of phrase, dissected and reassembled through cognitive linguistics, could say something about the views of those citizens that they might not be able to say themselves. The language of a culture as reflected in a bunch of text on the Internet might hide secrets about the way people think that are so valuable that spies are willing to pay for them.

MORE THAN WORDS

IARPA is modeled on the famed DARPA — progenitors of the Internet among other wonders — and tasked with doing high-risk, high-reward research for the many agencies, the NSA and CIA among them, that make up the American intelligence-gathering force. IARPA is, as you might expect, a low-profile organization. Little information is available from the organization aside from a couple of interviews that its administrator, Lisa Porter, a former NASA official, gave back in 2008 to Wired and IEEE Spectrum. Neither publication can avoid joking that the agency is like James Bond’s famous research crew, but it turns out that the place is more likely to use “cloak-and-dagger” in a sentence than in actual combat with supervillainy.

A major component of the agency’s work is data mining and analysis. IARPA is split into three program offices with distinct goals: Smart Collection “to dramatically improve the value of collected data from all sources”; Incisive Analysis “to maximize insight from the information we collect, in a timely fashion”; and Safe & Secure Operations “to counter new capabilities implemented by our adversaries that would threaten our ability to operate freely and effectively in a networked world.” The Metaphor Program falls under the office of Incisive Analysis and is headed by the aforementioned McCallum-Bayliss, a former technologist at Lockheed Martin and IBM, who co-filed several patents relating to the processing of names in databases.

Incisive Analysis has put out several calls for other projects. They range widely in scope and domain. The Babel Program seeks to “demonstrate the ability to generate a speech transcription system for any new language within one week to support keyword search performance for effective triage of massive amounts of speech recorded in challenging real-world situations.” ALADDIN aims to create software to automatically monitor massive amounts of video. The FUSE Program is trying to “develop automated methods that aid in the systematic, continuous, and comprehensive assessment of technical emergence” using the scientific and patent literature.

All three projects are technologically exciting, but none of them has the poetic ring or the smell of humanity of The Metaphor Program. The Metaphor Program wants to understand what human beings mean through the unvoiced emotional inflection of our words. That’s normally the work of an examined life, not a piece of spy software.

There is some precedent for the work. It comes from two directions: cognitive linguistics and natural language processing. On the cognitive linguistic side, George Lakoff and Mark Johnson of the University of California, Berkeley did the foundational work, notably in their 1980 book, Metaphors We Live By. As summarized recently by Zoltán Kövecses in his book, Metaphor: A Practical Introduction, Lakoff and Johnson showed that metaphors weren’t just the devices of writers but rather “a valuable cognitive tool without which neither poets nor you and I as ordinary people could live.”

In this school of cognitive linguistics, we need to use more embodied, concrete domains in order to describe more abstract ones. Researchers assembled the linguistic expressions we use like “That class gave me food for thought” and “His idea was half-baked” into a construct called a “conceptual category.” These come in the form of awesomely simple sentences like “Ideas Are Food.” And there are whole great lists of them. (My favorites: Darkness Is a Solid; Time Is Something Moving Toward You; Happiness Is Fluid In a Container; Control Is Up.) The conceptual categories show that humans use one domain (“the source”) to describe another (“the target”). So, take Ideas Are Food: thinking is preparing food and understanding is digestion and believing is swallowing and learning is eating and communicating is feeding. Put simply: We import the logic of the source domain into the target domain.


The main point here is that metaphors, in this sense, aren’t soft or literary in any narrow sense. Rather, they are a deep and fundamental way that humans make sense of the world. And unfortunately for spies who want to filter the Internet to look for dangerous people, computers can’t make much sense out of sentences like, “We can make beautiful music together,” which Google translates as something about actually playing music when, of course, it really means, “We can be good together.” (Or as the conceptual category would phrase it: “Interpersonal Harmony Is Musical Harmony.”)

While some of the underlying structures of the metaphors — the conceptual categories — are near universal (e.g. Happy Is Up), there are many variations in their range, elaboration, and emphasis. And, of course, not every category is universal. For example, Kövecses points to a special conceptual category in Japanese centered around the hara, or belly, “Anger Is (In The) Hara.” In Zulu, one finds an important category, “Anger Is (Understood As Being) In the Heart,” which would be rare in English. Alternatively, while many cultures conceive of anger as a hot fluid in a container, it’s in English that we “blow off steam,” a turn of phrase that wouldn’t make sense in Zulu.

These relationships have been painstakingly mapped by human analysts over the last 30 years and they represent a deep culturolinguistic knowledge base. For the cognitive linguistic school, all of these uses of language reveal something about the way the people of a culture understand each other and the world. And that’s really the target of the metaphor program, and what makes it unprecedented. They’re after a deeper understanding of the way people use words because the deep patterns encoded in language may help intelligence analysts understand the people, not just the texts.

For Lakoff, it’s about time that the government started taking metaphor seriously. “There have been 30 years of neglect of current linguistics in all government-sponsored research,” he told me. “And finally there is somebody in the government who has managed to do something after many years of trying.”

UC San Diego’s Bergen agreed. “It’s a totally unique project,” he said. “I’ve never seen anything like it.”

But that doesn’t mean it’s going to be easy to create a system that can automatically deduce Americans’ biases about education from a statement like “The teacher spoon-fed the students.”

Lakoff contends that it will take a long, sustained effort by IARPA (or anyone else) to complete the task. “The quick-and-dirty way” won’t work, he said. “Are they going to do a serious scientific account?”

BUILDING A METAPHOR MACHINE

The metaphor problem is particularly difficult because we don’t even know what the right answers to our queries are, Bergen said.

“If you think about other sorts of automation of language processing, there are right answers,” he said. “In speech recognition, you know what the word should be. So you can do statistical learning. You use humans, tag up a corpus and then run some machine learning algorithms on that. Unfortunately, here, we don’t know what the right answers are.”

For one, we don’t really have a stable way of telling what is and what is not metaphorical language. And metaphorical language is changing all the time. Parsing text for metaphors is tough work for humans, and we’re the ones made for it. The kind of intensive linguistic analysis that’s made Lakoff and his students (of whom Bergen was one) famous can take a human two hours for every 500 words on the page.

But it’s that very difficulty that makes people want to deploy computing resources instead of human beings. And they do have some directions that they could take. James Martin of the University of Colorado played a key role in the late 1980s and early 1990s in defining the problem and suggesting a solution. Martin contended “the interpretation of novel metaphors can be accomplished through the systematic extension, elaboration, and combination of knowledge about already well-understood metaphors,” in a 1988 paper.

What that means is that within a given domain — say, “the family” in Arabic — you can start to process text around that. First you’ll have humans go in and tag up the data, finding the metaphors. Then, you’d use what they learned about the target domain “family” to look for metaphorical words that are often associated with it. Then, you run permutations on those words from the source domain to find other metaphors you might not have before. Eventually you build up a repository of metaphors in Arabic around the domain of family.
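
As a concrete illustration of that loop, here is a schematic sketch; the seed data and the expand() helper are invented for this article, and a real system would work over large tagged corpora rather than toy sentences.

```python
# A schematic bootstrap in the spirit of the process described above:
# start from human-tagged metaphors for a target domain, collect the
# source-domain words they use, then scan new text for those words
# occurring near the target domain to propose candidate metaphors.
seed_metaphors = [("family", "tree"), ("family", "roots")]   # hand-tagged
source_words = {src for _, src in seed_metaphors}            # {"tree", "roots"}

def expand(sentences, target="family"):
    """Propose new (target, source-word) candidates from raw sentences."""
    candidates = []
    for sent in sentences:
        words = sent.lower().replace(".", "").split()
        if target in words:
            candidates += [(target, w) for w in words if w in source_words]
    return candidates

print(expand(["The family tree has deep roots."]))
# [('family', 'tree'), ('family', 'roots')]
```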

Of course, that’s not exactly what IARPA’s looking for, but it’s where the research teams will be starting. To get better results, they will have to start to learn a lot more about the relationships between the words in the metaphors. For Lakoff, that means understanding the frames and logics that inform metaphors and structure our thinking as we use them. For Bergen, it means refining the rules by which software can process language. There are three levels of analysis that would then be combined. First, you could know something about the metaphorical bias of an individual word. Crossroads, for example, is generally used in metaphorical terms. Second, words in close proximity might generate a bias, too. “Knockout in the same clause as ‘she’ has a much higher probability of being metaphorical if it’s in close proximity to ‘he,'” Bergen offered as an example. Third, for certain topics, certain words become more active for metaphorical usage. The economy’s movement, for example, probably maps to a source domain of motion through space. So, accelerate to describe something about the economy is probably metaphorical. Create a statistical model to combine the outputs of those three processes and you’ve got a brute-force method for identifying metaphors in a text.
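
A toy version of that three-signal combination might look like the following; every lexicon, weight and threshold here is invented, standing in for the statistics a real team would estimate from a hand-tagged corpus.

```python
# Brute-force metaphor scoring from three signals, per Bergen's sketch:
# 1) a per-word metaphoricity bias, 2) proximity cues within the clause,
# 3) topic-conditioned source-domain words. All values are made up.
WORD_BIAS = {"crossroads": 0.8, "knockout": 0.5, "accelerate": 0.3}
TOPIC_BONUS = {"economy": {"accelerate": 0.5, "slowdown": 0.4}}

def metaphor_score(word, clause, topic=None):
    score = WORD_BIAS.get(word, 0.1)                 # signal 1: lexical bias
    if word == "knockout" and "she" in clause:       # signal 2: proximity cue
        score += 0.3
    if topic:                                        # signal 3: active topic
        score += TOPIC_BONUS.get(topic, {}).get(word, 0.0)
    return min(score, 1.0)

# "The economy accelerated" scores high; a literal use scores low.
print(metaphor_score("accelerate", ["the", "economy"], topic="economy"))  # 0.8
print(metaphor_score("accelerate", ["the", "car"]))                       # 0.3
```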

In this particular competition, there will be more nuanced approaches based on parsing the more general relationships between words in text: sorting out which are nouns and how they connect to verbs, etc. “If you have that information, then you can find parts of sentences that don’t look like they should be there,” Bergen explained. A classic kind of identifier would be a type mismatch. “If I am the verb ‘smile,’ I like to have a subject that has a face,” he said. If something without a face is smiling, it might be an indication that some kind of figurative language is being employed.
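
The type-mismatch cue can be caricatured in a few lines; the HAS_FACE set below is invented, standing in for the selectional preferences a real parser would learn from data.

```python
# Toy type-mismatch detector for the "smile" example: a subject
# without a face smiling hints at figurative language.
HAS_FACE = {"girl", "boy", "teacher", "dog", "baby"}

def looks_figurative(subject, verb):
    return verb == "smile" and subject not in HAS_FACE

print(looks_figurative("fortune", "smile"))  # True  ("fortune smiled on us")
print(looks_figurative("girl", "smile"))     # False
```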

From these constituent parts — and whatever other wild stuff people cook up —  the teams will try to build a metaphor machine that can convert a language into underlying truths about a culture. Feed text in one end and wait on the other end of the Rube Goldberg software for a series of beliefs about family or America or power.

We might never be able to build such a thing. Indeed, I get the feeling that we can’t, at least not yet. But what if we can?

“Are they going to use it wisely?” Lakoff posed. “Because using it to detect terrorists is not a bad idea, but then the question is: Are they going to use it to spy on us?”

I don’t know, but I know that as an American I think through these metaphors: Problem Is a Target; Society Is a Body; Control Is Up.

* This section of the story was updated to more accurately reflect the intent of Carter’s statement.

“The population has no one to turn to when it wants to publicize its problems” (Envolverde/Adital)

25/5/2011 – 09h47

by Raquel Júnia*

Emerson Claudio dos Santos, better known as MC Fiell.

On International Freedom of Expression Day, the equipment of a community radio station located in a Rio de Janeiro favela was seized by the Federal Police and Anatel. Two of the station's coordinators were taken in to give statements. In this interview, Emerson Claudio dos Santos, better known as MC Fiell, president of Rádio Comunitária Santa Marta, talks about exercising the right to communication under legislation that is restrictive and favors the interests of commercial media. As its name indicates, the station is located in the Santa Marta favela and, owing to the seizure of its equipment, is currently broadcasting only over the internet. In this interview, Fiell contributes to the reflection on the role of media that aim to be counter-hegemonic — community, alternative, popular or institutional.

What challenges do community radio stations face today?

The bureaucracy of the radio law exists precisely so that you won't have a radio station. One of the biggest problems under capitalism is money. It's a trap: they themselves design the procedures so that the people won't have access. But we know the problems and we keep moving forward. At our station, for example, we throw parties to raise money and sell products such as the station's T-shirts, finding ways to get by without commercializing the radio. This law needs to be changed, otherwise the people will never have access to this right. Only community radio stations are barred from running advertising. Meanwhile, most commercial stations operate irregularly and have their concessions renewed automatically. Only the people are punished and stripped of their rights.

Which changes in the legislation do you consider most essential?

The Community Radio Law has to be changed entirely; we have to write a new law. There is no way a community in, say, the interior of Ceará can meet the requirement that, to legalize a community radio station, it must have an association formed by five other institutions within a one-kilometer radius. How could it? It's hard enough here; imagine elsewhere. We need another law, built with the participation of communicators and the people.

And do you see any prospect of the law changing?

If we have no prospects, we are dead; we have to move forward. One of the main reasons we don't advance is ignorance. When you publicize something, the people find out and react. The same goes for other rights, such as the right to health and housing. Hegemonic communication keeps the people paralyzed, immobilized. Community radio stations exist to exchange ideas with the people, show them their rights and duties, and try to move forward in other ways, with choices. The public authorities have little interest in changing this. Change will come through popular struggle, through the organizations fighting for the democratization of communication and other sectors of society willing to discuss this and demand change, so that the people really have access to communication, not only in theory but in practice.

Rádio Santa Marta was recently shut down by the Federal Police. Is this a recurring reality across the country?

Our station had been on the air for eight months and complies with everything the law requires: we don't sell advertising, we don't sell programs, we have no party affiliation; in short, we have always sought to fulfill our duties in order to win our rights. The station was closed illegally because Anatel, together with the Federal Police, arrived here without any warrant, without any formal document in the name of Rádio Santa Marta, and even so they confiscated the transmitter and took us to the police station to give statements. If we are illegal because we lack a license, they are illegal for not having a search and seizure warrant.

Unfortunately, this is routine in Brazil. Across the country, community radio stations are being heavily criminalized: the hegemonic media itself spreads the idea that community radio is pirate radio, that it brings down airplanes, and that is a plain lie. We like to joke that if community radio could bring down airplanes, terrorists would set up community radio stations and would no longer need to throw bombs at planes. And many people, unfortunately lacking political information and a critical eye, believe it; but this is just a way of criminalizing us so we won't have access to these tools. There are data showing that the Lula administration, unfortunately, closed more stations than any other. But we have to fight, because nothing will be given voluntarily here in Brazil; it will have to be won by force, in an organized way. All of this will only change when we understand one thing: that those who govern must be subordinate to the people, and not the people subordinate to the government. When we understand that, everything will change.

What was the statement you gave at the police station like?

They asked whether the station belongs to a pastor or a politician, whether there is any commercial activity, whether I have a criminal record, whether I have marks on my body such as tattoos, whether I own property... Having a tattoo has nothing to do with communication. I have a tattoo. I am free; I do what I want with my body. I told them: “if to you it's a crime, the only crime I commit is running a community radio station. The crime I commit is serving the favela, on a volunteer basis.” It's surreal. And all this happened on May 3rd, World Freedom of Expression Day, and what happened only shows that we have no freedom of expression.

Why do you believe the police and Anatel showed up only after the station had been operating for eight months?

There are several possibilities, but we think it is because we began to be a nuisance; we have been doing good literacy work and political education with the people. The people are taking ownership of their rights. Unfortunately, in Brazil, when you speak the truth you are criminalized and taken out of circulation. When you organize, something happens, and there will always be repression. When we pursue the collective, power for the collective, that displeases a lot of people, including the government itself. Because we live in a capitalist country whose logic is individualistic and competitive, while here our logic is collective: everyone has a voice, everyone is equal and everyone can take part. So this bothers those who don't share this philosophy. However hard they try, they will never silence the voice of the people.

The commercial media was very present in Santa Marta covering the installation and first actions of the Police Pacification Unit (UPP). What difference do you see in how Santa Marta was portrayed before and after the UPP?

Since the first favela, these places have always appeared in the media in a diminished, violent way, portraying favela residents as bad and violent. Santa Marta is no different: its people always appeared in the pages of the big media treated as drug traffickers, and the hill as a place of danger. Then, in 2009, with the arrival of the UPP, the same media that used to associate the whole population with drug trafficking now says this population has a voice. It's a play of interests. That same media, in this case Globo, did something unprecedented: it spent 30 days inside Santa Marta, covering, doing live links, but in reality it gave no voice to the people. It was here to make a marketing move and show what it wanted; it didn't show the favela's problems, didn't give a voice to the favela's critical leaders; it keeps showing what it wants. And this shows that the power is in their hands.

Rádio Santa Marta also shows what it wants; yet we know that what goes on air is built differently. What is that difference?

Rádio Santa Marta shows the people's rights; it is plural, and that is what is different. A community radio station is born to give a voice to the population of the favela; it starts out different because it has management but no owner: the owner is the people. When the people need it, it is accessible; it talks about local problems, about the city and also about the world. But the priorities are the problems, projects and events of the locality. The people of Santa Marta never had a media outlet that spoke about them the way Rádio Santa Marta does. That is the differential of a community radio station when it serves the people. It is also important to stress that some other stations serve profit. Ours, from the beginning, has served the interests of the people of this favela.

How does this show in the station's programming?

We have plural programming; all of Santa Marta's cultural diversity is on the air. There are more than 20 programs, starting at 6 a.m. and running until midnight. There are news programs and music programs, but all are informative, because news comes in all the time, and in all of them the population has a direct line: people call in and take part, and if they want to speak they are put on the air live. There are interview programs on a range of subjects – the right to housing, food, education in Brazil, workers' lives, programs that tell the stories of migrants, such as Saudades da Minha Terra. We ask people to send emails with criticisms and ideas, and we hold our biweekly meeting mainly for that, to find out how the programs are doing. The population can take part in the meeting; it is open. We always include the people in the station's actions; we decide nothing on our own, everything is in the people's interest.

There is a controversy over the participation of parties and religions in community radio. Some believe the station can open space for these institutions as long as local plurality is respected. Others think this should not happen. How do you think about these questions?

We have a gospel program here. What we ask is that the host not preach or try to indoctrinate the people. Political parties are definitely out; we don't want that. Everyone has their own, and we have to use the station's airtime for other things. As for religion, if there are several, they need to have space to publicize their events, for example, but without preaching. In the case of this gospel program, it doesn't belong to any church; it's a resident who is evangelical and hosts the program. People request gospel music, but he also talks about what is happening in Santa Marta. It's a program just like the hip hop one, except it's gospel, because people also like that kind of music.

How does the community station try to meet the challenge of winning over an audience accustomed to the aesthetics of commercial media in order to convey a different kind of message?

The population approves of the station; we are even running a petition campaign (in defense of the station), and people come to sign it and bring their families. Being a community radio station does not make it a lesser station. The programming has the same potential as any other station's; it has high-quality jingles and high-quality programmers, because we also run training in announcing and journalism inside the station. So it leaves nothing to be desired; the only difference is that it doesn't cover all of Rio de Janeiro, only a one-kilometer radius — Santa Marta and a small part of Botafogo — with programming of the highest quality.

The people realized and approved that community radio is at once the same as any other and different, because it talks about our issues and our people, and the others don't, except when it suits them. From the beginning, we didn't worry about replicating the programs of commercial stations; we speak in our colloquial language, we are not academics, and there is no problem with that: what matters is that the people understand the message. Every month we bring in a course on community communication or sound engineering, so we can all advance together and keep improving the programming and the station itself, always understanding that the point is to speak to our people. Unfortunately, our people are not in the places they should be, such as universities and schools; they are a people enslaved with a signed work card. So we advance, but knowing there can be no walls in our language. We cannot lose “o parceiro” and “a parceira”; we cannot forget the language of the favela; Dona Maria will not drop out of our speech. So we advance without losing our identity.

How does the station manage to sustain itself and also guarantee this training?

Through partnerships with social movements, unions and institutions that do volunteer work. Together we build the understanding that the station matters to Santa Marta's seven thousand residents. Since the station cannot run advertising or sell commercial spots, the friends of the station donate money, and the announcers all donate too, because everyone works at the station as a volunteer and has paid work elsewhere. We all understand that together we can keep the station going, to keep our voice alive and warm in Santa Marta.

As one of the station's coordinators, do you see communication differently today?

For us there are two ways of understanding communication. One is the communication the ruling class uses to train and dominate a people. And ours is the one we use to enlighten the people, to bring them more information about the reality of their lives. These two kinds of communication have always existed: one hegemonic, and one belonging to the popular classes, which tries in some way to enlighten the people. Unfortunately, not all workers see this clearly; it is by taking part in moments of political education that we come to see it. I realized this when I took a community communication course with the Núcleo Piratininga de Comunicação: until then I knew inequality also existed in communication, but not in the way I understand it today.

* Raquel Júnia is with the Escola Politécnica de Saúde Joaquim Venâncio (EPSJV), Fiocruz.

** Originally published on the Adital website.

Ancient Amazonian Indians contributed to the fertility of terra preta (FAPESP)

SCIENCE | GEOCHEMISTRY
Pre-Columbian fertilizer
Marcos Pivetta
Print edition 183 – May 2011

Soil profile showing the difference between fertile terra preta (top) and the typical, poor latosol of the Amazon. At right, a fluorescence microscopy image of the surface of pyrogenic carbon. © EDUARDO GÓES NEVES (LEFT) / CENA/USP (RIGHT)

Archaeologists often debate the real meaning of the patches of terra preta (black earth) found at prehistoric sites in Central Amazonia, a type of dark soil that stands out visually from the brownish-yellow monotony characteristic of the region's terra firme areas. For some, they indicate that pre-Columbian indigenous groups lived for hundreds or even a few thousand years in complex, structured societies based on sedentary agriculture and environmental management in the midst of the forest. For others, the existence of this darker soil, frequently studded with pottery fragments, is not conclusive proof of an ancient, prolonged process of human occupation before the arrival of the European conqueror. But on one question, more related to the agrarian sciences than to the humanities, there is broad consensus: terra preta is an almost permanent oasis of fertility in a zone full of poor soils unable to retain nutrients for long. A recent study confirms that an important component of this soil variant is an unequivocal trace of human settlement: the feces of the Indians.

Concentrations of a biomarker associated with the deposition of human excrement in the environment, coprostanol (a 5β-stanol), were found in samples of terra preta from five prehistoric Amazonian sites, according to a scientific article to be published by a team of researchers from Brazil and Germany in the June issue of the Journal of Archaeological Science. Four of the sites are in Amazonas state, southwest of Manaus, on a strip of terra firme at the confluence of the Negro and Solimões rivers; one is in Pará, southwest of Santarém, on the lower Tapajós. “Strictly speaking, the biomarker could also indicate the presence of feces from domesticated pigs,” says agronomist Wenceslau Geraldes Teixeira of Embrapa Solos, in Rio de Janeiro, one of the authors of the paper. “But since that animal was only introduced into South America after the arrival of the Europeans, we ruled out that possibility.” All the terra preta samples analyzed formed between 500 and 2,500 years ago, before the official discovery of the continent by Christopher Columbus.

Rich in minerals associated with soil fertility, terra preta owes its blackened color to the high proportion in its composition of so-called pyrogenic carbon, a stable form of aromatic charcoal produced by the incomplete combustion of biomass. The way of life of the ancient Amazonian Indians – who burned the remains of the animals they consumed, buried their dead, and deposited garbage and excrement around their communities – was probably responsible for the formation of this type of soil. “We are trying to understand the chemical composition of terra preta and to discover what input of organic material keeps it fertile to this day,” says archaeologist Eduardo Góes Neves of the University of São Paulo (USP), another author of the study and coordinator of a FAPESP thematic project on the pre-colonial history of the Amazon. “If we succeed, perhaps we can learn how to improve the fertility of poor soils and contribute to a more sustainable tropical agriculture.” There are attempts to reproduce the properties of terra preta artificially, but these efforts are still at an early stage.

Some specialists believe that compounds present in human feces play an important role in the long-term maintenance of the fecundity of this variant of Amazonian ground. Unlike the impoverished latosols typical of the Amazon, terra preta suffers little leaching, the process by which infiltrating rainwater “washes” the soil and robs it of its chemical components. “The excrement makes a significant contribution to the nutrient content found in terra preta, such as nitrogen and phosphorus, and helps it recycle its nutrients,” says Bruno Glaser of Martin Luther University of Halle-Wittenberg, Germany, a specialist in soil biogeochemistry and also a coauthor of the article. “In modern societies this no longer happens, because those nutrients are lost when sewage sludge is deposited in reservoirs.” In terra preta the feces probably mix into the soil through the action of earthworms, termites, ants, and other organisms.

Although it is not usually pointed to directly as an element capable of conferring fertility on the soil, pyrogenic carbon seems to harbor a unique assemblage of fungi and bacteria whose synergy may be related to the fertility of terra preta. Work by the team of agronomist Siu Mui Tsai, of USP’s Center for Nuclear Energy in Agriculture (CENA) in Piracicaba, shows that the form of charcoal present in this type of soil harbors the DNA of up to 3,000 species of microorganisms. “This biodiversity is much greater than that found in the Amazonian soils adjacent to terra preta,” says Siu. “The Indians used no toxic products and their system was in equilibrium.” No one knows, however, whether the pre-Columbian peoples created terra preta intentionally, as a way of enriching soil destined for agriculture, or whether it is merely an accidental byproduct of the waste and garbage generated by their way of life.

Scientific article: BIRK, J.J. et al. Faeces deposition on Amazonian Anthrosols as assessed from 5β-stanols. Journal of Archaeological Science, v. 38 (6), p. 1209-20, June 2011.

They accept everything (Terra Magazine)

Thursday, May 19, 2011, 8:14 a.m. Updated at 6:50 p.m. (original link here).

An excerpt from the book “Por uma Vida Melhor” presents the question “posso falar ‘os livro’?” [“may I say ‘os livro’?”, a nonstandard plural of “the books”]

Sírio Possenti
From Campinas (SP)

Every so often, someone says that linguists “accept” everything (that is, that they consider any construction correct). A similar comment was posted here last week. I thought it would be a good opportunity to try once again to explain what linguists actually do.

But the reason for trying to be clear no longer has to do only with that comment. A controversy has erupted, fed by notes, commentaries, interviews, and so on, over a Portuguese textbook approved by the MEC [Ministry of Education] that supposedly teaches that it is correct to say os livro. Asked about it in the comments section, when I first learned of the issue, I said I did not believe the IG story that was the original source of the debate. Later I got access to the offending page, also at IG, and found that everyone who read it read it wrong. But I would bet that many commented on it without reading it at all.

Here I will deal with this “they accept everything” business, which also applies to the case of the textbook.

First: I doubt anyone can find this claim in any linguistics text. It is a simplified assessment – in fact, a simulacrum – of linguists’ position on one of the topics they study: the question of variation, of the internal diversity of any language. It is worth insisting: of any language.

Second: “accepting” is a completely meaningless term where research is concerned. Imagine how ridiculous it would be to ask a chemist whether he accepts that oxygen burns, a physicist whether he accepts gravitation or fission, an ornithologist whether he accepts that the toucan has such a disproportionate beak, a botanist whether he accepts the smell of the jackfruit, or, for that matter, a linguist whether he accepts that English has neither gender nor a subjunctive and that Latin had no definite article.

Not only does no one ask whether they “accept” these things; no one asks whether the things themselves are correct. As everyone knows, there was a time when saying that the Earth revolves around the Sun could get you burned at the stake. Semmelweis was hounded by the physicians who ran Vienna because he said everyone should wash their hands before certain procedures (for instance, anyone coming from an autopsy to check the dilation of a woman in labor). There was no shortage of people asking, “Who is he to order us to wash our hands?”

In other words: this is not a matter of accepting or not accepting, nor of finding it correct or incorrect, that people say os livro. I have just come from a supermarket checkout line where I heard duas lata, dez real, três quilo [nonstandard plurals: “two can”, “ten real”, “three kilo”] by the bucketful. Should I have told those shoppers to shut their mouths? Come on! We were at a supermarket checkout, everyone in shorts and flip-flops! It was not a scientific congress, nor a hearing of the Supreme Court!

A linguist simply “writes down” the data and tries to find a rule, that is, a regularity, a law (not an order, not a decree).

The case is well known: in this variety of Portuguese, the plural is marked only on the element that precedes the noun – the article or the numeral (os livro, duas lata, dez real, três quilo). With more than two elements, the pattern can be more complex (meus dez livro, os meus livro verde, etc.). The noun itself remains invariable. The linguist sees this and records it – not only in the supermarket line, but also in documents in the Torre do Tombo archives that predate Camões. That is, even in the written language of the learned men of old.

The linguist likewise observes the books in English – that is, a language with no plural mark on the article, only on the noun, as if English were a kind of mirror image of informal or popular Portuguese. Does the linguist accept this? Well, he has no alternative! It is a datum, a fact, like combustion, gravitation, the toucan’s beak, or the tides. Does the linguist say that schools should teach forms like os livro? That is another department, to which I will return shortly.

Let me digress to give an example of a rule, since I know the concept is a tricky one. If we say as cargas (“the loads”), the first syllable of the sequence is as. The final “s” is voiceless (the vocal cords do not vibrate to produce it). If we say as gatas (“the cats”), the first syllable is the “same”, but we pronounce it az, with the vocal cords vibrating to produce a “z”. Why do we produce a “z” in this case? Because the first consonant of gatas is voiced, and so the consonant that precedes it becomes voiced as well. Don’t believe it? Go to a laboratory and run a test. Or, more cheaply, put your fingers on your throat, say as gatas, and you will feel the vibration. There is more: if we say as asas (“the wings”), not only do we produce a “z” at the end of as; we also reorder the syllables. We say as.ga.tas and as.ca.sas [as casas, “the houses”], but we say a.sa.sas (the as splits, because the “a” of the following word pulls the “s/z” toward itself). We divide asas into a.sas, but we divide as asas into a.sa.sas.
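For readers who like to see a rule stated mechanically, here is a minimal sketch in Python of the voicing rule described above (my own illustration, not Possenti’s; the letter-based test for voiced sounds is a deliberate simplification of Portuguese phonology):

# Sketch of the assimilation rule: the final /s/ of the article "as"
# surfaces as [z] when the next word begins with a voiced sound.
# The inventory of voiced first sounds below is simplified and assumed.
VOICED_FIRST_SOUNDS = set("bdgjlmnrvz") | set("aeiou")

def realize_s(next_word):
    """Return 's' or 'z' for the /s/ of "as" before next_word."""
    return "z" if next_word[0].lower() in VOICED_FIRST_SOUNDS else "s"

print(realize_s("cargas"))  # s -> "as cargas" keeps a voiceless [s]
print(realize_s("gatas"))   # z -> "as gatas" is pronounced [az] before voiced g
print(realize_s("asas"))    # z -> before a vowel the [z] also resyllabifies: a.sa.sas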

Back to the theme of the linguist who supposedly accepts everything! For someone whose only schooling was in right versus wrong and who thinks that is the whole story – especially someone without the historical background to know that today’s “correct” may have been yesterday’s “wrong” – it can be hard to understand that the linguist’s job is completely different from the Portuguese teacher’s.

For a linguist, refusing to “accept” constructions like those mentioned above, or even more “shocking” ones, would be like a botanist refusing to “accept” a grass. Which does not mean the botanist grazes on it.

I propose the following thought experiment: suppose a descendant of yours is born in the year 2500. Suppose the cultivated Portuguese of that time includes forms like “A casa que eu moro nela mais os dois armário vale 300 cabral” [roughly, “The house that I live in it plus the two cupboard is worth 300 cabral”] (I don’t think this will happen, but it is only an experiment). Your descendant will never know he speaks a “wrong” language. He will know, perhaps (if he studies more than you did), that an ancestor of his used archaic forms of Portuguese, such as 300 cabrais.

Another point: does the linguist say that schools should teach students to say os livro? No. No linguist proposes this anywhere (I challenge those who think otherwise to supply a reference). In fact, the textbook in question does not say it either, although every commentator claims to have read it there.

The linguist does not propose it, for two reasons: (a) people already know how to say os livro; they do not need to be taught (note that nobody says o livros, which is not a trivial fact); (b) the linguist thinks – and he is right about this – that someone is more likely to learn os livros if told that there are two ways of speaking than if told that he is stupid and cannot even talk, that everything he says is wrong. There are many reports of successful experiments that adopted precisely this different stance toward students’ speech.

In short, every field has its Bolsonaros. Deserved or not.

PS 1 – Every commentator (newspaper columnists, bloggers, TV pundits) I heard misread one page (yes, it was only ONE page!) of the book that set off last week’s uproar. My question is: if they defend the cultivated language as a means of communication, how do they explain having read a text written in the cultivated language so badly? It is the PISA test that Brazil keeps failing, isn’t it? Well, this was a reading test. Our journalism would flunk.

PS 2 – Alexandre Garcia began an irate commentary about the book on Tuesday’s Bom Dia, Brasil with “quando eu TAVA na escola…” [the colloquial tava for estava, “when I WAS in school”]. A reader’s letter criticizing the form os livro said “ensinam os alunos DE que se pode falar errado” [a nonstandard de que construction]. A teacher interviewed in criticism of the book’s doctrine said “a língua é ONDE nos une” [a misused onde: “the language is WHERE unites us”], and Monforte asked “Onde FICA as leis de concordância?” [singular fica with plural “laws of agreement” – itself an agreement error]. In other words, they all corroborated the thesis of the very book they were attacking. And yet they probably think they speak impeccably! They have no idea what happens in THEIR OWN language!!

* * *

[Four days after this excellent article by Sírio Possenti, O Globo published the editorial below, which makes evident what Sírio suggests: so much focus on questions of form is meant to hide the low quality of the arguments (and of the journalism that follows from them). A veritable parade of reductionist conservatism: school exists to “save the poor” by inculcating “true culture”, the same culture that is supposed to be the mark of the “intelligence of the country”. The quality of education, the text suggests, is measured by statistical indicators alone, and has nothing to do with forming citizens, active members of their communities, and so on. In other words, education is a technical problem, not a political one. In my opinion, Rio’s middle class does not deserve this much bolsonarismo.]

Folly in the schools

Editorial in the newspaper O Globo, May 23, 2011.

Dictionaries define “didactics” as the technique of teaching, a means of directing and guiding learning. Textbooks, by extension, are the instrument through which the correct use of the language is taught in schools. By permitting the adoption, in the public school network – the basis of the education of the great majority of the country’s students – of a book that allows errors of Portuguese as part of the learning process, the MEC harbors a dangerous contradiction. In the name of an ideology of protecting “society’s excluded”, the government endorses a project that, in practice, makes inclusion unviable. Condoning grammatical errors, on the false principle that linguistic prejudices must be torn down, deepens the cultural marginalization to which ignorance of the language condemns those who, facing adverse social conditions, have few chances to acquire the knowledge that would let them change their reality.

The argument of the author of “Por uma vida melhor”, Heloísa Ramos – that in assessing language learning one should use the idea of “adequate” or “inadequate” instead of “right” and “wrong” – shifts the discussion to the plane of linguistics, when what actually matters is the question of teaching method: how children will be taught to read and write, and what instruments of instruction they will be given in order to learn to write correctly.

This is a far more serious question than the dime-store ideology justifying such assaults – on the language, on the country’s intelligence and, not least, on the education of the young students themselves – is capable of grasping. Defending elementary errors of verbal agreement and of basic grammar is, in itself, inconceivable in any nation that cares for its language. It becomes even more indefensible in a country like Brazil, where the precarious level of teaching, particularly in public schools, is responsible for shameful educational indicators. One can imagine the confusion in the head of a young student who, laboring to learn the rules of his language, is confronted with a book – hence a supposedly trustworthy instrument – that treats as correct sentences like “nós pega o peixe” or “dois real”.

Other examples of similar attacks on standards of behavior make it obvious that the affair of Heloísa Ramos’s book is not an isolated episode in this country. It is part of a broader context driven by the principle of the “politically correct”. It is the same playbook that, in education, instructs adherents of racialism to condemn the work of Monteiro Lobato as racist (and, in consequence, to commit boorish acts like the demonstration in Rio against a carnival bloc, and iniquities like the MEC’s issuing of instructions telling teachers how to “teach” the writer’s work in schools).

In the final analysis, allowing such a book to circulate is an assault not only on common sense but on students’ right to a good education. By accepting this folly in the name of an ideology that supposedly defends the excluded, the MEC sabotages the effort to improve the country’s educational indicators. Instead of helping open the frontiers of culture to a considerable share of Brazilians, for whom access to instruction is a lifeline against social adversity, the ministry merely encourages them to cultivate errors that will cost them dearly in the future, in the struggle for social inclusion – whether in the job market or in educational institutions that will demand command of the language.

Lingodroid Robots Invent Their Own Spoken Language (IEEE Spectrum)

By EVAN ACKERMAN  /  TUE, MAY 17, 2011


When robots talk to each other, they’re not generally using language as we think of it, with words to communicate both concrete and abstract concepts. Now Australian researchers are teaching a pair of robots to communicate linguistically like humans by inventing new spoken words, a lexicon that the roboticists can teach to other robots to generate an entirely new language.

Ruth Schulz and her colleagues at the University of Queensland and Queensland University of Technology call their robots the Lingodroids. The robots consist of a mobile platform equipped with a camera, laser range finder, and sonar for mapping and obstacle avoidance. The robots also carry a microphone and speakers for audible communication between them.

To understand the concept behind the project, consider a simplified case of how language might have developed. Let’s say that all of a sudden you wake up somewhere with your memory completely wiped, not knowing English, Klingon, or any other language. And then you meet some other person who’s in the exact same situation as you. What do you do?

What might very well end up happening is that you invent some random word to describe where you are right now, and then point at the ground and tell the word to the other person, establishing a connection between this new word and a place. And this is exactly what the Lingodroids do. If one of the robots finds itself in an unfamiliar area, it’ll make up a word to describe it, choosing a random combination from a set of syllables. It then communicates that word to other robots that it meets, thereby defining the name of a place.


From this fundamental base, the robots can play games with each other to reinforce the language. For example, one robot might tell the other robot “kuzo,” and then both robots will race to where they think “kuzo” is. When they meet at or close to the same place, that reinforces the connection between a word and a location. And from “kuzo,” one robot can ask the other about the place they just came from, resulting in words for more abstract concepts like direction and distance:
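As a rough way to see how such a game could work, here is a minimal sketch in Python (my own illustration under stated assumptions, not the researchers’ code: the syllable set, the meeting threshold, and the averaging update are all invented for the example):

import random

SYLLABLES = ["ku", "zo", "vu", "pe", "hi", "za", "re", "lo"]  # assumed inventory

class Lingodroid:
    def __init__(self):
        self.lexicon = {}  # word -> (x, y) estimate of the place it names

    def name_place(self, pos, radius=1.0):
        # Reuse an existing word if this place already has a name...
        for word, known in self.lexicon.items():
            if ((known[0] - pos[0]) ** 2 + (known[1] - pos[1]) ** 2) ** 0.5 < radius:
                return word
        # ...otherwise coin one from a random combination of syllables.
        word = "".join(random.choice(SYLLABLES) for _ in range(2))
        self.lexicon[word] = pos
        return word

    def hear(self, word, pos):
        # Hearing a word at a place reinforces (or creates) the association,
        # here by averaging the old and new location estimates.
        if word in self.lexicon:
            old = self.lexicon[word]
            self.lexicon[word] = ((old[0] + pos[0]) / 2, (old[1] + pos[1]) / 2)
        else:
            self.lexicon[word] = pos

# One round of a naming game: robot A names the spot, robot B adopts the word.
a, b = Lingodroid(), Lingodroid()
here = (3.0, 4.0)
word = a.name_place(here)
b.hear(word, here)
print(word, b.lexicon[word])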

[Image: the words the robots agreed on for direction and distance concepts. For example, “vupe hiza” would mean a medium long distance to the east.]

After playing several hundred games to develop their language, the robots agreed on directions to within 10 degrees and on distances to within 0.375 meters. And using just their invented language, the robots created spatial maps (including areas that they were unable to explore) that agree remarkably well.


In the future, researchers hope to enable the Lingodroids to “talk” about even more elaborate concepts, like descriptions of how to get to a place or the accessibility of places on the map. Ultimately, techniques like this may help robots to communicate with each other more effectively, and may even enable novel ways for robots to talk to humans.

Schulz and her colleagues — Arren Glover, Michael J. Milford, Gordon Wyeth, and Janet Wiles — describe their work in a paper, “Lingodroids: Studies in Spatial Cognition and Language,” presented last week at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.

[Original link here.]

Kari Norgaard on climate change denial

Understanding the climate ostrich

BBC News, 15 November 07
By Kari Marie Norgaard
Whitman College, US

Why do people find it hard to accept the increasingly firm messages that climate change is a real and significant threat to livelihoods? Here, a sociologist unravels some of the issues that may lie behind climate scepticism.

“I spent a year doing interviews and ethnographic fieldwork in a rural Norwegian community recently.

In winter, the signs of climate change were everywhere – glaringly apparent in an unfrozen lake, the first ever use of artificial snow at the ski area, and thousands of dollars in lost tourist revenues.

Yet as a political issue, global warming was invisible.

The people I spoke with expressed feelings of deep concern and caring, and a significant degree of ambivalence about the issue of global warming.

This was a paradox. How could the possibility of climate change be both deeply disturbing and almost completely invisible – simultaneously unimaginable and common knowledge?

Self-protection
People told me many reasons why it was difficult to think about this issue. In the words of one man, who held his hands in front of his eyes as he spoke, “people want to protect themselves a bit.”

Community members described fears about the severity of the situation, of not knowing what to do, fears that their way of life was in question, and concern that the government would not adequately handle the problem.

They described feelings of guilt for their own actions, and the difficulty of discussing the issue of climate change with their children.

In some sense, not wanting to know was connected to not knowing how to know. Talking about global warming went against conversation norms.

It wasn’t a topic that people were able to speak about with ease – rather, overall it was an area of confusion and uncertainty. Yet feeling this confusion and uncertainty went against emotional norms of toughness and maintaining control.

Other community members described this sense of knowing and not knowing, of having information but not thinking about it in their everyday lives.

As one young woman told me: “In the everyday I don’t think so much about it, but I know that environmental protection is very important.”

Security risk
The majority of us are now familiar with the basics of climate change.

Worst case scenarios threaten the very basics of our social, political and economic infrastructure.

Yet there has been less response to this environmental problem than to any other. Here in the US it seems that only now are we beginning to take it seriously.

How can this be? Why have so few of us engaged in any of the range of possible actions from reducing our airline travel, pressurising our governments and industries to cut emissions, or even talking about it with our family and friends in more than a passing manner?

Indeed, why would we want to know this information?

Why would we want to believe that scenarios of melting Arctic ice and spreading diseases that appear to spell ecological and social demise are in store for us; or even worse, that we see such effects already?

Information about climate change is deeply disturbing. It threatens our sense of individual identity and our trust in our government’s ability to respond.

At the deepest level, large scale environmental problems such as global warming threaten people’s sense of the continuity of life – what sociologist Anthony Giddens calls ontological security.

Thinking about global warming is also difficult for those of us in the developed world because it raises feelings of guilt. We are now aware of how driving automobiles and flying to exotic warm vacations contributes to the problem, and we feel guilty about it.

Tactful denial
If being aware of climate change is an uncomfortable condition which people are motivated to avoid, what happens next?

After all, ignoring the obvious can take a lot of work.

In the Norwegian community where I worked, collectively holding information about global warming at arm’s length took place by participating in cultural norms of attention, emotion, and conversation, and by using a series of cultural narratives to deflect disturbing information and normalise a particular version of reality in which “everything is fine.”

When what a person feels is different from what they want to feel, or are supposed to feel, they usually engage in what sociologists call emotional management.

We have a whole repertoire of techniques or “tools” for ignoring this and other disturbing problems.

As sociologist Eviatar Zerubavel makes clear in his work on the social organisation of denial and secrecy, the means by which we manage to ignore the disturbing realities in front of us are also collectively shaped.

How we cope, how we respond, or how we fail to respond are social as well.

Social rules of focusing our attention include rules of etiquette that involve tact-related ethical obligations to “look the other way” and ignore things we most likely would have noticed about others around us.

Indeed, in many cases, merely following our cultural norms of acceptable conversation and emotional expression serves to keep our attention safely away from that pesky topic of climate change.

Emotions of fear and helplessness can be managed through the use of selective attention: controlling one’s exposure to information, not thinking too far into the future, and focusing on something that could be done.

Selective attention can be used to decide what to think about or not to think about, for example screening out painful information about problems for which one does not have solutions: “I don’t really know what to do, so I just don’t think about that”.

The most effective way of managing unpleasant emotions, such as fear for your children, seems to be turning our attention to something else, or focusing it on something positive.

Hoodwinking ourselves?
Until recently, the dominant explanation within my field of environmental sociology for why people failed to confront climate change was that they were too poorly informed.

Others posit that Americans are simply too greedy or too individualistic, or that they suffer from incorrect mental models.

Psychologists have described “faulty” decision-making powers such as “confirmation bias”, and argue that with more appropriate analogies we will be able to manage the information and respond.

Political economists, on the other hand, tell us that we’ve been hoodwinked by increased corporate control of media that limits and moulds available information about global warming.

These are clearly important answers.

Yet the fact that nobody wants information about climate change to be true is a critical piece of the puzzle that also happens to fit perfectly with the agenda of those who have tried to generate climate scepticism.”

Dr Kari Marie Norgaard is a sociologist at Whitman College in Walla Walla, Washington state, US.

See also A Dialog Between Renee Lertzman and Kari Norgaard.

Amondawa tribe lacks abstract idea of time, study says (BBC News)

20 May 2011
By Jason Palmer
Science and technology reporter, BBC News

The Amondawa were first “discovered” by anthropologists in 1986

An Amazonian tribe has no abstract concept of time, say researchers.

The Amondawa lacks the linguistic structures that relate time and space – as in our idea of, for example, “working through the night”.

The study, in Language and Cognition, shows that while the Amondawa recognise events occurring in time, time itself does not exist for them as a separate concept.

The idea is a controversial one, and further study will be needed to establish whether the same holds for other Amazonian languages.

The Amondawa were first contacted by the outside world in 1986, and now researchers from the University of Portsmouth and the Federal University of Rondonia in Brazil have begun to analyse the idea of time as it appears in Amondawa language.

“We’re really not saying these are a ‘people without time’ or ‘outside time’,” said Chris Sinha, a professor of psychology of language at the University of Portsmouth.

“Amondawa people, like any other people, can talk about events and sequences of events,” he told BBC News.

“What we don’t find is a notion of time as being independent of the events which are occurring; they don’t have a notion of time which is something the events occur in.”

The Amondawa language has no word for “time”, or indeed for time periods such as “month” or “year”.

The people do not refer to their ages, but rather assume different names in different stages of their lives or as they achieve different status within the community.

But perhaps most surprising is the team’s suggestion that there is no “mapping” between concepts of time passage and movement through space.

Ideas such as an event having “passed” or being “well ahead” of another are familiar from many languages, forming the basis of what is known as the “mapping hypothesis”.

The Amondawa have no words for time periods such as “month” or “year”

But in Amondawa, no such constructs exist.

“None of this implies that such mappings are beyond the cognitive capacities of the people,” Professor Sinha explained. “It’s just that it doesn’t happen in everyday life.”

When the Amondawa learn Portuguese – which is happening more and more – they have no problem acquiring and using these mappings from the language.

The team hypothesises that the lack of the time concept arises from the lack of “time technology” – a calendar system or clocks – and that this in turn may be related to the fact that, like many tribes, their number system is limited in detail.

Absolute terms
These arguments do not convince Pierre Pica, a theoretical linguist at France’s National Centre for Scientific Research (CNRS), who focuses on a related Amazonian language known as Mundurucu.

“To link number, time, tense, mood and space by a single causal relationship seems to me hopeless, based on the linguistic diversity that I know of,” he told BBC News.

Dr Pica said the study “shows very interesting data” but argues quite simply that failing to show the space/time mapping does not refute the “mapping hypothesis”.

Small societies like the Amondawa tend to use absolute terms for normal spatial relations – for example, referring to a particular river location that everyone in the culture knows intimately, rather than using generic words for river or riverbank.

These, Dr Pica argued, do not readily lend themselves to being co-opted in the description of time.

“When you have an absolute vocabulary – ‘at the water’, ‘upstream’, ‘downstream’ and so on, you just cannot use it for other domains, you cannot use the mapping hypothesis in this way,” he said.

In other words, while the Amondawa may perceive themselves moving through time and spatial arrangements of events in time, the language may not necessarily reflect it in an obvious way.

What may resolve the conflict is further study, Professor Sinha said.

“We’d like to go back and simply verify it again before the language disappears – before the majority of the population have been brought up knowing about calendar systems.”

Brazil tribe prove words count

BBC News, 20 August, 2004

When it comes to counting, a remote Amazonian tribe has been found to be lost for words.

Researchers discovered the Piraha tribe of Brazil, with a population of 200, have no words beyond one, two and many.

The word for “one” can also mean “a few”, while “two” can also be used to refer to “not many”.

Peter Gordon of Columbia University in New York said their skill levels were similar to those of pre-linguistic infants, monkeys, birds and rodents.

He reported in the journal Science that he set the tribe simple numerical matching challenges, and they clearly understood what was asked of them.

“In all of these matching experiments, participants responded with relatively good accuracy with up to two or three items, but performance deteriorated considerably beyond that up to eight to 10 items,” he wrote.

Language theory

Dr Gordon added that not only could they not count, they also could not draw.

“Producing simple straight lines was accomplished only with great effort and concentration, accompanied by heavy sighs and groans.”

The tiny tribe live in groups of 10 to 20 along the banks of the Maici River in the Lowland Amazon region of Brazil.

Dr Gordon said they live a hunter-gatherer existence and reject any assimilation into mainstream Brazilian culture.

He added that the tribe use the same pronoun for “he” and “they” and standard quantifiers such as “more”, “several” and “all” do not exist in their language.

“The results of these studies show that the Piraha’s impoverished counting system truly limits their ability to enumerate exact quantities when set sizes exceed two or three items,” he wrote.

“For tasks that required cognitive processing, performance deteriorated even on set sizes smaller than three.”

The findings lend support to a theory that language can affect thinking.

Linguist Benjamin Lee Whorf suggested in the 1930s that language could determine the nature and content of thought.

Persuasive Speech: The Way We, Um, Talk Sways Our Listeners (ScienceDaily)

ScienceDaily (May 16, 2011) — Want to convince someone to do something? A new University of Michigan study has some intriguing insights drawn from how we speak.

The study, presented May 14 at the annual meeting of the American Association for Public Opinion Research, examines how various speech characteristics influence people’s decisions to participate in telephone surveys. But its findings have implications for many other situations, from closing sales to swaying voters and getting stubborn spouses to see things your way.

“Interviewers who spoke moderately fast, at a rate of about 3.5 words per second, were much more successful at getting people to agree than either interviewers who talked very fast or very slowly,” said Jose Benki, a research investigator at the U-M Institute for Social Research (ISR).

For the study, Benki and colleagues used recordings of 1,380 introductory calls made by 100 male and female telephone interviewers at the U-M ISR. They analyzed the interviewers’ speech rates, fluency, and pitch, and correlated those variables with their success in convincing people to participate in the survey.

Since people who talk really fast are seen as, well, fast-talkers out to pull the wool over our eyes, and people who talk really slow are seen as not too bright or overly pedantic, the finding about speech rates makes sense. But another finding from the study, which was funded by the National Science Foundation, was counterintuitive.

“We assumed that interviewers who sounded animated and lively, with a lot of variation in the pitch of their voices, would be more successful,” said Benki, a speech scientist with a special interest in psycholinguistics, the psychology of language.

“But in fact we found only a marginal effect of variation in pitch by interviewers on success rates. It could be that variation in pitch could be helpful for some interviewers but for others, too much pitch variation sounds artificial, like people are trying too hard. So it backfires and puts people off.”

Pitch, the highness or lowness of a voice, is a highly gendered quality of speech, influenced largely by body size and the corresponding size of the larynx, or voice box, Benki says. Typically, males have low-pitched voices and females high-pitched voices. Stereotypically, think James Earl Jones and Julia Child.

Benki and colleagues Jessica Broome, Frederick Conrad, Robert Groves and Frauke Kreuter also examined whether pitch influenced survey participation decisions differently for male compared to female interviewers.

They found that males with higher-pitched voices had worse success than their deep-voiced colleagues. But they did not find any clear-cut evidence that pitch mattered for female interviewers.

The last speech characteristic the researchers examined for the study was the use of pauses. Here they found that interviewers who engaged in frequent short pauses were more successful than those who were perfectly fluent.

“When people are speaking, they naturally pause about 4 or 5 times a minute,” Benki said. “These pauses might be silent, or filled, but that rate seems to sound the most natural in this context. If interviewers made no pauses at all, they had the lowest success rates getting people to agree to do the survey. We think that’s because they sound too scripted.

“People who pause too much are seen as disfluent. But it was interesting that even the most disfluent interviewers had higher success rates than those who were perfectly fluent.”

Benki and colleagues plan to continue their analyses, comparing the speech of the most and least successful interviewers to see how the content of conversations, as well as measures of speech quality, is related to their success rates.

It’s Even Less in Your Genes (The New York Review of Books)

MAY 26, 2011
Richard C. Lewontin

The Mirage of a Space Between Nature and Nurture
by Evelyn Fox Keller
Duke University Press, 107 pp., $64.95; $18.95 (paper)

In trying to analyze the natural world, scientists are seldom aware of the degree to which their ideas are influenced both by their way of perceiving the everyday world and by the constraints that our cognitive development puts on our formulations. At every moment of perception of the world around us, we isolate objects as discrete entities with clear boundaries while we relegate the rest to a background in which the objects exist.

That tendency, as Evelyn Fox Keller’s new book suggests, is one of the most powerful influences on our scientific understanding. As we change our intent, we identify anew what is object and what is background. When I glance out the window as I write these lines I notice my neighbor’s car, its size, its shape, its color, and I note that it is parked in a snow bank. My interest then changes to the results of the recent storm and it is the snow that becomes my object of attention with the car relegated to the background of shapes embedded in the snow. What is an object as opposed to background is a mental construct and requires the identification of clear boundaries. As one of my children’s favorite songs reminded them:

You gotta have skin.
All you really need is skin.
Skin’s the thing that if you’ve got it outside,
It helps keep your insides in.

Organisms have skin, but their total environments do not. It is by no means clear how to delineate the effective environment of an organism.

One of the complications is that the effective environment is defined by the life activities of the organism itself. “Fish gotta swim and birds gotta fly,” as we are reminded by yet another popular lyric. Thus, as organisms evolve, their environments necessarily evolve with them. Although classic Darwinism is framed by referring to organisms adapting to environments, the actual process of evolution involves the creation of new “ecological niches” as new life forms come into existence. Part of the ecological niche of an earthworm is the tunnel excavated by the worm and part of the ecological niche of a tree is the assemblage of fungi associated with the tree’s root system that provide it with nutrients.

The vulgarization of Darwinism that sees the “struggle for existence” as nothing but the competition for some environmental resource in short supply ignores the large body of evidence about the actual complexity of the relationship between organisms and their resources. First, despite the standard models created by ecologists in which survivorship decreases with increasing population density, the survival of individuals in a population is often greatest not when their “competitors” are at their lowest density but at an intermediate one. That is because organisms are involved not only in the consumption of resources, but in their creation as well. For example, in fruit flies, which live on yeast, the worm-like immature stages of the fly tunnel into rotting fruit, creating more surface on which the yeast can grow, so that, up to a point, the more larvae, the greater the amount of food available. Fruit flies are not only consumers but also farmers.

Second, the presence in close proximity of individual organisms that are genetically different can increase the growth rate of a given type, presumably since they exude growth-promoting substances into the soil. If a rice plant of a particular type is planted so that it is surrounded by rice plants of a different type, it will give a higher yield than if surrounded by its own type. This phenomenon, known for more than a half-century, is the basis of a common practice of mixed-variety rice cultivation in China, and mixed-crop planting has become a method used by practitioners of organic agriculture.

Despite the evidence that organisms do not simply use resources present in the environment but, through their life activities, produce such resources and manufacture their environments, the distinction between organisms and their environments remains deeply embedded in our consciousness. Partly this is due to the inertia of educational institutions and materials. As a coauthor of a widely used college textbook of genetics,(1) I have had to engage in a constant struggle with my coauthors over the course of thirty years in order to introduce students to the notion that the relative reproductive fitness of organisms with different genetic makeups may be sensitive to their frequency in the population.

But the problem is deeper than simply intellectual inertia. It goes back, ultimately, to the unconsidered differentiations we make—at every moment when we distinguish among objects—between those in the foreground of our consciousness and the background places in which the objects happen to be situated. Moreover, this distinction creates a hierarchy of objects. We are conscious not only of the skin that encloses and defines the object, but of bits and pieces of that object, each of which must have its own “skin.” That is the problem of anatomization. A car has a motor and brakes and a transmission and an outer body that, at appropriate moments, become separate objects of our consciousness, objects that at least some knowledgeable person recognizes as coherent entities.

It has been an agony of biology to find boundaries between parts of organisms that are appropriate for an understanding of particular questions. We murder to dissect. The realization of the complex functional interactions and feedbacks that occur between different metabolic pathways has been a slow and difficult process. We do not have simply an “endocrine system” and a “nervous system” and a “circulatory system,” but “neurosecretory” and “neurocirculatory” systems that become the objects of inquiry because of strong forces connecting them. We may indeed stir a flower without troubling a star, but we cannot stir up a hornet’s nest without troubling our hormones. One of the ironies of language is that we use the term “organic” to imply a complex functional feedback and interaction of parts characteristic of living “organisms.” But musical organs, from which the word was adopted, have none of the complex feedback interactions that organisms possess. Indeed the most complex musical organ has multiple keyboards, pedal arrays, and a huge array of stops precisely so that different notes with different timbres can be played simultaneously and independently.

Evelyn Fox Keller sees “The Mirage of a Space Between Nature and Nurture” as a consequence of our false division of the world into living objects without sufficient consideration of the external milieu in which they are embedded, since organisms help create effective environments through their own life activities. Fox Keller is one of the most sophisticated and intelligent analysts of the social and psychological forces that operate in intellectual life and, in particular, of the relation of gender in our society both to the creation and acceptance of scientific ideas. The central point of her analysis has been that gender itself (as opposed to sex) is socially constructed, and that construction has influenced the development of science:

If there is a single point on which all feminist scholarship…has converged, it is the importance of recognizing the social construction of gender…. All of my work on gender and science proceeds from this basic recognition. My endeavor has been to call attention to the ways in which the social construction of a binary opposition between “masculine” and “feminine” has influenced the social construction of science.(2)

Beginning with her consciousness of the role of gender in influencing the construction of scientific ideas, she has, over the last twenty-five years, considered how language, models, and metaphors have had a determinative role in the construction of scientific explanation in biology.

A major critical concern of Fox Keller’s present book is the widespread attempt to partition in some quantitative way the contribution made to human variation by differences in biological inheritance, that is, differences in genes, as opposed to differences in life experience. She wants to make clear a distinction between analyzing the relative strength of the causes of variation among individuals and groups, an analysis that is coherent in principle, and simply assigning the relative contributions of biological and environmental causes to the value of some character in an individual.

It is, for example, all very well to say that genetic variation is responsible for 76 percent of the observed variation in adult height among American women while the remaining 24 percent is a consequence of differences in nutrition. The implication is that if all variation in nutrition were abolished then 24 percent of the observed height variation among individuals in the population in the next generation would disappear. To say, however, that 76 percent of Evelyn Fox Keller’s height was caused by her genes and 24 percent by her nutrition does not make sense. The nonsensical implication of trying to partition the causes of her individual height would be that if she never ate anything she would still be three quarters as tall as she is.
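The distinction can be put in one line of standard quantitative-genetics notation (my summary, not Keller’s): the decomposition

V_P = V_G + V_E + V_{G \times E}

applies to the variance V_P of a trait across a population, partitioned into genetic (V_G), environmental (V_E), and interaction (V_{G \times E}) components. The 76 percent figure is the claim that V_G / V_P = 0.76 for height among American women; no analogous quotient exists for any one woman’s height, which is why the individual version of the statement is nonsense.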

In fact, Keller is too optimistic about the assignment of causes of variation even when considering variation in a population. As she herself notes parenthetically, the assignment of relative proportions of population variation to different causes in a population depends on there being no specific interaction between the causes. She gives as a simple example the sound of two different drummers playing at a distance from us. If each drummer plays each drum for us, we should be able to tell the effect of different drummers as opposed to differences between drums. But she admits that is only true if the drummers themselves do not change their ways of playing when they change drums.

Keller’s rather casual treatment of the interaction between causal factors in the case of the drummers, despite her very great sophistication in analyzing the meaning of variation, is a symptom of a fault that is deeply embedded in the analytic training and thinking of both natural and social scientists. If there are several variable factors influencing some phenomenon, how are we to assign the relative importance to each in determining total variation? Let us take an extreme example. Suppose that we plant seeds of each of two different varieties of corn in two different locations with the following results measured in bushels of corn produced (see Table 1).

There are differences between the varieties in their yield from location to location, and there are differences between locations from variety to variety. So both variety and location matter. But there is no average variation between locations when averaged over varieties, or between varieties when averaged over locations. Knowing the variation in yield associated with location and variety separately does not by itself tell us which factor is the more important source of variation; nor do the separate effects of location and variety exhaust the description of that variation.

There is a third source of variation, called the “interaction”: the variation that cannot be accounted for simply by the separate average effects of location and variety. In the example, no difference appears between the averages of the different varieties or the averages of the different locations, suggesting that neither location nor variety matters to yield. Yet the yields of corn were different when particular combinations of variety and location were observed. These effects of particular combinations of factors, not accounted for by the average effects of each factor separately, are thrown into an unanalyzed category called “interaction”, with no concrete physical model made explicit.

In real life there will be some difference between the varieties when averaged over locations and some variation between locations when averaged over varieties; but there will also be some interaction variation accounting for the failure of the separately identified main effects to add up to the total variation. In an extreme case, as for example our jungle drummers with a common consciousness of what drums should sound like, it may turn out to be all interaction.

The Mirage of a Space Between Nature and Nurture appears in an era when biological—and specifically, genetic—causation is taken as the preferred explanation for all human physical differences. Although the early and mid-twentieth century was a period of immense popularity of genetic explanations for class and race differences in mental ability and temperament, especially among social scientists, such theories have now virtually disappeared from public view, largely as a result of a considerable effort of biologists to explain the errors of those claims.

The genes for IQ have never been found. Ironically, at the same time that genetics has ceased to be a popular explanation for human intellectual and temperamental differences, genetic theories for the causation of virtually every physical disorder have become the mode. “DNA” has replaced “IQ” as the abbreviation of social import. The announcement in February 2001 that two groups of investigators had sequenced the entire human genome was taken as the beginning of a new era in medicine, an era in which all diseases would be treated and cured by the replacement of faulty DNA. William Haseltine, the chairman of the board of the private company Human Genome Sciences, which participated in the genome project, assured us that “death is a series of preventable diseases.” Immortality, it appeared, was around the corner. For nearly ten years announcements of yet more genetic differences between diseased and healthy individuals were a regular occurrence in the pages of The New York Times and in leading general scientific publications like Science and Nature.

Then, on April 15, 2009, there appeared in The New York Times an article by the influential science reporter and fan of DNA research Nicholas Wade, under the headline “Study of Genes and Diseases at an Impasse.” In the same week the journal Science reported that DNA studies of disease causation had a “relatively low impact.” Both of these articles were instigated by several articles in The New England Journal of Medicine, which had come to the conclusion that the search for genes underlying common causes of mortality had so far yielded virtually nothing useful. The failure to find such genes continues and it seems likely that the search for the genes causing most common diseases will go the way of the search for the genes for IQ.

A major problem in understanding what geneticists have found out about the relation between genes and manifest characteristics of organisms is an overly flexible use of language that creates ambiguities of meaning. In particular, their use of the terms “heritable” and “heritability” is so confusing that an attempt at its clarification occupies the last two chapters of The Mirage of a Space Between Nature and Nurture. When a biological characteristic is said to be “heritable,” it means that it is capable of being transmitted from parents to offspring, just as money may be inherited, although neither is inevitable. In contrast, “heritability” is a statistical concept, the proportion of variation of a characteristic in a population that is attributable to genetic variation among individuals. The implication of “heritability” is that some proportion of the next generation will possess it.

The move from “heritable” to “heritability” is a switch from a qualitative property at the level of an individual to a statistical characterization of a population. Of course, to have a nonzero heritability in a population, a trait must be heritable at the individual level. But it is important to note that even a trait that is perfectly heritable at the individual level might have essentially zero heritability at the population level. If I possess a unique genetic variant that enables me with no effort at all to perform a task that many other people have learned to do only after great effort, then that ability is heritable in me and may possibly be passed on to my children, but it may also be of zero heritability in the population.
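In symbols (a standard definition, not specific to this book), broad-sense heritability is the quotient of the variance components discussed above:

H^2 = V_G / V_P

so it is a property of a population, not of a person. A variant that is perfectly heritable in one individual but carried by almost no one else contributes almost nothing to V_G, and hence almost nothing to the population's heritability.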

One of the problems of exploring an intellectual discipline from the outside is that the importance of certain basic methodological considerations is not always apparent to the observer, considerations that mold the entire intellectual structure that characterizes the field. So, in her first chapter, “Nature and Nurture as Alternatives,” Fox Keller writes that “my concern is with the tendency to think of nature and nurture as separable and hence as comparable, as forces to which relative strength can be assigned.” That concern is entirely appropriate for an external critic, and especially one who, like Fox Keller, comes from theoretical physics rather than experimental biology. Experimental geneticists, however, find environmental effects a serious distraction from the study of genetic and molecular mechanisms that are at the center of their interest, so they do their best to work with cases in which environmental effects are at a minimum or in which those effects can be manipulated at will. If the machine model of organisms that underlies our entire approach to the study of biology is to work for us, we must restrict our objects of study to those in which we can observe and manipulate all the gears and levers.

For much of the history of experimental genetics the chief organism of study was the fruit fly, Drosophila melanogaster, in which very large numbers of different gene mutations with visible effects on the form and behavior of the flies had been discovered. The catalog of these mutations provided, in addition to genetic information, a description of the way in which mutant flies differed from normal (“wild type”) and assigned each mutation a “Rank” between 1 and 4. Rank 1 mutations were the most reliable for genetic study because every individual with the mutant genetic type could be easily and reliably recognized by the observer, whereas some proportion of individuals carrying mutations of other ranks could be indistinguishable from normal, depending on the environmental conditions in which they developed. Geneticists, if they could, avoided depending on poorer-rank mutations for their experiments. Only about 20 percent of known mutations were of Rank 1.

With the recent shift from the study of classical genes in controlled breeding experiments to the sequencing of DNA as the standard method of genetic study, the situation has gotten much worse. On the one hand, about 99 percent of the DNA in a cell is of completely unknown functional significance and any two unrelated individuals will differ from each other at large numbers of DNA positions. On the other hand, the attempt to assign the causes of particular diseases and metabolic malfunctions in humans to specific mutations has been a failure, with the exception of a few classical cases like sickle-cell anemia. The study of genes for specific diseases has indeed been of limited value. The reason for that limited value is in the very nature of genetics as a way of studying organisms.

Genetics, from its very beginning, has been a “subtractive” science. That is, it is based on the analysis of the difference between natural or “wild-type” organisms and those with some genetic defect that may interfere in some observable way with regular function. But to carry out such comparison it is necessary that the organisms being studied are, to the extent possible, identical in all other respects, and that the comparison is carried out in an environment that does not, itself, generate atypical responses yet allows the possible effect of the genetic perturbation to be observed. We must face the possibility that such a subtractive approach will never be able to reveal the way in which nature and nurture interact in normal circumstances.

An alternative to the standard subtractive method of genetic perturbations would be a synthetic approach in which living systems would be constructed ab initio from their molecular elements. It is now clear that most of the DNA in an organism is not contained in genes in the usual sense. That is, 98–99 percent of the DNA is not a code for a sequence of amino acids that will be assembled into long chains that will fold up to become the proteins that are essential to the formation of organisms; yet that nongenic DNA is transmitted faithfully from generation to generation just like the genic DNA.

It appears that the sequence of this nongenic DNA, which used to be called “junk DNA,” is concerned with regulating how often, when, and in which cells the DNA of genes is read in order to produce the long strings of amino acids that will be folded into proteins, and which of the many alternative possible foldings will occur. As the understanding of, and the ability to control, the synthesis of the bits and pieces of living cells become more complete, the temptation to create living systems from elementary bits and pieces will become greater and greater. Molecular biologists, already intoxicated with their ability to manipulate life at its molecular roots, are driven by the ambition to create it. The enterprise of “Synthetic Biology” is already in existence.

In May 2010 the consortium originally created by J. Craig Venter to sequence the human genome gave birth to a new organization, Synthetic Genomics, which announced that it had created an organism by implanting a synthetic genome in a bacterial cell whose own original genome had been removed. The cell then proceeded to carry out the functions of a living organism, including reproduction. One may argue that the hardest work, putting together all the rest of the cell from bits and pieces, is still to be done before it can be said that life has been manufactured, but even Victor Frankenstein started with a dead body. We all know what the consequences of that may be.


Remember Climate Change? (Huffington Post)

Posted: 05/09/11
By Peter Neill – The Huffington Post

Remember climate change? Remember Copenhagen, the climate summit, and half a million people in the streets? Remember the scientific reports? Remember the predictions? Remember the headlines? The campaign promises? The strategies to offset and mitigate the impact of CO2 emissions on human health, the atmosphere, and the ocean? How long ago was it? Six months? A year? More? It might never have been.

How can we meet challenges if we can’t remember what they are? As far as the news media are concerned, the story is archived behind each new urgency, no matter what the data show. The subject of climate is no more. The deniers have prevailed through shrill contradictions, corporate-funded public relations, personal attacks on scientists, and indifference to reports and continuing data that still and again raise critical questions, only to have them fall on deaf ears.

In the US Congress, any bill or suggested appropriation that contains the keyword climate is eliminated, most probably without being read. There is no global warming; therefore there is no need for the pitiful American financial support of $2.3 million for the Intergovernmental Panel on Climate Change. There is no problem with greenhouse gases, so there is no need for legislation that enables the Environmental Protection Agency to further measure their impact on animal habitat or human health. There is no need to support the research and development of alternative renewable energy technologies. There is no need to protect the marine environment from oil spill disaster. There is no need to protect watersheds and drinking water from industrial and mining pollution. There is no need to fund tsunami-warning systems off the American coast. There is no need to support any part of a World Bank program to prevent deforestation in the developing world. There is no need to maintain NOAA’s study of the implications of climate change for extreme weather. There is no need to fund further climate research sponsored by the National Science Foundation. There is no need to maintain EPA regulation of clean water; oh, and by the way, there is no need for the Environmental Protection Agency. Put it to a vote today in the US House of Representatives, and they would blandly and blindly legislate that there is no need for the environment at all.

What do we need? Jobs, jobs, jobs, it is said. To that end, we can start by eliminating jobs that don’t advance our political agenda, by ignoring scientific demonstrations and measurable conditions that foreshadow future job destruction, by promoting and further subsidizing old technologies that make us sick and unable to work successfully in our present jobs, by building the unemployment rolls so that the ranks of the jobless will reach levels unheard of since the Great Depression, and by compromising the educational system that is the only hope for those seeking training or re-training for whatever few new jobs may actually exist.

What does this have to do with the ocean?

The health of the ocean is a direct reflection of the health of the land. A nuclear accident in Japan allows radioactive material to seep into the sea. A collapse of shoreside fishery regulation enables the final depletion of species for everyone everywhere. Indifference to watershed protection, industrial pollution, waste control, and agricultural run-off poisons the streams and rivers and coasts and deep ocean and corrupts the food chain all along the way. Lack of understanding of changing weather compromises our response to the storms that inundate our coastal communities and the droughts that destroy our sustenance.

There is a reason for knowledge. It informs constructive behavior; it promotes employment and economic development; it makes for wise governance; it improves our lives. Are we drowning in debt? Or are we drowning in ignorance? I can’t remember.

Confronting the ‘Anthropocene’ (N.Y. Times)

May 11, 2011, 9:39 AM
By ANDREW C. REVKIN
N.Y. Times, Dot Earth

[Photo: London, photographed by astronaut Donald R. Pettit while living aboard the International Space Station. Credit: NASA]

LONDON — I’m participating in a one-day meeting at the Geological Society of London exploring the evidence for, and meaning of, the Anthropocene. This is the proposed epoch of Earth history that, proponents say, has begun with the rise of the human species as a globally potent biogeophysical force, capable of leaving a durable imprint in the geological record.

This recent TEDx video presentation by Will Steffen, the executive director of the Australian National University’s Climate Change Institute, lays out the basic idea.

There’s more on the basic concept in National Geographic and from the BBC. Paul Crutzen, the Nobel laureate in chemistry who, with others, proposed the term in 2000, and Christian Schwägerl, the author of “The Age of Man” (German), described the value of this new framing for current Earth history in January in Yale Environment 360:

Students in school are still taught that we are living in the Holocene, an era that began roughly 12,000 years ago at the end of the last Ice Age. But teaching students that we are living in the Anthropocene, the Age of Men, could be of great help. Rather than representing yet another sign of human hubris, this name change would stress the enormity of humanity’s responsibility as stewards of the Earth. It would highlight the immense power of our intellect and our creativity, and the opportunities they offer for shaping the future. [Read the rest.]

I’m attending because of a quirky role I played almost 20 years ago in laying the groundwork for this concept of humans as a geological force. A new paper from Steffen and three coauthors reviewing the conceptual and historic basis for the Anthropocene includes an appropriately amusing description of my role:

Biologist Eugene F. Stoermer wrote: ‘I began using the term “anthropocene” in the 1980s, but never formalized it until Paul [Crutzen] contacted me’. About this time other authors were exploring the concept of the Anthropocene, although not using the term. More curiously, a popular book about Global Warming, published in 1992 by Andrew C. Revkin, contained the following prophetic words: ‘Perhaps earth scientists of the future will name this new post-Holocene period for its causative element—for us. We are entering an age that might someday be referred to as, say, the Anthrocene [sic]. After all, it is a geological age of our own making’. Perhaps many readers ignored the minor linguistic difference and have read the new term as Anthro(po)cene!

If you’ve been tracking my work for a while, you’re aware of my focus on the extraordinary nature of this moment in both Earth and human history. As far as science can tell, there’s never, until now, been a point when a species became a planetary powerhouse and also became aware of that situation.

As I first wrote in 1992, cyanobacteria are credited with oxygenating the atmosphere some 2 billion years ago. That was clearly a more profound influence on a central component of the planetary system than humans raising the concentration of carbon dioxide by 40 percent since the start of the industrial revolution. But, as far as we know, cyanobacteria (let alone any other life form from that period) were neither bemoaning nor celebrating that achievement.
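For reference, the 40 percent figure is consistent with commonly cited measurements; the arithmetic below is mine, assuming a pre-industrial carbon dioxide baseline of roughly 280 parts per million and a level of roughly 390 ppm around 2011:

$$\frac{390\ \text{ppm} - 280\ \text{ppm}}{280\ \text{ppm}} \approx 0.39 \approx 40\%$$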

It was easier to be in a teen-style resource binge before science began to delineate an edge to our petri dish.

We no longer have the luxury of ignorance.

We’re essentially in a race between our potency, our awareness of the expressed and potential ramifications of our actions, and our growing awareness of the deeply embedded perceptual and behavioral traits that shape how we do, or don’t, address certain kinds of risks. (Explore “Boombustology” and “Disasters by Design” to be reminded that this habit is not restricted to environmental risks.)

This meeting in London is two-pronged. It is in part focused on deepening basic inquiry into stratigraphy and other branches of earth science and clarifying how this human era could qualify as a formal chapter in Earth’s physical biography. As Erle C. Ellis, an ecologist at the University of Maryland, Baltimore County, put it in his talk, it’s unclear for the moment whether humanity’s impact will last long enough to represent an epoch, or will more resemble “an event.” Ellis’s presentation was a mesmerizing tour of the planet’s profoundly humanized ecosystems, which he said would be better described as “anthromes” than “biomes.”

Ellis said it was important to approach this reality not as a woeful situation, but as an opportunity to foster a new appreciation of the lack of separation between people and their planet, and a bright prospect for enriching that relationship. In this, his views resonate powerfully with those of René Dubos, someone I’ll be writing about here again soon.

Through the talks by Ellis and others, it was clear that the scientific effort to define a new geological epoch, while important, paled beside the broader significance of this juncture in human history.

In my opening comments at the meeting, I stressed the need to expand the discussion from the physical and environmental sciences into disciplines ranging from sociology to history, philosophy to the arts.

I noted that while the “great acceleration” described by Steffen and others is already well under way, it’s entirely possible for humans to design their future, at least in a soft way, boosting odds that the geological record will have two phases — perhaps a “lesser” and “greater” Anthropocene, as someone in the audience for my recent talk with Brad Allenby at Arizona State University put it.

I also noted that the term “Anthropocene,” like phrases such as “global warming,” is sufficiently vague to guarantee it will be interpreted in profoundly different ways by people with different world views. (As I explained, this is as true for Nobel laureates in physics as it is for the rest of us.)

Some will see this period as a “shame on us” moment. Others will deride this effort as a hubristic overstatement of human powers. Some will argue for the importance of living smaller and leaving no scars. Others will revel in human dominion as a normal and natural part of our journey as a species.

A useful trait will be to get comfortable with that diversity.

Before the day is done I also plan on pushing Randy Olson’s notion of moving beyond the “nerd loop” and making sure this conversation spills across all disciplinary and cultural boundaries from the get-go.

There’s much more to explore of course, and I’ll post updates as time allows. You might track the meeting hash tag, #anthrop11, on Twitter.

Scientist says listen to pope on climate change (U.S. Catholic)

Thursday, May 12, 2011
By Online Editor
Guest blog post by Dan DiLeo

Religion and science come together in urging action on climate change.

The Pontifical Academy of Sciences sees climate change as an urgent matter, member Veerabhadran (Ram) Ramanathan, Ph.D., told Dan Misleh, Executive Director of the Catholic Coalition on Climate Change, in an interview on the academy’s report coming out of its meeting at the Vatican April 2-4, 2011.

While written public reports are not the norm following such meetings, the working group was motivated by a sense of the urgency of the issue and the adverse social, political, economic, and ecological impacts of climate change, said Ramanathan, who is the co-chair of the working group that produced the report and has been a member of the Pontifical Academy of Sciences since 2004. He is also Distinguished Professor of Atmospheric and Climate Sciences and Director of the National Science Foundation-funded Center for Clouds, Chemistry, and Climate at Scripps Institution of Oceanography.

The Vatican’s recent report focuses on the impacts on humans of global glacier retreat—one of the most obvious indicators of anthropogenic climate change. Ramanathan noted that climate change is already being experienced by many, especially in developing countries, and is likely to continue unless significant global actions to curtail human-produced greenhouse gases are begun soon.

Ramanathan said the working group focused on glaciers and not other climate change impacts for three reasons: these impacts have not been sufficiently studied and discussed; shrinking glaciers offer the most visible example of how climate change is adversely affecting the planet; and the disappearance of mountain glaciers—which act as huge freshwater reservoirs for billions of people, especially in Central Asia—could have catastrophic impacts.

Throughout his remarks, Ramanathan echoed the church’s call to exercise prudence in confronting climate change, confirming that the grave—and potentially irreversible—nature of climate change impacts obligates action based on what we already know now. He also emphasized the crucial role that the church must continue to play in the face of climate change: while the science community can present the facts, it is the church that has the moral authority necessary to inspire individuals and institutions to change environmentally—and socially—destructive patterns of behavior.

Ramanathan also shared his personal inspiration for working on the issue of climate change, and in particular the contribution of black carbon. Growing up in a village in India, he saw how the burning of biomass not only created tremendous air pollution but also severely impacted the health of his family. The experience helped him see the interconnectedness of health, poverty, and environment, and reaffirmed that individual choices can have widespread effects—both positive and negative.

Ramanathan closed by noting that if the world’s more than 1 billion Catholics chose to heed the Holy Father and address climate change as a matter of faith, their individual actions and choices would go a long way in caring for God’s good gift of Creation and the poor who are most impacted by environmental degradation.

Dan DiLeo is Project Manager for the Catholic Coalition on Climate Change.