Tag archive: Prediction

Something Is Rotten at the New York Times (Huff Post)

By Michael E. Mann

Director of Penn State Earth System Science Center; Author of ‘Dire Predictions’ and ‘The Hockey Stick and the Climate Wars’

Posted: 11/21/2013 7:20 pm

Something is rotten at the New York Times.

When it comes to the matter of human-caused climate change, the Grey Lady’s editorial page has skewed rather contrarian of late.

A couple of months ago, the Intergovernmental Panel on Climate Change (IPCC) published its fifth scientific assessment, providing the strongest evidence to date that climate change is real, caused by us, and a problem.

Among the areas of the science where the evidence has become ever more compelling is the so-called “Hockey Stick” curve — a graph my co-authors and I published a decade and a half ago showing modern warming in the Northern Hemisphere to be unprecedented for at least the past 1,000 years. The IPCC further strengthened that original conclusion, finding that recent warmth is likely unprecedented over an even longer timeframe.

Here was USA Today on the development:

The latest report from the Intergovernmental Panel on Climate Change, the internationally accepted authority on the subject, concludes that the climate system has warmed dramatically since the 1950s, and that scientists are 95% to 100% sure human influence has been the dominant cause. In the Northern Hemisphere, 1983 to 2012 was likely the warmest 30-year period of the past 1,400 years, the IPCC found.

And here was the Washington Post:

The infamous “hockey stick” graph showing global temperatures rising over time, first slowly and then sharply, remains valid.

And the New York Times? Well we instead got this:

The [Hockey Stick] graph shows a long, relatively unwavering line of temperatures across the last millennium (the stick), followed by a sharp, upward turn of warming over the last century (the blade). The upward turn implied that greenhouse gases had become so dominant that future temperatures would rise well above their variability and closely track carbon dioxide levels in the atmosphere….I knew that wasn’t the case.

Huh?

Rather than objectively communicating the findings of the IPCC to its readers, the New York Times instead foisted upon them the ill-informed views of Koch Brothers-funded climate change contrarian Richard Muller, who used the opportunity to deny the report’s findings.

In fact, in the space of just a couple months now, the Times has chosen to grant Muller not just one, but two opportunities to mislead its readers about climate change and the threat it poses.

The Times has now published another op-ed by Muller wherein he misrepresented the potential linkages between climate change and extreme weather–tornadoes to be specific, which he asserted would be less of a threat in a warmer world. The truth is that the impact of global warming on tornadoes remains uncertain, because the underlying science is nuanced and there are competing factors that come into play.

The Huffington Post published an objective piece about the current state of the science earlier this year in the wake of the devastating and unprecedented Oklahoma tornadoes.

That piece accurately quoted a number of scientists including myself on the potential linkages. I pointed out to the journalist that there are two key factors: warm, moist air is favorable for tornadoes, and global warming will provide more of it. But important too is the amount of “shear” (that is, twisting) in the wind. And whether there will, in a warmer world, be more or less of that in tornado-prone regions, during the tornado season, depends on the precise shifts that will take place in the jet stream–something that is extremely difficult to predict even with state-of-the-art theoretical climate models. That factor is a “wild card” in the equation.

So we’ve got one factor that is a toss-up, and another one that appears favorable for tornado activity. The combination of them is therefore slightly on the “favorable” side, and if you’re a betting person, that’s probably what you would go with. And this is the point that I made in the Huffington Post piece:

Michael Mann, a climatologist who directs the Earth System Science Center at Pennsylvania State University, agreed that it’s too early to tell.

“If one factor is likely to be favorable and the other is a wild card, it’s still more likely that the product of the two factors will be favorable,” said Mann. “Thus, if you’re a betting person — or the insurance or reinsurance industry, for that matter — you’d probably go with a prediction of greater frequency and intensity of tornadoes as a result of human-caused climate change.”
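To make the arithmetic behind that bottom line concrete, here is a minimal sketch of the reasoning, using purely hypothetical numbers of my own choosing (nothing from the article or from any climate model): one ingredient trends upward while the other is an even-odds wild card, and the expected combined effect still comes out slightly on the favorable side.

```python
# Toy illustration only: hypothetical numbers, not data or model output.
import random

random.seed(0)
trials = 100_000
total = 0.0
for _ in range(trials):
    moisture_factor = 1.10                      # assumed favorable factor: modest upward trend
    shear_factor = random.choice([1.10, 0.90])  # wild card: equally likely to help or hurt
    total += moisture_factor * shear_factor     # crude proxy for combined tornado "favorability"

expected = total / trials
print(f"expected combined factor: {expected:.3f} (values above 1 lean favorable)")
```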

Now watch the sleight of hand that Muller uses when he quotes me in his latest Times op-ed:

Michael E. Mann, a prominent climatologist, was only slightly more cautious. He said, “If you’re a betting person — or the insurance or reinsurance industry, for that matter — you’d probably go with a prediction of greater frequency and intensity of tornadoes as a result of human-caused climate change.”

Completely lost in Muller’s selective quotation is any nuance or context in what I had said, let alone the bottom line in what I stated: that it is in fact too early to tell whether global warming is influencing tornado activity, but we can discuss the processes through which climate change might influence future trends.

Muller, who lacks any training or expertise in atmospheric science, is more than happy to promote with great confidence the unsupportable claim that global warming will actually decrease tornado activity. His evidence for this? The false claim that the historical data demonstrate a decreasing trend in past decades.

Actual atmospheric scientists know that the historical observations are too sketchy and unreliable to decide one way or another as to whether tornadoes are increasing or not (see this excellent discussion by weather expert Jeff Masters of The Weather Underground).

So one is essentially left with the physical reasoning I outlined above. You would think that a physicist would know how to do some physical reasoning. And sadly, in Muller’s case, you would apparently be wrong…

To allow Muller to so thoroughly mislead their readers, not once, but twice in the space of as many months, is deeply irresponsible of the Times. So why might it be that the New York Times is so enamored with Muller, a retired physicist with no training in atmospheric or climate science, when it comes to the matter of climate change?

I discuss Muller’s history as a climate change critic and his new-found role as a media favorite in my book “The Hockey Stick and the Climate Wars” (the paperback was just released a couple weeks ago, with a new guest foreword by Bill Nye “The Science Guy”).

Muller is known for his bold and eccentric, but flawed and largely discredited, astronomical theories. But he rose to public prominence only two years ago when he cast himself in the irresistible role of the “converted climate change skeptic.”

Muller had been funded by the notorious Koch Brothers, the largest current funders of climate change denial and disinformation, to independently “audit” the ostensibly dubious science of climate change. This audit took the form of an independent team of scientists that Muller picked and assembled under the umbrella of the “Berkeley Earth Surface Temperature” (unashamedly termed “BEST” by Muller) project.

Soon enough, Muller began to unveil the project’s findings: First, in late 2011, he admitted that the Earth was indeed warming. Then, a year later he concluded that the warming was not only real, but could only be explained by human influence.

Muller, in short, had rediscovered what the climate science community already knew long ago.

I summarized the development at the time on my Facebook page:

Muller’s announcement last year that the Earth is indeed warming brought him up to date w/ where the scientific community was in the 1980s. His announcement this week that the warming can only be explained by human influences, brings him up to date with where the science was in the mid 1990s. At this rate, Muller should be caught up to the current state of climate science within a matter of a few years!

The narrative of a repentant Koch Brothers-funded skeptic who had “seen the light” and appeared to now endorse the mainstream view of human-caused climate change was simply too difficult for the mainstream media to resist. Muller predictably was able to position himself as a putative “honest broker” in the climate change debate. And he was granted a slew of op-eds in the New York Times and Wall Street Journal, headline articles in leading newspapers, and interviews on many of the leading television and radio news shows.

Yet Muller was in reality seeking simply to take credit for findings established by other scientists (ironically, using far more rigorous and defensible methods!) literally decades ago. In 1995 the IPCC had already concluded, based on work by Ben Santer and other leading climate scientists working on the problem of climate change “detection and attribution,” that there was a “discernible human influence” on the warming of the planet.

And while Muller has now admitted that the Earth has warmed and that human activity is largely to blame, he has used his new-found limelight and access to the media to:

1. Smear and misrepresent other scientists, including not just me and various other climate scientists like Phil Jones of the UK’s University of East Anglia, but even the President of the U.S. National Academy of Sciences himself, Ralph Cicerone.

2. Misrepresent key details of climate science, inevitably to downplay the seriousness of climate change, whether it is the impacts on extreme weather and heat, drought, Arctic melting, or the threat to polar bears. See my own debunking of various falsehoods that Muller has promoted in his numerous news interviews, e.g., here or here.

3. Shill for fossil fuel energy, arguing that the true solution to global warming isn’t renewable or clean energy. No, not at all! Muller is bullish on fracking and natural gas as the true solution.

To (a) pretend to accept the science, but attack the scientists and misrepresent so many important aspects of the science, downplaying the impacts and threat of climate change, while (b) acting as a spokesman for natural gas: one imagines that the petrochemical tycoons the Koch Brothers were indeed quite pleased with their investment. Job well done. As I put it in an interview last year:

It would seem that Richard Muller has served as a useful foil for the Koch Brothers, allowing them to claim they have funded a real scientist looking into the basic science, while that scientist, Muller, props himself up by using the “Berkeley” imprimatur (UC Berkeley has not in any way sanctioned this effort) and appearing to accept the basic science, and goes out on the talk circuit, writing op-eds, etc., systematically downplaying the actual state of the science, dismissing key climate change impacts and denying the degree of risk that climate change actually represents. I would suspect that the Koch Brothers are quite happy with Muller right now, and I would have been very surprised had he stepped even lightly on their toes during his various interviews, which he of course has not. He has instead heaped great praise on them, as in this latest interview.

The New York Times does a disservice to its readers when it buys into the contrived narrative of the “honest broker”–Muller as the self-styled white knight who must ride in to rescue scientific truth from a corrupt and misguided community of scientists. Especially when that white knight is in fact sitting atop a Trojan Horse–a vehicle for the delivery of disinformation, denial, and systematic downplaying of what might very well be the greatest threat we have yet faced as a civilization, the threat of human-caused climate change.

Shame on you, New York Times. You owe us better than this.

Michael Mann is Distinguished Professor of Meteorology at Pennsylvania State University and author of The Hockey Stick and the Climate Wars: Dispatches from the Front Lines (now available in paperback with a new guest foreword by Bill Nye “The Science Guy”)

Majority of red-state Americans believe climate change is real, study shows (The Guardian)

Study suggests far-reaching acceptance of climate change in traditionally Republican states such as Texas and Oklahoma

US environment correspondent

theguardian.com, Wednesday 13 November 2013 19.40 GMT

Texas drought: a cracked lake bed in Texas. Findings in the study are likely based on personal experiences of hot weather. Photograph: Tony Gutierrez/AP

A vast majority of red-state Americans believe climate change is real and at least two-thirds of those want the government to cut greenhouse gas emissions, new research revealed on Wednesday.

The research, by Stanford University social psychologist Jon Krosnick, confounds the conventional wisdom of climate denial as a central pillar of Republican politics, and practically an article of faith for Tea Party conservatives.

Instead, the findings suggest far-reaching acceptance that climate change is indeed occurring and is caused by human activities, even in such reliably red states as Texas and Oklahoma.

“To me, the most striking finding that is new today was that we could not find a single state in the country where climate scepticism was in the majority,” Krosnick said in an interview.

States that voted for Barack Obama, as expected, also believe climate change is occurring and support curbs on carbon pollution. Some 88% of Massachusetts residents believe climate change is real.

But Texas and Oklahoma are among the reddest of red states and are represented in Congress by Republicans who regularly dismiss the existence of climate change or its attendant risks.

Congressman Joe Barton of Texas and Senator Jim Inhofe of Oklahoma stand out for their regular denials of climate change as a “hoax”, even among Republican ranks.

However, the research found 87% of Oklahomans and 84% of Texans accepted that climate change was occurring.

Seventy-six percent of Americans in both states also believed the government should step in to limit greenhouse gas emissions produced by industry.

In addition, the research indicated substantial support for Obama’s decision to use the Environmental Protection Agency to cut emissions from power plants. The polling found at least 62% of Americans in favour of action cutting greenhouse gas emissions from plants.

Once again, Texas was also solidly lined up with action, with 79% of voters supporting regulation of power plants.

The acceptance of climate change was not a result of outreach efforts by scientists, however, or of the experience of extreme events, such as Hurricane Sandy, Krosnick said.

His research found no connection between Sandy and belief in climate change or support for climate action.

Instead, he said the findings suggest personal experiences of hot weather – especially in warm states in the south-west – persuaded Texans and others that the climate was indeed changing within their own lifetimes.

“Their experience with weather leaves people in most places on the green side in most of the questions we ask,” he said.

There was some small slippage in acceptance of climate change in north-western states such as Idaho and Utah and in industrial heartland states such as Ohio. But even then, at a minimum, 75% believed climate change was occurring.

The findings, represented in a series of maps, were presented at a meeting of the bicameral task force on climate change, which has been pushing Congress to try to move ahead on Obama’s green commitments. There was insufficient data to provide findings for a small number of states.

Henry Waxman, the Democrat who co-chairs the taskforce, said in a statement the findings showed Americans were ready to take action to cut emissions that cause climate change.

“This new report is crystal clear,” said Waxman. “It shows that the vast majority of Americans – whether from red states or blue – understand that climate change is a growing danger. Americans recognise that we have a moral obligation to protect the environment and an economic opportunity to develop the clean energy technologies of the future. Americans are way ahead of Congress in listening to the scientists.”

Some 58% of Republicans in the current Congress deny the existence of climate change or oppose action to cut greenhouse gas emissions, according to an analysis by the Center for American Progress.

Selecting Mathematical Models With Greatest Predictive Power: Finding Occam’s Razor in an Era of Information Overload (Science Daily)

Nov. 20, 2013 — How can the actions and reactions of proteins so small or stars so distant they are invisible to the human eye be accurately predicted? How can blurry images be brought into focus and reconstructed?

A new study led by physicist Steve Pressé, Ph.D., of the School of Science at Indiana University-Purdue University Indianapolis, shows that there may be a preferred strategy for selecting mathematical models with the greatest predictive power. Picking the best model is about sticking to the simplest line of reasoning, according to Pressé. His paper explaining his theory is published online this month in Physical Review Letters.

“Building mathematical models from observation is challenging, especially when there is, as is quite common, a ton of noisy data available,” said Pressé, an assistant professor of physics who specializes in statistical physics. “There are many models out there that may fit the data we do have. How do you pick the most effective model to ensure accurate predictions? Our study guides us towards a specific mathematical statement of Occam’s razor.”

Occam’s razor is an oft-cited 14th-century adage that “plurality should not be posited without necessity,” sometimes translated as “entities should not be multiplied unnecessarily.” Today it is interpreted as meaning that, all things being equal, the simpler theory is more likely to be correct.

A principle for picking the simplest model to answer complex questions of science and nature, originally postulated in the 19th century by the Austrian physicist Ludwig Boltzmann, had been embraced by the physics community throughout the world. Then, in 1988, an alternative strategy for picking models was developed by the Brazilian physicist Constantino Tsallis. This strategy has been widely used in business (such as in option pricing and for modeling stock swings) as well as in scientific applications (such as for evaluating population distributions). The new study finds that Boltzmann’s strategy, not the 20th-century alternative, ensures that the models picked are the simplest and most consistent with the data.

“For almost three decades in physics we have had two main competing strategies for picking the best model. We needed some resolution,” Pressé said. “Even as simple an experiment as flipping a coin or as complex an enterprise as understanding functions of proteins or groups of proteins in human disease need a model to describe them. Simply put, we need one Occam’s razor, not two, when selecting models.”
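For readers who want a concrete feel for the Boltzmann (Shannon) maximum-entropy strategy the study favors, here is a minimal sketch in Python. It is not the authors' code; it only illustrates the textbook version of the principle: among all distributions over a set of states that reproduce an observed mean, the one that maximizes Shannon entropy has the exponential (Gibbs) form, with a single Lagrange multiplier fixed by the data. The states and the target mean below are hypothetical.

```python
# A toy version of the Boltzmann/Shannon maximum-entropy recipe (illustrative only,
# not the authors' code): among all distributions over these states that reproduce
# the observed mean, the entropy-maximizing one is p_i proportional to exp(-lam * x_i).
import numpy as np
from scipy.optimize import brentq

states = np.arange(1, 7)   # hypothetical data: the six faces of a die
target_mean = 4.5          # the observed average the model must reproduce

def model_mean(lam):
    """Mean of the Gibbs distribution p_i ~ exp(-lam * x_i) over the states."""
    w = np.exp(-lam * states)
    p = w / w.sum()
    return float((p * states).sum())

# Choose the Lagrange multiplier so the model's mean matches the observed mean.
lam = brentq(lambda l: model_mean(l) - target_mean, -5.0, 5.0)
p = np.exp(-lam * states)
p /= p.sum()

print("lambda:", round(lam, 4))
print("maximum-entropy probabilities:", np.round(p, 4))
print("model mean:", round(model_mean(lam), 4))  # ~4.5 by construction
```

The Tsallis alternative replaces the exponential with a q-exponential form; the new study argues that doing so builds in biases the data do not warrant, which is the point made in the paper’s title.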

In addition to Pressé, co-authors of “Nonadditive entropies yield probability distributions with biases not warranted by the data” are Kingshuk Ghosh of the University of Denver, Julian Lee of Soongsil University, and Ken A. Dill of Stony Brook University.

Pressé is also the first author of a companion paper, “Principles of maximum entropy and maximum caliber in statistical physics” published in the July-September issue of the Reviews of Modern Physics.

Taking Stock of the First Week of COP19 (Vitae Civilis)

Environment
18/11/2013 – 09:10

by Délcio Rodrigues and Silvia Dias*


At the end of the first week of COP19, a sense of déjà vu is inevitable. Once again, the Philippine negotiator delivered the most moving speech. Once again, Germanwatch reports that poor countries are the most vulnerable to extreme climate events. Once again, in fact, an extreme weather event is claiming thousands of victims while the conference takes place. Once again, we are told that we are living through the hottest years in the planet’s recent history, that the concentration of greenhouse gases in the atmosphere has already reached alarming levels, and that the right thing to do would be to leave fossil fuel reserves untouched…

Even the new IPCC report arrives tasting somewhat like old news. Despite its greater level of detail and greater scientific certainty, the AR5 basically confirms that we are on a trajectory that will exhaust, as early as 2030, all the carbon we can burn this century without dangerously altering the planet’s climate. Likewise, the International Energy Agency (IEA) confirms what a strong campaign against fossil fuel subsidies at COP18 had already exposed. According to the IEA, governments spent US$ 523 billion on fossil fuel subsidies in 2011 – a complete inversion of priorities from the standpoint of climate change: for every US$ 1 supporting renewable energy, another US$ 6 promote carbon-intensive fuels. Some of these fossil fuel subsidies occur in emerging and developing countries, as illustrated by the gasoline subsidies the Brazilian government imposes on Petrobras. But they may matter even more in rich countries. Research by the UK’s Overseas Development Institute found that subsidies for fossil fuel consumption in 11 OECD countries total US$ 72 billion, or about US$ 112 per adult inhabitant of those countries.

This economic perversity strangles at birth the technological innovations that could help us avoid the imminent collision between the global economy (and its energy system) and our planet’s ecological limits. Recent developments in wind, solar, biofuels, geothermal, tidal power, fuel cells, and energy efficiency are expanding the possibilities for building a low-carbon energy future. Besides helping to avert the climate crisis, these technologies could open new investment opportunities, supply energy at affordable prices, and sustain growth. But this potential will only be realized if governments actively pursue sustainable industrial policies. The goal of mitigating the climate crisis must be aligned with disincentives for carbon-intensive energy sources, through taxation and through support for sustainable alternatives.

Ending fossil fuel subsidies needs to be accompanied by policies that favor the transfer of clean technologies. We cannot overlook the example of China, India, and also Brazil, to which multinationals have historically shipped dirty, energy-intensive production platforms. Unfortunately, the technology negotiations are among the most stalled – both in the previous format, established by the Bali Road Map, and now, under the so-called Durban Platform. Meanwhile, we learned through WikiLeaks about the Trans-Pacific Partnership (TPP) provisions on patents and intellectual property – an agreement being negotiated in secret among the leaders of 12 countries that account for 40% of global GDP and a third of global trade, and which aims to impose more aggressive measures against intellectual property infringement.

The gap between what science recommends and what governments are promoting remains, regardless of the format of the climate negotiations. We moved from the two tracks established in Bali to the Durban Platform, but the financial commitments and more aggressive mitigation targets never came. During the first week of COP19, negotiators’ speeches revived archaic positions that obstruct the process. Yes, we already knew this would not be a conference of great results. But the fact is that the bad guys decided to be truly bad, under the complacent leadership of a presidency that feels no embarrassment in openly acting on behalf of coal and other fossil fuels. So much so that Russia gave up on obstructing the process, saving its complaints about the UNFCCC process for another occasion.

That other occasion may be COP20 in Peru, toward which hopes for more productive negotiations now turn. Before that, however, comes Ban Ki-moon’s summit, to which the countries’ leaders are invited. The goal is to generate the political sensitivity that was missing in Copenhagen and to try to define targets before the final stretch of the agreement, in Paris. That meeting should be preceded and followed by several intersessional meetings, so that delegates can advance the stitching together of the agreement and so that critical items, such as mitigation targets and financing, begin to take more concrete shape.

In other words, a consistent schedule of meetings and a commitment to present targets next year are the best outcome we can hope for from a conference that risks going down in history as the coal COP.

Délcio Rodrigues is a climate change specialist at Vitae Civilis. Silvia Dias, a member of Vitae Civilis’s Deliberative Council, has been following the climate negotiations since 2009.

A Climate-Change Victory (Slate)

If global warming is slowing, thank the Montreal Protocol.

By 

An aerosol spray can.

No CFCs, please. (Photo by iStock)

Climate deniers like to point to the so-called global warming “hiatus” as evidence that humans aren’t changing the climate. But according to a new study, exactly the opposite is true: The recent slowdown in global temperature increases is partially the result of one of the few successful international crackdowns on greenhouse gases.

Back in 1987, more than 40 countries, including the United States, signed the Montreal Protocol, an agreement to phase out the use of ozone-depleting gases like chlorofluorocarbons. (Today the protocol has nearly 200 signatories.) According to the Environmental Protection Agency, CFC emissions are down 90 percent since the protocol, a drop that the agency calls “one of the largest reductions to date in global greenhouse gas emissions.” That’s a blessing for the ozone layer, but also for the climate. CFCs are potent heat-trapping gases, and a new analysis published in Nature Geoscience finds that slashing them has been a major driver of the much-discussed slowdown in global warming.

Without the protocol, environmental economist Francisco Estrada of the Universidad Nacional Autónoma de México reports, global temperatures today would be about a tenth of a degree Celsius higher than they are. That’s roughly an eighth of the total warming documented since 1880.

Estrada and his co-authors compared global temperature and greenhouse gas emissions records over the last century and found that breaks in the steady upward march of both coincided closely. At times when emissions leveled off or dropped, such as during the Great Depression, the trend was mirrored in temperatures; likewise for when emissions climbed.

“With these breaks, what’s interesting is that when they’re common that’s pretty indicative of causation,” said Pierre Perron, a Boston University economist who developed the custom-built statistical tests used in the study.
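The study relies on Perron-style structural-break tests, which are considerably more involved than anything shown here, but the core idea of locating a break can be illustrated with a minimal sketch on synthetic data (all numbers below are invented for illustration): allow the trend’s slope to change at each candidate year and keep the break year that best fits the series.

```python
# A minimal illustration (synthetic data, not the study's custom tests) of locating a
# structural break: fit a trend whose slope may change at each candidate year and pick
# the break year that minimizes the residual sum of squares.
import numpy as np

np.random.seed(0)
years = np.arange(1900, 2001)
true_break = 1960
# Synthetic "temperature" series: flat until 1960, then warming, plus noise.
series = 0.01 * np.clip(years - true_break, 0, None) + np.random.normal(0, 0.03, years.size)

def rss_with_break(brk):
    """Residual sum of squares of a piecewise-linear fit with a slope change at `brk`."""
    X = np.column_stack([np.ones_like(years, dtype=float),
                         years - years[0],
                         np.clip(years - brk, 0, None)])  # extra slope after the break
    beta, rss, *_ = np.linalg.lstsq(X, series, rcond=None)
    return rss[0] if rss.size else np.inf

candidates = years[10:-10]                 # avoid breaks too close to the ends
best = min(candidates, key=rss_with_break)
print("estimated break year:", int(best))  # should land near 1960
```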

The findings put a new spin on the investigation into the cause of the recent “hiatus.” Scientists have suggested that several temporary natural phenomena, including the deep ocean sucking up more heat, are responsible for this slowdown. Estrada says his findings show that a recent reduction in heat-trapping CFCs as a result of the Montreal Protocol has also played an important role.

“Paradoxically, the recent decrease in warming, presented by global warming skeptics as proof that humankind cannot affect the climate system, is shown to have a direct human origin,” Estrada writes in the study.

The chart below, from a column accompanying the study, illustrates that impact. The solid blue line shows the amount of warming relative to pre-industrial levels attributed to CFCs and other gases regulated by the Montreal Protocol; the dashed blue line is an extrapolation of what the level would be without the agreement. Green represents warming from methane; Estrada suggests that leveling out may be the result of improved farming practices in Asia. The diamonds are annual global temperature averages, with the red line fitted to them. The dashed red line represents Estrada’s projection of where global temperature would be without these recent mitigation efforts.

[Chart: warming attributed to Montreal Protocol gases and methane, with and without recent mitigation, alongside observed global temperatures]

Courtesy of Francisco Estrada via Mother Jones

Estrada said his study doesn’t undermine the commonly accepted view among climate scientists that the global warming effect of greenhouse gases can take years or decades to fully manifest. Even if we cut off all emissions today, we’d still very likely see warming into the future, thanks to the long shelf life of carbon dioxide, the principal climate-change culprit. The study doesn’t let CO2 off the hook: The reduction in warming would likely have been even greater if CO2 had leveled off as much as CFCs and methane. Instead, Estrada said, it has increased 20 percent since the protocol was signed.

Still, the study makes clear that efforts to reduce greenhouse gas emissions—like a recent international plan to phase out hydrofluorocarbons, a group of cousin chemicals to CFCs that are used in air conditioners and refrigerators, and the Obama administration’s move this year to impose strict new limits on emissions from power plants—can have a big payoff.

“The Montreal Protocol was really successful,” Estrada said. And as policymakers and climate scientists gather in Warsaw, Poland, for the latest U.N. climate summit next week, “this shows that international agreements can really work.”

On shale gas exploration in Brazil

JC e-mail 4855, November 13, 2013

Scientists want to postpone shale gas exploration

Environmentalists and researchers fear environmental damage. The position of the SBPC and the ABC was set out in a letter

Shale gas exploration in Brazilian river basins, especially in the Amazon region, runs counter to the course taken by European countries such as France and Germany, and by some regions of the United States, such as the state of New York, which have been banning the activity for fear of environmental damage, even given its economic viability. The damage arises because, to extract the gas, the rock formations known in Brazil as xisto (shale) are broken apart by hydraulic pumping or by a series of chemical additives.

While the National Petroleum Agency (ANP) stands by its decision to hold auctions of shale gas blocks on November 28 and 29, authorities in New York, a pioneer in exploiting the resource since 2007, are beginning to revise their internal policies. More radically, France recently ratified its ban on hydraulic fracturing of shale rock before even beginning extraction, according to specialists.

Scientifically termed “folhelho” (shale) gas, shale gas is also known as “unconventional” or natural gas. Although it has the same origin and applications as conventional gas, shale gas differs in its extraction process. That is, the gas cannot leave the rock on its own, unlike conventional or natural gas, which migrates naturally out of the rock layers. To extract gas from shale – that is, to complete the production process – artificial mechanisms are used, such as fracturing the rock by hydraulic pumping or with various chemical additives.

In confirming the auctions, the ANP states, through its press office, that the initiative complies with CNPE Resolution No. 6 (of June 23 of this year), published in the Diário Oficial da União. A total of 240 onshore exploration blocks with natural gas potential will be offered in seven sedimentary basins, located in the states of Amazonas, Acre, Tocantins, Alagoas, Sergipe, Piauí, Mato Grosso, Goiás, Bahia, Maranhão, Paraná, and São Paulo, covering 168,348.42 km².

Destination

The shale gas extracted from these basins will have the same destination as oil; that is, it will be sold as an energy source. In Brazil, shale gas could mainly supply Rio Grande do Sul, Santa Catarina, and Paraná, where demand for natural gas is growing and where the product is currently imported from Bolivia.

Despite the economic potential, chemist Jailson Bitencourt de Andrade, a council member of the Brazilian Society for the Advancement of Science (SBPC), reiterates his position on the importance of postponing the ANP auctions and expanding research into the negative impacts of shale gas extraction, in order to avoid harm to the environment. “This demands a great deal of attention,” warns the researcher, who is also a member of the Brazilian Chemical Society (SBQ) and the Brazilian Academy of Sciences (ABC). “Even in the United States, where there is a good logistics chain capable of reducing the cost of shale gas exploration, and even though its cost-benefit ratio is very high, some states are already revising their policies and creating barriers to exploiting the resource.”

The position of the SBPC and the ABC

In a letter (available at http://www.sbpcnet.org.br/site/artigos-e-manifestos/detalhe.php?p=2011), released in August, the SBPC and the ABC express their concern over the ANP’s decision to include shale gas, obtained by fracturing the rock, in the next licensing round. One reason is that the extraction technology relies on processes that are “invasive of the gas-bearing geological layer, through the hydraulic fracturing technique, with the injection of water and chemical substances, and may cause leaks and contamination of freshwater aquifers lying above the shale.”

Given this scenario, Andrade again argues that Brazil needs to invest more in scientific knowledge of the basins slated for exploration, “if only to get a picture of the current state of the rocks, so that possible impacts in these basins can be compared in the future.” On that front, he noted that the government, through the Ministry of Science, Technology and Innovation (MCTI) and the Brazilian Innovation Agency (Finep), is forming a research network to study the impacts of shale gas.

An advocate of studying every alternative for gas production that might eventually replace oil, researcher Hernani Aquini Fernandes Chaves, deputy coordinator of the National Institute of Oil and Gas (INOG), stresses, on the other hand, that despite possible damage to the shale formations, using this gas “is environmentally sounder” than oil itself. “It emits less gas,” he maintains. “We need to know all the production possibilities because, besides feeding the economy, oil is a finite resource that will run out one day. This is a big country. So we have to look at the possibilities of bringing progress to all areas.” He is referring to the interior of Maranhão, one of Brazil’s poorest regions and one with potential for shale gas exploration.

Without wishing to compare US shale gas production potential with Brazil’s, Chaves considers the US Energy Information Administration’s estimates for Brazil – reserves on the order of 7.35 trillion m³ – “very optimistic.” According to Chaves, INOG has not yet produced estimates for shale gas production in the national territory. The shale-gas-producing basins, he said, have not yet been proven. On an experimental basis, however, shale gas is already produced by Petrobras at its São Mateus do Sul plant.

Speaking of the environmental damage caused by shale gas extraction, Chaves acknowledges that this is “a controversial point.” For now, he explains, shale gas extraction is not allowed in Europe, above all in France and Germany, because the exploration process consumes a great deal of water and harms aquifers. Moreover, in New York, where production had begun, exploration has also come under question. “Environmentalists are not happy with the production of this gas,” he acknowledges. “In France, for example, they did not allow the rock to be drilled, even knowing the shale gas production estimates.”

Clarifications from the ANP

According to the statement from the ANP’s press office, the areas offered in the ANP’s licensing rounds are analyzed in advance for environmental viability by the Brazilian Institute of Environment and Renewable Natural Resources (Ibama) and by the competent state environmental agencies. “The aim of this joint work is, where appropriate, to exclude areas on environmental grounds, owing to overlap with conservation units or other sensitive areas where oil and natural gas exploration and production (E&P) activities are not possible or not advisable.”

For all the blocks offered in the 12th licensing round, according to the statement, there was the “due positive manifestation of the competent state environmental agency.” “The ANP, although it does not regulate environmental matters, is attentive to developments on this topic as they relate to oil and natural gas production in Brazil. In this regard, the best practices used in the oil and natural gas industry worldwide are constantly monitored and adopted by the ANP,” the document states.

The ANP adds: “As the regulatory process is dynamic, the ANP will take the necessary measures, whenever pertinent, to adapt its rules to the issues that arise in the coming years, in order to guarantee the safety of operations.”

(Viviane Monteiro / Jornal da Ciência)

* * *

JC e-mail 4856, November 14, 2013

Seminar promotes debate on the environmental impacts of shale gas exploration

With the participation of Jailson de Andrade, SBPC council member, the meeting also discussed the need for this energy source in the Brazilian energy sector

The Brazilian Institute of Social and Economic Analyses (Ibase), Greenpeace, ISA, Fase, and CTI held a public seminar yesterday (the 13th) in São Paulo on the social and environmental impacts of shale gas exploration in Brazil. With the participation of Jailson de Andrade, SBPC council member, the meeting fostered debate on the environmental issues involved in this type of mineral exploitation and discussed its viability. Also discussed was the need for this energy source in the Brazilian energy sector, with a focus on the Acre and Mato Grosso basins and on the Guarani aquifer. Ibase researcher Carlos Bittencourt warned that the auction must be postponed so that the necessary studies can be carried out before exploration is authorized.

The licensing process for the exploration of conventional and unconventional natural gas areas is set for the end of this month. The National Petroleum Agency (ANP) will put up 240 onshore exploration blocks distributed across 12 Brazilian states. Shale gas, an unconventional gas used by thermal power plants and industry, is an energy source that, although long known, remained unexploited for many years for lack of technology capable of making its extraction viable.

Also taking part were Ricardo Baitelo (Greenpeace), Bianca Dieile (FAPP-BG), Conrado Octavio (CTI), and Angel Matsés (Comunidad Nativa Matsés), with moderation by Padre Nelito (CNBB). The seminar was supported by Norwegian Church Aid.

(With information from Ibase)

Geoengineering the Climate Could Reduce Vital Rains (Science Daily)

Oct. 31, 2013 — Although a significant build-up in greenhouse gases in the atmosphere would alter worldwide precipitation patterns, a widely discussed technological approach to reduce future global warming would also interfere with rainfall and snowfall, new research shows.

Rice field in Bali. (Credit: © pcruciatti / Fotolia)

The international study, led by scientists at the National Center for Atmospheric Research (NCAR), finds that global warming caused by a massive increase in greenhouse gases would spur a nearly 7 percent average increase in precipitation compared to preindustrial conditions.

But trying to resolve the problem through “geoengineering” could result in monsoonal rains in North America, East Asia, and other regions dropping by 5-7 percent compared to preindustrial conditions. Globally, average precipitation could decrease by about 4.5 percent.

“Geoengineering the planet doesn’t cure the problem,” says NCAR scientist Simone Tilmes, lead author of the new study. “Even if one of these techniques could keep global temperatures approximately balanced, precipitation would not return to preindustrial conditions.”

As concerns have mounted about climate change, scientists have studied geoengineering approaches to reduce future warming. Some of these would capture carbon dioxide before it enters the atmosphere. Others would attempt to essentially shade the atmosphere by injecting sulfate particles into the stratosphere or launching mirrors into orbit with the goal of reducing global surface temperatures.

The new study focuses on the second set of approaches, those that would shade the planet. The authors warn, however, that Earth’s climate would not return to its preindustrial state even if the warming itself were successfully mitigated.

“It’s very much a pick-your-poison type of problem,” says NCAR scientist John Fasullo, a co-author. “If you don’t like warming, you can reduce the amount of sunlight reaching the surface and cool the climate. But if you do that, large reductions in rainfall are unavoidable. There’s no win-win option here.”

The study appears in an online issue of the Journal of Geophysical Research: Atmospheres, published this week by the American Geophysical Union. An international team of scientists from NCAR and 14 other organizations wrote the study, which was funded in part by the National Science Foundation (NSF), NCAR’s sponsor. The team used, among other tools, the NCAR-based Community Earth System Model, which is funded by NSF and the Department of Energy.

Future carbon dioxide, with or without geoengineering

The research team turned to 12 of the world’s leading climate models to simulate global precipitation patterns if the atmospheric level of carbon dioxide, a leading greenhouse gas, reached four times the level of the preindustrial era. They then simulated the effect of reduced incoming solar radiation on the global precipitation patterns.

The scientists chose the artificial scenario of a quadrupling of carbon dioxide levels, which is on the high side of projections for the end of this century, in order to clearly draw out the potential impacts of geoengineering.

In line with other research, they found that an increase in carbon dioxide levels would significantly increase global average precipitation, although there would likely be significant regional variations and even prolonged droughts in some areas.

Much of the reason for the increased rainfall and snowfall has to do with greater evaporation, which would pump more moisture into the atmosphere as a result of more heat being trapped near the surface.

The team then took the research one step further, examining what would happen if a geoengineering approach partially reflected incoming solar radiation high in the atmosphere.

The researchers found that precipitation amounts and frequency, especially for heavy rain events, would decrease significantly. The effects were greater over land than over the ocean, and particularly pronounced during months of heavy, monsoonal rains. Monsoonal rains in the model simulations dropped by an average of 7 percent in North America, 6 percent in East Asia and South America, and 5 percent in South Africa. In India, however, the decrease was just 2 percent. Heavy precipitation further dropped in Western Europe and North America in summer.

A drier atmosphere

The researchers found two primary reasons for the reduced precipitation.

One reason has to do with evaporation. As Earth is shaded and less solar heat reaches the surface, less water vapor is pumped into the atmosphere through evaporation.

The other reason has to do with plants. With more carbon dioxide in the atmosphere, plants partially close their stomata, the openings that allow them to take in carbon dioxide while releasing oxygen and water into the atmosphere. Partially shut stomata release less water, so the cooled atmosphere would also become even drier over land.

Tilmes stresses that the authors did not address such questions as how certain crops would respond to a combination of higher carbon dioxide and reduced rainfall.

“More research could show both the positive and negative consequences for society of such changes in the environment,” she says. “What we do know is that our climate system is very complex, that human activity is making Earth warmer, and that any technological fix we might try to shade the planet could have unforeseen consequences.”

The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation. Any opinions, findings and conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Journal Reference:

  1. Simone Tilmes, John Fasullo, Jean-Francois Lamarque, Daniel R. Marsh, Michael Mills, Kari Alterskjaer, Helene Muri, Jón E. Kristjánsson, Olivier Boucher, Michael Schulz, Jason N. S. Cole, Charles L. Curry, Andy Jones, Jim Haywood, Peter J. Irvine, Duoying Ji, John C. Moore, Diana B. Karam, Ben Kravitz, Philip J. Rasch, Balwinder Singh, Jin-Ho Yoon, Ulrike Niemeier, Hauke Schmidt, Alan Robock, Shuting Yang, Shingo Watanabe. The hydrological impact of geoengineering in the Geoengineering Model Intercomparison Project (GeoMIP). Journal of Geophysical Research: Atmospheres, 2013; 118 (19): 11,036. DOI: 10.1002/jgrd.50868

Brave New Animal World (Canal Ibase)

29/10/2013

Renzo Taddei

Canal Ibase columnist

Judged by the repercussions it had in the press, the liberation of the 178 beagles from the Instituto Royal was a historic milestone. Not even during the debate over the regulation of stem cell research did so many credentialed people come forward publicly to defend their professional practices. The issue is on the cover of the country’s main weekly magazines. An analysis of the arguments presented in defense of using animals as laboratory test subjects is, however, disheartening. It is disheartening because it exposes how unprepared our scientists are to assess, in a broad way, the ethical and moral implications of what they do.

Let us see: in the debate we learn that there is research for which the alternatives to animal use are not adequate. We learn that many diseases that are easy to treat today would not be so without animal testing, and that many human lives have thus been saved. (Illustrating how reason can succumb to emotion – even among the most hardened rationalists – a Fiocruz researcher went so far as to claim that “experimental animals bear great responsibility for the survival of the human race on the planet.”) Additionally, the fact that important scientists of the past, such as Albert Sabin, Carlos Chagas, and Louis Pasteur, used animals as laboratory subjects in their research shows that scientists, given their contributions to humanity, cannot be treated as criminals. Worse still, if Brazil bans animal testing, Brazilian science will lose autonomy and competitiveness, because its progress will depend on the results of research carried out in other countries.

Moreover, the animal must be taken into account: it is a consensus among scientists that laboratory animals should not suffer. Measures have been taken in that direction, such as the creation of the National Council for the Control of Animal Experimentation and the requirement that each institution have its own Committee on Ethics in the Use of Animals, with a seat for a representative of legally constituted animal protection societies. And, finally, the “animals themselves” benefit, given how laboratory experiments supposedly contribute to the development of veterinary science.

In general terms, what we have summarized there is the following: animals are things and should be used as such; or animals are not things, but unfortunately must be used as such. There is something larger that imposes itself (and which I will discuss further on) in a decisive way, so that whether or not animals are things becomes a minor detail, one that scientists quickly learn to disregard in their professional training.

Photo: Ruth Elison/Flickr

The idea that animals are things is an old one: Aristotle, in his Politics, written two thousand three hundred years ago, asserted that animals are not capable of using language and, for that reason, are not capable of an ethical existence. Therefore, the philosopher concludes, animals were created to serve humans. A similar idea appears in the biblical Genesis: “And God said, Let us make man in our image, after our likeness: and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth” (Genesis 1:26). Saint Augustine and Saint Thomas Aquinas reaffirm the disconnection between animals and God. (Saint Francis is, in Christian history, clearly an outlier.) The idea has reached our own day practically intact. The Catholic Catechism states, in paragraph 2415, that “Animals, like plants and inanimate beings, are by nature destined for the common good of past, present, and future humanity.” Renaissance science, through Descartes and other authors, founded its characteristic humanism on this distinction between humans and animals, exacerbating it: the (supposedly) irrational animal came to be understood as the antithesis of the (supposedly) rational human. The treatment of animals as things by contemporary science thus has old historical roots.

It so happens, however, that this idea runs against the everyday existence of most of humanity, in every era. In non-Western societies and cultures, it is common to attribute some form of consciousness and “human” personality to animals. In Western societies, anyone who has a pet knows that these animals have much more than the simple capacity to feel pain: they are capable of making plans; of interacting with one another and with humans in complex tasks, making autonomous decisions; and they integrate themselves meaningfully into the emotional ecology of human families, devising intelligent ways of communicating their emotions. (Not to mention how humanized animals are omnipresent in our cultural imagination, from children’s cartoons to soccer team mascots, from folk characters to Hollywood blockbusters.) Indeed, the contrast between this everyday perception and what the theological and theoretical thinking mentioned above suggests makes it seem that there is an excess of rationalization in such arguments. And where there is too much rationalization, rather than a description of the world, what we most likely have is an attempt to control reality. In other words, this is more a political discourse, trying to stabilize unequal power relations, than anything else (nothing new here for the social sciences or for the philosophy of science).

It is from scientific activity itself, however, that the most compelling evidence comes that animals are much more than sentient beings. On July 7, 2012, a group of neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists, and computational neuroscientists gathered at the University of Cambridge produced the document known as the Cambridge Declaration on Consciousness, which states the following: “the absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.” The declaration was signed at a dinner attended by Stephen Hawking. Philip Low, one of the neuroscientists who drafted it, said in an interview with Veja magazine (issue 2278, July 18, 2012): “It is an inconvenient truth: it has always been easy to claim that animals have no consciousness. Now we have a group of respected neuroscientists who study the phenomenon of consciousness, animal behavior, neural networks, and the anatomy and genetics of the brain. It is no longer possible to say that we did not know.”

Another body of research with troubling results for keeping mammals in laboratories comes from the sciences that study the social lives of animals in their wild environments. Animals are social beings; some, as primatology studies show us, have social lives governed by complex political dynamics, in which individuals not only understand their kinship relations in sophisticated ways but also occupy specific positions in social hierarchies that can have four levels of differentiation. Princeton University studies of baboons have shown that females are capable of inducing a political rupture in the troop, resulting in the formation of a new social group. Many other animals live in complex hierarchical societies, such as elephants. Dogs, cats, rabbits, and rats are also, naturally, social animals, even if the complexity of their groups is not comparable to what is seen among baboons and elephants.

On top of all this, it is widely documented that many primates are capable of inventing technological solutions to their everyday problems – creating tools to crack open seed husks, for example – and of transmitting what they invented to the other members of their groups, including their offspring. Technically, this means they possess culture, that is, symbolic life. Whales change the “style” of their song from one year to the next, without strictly biological causes. According to the philosopher and musician Bernd M. Scherer, there is no way to explain why one bird’s song is structured around the repetition of a one- or two-second sequence of sounds, while other birds sing much longer sequences, using only the notions of marking territory and attracting females. Scherer, through his research (which includes jazz-style musical interaction with birds and whales), is convinced that an aesthetic dimension is present in birdsong. He also asserts that most birds need to learn to sing, and are not born with their song fully predefined genetically.

There is no reason to think that all of this does not also apply to cows, pigs, and chickens. Annie Potts, of the University of Canterbury, describes in Margo DeMello’s book Animals and Society (2012) her observation of the friendship between two hens, Buffy and Mecki, at the hen sanctuary the researcher maintains. At a certain point, Buffy fell ill, and her health deteriorated to the point that she could no longer come out from under a bush. Her friend Mecki stayed sitting at her side, despite all the activity of the other hens in the sanctuary, gently pecking her around the face and on her back while emitting soft sounds. When Buffy finally died, Mecki withdrew into the henhouse and for a time refused to eat or to join the other hens in their activities. Hens are susceptible to grief, Potts concludes.

The more the existence of animals is researched – especially birds and mammals – the more one concludes that between them and us there are only differences of degree, not of kind. Both of us have consciousness, intelligence, intentionality, inventiveness, the capacity for improvisation, and skill in the use of symbols for communication; it seems that non-human animals use these capacities in less complex ways than humans do, and that is the whole difference. We are living through the discovery of a brave new animal world. Our world contains many more subjectivities than we imagined; perhaps we should stop looking for intelligence on other planets and start looking more carefully around us. The problem is that, when we do, what we see is not pleasant. If animals have the capacity to be the subjects of their own lives, as the evidence indicates, then by preventing them from doing so humans engage in actions that are, at the very least, ethically reprehensible.

Let us return to the arguments in defense of the use of animals in laboratories cited at the beginning of this text. Most of the reasons listed rest on utilitarian grounds: “this way is more effective; if we do it differently, we lose efficiency.” An ethical discussion cannot be founded on utilitarian presuppositions. If it could, it would be acceptable to kill one healthy individual in order to save (through the donation of his organs, for example) five other sick individuals. What a good many scientists fail to see is that this is a problem that cannot be reduced to the dimension of technique; it is a political question (in the philosophical sense of the term, that is, one concerning the existential problem of living beings that coexist with conflicting interests).

But there is another element silently shaping the logic of scientific production: market competitiveness. In academia this manifests itself as exacerbated productivism, in which any methodological change that implies reduced efficiency in the pace of research and publication meets resistance. In private laboratories, besides the haste imposed by competition, there is pressure to cut research costs. Progress must be made, at any cost. This perception of the pace of things seems “natural,” but it is not: the arguments speak of putting at risk the research that will lead to the cure for AIDS or the creation of a dengue vaccine, as if these things already pre-existed somewhere and their time of “discovery” were fixed. That is a fiction: not only scientific, but also political. Things do not pre-exist, and there is nothing “natural” about the pace at which they come about. Time is part of politics: it is society that must choose the pace at which to proceed, and it is entirely legitimate to slow the pace of technical-scientific advances if the moral implications of those advances are unacceptable.

Of all the scientists who have spoken out in recent days, Sidarta Ribeiro, in last Sunday’s Estadão, was the only one to raise openly the problem that animals are not things. But, to the reader’s dismay and to the disappointment of those who admire him, as I do, his conclusions fell into the common rut of bureaucratic simplism: the problem was supposedly solved by the creation of the bureaucratic apparatus regulating the use of animals, already mentioned at the beginning of this text. Now, if animals are beings endowed with intentionality, intelligence and affection, and if the fullness of their existence depends on complex social life, is simply keeping their organisms alive and (supposedly) free of pain enough to ensure that they “do not suffer”? Sidarta rightly points out that we must attend to the fact that far worse things happen in the meat industry, and also in many areas of human existence. But he errs in creating the impression that one thing stands in opposition to the other (something like “fight for the humanization of dehumanized humans and leave science alone”). They are all part of the same problem: the denial of the right to be the subject of one’s own life. A coherent ethical stance implies not differentiating by species, considering all those who can effectively be subjects of their own lives. The rest is slavery, of human and non-human animals.

The ethical protocols for research with human subjects were developed after the horrors of Nazi medical experimentation on Jews came to light. It seems inevitable to me that, in a few decades, we will come to think of experimentation on animal subjects in laboratories with the same sense of indignation and horror.

Renzo Taddei holds a PhD in anthropology from Columbia University. He is a professor at the Universidade Federal de São Paulo.

 

An interview with Alan Greenspan (FT)

October 25, 2013 10:41 am

By Gillian Tett

Six years on from the start of the credit crisis, the former US Federal Reserve chairman is prepared to admit that he got it wrong – at least in part

Alan Greenspan. Photo: Stefan Ruiz

A couple of years ago I bumped into Alan Greenspan, the former chairman of the US Federal Reserve, in the lofty surroundings of the Aspen Institute Ideas Festival. As we chatted, the sprightly octogenarian declared that he was becoming interested in social anthropology – and wanted to know what books to read.

“Anthropology?” I retorted, in utter amazement. It appeared to overturn everything I knew (and criticised) about the man. Greenspan, after all, was somebody who had trained as an ultraorthodox, free-market economist and was close to Ayn Rand, the radical libertarian novelist. He was (in)famous for his belief that the best way to run an economy was to rely on rational actors competing in open markets. As Fed chair, he seemed to worship mathematical models and disdain “soft” issues such as human culture.

But Greenspan was serious; he wanted to venture into new intellectual territory, he explained. And that reflected a far bigger personal quest. Between 1987 and 2006, when he led the Fed, Greenspan was highly respected. Such was his apparent mastery over markets – and success in delivering stable growth and low inflation – that Bob Woodward, the Washington pundit, famously described him as a “maestro”. Then the credit crisis erupted in 2007 and his reputation crumbled, with critics blaming him for the bubble. Greenspan denied any culpability. But in late 2008, he admitted to Congress that the crisis had exposed a “flaw” in his world view. He had always assumed that bankers would act in ways that would protect shareholders – in accordance with free-market capitalist theory – but this presumption turned out to be wrong.

In the months that followed, Greenspan started to question and explore many things – including the unfamiliar world of anthropology and psychology. Hence our encounter in Aspen.

Was this just a brief intellectual wobble, I wondered? A bid for sympathy from a man who had gone from hero to zero in investors’ eyes? Or was it possible that a former “maestro” of free markets could change his mind about how the world worked? And if so, what does that imply for the discipline of economics, let alone Greenspan’s successors in the policy making world – such as Janet Yellen, nominated as the new head of the Fed?

Earlier this month I finally got a chance to seek some answers when I stepped into a set of bland, wood-panelled offices in the heart of Washington. Ever since Greenspan left the imposing, marble-pillared Fed, this suite has been his nerve centre. He works out of a room dubbed the “Oval Office” due to its shape. It is surprisingly soulless: piles of paper sit on the windowsill next to a bust of Abraham Lincoln. One flash of colour comes from a lurid tropical beach scene that he has – somewhat surprisingly – installed as a screen saver.

“If you are not going to have numbers on your screen, you might as well have something nice to look at,” he laughs, spreading his large hands expansively in the air. Then, just in case I might think that he is tempted to slack off at the age of 87, he stresses that “I do play tennis twice a week – but my golf game is in the soup. I haven’t had time to get out.” Or, it seems, daydream on a beach. “I get so engaged when I have a problem you cannot solve, that I just cannot break away from what I am doing – I keep thinking and thinking and cannot stop.”

The task that has kept him so busy is his new book, The Map and the Territory, published this month and a successor to an earlier memoir, The Age of Turbulence. To the untrained eye, this title might seem baffling. But to Greenspan, the phrase is highly significant. For what his new manuscript essentially does is explain his intellectual journey since 2007. Most notably it shows why he now thinks that the “map” that he (and many others) once used to analyse finance is incomplete – and what this means for anyone wanting to navigate today’s economic “territory”.

Greenspan in the ‘Oval Office’ of his Washington workplace. Photo: Stefan Ruiz

This is not quite the mea culpa that some people who are angry about the credit bubble would like to see. Greenspan is a man who built his career by convincing people that he was correct. Born in New York to a family of east European Jewish ancestry, he trained as an economist and, before he was appointed by Ronald Reagan to run the Fed, was an economic consultant on Wall Street (interspersed with a brief spell working for the Nixon administration). This background once made him lauded; today it seems more of a liability, at least in the eyes of the political left. “Before [2007] I was embarrassed by the adulation – they made me a rock star,” he says. “But I knew then that I was being praised for something I didn’t really do. So after, when I got hammered, it kind of balanced out, since I don’t think I deserved the criticism either … I am a human so I feel it but not as much as some.”

Yet in one respect, at least, Greenspan has had a change of heart: he no longer thinks that classic orthodox economics and mathematical models can explain everything. During the first six decades of his career, he thought – or hoped – that Homo economicus was a rational being and that algorithms could forecast behaviour. When he worked on Wall Street he loved creating models and when he subsequently joined the Fed he believed the US central bank was brilliantly good at this. “The Fed model was as advanced as you could possibly get it,” he recalls. “All the new concepts with every theoretical advance was embodied in that model – rational expectations, monetarism, all sorts of sophisticated means of thinking about how the economy worked. The Fed has 250 [economic] PhDs in that division and they are all very smart.”

And yet in September 2008, this pride was shattered when those venerated models suddenly stopped working. “The whole period upset my view of how the world worked – the models failed at a time when we needed them most … and the failure was uniform,” he recalls, shaking his head. “JPMorgan had the American economy accelerating three days before [the collapse of Lehman Brothers] – their model failed. The Fed model failed. The IMF model failed. I am sure the Goldman model also missed it too.

“So that left me asking myself what has happened? Are we living in an unreal world which has a model which is supposed to replicate the economy but gets caught out by one of the most extraordinary events in history?”

Shocked, Greenspan spent the subsequent months trying to answer his own question. He crunched and re-crunched his beloved algorithms, scoured the data and tested his ideas. It was not the first time he had engaged in intellectual soul-searching: in his youth he had once subscribed to intellectual positivism, until Rand, the libertarian, persuaded him those ideas were wrong. However, this was more radical. Greenspan was losing faith in “the presumption of neoclassical economics that people act in rational self-interest”. “To me it suddenly seemed that the whole idea of taking the maths as the basis of pricing that system failed. The whole structure of risk evaluation – what they call the ‘Harry Markowitz approach’ – failed,” he observes, referring to the influential US economist who is the father of modern portfolio management. “The rating agency failed completely and financial services regulation failed too.”

But if classic models were no longer infallible, were there alternative ways to forecast an economy? Greenspan delved into behavioural economics, anthropology and psychology, and the work of academics such as Daniel Kahneman. But those fields did not offer a magic wand. “Behavioural economics by itself gets you nowhere and the reason is that you cannot create a macro model based on just [that]. To their credit, behavioural economists don’t [even] claim they can,” he points out.

In 1974 with friend and inspiration, the writer Ayn Rand. Photo: Getty

But as the months turned into years, Greenspan slowly developed a new intellectual framework. This essentially has two parts. The first half asserts that economic models still work in terms of predicting behaviour in the “real” economy: his reading of past data leaves him convinced that algorithms can capture trends in tangible items like inventories. “In the non-financial part of the system [rational economic theory] works very well,” he says. But money is another matter: “Finance is wholly different from the rest of the economy.” More specifically, while markets sometimes behave in ways that models might predict, they can also become “irrational”, driven by animal spirits that defy maths.

Greenspan partly blames that on the human propensity to panic. “Fear is a far more dominant force in human behaviour than euphoria – I would never have expected that or given it a moment’s thought before but it shows up in the data in so many ways,” he says. “Once you get that skewing in [statistics from panic] it creates the fat tail.” The other crucial issue is what economists call “leverage” (more commonly dubbed “debt”). When debt in an economy is low, then finance is “neutral” in economic terms and can be explained by models, Greenspan believes. But when debt explodes, this creates fragility – and, with it, that panic. “The very nature of finance is that it cannot be profitable unless it is significantly leveraged … and as long as there is debt there can be failure and contagion.”

A cynic might complain that it is a pity Greenspan did not spot that “flaw” when he was running the Fed and leverage was exploding. He admits that he first saw how irrational finance could become as long ago as the 1950s and 1960s when he briefly tried, as a young New York economist, to trade commodity markets. Back then he thought he could predict cotton values “from the outside, looking at supply-demand forces”. But when he actually “bought a seat in the market and did a lot of trading” he discovered that rational logic did not always rule. “There were a couple of guys in that exchange who couldn’t tell a hide from copper sheeting but they made a lot of money. Why? They weren’t trading a commodity but human nature … and there is something about human nature which is not rational.”

Half-a-century later, when Greenspan was running the Fed, he had seemingly come to assume that markets would always “self-correct”, in a logical manner. Thus he did not see any reason to prick bubbles or control excessive exuberance by government fiat. “If bubbles are not leveraged, they can be highly disruptive to the wealth of people who own assets but there are not really any secondary consequences,” he explains, pointing out that the stock market bubble of the late 1980s and tech bubble of the late 1990s both deflated – without real economic damage. “It is only when you have leverage that a collapse in values becomes so contagious.”

Of course, the tragedy of the noughties credit bubble was that this bout of exuberance – unlike 1987 or 2001 – did involve leverage on a massive scale. Greenspan, for his part, denies any direct culpability for this. Though critics have carped that he cut rates in 2001, and thus created easy money, he points out that from 2003 onwards the Fed, and other central banks, were diligently raising interest rates. But even “when we raised [official] rates, long-term rates went down – bond prices were very high”, he argues, blaming this “conundrum” on the fact that countries such as China were experiencing “a huge increase in savings, all of which spilled into the developed world and the global bond market at that time”. But whatever the source of this leverage, one thing is clear: Greenspan, like his critics, now agrees that this tidal wave of debt meant that classic economic theory became impotent to forecast how finance would behave. “We have a system of finance which is far too leveraged – [the models] cannot work in this context.”

So what does that mean for institutions such as the Fed? When I arrived to interview Greenspan, the television screens were filled with the face of Yellen. What advice would he give her? Should she rip up all the Fed’s sophisticated models? Hire psychologists or anthropologists instead?

May 2004: Greenspan is nominated as Fed chairman for an unprecedented fifth term by President George W. Bush. Photo: Chuck Kennedy

For the first time during our two-hour conversation, Greenspan looks nonplussed. “It never entered my mind – it’s almost too presumptuous of me to say. I haven’t thought about it.” Really? I press him. He shakes his head vigorously. And then he slides into diplomatic niceties. One unspoken, albeit binding, rule of central banking is that the current and former incumbents of the top jobs never criticise each other in public. “Yellen is a great economist, a wonderful person,” he insists.

But tact cannot entirely mask Greenspan’s deep concern that six years after the leverage-fuelled crisis, there is even more debt in the global financial system and even easier money due to quantitative easing. And later he admits that the Fed faces a “brutal” challenge in finding a smooth exit path. “I have preferences for rates which are significantly above where they are,” he observes, admitting that he would “hardly” be tempted to buy long-term bonds at their current rates. “I run my own portfolio and I am not long [ie holding] 30-year bonds.”

But even if Greenspan is wary of criticising quantitative easing, he is more articulate about banking. Most notably, he is increasingly alarmed about the monstrous size of the debt-fuelled western money machine. “There is a very tricky problem we don’t know how to solve or even talk about, which is an inexorable rise in the ratio of finance and financial insurance as a ratio of gross domestic income,” he says. “In the 1940s it was 2 per cent of GDP – now it is up to 8 per cent. But it is a phenomenon not indigenous to the US – it is everywhere.

“You would expect that with the 2008 crisis, the share of finance in the economy would go down – and it did go down for a while. But then it bounced back despite the fact that finance was held in such terrible repute! So you have to ask: why are the non-financial parts of the economy buying these services? Honestly, I don’t know the answer.”

What also worries Greenspan is that this swelling size has gone hand in hand with rising complexity – and opacity. He now admits that even (or especially) when he was Fed chairman, he struggled to track the development of complex instruments during the credit bubble. “I am not a neophyte – I have been trading derivatives and things and I am a fairly good mathematician,” he observes. “But when I was sitting there at the Fed, I would say, ‘Does anyone know what is going on?’ And the answer was, ‘Only in part’. I would ask someone about synthetic derivatives, say, and I would get detailed analysis. But I couldn’t tell what was really happening.”

This last admission will undoubtedly infuriate critics. Back in 2005 and 2006, Greenspan never acknowledged this uncertainty. On the contrary, he kept insisting that financial innovation was beneficial and fought efforts by other regulators to rein in the more creative credit products emerging from Wall Street. Even today he remains wary of government control; he does not want to impose excessive controls on derivatives, for example.

But what has changed is that he now believes banks should be forced to hold much thicker capital cushions. More surprising, he has come to the conclusion that banks need to be smaller. “I am not in favour of breaking up the banks but if we now have such trouble liquidating them I would very reluctantly say we would be better off breaking up the banks.” He also thinks that finance as a whole needs to be cut down in size. “Is it essential that the division of labour [in our economy] requires an ever increasing amount of financial insight? We need to make sure that the services that non-financial services buy are not just ersatz or waste,” he observes with a wry chuckle.

In 2004 with wife Andrea Mitchell. Photo: Getty

There is a profound irony here. In some senses, Greenspan remains an orthodox pillar of ultraconservative American thought: The Map and the Territory rails against fiscal irresponsibility, the swelling social security budget and the entitlement culture. And yet he, like his leftwing critics, now seems utterly disenchanted with Wall Street and the extremities of free-market finance – never mind that he championed them for so many years.

Perhaps this just reflects an 87-year-old man who is trying to make sense of the extreme swings in his reputation. I prefer to think, though, that it reflects a mind that – to his credit – remains profoundly curious, even after suffering this rollercoaster ride. When I say to him that I greatly admire his spirit of inquiry – even though I disagree with some conclusions – he immediately peppers me with questions. “Tell me what you disagree with – please. I really want to hear,” he insists, with a smile that creases his craggy face. As someone who never had children, his books now appear to be his real babies; the only other subject which inspires as much passion is when I mention his adored second wife, Andrea Mitchell, the television journalist.

But later, after I have left, it occurs to me that the real key to explaining the ironies and contradictions that hang over Greenspan is that he has – once again – unwittingly become a potent symbol of an age. Back in the days of the “Great Moderation” – the period of reduced economic volatility starting in the 1980s – most policy makers shared his sunny confidence in 20th-century progress. There was a widespread assumption that a mixture of free market capitalism, innovation and globalisation had made the world a better place. Indeed, it was this very confidence that laid the seeds of disaster. Today, however, that certainty has crumbled; the modern political and economic ecosystem is marked by a culture of doubt and cynicism. Nobody would dare call Yellen “maestro” today; not when the Fed (and others) are tipping into such uncharted territory. This delivers some benefits: Greenspan himself now admits this pre-2007 confidence was an Achilles heel. “Beware of success in policy,” he observes, laughing. “A stable, moderately growing, non-inflationary environment will create a bubble 100 per cent of the time.”

But a world marked by profound uncertainty is also a deeply disconcerting and humbling place. Today there are no easy answers or straightforward heroes or villains, be that among economists, anthropologists or anyone else. Perhaps the biggest moral of The Map and the Territory is that in a shifting landscape, we all need to keep challenging our assumptions and prejudices. And not just at the age of 87.

gillian.tett@ft.com; ‘The Map and the Territory: Risk, Human Nature, and the Future of Forecasting’ by Alan Greenspan is published by Penguin.

 

Scientists Eye Longer-Term Forecasts of U.S. Heat Waves (Science Daily)

Oct. 27, 2013 — Scientists have fingerprinted a distinctive atmospheric wave pattern high above the Northern Hemisphere that can foreshadow the emergence of summertime heat waves in the United States more than two weeks in advance.

This map of air flow a few miles above ground level in the Northern Hemisphere shows the type of wavenumber-5 pattern associated with US drought. This pattern includes alternating troughs (blue contours) and ridges (red contours), with an “H” symbol (for high pressure) shown at the center of each of the five ridges. High pressure tends to cause sinking air and suppress precipitation, which can allow a heat wave to develop and intensify over land areas. (Credit: Image courtesy Haiyan Teng.)

The new research, led by scientists at the National Center for Atmospheric Research (NCAR), could potentially enable forecasts of the likelihood of U.S. heat waves 15-20 days out, giving society more time to prepare for these often-deadly events.

The research team discerned the pattern by analyzing a 12,000-year simulation of the atmosphere over the Northern Hemisphere. During those times when a distinctive “wavenumber-5” pattern emerged, a major summertime heat wave became more likely to subsequently build over the United States.

“It may be useful to monitor the atmosphere, looking for this pattern, if we find that it precedes heat waves in a predictable way,” says NCAR scientist Haiyan Teng, the lead author. “This gives us a potential source to predict heat waves beyond the typical range of weather forecasts.”

The wavenumber-5 pattern refers to a sequence of alternating high- and low-pressure systems (five of each) that form a ring circling the northern midlatitudes, several miles above the surface. This pattern can lend itself to slow-moving weather features, raising the odds for stagnant conditions often associated with prolonged heat spells.
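
To make the geometry concrete, here is a minimal sketch, in Python, of how one might measure the strength of a wavenumber-5 pattern: take a field sampled around a latitude circle (for example, mid-tropospheric height anomalies) and read off the amplitude of the fifth zonal harmonic with a Fourier transform. The function, grid size, and synthetic data are illustrative assumptions, not the method used in the study.

```python
# Illustrative sketch only: estimate the amplitude of zonal wavenumber 5
# from a field sampled evenly around a latitude circle.
import numpy as np

def zonal_wave_amplitude(field, k=5):
    """Amplitude of zonal wavenumber k in a 1-D field (e.g. height anomalies)."""
    n = len(field)
    coeffs = np.fft.rfft(field)
    return 2.0 * np.abs(coeffs[k]) / n  # amplitude of a single harmonic

# Synthetic example: a 40-unit wavenumber-5 ridge/trough pattern plus noise.
lons = np.linspace(0, 2 * np.pi, 144, endpoint=False)
field = 40.0 * np.cos(5 * lons) + np.random.normal(0, 10, lons.size)
print(round(zonal_wave_amplitude(field, k=5), 1))  # roughly 40
```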

The study is being published next week in Nature Geoscience. It was funded by the U.S. Department of Energy, NASA, and the National Science Foundation (NSF), which is NCAR’s sponsor. NASA scientists helped guide the project and are involved in broader research in this area.

Predicting a lethal event

Heat waves are among the most deadly weather phenomena on Earth. A 2006 heat wave across much of the United States and Canada was blamed for more than 600 deaths in California alone, and a prolonged heat wave in Europe in 2003 may have killed more than 50,000 people.

To see if heat waves can be triggered by certain large-scale atmospheric circulation patterns, the scientists looked at data from relatively modern records dating back to 1948. They focused on summertime events in the United States in which daily temperatures reached the top 2.5 percent of weather readings for that date across roughly 10 percent or more of the contiguous United States. However, since such extremes are rare by definition, the researchers could identify only 17 events that met such criteria — not enough to tease out a reliable signal amid the noise of other atmospheric behavior.
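
As a rough illustration of that kind of event definition, the sketch below flags days on which at least 10 percent of grid cells exceed their own calendar-date 97.5th percentile. The array shapes, variable names, and synthetic data are assumptions made for the example; they are not the study's actual data or code.

```python
# Hedged sketch of an areal-extent heat wave criterion.
import numpy as np

def flag_heat_wave_days(tmax, doy, area_frac=0.10, pct=97.5):
    """tmax: (days, cells) daily max temperatures; doy: (days,) day-of-year labels.
    Returns a boolean array marking days on which at least `area_frac` of cells
    exceed their own calendar-date `pct` percentile."""
    exceed = np.zeros_like(tmax, dtype=bool)
    for d in np.unique(doy):
        sel = doy == d
        thresh = np.percentile(tmax[sel], pct, axis=0)  # per-cell threshold for this date
        exceed[sel] = tmax[sel] > thresh
    return exceed.mean(axis=1) >= area_frac

# Synthetic example: 30 summers of 92 days on a 500-cell grid. With spatially
# independent noise almost no days reach the 10% areal extent, a reminder of
# how much real events depend on coherent large-scale patterns.
rng = np.random.default_rng(0)
tmax = rng.normal(30, 4, size=(30 * 92, 500))
doy = np.tile(np.arange(92), 30)
print(int(flag_heat_wave_days(tmax, doy).sum()), "qualifying days")
```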

The group then turned to an idealized simulation of the atmosphere spanning 12,000 years. The simulation had been created a couple of years before with a version of the NCAR-based Community Earth System Model, which is funded by NSF and the Department of Energy.

By analyzing more than 5,900 U.S. heat waves simulated in the computer model, they determined that the heat waves tended to be preceded by a wavenumber-5 pattern. This pattern is not caused by particular oceanic conditions or heating of Earth’s surface, but instead arises from naturally varying conditions of the atmosphere. It was associated with an atmospheric phenomenon known as a Rossby wave train that encircles the Northern Hemisphere along the jet stream.

During the 20 days leading up to a heat wave in the model results, the five ridges and five troughs that make up a wavenumber-5 pattern tended to propagate very slowly westward around the globe, moving against the flow of the jet stream itself. Eventually, a high-pressure ridge moved from the North Atlantic into the United States, shutting down rainfall and setting the stage for a heat wave to emerge.

When wavenumber-5 patterns in the model were more amplified, U.S. heat waves became more likely to form 15 days later. In some cases, the probability of a heat wave was more than quadruple what would be expected by chance.
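
For readers who want to see what "more than quadruple what would be expected by chance" means operationally, here is a hedged sketch of the underlying idea: compare the frequency of heat waves a fixed number of days after an amplified pattern with the overall base rate. The variable names, lag handling, and synthetic series are assumptions for illustration, not the authors' analysis.

```python
# Minimal sketch: "lift" of heat-wave occurrence following an amplified pattern.
import numpy as np

def heat_wave_lift(amplified, heat_wave, lag=15):
    """Ratio of P(heat wave at t+lag | amplified pattern at t) to P(heat wave)."""
    amplified = np.asarray(amplified, dtype=bool)
    heat_wave = np.asarray(heat_wave, dtype=bool)
    conditional = heat_wave[lag:][amplified[:-lag]].mean()
    base = heat_wave.mean()
    return conditional / base

# Synthetic demo: a 2% base rate, plus extra heat-wave days seeded 15 days
# after amplified-pattern days, gives a lift well above 1 (around 4-5 here).
rng = np.random.default_rng(1)
amplified = rng.random(200_000) < 0.05
heat_wave = rng.random(200_000) < 0.02
heat_wave[15:] |= amplified[:-15] & (rng.random(200_000 - 15) < 0.10)
print(round(heat_wave_lift(amplified, heat_wave), 2))
```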

In follow-up work, the research team turned again to actual U.S. heat waves since 1948. They found that some historical heat wave events were indeed characterized by a large-scale circulation pattern indicative of a wavenumber-5 event.

Extending forecasts beyond 10 days

The research finding suggests that scientists are making progress on a key meteorological goal: forecasting the likelihood of extreme events more than 10 days in advance. At present, there is very limited skill in such long-term forecasts.

Previous research on extending weather forecasts has focused on conditions in the tropics. For example, scientists have found that El Niño and La Niña, the periodic warming and cooling of surface waters in the central and eastern tropical Pacific Ocean, are correlated with a higher probability of wet or dry conditions in different regions around the globe. In contrast, the wavenumber-5 pattern does not rely on conditions in the tropics. However, the study does not exclude the possibility that tropical rainfall could act to stimulate or strengthen the pattern.

Now that the new study has connected a planetary wave pattern to a particular type of extreme weather event, Teng and her colleagues will continue searching for other circulation patterns that may presage extreme weather events.

“There may be sources of predictability that we are not yet aware of,” she says. “This brings us hope that the likelihood of extreme weather events that are damaging to society can be predicted further in advance.”

The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation. Any opinions, findings and conclusions, or recommendations expressed in this release are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Journal Reference:

  1. Haiyan Teng, Grant Branstator, Hailan Wang, Gerald A. Meehl, Warren M. Washington. Probability of US heat waves affected by a subseasonal planetary wave pattern. Nature Geoscience, 2013; DOI: 10.1038/ngeo1988

Will U.S. Hurricane Forecasting Models Catch Up to Europe’s? (National Geographic)

A satellite view of Hurricane Sandy on October 28, 2012. Photograph by Robert Simmon, NASA Earth Observatory, and the NASA/NOAA GOES Project Science team.

Willie Drye

for National Geographic

Published October 27, 2013

If there was a bright spot amid Hurricane Sandy’s massive devastation, including 148 deaths, at least $68 billion in damages, and the destruction of thousands of homes, it was the accuracy of the forecasts predicting where the storm would go.

Six days before Sandy came ashore one year ago this week—while the storm was still building in the Bahamas—forecasters predicted it would make landfall somewhere between New Jersey and New York City on October 29.

They were right.

Sandy, which had by then weakened from a Category 2 hurricane to an unusually potent Category 1, came ashore just south of Atlantic City, a few miles from where forecasters said it would, on the third to last day of October.

“They were really, really excellent forecasts,” said University of Miami meteorologist Brian McNoldy. “We knew a week ahead of time that something awful was going to happen around New York and New Jersey.”

That knowledge gave emergency management officials in the Northeast plenty of time to prepare, issuing evacuation orders for hundreds of thousands of residents in New Jersey and New York.

Even those who ignored the order used the forecasts to make preparations, boarding up buildings, stocking up on food and water, and buying gasoline-powered generators.

But there’s an important qualification about the excellent forecasts that anticipated Sandy’s course: The best came from a European hurricane prediction program.

The six-day-out landfall forecast arrived courtesy of a computer model from the European Centre for Medium-Range Weather Forecasts (ECMWF), which is based in England.

Most of the other models in use at the National Hurricane Center in Miami, including the U.S. Global Forecast System (GFS), didn’t start forecasting a U.S. landfall until four days before the storm came ashore. At the six-day-out mark, that model and others at the National Hurricane Center had Sandy veering away from the Atlantic Coast, staying far out at sea.

“The European model just outperformed the American model on Sandy,” says Kerry Emanuel, a meteorologist at Massachusetts Institute of Technology.

Now, U.S. weather forecasting programmers are working to close the gap between the U.S. Global Forecast System and the European model.

There’s more at stake than simple pride. “It’s to our advantage to have two excellent models instead of just one,” says McNoldy. “The more skilled models you have running, the more you know about the possibilities for a hurricane’s track.”

And, of course, the more lives you can save.

Data, Data, Data

The computer programs that meteorologists rely on to predict the courses of storms draw on lots of data.

U.S. forecasting computers and their European counterparts rely on radar that provides information on cloud formations and the rotation of a storm, on orbiting satellites that show precisely where a storm is, and on hurricane-hunter aircraft that fly into storms to collect wind speeds, barometric pressure readings, and water temperatures.

Hundreds of buoys deployed along the Atlantic and Gulf coasts, meanwhile, relay information about the heights of waves being produced by the storm.

All this data is fed into computers at the National Centers for Environmental Prediction at Camp Springs, Maryland, which use it to run the forecast models. Those computers, linked to others at the National Hurricane Center, translate the computer models into official forecasts.

The forecasters use data from all computer models—including the ECMWF—to make their forecasts four times daily.

Forecasts produced by various models often diverge, leaving plenty of room for interpretation by human forecasters.

“Usually, it’s kind of a subjective process as far as making a human forecast out of all the different computer runs,” says McNoldy. “The art is in the interpretation of all of the computer models’ outputs.”
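
One simple objective baseline that forecasters can compare against is a plain multi-model "consensus," the average of the positions predicted by each model for the same forecast hour. The sketch below shows the idea; the model names and coordinates are invented for the example and are not NHC products.

```python
# Hedged illustration of a multi-model consensus track position.
import numpy as np

# Hypothetical 72-hour position forecasts (latitude, longitude in degrees).
forecasts = {
    "model_a": (38.5, -74.0),
    "model_b": (39.2, -73.1),
    "model_c": (37.9, -74.8),
}

positions = np.array(list(forecasts.values()))
consensus = positions.mean(axis=0)   # simple unweighted average
spread = positions.std(axis=0)       # disagreement among models

print(f"consensus position: {consensus[0]:.1f}N, {abs(consensus[1]):.1f}W")
print(f"model spread (deg): lat {spread[0]:.2f}, lon {spread[1]:.2f}")
```

A large spread among the models is itself useful information: it signals lower confidence and more room for the human forecaster's judgment.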

There are two big reasons why the European model is usually more accurate than U.S. models. First, the ECMWF model is a more sophisticated program that incorporates more data.

Second, the European computers that run the program are more powerful than their U.S. counterparts and are able to do more calculations more quickly.

“They don’t have any top-secret things,” McNoldy said. “Because of their (computer) hardware, they can implement more sophisticated code.”

A consortium of European nations began developing the ECMWF in 1976, and the model has been fueled by a series of progressively more powerful supercomputers in England. It got a boost when the European Union was formed in 1993 and member states started contributing taxes for more improvements.

The ECMWF and the GFS are the two primary models that most forecasters look at, said Michael Laca, producer of TropMet, a website that focuses on hurricanes and other severe weather events.

Laca said that forecasts and other data from the ECMWF are provided to forecasters in the U.S. and elsewhere who pay for the information.

“The GFS, on the other hand, is freely available to everyone, and is funded—or defunded—solely through (U.S.) government appropriations,” Laca said.

And since funding for U.S. research and development is subject to funding debates in Congress, U.S. forecasters are “in a hard position to keep pace with the ECMWF from a research and hardware perspective,” Laca said.

Hurricane Sandy wasn’t the first or last hurricane for which the ECMWF was the most accurate forecast model. It has consistently outperformed the GFS and four other U.S. and Canadian forecasting models.

Greg Nordstrom, who teaches meteorology at Mississippi State University in Starkville, said the European model provided much more accurate forecasts for Hurricane Isaac in August 2012 and for Tropical Storm Karen earlier this year.

“This doesn’t mean the GFS doesn’t beat the Euro from time to time,” he says.  “But, overall, the Euro is king of the global models.”

McNoldy says the European Union’s generous funding of research and development of their model has put it ahead of the American version. “Basically, it’s a matter of resources,” he says. “If we want to catch up, we will. It’s important that we have the best forecasting in the world.”

European developers who work on forecasting software have also benefited from better cooperation between government and academic researchers, says MIT’s Emanuel.

“If you talk to (the National Oceanic and Atmospheric Administration), they would deny that, but there’s no real spirit of cooperation (in the U.S.),” he says. “It’s a cultural problem that will not get fixed by throwing more money at the problem.”

Catching Up Amid Chaos

American computer models’ accuracy in forecasting hurricane tracks has improved dramatically since the 1970s. The average margin of error for a three-day forecast of a hurricane’s track has dropped from 500 miles in 1972 to 115 miles in 2012.

And NOAA is in the middle of a ten-year program intended to dramatically improve the forecasting of hurricanes’ tracks and their likelihood to intensify, or become stronger before landfall.

One of the project’s centerpieces is the Hurricane Weather Research and Forecasting model, or HWRF. In development since 2007, it’s similar to the ECMWF in that it will incorporate more data into its forecasting, including data from the GFS model.

Predicting the likelihood that a hurricane will intensify is difficult. For a hurricane to gain strength, it needs humid air, seawater heated to at least 80ºF, and no atmospheric winds to disrupt its circulation.

In 2005, Hurricane Wilma encountered those perfect conditions and in just 30 hours strengthened from a tropical storm with peak winds of about 70 miles per hour to the most powerful Atlantic hurricane on record, with winds exceeding 175 miles per hour.

But hurricanes are as delicate as they are powerful. Seemingly small environmental changes, like passing over water that’s slightly cooler than 80ºF or ingesting drier air, can rapidly weaken a storm. And the environment is constantly changing.

“Over the next five years, there may be some big breakthrough to help improve intensification forecasting,” McNoldy said. “But we’re still working against the basic chaos in the atmosphere.”

He thinks it will take at least five to ten years for the U.S. to catch up with the European model.

MIT’s Emanuel says three factors will determine whether more accurate intensification forecasting is in the offing: the development of more powerful computers that can accommodate more data, a better understanding of hurricane intensity, and whether researchers reach a point at which no further improvements to intensification forecasting are possible.

Emanuel calls that point the “prediction horizon” and says it may have already been reached: “Our level of ignorance is still too high to know.”

Predictions and Responses

Assuming we’ve not yet hit that point, better predictions could dramatically improve our ability to weather hurricanes.

The more advance warning, the more time there is for those who do choose to heed evacuation orders. Earlier forecasting would also allow emergency management officials more time to provide transportation for poor, elderly, and disabled people unable to flee on their own.

More accurate forecasts would also reduce evacuation expenses.

Estimates of the cost of evacuating coastal areas before a hurricane vary considerably, but it’s been calculated that it costs $1 million for every mile of coastline evacuated. That includes the cost of lost commerce, wages and salaries by those who leave, and the costs of actual evacuating, like travel and shelter.

Better forecasts could reduce the size of evacuation areas and save money.
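
To put that figure in perspective, here is the back-of-envelope arithmetic it implies; the 50-mile reduction is an invented example, not an estimate from the article.

```python
# Illustrative arithmetic only: the $1 million per mile figure is the article's,
# the 50-mile reduction in the evacuation zone is hypothetical.
cost_per_mile = 1_000_000   # dollars per mile of coastline evacuated
miles_saved = 50            # hypothetical shrinkage of the evacuation zone
print(f"estimated savings: ${cost_per_mile * miles_saved:,}")  # $50,000,000
```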

They would also allow officials to get a jump on hurricane response. The Federal Emergency Management Agency (FEMA) tries to stockpile relief supplies far enough away from an expected hurricane landfall to avoid damage from the storm but near enough so that the supplies can quickly be moved to affected areas afterwards.

More reliable landfall forecasts would help FEMA position recovery supplies closer to where they’ll be.

Whatever improvements are made, McNoldy warns that forecasting will never be foolproof. However dependable, he said, “Models will always be imperfect.”

The cultures endangered by climate change (PLOS)

Posted: September 9, 2013

By Greg Downey

The Bull of Winter weakens

In 2003, after decades of working with the Viliui Sakha, indigenous horse and cattle breeders in the Vilyuy River region of northeastern Siberia, anthropologist Susan Crate began to hear the local people complain about climate change:

My own “ethnographic moment” occurred when I heard a Sakha elder recount the age-old story of Jyl Oghuha (the bull of winter). Jyl Oghuha’s legacy explains the 100° C annual temperature range of Sakha’s subarctic habitat. Sakha personify winter, the most challenging season for them, in the form of a white bull with blue spots, huge horns, and frosty breath. In early December this bull of winter arrives from the Arctic Ocean to hold temperatures at their coldest (-60° to -65° C; -76° to -85° F) for December and January. Although I had heard the story many times before, this time it had an unexpected ending… (Crate 2008: 570)

Lena Pillars, photo by Maarten Takens (CC BY SA)

This Sakha elder, born in 1935, talked about how the bull symbolically collapsed each spring, but also its uncertain future:

The bull of winter is a legendary Sakha creature whose presence explains the turning from the frigid winter to the warming spring. The legend tells that the bull of winter, who keeps the cold in winter, loses his first horn at the end of January as the cold begins to let go to warmth; then his second horn melts off at the end of February, and finally, by the end of March, he loses his head, as spring is sure to have arrived. It seems that now with the warming, perhaps the bull of winter will no longer be. (ibid.)

Crate found that the ‘softening’ of winter disrupted the Sakha way of life in a number of far more prosaic ways. The winters were warmer, bringing more rain and upsetting the haying season; familiar animals grew less common and new species migrated north; more snow fell, making hunting more difficult in winter; and when that snow thawed, water inundated their towns, fields, and countryside, rotting their houses, bogging down farming, and generally making life more difficult. Or, as a Sakha elder put it to Crate:

I have seen two ugut jil (big water years) in my lifetime. One was the big flood in 1959 — I remember canoeing down the street to our kin’s house. The other is now. The difference is that in ‘59 the water was only here for a few days and now it does not seem to be going away. (Sakha elder, 2009; in Crate 2011: 184).

(Currently, Eastern Russia is struggling with unprecedented flooding along the Chinese border, and, in July, unusual forest fires struck permafrost areas of the region.) As I write this, the website CO2 Now reports that the average atmospheric CO2 level for July 2013 at the Mauna Loa Observatory was 397.23 parts per million, slightly below the landmark 400+ ppm levels recorded in May. The vast majority of climate scientists now argue, not about whether we will witness anthropogenic atmospheric change, but how much and how quickly the climate will change. Will we cross potential ‘tipping points’, when feedback dynamics accelerate the pace of warming?

While climate science might be controversial with the public in the US (less so here in Australia and among scientists), the effects on human populations are more poorly understood and harder to predict, by public and scientists alike. Following on from Wendy Foden and colleagues’ piece in the PLOS special collection proposing a method to identify the species at greatest risk (Foden et al. 2013), I want to consider how we might identify which cultures are at greatest risk from climate change.

Will climate change threaten human cultural diversity, and if so, which groups will be pushed to the brink most quickly? Are groups like the Viliui Sakha at the greatest risk, especially as we know that climate change is already affecting the Arctic and warming may be amplified there? And what about island groups, threatened by sea level changes? Who will have to change most and adapt because of a shifting climate? Daniel Lende (2013: 496) has suggested that anthropologists need to put our special expertise to work in public commentary, and in the area of climate change, these human impacts seem to be one place where that expertise might be most useful.

The Sakha Republic

The Sakha Republic where the Viliui Sakha live is half of Russia’s Far Eastern Federal District, a district that covers an area almost as large as India, twice the size of Alaska. Nevertheless, fewer than one million people live there, spread thinly across the rugged landscape. The region contains the coldest spot on the planet, the Verkhoyansk Range, where the average January temperature — average — is around -50°, so cold that it doesn’t matter whether that’s Fahrenheit or Celsius.

The area that is now the Sakha Republic first came under the control of Tsarist Russia in the seventeenth century, with a tax taken from the local people in furs. Many early Russian migrants to the region adopted Sakha customs. Both the Tsars and the later Communist governors exiled criminals to the region, which came to be called Yakutia; after the fall of the Soviet Union, the Russian Federation recognised the Sakha Republic. The Sakha, also called Yakuts, are the largest group in the area today; since the fall of the Soviets, many of the ethnic Russian migrants have left.

Verkhoyansk Mountains, Sakha Republic, by Maarten Takens, CC (BY SA).

Sakha speakers first migrated north into Siberia as reindeer hunters, mixing with and eventually assimilating the Evenki, a Tungus-speaking group that lived there nomadically. These nomadic groups were later assimilated or forced further north by more sedentary groups of Sakha who raised horses and practiced more intensive reindeer herding and some agriculture (for more information see Susan Crate’s excellent discussion, ‘The Legacy of the Viliui Reindeer-Herding Complex’ at Cultural Survival). The later migrants forced those practicing the earlier, nomadic reindeer-herding way of life into the most remote and rugged pockets of the region. By the first part of the twentieth century, Crate reports, the traditional reindeer-herding lifestyle was completely replaced in the Viliui watershed, although people elsewhere in Siberia continued to practice nomadic lifestyles, following herds of reindeer.

Today the economy of the Sakha Republic relies heavily on mining: gold, tin, and especially diamonds. Almost a quarter of all diamonds in the world — virtually all of Russia’s production — comes from Sakha. The great Udachnaya pipe, a diamond deposit just outside the Arctic circle, is now the third deepest open pit mine in the world, extending down more than 600 meters.

A new project promises to build a pipeline to take advantage of the massive Chaynda gas field in Sakha, sending the gas eastward to Vladivostok on Russia’s Pacific coast (story in the Siberia Times). The $24 billion Gazprom pipeline, which President Putin’s office says he wants developed ‘within the tightest possible timescale’, would mean that Russia would not have to sell natural gas exclusively through Europe, opening a line for direct delivery into the Pacific.

The Sakha have made the transition to the post-Soviet era remarkably well, with a robust economy and a political system that seems capable of balancing development and environmental safeguards (Crate 2003). But after successfully navigating a political thaw, will the Sakha, and other indigenous peoples of the region, fall victim to a much more literal warming?

The United Nations on indigenous people and climate change

This past month, we celebrated the International Day of the World’s Indigenous People (9 August). From 2005 to 2014, the United Nations called for ‘A Decade for Action and Dignity.’ The focus of this year’s observance is ‘Indigenous peoples building alliances: Honouring treaties, agreements and other constructive arrangements’ (for more information, here’s the UN’s website). According to the UN Development Programme, the day ‘presents an opportunity to honour diverse indigenous cultures and recognize the achievements and valuable contributions of an estimated 370 million indigenous peoples.’

The UN has highlighted the widespread belief that climate change will be especially cruel to indigenous peoples:

Despite having contributed the least to GHG [green house gas], indigenous peoples are the ones most at risk from the consequences of climate change because of their dependence upon and close relationship with the environment and its resources. Although climate change is regionally specific and will be significant for indigenous peoples in many different ways, indigenous peoples in general are expected to be disproportionately affected. Indigenous communities already affected by other stresses (such as, for example, the aftermath of resettlement processes), are considered especially vulnerable. (UN 2009: 95)

The UN’s report, State of the World’s Indigenous People, goes on to cite the following specific ‘changes or even losses in the biodiversity of their environment’ for indigenous groups, changes that will directly threaten aspects of indigenous life:

  • the traditional hunting, fishing and herding practices of indigenous peoples, not only in the Arctic, but also in other parts of the world;

  • the livelihood of pastoralists worldwide;

  • the traditional agricultural activities of indigenous peoples living in mountainous regions;

  • the cultural and ritual practices that are not only related to specific species or specific annual cycles, but also to specific places and spiritual sites, etc.;

  • the health of indigenous communities (vector-borne diseases, hunger, etc.);

  • the revenues from tourism. (ibid.: 96)

For example, climate change has been linked to extreme drought in Kenya where the Maasai, pastoral peoples, find their herds shrinking and good pasture harder and harder to find. For the Kamayurá in the Xingu region of Brazil, less rain and warmer water have decimated fish stocks in their river and made cassava cultivation a hit and miss affair; children are reduced to eating ants on flatbread to stave off hunger.

The UN report touches on a number of different ecosystems where the impacts of climate change will be especially severe, singling out the Arctic:

The Arctic region is predicted to lose whole ecosystems, which will have implications for the use, protection and management of wildlife, fisheries, and forests, affecting the customary uses of culturally and economically important species and resources. Arctic indigenous communities—as well as First Nations communities in Canada—are already experiencing a decline in traditional food sources, such as ringed seal and caribou, which are mainstays of their traditional diet. Some communities are being forced to relocate because the thawing permafrost is damaging the road and building infrastructure. Throughout the region, travel is becoming dangerous and more expensive as a consequence of thinning sea ice, unpredictable freezing and thawing of rivers and lakes, and the delay in opening winter roads (roads that can be used only when the land is frozen). (ibid.: 97)

Island populations are also often pointed out as being on the sharp edge of climate change (Lazrus 2012). The award-winning film, ‘There Once Was an Island,’ focuses on a community in the Pacific at risk from a rise in the sea level. As a website for the film describes:

Takuu, a tiny atoll in Papua New Guinea, contains the last Polynesian culture of its kind.  Facing escalating climate-related impacts, including a terrifying flood, community members Teloo, Endar, and Satty, take us on an intimate journey to the core of their lives and dreams. Will they relocate to war-ravaged Bougainville – becoming environmental refugees – or fight to stay? Two visiting scientists investigate on the island, leading audience and community to a greater understanding of climate change.

Similarly, The Global Mail reported the island nation of Kiribati was likely to become uninhabitable in coming decades, not simply because the islands flood but because patterns of rainfall shift and seawater encroaches on the coastal aquifer, leaving wells saline and undrinkable.

Heather Lazrus (2012: 288) reviews a number of other cases:

Low-lying islands and coastal areas such as the Maldives; the Marshall Islands; the Federated States of Micronesia, Kiribati, and Tuvalu; and many arctic islands such as Shishmaref… and the small islands in Nunavut… may be rendered uninhabitable as sea levels rise and freshwater resources are reduced.

Certainly, the evidence from twentieth century cases in which whole island populations were relocated suggests that the move can be terribly disruptive, the social damage lingering long after suitcases are unpacked.

Adding climate injury to cultural insult

In fact, even before average temperatures climbed or sea levels rose, indigenous groups were already at risk and have been for a while. By nearly every available measure, indigenous peoples’ distinctive lifeways and the globe’s cultural diversity are threatened, not so much by climate, but by their wealthier, more technologically advanced neighbours, who often exercise sovereignty over them.

If we take language diversity as an index of cultural distinctiveness, for example, linguist Michael Krauss (1992: 4) warned in the early 1990s that a whole range of languages were either endangered or ‘moribund,’ no longer being learned by new speakers or young people. These moribund languages, Krauss pointed out, would inevitably die with a speaker who had already been born, an individual who would someday be unable to converse in that language because there would simply be no one else to talk to:

The Eyak language of Alaska now has two aged speakers; Mandan has 6, Osage 5, Abenaki-Penobscot 20, and Iowa has 5 fluent speakers. According to counts in 1977, already 13 years ago, Coeur d’Alene had fewer than 20, Tuscarora fewer than 30, Menomini fewer than 50, Yokuts fewer than 10. On and on this sad litany goes, and by no means only for Native North America. Sirenikski Eskimo has two speakers, Ainu is perhaps extinct. Ubykh, the Northwest Caucasian language with the most consonants, 80-some, is nearly extinct, with perhaps only one remaining speaker. (ibid.)

Two decades ago, Krauss went on to estimate that 90% of the Arctic indigenous languages were ‘moribund’; 80% of the Native North American languages; 90% of Aboriginal Australian languages (ibid.: 5). Although the estimate involved a fair bit of guesswork, and we have seen some interesting evidence of ‘revivals’, Krauss suggested that 50% of all languages on earth were in danger of disappearing.

The prognosis may not be quite as grim today, but the intervening years have confirmed the overall pattern. Just recently, The Times of India reported that the country has lost 20% of its languages since 1961 — 220 languages disappeared in fifty years, with the pace accelerating. The spiffy updated Ethnologue website, based upon a more sophisticated set of categories and more precise accounting, suggests that, of the 7105 languages that they recognise globally, around 19% are ‘moribund’ or in worse shape, while another 15% are shrinking but still being taught to new users (see Ethnologue’s discussion of language status here  and UNESCO’s interactive atlas of endangered languages).

Back in 2010, I argued that the disappearance of languages was a human rights issue, not simply the inevitable by-product of cultural ‘evolution’, economic motivations, and globalisation (‘Language extinction ain’t no big thing?’ – but beware, as my style of blogging has changed a lot since then). Few peoples voluntarily forsake their mother tongues; the disappearance of a language or assimilation of a culture is generally not a path taken by choice, but a lesser-of-evils choice when threatened with chronic violence, abject poverty, and marginalisation.

I’ve also written about the case of ‘uncontacted’ Indians on the border of Brazil and Peru, where Western observers sometimes assume that indigenous peoples assimilate because they seek the benefits of ‘modernization’ when, in fact, they are more commonly the victims of exploitation and violent displacement. Just this June, a group of Mashco-Piro, an isolated indigenous group in Peru that has little contact with other societies, engaged in a tense stand-off at the Las Piedras river, a tributary of the Amazon. Caught on video, they appeared to be trying to contact or barter with local Yine Indians at a ranger station. Not only has this group of the Mashco-Piro fought with loggers in previous decades, but they now find that low-flying planes are disturbing their territory in search of natural gas and oil. (Globo Brasil also released footage taken in 2011 by officials from Brazil’s National Indian Foundation, FUNAI, of the Kawahiva, also called the Rio Pardo Indians, an isolated group from Mato Grosso state.)

In 1992, Krauss pleaded with fellow scholars to do something about the loss of cultural variation, lest linguistics ‘go down in history as the only science that presided obliviously over the disappearance of 90% of the very field to which it is dedicated’ (1992: 10):

Surely, just as the extinction of any animal species diminishes our world, so does the extinction of any language. Surely we linguists know, and the general public can sense, that any language is a supreme achievement of a uniquely human collective genius, as divine and endless a mystery as a living organism. Should we mourn the loss of Eyak or Ubykh any less than the loss of the panda or California condor? (ibid.: 8)

The pace of extinction is so quick that some activists, like anthropologist and attorney David Lempert (2010), argue that our field needs to collaborate on the creation of a cultural ‘Red Book,’ analogous to the Red Book for Endangered Species. Anthropologists may fight over the theoretical consequences of reifying cultures, but the political and legal reality is that even states with laws on the books to protect cultural diversity often have no clear guidelines as to what that entails.

But treating cultures solely as fragile victims of climate change misrepresents how humans will adapt to it. Culture is not merely a fixed tradition, a set of calcified ‘customs’ at risk from warming; culture is also our adaptive tool, the primary means by which our ancestors adapted to such a great range of ecological niches in the first place, and by which we will continue to adapt into the future. And this is not the first time that indigenous groups have confronted climate change.

Culture as threatened, culture as adaptation

One important stream of research in the anthropology of climate change shows very clearly that indigenous cultures are quite resilient in the face of environmental change. Anthropologist Sarah Strauss of the University of Wyoming has cautioned that, if we focus only on cultural extinction as a threat from climate change, we may miss the role of culture in allowing people to accommodate wide variation in the environment:

People are extraordinarily resilient. Our cultures have allowed human groups to colonize the most extreme reaches of planet Earth, and no matter where we have gone, we have contended with both environmental and social change…. For this reason, I do not worry that the need to adapt to new and dramatic environmental changes (those of our own making, as well as natural occurrences like volcanoes) will drive cultures—even small island cultures—to disappear entirely.  (Strauss 2012: n.p. [2])

A number of ethnographic cases show how indigenous groups can adapt to severe climatic shifts. Crate (2008: 571), for example, points out that the Sakha adapted to a major migration northward, transforming a Turkic culture born in moderate climates to suit their new home. Kalaugher (2010) also discusses the Yamal Nenets, another group of Siberian nomads, who adapted to both climate change and industrial encroachment, including the arrival of oil and gas companies that fouled waterways and degraded their land (Survival International has a wonderful photo essay about the Yamal Nenets here). A team led by Bruce Forbes of the University of Lapland, Finland, found:

The Nenet have responded by adjusting their migration routes and timing, avoiding disturbed and degraded areas, and developing new economic practices and social interaction, for example by trading with workers who have moved into gas villages in the area. (article here)

Northeast Science Station, Cherskiy, Sakha Republic, by David Mayer, CC (BY NC SA).

But one of the most amazing stories about the resilience and adaptability of the peoples of the Arctic comes from Wade Davis, anthropologist and National Geographic ‘explorer in residence.’ In his wonderful TED presentation, ‘Dreams from endangered cultures,’ Davis tells a story he heard on a trip to the northern tip of Baffin Island, Canada:

…this man, Olayuk, told me a marvelous story of his grandfather. The Canadian government has not always been kind to the Inuit people, and during the 1950s, to establish our sovereignty, we forced them into settlements. This old man’s grandfather refused to go. The family, fearful for his life, took away all of his weapons, all of his tools. Now, you must understand that the Inuit did not fear the cold; they took advantage of it. The runners of their sleds were originally made of fish wrapped in caribou hide. So, this man’s grandfather was not intimidated by the Arctic night or the blizzard that was blowing. He simply slipped outside, pulled down his sealskin trousers and defecated into his hand. And as the feces began to freeze, he shaped it into the form of a blade. He put a spray of saliva on the edge of the shit knife, and as it finally froze solid, he butchered a dog with it. He skinned the dog and improvised a harness, took the ribcage of the dog and improvised a sled, harnessed up an adjacent dog, and disappeared over the ice floes, shit knife in belt. Talk about getting by with nothing.

… and there’s nothing more than I can say after ‘… and disappeared over the ice floes, shit knife in belt’ that can make this story any better…

Climate change in context

The problem for many indigenous cultures is not climate change alone or in isolation, but the potential speed of that change and how it interacts with other factors, many human-induced: introduced diseases, environmental degradation, deforestation and resource depletion, social problems such as substance abuse and domestic violence, and legal systems imposed upon them, including forced settlement and forms of property that prevent movement. As Strauss explains:

Many researchers… see climate change not as a separate problem, in fact, but rather as an intensifier, which overlays but does not transcend the rest of the challenges we face; it is therefore larger in scale and impact, perhaps, but not entirely separable from the many other environmental and cultural change problems already facing human societies. (Strauss 2012: n.p. [2])

One of the clearest examples of these intensifier effects is the way in which nomadic peoples, generally quite resilient, lose their capacity to adapt when they are prevented from moving. The case of the Siberian Yamal Nenets makes this clear:

“We found that free access to open space has been critical for success, as each new threat has arisen, and that institutional constraints and drivers were as important as the documented ecological changes,” said Forbes. “Our findings point to concrete ways in which the Nenets can continue to coexist as their lands are increasingly fragmented by extensive natural gas production and a rapidly warming climate.” (Kalaugher 2010)

With language loss in India, it’s probably no coincidence that, ‘Most of the lost languages belonged to nomadic communities scattered across the country’ (Times of India).

In previous generations, if climate changed, nomadic groups might have migrated to follow familiar resources or adopt techniques from neighbours who had already adapted to forces novel to them. An excellent recent documentary series on the way that Australian Aboriginal people have adapted to climate change on our continent — the end of an ice age, the extinction of megafauna, wholesale climate change including desertification — is a striking example (the website for the series, First Footprints, is excellent).

Today, migration is treated by UN officials and outsiders as ‘failure to adapt’, as people who move fall under the new rubric of ‘climate refugees’ (Lazrus 2012: 293). Migration, instead of being recognised as an adaptive strategy, is treated as just another part of the diabolical problem. (Here in Australia, where refugees on boats trigger unmatched political hysteria, migration from neighbouring areas would be treated as a national security problem rather than an acceptable coping strategy.)

For the most part, the kind of migration that first brought the Viliui Sakha to northeastern Siberia is no longer possible. As the Yamal Nenets, for example, migrate with their herds of reindeer, they come across the drills, pipelines, and even the Obskaya-Bovanenkovo railway – the northernmost railway line in the world – all part of Gazprom’s ‘Yamal Megaproject.’ Endangered indigenous groups are hemmed in on all sides, surviving only in geographical niches that were not attractive to their dominant neighbours, unsuitable for farming. As Elisabeth Rosenthal wrote in The New York Times:

Throughout history, the traditional final response for indigenous cultures threatened by untenable climate conditions or political strife was to move. But today, moving is often impossible. Land surrounding tribes is now usually occupied by an expanding global population, and once-nomadic groups have often settled down, building homes and schools and even declaring statehood.

The Kamayurá, for example, now eating ants instead of fish in Brazil’s Xingu National Park, are no longer surrounded by the vast expanse of the Amazon and other rivers where they might still fish; the park is now ringed by ranches and farms, some of which grow sugarcane to feed Brazil’s vast ethanol industry or raise cattle to feed the world’s growing appetite for beef.

Now, some of these indigenous groups find themselves squarely in the path of massive new resource extraction projects with nowhere to go, whether that’s in northern Alberta, eastern Peru, Burma, or remnant forests in Indonesia. That is, indigenous peoples have adapted before to severe climate change; but how much latitude (literally) do these groups now have to adapt if we do not allow them to move?

In sum, indigenous people are often not directly threatened by climate change alone; rather, they are pinched between climate change and majority cultures who want Indigenous peoples’ resources while also preventing them from adapting in familiar ways. The irony is that the dynamic driving climate change is attacking them from two sides: the forests that they need, the mountains where they keep their herds, and the soil under the lands where they live are being coveted for the very industrial processes that belch excess carbon into the atmosphere.

It’s hard not to be struck by the bitter tragedy that, in exchange for the resources to which we are addicted, we offer them assimilation. If they get out of the way so that we can drill out the gas or oil under their land or take their forests, we will invite them to join in our addiction (albeit, as much poorer addicts on the fringes of our societies, if truth be told). They have had little say in the process, or in our efforts to mitigate it. We assume that our technologies and ways of life are the only potential cure for the problems created by these very technologies and ways of life.

In 2008, for example, Warwick Baird, Director of the Native Title Unit of the Australian Human Rights and Equal Opportunity Commission, warned that the shift to an economic mode of addressing climate change abatement threatened to further sideline indigenous people:

Things are moving fast in the world of climate change policy and the urgency is only going to get greater. Yet Indigenous peoples, despite their deep engagement with the land and waters, it seems to me, have little engagement with the formulation of climate change policy, little engagement in climate change negotiations, and little engagement in developing and applying mitigation and adaptation strategies. They have not been included. Human rights have not been at the forefront. (transcript of speech here)

The problem then is not that indigenous populations are especially fragile or unable to adapt; in fact, both human prehistory and history demonstrate that these populations are remarkably resilient. Rather, many of these populations have been pushed to the brink, forced to choose between assimilation and extinction by the unceasing demands of the majority cultures they must live alongside. The danger is not that the indigenous will fall off the precipice, but rather that the flailing attempts of the resource-thirsty developed world to avoid inevitable culture change — the necessary move away from unsustainable modes of living — will push much more sustainable lifeways over the edge into the abyss first.

Links

Inuit Knowledge and Climate Change (54:07 documentary).
Isuma TV, network of Inuit and Indigenous media producers.

Inuit Perspectives on Recent Climate Change, Skeptical Science, by Caitlyn Baikie, an Inuit geography student at Memorial University of Newfoundland.

Images

The Lena Pillars by Maarten Takens, CC licensed (BY SA). Original at Flickr: http://www.flickr.com/photos/takens/8512871877/

Verkhoyansk Mountains, Sakha Republic, by Maarten Takens, CC licensed (BY SA). Original at Flickr: http://www.flickr.com/photos/35742910@N05/8582017913/in/photolist-e5n5W6-dVQQHP-dYfGe4-dWzA7s-dW8maK-89CRTd-89zppv-7yp2ht-8o9NBd-89CBRs-dWNM2R-8SLQrQ

Northeast Science Station in late July 2012. Cherskiy, Sakha Republic, Russia, by David Mayer, CC licensed (BY NC SA). Original at Flickr: http://www.flickr.com/photos/56778570@N02/8760624135/in/photolist-em9ukH-dSXQnN

References

Crate, S. A. 2003. Co-option in Siberia: The Case of Diamonds and the Vilyuy Sakha. Polar Geography 26(4): 289–307. doi: 10.1080/789610151 (pdf available here)

Crate, S. 2008. Gone the Bull of Winter? Grappling with the Cultural Implications of and Anthropology’s Role(s) in Global Climate Change. Current Anthropology 49(4): 569-595. doi: 10.1086/529543. Stable URL: http://www.jstor.org/stable/10.1086/529543

Crate, S. 2011. Climate and Culture: Anthropology in the Era of Contemporary Climate Change. Annual Review of Anthropology 40:175–94. doi:10.1146/annurev.anthro.012809.104925 (pdf available here)

Cruikshank, J. 2001. Glaciers and Climate Change: Perspectives from Oral Tradition. Arctic 54(4): 377-393. Stable URL: http://www.jstor.org/stable/40512394

Foden WB, Butchart SHM, Stuart SN, Vié J-C, Akçakaya HR, et al. (2013) Identifying the World’s Most Climate Change Vulnerable Species: A Systematic Trait-Based Assessment of all Birds, Amphibians and Corals. PLoS ONE 8(6): e65427. doi:10.1371/journal.pone.0065427

Kalaugher, L. 2010. Learning from Siberian Nomads’ Resilience. Bristol, UK: Environ. Res. Web. http://environmentalresearchweb.org/cws/article/news/41363

Krauss, M. 1992. The world’s languages in crisis. Language 68(1): 4-10. (pdf available here)

Lazrus, H. 2012. Sea Change: Island Communities and Climate Change. Annu. Rev. Anthropology 41:285–301. doi: 10.1146/annurev-anthro-092611-145730

Lempert, D. 2010. Why We Need a Cultural Red Book for Endangered Cultures, NOW: How Social Scientists and Lawyers/Rights Activists Need to Join Forces. International Journal on Minority and Group Rights 17: 511–550. doi: 10.1163/157181110X531420

Lende, D. H. 2013. The Newtown Massacre and Anthropology’s Public Response. American Anthropologist 115 (3): 496–501. doi:10.1111/aman.12031

Strauss, S. 2012. Are cultures endangered by climate change? Yes, but. . . WIREs Clim Change. doi: 10.1002/wcc.181 (pdf available here)

United Nations. 2009. The State of the World’s Indigenous People. Department of Economic and Social Affairs, ST/ESA/328. United Nations Publications: New York. (available online as a pdf)

Predicting the Future Could Improve Remote-Control of Space Robots (Wired)

BY ADAM MANN

10.15.13

A new system could make space exploration robots faster and more efficient by predicting where they will be in the very near future.

The engineers behind the program hope to overcome a particular snarl affecting our probes out in the solar system: that pesky delay caused by the speed of light. Any commands sent to a robot on a distant body take a certain amount of time to travel and won’t be executed for a while. By building a model of the terrain surrounding a rover and providing an interface that lets operators forecast how the probe will move around within it, engineers can identify potential obstacles and make decisions closer to real time.

“You’re reacting quickly, and the rover is staying active more of the time,” said computer scientist Jeff Norris, who leads mission operation innovations at the Jet Propulsion Laboratory’s Ops Lab.

As an example, the distance between Earth and Mars creates round-trip lags of up to 40 minutes. Nowadays, engineers send a long string of commands once a day to robots like NASA’s Curiosity rover. These get executed but then the rover has to stop and wait until the next instructions are beamed down.

Because space exploration robots are multi-million or even multi-billion-dollar machines, they have to work very carefully. One day’s commands might tell Curiosity to drive up to a rock. It will then check that it has gotten close enough. Then, the following day, it will be instructed to place its arm on that rock. Later on, it might be directed to drill into or probe this rock with its instruments. While safe, this method is very inefficient.

“When we only send commands once a day, we’re not dealing with 10- or 20-minute delays. We’re dealing with a 24-hour round trip,” said Norris.

Norris’ lab wants to improve the speed and productivity of distant probes. Their interface simulates more or less where a robot would be given a particular time delay. This is represented by a small ghostly machine — called the “committed state” — moving just ahead of a rover. The ghosted robot is the software’s best guess of where the probe would end up if operators hit the emergency stop button right then.
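To make the idea concrete, here is a minimal sketch, not JPL's actual software, of how a committed state can be estimated: project the rover forward for one full round-trip delay, since a stop command sent now would only take effect that far in the future. The function name and all numbers are illustrative assumptions.

```python
# Minimal sketch (not JPL's software): estimate a rover's "committed state",
# i.e. where it would end up if operators hit the emergency stop right now,
# given the one-way light-time delay. All names and values are illustrative.
import math

def committed_state(x, y, heading_deg, speed_mps, one_way_delay_s):
    """Project the rover forward for one round trip: the stop command takes
    one_way_delay_s to arrive, and our knowledge of the rover's state is
    already one_way_delay_s old. Assumes constant speed and heading."""
    horizon = 2 * one_way_delay_s          # round-trip delay in seconds
    distance = speed_mps * horizon         # distance still to be covered
    heading = math.radians(heading_deg)
    return x + distance * math.cos(heading), y + distance * math.sin(heading)

# Example: a 10-minute one-way delay, rover creeping at 4 cm/s.
print(committed_state(x=0.0, y=0.0, heading_deg=45.0,
                      speed_mps=0.04, one_way_delay_s=600))
# -> about 48 m further along its heading before a stop could take effect
```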

By looking slightly into the future, the interface allows a rover driver to update decisions and commands at a much faster rate than is currently possible. Say a robot on Mars is commanded to drive forward 100 meters. But halfway there, its sensors notice an interesting rock that scientists want to investigate. Rather than waiting for the rover to finish its drive and then commanding it to go back, this new interface would give operators the ability to write and rewrite their directions on the fly.

The simulation can’t know every detail around a probe and so provides a small predictive envelope as to where the robot might be. Different terrains have different uncertainties.

“If you’re on loose sand, that might be different than hard rock,” said software engineer Alexander Menzies, who works on the interface.

Menzies added that when they tested the interface, users had an “almost game-like experience” trying to optimize commands for a robot. He designed an actual video game where participants were given points for commanding a time-delayed robot through a slalom-like terrain. (Norris lamented that he had the highest score on that game until the last day of testing, when Menzies beat him.)

The team thinks that aspects of this new interface could start to be used in the near future, perhaps even with the current Mars rovers Curiosity and Opportunity. At this point, though, Mars operations are limited by bandwidth. Because there are only a few communication satellites in orbit around the Red Planet, commands can only be sent a few times a day, reducing a lot of the efficiency that would be gained from this new system. But operations on the moon, or a potential asteroid capture and exploration mission such as the one NASA is currently planning, would likely be in more constant communication with Earth, allowing faster and more efficient operations that could take advantage of this new time-delay-reducing system.

Video: OPSLabJPL/Youtube

Tool Accurately Predicts Whether A Kickstarter Project Will Bomb (Popular Science)

At about 76 percent accuracy, a new prediction model is the best yet. “Your chances of success are at 8 percent. Commence panic.”

By Colin Lecher

Posted 10.16.2013 at 2:00 pm

 

Ouya, A Popular Kickstarter Project 

Well, here’s something either very discouraging or very exciting for crowdfunding hopefuls: a Swiss team can predict, with about 76 percent accuracy and within only four hours of launch, whether a Kickstarter project will succeed.

The team, from the École Polytechnique Fédérale de Lausanne, laid out the system in a paper presented at the Conference on Online Social Networks. By mining data on more than 16,000 Kickstarter campaigns and more than 1.3 million users, they created a prediction model based on a project’s popularity on Twitter, the rate at which it is raising money, how many first-time backers it has, and the previous projects its supporters have backed.
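As a rough illustration of how a dynamic predictor of this kind can work, the sketch below trains a logistic-regression classifier on four features of the sort described above. It is not the EPFL team's model; the feature names, synthetic data, and weights are assumptions made purely for the example.

```python
# Illustrative sketch only -- not the EPFL team's actual model.
# Four features measured a few hours after launch, mapped to a success probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data (placeholders for real Kickstarter/Twitter mining):
# columns = [tweets_per_hour, pledges_per_hour, first_time_backer_share,
#            mean_prior_projects_backed]
X = rng.random((1000, 4))
y = (0.5 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 3]
     + rng.normal(0, 0.1, 1000) > 0.8).astype(int)   # synthetic outcome labels

model = LogisticRegression().fit(X, y)

# A hypothetical campaign, four hours in:
campaign = np.array([[0.9, 0.7, 0.4, 0.6]])
print(f"Estimated chance of success: {model.predict_proba(campaign)[0, 1]:.0%}")
```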

A previous, similar model built by Americans could predict a Kickstarter project’s success with 68 percent accuracy–impressive, but the Swiss project has another advantage: it’s dynamic. While the American model could only make a prediction before the project launched, the Swiss project monitors projects in real time. They’ve even built a tool, called Sidekick, that monitors projects and displays their chances of success.

Other sites, like Kicktraq, offer similar services, but the predictions aren’t as accurate as the Swiss team claims theirs are. If you peruse Sidekick, you can see how confident the algorithm is in its pass/fail predictions: almost all of the projects are either above 90 percent or below 10 percent. Sort of scary, probably, if you’re launching a project. Although there’s always a chance you could pull yourself out of the hole, it’s like a genie asking if you want to know how you die: Do you really want that information?

[Guardian]

Carbon Cycle Models Underestimate Indirect Role of Animals (Science Daily)

Oct. 16, 2013 — Animal populations can have a far more significant impact on carbon storage and exchange in regional ecosystems than is typically recognized by global carbon models, according to a new paper authored by researchers at the Yale School of Forestry & Environmental Studies (F&ES). 

Wildebeest herd, Serengeti. Scientists found that a decline in wildebeest populations in the Serengeti-Mara grassland-savanna system decades ago allowed organic matter to accumulate, which eventually caused about 80 percent of the ecosystem to burn annually, releasing carbon from the plants and the soil, before populations recovered in recent years. (Credit: © photocreo / Fotolia)

In fact, in some regions the magnitude of carbon uptake or release due to the effects of specific animal species or groups of animals — such as the pine beetles devouring forests in western North America — can rival the impact of fossil fuel emissions for the same region, according to the paper published in the journal Ecosystems.

While models typically take into account how plants and microbes affect the carbon cycle, they often underestimate how much animals can indirectly alter the absorption, release, or transport of carbon within an ecosystem, says Oswald Schmitz, the Oastler Professor of Population and Community Ecology at F&ES and lead author of the paper. Historically, the role of animals has been largely underplayed because animal species are not distributed globally and because the total biomass of animals is vastly lower than that of the plants they rely upon, meaning animals contribute little carbon directly through respiration.

“What these sorts of analyses have not paid attention to is what we call the indirect multiplier effects,” Schmitz says. “And these indirect effects can be quite huge — and disproportionate to the biomass of the species that are instigating the change.”

In the paper, “Animating the Carbon Cycle,” a team of 15 authors from 12 universities, research organizations and government agencies cites numerous cases where animals have triggered profound impacts on the carbon cycle at local and regional levels.

In one case, an unprecedented loss of trees triggered by the pine beetle outbreak in western North America has decreased the net carbon balance on a scale comparable to British Columbia’s current fossil fuel emissions.

And in East Africa, scientists found that a decline in wildebeest populations in the Serengeti-Mara grassland-savanna system decades ago allowed organic matter to accumulate, which eventually caused about 80 percent of the ecosystem to burn annually, releasing carbon from the plants and the soil, before populations recovered in recent years.

“These are examples where the animals’ largest effects are not direct ones,” Schmitz says. “But because of their presence they mitigate or mediate ecosystem processes that then can have these ramifying effects.”

“We hope this article will inspire scientists and managers to include animals when thinking of local and regional carbon budgets,” said Peter Raymond, a professor of ecosystem ecology at the Yale School of Forestry & Environmental Studies.

According to the authors, a more proper assessment of such phenomena could provide insights into management schemes that could help mitigate the threat of climate change.

For example, in the Arctic, where about 500 gigatons of carbon is stored in permafrost, large grazing mammals like caribou and muskoxen can help maintain the grasslands that have a high albedo and thus reflect more solar energy. In addition, by trampling the ground these herds can actually help reduce the rate of permafrost thaw, researchers say.

“It’s almost an argument for rewilding places to make sure that the natural balance of predators and prey are there,” Schmitz says. “We’re not saying that managing animals will offset these carbon emissions. What we’re trying to say is the numbers are of a scale where it is worthwhile to start thinking about how animals could be managed to accomplish that.”

Journal Reference:

  1. Oswald J. Schmitz, Peter A. Raymond, James A. Estes, Werner A. Kurz, Gordon W. Holtgrieve, Mark E. Ritchie, Daniel E. Schindler, Amanda C. Spivak, Rod W. Wilson, Mark A. Bradford, Villy Christensen, Linda Deegan, Victor Smetacek, Michael J. Vanni, Christopher C. Wilmers. Animating the Carbon Cycle. Ecosystems, 2013; DOI: 10.1007/s10021-013-9715-7

The Reasons Behind Crime (Science Daily)

Oct. 10, 2013 — More punishment does not necessarily lead to less crime, say researchers at ETH Zurich who have been studying the origins of crime with a computer model. In order to fight crime, more attention should be paid to the social and economic backgrounds that encourage crime.

Whether a person turns criminal and commits a robbery depends greatly on the socio-economic circumstances in which he lives. (Credit: © koszivu / Fotolia)

People have been stealing, betraying others and committing murder for ages. In fact, humans have never succeeded in eradicating crime, although — according to the rational choice theory in economics — this should be possible in principle. The theory states that humans turn criminal if it is worthwhile. Stealing or evading taxes, for instance, pays off if the prospects of unlawful gains outweigh the expected punishment. Therefore, if a state sets the penalties high enough and ensures that lawbreakers are brought to justice, it should be possible to eliminate crime completely.

This theory is largely oversimplified, says Dirk Helbing, a professor of sociology. The USA, for example, often has far more drastic penalties than European countries. But despite the death penalty in some American states, the homicide rate in the USA is five times higher than in Western Europe. Furthermore, ten times more people sit in American prisons than in many European countries. More repression, however, can sometimes even lead to more crime, says Helbing. Ever since the USA declared the “war on terror” around the globe, the number of terrorist attacks worldwide has increased, not fallen. “The classic approach, where criminals merely need to be pursued and punished more strictly to curb crime, often does not work.” Nonetheless, this approach dominates the public discussion.

More realistic model

In order to better understand the origins of crime, Helbing and his colleagues have developed a new, so-called agent-based model that takes the network of social interactions into account and is more realistic than previous models. Not only does it include criminals and law enforcers, like many previous models, but also honest citizens as a third group. Parameters such as the size of penalties and the cost of prosecution can be varied in the model. Moreover, it also considers spatial dependencies: the representatives of the three groups do not interact with one another randomly, but only if they encounter each other in space and time. In particular, individual agents imitate the behaviour of agents from other groups, if this is promising.
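For readers unfamiliar with the term, the following is a minimal sketch of what "agent-based" means here: honest citizens, criminals and enforcers sit on a grid and copy better-performing neighbours. It is not the ETH Zurich model itself (that is described in the Perc, Donnay and Helbing paper cited below); the payoff scheme and all parameter values are illustrative assumptions.

```python
# Toy agent-based sketch of imitation dynamics among three strategies on a grid.
# NOT the ETH Zurich model; payoffs and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
HONEST, CRIMINAL, ENFORCER = 0, 1, 2
N = 50                                    # grid size
FINE, GAIN, COST = 1.0, 0.6, 0.3          # punishment fine, crime gain, prosecution cost
NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

grid = rng.integers(0, 3, size=(N, N))    # random initial strategies

def payoff(i, j):
    """Payoff of agent (i, j) against its four nearest neighbours."""
    s, total = grid[i, j], 0.0
    for di, dj in NEIGHBOURS:
        n = grid[(i + di) % N, (j + dj) % N]
        if s == CRIMINAL:
            total += -FINE if n == ENFORCER else GAIN       # caught vs. successful crime
        elif s == ENFORCER:
            total += FINE - COST if n == CRIMINAL else -COST  # fine collected minus cost
        # honest agents neither gain nor lose in this toy scheme
    return total

for _ in range(20000):                    # imitation dynamics
    i, j = rng.integers(0, N, size=2)
    di, dj = NEIGHBOURS[rng.integers(0, 4)]
    ni, nj = (i + di) % N, (j + dj) % N
    if payoff(ni, nj) > payoff(i, j):     # copy a more successful neighbour
        grid[i, j] = grid[ni, nj]

print("share of criminals:", np.mean(grid == CRIMINAL))
```

Varying FINE and COST in a sketch like this is the kind of exercise that produces the boom-and-bust "cycles of crime" described in the next section.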

Cycles of crime

Using the model, the scientists were able to demonstrate that tougher punishments do not necessarily lead to less crime and, if so, then at least not to the extent the punishment effort is increased. The researchers were also able to simulate how crime can suddenly break out and calm down again. Like the pig cycle we know from the economic sciences or the predator-prey cycles from ecology, crime is cyclical as well. This explains observations made, for instance, in the USA: according to the FBI’s Uniform Crime Reporting Program, cyclical changes in the frequency of criminal offences can be found in several American states. “If a state increases the investments in its punitive system to an extent that is no longer cost-effective, politicians will cut the law enforcement budget,” says Helbing. “As a result, there is more room for crime to spread again.”

“Many crimes have a socio-economic background”

But would there be a different way of combatting crime, if not with repression? The focus should be on the socio-economic context, says Helbing. As we know from the milieu theory in sociology, the environment plays a pivotal role in the behaviour of individuals. The majority of criminal acts have a social background, claims Helbing. For example, if an individual feels that all the friends and neighbours are cheating the state, it will inevitably wonder whether it should be the last honest person to fill in the tax declaration correctly.

“If we want to reduce the crime rate, we have to keep an eye on the socio-economic circumstances under which people live,” says Helbing. We must not confuse this with soft justice. However, a state’s response to crime has to be differentiated: besides the police and court, economic and social institutions are relevant as well — and, in fact, every individual when it comes to the integration of others. “Improving social conditions and integrating people socially can probably combat crime much more effectively than building new prisons.”

Journal Reference:

  1. Matjaž Perc, Karsten Donnay, Dirk Helbing. Understanding Recurrent Crime as System-Immanent Collective Behavior. PLoS ONE, 2013; 8 (10): e76063. DOI: 10.1371/journal.pone.0076063

Transgenic mosquitoes in the skies of the sertão (Agência Pública)

Health

10/10/2013 – 10:36 am

by the Agência Pública newsroom

The traps are devices installed in the homes of some residents in the experiment area. The ovitraps, as they are called, serve as breeding sites for the females. Photo: Coletivo Nigéria

With the promise of reducing dengue, a biofactory of transgenic insects has already released 18 million Aedes aegypti mosquitoes in the interior of Bahia. Read the story and watch the video.

Early on a Thursday evening in September, the bus station in Juazeiro, Bahia, was a picture of desolation. In the dimly lit hall there were a stall specialising in beef broth, a snack bar with a long counter lined with savoury pastries, biscuits and potato chips, and a single ticket window – with disturbing clouds of mosquitoes over the heads of those waiting to buy tickets to small towns or north-eastern capitals.

Settled on the banks of the São Francisco river, on the border between Pernambuco and Bahia, Juazeiro was once a city crossed by streams, tributaries of one of the country’s largest rivers. Today it has more than 200,000 inhabitants, forms the largest urban agglomeration of the north-eastern semi-arid region together with Petrolina – adding up to half a million people between them – and is infested with muriçocas (or pernilongos, if you prefer: common mosquitoes). The watercourses that once drained small springs have become open-air sewers, vast breeding grounds for the insect, traditionally fought with insecticide and electric rackets, or with closed windows and air conditioning for the better-off.

But the residents of Juazeiro are not just swatting muriçocas this early spring. The city is the testing ground for a new scientific technique that uses transgenic Aedes aegypti to fight dengue, a disease transmitted by the species. Developed by the British biotechnology company Oxitec, the method basically consists of inserting a lethal gene into male mosquitoes which, released into the environment in large numbers, mate with wild females and produce offspring programmed to die. If the experiment works, the premature death of the larvae progressively reduces the population of mosquitoes of this species.
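A rough sense of why repeated releases can suppress the wild population comes from a toy calculation: if transgenic males greatly outnumber wild males, most matings produce non-viable offspring. The sketch below is an illustrative back-of-the-envelope simulation only, not Oxitec's or Moscamed's model; the growth rate and release ratio are assumptions.

```python
# Toy simulation of population suppression by releasing transgenic males.
# Illustrative only (not Oxitec's/Moscamed's model); parameters are assumptions.

def simulate(generations=20, wild_females=10_000.0, growth=1.5, release_ratio=10.0):
    """Each generation, a female's offspring survive only if she mated with a
    wild male; with transgenic males released at `release_ratio` times the wild
    male count, that chance is 1 / (1 + release_ratio)."""
    for g in range(generations):
        viable_fraction = 1.0 / (1.0 + release_ratio)
        wild_females = wild_females * growth * viable_fraction
        print(f"generation {g + 1:2d}: ~{wild_females:10.1f} wild females")

simulate()
# With growth 1.5 and a 10:1 release ratio, each generation shrinks to about
# 1.5/11, roughly 14% of the previous one -- but only while releases continue.
```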

The technique is the newest weapon against a disease that not only resists but advances on the methods used to control it so far. The World Health Organization estimates there may be 50 to 100 million cases of dengue per year worldwide. In Brazil the disease is endemic, with annual epidemics in several cities, mainly the large capitals. In 2012, between 1 January and 16 February alone, more than 70,000 cases were recorded in the country. In 2013, over the same period, the number nearly tripled, to 204,000 cases. So far this year, 400 people have died of dengue in Brazil.

In Juazeiro, the British-patented method is being tested by the social organisation Moscamed, which has been breeding and releasing the transgenic mosquitoes into the open air since 2011. At the biofactory set up in the municipality, which can produce up to 4 million mosquitoes per week, the entire production chain of the transgenic insect is carried out – except for the genetic modification itself, performed at Oxitec’s laboratories in Oxford. Transgenic larvae were imported by Moscamed and are now bred in the institution’s laboratories.

The tests have been funded from the start by the Bahia State Health Secretariat – with institutional support from Juazeiro’s own health secretariat – and last July they were extended to the municipality of Jacobina, at the northern tip of the Chapada Diamantina. In that highland town of roughly 80,000 inhabitants, Moscamed is testing the technique’s capacity to “suppress” (the word scientists use for exterminating the entire mosquito population) Aedes aegypti across a whole city, since in Juazeiro the strategy proved effective but has so far been limited to two neighbourhoods.

“The 2011 and 2012 results showed that [the technique] really worked well. And, at the invitation of and with funding from the Bahia state government, we decided to move forward and go to Jacobina. Now no longer as a pilot, but running a test to really eliminate the [mosquito] population,” says Aldo Malavasi, a retired professor from the Genetics Department of the University of São Paulo’s (USP) Institute of Biosciences and current president of Moscamed. USP is also part of the project.

Malavasi has worked in the region since 2006, when Moscamed was created to fight an agricultural pest, the fruit fly, with a similar approach – the Sterile Insect Technique. The logic is the same: produce sterile insects to mate with wild females and thus gradually reduce the population. The difference lies in how the insects are sterilised: radiation instead of genetic modification. SIT has been widely used since the 1970s, mainly against species considered threats to agriculture. The problem is that, until now, the technology was not well suited to mosquitoes such as Aedes aegypti, which did not withstand the radiation well.

The communication plan

The first field releases of the transgenic Aedes took place in the Cayman Islands, between late 2009 and 2010. The British territory in the Caribbean, made up of three islands south of Cuba, proved to be not only a tax haven (there are more companies registered on the islands than its 50,000 inhabitants) but also fertile ground for releasing the transgenic mosquitoes, owing to the absence of biosafety laws. The Cayman Islands are not signatories to the Cartagena Protocol, the main international document on the subject, nor are they covered by the Aarhus Convention – approved by the European Union and to which the United Kingdom is a party – which deals with access to information, participation and justice in environmental decision-making.

Instead of the prior publication of, and public consultation on, the risks involved in the experiment, as the international agreements above would require, the roughly 3 million mosquitoes released into the Cayman Islands’ tropical climate went out into the world without any process of debate or public consultation. Authorisation was granted solely by the islands’ Department of Agriculture. Oxitec’s local partner in the tests, the Mosquito Research & Control Unit, posted a promotional video on the subject only in October 2010, and even then without mentioning the transgenic nature of the mosquitoes. The video was released exactly one month before Oxitec itself presented the results of the experiments at the annual meeting of the American Society of Tropical Medicine and Hygiene, in the United States.

The scientific community was taken aback by the news that the world’s first releases of genetically modified insects had already been carried out without even the specialists in the field knowing. The surprise extended to the result: according to Oxitec’s data, the experiments had achieved an 80% reduction in the Aedes aegypti population in the Cayman Islands. For the company, the figure confirmed that the technique created in the laboratory could indeed be effective. Since then, new field tests have been arranged in other countries – notably underdeveloped or developing ones, with tropical climates and historic problems with dengue.

After postponing similar tests in 2006 following protests, Malaysia became the second country to release the transgenic mosquitoes, between December 2010 and January 2011. Six thousand mosquitoes were released in an uninhabited area of the country. That number, much smaller than in the Cayman Islands, is almost insignificant compared with the quantity of mosquitoes released in Juazeiro, Bahia, from February 2011 onwards. The city, together with Jacobina more recently, has since become the largest test site of its kind in the world, with more than 18 million mosquitoes already released, according to Moscamed’s figures.

“Oxitec made deep mistakes, both in Malaysia and in the Cayman Islands. Unlike what they did, we carried out extensive work in what we call public communication, with full transparency, with discussion with the community, with visits to every house. There was extraordinary work done here,” Aldo Malavasi says by way of comparison.

In a telephone interview, he was keen to mark Moscamed’s independence from Oxitec and stressed the different nature of the two institutions. Created in 2006, Moscamed is a social organisation – non-profit, therefore – that joined the transgenic Aedes aegypti tests in order to verify whether or not the technique is effective against dengue. According to Malavasi, no funding from Oxitec was accepted, precisely to guarantee impartiality in evaluating the technique. “We don’t want their money, because our goal is to help the Brazilian government,” he sums up.

In the name of transparency, the programme was titled “Projeto Aedes Transgênico” (PAT, the Transgenic Aedes Project), putting the thorny word right in the name. Another semantic decision was not to use the term “sterile”, common in the British company’s discourse but technically incorrect, since the mosquitoes do produce offspring – offspring programmed to die at the larval stage. A jingle put the complex system into popular language, set to the rhythm of forró pé-de-serra. And the “Papa Mosquito” carnival bloco took to the streets of Juazeiro during the 2011 Carnival.

At the institutional level, besides funding from the state Health Secretariat, the programme also gained the support of Juazeiro’s municipal Health Secretariat. “At first there was resistance, because people didn’t want to keep traps in their homes either, but then, with time, they understood the project and we had good popular acceptance,” says public health nurse Mário Machado, the secretariat’s director of Health Promotion and Surveillance.

The traps Machado mentions are simple devices installed in the homes of some residents in the experiment area. The ovitraps, as they are called, serve as breeding sites for the females. This makes it possible to collect the eggs and check whether they were fertilised by transgenic or wild males. That check is possible because the genetically modified mosquitoes carry, besides the lethal gene, a fragment of jellyfish DNA that gives them a fluorescent marker visible under the microscope.

In this way it was possible to verify that the reduction in the wild Aedes aegypti population reached, according to Moscamed, 96% in Mandacaru – an agricultural settlement a few kilometres from Juazeiro’s commercial centre which, thanks to its geographic isolation and popular acceptance, became the ideal site for the releases. Despite the figure, Moscamed continues releasing there. Because of the mosquito’s short life (the female lives roughly 35 days), the releases must continue in order to keep the wild population low. Currently, once a week a car leaves the organisation’s headquarters with 50,000 mosquitoes, distributed by the thousand in plastic pots to be opened in the streets of Mandacaru.

“Today the greatest acceptance is in Mandacaru. The receptiveness was such that Moscamed doesn’t want to leave any more,” Mário Machado stresses.

The same did not happen in the Itaberaba neighbourhood, the first to receive the mosquitoes at the beginning of 2011. Not even its historically high rate of Aedes aegypti infestation made this outlying Juazeiro neighbourhood, next door to Moscamed’s headquarters, accept the experiment willingly. Mário Machado estimates at “around 20%” the share of the population that opposed the tests and put an end to the releases.

“However much we try to inform people, going from house to house, from bar to bar, some people just don’t believe it: ‘No, you’re lying to us, this mosquito is biting us,’” he says with resignation.

After a year without releases, the mosquito seems not to have left many memories there. On a walk through the neighbourhood, we could hardly find anyone who knew what we were talking about. Nevertheless, Itaberaba’s name travelled the world when Oxitec announced that the first field experiment in Brazil had achieved an 80% reduction in the wild mosquito population.

Moscamed’s field supervisor, biologist Luiza Garziera, was one of those who went from house to house explaining the process, at times bending the scientific language to make herself understood. “I would say that we would be releasing these mosquitoes, that we released only the male, which doesn’t bite. Only the female bites. And that when these males ‘date’ – because sometimes we can’t talk about ‘copulation’, people won’t understand. So when these males date the female, their little children end up dying.”

This is one of the most important details of the novel technique. By releasing only males, at a rate of 10 transgenic mosquitoes to every 1 wild one, Moscamed plunges people into a cloud of mosquitoes but guarantees that the insects will not bite them. That is because only the female feeds on human blood, the liquid that supplies the proteins she needs to produce eggs.

The technology fits together in a convincing, even didactic way – except perhaps for the “genetic modification”, which requires higher flights of the imagination. Even so, ignorance about the subject still prevails among a considerable share of the residents interviewed for this report. At best, people know it is about exterminating the dengue mosquito, which is naturally a good thing. Beyond that, they have merely heard of it, or venture a guess involving the muriçoca – that one, indeed, widely hated.

Assessing the risks

Despite Moscamed’s communication campaign, the British NGO GeneWatch points to a series of problems in the Brazilian process. Chief among them is the fact that the risk assessment report on the experiment was not made available to the public before the releases began. On the contrary, at the request of those responsible for the Transgenic Aedes Project, the file submitted to the National Technical Biosafety Commission (CTNBio, the body in charge of authorising such experiments or not) was classified as confidential.

“We think Oxitec must have the fully informed consent of the local population, which means people need to agree to the experiment. But for that they also need to be informed about the risks, just as you would be if you were being used to test a new cancer drug or any other kind of treatment,” said Helen Wallace, the NGO’s executive director, in a Skype interview.

A specialist in the risks and ethics involved in this type of experiment, Wallace published the report Genetically Modified Mosquitoes: Ongoing Concerns this year, which sets out in 13 chapters what it considers potential risks not taken into account before the release of the transgenic mosquitoes was authorised. The document also points to failings in Oxitec’s conduct of the experiments.

For example, two years after the Cayman Islands releases, only the results of one small test had appeared in a scientific publication. In early 2011 the company submitted the results of the largest experiment on the islands to the journal Science, but the paper was not published. Only in September of last year did the text appear in another journal, Nature Biotechnology, published as “correspondence” – meaning it did not go through review by other scientists, only a check by the publication’s own editor.

For Helen Wallace, the absence of critical peer review puts Oxitec’s experiment under suspicion. Even so, analysis of the paper, according to the report, suggests the company had to increase the release ratio of transgenic mosquitoes and concentrate them in a small area to achieve the expected results. The same would have happened in Brazil, in Itaberaba. The results of the Brazilian test have also not yet been published by Moscamed. The project manager, Danilo Carvalho, said one of the papers has already been submitted to a journal and another is in the final stages of writing.

Another risk the document points to lies in the common use of the antibiotic tetracycline. The drug reverses the lethal gene and guarantees, in the laboratory, the survival of the genetically modified mosquito, which otherwise would not reach adulthood. This is the vital difference between the fate of the mosquitoes bred in the lab and that of their offspring, produced in the environment from wild females – without the antibiotic, they are condemned to premature death.

Tetracycline is commonly used in the livestock and aquaculture industries, which dump large quantities of the substance into the environment through their effluents. The antibiotic is also widely used in human and veterinary medicine. In other words, genetically modified eggs and larvae could come into contact with the antibiotic even in uncontrolled environments and thus survive. Over time, the transgenic mosquitoes’ resistance to the lethal gene could neutralise its effect and, in the end, we would have a new genetically modified species adapted to the environment.

The hypothesis is treated with scepticism by Oxitec, which plays down the chance of it happening in the real world. However, a confidential document made public shows that the hypothesis turned out, by accident, to be real in tests by one of the company’s research partners. Puzzled by a 15% survival rate among larvae without tetracycline – far higher than the usual 3% found in the company’s experiments – Oxitec’s scientists discovered that the cat food their partners were feeding the mosquitoes contained traces of the antibiotic, which is routinely used to treat chickens destined for animal feed.

The GeneWatch report draws attention to the common presence of the antibiotic in human and animal waste, as well as in household sewage systems such as septic tanks. This would constitute a potential risk, since several studies have found that Aedes aegypti can breed in contaminated water – although that is still not the most common situation, nor does it yet happen in Juazeiro, according to the municipal Health Secretariat.

In addition, there are concerns about the rate at which transgenic females are released. The pupae (the last stage before adult life) are separated manually, with the help of a device that sorts the sexes by size (the female is slightly larger). Up to 3% of females can slip through this process, gaining their freedom and increasing the risks involved. Finally, the experiments have not yet verified whether the reduction in the mosquito population translates directly into reduced dengue transmission.

All the criticisms are rebutted by Oxitec and Moscamed, which say they maintain rigorous quality control – such as constant monitoring of the female release rate and of larval survival without tetracycline. That way, any sign of mutation in the mosquito would be detected in time to suspend the programme. Within roughly a month, all the released insects would be dead. Nor, according to the institutions responsible, do the mosquitoes pass on the modified genes even if some stray female bites a human being.

Transgenic mosquito for sale

Last July, after the success of the field tests in Juazeiro, Oxitec filed its application for a commercial licence with the National Technical Biosafety Commission (CTNBio). Since late 2012 the British company has had a CNPJ (corporate tax registration) in the country and keeps an employee in São Paulo. More recently, with the promising results of the Juazeiro experiments, it rented a warehouse in Campinas and is building what will be its Brazilian headquarters. The country is now its most likely and imminent market, which is why the company’s global director of business development, Glen Slade, now lives on an air shuttle between Oxford and São Paulo.

“Oxitec has been working since 2009 in partnership with USP and Moscamed, who are good partners and gave us the opportunity to start projects in Brazil. But now we have just sent our commercial dossier to CTNBio and hope to obtain a registration in the future, so we need to expand our team in the country. We are clearly investing in Brazil. It is a very important country,” Slade said in a Skype interview from Oxitec’s headquarters in Oxford, England.

The biotechnology firm is a spin-out of the British university, which is to say that Oxitec emerged from the laboratories of one of the world’s most prestigious universities. Founded in 2002, it has since been raising private investment and funds from non-profit foundations, such as the Bill & Melinda Gates Foundation, to bankroll continued research. According to Slade, more than R$50 million has been spent over the past decade refining and testing the technology.

The executive expects the bureaucratic process for granting the commercial licence to be completed as early as next year, when Oxitec’s Brazilian headquarters, including a new biofactory, will be ready. Already in contact with several municipalities in the country, he prefers not to name names. Nor the price of the service, which will probably be offered in annual mosquito-population-control packages, the quote depending on the number of inhabitants in the city.

“At this point it is hard to give a price. Like all new products, the production cost is higher when you start out than we would like. I think the price will be very reasonable in relation to the benefits and to the other approaches for controlling the mosquito, but it is very hard to say today. Besides, the price will change with the scale of the project. Small projects are not very efficient, but if we have the opportunity to control mosquitoes across the whole of Rio de Janeiro, we can work at large scale and the price will come down,” he suggests.

The company also intends to install new biofactories in cities that take on large projects, which will reduce the cost in the long run, since the releases must be maintained indefinitely to prevent the return of the wild mosquitoes. The reproduction speed of Aedes aegypti is a concern: if the project stops, the species can rebuild its population within a few weeks.

“The company’s plan is to get repeat payments for releasing these mosquitoes every year. If their technology works and really reduces the incidence of dengue, you will not be able to suspend these releases and will be locked into this system. One of the biggest long-term worries is that if things start to go wrong, or even become less effective, you can really end up with a worse situation over many years,” Helen Wallace warns.

The risk would range from a reduction in people’s immunity to the disease to the dismantling of other public policies against dengue, such as the teams of community health agents. Although both Moscamed and Juazeiro’s own Health Secretariat stress the complementary nature of the technique, which would not do away with other control methods, conflicts over the allocation of resources for the area are plausible. Today, according to Mário Machado of the Health Secretariat, Juazeiro spends on average R$300,000 a month on the control of endemic diseases, of which dengue is the main one.

The secretariat is negotiating with Moscamed to expand the experiment to the whole municipality, or even to the entire metropolitan region formed by Juazeiro and Petrolina – a test that would cover half a million people – in order to assess effectiveness among large populations. In any case, and despite the progress of the experiments, neither the Brazilian social organisation nor the British company has presented price estimates for a possible commercial release.

“Just yesterday we were doing the first studies, to analyse what their price is and what ours is. Because they know how much their programme costs, which is not cheap, but they don’t disclose it,” said Mário Machado.

In a report in the British newspaper The Observer last July, Oxitec estimated the cost of the technique at “less than” £6 per person per year. In a simple calculation, just multiplying that number by the current exchange rate of the British currency against the real and ignoring the countless other variables in the equation, the project in a city of 150,000 inhabitants would cost roughly R$3.2 million a year.
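For concreteness, that back-of-the-envelope arithmetic can be reproduced as follows; the exchange rate used is an assumption, roughly the level implied by the figure quoted.

```python
# Reproducing the article's rough cost estimate; the exchange rate is an
# assumption (approximately the late-2013 GBP/BRL level implied by the text).
cost_per_person_gbp = 6.0      # Oxitec's "less than £6" per person per year
inhabitants = 150_000
gbp_to_brl = 3.55              # assumed exchange rate

annual_cost_brl = cost_per_person_gbp * inhabitants * gbp_to_brl
print(f"~R$ {annual_cost_brl / 1e6:.1f} million per year")   # ~R$ 3.2 million
```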

Se imaginarmos a quantidade de municípios de pequeno e médio porte brasileiros em que a dengue é endêmica, chega-se a pujança do mercado que se abre – mesmo desconsiderando por hora os grandes centros urbanos do país, que extrapolariam a capacidade atual da técnica. Contudo, este é apenas uma fatia do negócio. A Oxitec também possui uma série de outros insetos transgênicos, estes destinados ao controle de pragas agrícolas e que devem encontrar campo aberto no Brasil, um dos gigantes do agronegócio no mundo.

Aguardando autorização da CTNBio, a Moscamed já se preparara para testar a mosca-das-frutas transgênica, que segue a mesma lógica do Aedes aegypti. Além desta, a Oxitec tem outras 4 espécies geneticamente modificadas que poderão um dia serem testadas no Brasil, a começar por Juazeiro e o Vale do São Francisco. A região é uma das maiores produtoras de frutas frescas para exportação do país. 90% de toda uva e manga exportadas no Brasil saem daqui. Uma produção que requer o combate incessante às pragas. Nas principais avenidas de Juazeiro e Petrolina, as lojas de produtos agrícolas e agrotóxicos se sucedem, variando em seus totens as logos das multinacionais do ramo.

“Não temos planos concretos [além da mosca-das-frutas], mas, claro, gostaríamos muito de ter a oportunidade de fazer ensaios com esses produtos também. O Brasil tem uma indústria agrícola muito grande. Mas nesse momento nossa prioridade número 1 é o mosquito da dengue. Então uma vez que tivermos este projeto com recursos bastante, vamos tentar acrescentar projetos na agricultura.”, comentou Slade.

Ele e vários de seus colegas do primeiro escalão da empresa já trabalharam numa das gigantes do agronegócio, a Syngenta. O fato, segundo Helen Wallace, é um dos que revelam a condição do Aedes aegypti transgênico de pioneiro de todo um novo mercado de insetos geneticamente modificados: “Nós achamos que a Syngenta está principalmente interessada nas pragas agrícolas. Um dos planos que conhecemos é a proposta de usar pragas agrícolas geneticamente modificadas junto com sementes transgênicas para assim aumentar a resistência destas culturas às pragas”.

“Não tem nenhum relacionamento entre Oxitec e Syngenta dessa forma. Talvez tenhamos possibilidade no futuro de trabalharmos juntos. Eu pessoalmente tenho o interesse de buscar projetos que possamos fazer com Syngenta, Basf ou outras empresas grandes da agricultura”, esclarece Glen Slade.

Em 2011, a indústria de agrotóxicos faturou R$ 14,1 bilhões no Brasil. Maior mercado do tipo no mundo, o país pode nos próximos anos inaugurar um novo estágio tecnológico no combate às pestes, assim como na saúde coletiva, com o Aedes aegypti transgênico, que parece ter um futuro comercial promissor. Todavia, resta saber como a técnica conviverá com as vacinas contra o vírus da dengue, que estão em fase final de testes – uma desenvolvida por um laboratório francês, outra pelo Instituto Butantan, de São Paulo. As vacinas devem chegar ao público em 2015; o mosquito transgênico, talvez já no próximo ano.

Dentre as linhagens de mosquitos transgênicos, pode surgir também uma versão nacional. Como confirmou a professora Margareth de Lara Capurro-Guimarães, do Departamento de Parasitologia da USP e coordenadora do Programa Aedes Transgênico, já está sob estudo na universidade paulista a muriçoca transgênica. Outra possível solução tecnológica para um problema de saúde pública em Juazeiro da Bahia – uma cidade na qual, segundo levantamento do Sistema Nacional de Informações sobre Saneamento (SNIS) de 2011, a rede de esgoto só atende 67% da população urbana.

* Publicado originalmente no site Agência Pública.

(Agência Pública)

Terrestrial Ecosystems at Risk of Major Shifts as Temperatures Increase (Science Daily)

Oct. 8, 2013 — Over 80% of the world’s ice-free land is at risk of profound ecosystem transformation by 2100, a new study reveals. “Essentially, we would be leaving the world as we know it,” says Sebastian Ostberg of the Potsdam Institute for Climate Impact Research, Germany. Ostberg and collaborators studied the critical impacts of climate change on landscapes and have now published their results in Earth System Dynamics, an open access journal of the European Geosciences Union (EGU).

This image shows simulated ecosystem change by 2100, depending on the degree of global temperature increase: 2 degrees Celsius (upper image) or 5 degrees Celsius (lower image) above preindustrial levels. The parameter Γ (Gamma) measures how far apart a future ecosystem under climate change would be from the present state. Blue colours (lower Γ) depict areas of moderate change, yellow to red areas (higher Γ) show major change. The maps show the median value of the Γ parameter across all climate models, meaning at least half of the models agree on major change in the yellow to red areas, and at least half of the models are below the threshold for major change in the blue areas. (Credit: Ostberg et al., 2013)

The researchers state in the article that “nearly no area of the world is free” from the risk of climate change transforming landscapes substantially, unless mitigation limits warming to around 2 degrees Celsius above preindustrial levels.

Ecosystem changes could include boreal forests being transformed into temperate savannas, trees growing in the freezing Arctic tundra or even a dieback of some of the world’s rainforests. Such profound transformations of land ecosystems have the potential to affect food and water security, and hence impact human well-being just like sea level rise and direct damage from extreme weather events.

The new Earth System Dynamics study indicates that up to 86% of the remaining natural land ecosystems worldwide could be at risk of major change in a business-as-usual scenario (see note). This assumes that the global mean temperature will be 4 to 5 degrees warmer at the end of this century than in pre-industrial times — given many countries’ reluctance to commit to binding emissions cuts, such warming is not out of the question by 2100.

“The research shows there is a large difference in the risk of major ecosystem change depending on whether humankind continues with business as usual or if we opt for effective climate change mitigation,” Ostberg points out.

But even if the warming is limited to 2 degrees, some 20% of land ecosystems — particularly those at high altitudes and high latitudes — are at risk of moderate or major transformation, the team reveals.

The researchers studied over 150 climate scenarios, looking at ecosystem changes in nearly 20 different climate models for various degrees of global warming. “Our study is the most comprehensive and internally consistent analysis of the risk of major ecosystem change from climate change at the global scale,” says Wolfgang Lucht, also an author of the study and co-chair of the research domain Earth System Analysis at the Potsdam Institute for Climate Impact Research.

Few previous studies have looked into the global impact of rising temperatures on ecosystems because of how complex and interlinked these systems are. “Comprehensive theories and computer models of such complex systems and their dynamics up to the global scale do not exist.”

To get around this problem, the team measured simultaneous changes in the biogeochemistry of terrestrial vegetation and the relative abundance of different vegetation species. “Any significant change in the underlying biogeochemistry presents an ecological adaptation challenge, fundamentally destabilising our natural systems,” explains Ostberg.

The researchers defined a parameter to measure how far apart a future ecosystem under climate change would be from the present state. The parameter encompasses changes in variables such as the vegetation structure (from trees to grass, for example), the carbon stored in the soils and vegetation, and freshwater availability. “Our indicator of ecosystem change is able to measure the combined effect of changes in many ecosystem processes, instead of looking only at a single process,” says Ostberg.
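As a rough illustration of how such a combined indicator can be built, here is a simplified sketch; the variables, normalisation and equal weighting below are assumptions for demonstration, not the actual Γ metric of Ostberg et al.:

```python
# Simplified sketch of a combined ecosystem-change indicator.
# Illustrative only: the variables, normalisation and equal weighting
# are assumptions, not the Gamma metric of Ostberg et al. (2013).

def ecosystem_change(present: dict, future: dict) -> float:
    """Average relative change across several ecosystem variables."""
    keys = ["tree_cover", "soil_carbon", "vegetation_carbon", "runoff"]
    changes = [
        abs(future[k] - present[k]) / max(abs(present[k]), 1e-9)
        for k in keys
    ]
    return sum(changes) / len(changes)

# Hypothetical grid cell under a strong-warming scenario
present = {"tree_cover": 0.70, "soil_carbon": 12.0,
           "vegetation_carbon": 8.0, "runoff": 300.0}
future = {"tree_cover": 0.35, "soil_carbon": 9.0,
          "vegetation_carbon": 5.5, "runoff": 380.0}

print(f"combined change: {ecosystem_change(present, future):.2f}")
```

Collapsing many processes into a single number is what lets very different landscapes be compared on one scale, which is the point Ostberg makes about measuring the combined effect of changes rather than a single process.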

He hopes the new results can help inform the ongoing negotiations on climate mitigation targets, “as well as planning adaptation to unavoidable change.”

Note

Even though 86% of land ecosystems are at risk if global temperature increases by 5 degrees Celsius by 2100, it is unlikely that all these areas will be affected, since that would require the worst-case scenario from each climate model to come true.

Journal Reference:

  1. S. Ostberg, W. Lucht, S. Schaphoff, D. Gerten. Critical impacts of global warming on land ecosystems. Earth System Dynamics, 2013; 4 (2): 347 DOI: 10.5194/esd-4-347-2013

Explosive Dynamic Behavior On Twitter and in the Financial Market (Science Daily)

Oct. 7, 2013 — Over the past 10 years, social media has changed the way that people influence each other. By analysing data from the social networking service Twitter and stock trading in the financial market, researchers from the Niels Bohr Institute have shown that events in society give rise to common behaviour among large groups of people who do not otherwise know each other. The analysis shows that there are common features in user activity on Twitter and in stock market transactions in the financial market.

The results are published in the scientific journal, PNAS, Proceedings of the National Academy of Sciences.

The figure shows how often the international brands IBM, Pepsi and Toyota were mentioned during a five-week period on Twitter. Activity is relatively steady over long periods and is then interrupted by sudden spikes. The research from NBI shows that there are common features in user activity on Twitter and in stock market transactions in the financial market. (Credit: Niels Bohr Institute)

“The whole idea of the study is to understand how social networks function. The strength of using the popular social media, Twitter, is that it has more than 200 million users worldwide, who write short messages about immediate experiences and impressions. This means that you can directly study human behaviour in crowds on the web. Twitter can be seen as a global social sensor,” explains Joachim Mathiesen, Associate Professor of physics at the Niels Bohr Institute at the University of Copenhagen.

Dynamic Twitter behaviour

Joachim Mathiesen developed a programme that could follow the use of Twitter constantly. He could see that there were periods with relatively steady activity and then there would be a very abrupt and intense upswing in activity. Suddenly there was an event that everyone had to respond to and there was an explosion in the amount of online activity.

“There arises a collective behaviour between people who otherwise do not know each other, but who are coupled together via events in society,” explains Joachim Mathiesen.

The analysis also took into account how frequently approximately 100 international brands, like Pepsi, IBM, Apple, Nokia and Toyota, were mentioned in messages on Twitter. Here too, activity is characterised by days of steady levels, interrupted by sudden but brief explosions of activity.

“Something happens that encourages people to write on Twitter, and suddenly the activity explodes. This is a kind of horde behaviour that is driven by an external event and gets crowds to react,” says Joachim Mathiesen.
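A minimal sketch of how such activity bursts can be flagged in an hourly mention-count series follows; the rolling-baseline threshold used here is an assumption for illustration, not the statistical analysis of the study:

```python
# Illustrative burst detection in an hourly mention-count series.
# The rolling-baseline z-score rule is an assumption for demonstration,
# not the method used in the PNAS study.
from statistics import mean, stdev

def find_bursts(counts, window=24, threshold=4.0):
    """Return indices where activity jumps far above the recent baseline."""
    bursts = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (counts[i] - mu) / sigma > threshold:
            bursts.append(i)
    return bursts

# Example: steady activity interrupted by a sudden spike
series = [20, 22, 19, 21, 20, 23, 18, 22] * 4 + [160]
print(find_bursts(series))  # flags the final spike
```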

But why is a physicist concerning himself with social behaviour?

“As physicists, we are good at understanding large amounts of complex data and we can find order in this sea of coincidences. Complex systems are seen in many contexts, and here we are simply learning about human behaviour in large social groups,” he explains.

The model calculations shed light on the statistical properties of large-scale user activity on Twitter and the underlying contexts. Similarly, he analysed the fluctuations in the activity of trading shares on the financial market.

“We saw prolonged intervals of steady activity, after which there was sudden, almost earthquake-like activity. An event starts an avalanche in the trading. Statistically, we see the same characteristic horde behaviour in the financial markets as we do on Twitter, so the two social systems are not that different,” concludes Joachim Mathiesen.

Journal Reference:

  1. J. Mathiesen, L. Angheluta, P. T. H. Ahlgren, M. H. Jensen. Excitable human dynamics driven by extrinsic events in massive communities. Proceedings of the National Academy of Sciences, 2013; DOI: 10.1073/pnas.1304179110

Politics and Perceptions: Social Media, Politics Collide in New Study (Science Daily)

Oct. 8, 2013 — It bothered Lindsay Hoffman and colleagues to see other researchers making broad yet vague claims about the role social media plays in political participation.

So they decided to study it.

In a paper to be published in November in the journal Computers in Human Behavior, the University of Delaware associate professor in the departments of Communication and of Political Science and International Relations and her co-authors explored how people perceive their own political behaviors online. It is part of a larger goal to better understand why people engage in politics both on- and offline.

The study is titled “Does My Comment Count? Perceptions of Political Participation in an Online Environment.” Dannagal Young, associate professor of communication, and Philip Jones, associate professor of political science and international relations, teamed up with Hoffman for the study.

It was built around the question of whether, when people engage in political behavior online — “liking” a candidate’s Facebook page, tweeting their thoughts about a political platform, signing a virtual petition — they see their activities as having influence on the functions of government (participation) or as communication with others.

“A lot of people in the 2008 elections were participating on Facebook and on blogs,” Hoffman said (Twitter didn’t play as strong a role then). “… We were interested in which is participatory and which is seen as communication.”

Hoffman said many claims had been made about the substantial role social media has played in mobilizing people to become more politically active. Some also believe online political engagement is replacing traditional, offline forms of political behavior, prompting people to play a less active role when it comes to activities like voting.

But without a way to define how people perceived what they were doing when they engaged in politics online, Hoffman and her co-authors were skeptical.

The UD researchers relied on a survey of roughly 1,000 randomly selected American adults to assess what people were doing politically on- and offline, what they had done in the past, to what extent they thought their activities were a good way to influence the government and to what extent they thought their actions were a good way to communicate with others.

The survey, which was completed in the summer of 2010, focused on 11 political behaviors, including voting in an election, communicating online about politics, signing up for online political information, friending or “liking” a candidate or politician and putting up a yard sign or wearing a political shirt.

The work led the researchers to conclude people have a realistic notion of what they are doing when they engage in politics online.

“People are more savvy than we think they are,” Hoffman said. “They viewed every type of behavior mentioned except voting as communication.”

People in the study perceived their on- and offline behaviors as playing different political roles. They seemed not to be replacing traditional, offline political engagement with online behaviors, Hoffman and her co-authors found.

“They are not duped into thinking they can influence government or take a hands-off approach” just by being involved online, she said.

Those in the study who reported being more confident in government and their ability to have an impact were even more motivated to engage in online political activities when they perceived it as communication, the study also found.

“If people see it as communication, they are more likely to participate,” Hoffman said. “Communication is a key cornerstone in political involvement.”

This study was one of the first funded through a $50,000 Innovations through Collaborations Grant awarded by the College of Arts and Sciences’ Interdisciplinary Humanities Research Center, which supports cross-disciplinary work.

Hoffman first met Jones in 2009, when he joined the University. They discovered they shared academic interests, and the grant helped bring their ideas together. The collaboration between the three researchers also resulted in a second publication examining the impact of candidate emotion on political participation. That study was published online in the journal New Media and Society in December 2012.

“It was the summer of 2010 when we did the online survey asking about how people participate in politics online,” said Hoffman. “There were a lot of high emotions, the tea party was forming, and we wondered how that might impact certain types of political behaviors.”

The study worked toward filling a void in the literature, where few have looked at the effect a candidate’s emotions — like anger, anxiety and hopefulness — have on how people engage in politics. It also challenged the notion that emotional candidates sway voters, particularly those least involved or least knowledgeable about politics.

The researchers found that the online emotional appeal of a candidate did not influence a person’s likelihood of participating on that candidate’s behalf, unless that person was already highly engaged and knowledgeable. The particular emotion expressed was unimportant.

Hoffman is pleased the collaboration with Young and Jones proved so fruitful.

“You hope for the best: to have good data and results that are interesting and compelling,” said Hoffman. “I am really proud of our collaborations.”

Journal Reference:

  1. Lindsay H. Hoffman, Philip Edward Jones, Dannagal Goldthwaite Young. Does my comment count? Perceptions of political participation in an online environment. Computers in Human Behavior, 2013; 29 (6): 2248 DOI: 10.1016/j.chb.2013.05.010

Exploração de gás de xisto no Oeste baiano pode causar desastre ambiental sem precedentes (O Expresso)

01/09/2013 

Se confirmada a ocorrência de gás de xisto na grande área do Aquífero Urucuia e usada a técnica norte-americana de extração, poderemos contaminar para sempre a água que bebemos e que usamos para irrigar alimentos. Seremos ricos em petróleo e gás e importaremos de outros estados a água para beber. A população das chapadas e de quase toda a Bahia onde existem reservas de xisto precisa se engajar nesta luta para obstruir o possível uso da técnica de fratura hidráulica na extração do gás. O poço de prospecção instalado na Fazenda Vitória (veja a foto abaixo), a apenas 10 km de Luís Eduardo Magalhães, é apenas o início do grande licenciamento que a ANP quer realizar, em outubro, em todo o País.

[Foto: poço de prospecção na Fazenda Vitória]

Conforme o Expresso adiantou em 15 de maio, está começando na Fazenda Vitória, a 10 km do centro de Luís Eduardo Magalhães, a instalação de um poço exploratório de gás e petróleo. Na internet, ambientalistas já começam a se manifestar sobre o desastre iminente, caso venha a ser encontrado o gás de xisto e usado o processo de hidrofragmentação, fratura hidráulica ou fracking para a coleta do gás, técnica desenvolvida nos Estados Unidos que já causa um enorme passivo ambiental. Segundo relatórios de ambientalistas americanos, o processo leva à poluição de lençóis freáticos profundos, e a água utilizada para a retirada do gás retorna ao solo altamente poluída, com resquícios de petróleo. Tanto que a maioria dos estados americanos hesita em liberar a exploração do xisto betuminoso por esse processo.

O gás de xisto, também conhecido como shale gas ou gás não convencional, é encontrado em formações sedimentares de baixa permeabilidade, onde fica aprisionado, característica que por muito tempo inviabilizou sua extração, visto que não havia tecnologia capaz de retirá-lo de dentro das formações de xisto.

Esse paradigma foi quebrado com a técnica de perfuração horizontal dos poços e o advento do fraturamento hidráulico, processo que consiste em bombear, sob alta pressão, uma mistura de água e areia junto com outros produtos químicos no poço, a fim de fraturar as formações de xisto e permitir a liberação do gás.

Nesta sexta, procuramos a assessoria de Comunicação da Petrobrás, no Rio de Janeiro, que indicou a assessoria de Salvador, que, depois de umas três horas de silêncio, indicou a assessoria de comunicação da ANP, “por ser área exploratória”. A assessoria da ANP não respondeu aos nossos questionamentos, via e-mail, e certamente não irá responder nos próximos dias.

O Aquífero Urucuia em perigo

O Aquífero Urucuia, sobre o qual está sendo montado o poço exploratório de Luís Eduardo Magalhães, distribui-se pelos estados da Bahia, Tocantins, Minas Gerais, Piauí, Maranhão e Goiás, onde ocupa uma área estimada de 120.000 km². Deste total, cerca de 75-80% estão encravados na região oeste do Estado da Bahia. Em alguns locais, o grande mar subterrâneo de água doce e pura tem espessura ou altura de até 400 metros. Se a exploração usar efetivamente o processo de hidrofragmentação, o Aquífero estará definitivamente comprometido.

Em palavras mais sérias: em pouco tempo estaremos bebendo água com acentuado gosto de petróleo. E as águas das veredas, em ponto de afloramento de gás, se incendiarão ao contato de uma chama.

Veja o que diz o site Carbono Brasil 

A fratura hidráulica, ou fracking, processo que consiste na utilização de água sob altíssima pressão para extração de gás de xisto, está trazendo diversos problemas ambientais para os Estados Unidos e enfrenta a oposição de diversos grupos ambientalistas e da sociedade civil.

Nesta terça-feira (28), foi publicado um estudo do Serviço Geológico dos EUA e do Serviço de Pesca e Vida Selvagem dos EUA que afirma que os fluidos que vazam do processo estão causando a morte de diversas espécies aquáticas na região de Acorn Fork, no estado do Kentucky.

Segundo a pesquisa, os fluidos da fratura hidráulica prejudicam a qualidade da água a ponto de os peixes desenvolverem lesões nas guelras e sofrerem danos no fígado e no baço. O fracking também fez com que o pH da água diminuísse de 7,5 para 5,6, o que significa que a água se tornou mais ácida.
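Para dimensionar o que essa queda de pH representa, segue um cálculo ilustrativo (a escala de pH é logarítmica; os valores abaixo apenas reproduzem os números citados pelo estudo):

```python
# Cálculo ilustrativo: como pH = -log10[H+], uma queda de 7,5 para 5,6
# multiplica a concentração de íons H+ na água.
ph_antes, ph_depois = 7.5, 5.6
fator = 10 ** (ph_antes - ph_depois)
print(f"A água ficou cerca de {fator:.0f} vezes mais ácida")  # ~79 vezes
```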

Além disso, o processo aumentou a condutividade da água de 200 para 35 mil microsiemens por centímetro, devido aos níveis elevados de metais como ferro, alumínio e outros elementos dissolvidos na água.

Na Califórnia, um desastre

No estado da Califórnia, o fracking também está trazendo transtorno à população. Tanto que uma coalizão de 100 grupos ambientalistas e da sociedade civil acusa a legislação do processo, recém-aprovada pelo Senado norte-americano, de ser muito fraca, e pede para que o governador californiano Jerry Brown suspenda a prática imediatamente.

“A verdade é que não há forma comprovada de proteger a Califórnia do fracking além de proibir essa prática inerentemente perigosa”, escreveram os grupos em uma carta enviada a Brown. De acordo com eles, o conjunto de leis “permitiria que as operações de fracking poluíssem permanentemente grandes quantidades da preciosa água da Califórnia.”

Contudo, o governador acredita que, se feito com segurança, o processo pode trazer grandes ganhos econômicos para o estado. “Tenho que equilibrar meu forte compromisso de lidar com as mudanças climáticas e as energias renováveis com o que pode ser uma oportunidade econômica fabulosa”, colocou ele. Ainda assim, Brown não tomou uma posição pública sobre o projeto de lei.

Uma visão gráfica de como se procede a fratura do solo. Na verdade, os lagos subterrâneos não são como aparecem na imagem. A água está diluída dentro do arenito, que a absorve como uma esponja.

Blairo Maggi e ANP

O senador Blairo Maggi, em audiência pública, realizada nesta terça-feira, 27, na Comissão de Meio Ambiente do Senado, já interpelou a ANP sobre a exploração do gás de xisto.

Diz o Senador:

“Acreditando nessas novas tecnologias, o Brasil está prestes a começar a exploração do gás de xisto. A Agência Nacional de Petróleo (ANP) marcou para novembro o primeiro leilão de blocos de gás. No entanto, no país tal exploração ainda está longe de ser consenso no que diz respeito a custo-benefício.”

“Primeiro é preciso ter certeza da viabilidade econômica da exploração do gás de xisto, já que o sucesso nos Estados Unidos foi resultado de um forte programa governamental, com muitos incentivos. As condições objetivas, incluindo tecnologia, infraestrutura de transporte, mercado consumidor e impactos ambientais, para exploração e consumo no Brasil, recomendam cautela”, alertou o senador.

O Chefe de Gabinete da ANP, Sílvio Jablonski, explicou que o gás não convencional não é uma realidade imediata para o Brasil, mas sim, uma perspectiva para os próximos 10 anos, isso caso se confirmem as possíveis reservas de xisto.

Pontos de divergências

Impactos ambientais e contaminação de lençóis freáticos são fontes constantes de atritos entre ambientalistas, governos e empresas de exploração de gás e petróleo. Blairo Maggi lembrou que no Brasil duas das maiores concentrações do xisto estão sob grandes reservas de água. Uma na Bacia do Parecis onde se formam as bacias hidrográficas Amazônica e Platina, e outra, sob o Aquífero do Guarani.

Para o secretário-executivo de exploração e produção do Instituto Brasileiro de Petróleo, Gás e Biocombustíveis (IBP), Antônio Guimarães, não há risco de contaminação das águas, uma vez que os poços de exploração devem ser feitos a uma profundidade de três mil metros e que seria quase impossível os fluidos do gás escaparem para a superfície.

“A engenharia é fundamental na construção dos poços para a proteção dos lençóis freáticos. As normas da ANP são bastante restritivas, a fiscalização é rígida e as multas são pesadas para qualquer descumprimento. Preservação, engajamento e apoio das comunidades devem ser o foco para o sucesso dessa exploração”, afirmou o secretário.

No entanto, o senador Blairo Maggi frisou que, apesar de o Brasil não poder desconsiderar uma fonte de energia dessa magnitude, não se pode da mesma forma ignorar a inexistência de estudos científicos que comprovem a segurança dessa exploração. E lembrou que o mundo, hoje, está voltado para a busca de energia limpa.

“É preciso antes comprovar que a exploração do gás de xisto será satisfatória principalmente à população, ao meio ambiente e ao Brasil como um todo. Por isso, estamos trazendo à Comissão técnicos especializados no assunto para dirimir todas as dúvidas e debater o tema. Na próxima rodada vamos convidar os órgãos ambientais e ONGs que atuam nessa área”, informou o senador.

O mapa do Aquífero Urucuia, 120 mil km² de água subterrânea contínua, ao lado de outros grandes aquíferos do Oeste baiano

Insetos conseguem prever tempestades e ventanias (Fapesp)

Animais mudam o comportamento sexual quando há queda da pressão atmosférica, fenômeno comum antes de chuvas e ventos fortes, indica pesquisa feita na Esalq (foto: José Maurício Simões Bento/Esalq)

03/10/2013

Por Elton Alisson

Agência FAPESP – Na Índia e no Japão há um ditado popular que diz que “formigas carregando ovos barranco acima, é a chuva que se aproxima”. Já no Brasil, outro provérbio afirma que “quando aumenta a umidade do ar, cupins e formigas saem de suas tocas para acasalar”.

Um estudo publicado na edição do dia 2 de outubro da revista PLoS One – realizado por pesquisadores da Escola Superior de Agricultura “Luiz de Queiroz” (Esalq), da Universidade de São Paulo (USP), de Piracicaba (SP), em parceria com colegas da Universidade Estadual do Centro-Oeste (Unicentro), de Guarapuava (PR), e da University of Western Ontario, do Canadá – comprovou que os insetos preveem mudanças no tempo e dão indicações disso com modificações no comportamento.

Os pesquisadores observaram que besouros da espécie Diabrotica speciosa – conhecidos popularmente como “brasileirinho” ou “patriota”, por terem cor verde e pintas amarelas –, além de pulgões-da-batata (Macrosiphum euphorbiae) e lagartas da pastagem (Pseudaletia unipuncta), têm capacidade de detectar queda na pressão atmosférica – que, na maioria dos casos, é um sinal de chuva iminente. E, ao perceberem isso, modificam o comportamento sexual, diminuindo a disposição de cortejar e acasalar.

“Demonstramos que os insetos, de fato, têm capacidade de detectar mudanças no tempo por meio da queda da pressão atmosférica, de se antecipar e buscar abrigo para se proteger das más condições climáticas, como temporais e ventanias, por exemplo”, disse José Maurício Simões Bento, professor do Departamento de Entomologia e Acarologia da Esalq e um dos autores do estudo, à Agência FAPESP.

“Certamente esses animais estão mais preparados para enfrentar as mudanças repentinas no tempo que, provavelmente, ocorrerão com maior frequência e intensidade no mundo nos próximos anos em razão das mudanças climáticas globais”, avaliou Bento, um dos pesquisadores principais do Instituto Nacional de Ciência e Tecnologia de Semioquímicos na Agricultura – um dos INCTs financiados pela FAPESP e pelo Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).

Para realizar o estudo, os pesquisadores selecionaram três diferentes espécies de insetos – o besouro “brasileirinho”, o pulgão-da-batata e a lagarta da pastagem –, que pertencem a ordens bem distintas e que variam significativamente em termos de massa corpórea e morfologia.

O besouro “brasileirinho” tem estrutura mais robusta e possui cutícula dura e, por isso, é mais resistente a condições de tempo severas, como chuvas fortes e ventanias. Já o pulgão-da-batata tem estrutura mais frágil e é menos resistente a eventos climáticos extremos.

Como já existiam evidências de que os insetos ajustam seus comportamentos associados com o voo e com a alimentação às mudanças na velocidade dos ventos, os pesquisadores decidiram avaliar o efeito das condições atmosféricas especificamente sobre o comportamento de “namoro” e acasalamento dessas três espécies quando sujeitas a mudanças naturais ou manipuladas experimentalmente da pressão atmosférica.

Os experimentos em condições naturais (sem a manipulação da pressão) e sob condições controladas, em laboratório, revelaram que, ao detectar uma queda brusca na pressão atmosférica, por exemplo, as fêmeas diminuem ou simplesmente deixam de manifestar um comportamento conhecido como “chamamento”, no qual liberam feromônio para atrair machos para o acasalamento.

Os machos, por sua vez, passam a apresentar menor interesse sexual, não respondem aos estímulos das fêmeas e procuram abrigos para se proteger da mudança de tempo capaz de ocorrer nas próximas horas. Passado o mau tempo, os insetos retomam as atividades de cortejo, namoro e acasalamento.

“Esse comportamento de perda momentânea do interesse no acasalamento horas antes de uma tempestade representa uma capacidade adaptativa que, ao mesmo tempo, reduz a probabilidade de lesões e mortes desses animais – uma vez que são organismos diminutos e muito vulneráveis a condições climáticas adversas, como temporais, chuvas pesadas e ventanias – e assegura a reprodução e a perpetuação das espécies”, afirmou Bento.

Experimentos

Os experimentos em condições naturais foram realizados na Esalq e no INCT de Semioquímicos na Agricultura, ambos em Piracicaba. Os pesquisadores utilizaram um olfatômetro com estrutura em Y, colocando uma fêmea em uma das duas extremidades menores e, na outra, um controle (que fica vago). Posteriormente, uma corrente de ar foi passada no interior do sistema, de forma que o feromônio fosse levado para a extremidade principal, onde o macho estava colocado.

Dependendo da direção seguida pelo macho quando a corrente de ar com feromônio era liberada – a extremidade onde estava a fêmea ou a outra, do controle –, era possível avaliar se ele estava sendo atraído ou não pelo feromônio emitido pela fêmea.

Nos experimentos com o besouro “brasileirinho”, os pesquisadores constataram que, sob condições de pressão atmosférica estável ou crescente, o inseto caminhava normalmente em direção ao tubo por onde a corrente de ar com o feromônio da fêmea estava sendo liberado. Já na condição de queda de pressão, o inseto apresentava menor movimentação e interesse em seguir em direção à fêmea.

O grupo também observou que, quando mantidos em contato direto com as fêmeas em condição de queda da pressão atmosférica, os machos também não empenharam esforço para acasalar. Tal comportamento, de acordo com Bento, pode ser explicado pela sensação de risco de vida do inseto.

“É como se, diante de uma situação de perigo iminente, esses animais colocassem a questão da sobrevivência em primeiro lugar – porque é o que garante a perpetuação da espécie – e deixassem o acasalamento para um segundo plano, por ser uma atividade que pode ser retomada após a passagem do mau tempo”, avaliou.

Os pesquisadores também avaliaram o número de vezes que o pulgão-da-batata e a lagarta da pastagem atenderam ao “chamamento” das respectivas fêmeas, sob diferentes condições atmosféricas. Os resultados foram semelhantes aos obtidos com o besouro “brasileirinho”.

O comportamento dos insetos estudados foi afetado significativamente por mudanças na pressão do ar, obtida pelos pesquisadores por meio de dados fornecidos de hora em hora pelo site do Instituto Nacional de Meteorologia (Inmet).

Ao perceber que os dados fornecidos pela instituição indicavam uma queda ou aumento brusco da pressão atmosférica da região de Piracicaba, os pesquisadores davam início aos experimentos para verificar se os insetos apresentavam mudanças no comportamento de chamamento e de cópula e faziam as comparações com as condições de pressão estável.
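Um esboço simples de como essa classificação das condições de pressão pode ser feita a partir de leituras horárias aparece abaixo; os limiares são hipotéticos, apenas para ilustrar a lógica descrita, e não correspondem aos critérios exatos do estudo:

```python
# Esboço ilustrativo: classificar a tendência da pressão atmosférica
# a partir de leituras horárias (em hPa). Os limiares são hipotéticos,
# não os critérios usados pelos pesquisadores da Esalq.

def tendencia_pressao(leituras_hpa, limiar=1.0):
    """Classifica a variação entre a primeira e a última leitura."""
    delta = leituras_hpa[-1] - leituras_hpa[0]
    if delta <= -limiar:
        return "queda"      # condição associada a chuva e vento iminentes
    if delta >= limiar:
        return "aumento"
    return "estável"

# Exemplo: leituras horárias hipotéticas antes de um temporal
print(tendencia_pressao([1013.2, 1012.4, 1011.1, 1009.8]))  # "queda"
```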

“Conseguimos verificar, dessa forma, que o comportamento sexual dos insetos varia em função do efeito da pressão atmosférica, uma vez que todas as outras condições – como a temperatura, umidade e a luz – foram controladas nos experimentos”, afirmou Bento.

Após constatar essa mudança de comportamento sexual dos insetos em condições naturais, o grupo da Esalq fez uma parceria com colegas canadenses do Departamento de Biologia da University of Western Ontario para realizar novos ensaios comportamentais em laboratório, mais precisamente em uma câmara barométrica de grandes dimensões, que permite controlar a pressão atmosférica, além da temperatura, umidade e luz.

Segundo Bento, os testes conduzidos sob controle de pressão comprovaram as observações feitas em condições naturais.

Fenômeno extensivo

De acordo com o pesquisador, o fato de as três espécies de insetos analisadas no estudo terem modificado o comportamento sexual em resposta às alterações na pressão do ar sugere que o fenômeno pode ser extensivo, de maneira geral, às demais espécies de insetos, e que esses animais são adaptados para enfrentar as más condições meteorológicas.

“Já havia outros trabalhos científicos sugerindo mudanças de comportamento de animais em função das mudanças no tempo, mas foram realizados com uma única espécie, especificamente, e não poderiam ser generalizados para as demais espécies do grupo”, disse Bento.

“Nosso estudo demonstrou que, no caso dos insetos, esse fenômeno parece ser extensivo às outras espécies”, afirmou.

O grupo de pesquisadores da Esalq investiga agora os mecanismos que os insetos utilizam para detectar mudanças na pressão atmosférica e desenvolver o comportamento adaptativo de interromper o acasalamento ao pressentir mudanças no tempo.

O artigo Weather forecasting by insects: modified sexual behaviour in response to atmospheric pressure changes (doi: 10.1371/journal.pone.0075004), de Bento e outros, pode ser lido na PLoS One em http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0075004.

*   *   *

Insects Modify Mating Behavior in Anticipation of Storms (Science Daily)

Oct. 2, 2013 — Insects modify calling and courting mating behavior in response to changes in air pressure, according to results published October 2 in the open-access journal PLOS ONE by Ana Cristina Pellegrino and José Maurício Bento, University of São Paulo, and colleagues from other institutions. The bugs’ ability to predict adverse weather conditions may help them modify their mating behavior during high winds and rain, reducing risk of injury or even death.

Researchers studied mating behavior changes in the cucurbit beetle, the true armyworm moth, and the potato aphid under falling, stable, and increasing air pressure conditions. When researchers measured the male beetles’ response to female sex pheromones under the different conditions, they found a significant decrease in pheromone response when air pressure fell compared to stable or increasing pressure. Furthermore, 63% of males started copulating faster in the presence of females during dropping atmospheric pressure, a condition associated with high rains and winds. By contrast, under stable or rising air pressure conditions, all males showed full courtship behavior.

Additionally, the amount that female armyworm moths and potato aphids showed mate-attracting behavior was also measured under the three atmospheric conditions. The female armyworms’ calling was reduced during decreasing air pressure, but the potato aphid showed reduced calling during both decreasing and increasing air pressure, two conditions that can occur with high winds. In both cases, reduced calling went hand-in-hand with reduced mating behavior. Bento explains, “The results presented show that three very different insect species all modify aspects of their sexual behaviour in response to changing barometric pressure. However, there is a great deal of interspecific variability in their responses that can be related to differences in size, flight ability and the diel periodicity of mating.”

Journal Reference:

  1. Ana Cristina Pellegrino, Maria Fernanda Gomes Villalba Peñaflor, Cristiane Nardi, Wayne Bezner-Kerr, Christopher G. Guglielmo, José Maurício Simões Bento, Jeremy N. McNeil. Weather Forecasting by Insects: Modified Sexual Behaviour in Response to Atmospheric Pressure Changes. PLoS ONE, 2013; 8 (10): e75004 DOI: 10.1371/journal.pone.0075004

Demissões no Inpe comprometem a previsão do tempo, afirma sindicato (Portal G1, via Agência Ambiente Brasil)

JC e-mail 4825, de 02 de Outubro de 2013.

Se os contratos de 71 funcionários forem suspensos, o Centro de Previsão do Tempo e Estudos Climáticos vai perder mais de um terço da mão de obra

A 10 dias do fim do prazo determinado pela Justiça para a demissão de servidores contratados irregularmente pelo Instituto Nacional de Pesquisas Espaciais (Inpe), ligado ao Ministério da Ciência, Tecnologia e Inovação (MCTI), o governo federal ainda não definiu a liberação de vagas para abertura de um concurso público. Se os contratos de 71 funcionários forem suspensos a partir de 11 de outubro, o Centro de Previsão do Tempo e Estudos Climáticos (Cptec), em Cachoeira Paulista (SP), vai perder mais de um terço da mão de obra – o centro é o mais avançado em previsão numérica de tempo e clima da América Latina.

O Sindicato dos Servidores Públicos Federais da área de Ciência e Tecnologia no setor Aerospacial (SindCT) cobra agilidade da direção do Inpe e dos Ministérios da Ciência, Tecnologia e Inovação e do Planejamento na solução do problema, com o objetivo de tentar suspender temporariamente a decisão judicial. De acordo com a entidade, a previsão do tempo poderá ser comprometida. A direção do Inpe nega que isso vá ocorrer e aguarda a decisão do recurso protocolado no Tribunal Regional Federal (TRF-SP) em que pede prolongamento do prazo para solucionar o problema dos contratos irregulares.

O Inpe foi notificado da nulidade dos contratos em 27 de agosto e, na ocasião, a sentença definiu prazo de 45 dias para promover as demissões – o prazo termina no próximo dia 10. A ação do Ministério Público Federal (MPF) contesta 111 contratações feitas em caráter emergencial em 2010. Mas, segundo Leonel Perondi, diretor do Inpe, um concurso realizado em 2012 possibilitou a substituição de parte dos temporários e, atualmente, a situação se mantém irregular para um grupo de 71 servidores, entre os quais 52 que atuam na previsão do tempo. Outros nove profissionais trabalham no Laboratório de Combustão e Propulsão, onde testes de combustíveis para satélites são realizados. O Cptec tem atualmente um total de 146 servidores.

A maioria dos profissionais que serão desligados são meteorologistas, engenheiros analistas de sistemas e técnicos de computação. Parte dos funcionários que devem ter os contratos suspensos atua na operação do Tupã, o supercomputador que custou R$ 50 milhões e ampliou a precisão das previsões de fenômenos climáticos extremos, como grandes temporais. O computador é o único no país, segundo o Inpe, e, além do Cptec, também fornece informações ao Instituto Nacional de Meteorologia (Inmet).

Os contratos dos servidores temporários, caso não houvesse intervenção judicial, terminariam entre 2014 e 2015. Segundo o instituto, nenhum servidor foi demitido até o momento.

Ameaça de paralisação – O Sindicato dos Servidores Públicos Federais da área de Ciência e Tecnologia no setor Aerospacial (SindCT) cobra agilidade. “Já se passou metade do prazo judicial para que as demissões aconteçam e as vagas não foram liberadas. Essa seria a única possibilidade de tentar suspender a decisão judicial. A previsão do tempo será comprometida, não temos efetivo para operar o Tupã e isso trará consequências a todo país, sobretudo na agricultura”, disse Fernando Morais, vice-presidente do SindCT.

A entidade informou que planeja fazer uma paralisação na próxima sexta-feira (4), envolvendo os 71 demissíveis na tentativa de sensibilizar o governo federal. “Não é possível esperar mais. Estive em Brasília para falar com as autoridades para tentar intervir no problema, mas é preciso que, se liberadas as vagas, esse edital seja lançado no dia seguinte”, afirmou Morais ao G1.

Segundo o presidente do Inpe, dois avisos ministeriais solicitando a abertura de concursos ao Ministério do Planejamento, Orçamento e Gestão foram enviados pelo instituto desde novembro do ano passado, mas nenhuma resposta foi obtida. O diretor do Inpe vai a Brasília nesta terça (1º) tentar negociar a liberação das vagas.

Prejuízo – Os dados e informações fornecidos pelo supercomputador Tupã geram previsões meteorológicas diárias e a longo prazo do tempo. As informações são usadas na agricultura, pelas prefeituras e Defesa Civil que recebem alertas de desastres naturais por meio do Centro Nacional de Monitoramento e Alertas de Desastres Naturais (Cemadem) e também pela aviação civil.

O supercomputador não pode ser desligado. “Essa máquina não foi feita para ser desligada, por isso está atrelada a geradores. O desligamento pode resultar em até 3 meses de serviço, envolvendo técnicos especializados, na tentativa de recuperá-lo. Isso vai envolver custos”, informou o vice-presidente do SindCT.

Ele destacou também que a abertura das vagas ameaçadas por meio de concurso público não traria impacto financeiro ao órgão. “Essas pessoas já têm seus salários inclusos na folha de pagamento do Inpe, não muda nada”, destacou.

De acordo com Perondi, a possibilidade de desligamento do supercomputador está descartada. Ele informa que espera que o recurso sensibilize a Justiça quanto ao prolongamento dos prazos e, que paralelamente, tem estudado medidas para manter o serviço de previsão do tempo.

“Acredito que será possível fazermos um termo de ajuste de conduta e mantermos esses profissionais nas suas atividades até a realização do concurso. Não se contempla a possibilidade de parar a previsão, são dados para áreas críticas do país, vamos estudar alternativas para suprir a eventual falta destes profissionais”, disse ao G1. Perondi acredita que o ideal seria prolongar o prazo para a regularização dos contratos em pelo menos um ano. O Inpe recebeu uma série de pedidos de órgãos como do Operador Nacional do Sistema Elétrico (ONS) e da Defesa Civil. Os documentos foram anexados no recurso.

Acusação – As contratações irregulares no Inpe aconteceram durante a gestão de Gilberto Câmara. O SindCT acusa o então gestor de negligenciar as recomendações jurídicas e promover as contratações irregulares. “Existia um parecer desaconselhando essas contratações. Ele criou um imbróglio, nunca brigou pelas vagas, nunca pediu pela abertura de concurso e agora temos esse problema”, denunciou Morais.

Em missão na Alemanha, Câmara entrou em contato com a reportagem nesta terça-feira (1º) e rebateu as acusações. Segundo ele, as contratações já ocorriam antes de sua gestão e as terceirizações questionadas pelo Ministério Público, que ocorreram em 2010, estavam respaldadas por parecer da Advocacia Geral da União (AGU). Ele também encaminhou ao G1 cópia de ofícios solicitando a abertura de vagas para o órgão em 2007 e 2009.

O ex-diretor questionou ainda o modelo de contratação exigido para o Inpe, considerado por ele um prejuízo para o país. “O país precisa do serviço prestado pelo Inpe e não de mais servidores públicos. O melhor regime seria a criação de uma organização social (OS), como ocorre em vários institutos que recebem missões do governo. Não faz sentido contratar servidor público para a vida inteira. Entendo que o interesse público está servido enquanto o Inpe tiver servidores qualificados, exercendo seus cargos”, defendeu.

O último concurso público do Inpe foi realizado em 2012 para o preenchimento de pouco mais de 100 vagas, incluindo a regularização dos 40 profissionais que corriam risco de terem os contratos suspensos. Antes deste concurso, um outro foi realizado em 2004.

A previsão é que até 2015, tendo como base a quantidade de 1.063 funcionários em atividade em 2010, o Inpe tenha uma redução de 36% no efetivo por motivo de aposentadoria. A idade média dos servidores é de 52 anos.

Entenda o caso – De acordo com o procurador Fernando Lacerda Dias, autor da proposta que resultou na anulação dos contratos, o Inpe fez manobras jurídicas proibidas pela lei para contratar pessoal. “Os contratos foram feitos com base em uma lei que autoriza essas contratações sem concurso. Mas é uma legislação específica para contratação temporária e não para funções de rotina, que é o que eles fizeram”, disse ao G1 no último dia 5.

O Inpe, ao longo dos anos, acumulou novas funções na sua área de atuação, mas o aumento de atribuições não foi acompanhado pela renovação de servidores, o que gerou defasagem quantitativa de mão de obra.

Na tentativa de resolver o problema, o órgão passou a contratar servidores terceirizados, com base no decreto 2.271/97. O procedimento é considerado ilegal, porque a terceirização de mão de obra não pode servir para suprir as atividades finalísticas de um órgão público.

Em 2006, a União se comprometeu a substituir gradualmente os funcionários terceirizados irregulares por servidores concursados, mas em 2009, próximo à expiração dos contratos existentes, o Inpe tentou nova contratação de servidores terceirizados.

A terceirização foi negada pelos órgãos internos de assessoramento jurídico do Inpe. Sem ter o concurso público aberto, o órgão alterou a forma jurídica das contratações, realizando processo seletivo para contratação temporária de 111 novos servidores, com base na lei 8.745/93. A manobra também foi considerada ilegal.

Outro lado – O ministro da Ciência e Tecnologia, Marco Antônio Raupp, foi procurado e informou, por meio de sua assessoria de imprensa, que a pasta pleiteou a liberação das vagas para a abertura do concurso. A assessoria da pasta recomendou inicialmente que o G1 procurasse o Ministério do Planejamento para obter informações sobre a liberação das vagas.

O Ministério do Planejamento informou por e-mail que autorizou em 2013 o provimento de 832 vagas para o Ministério de Ciência, Tecnologia e Inovação, a quem compete realizar a distribuição das vagas entre seus vários institutos de pesquisa e que autorizações adicionais dependerão de novas tratativas com o MCTI.

Informada sobre o retorno do Ministério do Planejamento nesta quinta-feira (26), a assessoria de imprensa do Ministério de Ciência e Tecnologia não informou, até a tarde desta segunda-feira (30), o destino destas vagas liberadas, nem se elas estão disponíveis para o Inpe.

(Portal G1, via Agência Ambiente Brasil)