Tag archive: Modeling

What Did Neanderthals Leave to Modern Humans? Some Surprises (New York Times)

Geneticists tell us that somewhere between 1 and 5 percent of the genome of modern Europeans and Asians consists of DNA inherited from Neanderthals, our prehistoric cousins.

At Vanderbilt University, John Anthony Capra, an evolutionary genomics professor, has been combining high-powered computation and a medical records databank to learn what a Neanderthal heritage — even a fractional one — might mean for people today.

We spoke for two hours when Dr. Capra, 35, recently passed through New York City. An edited and condensed version of the conversation follows.

Q. Let’s begin with an indiscreet question. How did contemporary people come to have Neanderthal DNA in their genomes?

A. We hypothesize that roughly 50,000 years ago, when the ancestors of modern humans migrated out of Africa and into Eurasia, they encountered Neanderthals. Matings must have occurred then. And later.

One reason we deduce this is that the descendants of those who remained in Africa — present-day Africans — don’t have Neanderthal DNA.

What does that mean for people who have it? 

At my lab, we’ve been doing genetic testing on the blood samples of 28,000 patients at Vanderbilt and eight other medical centers across the country. Computers help us pinpoint where on the human genome this Neanderthal DNA is, and we run that against information from the patients’ anonymized medical records. We’re looking for associations.
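The association scan described here can be sketched with a toy example: a 2×2 chi-square test comparing phenotype rates between carriers and non-carriers of a single variant. Everything below is simulated; the cohort sizes, rates and choice of test are illustrative assumptions, not details of the Vanderbilt study.

```python
import random

def association_test(carriers, noncarriers):
    """Chi-square test of association between carrying a variant and
    showing a phenotype. Inputs are lists of 0/1 phenotype flags."""
    a = sum(carriers)                  # carriers with the phenotype
    b = len(carriers) - a              # carriers without it
    c = sum(noncarriers)               # non-carriers with it
    d = len(noncarriers) - c           # non-carriers without it
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    odds_ratio = (a * d) / (b * c) if b * c else float("inf")
    return chi2, odds_ratio

# Toy cohort: carriers of a hypothetical Neanderthal-derived variant
# show the phenotype at 15% vs. 10% for non-carriers.
random.seed(1)
carriers = [1 if random.random() < 0.15 else 0 for _ in range(5000)]
noncarriers = [1 if random.random() < 0.10 else 0 for _ in range(5000)]
chi2, oddsr = association_test(carriers, noncarriers)
print(f"chi2 = {chi2:.1f}, odds ratio = {oddsr:.2f}")
```

In practice such scans test many variants at once and must correct for multiple comparisons, ancestry and clinical covariates; a raw chi-square on one variant is only the core idea.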

What we’ve been finding is that Neanderthal DNA has a subtle influence on risk for disease. It affects our immune system and how we respond to different immune challenges. It affects our skin. You’re slightly more prone to a condition where you can get scaly lesions after extreme sun exposure. There’s an increased risk for blood clots and tobacco addiction.

To our surprise, it appears that some Neanderthal DNA can increase the risk for depression; however, there are other Neanderthal bits that decrease the risk. Roughly 1 to 2 percent of one’s risk for depression is determined by Neanderthal DNA. It all depends on where on the genome it’s located.

Was there ever an upside to having Neanderthal DNA?

It probably helped our ancestors survive in prehistoric Europe. When humans migrated into Eurasia, they encountered unfamiliar hazards and pathogens. By mating with Neanderthals, they gave their offspring needed defenses and immunities.

That trait for blood clotting helped wounds close up quickly. In the modern world, however, this trait means greater risk for stroke and pregnancy complications. What helped us then doesn’t necessarily now.

Did you say earlier that Neanderthal DNA increases susceptibility to nicotine addiction?

Yes. Neanderthal DNA can mean you’re more likely to get hooked on nicotine, even though there were no tobacco plants in archaic Europe.

We think this might be because there’s a bit of Neanderthal DNA right next to a human gene for a neurotransmitter receptor implicated in a generalized risk for addiction. In this case and probably others, we think the Neanderthal bits on the genome may serve as switches that turn human genes on or off.

Aside from the Neanderthals, do we know if our ancestors mated with other hominids?

We think they did. Sometimes when we’re examining genomes, we can see the genetic afterimages of hominids who haven’t even been identified yet.

A few years ago, the Swedish geneticist Svante Paabo received an unusual fossilized bone fragment from Siberia. He extracted the DNA, sequenced it and realized it was neither human nor Neanderthal. What Paabo found was a previously unknown hominid he named Denisovan, after the cave where it had been discovered. It turned out that Denisovan DNA can be found in the genomes of modern Southeast Asians and New Guineans.

Have you long been interested in genetics?

Growing up, I was very interested in history, but I also loved computers. I ended up majoring in computer science at college and going to graduate school in it; however, during my first year in graduate school, I realized I wasn’t very motivated by the problems that computer scientists worked on.

Fortunately, around that time — the early 2000s — it was becoming clear that people with computational skills could have a big impact in biology and genetics. The human genome had just been mapped. What an accomplishment! We now had the code to what makes you, you, and me, me. I wanted to be part of that kind of work.

So I switched over to biology. And it was there that I heard about a new field where you used computation and genetics research to look back in time — evolutionary genomics.

There may be no written records from prehistory, but genomes are a living record. If we can find ways to read them, we can discover things we couldn’t know any other way.

Not long ago, the two top editors of The New England Journal of Medicine published an editorial questioning “data sharing,” a common practice where scientists recycle raw data other researchers have collected for their own studies. They labeled some of these researchers “data parasites.” How did you feel when you read that?

I was upset. The data sets we used were not originally collected to specifically study Neanderthal DNA in modern humans. Thousands of patients at Vanderbilt consented to have their blood and their medical records deposited in a “biobank” to find genetic diseases.

Three years ago, when I set up my lab at Vanderbilt, I saw the potential of the biobank for studying both genetic diseases and human evolution. I wrote special computer programs so that we could mine existing data for these purposes.

That’s not being a “parasite.” That’s moving knowledge forward. I suspect that most of the patients who contributed their information are pleased to see it used in a wider way.

What has been the response to your Neanderthal research since you published it last year in the journal Science?

Some of it’s very touching. People are interested in learning about where they came from. Some of it is a little silly. “I have a lot of hair on my legs — is that from Neanderthals?”

But I received racist inquiries, too. I got calls from all over the world from people who thought that since Africans didn’t interbreed with Neanderthals, this somehow justified their ideas of white superiority.

It was illogical. Actually, Neanderthal DNA is mostly bad for us — though that didn’t bother them.

As you do your studies, do you ever wonder about what the lives of the Neanderthals were like?

It’s hard not to. Genetics has taught us a tremendous amount about that, and there’s a lot of evidence that they were much more human than apelike.

They’ve gotten a bad rap. We tend to think of them as dumb and brutish. There’s no reason to believe that. Maybe those of us of European heritage should be thinking, “Let’s improve their standing in the popular imagination. They’re our ancestors, too.”

A mysterious 14-year cycle has been controlling our words for centuries (Science Alert)

Some of your favourite science words are making a comeback.

2 DEC 2016

Researchers analysing several centuries of literature have spotted a strange trend in our language patterns: the words we use tend to fall in and out of favour in a cycle that lasts around 14 years.

Scientists ran computer scripts to track patterns stretching back to the year 1700 through the Google Ngram Viewer database, which monitors language use across more than 4.5 million digitised books. In doing so, they identified a strange oscillation across 5,630 common nouns.
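As a sketch of what detecting such an oscillation involves (assuming nothing about the team’s actual pipeline), one can take a discrete Fourier transform of a yearly frequency series and report the period with the most power. The series below is synthetic: a slow trend plus a built-in 14-year cycle.

```python
import math

def dominant_period(series):
    """Return the period (in samples) of the strongest non-constant
    Fourier component of `series`, found by a direct DFT."""
    n = len(series)
    mean = sum(series) / n
    centred = [x - mean for x in series]
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n)
                 for t, x in enumerate(centred))
        im = sum(x * math.sin(2 * math.pi * k * t / n)
                 for t, x in enumerate(centred))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return n / best_k

# Synthetic yearly frequency series for one noun: a slow drift
# plus a 14-year oscillation, sampled over 280 "years".
freq = [100 + 0.03 * t + 8 * math.sin(2 * math.pi * t / 14)
        for t in range(280)]
print(dominant_period(freq))
```

Real Ngram series are much noisier, so the published analysis would also need significance testing against random fluctuations, as Pagel’s comment later in the article suggests.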

The team says the discovery not only shows how writers and the population at large use words to express themselves – it also affects the topics we choose to discuss.

“It’s very difficult to imagine a random phenomenon that will give you this pattern,” Marcelo Montemurro from the University of Manchester in the UK told Sophia Chen at New Scientist.

“Assuming these patterns reflect some cultural dynamics, I hope this develops into better understanding of why we change the topics we discuss,” he added. “We might learn why writers get tired of the same thing and choose something new.”

The 14-year pattern of words coming into and out of widespread use was surprisingly consistent, although the researchers found that in recent years the cycles have begun to get longer by a year or two. The cycles are also more pronounced when it comes to certain words.

What’s interesting is how related words seem to rise and fall together in usage. For example, royalty-related words like “king”, “queen”, and “prince” appear to be on the crest of a usage wave, which means they could soon fall out of favour.

By contrast, a number of scientific terms, including “astronomer”, “mathematician”, and “eclipse” could soon be on the rebound, having dropped in usage recently.

According to the analysis, the same phenomenon happens with verbs as well, though not to the same extent as with nouns. The researchers also found similar 14-year patterns in French, German, Italian, Russian, and Spanish, so the effect isn’t exclusive to English.

The study suggests that words get a certain momentum, causing more and more people to use them, before reaching a saturation point, where writers start looking for alternatives.

Montemurro and fellow researcher Damián Zanette from the National Council for Scientific and Technical Research in Argentina aren’t sure what’s causing this, although they’re willing to make some guesses.

“We expect that this behaviour is related to changes in the cultural environment that, in turn, stir the thematic focus of the writers represented in the Google database,” the researchers write in their paper.

“It’s fascinating to look for cultural factors that might affect this, but we also expect certain periodicities from random fluctuations,” biological scientist Mark Pagel, from the University of Reading in the UK, who wasn’t involved in the research, told New Scientist.

“Now and then, a word like ‘apple’ is going to be written more, and its popularity will go up,” he added. “But then it’ll fall back to a long-term average.”

It’s clear that language is constantly evolving over time, but a resource like the Google Ngram Viewer gives scientists unprecedented access to word use and language trends across the centuries, at least as far as the written word goes.

You can try it out for yourself, and search for any word’s popularity over time.

But if there are certain nouns you’re fond of, make the most of them, because they might not be in common use for much longer.

The findings have been published in Palgrave Communications.

Global climate models do not easily downscale for regional predictions (Science Daily)

August 24, 2016
Penn State
One size does not always fit all, especially when it comes to global climate models, according to climate researchers who caution users of climate model projections to take into account the increased uncertainties in assessing local climate scenarios.


“The impacts of climate change rightfully concern policy makers and stakeholders who need to make decisions about how to cope with a changing climate,” said Fuqing Zhang, professor of meteorology and director, Center for Advanced Data Assimilation and Predictability Techniques, Penn State. “They often rely upon climate model projections at regional and local scales in their decision making.”

Zhang and Michael Mann, Distinguished Professor of Atmospheric Science and director of the Earth System Science Center, were concerned that the direct use of climate model output at local or even regional scales could produce inaccurate information. They focused on two key climate variables, temperature and precipitation.

They found that projections of temperature changes from global climate models became increasingly uncertain at horizontal scales below roughly 600 miles, a distance equivalent to the combined widths of Pennsylvania, Ohio and Indiana. While climate models might provide useful information about the overall warming expected for, say, the Midwest, predicting the difference between the warming of Indianapolis and Pittsburgh might prove futile.

Regional changes in precipitation were even more challenging to predict, with estimates becoming highly uncertain at scales below roughly 1200 miles, equivalent to the combined width of all the states from the Atlantic Ocean through New Jersey across Nebraska. The difference between changing rainfall totals in Philadelphia and Omaha due to global warming, for example, would be difficult to assess. The researchers report the results of their study in the August issue of Advances in Atmospheric Sciences.
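A toy ensemble illustrates why averaging scale matters: if every hypothetical “model” agrees on the large-scale signal but carries its own spatially correlated small-scale noise, the inter-model spread shrinks as projections are averaged over larger regions. All numbers here are invented and have no connection to the study’s actual models.

```python
import random
import statistics

random.seed(0)
N_MODELS, N_CELLS = 20, 240   # ensemble members, grid cells on a transect

def smooth(xs, w):
    """Moving average of half-width w//2: a crude spatial correlation."""
    half = w // 2
    return [sum(xs[max(0, i - half):i + half + 1]) /
            len(xs[max(0, i - half):i + half + 1]) for i in range(len(xs))]

# Every "model" projects the same large-scale warming (+2 degrees)
# plus its own spatially correlated small-scale noise.
ensemble = [[2.0 + e for e in smooth([random.gauss(0, 1.5)
                                      for _ in range(N_CELLS)], 10)]
            for _ in range(N_MODELS)]

def spread_at_scale(window):
    """Inter-model standard deviation of the mean projection over
    regions `window` cells wide, averaged over those regions."""
    spreads = []
    for start in range(0, N_CELLS - window + 1, window):
        means = [sum(m[start:start + window]) / window for m in ensemble]
        spreads.append(statistics.stdev(means))
    return sum(spreads) / len(spreads)

for w in (5, 30, 120):
    print(f"region {w:3d} cells wide: spread = {spread_at_scale(w):.2f}")
```

The printed spread falls as the averaging window grows past the noise correlation length, which is the qualitative point the researchers make about sub-600-mile detail.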

“Policy makers and stakeholders use information from these models to inform their decisions,” said Mann. “It is crucial they understand the limitation in the information the model projections can provide at local scales.”

Climate models provide useful predictions of the overall warming of the globe and the largest-scale shifts in patterns of rainfall and drought, but they are considerably harder pressed to predict, for example, whether New York City will become wetter or drier, or to deal with the effects of mountain ranges like the Rocky Mountains on regional weather patterns.

“Climate models can meaningfully project the overall global increase in warmth, rises in sea level and very large-scale changes in rainfall patterns,” said Zhang. “But they are uncertain about the potential significant ramifications on society in any specific location.”

The researchers believe that further research may lead to a reduction in the uncertainties. They caution users of climate model projections to take into account the increased uncertainties in assessing local climate scenarios.

“Uncertainty is hardly a reason for inaction,” said Mann. “Moreover, uncertainty can cut both ways, and we must be cognizant of the possibility that impacts in many regions could be considerably greater and more costly than climate model projections suggest.”

Theoretical tiger chases statistical sheep to probe immune system behavior (Science Daily)

Physicists update predator-prey model for more clues on how bacteria evade attack from killer cells

April 29, 2016
IOP Publishing
Studying the way that solitary hunters such as tigers, bears or sea turtles chase down their prey turns out to be very useful in understanding the interaction between individual white blood cells and colonies of bacteria. Reporting their results in the Journal of Physics A: Mathematical and Theoretical, researchers in Europe have created a numerical model that explores this behaviour in more detail.

Using mathematical expressions, the group can examine the dynamics of a single predator hunting a herd of prey. The routine splits the hunter’s motion into a diffusive part and a ballistic part, which represent the search for prey and then the direct chase that follows.

“We would expect this to be a fairly good approximation for many animals,” explained Ralf Metzler, who led the work and is based at the University of Potsdam in Germany.

Obstructions included

To further improve its analysis, the group, which includes scientists from the National Institute of Chemistry in Slovenia, and Sorbonne University in France, has incorporated volume effects into the latest version of its model. The addition means that prey can now inadvertently get in each other’s way and endanger their survival by blocking potential escape routes.

Thanks to this update, the team can not only study animal behaviour but also gain greater insight into the way that killer cells such as macrophages (large white blood cells patrolling the body) attack colonies of bacteria.

One of the key parameters determining the life expectancy of the prey is the so-called ‘sighting range’ — the distance at which the prey is able to spot the predator. Examining this in more detail, the researchers found that the hunter profits more from the poor eyesight of the prey than from the strength of its own vision.
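A minimal sketch of such a routine, with a diffusive search that switches to a ballistic chase inside the predator’s sighting range and a prey that flees inside its own sighting range, might look like this. The speeds, ranges and arena size are invented; this is not the authors’ model.

```python
import math
import random

random.seed(42)

def simulate_hunt(pred_sight, prey_sight, steps=20000):
    """One predator and one prey on a 50x50 arena with clamped walls.
    The predator searches diffusively until the prey is within
    `pred_sight`, then chases ballistically; the prey flees whenever
    the predator is within `prey_sight`. Returns the number of steps
    until capture (or `steps` if the prey survives)."""
    L = 50.0
    px, py = L / 2, L / 2
    qx, qy = random.uniform(0, L), random.uniform(0, L)
    v_pred, v_prey = 1.0, 0.6
    for t in range(steps):
        d = math.hypot(qx - px, qy - py)
        if d < 1.0:
            return t                      # capture radius reached
        if d < pred_sight:                # ballistic chase
            px += v_pred * (qx - px) / d
            py += v_pred * (qy - py) / d
        else:                             # diffusive search
            ang = random.uniform(0, 2 * math.pi)
            px += v_pred * math.cos(ang)
            py += v_pred * math.sin(ang)
        if d < prey_sight:                # prey flees
            qx += v_prey * (qx - px) / d
            qy += v_prey * (qy - py) / d
        px, py = min(max(px, 0.0), L), min(max(py, 0.0), L)
        qx, qy = min(max(qx, 0.0), L), min(max(qy, 0.0), L)
    return steps

runs = 40
poor_sighted = sum(simulate_hunt(10.0, 2.0) for _ in range(runs)) / runs
sharp_sighted = sum(simulate_hunt(10.0, 15.0) for _ in range(runs)) / runs
print(f"mean capture time, poorly sighted prey: {poor_sighted:.0f} steps")
print(f"mean capture time, sharp-eyed prey:     {sharp_sighted:.0f} steps")
```

In this sketch, dulling the prey’s sighting range shortens the mean capture time markedly, echoing the observation above that the hunter profits most from the prey’s poor eyesight.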

Long tradition with a new dimension

The analysis of predator-prey systems has a long tradition in statistical physics and today offers many opportunities for cooperative research, particularly in fields such as biology, biochemistry and movement ecology.

“With the ever more detailed experimental study of systems ranging from molecular processes in living biological cells to the motion patterns of animal herds and humans, the need for cross-fertilisation between the life sciences and the quantitative mathematical approaches of the physical sciences has reached a new dimension,” Metzler comments.

To help support this cross-fertilisation, he heads up a new section of the Journal of Physics A: Mathematical and Theoretical that is dedicated to biological modelling and examines the use of numerical techniques to study problems in the interdisciplinary field connecting biology, biochemistry and physics.

Journal Reference:

  1. Maria Schwarzl, Aljaz Godec, Gleb Oshanin, Ralf Metzler. A single predator charging a herd of prey: effects of self volume and predator–prey decision-making. Journal of Physics A: Mathematical and Theoretical, 2016; 49 (22): 225601. DOI: 10.1088/1751-8113/49/22/225601

Mathematical model helps plan the operation of water reservoirs (Fapesp)

A computational system developed by researchers at USP and Unicamp establishes rationing rules for water supply during drought periods

Researchers at the Polytechnic School of the University of São Paulo (Poli-USP) and the School of Civil Engineering, Architecture and Urbanism of the University of Campinas (FEC-Unicamp) have developed new mathematical and computational models to optimize the management and operation of complex water-supply and electric-power systems such as those found in Brazil.

The models, first developed in the early 2000s, were refined through the Thematic Project “HidroRisco: Risk management technologies applied to water supply and electric power systems,” carried out with support from Fapesp.

“The idea is that the mathematical and computational models we developed can help the managers of water and electricity distribution and supply systems make decisions with enormous social and economic impacts, such as declaring rationing,” Paulo Sérgio Franco Barbosa, a professor at FEC-Unicamp and the project’s coordinator, told Agência Fapesp.

According to Barbosa, many of the technologies used today in Brazil’s water and energy sectors to manage supply, demand and the risk of shortages during extreme climate events, such as severe drought, were developed in the 1970s, when Brazilian cities were smaller and the country did not have water and hydropower systems as complex as today’s.

For these reasons, he says, those management systems have shortcomings, such as failing to account for the connections between different basins and failing to consider climate events more extreme than those already on record when planning the operation of a system of reservoirs and water distribution.

“There was a failure in sizing the supply capacity of the Cantareira reservoir, for example, because no one imagined a drought worse than the one that hit the basin in 1953, considered the driest year in the reservoir’s history before 2014,” Barbosa said.

To improve today’s risk-management systems, the researchers developed new mathematical and computational models that simulate the operation of a water-supply or energy system in an integrated way and under different scenarios of rising water supply and demand.

“Using statistical and computational techniques, the models we developed can run better simulations and better protect a water-supply or electric-power system against climate risks,” Barbosa said.

One of the models, developed in collaboration with colleagues at the University of California, Los Angeles, in the United States, is Sisagua, a platform for optimization and simulation modeling of water-supply systems.

The computational platform integrates and represents all the supply sources of a reservoir and water-distribution system for large cities such as São Paulo, including reservoirs, canals, pipelines, and treatment and pumping stations.

“Sisagua makes it possible to plan operations, study supply capacity and evaluate alternatives for expanding or reducing the delivery of a water-supply system in an integrated way,” Barbosa noted.

One of the model’s distinguishing features, according to the researcher, is that it establishes rationing rules for a large reservoir and distribution system during droughts, such as the one São Paulo experienced in 2014, so as to minimize the damage that rationing causes to the population and the economy.

When one of the system’s reservoirs falls below normal levels and approaches its minimum operating volume, the computational model triggers a first stage of rationing, reducing the supply of stored water by 10 percent, for example.

If the reservoir’s supply crisis persists, the mathematical model suggests alternatives to limit the severity of rationing, distributing the cuts more evenly over the drought period and among the system’s other reservoirs.

“Sisagua has computational intelligence that indicates where and when to cut the delivery of a water-supply system so as to minimize the damage to the system and to a city’s population and economy,” Barbosa said.
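The staged rationing logic described above can be sketched as a threshold rule inside a reservoir water-balance loop. The thresholds, cut fractions, capacity and inflows below are illustrative placeholders, not Sisagua’s actual rules or Cantareira data.

```python
def staged_release(storage, demand, capacity):
    """Staged rationing rule in the spirit of the description above
    (thresholds and cut fractions are illustrative, not Sisagua's).
    Returns the volume to supply this period."""
    level = storage / capacity
    if level > 0.40:        # normal operation
        cut = 0.0
    elif level > 0.25:      # first rationing stage: cut supply 10%
        cut = 0.10
    elif level > 0.10:      # second stage
        cut = 0.25
    else:                   # critical stage
        cut = 0.40
    return demand * (1 - cut)

def simulate(inflows, demand, capacity, storage):
    """Run the reservoir through a sequence of monthly inflows,
    returning the final storage and the supply trajectory."""
    supplied = []
    for q in inflows:
        release = min(staged_release(storage, demand, capacity),
                      storage + q)       # can't release more than held
        storage = min(capacity, storage + q - release)
        supplied.append(release)
    return storage, supplied

# Illustrative drought: inflows fall to half of demand for two years.
capacity, demand = 1000.0, 33.0          # hypothetical units
inflows = [33.0] * 12 + [16.0] * 24 + [33.0] * 12
final, supplied = simulate(inflows, demand, capacity, storage=500.0)
worst_cut = 1 - min(supplied) / demand
print(f"worst monthly cut: {worst_cut:.0%}, final storage: {final:.0f}")
```

Run against this synthetic two-year drought, the rule spreads the shortfall over many months at moderate cuts rather than forcing a deeper emergency cut later, which is the damage-spreading behaviour Barbosa describes.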

The Cantareira System

The researchers applied Sisagua to simulate the operation and management of the water-distribution system of the São Paulo metropolitan region, which serves about 18 million people and is considered one of the largest in the world, with an average flow of 67 cubic meters per second (m³/s).

São Paulo’s water-distribution system comprises eight supply subsystems, the largest being the Cantareira, which delivers water to 5.3 million people at an average flow of 33 m³/s.

To evaluate the Cantareira’s supply capacity under a scenario of water scarcity combined with growing demand, the researchers used Sisagua to run a ten-year planning simulation of the subsystem.

For this, they used inflow data for the Cantareira from 1950 to 1960, provided by the Companhia de Saneamento Básico do Estado de São Paulo (Sabesp).

“This period was chosen as the basis for Sisagua’s projections because it recorded severe droughts, with inflows significantly below average for four consecutive years, from 1952 to 1956,” Barbosa explained.

Using the inflow data from this historical series, the mathematical and computational model analyzed scenarios with Cantareira water demand varying between 30 and 40 m³/s.

Among the model’s findings: the Cantareira can meet a demand of up to 34 m³/s, under a scarcity scenario like that of 1950 to 1960, with a negligible risk of shortage. Above that value, scarcity, and with it the risk of water rationing at the reservoir, grows exponentially.

For the Cantareira to meet a demand of 38 m³/s during a drought, the model indicated that rationing would need to begin 40 months (3 years and 4 months) before the basin reached its critical point, below normal volume and near the minimum operating limit.

In this way, it would be possible to meet between 85 and 90 percent of the reservoir’s water demand during the drought until it recovered its ideal volume, avoiding rationing more severe than what would occur if full delivery were maintained.

“The earlier rationing begins in a water-supply system, the better the losses are spread over time,” Barbosa said. “The population can prepare better for a 15 percent water cut over two years, for example, than for a 40 percent cut in just two months.”

Integrated systems

In another study, the researchers used Sisagua to assess whether the Cantareira, Guarapiranga, Alto Tietê and Alto Cotia subsystems could meet current water demand under a scarcity scenario.

For this, they again used inflow data for the four subsystems from 1950 to 1960.

The analyses indicated that the Cotia subsystem hit a critical rationing threshold several times during the simulated ten-year period.

By contrast, the Alto Tietê subsystem frequently held water above its target volume.

Based on these findings, the researchers propose new interconnections for transfers among the four supply subsystems.

Part of the Cotia subsystem’s demand could be supplied by the Guarapiranga and Cantareira subsystems. These two, in turn, could receive water from the Alto Tietê subsystem, Sisagua’s projections indicated.

“Transferring water among the subsystems would provide greater flexibility and result in better distribution, efficiency and reliability for the metropolitan São Paulo water-supply system,” Barbosa said.

According to the researcher, Sisagua’s projections also pointed to the need for investment in new water sources for the São Paulo metropolitan region.

He notes that the main basins supplying São Paulo suffer from problems such as urban concentration.

Around the Alto Tietê basin, for example, which occupies only 2.7 percent of São Paulo state’s territory, nearly 50 percent of the state’s population is concentrated, five times the population density of countries such as Japan, South Korea and the Netherlands.

The Piracicaba, Paraíba do Sul, Sorocaba and Baixada Santista basins, which account for 20 percent of São Paulo’s area, hold 73 percent of the state’s population, with a population density higher than that of countries such as Japan, the Netherlands and the United Kingdom, the researchers note.

“It will be inevitable to consider other water sources for the São Paulo metropolitan region, such as the Juquiá system in the interior of the state, which has water of excellent quality in large volumes,” Barbosa said.

“Because of the distance, this project will be expensive, and it has been postponed. But it can no longer be put off,” he said.

Besides São Paulo, Sisagua has also been used to model the water-supply systems of Los Angeles, in the United States, and of Taiwan.

The article “Planning and operation of large-scale water distribution systems with preemptive priorities” (doi: 10.1061/(ASCE)0733-9496(2008)134:3(247)), by Barros et al., is available to subscribers of the Journal of Water Resources Planning and Management.

Agência Fapesp

Sexually transmitted diseases explain monogamy (El País)

As societies expanded, sexual infections became endemic and took a toll on those who kept many partners


13 APR 2016 – 02:29 CEST

The origin of imposed monogamy remains a mystery. At some point in human history, as the advent of agriculture and animal husbandry began to transform societies, ideas about what was acceptable in relations between men and women started to change. Throughout history, most societies have permitted polygamy. Studies of hunter-gatherers suggest that in prehistoric societies it was common for a relatively small group of men to monopolize the tribe’s women in order to increase their offspring.

Something happened, however, that led many of the groups that came to dominate to adopt a system for organizing sex as far removed from human inclinations as monogamy. As several passages of the Bible attest, the recommended way to resolve conflicts often consisted of stoning adulterers to death.

A group of researchers from the University of Waterloo (Canada) and the Max Planck Institute for Evolutionary Anthropology (Germany), who published an article on the subject this Tuesday in the journal Nature Communications, believes that sexually transmitted diseases played a key role. According to their hypothesis, which was tested with computational models, once agriculture allowed populations of more than 300 people to live together, our relationship with bacteria such as gonorrhea and syphilis changed.

Syphilis and gonorrhea impaired fertility in a society without antibiotics or condoms

In the small groups of the Pleistocene, outbreaks caused by these microbes were short-lived and had limited impact on the population. When a society’s population is larger, however, outbreaks become endemic, and the impact on those who practice polygamy is greater. In a society without latex condoms or antibiotics, bacterial infections take a heavy toll on fertility.

This biological condition would have given an advantage to people who mated monogamously, and it would also have made punishments like those described in the Bible more acceptable for individuals who broke the norm. Eventually, in the growing agrarian societies of early human history, the interplay between monogamy and the norms imposed to sustain it would have conferred an advantage, in the form of higher fertility, on the societies that practiced them.
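The core of the argument, an infection that fizzles out by chance in a small group but becomes endemic in a large one, can be illustrated with a stochastic SIS (susceptible-infected-susceptible) simulation. All parameters below are invented; the study’s own model additionally couples disease dynamics to mating norms and fertility.

```python
import random

random.seed(7)

def sis_outbreak(pop_size, contacts=3, beta=0.05, recovery=0.1, steps=400):
    """Stochastic SIS epidemic in a well-mixed population.
    Returns the fraction infected at the end (0 means the outbreak
    died out). Parameters are illustrative, not fitted to any STI."""
    infected = set(range(3))            # three initial cases
    for _ in range(steps):
        new = set(infected)
        for i in infected:
            for _ in range(contacts):   # random contacts per step
                j = random.randrange(pop_size)
                if j not in infected and random.random() < beta:
                    new.add(j)
            if random.random() < recovery:
                new.discard(i)
        infected = new
        if not infected:
            break                       # stochastic extinction
    return len(infected) / pop_size

small = [sis_outbreak(30) for _ in range(20)]
large = [sis_outbreak(600) for _ in range(20)]
small_endemic = sum(1 for f in small if f > 0)
large_endemic = sum(1 for f in large if f > 0)
print(f"small groups (30) still infected after 400 steps: {small_endemic}/20")
print(f"large groups (600) still infected after 400 steps: {large_endemic}/20")
```

With identical transmission parameters, most of the small groups in this sketch shed the infection within the simulated window while most of the large populations retain it, which is the population-size threshold the authors invoke.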

The study’s authors believe that approaches like this, which test premises about the interaction between social and natural dynamics, can help explain not only the emergence of socially imposed monogamy but also other social norms governing physical contact between human beings.

“Our social norms did not develop in isolation from what was happening in our natural environment,” Chris Bauch, a professor of applied mathematics at the University of Waterloo and one of the study’s authors, said in a statement. “On the contrary, we cannot understand social norms without understanding their origins in our natural environment,” he added. “Our norms were shaped by our natural environment.”

The Water Data Drought (N.Y.Times)

Then there is water.

Water may be the most important item in our lives, our economy and our landscape about which we know the least. We not only don’t tabulate our water use every hour or every day, we don’t do it every month, or even every year.

The official analysis of water use in the United States is done every five years. It takes a tiny team of people four years to collect, tabulate and release the data. In November 2014, the United States Geological Survey issued its most current comprehensive analysis of United States water use — for the year 2010.

The 2010 report runs 64 pages of small type, reporting water use in each state by quality and quantity, by source, and by whether it’s used on farms, in factories or in homes.

It doesn’t take four years to get five years of data. All we get every five years is one year of data.

The data system is ridiculously primitive. It was an embarrassment even two decades ago. The vast gaps — we start out missing 80 percent of the picture — mean that from one side of the continent to the other, we’re making decisions blindly.

In just the past 27 months, there has been a string of high-profile water crises — poisoned water in Flint, Mich.; polluted water in Toledo, Ohio, and Charleston, W. Va.; the continued drying of the Colorado River basin — that have undermined confidence in our ability to manage water.

In the time it took to compile the 2010 report, Texas endured a four-year drought. California settled into what has become a five-year drought. The most authoritative water-use data from across the West couldn’t be less helpful: It’s from the year before the droughts began.

In the last year of the Obama presidency, the administration has decided to grab hold of this country’s water problems, water policy and water innovation. Next Tuesday, the White House is hosting a Water Summit, where it promises to unveil new ideas to galvanize the sleepy world of water.

The question White House officials are asking is simple: What could the federal government do that wouldn’t cost much but that would change how we think about water?

The best and simplest answer: Fix water data.

More than any other single step, modernizing water data would unleash an era of water innovation unlike anything in a century.

We have a brilliant model for what water data could be: the Energy Information Administration, which has every imaginable data point about energy use — solar, wind, biodiesel, the state of the heating oil market during the winter we’re living through right now — all available, free, to anyone. It’s not just authoritative, it’s indispensable. Congress created the agency in the wake of the 1970s energy crisis, when it became clear we didn’t have the information about energy use necessary to make good public policy.

That’s exactly the state of water — we’ve got crises percolating all over, but lack the data necessary to make smart policy decisions.

Congress and President Obama should pass updated legislation creating inside the United States Geological Survey a vigorous water data agency with the explicit charge to gather and quickly release water data of every kind — what utilities provide, what fracking companies and strawberry growers use, what comes from rivers and reservoirs, the state of aquifers.

Good information does three things.

First, it creates the demand for more good information. Once you know what you can know, you want to know more.

Second, good data changes behavior. The real-time miles-per-gallon gauges in our cars are a great example. Who doesn’t want to edge the M.P.G. number a little higher? Any company, community or family that starts measuring how much water it uses immediately sees ways to use less.

Finally, data ignites innovation. Who imagined that when most everyone started carrying a smartphone, we’d have instant, nationwide traffic data? The phones make the traffic data possible, and they also deliver it to us.

The truth is, we don’t have any idea what detailed water use data for the United States will reveal. But we can be certain it will create an era of water transformation. If we had monthly data on three big water users — power plants, farmers and water utilities — we’d instantly see which communities use water well, and which ones don’t.

We’d see whether tomato farmers in California or Florida do a better job. We’d have the information to make smart decisions about conservation, about innovation and about investing in new kinds of water systems.

Water’s biggest problem, in this country and around the world, is its invisibility. You don’t tackle problems that are out of sight. We need a new relationship with water, and that has to start with understanding it.

Study suggests different written languages are equally efficient at conveying meaning (Eureka/University of Southampton)

A study led by the University of Southampton has found there is no difference in the time it takes people from different countries to read and process different languages.

The research, published in the journal Cognition, finds that the same amount of time is needed for a person from, say, China to read and understand a text in Mandarin as for a person from Britain to read and understand a text in English – assuming both are reading in their native language.

Professor of Experimental Psychology at Southampton, Simon Liversedge, says: “It has long been argued by some linguists that all languages have common or universal underlying principles, but it has been hard to find robust experimental evidence to support this claim. Our study goes at least part way to addressing this – by showing there is universality in the way we process language during the act of reading. It suggests no one form of written language is more efficient in conveying meaning than another.”

The study, carried out by the University of Southampton (UK), Tianjin Normal University (China) and the University of Turku (Finland), compared the way three groups of people in the UK, China and Finland read their own languages.

The 25 participants in each group – one group for each country – were given eight short texts to read which had been carefully translated into the three different languages. A rigorous translation process was used to make the texts as closely comparable across languages as possible. English, Finnish and Mandarin were chosen because of the stark differences they display in their written form – with great variation in visual presentation of words, for example alphabetic vs. logographic(1), spaced vs. unspaced, agglutinative(2) vs. non-agglutinative.

The researchers used sophisticated eye-tracking equipment to assess the cognitive processes of the participants in each group as they read. The equipment was set up identically in each country to measure eye movement patterns of the individual readers – recording how long they spent looking at each word, sentence or paragraph.

The results of the study showed significant and substantial differences between the three language groups in relation to the nature of eye movements of the readers and how long participants spent reading each individual word or phrase. For example, the Finnish participants spent longer concentrating on some words compared to the English readers. However, most importantly and despite these differences, the time it took for the readers of each language to read each complete sentence or paragraph was the same.

Professor Liversedge says: “This finding suggests that despite very substantial differences in the written form of different languages, at a basic propositional level, it takes humans the same amount of time to process the same information regardless of the language it is written in.

“We have shown it doesn’t matter whether a native Chinese reader is processing Chinese, or a Finnish native reader is reading Finnish, or an English native reader is processing English, in terms of comprehending the basic propositional content of the language, one language is as good as another.”

The study authors believe more research would be needed to fully understand if true universality of language exists, but that their study represents a good first step towards demonstrating that there is universality in the process of reading.


Notes for editors:

1) Logographic language systems use signs or characters to represent words or phrases.

2) Agglutinative languages tend to express concepts in complex words consisting of many sub-units strung together.

3) The paper Universality in eye movements and reading: A trilingual investigation (Simon P. Liversedge, Denis Drieghe, Xin Li, Guoli Yan, Xuejun Bai, Jukka Hyönä) is published in the journal Cognition.


Semantically speaking: Does meaning structure unite languages? (Eureka/Santa Fe Institute)


Humans’ common cognitive abilities and language dependence may provide an underlying semantic order to the world’s languages


We create words to label people, places, actions, thoughts, and more so we can express ourselves meaningfully to others. Do humans’ shared cognitive abilities and dependence on languages naturally provide a universal means of organizing certain concepts? Or do environment and culture influence each language uniquely?

Using a new methodology that measures how closely words’ meanings are related within and between languages, an international team of researchers has revealed that for many universal concepts, the world’s languages feature a common structure of semantic relatedness.

“Before this work, little was known about how to measure [a culture’s sense of] the semantic nearness between concepts,” says co-author and Santa Fe Institute Professor Tanmoy Bhattacharya. “For example, are the concepts of sun and moon close to each other, as they are both bright blobs in the sky? How about sand and sea, as they occur close by? Which of these pairs is the closer? How do we know?”

Translation, the mapping of relative word meanings across languages, would provide clues. But examining the problem with scientific rigor called for an empirical means to denote the degree of semantic relatedness between concepts.

To get reliable answers, Bhattacharya needed to fully quantify a comparative method that is commonly used to infer linguistic history qualitatively. (He and collaborators had previously developed this quantitative method to study changes in sounds of words as languages evolve.)

“Translation uncovers a disagreement between two languages on how concepts are grouped under a single word,” says co-author and Santa Fe Institute and Oxford researcher Hyejin Youn. “Spanish, for example, groups ‘fire’ and ‘passion’ under ‘incendio,’ whereas Swahili groups ‘fire’ with ‘anger’ (but not ‘passion’).”

To quantify the problem, the researchers chose a few basic concepts that we see in nature (sun, moon, mountain, fire, and so on). Each concept was translated from English into 81 diverse languages, then back into English. Based on these translations, a weighted network was created. The structure of the network was used to compare languages’ ways of partitioning concepts.

The team found that the translated concepts consistently formed three theme clusters in a network, densely connected within themselves and weakly to one another: water, solid natural materials, and earth and sky.
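The network construction described above can be sketched in a few lines. This is a toy illustration with invented polysemy data, not the study's 81-language dataset: the edge weight between two concepts counts how many languages group them under a single word.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical stand-in for the study's data: for each language, which
# concepts turned out to share a single word (polysemy discovered by
# translating out of English and back again).
polysemy = {
    "lang1": [{"sun", "moon"}, {"sea", "salt"}],
    "lang2": [{"sun", "moon", "star"}],
    "lang3": [{"sea", "salt"}, {"earth", "soil"}],
}

# Edge weight = number of languages that group two concepts together.
weights = defaultdict(int)
for groups in polysemy.values():
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            weights[(a, b)] += 1

# ('moon', 'sun') and ('salt', 'sea') each appear together in 2 languages;
# every other pair appears in only 1.
```

Clusters like the three the team reports would then fall out of a community-detection step on this weighted graph, which is omitted here.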

“For the first time, we now have a method to quantify how universal these relations are,” says Bhattacharya. “What is universal – and what is not – about how we group clusters of meanings teaches us a lot about psycholinguistics, the conceptual structures that underlie language use.”

The researchers hope to expand this study’s domain, adding more concepts, then investigating how the universal structure they reveal underlies meaning shift.

Their research was published today in PNAS.

Is human behavior controlled by our genes? Richard Levins reviews ‘The Social Conquest of Earth’ (Climate & Capitalism)

“Failing to take class division into account is not simply a political bias. It also distorts how we look at human evolution as intrinsically bio-social and human biology as socialized biology.”


August 1, 2012

Edward O. Wilson. The Social Conquest of Earth. Liverwright Publishing, New York, 2012

reviewed by Richard Levins

In the 1970s, Edward O. Wilson, Richard Lewontin, Stephen Jay Gould and I were colleagues in Harvard’s new Department of Organismic and Evolutionary Biology. In spite of our later divergences, I retain grateful memories of working in the field with Ed, turning over rocks, sharing beer, breaking open twigs, putting out bait (canned tuna fish) to attract the ants we were studying.

We were part of a group that hoped to jointly write and publish articles offering a common view of evolutionary science, but that collaboration was brief, largely because Lewontin and I strongly disagreed with Wilson’s Sociobiology.

Reductionism and Sociobiology

Although Wilson fought hard against the reduction of biology to the study of molecules, his holism stopped there. He came to promote the reduction of social and behavioral science to biology. In his view:

“Our lives are restrained by two laws of biology: all of life’s entities and processes are obedient to the laws of physics and chemistry; and all of life’s entities and processes have arisen through evolution and natural selection.” [Social Conquest, p. 287]

This is true as far as it goes but fails in two important ways.

First, it ignores the reciprocal feedback between levels. The biological creates the ensemble of molecules in the cell; the social alters the spectrum of molecules in the biosphere; biological activity creates the biosphere itself and the conditions for the maintenance of life.

Second, it doesn’t consider how the social level alters the biological: our biology is a socialized biology.

Higher (more inclusive) levels are indeed constrained by the laws at lower levels of organization, but they also have their own laws that emerge from the lower level yet are distinct and that also determine which chemical and physical entities are present in the organisms. In new contexts they operate differently.

Thus, for example, we, like a few other animals including bears, are omnivores. For some purposes, such as comparing digestive systems, that’s an adequate label. But we are omnivores of a special kind: we not only acquire food by predation, we also produce food, turning the inedible into edible and the transitory into stored food. This has had such a profound effect on our lives that it is also legitimate to refer to us as something new: productivores.

The productivore mode of sustenance opens a whole new domain: the mode of production. Human societies have experienced different modes of production and ways to organize reproduction, each with its own dynamics, relations with the rest of nature, division into classes, and processes which restore or change it when it is disturbed.

The division of society into classes changes how natural selection works, who is exposed to what diseases, who eats and who doesn’t eat, who does the dishes, who must do physical work, how long we can expect to live. It is no longer possible to prescribe the direction of natural selection for the whole species.

So failing to take class division into account is not simply a political bias. It also distorts how we look at human evolution as intrinsically bio-social and human biology as socialized biology.

The opposite of the genetic determinism of sociobiology is not “the blank slate” view that claims that our biological natures were irrelevant to behavior and society. The question is, what about our animal heritage was relevant?

We all agree that we are animals; that as animals we need food; that we are terrestrial rather than aquatic animals; that we are mammals and therefore need a lot of food to support our high metabolic rates that maintain body temperature; that for part of our history we lived in trees and acquired characteristics adapted to that habitat, but came down from the trees with a dependence on vision, hands with padded fingers, and so on. We have big brains, with regions that have different major functions such as emotions, color vision, and language.

But beyond these general capacities, there is widespread disagreement about which behaviors or attitudes are expressions of brain structure. The amygdala is a locus of emotion, but does it tell us what to be angry or rejoice about? It is an ancient part of our brains, but has it not evolved in response to what the rest of the brain is doing? There is higher intellectual function in the cortex, but does it tell us what to think about?

Every part of an organism is the environment for the rest of the organism, setting the context for natural selection. In contrast to this fluid viewpoint, phrases such as “hard-wired” have become part of the pop vocabulary, applied promiscuously to all sorts of behaviors.

In a deeper sense, asking if something is heritable is a nonsense question. Heritability is always a comparison: how much of the difference between humans and chimps is heritable? What about the differences between ourselves and Neanderthals? Between nomads and farmers?

Social Conquest of Earth

The Social Conquest of Earth, Ed Wilson’s latest book, continues his interest in the “eusocial” animals – ants, bees and others that live in groups with overlapping generations and a division of labor that includes altruistic behavior. As the title shows, he also continues to use the terminology of conquest and domination, so that social animals “conquer” the earth and their abundance makes them “dominant.”

The problem that Wilson poses in this book is first, why did eusociality arise at all, and second, why is it so rare?

Wilson is at his best when discussing the more remote past: the origins of social behavior 220 million years ago for termites, 150 million years ago for ants, and 70-80 million years ago for bumblebees and honey bees.

But as he gets closer to humanity the reductionist biases that informed Sociobiology reassert themselves. Once again Wilson argues that brain architecture determines what people do socially – that war, aggression, morality, honor and hierarchy are part of “human nature.”

Rejecting kin selection

A major change, and one of the most satisfying parts of the book, is his rejection of kin selection as a motive force of social evolution, a theory he once defended strongly.

Kin selection assumed that natural selection acts on genes. A gene will be favored if it results in enhancing its own survival and reproduction, but it is not enough to look at the survival of the individual. If my brother and I each have 2 offspring, a shared gene would be doubled in the next generation. But if my brother sacrifices himself so that I might leave 5 offspring while he leaves none, our shared gene will increase 250%.

Therefore, argued the promoters of this theory, the fitness that natural selection increases has to be calculated over a whole set of kin, weighted by the closeness of their relationship. Mathematical formulations were developed to support this theory. Wilson found it attractive because it appeared to support sociobiology.
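The gene's-eye bookkeeping sketched above is usually formalized as Hamilton's rule: an altruistic act is favored when r × B > C, relatedness times the benefit to the recipient exceeding the cost to the actor. A minimal sketch with illustrative numbers (mine, not the review's):

```python
def hamilton_favored(r, benefit, cost):
    """Hamilton's rule: kin selection favors an altruistic act
    when r * B > C (relatedness x recipient's benefit > actor's cost)."""
    return r * benefit > cost

# Full siblings share genes with relatedness r = 0.5.
# Forgoing 2 offspring of one's own (C = 2) to grant a brother
# 3 extra offspring (B = 3) is not favored: 0.5 * 3 = 1.5 < 2.
assert not hamilton_favored(r=0.5, benefit=3, cost=2)

# Granting him 5 extra offspring would be: 0.5 * 5 = 2.5 > 2.
assert hamilton_favored(r=0.5, benefit=5, cost=2)
```

As the review goes on to note, empirical comparisons across species failed to confirm predictions built on this accounting, which is why Wilson abandoned it.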

However, plausible inference is not enough to prove a theory. Empirical studies comparing different species or traits did not confirm the kin selection hypothesis, and a reexamination of its mathematical structure (such as the fuzziness of defining relatedness) showed that it could not account for the observed natural world. Wilson devotes a lot of space to refuting kin selection because of his previous support of it: it is a great example of scientific self-correction.

Does group selection explain social behaviour?

Wilson has now adopted another model, in which the evolution of sociality is the result of opposing processes: ordinary individual selection acting within populations, and group selection acting between populations. He invokes this model to account for religion, morality, honor and other human behaviors.

He argues that individual selection promotes “selfishness” (that is, behavior that enhances individual survival) while group selection favors cooperative and “altruistic” behavior. The two forms of selection oppose each other, and that results in our mixed behaviors.

“We are an evolutionary chimera living on intelligence steered by the demands of animal instinct. This is the reason we are mindlessly dismantling the biosphere and with it, our own prospects for permanent existence.” [p.13]

But this simplistic reduction of environmental destruction to biology will not stand. Contrary to Wilson, the destruction of the biosphere is not “mindless.” It is the outcome of interactions in the noxious triad of greed, poverty, and ignorance, all produced by a socio-economic system that must expand to survive.

For Wilson, as for many environmentalists, the driver of ecological destruction is some generic “we,” who are all in the same boat. But since the emergence of classes after the adoption of agriculture some 8-10,000 years ago it is no longer appropriate to talk of a collective “we.”

The owners of the economy are willing to use up resources, pollute the environment, debase the quality of products, and undermine the health of the producers out of a kind of perverse economic rationality. They support their policies with theories such as climate change denial or doubts about the toxicity of pesticides, and buttress them with legislation and court decisions.

Evolution and religion

The beginning and end of the book, a spirited critique of religion as possibly explaining human nature, is more straightforwardly materialist than the view supported by Stephen J. Gould, who argued that religion and science are separate magisteria that play equal roles in human wellbeing.

But Wilson’s use of evidence is selective.

For example, he argues that religion demands absolute belief from its followers – but this is true only of Christianity and Islam. Judaism lets you think what you want as long as you practice the prescribed rituals, and Buddhism doesn’t concern itself with deities or the afterlife.

Similarly he argues that creation myths are a product of evolution:

“Since paleolithic times … each tribe invented its own creation myths… No tribe could long survive without a creation myth… The creation myth is a Darwinian device for survival.” [p. 8]

But the ancient Israelites did not have an origin myth when they emerged as a people in the hills of Judea around 1250 B.C.E. Although it appears at the beginning of the Bible, the Israelites did not adapt the Book of Genesis from Babylonian mythology until four centuries after Deuteronomy was written, after they had survived 200 years as a tribal confederation, two kingdoms and the Assyrian and Babylonian conquests— by then the writing of scripture was a political act, not a “Darwinian device for survival.”

Biologizing war

In support of his biologizing of “traits,” Wilson reviews recent research that appears to show a biological basis for the way people see and interpret color, for the incest taboo, and for the startle response – and then asserts that inherited traits include war, hierarchy, honor and the like. Ignoring the role of social class, he views these as universal traits of human nature.

Consider war. Wilson claims that war reflects genes for group selection. “A soldier going into battle will benefit his country but he runs a higher risk of death than one who does not.” [p. 165]

But soldiers don’t initiate conflict. We know in our own times that those who decide to make war are not those who fight the wars – but, perhaps unfortunately, sterilizing the general staff of the Pentagon and of the CIA would not produce a more peaceful America.

The evidence against war as a biological imperative is strong. Willingness to fight is situational.

Group selection can’t explain why soldiers have to be coerced into fighting, why desertion is a major problem for generals and is severely punished, or why resistance to recruitment is a major problem of armies. In the present militarist USA, soldiers are driven to join up through unemployment and the promises of benefits such as learning skills and getting an education and self-improvement. No recruitment posters offer the opportunity to kill people as an inducement for signing up.

The high rates of surrender and desertion of Italian soldiers in World War II did not reflect any innate cowardice among Italians but a lack of fascist conviction. The very rarity of surrender by Japanese soldiers in the same war was not a testimony to greater bravery on the part of the Japanese but of the inculcated combination of nationalism and religion.

As the American people turned against the Vietnam war, increased desertions and the killing of officers by the soldiers reflected their rejection of the war.

The terrifying assaults of the Vikings during the Middle Ages bear no resemblance to the mellow Scandinavian culture of today, a span far too short for natural selection to have transformed national character.

The attempt to make war an inherited trait favored by natural selection reflects the sexism that has been endemic in sociobiology. It assumes that local groups differed in their propensity for aggression and prowess in war. The victorious men carry off the women of the conquered settlements and incorporate them into their own communities. Therefore the new generation has been selected for greater military success among the men. But the women, coming from a defeated, weaker group, would bring with them their genes for lack of prowess, a selection for military weakness! Such a selection process would be self-negating.


Wilson also considers ethnocentrism to be an inherited trait: group selection leads people to favor members of their own group and reject outsiders.

The problem is that the lines between groups vary under different circumstances. For example, in Spanish America, laws governing marriage included a large number of graded racial categories, while in North America there were usually just two. What’s more, the category definitions are far from permanent: at one time, the Irish were regarded as Black, and the whiteness of Jews was questioned.

Adoption, immigration, mergers of clans also confound any possible genetic basis for exclusion.


Wilson draws on the work of Herbert Simon to argue that hierarchy is a result of human nature: there will always be rulers and ruled. His argument fails to distinguish between hierarchy and leadership.

There are other forms of organization possible besides hierarchy and chaos, including democratic control by the workers, who elect the operational leadership. In some labor unions, leaders’ salaries are pegged to the median wage of the members. In university departments, the chairmanship is often a rotating task that nobody really wants. When Argentine factory owners closed their plants during the recession, workers seized control and ran the factories profitably despite police sieges.

Darwinian behavior?

Wilson argues that “social traits” evolved through Darwinian natural selection. Genes that promoted behaviors that helped the individual or group to survive were passed on; genes that weakened the individual or group were not. The tension between individual and group selection decided which traits would be part of our human nature.

But a plausible claim that a trait might be good for people is not enough to explain its origin and survival. A gene may become fixed in a population even if it is harmful, just by the random genetic changes that we know occur. Or a gene may be harmful but be dragged along by an advantageous gene close to it on the same chromosome.

Selection may act in different directions in different subpopulations, in different habitats, or in differing environments. Or the adaptive value of a gene may change with its prevalence, or with the distribution of ages in the population, itself a consequence of the environment and population heterogeneity.
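The point that a gene can spread without being beneficial is easy to demonstrate with a standard Wright-Fisher simulation of neutral drift (a textbook model, not anything from Wilson's book): with no selection at all, an allele starting at 10% frequency still drifts to fixation in roughly 10% of runs.

```python
import random

def wright_fisher(pop_size, start_freq, generations, rng):
    """Neutral Wright-Fisher drift: each generation, the 2N gene copies
    are resampled from the current allele frequency, with no selection."""
    n_copies = 2 * pop_size
    count = round(start_freq * n_copies)
    for _ in range(generations):
        p = count / n_copies
        count = sum(rng.random() < p for _ in range(n_copies))
        if count in (0, n_copies):  # allele lost or fixed: stop early
            break
    return count / n_copies

rng = random.Random(42)
outcomes = [wright_fisher(pop_size=20, start_freq=0.1, generations=500, rng=rng)
            for _ in range(200)]
fixed = sum(freq == 1.0 for freq in outcomes)
# Neutral theory predicts fixation with probability equal to the starting
# frequency, so on the order of 20 of the 200 runs end with the allele fixed.
```

With small populations, pure chance alone is enough to carry an allele to 100% frequency, which is the reviewer's point against "it spread, therefore it was adaptive" reasoning.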

For instance, Afro-Americans have a higher death rate from cancer than Euro-Americans. In part this reflects the carcinogenic environments they have been subjected to, but there is also a genetic factor. It is the combination of living conditions and genetics that causes higher mortality rates.

* * *

Obviously I am not arguing that evolution doesn’t happen. The point is that we need a much better argument than just a claim that some genotype might be beneficial. And we need a much more rigorous understanding of the differences and linkages between the biological and social components of humanity’s nature. Just calling some social behavior a “trait” does not make it heritable.

In a book that attempts such a wide-ranging panorama of human evolution, there are bound to be errors. But the errors in The Social Conquest of Earth form a pattern: they reduce social issues to biology, and they insist on our evolutionary continuity with other animals while ignoring the radical discontinuity that made us productivores and divided us into classes.

Impact of human activity on local climate mapped (Science Daily)

Date: January 20, 2016

Source: Concordia University

Summary: A new study pinpoints the temperature increases caused by carbon dioxide emissions in different regions around the world.

This is a map of climate change. Credit: Nature Climate Change

Earth’s temperature has increased by 1°C over the past century, and most of this warming has been caused by carbon dioxide emissions. But what does that mean locally?

A new study published in Nature Climate Change pinpoints the temperature increases caused by CO2 emissions in different regions around the world.

Using simulation results from 12 global climate models, Damon Matthews, a professor in Concordia’s Department of Geography, Planning and Environment, along with post-doctoral researcher Martin Leduc, produced a map that shows how the climate changes in response to cumulative carbon emissions around the world.

They found that temperature increases in most parts of the world respond linearly to cumulative emissions.

“This provides a simple and powerful link between total global emissions of carbon dioxide and local climate warming,” says Matthews. “This approach can be used to show how much human emissions are to blame for local changes.”

Leduc and Matthews, along with co-author Ramon de Elia from Ouranos, a Montreal-based consortium on regional climatology, analyzed the results of simulations in which CO2 emissions caused the concentration of CO2 in the atmosphere to increase by 1 per cent each year until it reached four times the levels recorded prior to the Industrial Revolution.

Globally, the researchers saw an average temperature increase of 1.7 ± 0.4°C per trillion tonnes of carbon in CO2 emissions (TtC), which is consistent with reports from the Intergovernmental Panel on Climate Change.

But the scientists went beyond these globally averaged temperature rises, to calculate climate change at a local scale.

At a glance, here are the average increases per trillion tonnes of carbon that we emit, separated geographically:

  • Western North America 2.4 ± 0.6°C
  • Central North America 2.3 ± 0.4°C
  • Eastern North America 2.4 ± 0.5°C
  • Alaska 3.6 ± 1.4°C
  • Greenland and Northern Canada 3.1 ± 0.9°C
  • North Asia 3.1 ± 0.9°C
  • Southeast Asia 1.5 ± 0.3°C
  • Central America 1.8 ± 0.4°C
  • Eastern Africa 1.9 ± 0.4°C

“As these numbers show, equatorial regions warm the slowest, while the Arctic warms the fastest. Of course, this is what we’ve already seen happen — rapid changes in the Arctic are outpacing the rest of the planet,” says Matthews.

There are also marked differences between land and ocean: the temperature increase averages 1.4 ± 0.3°C per TtC for the oceans, compared with 2.2 ± 0.5°C per TtC for land areas.

“To date, humans have emitted almost 600 billion tonnes of carbon,” says Matthews. “This means that land areas on average have already warmed by 1.3°C because of these emissions. At current emission rates, we will have emitted enough CO2 to warm land areas by 2°C within 3 decades.”
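The linear scaling the study reports makes the local attribution arithmetic simple: local warming ≈ regional sensitivity (°C per TtC) × cumulative emissions. A quick check against the article's own numbers, using its central estimates and the roughly 0.6 TtC emitted to date:

```python
# Central regional sensitivities from the article, in °C per trillion
# tonnes of carbon (TtC); uncertainty ranges omitted for brevity.
sensitivity = {
    "Land average": 2.2,
    "Ocean average": 1.4,
    "Alaska": 3.6,
    "Western North America": 2.4,
    "Southeast Asia": 1.5,
}

emitted_ttc = 0.6  # ~600 billion tonnes of carbon emitted so far

warming = {region: rate * emitted_ttc for region, rate in sensitivity.items()}
# Land average: 2.2 * 0.6 = 1.32 °C, matching the ~1.3 °C Matthews quotes.
```

The same relation yields the three-decade horizon: at 2.2°C per TtC, warming land areas by 2°C requires about 0.91 TtC in total, roughly 0.3 TtC more than has already been emitted.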

Journal Reference:

  1. Martin Leduc, H. Damon Matthews, Ramón de Elía. Regional estimates of the transient climate response to cumulative CO2 emissions. Nature Climate Change, 2016; DOI: 10.1038/nclimate2913

The world’s greatest literature reveals multifractals and cascades of consciousness (Science Daily)

Date: January 21, 2016

Source: The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences

Summary: James Joyce, Julio Cortazar, Marcel Proust, Henryk Sienkiewicz and Umberto Eco. Regardless of the language they were working in, some of the world’s greatest writers appear to be, in some respects, constructing fractals. Statistical analysis, however, revealed something even more intriguing. The composition of works from within a particular genre was characterized by the exceptional dynamics of a cascading (avalanche) narrative structure.

Sequences of sentence lengths (as measured by number of words) in four literary works representative of various degrees of cascading character. Credit: IFJ PAN

James Joyce, Julio Cortazar, Marcel Proust, Henryk Sienkiewicz and Umberto Eco. Regardless of the language they were working in, some of the world’s greatest writers appear to be, in some respects, constructing fractals. Statistical analysis carried out at the Institute of Nuclear Physics of the Polish Academy of Sciences, however, revealed something even more intriguing. The composition of works from within a particular genre was characterized by the exceptional dynamics of a cascading (avalanche) narrative structure. This type of narrative turns out to be multifractal. That is, fractals of fractals are created.

As far as many bookworms are concerned, advanced equations and graphs are the last things that would hold their interest, but there’s no escape from the math. Physicists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, Poland, performed a detailed statistical analysis of more than one hundred famous works of world literature, written in several languages and representing various literary genres. The books, examined for correlations in the variation of sentence length, proved to be governed by the dynamics of a cascade. This means that the construction of these books is in fact a fractal. In several works the mathematical complexity proved to be exceptional, comparable to the structure of complex mathematical objects considered to be multifractal. Interestingly, in the analyzed pool of works, one genre turned out to be exceptionally multifractal in nature.

Fractals are self-similar mathematical objects: when we begin to expand one fragment or another, what eventually emerges is a structure that resembles the original object. Typical fractals, such as the widely known Sierpinski triangle and the Mandelbrot set, are monofractals, meaning that the scaling is the same everywhere, i.e. linear: if a fragment rescaled x times reveals a structure similar to the original, the same rescaling applied anywhere else will also reveal a similar structure.

Multifractals are more advanced mathematical structures: fractals of fractals. They arise from fractals ‘interwoven’ with each other in an appropriate manner and in appropriate proportions. Multifractals are not simply the sum of fractals and cannot be decomposed back into their original components, because the way they are woven together is itself fractal in nature. The result is that in order to see a structure similar to the original, different portions of a multifractal need to be expanded at different rates. A multifractal is therefore non-linear in nature.

“Analyses on multiple scales, carried out using fractals, allow us to neatly grasp information on correlations among data at various levels of complexity of tested systems. As a result, they point to the hierarchical organization of phenomena and structures found in nature. So we can expect natural language, which represents a major evolutionary leap of the natural world, to show such correlations as well. Their existence in literary works, however, had not yet been convincingly documented. Meanwhile, it turned out that when you look at these works from the proper perspective, these correlations appear to be not only common, but in some works they take on a particularly sophisticated mathematical complexity,” says Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).

The study involved 113 literary works written in English, French, German, Italian, Polish, Russian and Spanish by such famous figures as Honore de Balzac, Arthur Conan Doyle, Julio Cortazar, Charles Dickens, Fyodor Dostoevsky, Alexandre Dumas, Umberto Eco, George Eliot, Victor Hugo, James Joyce, Thomas Mann, Marcel Proust, Wladyslaw Reymont, William Shakespeare, Henryk Sienkiewicz, J.R.R. Tolkien, Leo Tolstoy and Virginia Woolf, among others. The selected works were no less than 5,000 sentences long, in order to ensure statistical reliability.

To convert the texts to numerical sequences, sentence length was measured by the number of words (an alternative method, counting the characters in each sentence, turned out to have no major impact on the conclusions). Dependences were then searched for in the data, beginning with the simplest, i.e. linear ones. The question posed was: if one sentence is x times longer than another, is the same length ratio preserved, on average, among the sentences that follow each of them?
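A minimal sketch of this conversion step, turning raw text into a sentence-length series (the regex-based sentence splitter is our simplification; the study’s exact tokenization may differ):

```python
import re

def sentence_lengths(text):
    """Return the length in words of each sentence in the text."""
    # Naive assumption: '.', '!' and '?' terminate sentences.
    sentences = re.split(r"[.!?]+", text)
    return [len(s.split()) for s in sentences if s.strip()]

sample = ("Stately, plump Buck Mulligan came from the stairhead. "
          "He held the bowl aloft. Introibo ad altare Dei.")
print(sentence_lengths(sample))  # [8, 5, 4]
```

The fractal analysis proper would then operate on long sequences of this kind, looking for correlations across many scales.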

“All of the examined works showed self-similarity in terms of the organization of sentence lengths. Some were more pronounced (here The Ambassadors by Henry James stood out), while in others it was far less extreme, as in the case of the seventeenth-century French romance Artamene ou le Grand Cyrus. In every case, however, the correlations were evident, and therefore these texts have a fractal construction,” comments Dr. Pawel Oswiecimka (IFJ PAN), who also noted that the fractality of a literary text will in practice never be as perfect as in the world of mathematics. It is possible to magnify mathematical fractals up to infinity, while the number of sentences in each book is finite, and at a certain stage of scaling there will always be a cut-off in the form of the end of the dataset.

Things took a particularly interesting turn when physicists from the IFJ PAN began tracking non-linear dependence, which in most of the studied works was present to a slight or moderate degree. However, more than a dozen works revealed a very clear multifractal structure, and almost all of these proved to be representative of one genre, that of stream of consciousness. The only exception was the Bible, specifically the Old Testament, which has so far never been associated with this literary genre.

“The absolute record in terms of multifractality turned out to be Finnegans Wake by James Joyce. The results of our analysis of this text are virtually indistinguishable from ideal, purely mathematical multifractals,” says Prof. Drozdz.

The most multifractal works also included A Heartbreaking Work of Staggering Genius by Dave Eggers, Rayuela by Julio Cortazar, the U.S.A. trilogy by John Dos Passos, The Waves by Virginia Woolf, 2666 by Roberto Bolano, and Joyce’s Ulysses. At the same time, many works usually regarded as stream of consciousness turned out to show little multifractality; it was hardly noticeable in books such as Atlas Shrugged by Ayn Rand and A la recherche du temps perdu by Marcel Proust.

“It is not entirely clear whether stream of consciousness writing actually reveals the deeper qualities of our consciousness, or rather the imagination of the writers. It is hardly surprising that ascribing a work to a particular genre is, for whatever reason, sometimes subjective. We see, moreover, the possibility of an interesting application of our methodology: it may someday help in a more objective assignment of books to one genre or another,” notes Prof. Drozdz.

Multifractal analyses of literary texts carried out by the IFJ PAN have been published in Information Sciences, a computer science journal. The publication underwent rigorous verification: given the interdisciplinary nature of the subject, the editors appointed as many as six reviewers.

Journal Reference:

  1. Stanisław Drożdż, Paweł Oświȩcimka, Andrzej Kulig, Jarosław Kwapień, Katarzyna Bazarnik, Iwona Grabska-Gradzińska, Jan Rybicki, Marek Stanuszek. Quantifying origin and character of long-range correlations in narrative texts. Information Sciences, 2016; 331: 32. DOI: 10.1016/j.ins.2015.10.023

Quantum algorithm proves more efficient than any classical analogue (Revista Fapesp)

December 11, 2015

José Tadeu Arantes | Agência FAPESP – The quantum computer may stop being a dream and become reality within the next 10 years. The expectation is that this will bring a drastic reduction in processing time, since quantum algorithms offer more efficient solutions to certain computational tasks than any corresponding classical algorithms.

Until now, it was believed that the key to quantum computing lay in the correlations between two or more systems. An example of quantum correlation is “entanglement,” which occurs when pairs or groups of particles are generated or interact in such a way that the quantum state of each particle cannot be described independently, since it depends on the state of the whole.

A recent study has shown, however, that even an isolated quantum system, i.e. one with no correlations with other systems, is sufficient to implement a quantum algorithm faster than its classical analogue. An article describing the study, “Computational speed-up with a single qudit,” was published in early October this year in Scientific Reports, a Nature group journal.

The work, both theoretical and experimental, started from an idea put forward by the physicist Mehmet Zafer Gedik of Sabanci Üniversitesi in Istanbul, Turkey, and was carried out through a collaboration between Turkish and Brazilian researchers. Felipe Fernandes Fanchini, of the Faculdade de Ciências of the Universidade Estadual Paulista (Unesp) at the Bauru campus, is one of the article’s authors. His participation in the study took place within the scope of the FAPESP-supported project “Controle quântico em sistemas dissipativos” (Quantum control in dissipative systems).

“This work makes an important contribution to the debate over which resource is responsible for the superior processing power of quantum computers,” Fanchini told Agência FAPESP.

“Starting from Gedik’s idea, we carried out an experiment in Brazil using the nuclear magnetic resonance (NMR) system of the Universidade de São Paulo (USP) in São Carlos. It was a collaboration among researchers from three universities: Sabanci, Unesp and USP. We demonstrated that a quantum circuit endowed with a single physical system, with three or more energy levels, can determine the parity of a numerical permutation by evaluating the function only once. That is unthinkable in a classical protocol.”

According to Fanchini, what Gedik proposed was a very simple quantum algorithm that, basically, determines the parity of a sequence. The concept of parity is used to indicate whether a sequence is in a given order or not. For example, if we take the digits 1, 2 and 3 and establish that the sequence 1-2-3 is in order, then the sequences 2-3-1 and 3-1-2, obtained by cyclic permutations of the digits, are in the same order.

This is easy to understand if we imagine the digits arranged around a circle. Given the first sequence, rotating once in one direction yields the next sequence, and rotating once more yields the other. The sequences 1-3-2, 3-2-1 and 2-1-3, however, require acyclic permutations to be created. So if we agree that the first three sequences are “even,” the other three are “odd.”
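The even/odd classification described above can be reproduced by testing whether a sequence is a cyclic rotation of 1-2-3; a minimal sketch (this illustrates the parity concept only, not the single-measurement quantum protocol):

```python
def parity(seq, reference=(1, 2, 3)):
    """'even' if seq is a cyclic rotation of reference, 'odd' otherwise."""
    n = len(reference)
    rotations = {reference[i:] + reference[:i] for i in range(n)}
    return "even" if tuple(seq) in rotations else "odd"

# The three cyclic rotations are even; the three acyclic permutations are odd.
for s in [(1, 2, 3), (2, 3, 1), (3, 1, 2), (1, 3, 2), (3, 2, 1), (2, 1, 3)]:
    print(s, parity(s))
```

Note that any single observed digit is compatible with both an even and an odd sequence, which is why a classical protocol needs at least two observations.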

“In classical terms, observing a single digit, i.e. making a single measurement, does not allow one to say whether the sequence is even or odd. At least two observations are needed for that. What Gedik showed was that, in quantum terms, a single measurement is enough to determine the parity. That is why the quantum algorithm is faster than any classical equivalent. And this algorithm can be realized with a single particle, which means its efficiency does not depend on any kind of quantum correlation,” Fanchini said.

The algorithm in question does not say what the sequence is; it only tells whether it is even or odd. This is possible only when there are three or more levels, because with just two levels, something like 1-2 or 2-1, an even or odd sequence cannot be defined. “Recently, the quantum computing community has been exploring a key concept of quantum theory, the concept of ‘contextuality.’ Since ‘contextuality’ likewise only operates with three or more levels, we suspect it may be behind the effectiveness of our algorithm,” the researcher added.

The concept of contextuality

“The concept of ‘contextuality’ can be better understood by comparing the ideas of measurement in classical and quantum physics. In classical physics, measurement is assumed to do nothing more than reveal characteristics the measured system already possessed, such as a certain length or a certain mass. In quantum physics, by contrast, the result of a measurement depends not only on the characteristic being measured but also on how the measurement was set up and on all the previous measurements. In other words, the result depends on the context of the experiment, and ‘contextuality’ is the quantity that describes that context,” Fanchini explained.

In the history of physics, “contextuality” was recognized as a necessary feature of quantum theory through the famous Bell’s Theorem. According to this theorem, published in 1964 by the Northern Irish physicist John Stewart Bell (1928–1990), no physical theory based on local variables can reproduce all the predictions of quantum mechanics. In other words, physical phenomena cannot be described in strictly local terms, since they express a totality.

“It is important to stress that another article, ‘Contextuality supplies the “magic” for quantum computation,’ published in Nature in June 2014, points to contextuality as the possible source of the power of quantum computing. Our study goes in the same direction, presenting a concrete algorithm that is more efficient than anything imaginable along classical lines.”

Preventing famine with mobile phones (Science Daily)

Date: November 19, 2015

Source: Vienna University of Technology, TU Vienna

Summary: With a mobile data collection app and satellite data, scientists will be able to predict whether a certain region is vulnerable to food shortages and malnutrition, say experts. By scanning Earth’s surface with microwave beams, researchers can measure the water content in soil. Comparing these measurements with extensive data sets obtained over the last few decades, it is possible to calculate whether the soil is sufficiently moist or whether there is danger of droughts. The method has now been tested in the Central African Republic.

Does drought lead to famine? A mobile app helps to collect information. Credit: Image courtesy of Vienna University of Technology, TU Vienna

With a mobile data collection app and satellite data, scientists will be able to predict whether a certain region is vulnerable to food shortages and malnutrition. The method has now been tested in the Central African Republic.

There are different possible causes for famine and malnutrition — not all of which are easy to foresee. Drought and crop failure can often be predicted by monitoring the weather and measuring soil moisture. But other risk factors, such as socio-economic problems or violent conflicts, can endanger food security too. For organizations such as Doctors without Borders / Médecins Sans Frontières (MSF), it is crucial to obtain information about vulnerable regions as soon as possible, so that they have a chance to provide help before it is too late.

Scientists from TU Wien in Vienna, Austria and the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria have now developed a way to monitor food security using a smartphone app, which combines weather and soil moisture data from satellites with crowd-sourced data on the vulnerability of the population, e.g. malnutrition and other relevant socioeconomic data. Tests in the Central African Republic have yielded promising results, which have now been published in the journal PLOS ONE.

Step One: Satellite Data

“For years, we have been working on methods of measuring soil moisture using satellite data,” says Markus Enenkel (TU Wien). By scanning Earth’s surface with microwave beams, researchers can measure the water content in soil. Comparing these measurements with extensive data sets obtained over the last few decades, it is possible to calculate whether the soil is sufficiently moist or whether there is danger of droughts. “This method works well and it provides us with very important information, but information about soil moisture deficits is not enough to estimate the danger of malnutrition,” says IIASA researcher Linda See. “We also need information about other factors that can affect the local food supply.” For example, political unrest may prevent people from farming, even if weather conditions are fine. Such problems can of course not be monitored from satellites, so the researchers had to find a way of collecting data directly in the most vulnerable regions.
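The satellite-based drought check described here amounts to comparing a current soil-moisture reading against its historical distribution; a minimal sketch with made-up numbers (the standardized-anomaly threshold of -1.5 is our assumption, not the project’s operational criterion):

```python
from statistics import mean, stdev

def moisture_anomaly(current, history):
    """Standardized anomaly of a soil-moisture reading versus past data."""
    return (current - mean(history)) / stdev(history)

# Hypothetical volumetric soil-moisture fractions from past decades:
history = [0.32, 0.28, 0.35, 0.30, 0.33, 0.29, 0.31, 0.34]

z = moisture_anomaly(0.21, history)
print("drought risk" if z < -1.5 else "normal")
```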

“Today, smartphones are available even in developing countries, and so we decided to develop an app, which we called SATIDA COLLECT, to help us collect the necessary data,” says IIASA-based app developer Mathias Karner. For a first test, the researchers chose the Central African Republic, one of the world’s most vulnerable countries, suffering from chronic poverty, violent conflicts, and weak disaster resilience. Local MSF staff were trained for a day and then collected data, conducting hundreds of interviews.

“How often do people eat? What are the current rates of malnutrition? Have any family members left the region recently, has anybody died? — We use the answers to these questions to statistically determine whether the region is in danger,” says Candela Lanusse, nutrition advisor from Doctors without Borders. “Sometimes all that people have left to eat is unripe fruit or the seeds they had stored for next year. Sometimes they have to sell their cattle, which may increase the chance of nutritional problems. This kind of behavior may indicate future problems, months before a large-scale crisis breaks out.”

A Map of Malnutrition Danger

The digital questionnaire of SATIDA COLLECT can be adapted to local eating habits, as the answers and the GPS coordinates of every assessment are stored locally on the phone. When an internet connection is available, the collected data are uploaded to a server and can be analyzed along with satellite-derived information about drought risk. In the end a map could be created, highlighting areas where the danger of malnutrition is high. For Doctors without Borders, such maps are extremely valuable. They help to plan future activities and provide help as soon as it is needed.

“Testing this tool in the Central African Republic was not easy,” says Markus Enenkel. “The political situation there is complicated. However, even under these circumstances we could show that our technology works. We were able to gather valuable information.” SATIDA COLLECT has the potential to become a powerful early warning tool. It may not be able to prevent crises, but it will at least help NGOs to mitigate their impacts via early intervention.

Story Source:

The above post is reprinted from materials provided by Vienna University of Technology, TU Vienna. Note: Materials may be edited for content and length.

Journal Reference:

  1. Markus Enenkel, Linda See, Mathias Karner, Mònica Álvarez, Edith Rogenhofer, Carme Baraldès-Vallverdú, Candela Lanusse, Núria Salse. Food Security Monitoring via Mobile Data Collection and Remote Sensing: Results from the Central African Republic. PLOS ONE, 2015; 10 (11): e0142030. DOI: 10.1371/journal.pone.0142030

Doubts over El Niño delay disaster preparedness (SciDev Net)


Image credit: Patrick Brown/Panos


Martín De Ambrosio

At a glance

  • The phenomenon’s effects are still unclear across the continent
  • There is no certainty, but standing idly by is not an option, says the Pan American Health Organization
  • There is 95 per cent scientific consensus on the likelihood of a strong El Niño

Disagreement among scientists over whether or not Central and South America will suffer a strong El Niño event is causing some delay in preparations, warn the main organizations working on the region’s climate.

Some South American researchers still have doubts about how the event is unfolding this year. This uncertainty affects officials and governments, which should act as soon as possible to prevent the worst scenarios, including deaths from natural disasters, the meteorological organizations argue.

Eduardo Zambrano, a researcher at the Centro de Investigación Internacional sobre el Fenómeno de El Niño (CIIFEN) in Ecuador, one of the regional centres of the World Meteorological Organization, says the problem is that the phenomenon’s effects have not yet been clear and evident across the continent.


“Even so, we can already speak of the extreme droughts in northeastern Brazil, Venezuela and the Caribbean,” he says, also mentioning the unusually heavy rains in Chile’s Atacama Desert since March and the floods in parts of Argentina, Uruguay and Paraguay.

El Niño peaks when a mass of water that is warm by the usual standards of the eastern Pacific Ocean moves from north to south and reaches the Peruvian and Ecuadorian coasts. This movement sets off cascading effects and wreaks havoc across the whole of Central and South America, turning arid highlands rainy while droughts strike the lowlands and storms batter the Caribbean.

But El Niño remains hard to predict because of its widely varying impacts. According to Zambrano, scientists expected El Niño last year, “when all the alarms went off, and then nothing very extraordinary happened, owing to a change in the direction of the winds.”

After that miss, many organizations preferred caution in order to avoid alarmism. “Some satellite images show us a very warm Pacific Ocean, one of the characteristics of El Niño,” says Willian Alva León, president of the Sociedad Meteorológica del Perú. But, he adds, this warm water is not moving southeast towards the Peruvian coast, as it would during an El Niño event.

Alva León believes the worst effects have already occurred this year, which would mean the phenomenon is in retreat. “El Niño has an energy limit, and I believe it has already been reached this year,” he says.

This disagreement among climate research institutions worries policymakers, who need clear guidance to begin the necessary preparations. Ciro Ugarte, regional advisor for Emergency Preparedness and Disaster Relief at the Pan American Health Organization, says it is imperative to act as if El Niño were indeed under way, to ensure the continent can cope with the possible consequences.

“Being prepared is important because it reduces the impact of the phenomenon, as well as of other diseases that are now epidemic,” he says.

To establish the probability of El Niño, some scientists use models that abstract data from reality and generate predictions. María Teresa Martínez, deputy director of meteorology at Colombia’s Instituto de Hidrología, Meteorología y Estudios Ambientales, notes that in March the most reliable models predicted a 50 to 60 per cent chance of an El Niño event. “El Niño is now developing strongly, moving from its formation stage towards its mature stage, which will be reached in December,” she says.

Ugarte admits there are no certainties, but says that for his organization “doing nothing is not an option.”

“As makers of prevention policy, what we have to do is rely on the consensus among scientists, and today that consensus says there is a 95 per cent chance of a strong or very strong El Niño event,” he says.

Warming could triple drought in the Amazon (Observatório do Clima)


Drought in Silves, Amazonas state, in 2005. Photo: Ana Cintia Gazzelli/WWF

Computer models suggest that the eastern Amazon, which contains most of the forest, would see more droughts, fires and tree death, while the west would become rainier.

Climate change could increase the frequency of both droughts and extreme rains in the Amazon before mid-century, combining with deforestation to cause massive tree death, fires and carbon emissions. That is the conclusion of an assessment of 35 climate models applied to the region, carried out by researchers from the US and Brazil.

According to the study, led by Philip Duffy of the Woods Hole Research Center (WHRC) in the US and Stanford University, the area affected by extreme droughts in the eastern Amazon, the region encompassing most of the Amazon, could triple by 2100. Paradoxically, the frequency of extremely rainy periods and the area subject to extreme rains tend to grow across the whole region after 2040, even in places where mean annual precipitation decreases.

The western Amazon, in particular Peru and Colombia, should instead see an increase in mean annual precipitation.

A shift in rainfall regime is a long-theorized effect of global warming. With more energy in the atmosphere and more water vapour, the result of greater evaporation from the oceans, climate extremes tend to be amplified. The rainy seasons (in the Amazon, the southern-hemisphere summer, which locals call “winter”) become shorter, but the rains fall more intensely.

The forest’s response to these changes, however, has been a matter of controversy among scientists. Studies from the 1990s proposed that the Amazon would react with widespread “savannization,” the die-off of large trees and the transformation of vast portions of the jungle into an impoverished savanna.

Other studies, however, suggested that the heat and the extra CO2 would have the opposite effect, making the trees grow more and fix more carbon, offsetting any drought losses. On average, then, the impact of global warming on the Amazon would be relatively small.

As it happens, the Amazon itself has given scientists hints of how it would react. In 2005, 2007 and 2010, the forest went through historic droughts. The result was widespread tree death and fires in primary forests across more than 85,000 square kilometres. Duffy’s group, which also includes Paulo Brando of IPAM (the Amazon Environmental Research Institute), estimates that 1% to 2% of the Amazon’s carbon was released into the atmosphere as a result of the droughts of the 2000s. Brando and colleagues at IPAM had already shown that the Amazon is becoming more flammable, probably owing to the combined effects of climate and deforestation.

The researchers simulated the region’s future climate using the models of the CMIP5 project, which the IPCC (Intergovernmental Panel on Climate Change) used in its latest global climate assessment report. One member of the group, Chris Field of Stanford, was a coordinator of that report; he was also a candidate for the IPCC presidency in the election held last week, losing to Korea’s Hoesung Lee.

The computer models were run under the worst-case emissions scenario, RCP 8.5, which assumes that little will be done to curb greenhouse-gas emissions.

Not only did the models capture well the influence of Atlantic and Pacific ocean temperatures on the Amazon’s rainfall pattern (differences between the two oceans explain why the eastern Amazon will become drier and the west wetter), they also reproduced, in their simulations of future drought, a feature of the record droughts of 2005 and 2010: the far north of the Amazon saw a large increase in rainfall while the centre and south were parched.

According to the researchers, the study may even be conservative, since it only took precipitation variations into account. “For example, rainfall in the eastern Amazon depends strongly on evapotranspiration, so a reduction in tree cover could reduce precipitation,” Duffy and Brando wrote. “This suggests that if processes related to land-use change were better represented in the CMIP5 models, drought intensity could be greater than projected here.”

The study was published in PNAS, the journal of the US National Academy of Sciences. (Observatório do Clima/ #Envolverde)

* Originally published on the Observatório do Clima website.

‘Targeted punishments’ against countries could tackle climate change (Science Daily)

August 25, 2015
University of Warwick
Targeted punishments could provide a path to international climate change cooperation, new research in game theory has found.

This is a diagram of two possible strategies of targeted punishment studied in the paper. Credit: Royal Society Open Science

Targeted punishments could provide a path to international climate change cooperation, new research in game theory has found.

Conducted at the University of Warwick, the research suggests that in situations such as climate change, where everyone would be better off if everyone cooperated but it may not be individually advantageous to do so, the use of a strategy called ‘targeted punishment’ could help shift society towards global cooperation.

Despite the name, the ‘targeted punishment’ mechanism can apply to positive or negative incentives. The research argues that the key factor is that these incentives are not necessarily applied to everyone who may seem to deserve them. Rather, rules should be devised according to which only a small number of players are considered responsible at any one time.

The study’s author Dr Samuel Johnson, from the University of Warwick’s Mathematics Institute, explains: “It is well known that some form of punishment, or positive incentives, can help maintain cooperation in situations where almost everyone is already cooperating, such as in a country with very little crime. But when there are only a few people cooperating and many more not doing so, punishment can be too dilute to have any effect. In this regard, the international community is a bit like a failed state.”

The paper, published in Royal Society Open Science, shows that in situations of entrenched defection (non-cooperation), there exist strategies of ‘targeted punishment’ available to would-be punishers which can allow them to move a community towards global cooperation.

“The idea,” said Dr Johnson, “is not to punish everyone who is defecting, but rather to devise a rule whereby only a small number of defectors are considered at fault at any one time. For example, if you want to get a group of people to cooperate on something, you might arrange them on an imaginary line and declare that a person is liable to be punished if and only if the person to their left is cooperating while they are not. This way, those people considered at fault will find themselves under a lot more pressure than if responsibility were distributed, and cooperation can build up gradually as each person decides to fall in line when the spotlight reaches them.”
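Dr Johnson’s line rule is easy to simulate; a minimal sketch assuming each player falls in line as soon as the rule makes them liable to punishment (the payoffs and update rule are our simplifications of the paper’s model):

```python
def step(states):
    """One round: a defector ('D') whose left neighbour cooperates ('C')
    is the only one liable to punishment, and falls in line."""
    new = list(states)
    for i in range(1, len(states)):
        if states[i - 1] == "C" and states[i] == "D":
            new[i] = "C"
    return new

# One cooperator at the head of the line, everyone else defecting.
states = ["C"] + ["D"] * 5
rounds = 0
while "D" in states:
    states = step(states)
    rounds += 1

print(rounds, states)  # cooperation spreads down the line one player per round
```

Because only one defector is liable at a time, the pressure on that player is concentrated rather than diluted, which is the mechanism’s central point.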

For the case of climate change, the paper suggests that countries should be divided into groups, and these groups placed in some order — ideally, roughly according to their natural tendencies to cooperate. Governments would make commitments (to reduce emissions or leave fossil fuels in the ground, for instance) conditional on the performance of the group before them. This way, any combination of sanctions and positive incentives that other countries might be willing to impose would have a much greater effect.

“In the mathematical model,” said Dr Johnson, “the mechanism works best if the players are somewhat irrational. It seems a reasonable assumption that this might apply to the international community.”

Journal Reference:

  1. Samuel Johnson. Escaping the Tragedy of the Commons through Targeted Punishment. Royal Society Open Science, 2015 [link]

The Point of No Return: Climate Change Nightmares Are Already Here (Rolling Stone)

The worst predicted impacts of climate change are starting to happen — and much faster than climate scientists expected

August 5, 2015


Walruses, like these in Alaska, are being forced ashore in record numbers. Corey Accardo/NOAA/AP 

Historians may look to 2015 as the year when shit really started hitting the fan. Some snapshots: In just the past few months, record-setting heat waves in Pakistan and India each killed more than 1,000 people. In Washington state’s Olympic National Park, the rainforest caught fire for the first time in living memory. London reached 98 degrees Fahrenheit during the hottest July day ever recorded in the U.K.; The Guardian briefly had to pause its live blog of the heat wave because its computer servers overheated. In California, suffering from its worst drought in a millennium, a 50-acre brush fire swelled seventyfold in a matter of hours, jumping across the I-15 freeway during rush-hour traffic. Then, a few days later, the region was pounded by intense, virtually unheard-of summer rains. Puerto Rico is under its strictest water rationing in history as a monster El Niño forms in the tropical Pacific Ocean, shifting weather patterns worldwide.

On July 20th, James Hansen, the former NASA climatologist who brought climate change to the public’s attention in the summer of 1988, issued a bombshell: He and a team of climate scientists had identified a newly important feedback mechanism off the coast of Antarctica that suggests mean sea levels could rise 10 times faster than previously predicted: 10 feet by 2065. The authors included this chilling warning: If emissions aren’t cut, “We conclude that multi-meter sea-level rise would become practically unavoidable. Social disruption and economic consequences of such large sea-level rise could be devastating. It is not difficult to imagine that conflicts arising from forced migrations and economic collapse might make the planet ungovernable, threatening the fabric of civilization.”

Eric Rignot, a climate scientist at NASA and the University of California-Irvine and a co-author on Hansen’s study, said their new research doesn’t necessarily change the worst-case scenario on sea-level rise; it just makes it much more pressing to think about and discuss, especially among world leaders. In particular, says Rignot, the new research shows a two-degree Celsius rise in global temperature — the previously agreed upon “safe” level of climate change — “would be a catastrophe for sea-level rise.”

Hansen’s new study also shows how complicated and unpredictable climate change can be. Even as global ocean temperatures rise to their highest levels in recorded history, some parts of the ocean, near where ice is melting exceptionally fast, are actually cooling, slowing ocean circulation currents and sending weather patterns into a frenzy. Sure enough, a persistently cold patch of ocean is starting to show up just south of Greenland, exactly where previous experimental predictions of a sudden surge of freshwater from melting ice expected it to be. Michael Mann, another prominent climate scientist, recently said of the unexpectedly sudden Atlantic slowdown, “This is yet another example of where observations suggest that climate model predictions may be too conservative when it comes to the pace at which certain aspects of climate change are proceeding.”

Since storm systems and jet streams in the United States and Europe partially draw their energy from the difference in ocean temperatures, the implication of one patch of ocean cooling while the rest of the ocean warms is profound. Storms will get stronger, and sea-level rise will accelerate. Scientists like Hansen only expect extreme weather to get worse in the years to come, though Mann said it was still “unclear” whether recent severe winters on the East Coast are connected to the phenomenon.

And yet, these aren’t even the most disturbing changes happening to the Earth’s biosphere that climate scientists are discovering this year. For that, you have to look not at the rising sea levels but to what is actually happening within the oceans themselves.

Water temperatures this year in the North Pacific have never been this high for this long over such a large area — and it is already having a profound effect on marine life.

Eighty-year-old Roger Thomas runs whale-watching trips out of San Francisco. On an excursion earlier this year, Thomas spotted 25 humpbacks and three blue whales. During a survey on July 4th, federal officials spotted 115 whales in a single hour near the Farallon Islands — enough to issue a boating warning. Humpbacks are occasionally seen offshore in California, but rarely so close to the coast or in such numbers. Why are they coming so close to shore? Exceptionally warm water has concentrated the krill and anchovies they feed on into a narrow band of relatively cool coastal water. The whales are having a heyday. “It’s unbelievable,” Thomas told a local paper. “Whales are all over the place.”

Last fall, in northern Alaska, in the same part of the Arctic where Shell is planning to drill for oil, federal scientists discovered 35,000 walruses congregating on a single beach. It was the largest-ever documented “haul out” of walruses, and a sign that sea ice, their favored habitat, is becoming harder and harder to find.

Marine life is moving north, adapting in real time to the warming ocean. Great white sharks have been sighted breeding near Monterey Bay, California, the farthest north that’s ever been known to occur. A blue marlin was caught last summer near Catalina Island — 1,000 miles north of its typical range. Across California, there have been sightings of non-native animals moving north, such as Mexican red crabs.


Salmon on the brink of dying out. Michael Quinton/Newscom

No species may be as uniquely endangered as the one most associated with the Pacific Northwest, the salmon. Every two weeks, Bill Peterson, an oceanographer and senior scientist at the National Oceanic and Atmospheric Administration’s Northwest Fisheries Science Center in Oregon, takes to the sea to collect data he uses to forecast the return of salmon. What he’s been seeing this year is deeply troubling.

Salmon are crucial to their coastal ecosystem like perhaps few other species on the planet. A significant portion of the nitrogen in West Coast forests has been traced back to salmon, which can travel hundreds of miles upstream to lay their eggs. The largest trees on Earth simply wouldn’t exist without salmon.

But their situation is precarious. This year, officials in California are bringing salmon downstream in convoys of trucks, because river levels are too low and the temperatures too warm for them to have a reasonable chance of surviving. One species, the winter-run Chinook salmon, is at particular risk of decline in the next few years, should the warm water persist offshore.

“You talk to fishermen, and they all say: ‘We’ve never seen anything like this before,’ ” says Peterson. “So when you have no experience with something like this, it gets like, ‘What the hell’s going on?’ ”

Atmospheric scientists increasingly believe that the exceptionally warm waters over the past months are the early indications of a phase shift in the Pacific Decadal Oscillation, a cyclical warming of the North Pacific that happens a few times each century. Positive phases of the PDO have been known to last for 15 to 20 years, during which global warming can proceed at double the rate seen during negative phases. It also makes big El Niños, like this year’s, more likely. The nature of PDO phase shifts is unpredictable — climate scientists simply haven’t yet figured out precisely what’s behind them and why they happen when they do. It’s not a permanent change — the ocean’s temperature will likely drop from these record highs, at least temporarily, some time over the next few years — but the impact on marine species will be lasting, and scientists have pointed to the PDO as a global-warming preview.

“The climate [change] models predict this gentle, slow increase in temperature,” says Peterson, “but the main problem we’ve had for the last few years is the variability is so high. As scientists, we can’t keep up with it, and neither can the animals.” Peterson likens it to a boxer getting pummeled round after round: “At some point, you knock them down, and the fight is over.”


Pavement-melting heat waves in India. Harish Tyagi/EPA/Corbis

Attendant with this weird wildlife behavior is a stunning drop in the number of plankton — the basis of the ocean’s food chain. In July, another major study concluded that acidifying oceans are likely to have a “quite traumatic” impact on plankton diversity, with some species dying out while others flourish. As the oceans absorb carbon dioxide from the atmosphere, it’s converted into carbonic acid — and the pH of seawater declines. According to lead author Stephanie Dutkiewicz of MIT, that trend means “the whole food chain is going to be different.”
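Because pH is a logarithmic scale, a small numerical decline corresponds to a large relative increase in hydrogen-ion concentration. A quick back-of-the-envelope check, using the widely cited fall in mean surface-ocean pH from about 8.2 (pre-industrial) to about 8.1 today (figures not from this article):

```python
def acidity_increase(ph_before, ph_after):
    """Relative increase in hydrogen-ion concentration [H+]
    implied by a decline in pH (pH = -log10 of [H+])."""
    return 10 ** (ph_before - ph_after) - 1

# A drop of just 0.1 pH units is roughly a 26% rise in acidity.
print(f"{acidity_increase(8.2, 8.1):.0%}")
```

This is why oceanographers describe a seemingly tiny pH shift as a dramatic change in ocean chemistry.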

The Hansen study may have gotten more attention, but the Dutkiewicz study, and others like it, could have even more dire implications for our future. The rapid changes Dutkiewicz and her colleagues are observing have shocked some of their fellow scientists into thinking that yes, actually, we’re heading toward the worst-case scenario. Unlike a prediction of massive sea-level rise just decades away, the warming and acidifying oceans represent a problem that seems to have kick-started a mass extinction on the same time scale.

Jacquelyn Gill is a paleoecologist at the University of Maine. She knows a lot about extinction, and her work is more relevant than ever. Essentially, she’s trying to save the species that are alive right now by learning more about what killed off the ones that aren’t. The ancient data she studies shows “really compelling evidence that there can be events of abrupt climate change that can happen well within human life spans. We’re talking less than a decade.”

For the past year or two, a persistent change in winds over the North Pacific has given rise to what meteorologists and oceanographers are calling “the blob” — a highly anomalous patch of warm water between Hawaii, Alaska and Baja California that’s thrown the marine ecosystem into a tailspin. Amid warmer temperatures, plankton numbers have plummeted, and the myriad species that depend on them have migrated or seen their own numbers dwindle.

Significant northward surges of warm water have happened before, even frequently. El Niño, for example, does this on a predictable basis. But what’s happening this year appears to be something new. Some climate scientists think that the wind shift is linked to the rapid decline in Arctic sea ice over the past few years, which separate research has shown makes weather patterns more likely to get stuck.

A similar shift in the behavior of the jet stream has also contributed to the California drought and severe polar vortex winters in the Northeast over the past two years. An amplified jet-stream pattern has produced unusual doldrums off the West Coast that have persisted for most of the past 18 months. Daniel Swain, a Stanford University meteorologist, has called it the “Ridiculously Resilient Ridge” — weather patterns just aren’t supposed to last this long.

What’s increasingly uncontroversial among scientists is that in many ecosystems, the impacts of the current off-the-charts temperatures in the North Pacific will linger for years, or longer. The largest ocean on Earth, the Pacific is exhibiting cyclical variability to greater extremes than other ocean basins. While the North Pacific is currently the most dramatic area of change in the world’s oceans, it’s not alone: Globally, 2014 was a record-setting year for ocean temperatures, and 2015 is on pace to beat it soundly, boosted by the El Niño in the Pacific. Six percent of the world’s reefs could disappear before the end of the decade, perhaps permanently, thanks to warming waters.

Since warmer water expands in volume, ocean warming is also driving a surge in sea-level rise. One recent study showed a slowdown in Atlantic Ocean currents, perhaps linked to glacial melt from Greenland, that caused a four-inch rise in sea levels along the Northeast coast in just two years, from 2009 to 2010. To be sure, it seems this sudden and unpredicted surge was only temporary, but scientists who studied it estimated it to be a 1-in-850-year event, and it has been blamed for accelerated beach erosion “almost as significant as some hurricane events.”


Biblical floods in Turkey. Ali Atmaca/Anadolu Agency/Getty

Possibly worse than rising ocean temperatures is the acidification of the waters. Acidification has a direct effect on mollusks and other marine animals with hard outer bodies: A striking study last year showed that, along the West Coast, the shells of tiny snails are already dissolving, with as-yet-unknown consequences on the ecosystem. One of the study’s authors, Nina Bednaršek, told Science magazine that the snails’ shells, pitted by the acidifying ocean, resembled “cauliflower” or “sandpaper.” A similarly striking study by more than a dozen of the world’s top ocean scientists this July said that the current pace of increasing carbon emissions would force an “effectively irreversible” change on ocean ecosystems during this century. In as little as a decade, the study suggested, chemical changes will rise significantly above background levels in nearly half of the world’s oceans.

“I used to think it was kind of hard to make things in the ocean go extinct,” James Barry of the Monterey Bay Aquarium Research Institute in California told the Seattle Times in 2013. “But this change we’re seeing is happening so fast it’s almost instantaneous.”

Thanks to the pressure we’re putting on the planet’s ecosystem — warming, acidification and good old-fashioned pollution — the oceans are set up for several decades of rapid change. Here’s what could happen next.

The combination of excessive nutrients from agricultural runoff, abnormal wind patterns and the warming oceans is already creating seasonal dead zones in coastal regions, as decomposing algae blooms consume most of the available oxygen. The appearance of low-oxygen regions has doubled in frequency every 10 years since 1960 and should continue to grow over the coming decades at an even greater rate.

So far, dead zones have remained mostly close to the coasts, but in the 21st century, deep-ocean dead zones could become common. These low-oxygen regions could gradually expand in size — potentially thousands of miles across — which would force fish, whales, pretty much everything upward. If this were to occur, large sections of the temperate deep oceans would suffer should the oxygen-free layer grow so pronounced that it stratifies, pushing surface ocean warming into overdrive and hindering upwelling of cooler, nutrient-rich deeper water.

Enhanced evaporation from the warmer oceans will create heavier downpours, perhaps destabilizing the root systems of forests, and accelerated runoff will pour more excess nutrients into coastal areas, further enhancing dead zones. In the past year, downpours have broken records in Long Island, Phoenix, Detroit, Baltimore, Houston and Pensacola, Florida.

Evidence for the above scenario comes in large part from our best understanding of what happened 250 million years ago, during the “Great Dying,” when more than 90 percent of all oceanic species perished after a pulse of carbon dioxide and methane from land-based sources began a period of profound climate change. The conditions that triggered the “Great Dying” took hundreds of thousands of years to develop. But humans have been emitting carbon dioxide at a much quicker rate, so the current mass extinction took only 100 years or so to kick-start.

With all these stressors working against it, a hypoxic feedback loop could wind up destroying some of the oceans’ most species-rich ecosystems within our lifetime. A recent study by Sarah Moffitt of the University of California-Davis said it could take the ocean thousands of years to recover. “Looking forward for my kid, people in the future are not going to have the same ocean that I have today,” Moffitt said.

As you might expect, having tickets to the front row of a global environmental catastrophe is taking an increasingly emotional toll on scientists, and in some cases pushing them toward advocacy. Of the two dozen or so scientists I interviewed for this piece, virtually all drifted into apocalyptic language at some point.

For Simone Alin, an oceanographer focusing on ocean acidification at NOAA’s Pacific Marine Environmental Laboratory in Seattle, the changes she’s seeing hit close to home. The Puget Sound is a natural laboratory for the coming decades of rapid change because its waters are naturally more acidified than most of the world’s marine ecosystems.

The local oyster industry here is already seeing serious impacts from acidifying waters and is going to great lengths to avoid a total collapse. Alin calls oysters, which are non-native, the canary in the coal mine for the Puget Sound: “A canary is also not native to a coal mine, but that doesn’t mean it’s not a good indicator of change.”

Though she works on fundamental oceanic changes every day, the Dutkiewicz study on the impending large-scale changes to plankton caught her off-guard: “This was alarming to me because if the basis of the food web changes, then . . . everything could change, right?”

Alin’s frank discussion of the looming oceanic apocalypse is perhaps a product of studying unfathomable change every day. But four years ago, the birth of her twins “heightened the whole issue,” she says. “I was worried enough about these problems before having kids that I maybe wondered whether it was a good idea. Now, it just makes me feel crushed.”

Katharine Hayhoe

Katharine Hayhoe speaks about climate change to students and faculty at Wayland Baptist University in 2011. Geoffrey McAllister/Chicago Tribune/MCT/Getty

Katharine Hayhoe, a climate scientist and evangelical Christian, moved from Canada to Texas with her husband, a pastor, precisely because of its vulnerability to climate change. There, she engages with the evangelical community on science — almost as a missionary would. But she’s already planning her exit strategy: “If we continue on our current pathway, Canada will be home for us long term. But the majority of people don’t have an exit strategy. . . . So that’s who I’m here trying to help.”

James Hansen, the dean of climate scientists, retired from NASA in 2013 to become a climate activist. But for all the gloom of the report he just put his name to, Hansen is actually somewhat hopeful. That’s because he knows that climate change has a straightforward solution: End fossil-fuel use as quickly as possible. If, tomorrow, the leaders of the United States and China agreed to a sufficiently strong, coordinated carbon tax that’s also applied to imports, the rest of the world would have no choice but to sign up. This idea has already been pitched to Congress several times, with tepid bipartisan support. Even though a carbon tax is probably a long shot, for Hansen, even the slim possibility that bold action like this might happen is enough for him to devote the rest of his life to working to achieve it. On a conference call with reporters in July, Hansen said a potential joint U.S.-China carbon tax is more important than whatever happens at the United Nations climate talks in Paris.

One group Hansen is helping is Our Children’s Trust, a legal advocacy organization that’s filed a number of novel challenges on behalf of minors under the idea that climate change is a violation of intergenerational equity — children, the group argues, are lawfully entitled to inherit a healthy planet.

A separate challenge to U.S. law is being brought by a former EPA scientist arguing that carbon dioxide isn’t just a pollutant (which, under the Clean Air Act, can dissipate on its own), it’s also a toxic substance. In general, these substances have exceptionally long life spans in the environment, cause an unreasonable risk, and therefore require remediation. In this case, remediation may involve planting vast numbers of trees or restoring wetlands to bury excess carbon underground.

Even if these novel challenges succeed, it will take years before a bend in the curve is noticeable. But maybe that’s enough. When all feels lost, saving a few species will feel like a triumph.

From The Archives Issue 1241: August 13, 2015


Stop burning fossil fuels now: there is no CO2 ‘technofix’, scientists warn (The Guardian)

Researchers have demonstrated that even if a geoengineering solution to CO2 emissions could be found, it wouldn’t be enough to save the oceans

“The chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said the report’s co-author, Hans Joachim Schellnhuber. Photograph: Doug Perrine/Design Pics/Corbis

German researchers have demonstrated once again that the best way to limit climate change is to stop burning fossil fuels now.

In a “thought experiment” they tried another option: the future dramatic removal of huge volumes of carbon dioxide from the atmosphere. This would, they concluded, return the atmosphere to the greenhouse gas concentrations that existed for most of human history – but it wouldn’t save the oceans.

That is, the oceans would stay warmer, and more acidic, for thousands of years, and the consequences for marine life could be catastrophic.

The research, published in Nature Climate Change today, delivers yet another demonstration that there is so far no feasible “technofix” that would allow humans to go on mining and drilling for coal, oil and gas (known as the “business as usual” scenario), and then geoengineer a solution when climate change becomes calamitous.

Sabine Mathesius (of the Helmholtz Centre for Ocean Research in Kiel and the Potsdam Institute for Climate Impact Research) and colleagues decided to model what could be done with an as-yet-unproven technology called carbon dioxide removal. One example would be to grow huge numbers of trees, burn them, trap the carbon dioxide, compress it and bury it somewhere. Nobody knows if this can be done, but Dr Mathesius and her fellow scientists didn’t worry about that.

They calculated that it might plausibly be possible to remove carbon dioxide from the atmosphere at the rate of 90 billion tons a year. This is twice what is spilled into the air from factory chimneys and motor exhausts right now.

The scientists hypothesised a world that went on burning fossil fuels at an accelerating rate – and then adopted an as-yet-unproven high technology carbon dioxide removal technique.

“Interestingly, it turns out that after ‘business as usual’ until 2150, even taking such enormous amounts of CO2 from the atmosphere wouldn’t help the deep ocean that much – after the acidified water has been transported by large-scale ocean circulation to great depths, it is out of reach for many centuries, no matter how much CO2 is removed from the atmosphere,” said a co-author, Ken Caldeira, who is normally based at the Carnegie Institution in the US.

The oceans cover 70% of the globe. By 2500, ocean surface temperatures would have increased by 5C (9F) and the chemistry of the ocean waters would have shifted towards levels of acidity that would make it difficult for fish and shellfish to flourish. Warmer waters hold less dissolved oxygen. Ocean currents, too, would probably change.

But while change happens in the atmosphere over tens of years, change in the ocean surface takes centuries, and in the deep oceans, millennia. So even if atmospheric temperatures were restored to pre-Industrial Revolution levels, the oceans would continue to experience climatic catastrophe.

“In the deep ocean, the chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said co-author Hans Joachim Schellnhuber, who directs the Potsdam Institute. “If we do not implement emissions reduction measures in line with the 2C (3.6F) target in time, we will not be able to preserve ocean life as we know it.”

Climate Seer James Hansen Issues His Direst Forecast Yet (The Daily Beast) + other sources, and repercussions

A polar bear walks in the snow near the Hudson Bay waiting for the bay to freeze, 13 November 2007, outside Churchill, Manitoba, Canada. Polar bears return to Churchill, the polar bear capital of the world, to hunt for seals on the icepack every year at this time and remain on the icepack feeding on seals until the spring thaw. AFP PHOTO/Paul J. Richards (Photo credit should read PAUL J. RICHARDS/AFP/Getty Images)

Paul J Richards/AFP/Getty

Mark Hertsgaard 

07.20.15 1:00 AM ET

James Hansen’s new study explodes conventional goals of climate diplomacy and warns of 10 feet of sea level rise before 2100. The good news is, we can fix it.

James Hansen, the former NASA scientist whose congressional testimony put global warming on the world’s agenda a quarter-century ago, is now warning that humanity could confront “sea level rise of several meters” before the end of the century unless greenhouse gas emissions are slashed much faster than currently contemplated. This roughly 10 feet of sea level rise — well beyond previous estimates — would render coastal cities such as New York, London, and Shanghai uninhabitable. “Parts of [our coastal cities] would still be sticking above the water,” Hansen says, “but you couldn’t live there.”

James Hansen

Columbia University

This apocalyptic scenario illustrates why the goal of limiting temperature rise to 2 degrees Celsius is not the safe “guardrail” most politicians and media coverage imply it is, argue Hansen and 16 colleagues in a blockbuster study they are publishing this week in the peer-reviewed journal Atmospheric Chemistry and Physics. On the contrary, a 2 C future would be “highly dangerous.”

If Hansen is right—and he has been right, sooner, about the big issues in climate science longer than anyone—the implications are vast and profound.

Physically, Hansen’s findings mean that Earth’s ice is melting and its seas are rising much faster than expected. Other scientists have offered less extreme findings; the United Nations Intergovernmental Panel on Climate Change (IPCC) has projected closer to 3 feet of sea level rise by the end of the century, an amount experts say will be difficult enough to cope with. (Three feet of sea level rise would put runways of all three New York City-area airports underwater unless protective barriers were erected. The same holds for airports in the San Francisco Bay Area.)

Worldwide, approximately $3 trillion worth of infrastructure vital to civilization, such as water treatment plants, power stations, and highways, is located at or below 3 feet above sea level, according to the Stern Review, a comprehensive analysis published by the British government.

Hansen’s track record commands respect. From the time the soft-spoken Iowan told the U.S. Senate in 1988 that man-made global warming was no longer a theory but had in fact begun and threatened unparalleled disaster, he has consistently been ahead of the scientific curve.

Hansen has long suspected that computer models underestimated how sensitive Earth’s ice sheets were to rising temperatures. Indeed, the IPCC excluded ice sheet melt altogether from its calculations of sea level rise. For their study, Hansen and his colleagues combined ancient paleo-climate data with new satellite readings and an improved model of the climate system to demonstrate that ice sheets can melt at a “non-linear” rate: rather than an incremental melting as Earth’s poles inexorably warm, ice sheets might melt at exponential rates, shedding dangerous amounts of mass in a matter of decades, not millennia. In fact, current observations indicate that some ice sheets already are melting this rapidly.

“Prior to this paper I suspected that to be the case,” Hansen told The Daily Beast. “Now we have evidence to make that statement based on much more than suspicion.”
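The difference between incremental and “non-linear” melt can be made concrete with a toy calculation. The starting rate and doubling time below are illustrative assumptions, not values from Hansen et al.’s study:

```python
def cumulative_rise_mm(initial_rate_mm=1.0, doubling_time=10, years=50):
    """Cumulative sea-level contribution (mm) from a melt rate that
    doubles every `doubling_time` years, summed year by year."""
    total, rate = 0.0, initial_rate_mm
    growth = 2 ** (1 / doubling_time)  # per-year growth factor
    for _ in range(years):
        total += rate
        rate *= growth
    return total

linear = 1.0 * 50                   # constant 1 mm/yr for 50 years -> 50 mm
exponential = cumulative_rise_mm()  # same starting rate, doubling every decade
print(round(linear), round(exponential))
```

With identical starting rates, the decade-doubling scenario yields roughly 430 mm after 50 years versus 50 mm for the linear one, and shorter doubling times compound far faster still — which is why the doubling-versus-incremental question dominates the sea-level debate.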

Politically, Hansen’s new projections amount to a huge headache for diplomats, activists, and anyone else hoping that a much-anticipated global climate summit the United Nations is convening in Paris in December will put the world on a safe path. President Barack Obama and other world leaders must now reckon with the possibility that the 2 degrees goal they affirmed at the Copenhagen summit in 2009 is actually a recipe for catastrophe. In effect, Hansen’s study explodes what has long been the goal of conventional climate diplomacy.

More troubling, honoring even the conventional 2 degrees C target has so far proven extremely challenging on political and economic grounds. Current emission trajectories put the world on track towards a staggering 4 degrees of warming before the end of the century, an amount almost certainly beyond civilization’s coping capacity. In preparation for the Paris summit, governments have begun announcing commitments to reduce emissions, but to date these commitments are falling well short of satisfying the 2 degrees goal. Now, factor in the possibility that even 2 degrees is too much and many negotiators may be tempted to throw up their hands in despair.

They shouldn’t. New climate science brings good news as well as bad. Humanity can limit temperature rise to 1.5 degrees C if it so chooses, according to a little-noticed study by experts at the Potsdam Institute for Climate Impact Research (now perhaps the world’s foremost climate research center) and the International Institute for Applied Systems Analysis, published in Nature Climate Change in May.

“Actions for returning global warming to below 1.5 degrees Celsius by 2100 are in many ways similar to those limiting warming to below 2 degrees Celsius,” said Joeri Rogelj, a lead author of the study. “However … emission reductions need to scale up swiftly in the next decades.” And there’s a significant catch: Even this relatively optimistic study concludes that it’s too late to prevent global temperature rising by 2 degrees C. But this overshoot of the 2 C target can be made temporary, the study argues; the total increase can be brought back down to 1.5 C later in the century.

Besides the faster emissions reductions Rogelj referenced, two additional tools are essential, the study outlines. Energy efficiency—shifting to less wasteful lighting, appliances, vehicles, building materials and the like—is already the cheapest, fastest way to reduce emissions. Improved efficiency has made great progress in recent years but will have to accelerate, especially in emerging economies such as China and India.

Also necessary will be breakthroughs in so-called “carbon negative” technologies. Call it the photosynthesis option: because plants inhale carbon dioxide and store it in their roots, stems, and leaves, one can remove carbon from the atmosphere by growing trees, planting cover crops, burying charred plant materials underground, and other kindred methods. In effect, carbon negative technologies can turn back the clock on global warming, making the aforementioned descent from the 2 C overshoot to the 1.5 C goal later in this century theoretically possible. Carbon-negative technologies thus far remain unproven at the scale needed, however; more research and deployment is required, according to the study.

Together, the Nature Climate Change study and Hansen’s new paper give credence to the many developing nations and climate justice advocates who have called for more ambitious action. The authors of the Nature Climate Change study point out that the 1.5 degrees goal “is supported by more than 100 countries worldwide, including those most vulnerable to climate change.” In May, the governments of 20 of those countries, including the Philippines, Costa Rica, Kenya, and Bangladesh, declared the 2 degrees target “inadequate” and called for governments to “reconsider” it in Paris.

Hansen too is confident that the world “could actually come in well under 2 degrees, if we make the price of fossil fuels honest.”

That means making the market price of gasoline and other products derived from fossil fuels reflect the enormous costs that burning those fuels currently externalizes onto society as a whole. Economists from left to right have advocated achieving this by putting a rising fee or tax on fossil fuels. This would give businesses, governments, and other consumers an incentive to shift to non-carbon fuels such as solar, wind, nuclear, and, best of all, increased energy efficiency. (The cheapest and cleanest fuel is the fuel you don’t burn in the first place.)

But putting a fee on fossil fuels will raise their price to consumers, threatening individual budgets and broader economic prospects, as opponents will surely point out. Nevertheless, higher prices for carbon-based fuels need not have injurious economic effects if the fees driving those higher prices are returned to the public to spend as it wishes. It’s been done that way for years with great success in Alaska, where all residents receive an annual check in compensation for the impact the Alaskan oil pipeline has on the state.

“Tax Pollution, Pay People” is the bumper sticker summary coined by activists at the Citizens Climate Lobby. Legislation to this effect has been introduced in both houses of the U.S. Congress.

Meanwhile, there are also a host of other reasons to believe it’s not too late to preserve a livable climate for young people and future generations.

The transition away from fossil fuels has begun and is gaining speed and legitimacy. In 2014, global greenhouse gas emissions remained flat even as the world economy grew—a first. There has been a spectacular boom in wind and solar energy, including in developing countries, as their prices plummet. These technologies now qualify as a “disruptive” economic force that promises further breakthroughs, said Achim Steiner, executive director of the UN Environment Programme.

Coal, the most carbon-intensive conventional fossil fuel, is in a death spiral, partly thanks to another piece of encouraging news: the historic climate agreement the U.S. and China reached last November, which envisions both nations slashing coal consumption (as China is already doing). Hammering another nail into coal’s coffin, the leaders of Great Britain’s three main political parties pledged to phase out coal, no matter who won the general elections last May.

“If you look at the long-term [for coal], it’s not getting any better,” said Standard & Poor’s Aneesh Prabhu when S&P downgraded coal company bonds to junk status. “It’s a secular decline,” not a mere cyclical downturn.

Last but not least, a vibrant mass movement has arisen to fight climate change, most visibly manifested when hundreds of thousands of people thronged the streets of New York City last September, demanding action from global leaders gathered at the UN. The rally was impressive enough that it led oil and gas giant ExxonMobil to increase its internal estimate of how likely the U.S. government is to take strong action. “That many people marching is clearly going to put pressure on government to do something,” an ExxonMobil spokesman told Bloomberg Businessweek.

The climate challenge has long amounted to a race between the imperatives of science and the contingencies of politics. With Hansen’s paper, the science has gotten harsher, even as the Nature Climate Change study affirms that humanity can still choose life, if it will. The question now is how the politics will respond—now, at Paris in December, and beyond.

Mark Hertsgaard has reported on politics, culture, and the environment from more than 20 countries and written six books, including “HOT: Living Through the Next Fifty Years on Earth.”

*   *   *

Experts make dire prediction about sea levels (CBS)


In the future, there could be major flooding along every coast. So says a new study that warns the world’s seas are rising.

Ever-warming oceans that are melting polar ice could raise sea levels 15 feet in the next 50 to 100 years, NASA’s former climate chief now says. That’s five times higher than previous predictions.

“This is the biggest threat the planet faces,” said James Hansen, the co-author of the new journal article raising that alarm scenario.

“If we get sea level rise of several meters, all coastal cities become dysfunctional,” he said. “The implications of this are just incalculable.”

If ocean levels rise just 10 feet, areas like Miami, Boston, Seattle and New York City would face flooding.

The melting ice would cool ocean surfaces at the poles even more, even as the overall climate continues to warm. The temperature difference would fuel even more volatile weather.

“As the atmosphere gets warmer and there’s more water vapor, that’s going to drive stronger thunderstorms, stronger hurricanes, stronger tornadoes, because they all get their energy from the water vapor,” said Hansen.

Nearly a decade ago, Hansen told “60 Minutes” we had 10 years to get global warming under control, or we would reach a “tipping point.”

“It will be a situation that is out of our control,” he said. “We’re essentially at the edge of that. That’s why this year is a critical year.”

Critical because of a United Nations meeting in Paris that is designed to reach legally binding agreements on carbon emissions, the greenhouse gases that drive global warming.

*   *   *

Sea Levels Could Rise Much Faster than Thought (Climate Denial Crock of the Week)

with Peter Sinclair | July 21, 2015

Washington Post:

James Hansen has often been out ahead of his scientific colleagues.

With his 1988 congressional testimony, the then-NASA scientist is credited with putting the global warming issue on the map by saying that a warming trend had already begun. “It is time to stop waffling so much and say that the evidence is pretty strong that the greenhouse effect is here,” Hansen famously testified.

Now Hansen — who retired in 2013 from his NASA post, and is currently an adjunct professor at Columbia University’s Earth Institute — is publishing what he says may be his most important paper. Along with 16 other researchers — including leading experts on the Greenland and Antarctic ice sheets — he has authored a lengthy study outlining a scenario of potentially rapid sea level rise combined with more intense storm systems.

It’s an alarming picture of where the planet could be headed — and hard to ignore, given its author. But it may also meet with considerable skepticism in the broader scientific community, given that its scenarios of sea level rise occur more rapidly than those ratified by the United Nations’ Intergovernmental Panel on Climate Change in its latest assessment of the state of climate science, published in 2013.

In the new study, Hansen and his colleagues suggest that the “doubling time” for ice loss from West Antarctica — the time period over which the amount of loss could double — could be as short as 10 years. In other words, a non-linear process could be at work, triggering major sea level rise in a time frame of 50 to 200 years. By contrast, Hansen and colleagues note, the IPCC assumed more of a linear process, suggesting only around 1 meter of sea level rise, at most, by 2100.
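The difference between the two growth assumptions is easy to make concrete. The sketch below (with a hypothetical 1 mm/yr starting contribution, not a figure from the paper) compares a constant melt rate with one that doubles every 10 years:

```python
def exponential_loss(initial_rate_mm_per_yr, doubling_time_yr, years):
    """Cumulative sea level contribution (mm) if the melt rate
    doubles every `doubling_time_yr` years."""
    total = 0.0
    rate = initial_rate_mm_per_yr
    for _ in range(years):
        total += rate
        rate *= 2 ** (1 / doubling_time_yr)  # compound the rate each year
    return total

def linear_loss(initial_rate_mm_per_yr, years):
    """Cumulative contribution if the melt rate stays constant."""
    return initial_rate_mm_per_yr * years

# Hypothetical starting point: a 1 mm/yr contribution today.
print(linear_loss(1.0, 50))                   # 50 mm after 50 years
print(round(exponential_loss(1.0, 10, 50)))   # ~432 mm from the same start
```

Under a constant doubling time the same starting rate yields roughly an order of magnitude more sea level rise over 50 years, which is why the linear-versus-nonlinear question dominates the projections.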

Here, a clip from our extended interview with Eric Rignot in December of 2014.  Rignot is one of the co-authors of the new study.


The study—written by James Hansen, NASA’s former lead climate scientist, and 16 co-authors, many of whom are considered among the top in their fields—concludes that glaciers in Greenland and Antarctica will melt 10 times faster than previous consensus estimates, resulting in sea level rise of at least 10 feet in as little as 50 years. The study, which has not yet been peer reviewed, brings new importance to a feedback loop in the ocean near Antarctica that results in cooler freshwater from melting glaciers forcing warmer, saltier water underneath the ice sheets, speeding up the melting rate. Hansen, who is known for being alarmist and also right, acknowledges that his study implies change far beyond previous consensus estimates. In a conference call with reporters, he said he hoped the new findings would be “substantially more persuasive than anything previously published.” I certainly find them to be.

We conclude that continued high emissions will make multi-meter sea level rise practically unavoidable and likely to occur this century. Social disruption and economic consequences of such large sea level rise could be devastating. It is not difficult to imagine that conflicts arising from forced migrations and economic collapse might make the planet ungovernable, threatening the fabric of civilization.

The science of ice melt rates is advancing so fast, scientists have generally been reluctant to put a number to what is essentially an unpredictable, non-linear response of ice sheets to a steadily warming ocean. With Hansen’s new study, that changes in a dramatic way. One of the study’s co-authors is Eric Rignot, whose own study last year found that glacial melt from West Antarctica now appears to be “unstoppable.” Chris Mooney, writing for Mother Jones, called that study a “holy shit” moment for the climate.

Daily Beast:

New climate science brings good news as well as bad. Humanity can limit temperature rise to 1.5 degrees C if it so chooses, according to a little-noticed study by experts at the Potsdam Institute for Climate Impact Research (now perhaps the world’s foremost climate research center) and the International Institute for Applied Systems Analysis, published in Nature Climate Change in May.


“Actions for returning global warming to below 1.5 degrees Celsius by 2100 are in many ways similar to those limiting warming to below 2 degrees Celsius,” said Joeri Rogelj, a lead author of the study. “However … emission reductions need to scale up swiftly in the next decades.” And there’s a significant catch: Even this relatively optimistic study concludes that it’s too late to prevent global temperature rising by 2 degrees C. But this overshoot of the 2 C target can be made temporary, the study argues; the total increase can be brought back down to 1.5 C later in the century.

Besides the faster emissions reductions Rogelj referenced, two additional tools are essential, the study outlines. Energy efficiency—shifting to less wasteful lighting, appliances, vehicles, building materials and the like—is already the cheapest, fastest way to reduce emissions. Improved efficiency has made great progress in recent years but will have to accelerate, especially in emerging economies such as China and India.

Also necessary will be breakthroughs in so-called “carbon negative” technologies. Call it the photosynthesis option: because plants inhale carbon dioxide and store it in their roots, stems, and leaves, one can remove carbon from the atmosphere by growing trees, planting cover crops, burying charred plant materials underground, and other kindred methods. In effect, carbon negative technologies can turn back the clock on global warming, making the aforementioned descent from the 2 C overshoot to the 1.5 C goal later in this century theoretically possible. Carbon-negative technologies thus far remain unproven at the scale needed, however; more research and deployment is required, according to the study.

*   *   *

Earth’s Most Famous Climate Scientist Issues Bombshell Sea Level Warning (Slate)


Monday’s new study greatly increases the potential for catastrophic near-term sea level rise. Here, Miami Beach, among the most vulnerable cities to sea level rise in the world. Photo by Joe Raedle/Getty Images

In what may prove to be a turning point for political action on climate change, a breathtaking new study casts extreme doubt about the near-term stability of global sea levels.

The study—written by James Hansen, NASA’s former lead climate scientist, and 16 co-authors, many of whom are considered among the top in their fields—concludes that glaciers in Greenland and Antarctica will melt 10 times faster than previous consensus estimates, resulting in sea level rise of at least 10 feet in as little as 50 years. The study, which has not yet been peer-reviewed, brings new importance to a feedback loop in the ocean near Antarctica that results in cooler freshwater from melting glaciers forcing warmer, saltier water underneath the ice sheets, speeding up the melting rate. Hansen, who is known for being alarmist and also right, acknowledges that his study implies change far beyond previous consensus estimates. In a conference call with reporters, he said he hoped the new findings would be “substantially more persuasive than anything previously published.” I certainly find them to be.

To come to their findings, the authors used a mixture of paleoclimate records, computer models, and observations of current rates of sea level rise, but “the real world is moving somewhat faster than the model,” Hansen says.

Hansen’s study does not attempt to predict the precise timing of the feedback loop, only that it is “likely” to occur this century. The implications are mindboggling: In the study’s likely scenario, New York City—and every other coastal city on the planet—may only have a few more decades of habitability left. That dire prediction, in Hansen’s view, requires “emergency cooperation among nations.”

We conclude that continued high emissions will make multi-meter sea level rise practically unavoidable and likely to occur this century. Social disruption and economic consequences of such large sea level rise could be devastating. It is not difficult to imagine that conflicts arising from forced migrations and economic collapse might make the planet ungovernable, threatening the fabric of civilization.

The science of ice melt rates is advancing so fast, scientists have generally been reluctant to put a number to what is essentially an unpredictable, nonlinear response of ice sheets to a steadily warming ocean. With Hansen’s new study, that changes in a dramatic way. One of the study’s co-authors is Eric Rignot, whose own study last year found that glacial melt from West Antarctica now appears to be “unstoppable.” Chris Mooney, writing for Mother Jones, called that study a “holy shit” moment for the climate.

One necessary note of caution: Hansen’s study comes via a nontraditional publishing decision by its authors. The study will be published in Atmospheric Chemistry and Physics, an open-access “discussion” journal, and will not have formal peer review prior to its appearance online later this week. [Update, July 23: The paper is now available.] The complete discussion draft circulated to journalists was 66 pages long, and included more than 300 references. The peer review will take place in real time, with responses to the work by other scientists also published online. Hansen said this publishing timeline was necessary to make the work public as soon as possible before global negotiators meet in Paris later this year. Still, the lack of traditional peer review and the fact that this study’s results go far beyond what’s been previously published will likely bring increased scrutiny. On Twitter, Ruth Mottram, a climate scientist whose work focuses on Greenland and the Arctic, was skeptical of such enormous rates of near-term sea level rise, though she defended Hansen’s decision to publish in a nontraditional way.

In 2013, Hansen left his post at NASA to become a climate activist because, in his words, “as a government employee, you can’t testify against the government.” In a wide-ranging December 2013 study, conducted to support Our Children’s Trust, a group advancing legal challenges to lax greenhouse gas emissions policies on behalf of minors, Hansen called for a “human tipping point”—essentially, a social revolution—as one of the most effective ways of combating climate change, though he still favors a bilateral carbon tax agreed upon by the United States and China as the best near-term climate policy. In the new study, Hansen writes, “there is no morally defensible excuse to delay phase-out of fossil fuel emissions as rapidly as possible.”

Asked whether Hansen has plans to personally present the new research to world leaders, he said: “Yes, but I can’t talk about that today.” What’s still uncertain is whether, like with so many previous dire warnings, world leaders will be willing to listen.

*   *   *

Ice Melt, Sea Level Rise and Superstorms (Climate Sciences, Awareness and Solutions / Earth Institute, Columbia University)

23 July 2015

James Hansen

The paper “Ice melt, sea level rise and superstorms: evidence from paleoclimate data, climate modeling, and modern observations that 2°C global warming is highly dangerous” has been published in Atmospheric Chemistry and Physics Discussions and is freely available here.

The paper draws on a large body of work by the research community, as indicated by the 300 references. No doubt we missed some important relevant contributions, which we may be able to rectify in the final version of the paper. I thank all the researchers who provided data or information, many of whom I may have failed to include in the acknowledgments, as the work for the paper occurred over a period of several years.

I am especially grateful to the Durst family for a generous grant that allowed me to work full time this year on finishing the paper, as well as the other supporters of our program Climate Science, Awareness and Solutions at the Columbia University Earth Institute.

In the conceivable event that you do not read the full paper plus supplement, I include the Acknowledgments here:

Acknowledgments. Completion of this study was made possible by a generous gift from The Durst Family to the Climate Science, Awareness and Solutions program at the Columbia University Earth Institute. That program was initiated in 2013 primarily via support from the Grantham Foundation for Protection of the Environment, Jim and Krisann Miller, and Gerry Lenfest and sustained via their continuing support. Other substantial support has been provided by the Flora Family Foundation, Dennis Pence, the Skoll Global Threats Fund, Alexander Totic and Hugh Perrine. We thank Anders Carlson, Elsa Cortijo, Nil Irvali, Kurt Lambeck, Scott Lehman, and Ulysses Ninnemann for their kind provision of data and related information. Support for climate simulations was provided by the NASA High-End Computing (HEC) Program through the NASA Center for Climate Simulation (NCCS) at Goddard Space Flight Center.

Climate models are even more accurate than you thought (The Guardian)

The difference between modeled and observed global surface temperature changes is 38% smaller than previously thought

Looking across the frozen sea of Ullsfjord in Norway. Melting Arctic sea ice is one complicating factor in comparing modeled and observed surface temperatures. Photograph: Neale Clark/Robert Harding World Imagery/Corbis

Global climate models aren’t given nearly enough credit for their accurate global temperature change projections. As the 2014 IPCC report showed, observed global surface temperature changes have been within the range of climate model simulations.

Now a new study shows that the models were even more accurate than previously thought. In previous evaluations like the one done by the IPCC, climate model simulations of global surface air temperature were compared to global surface temperature observational records like HadCRUT4. However, over the oceans, HadCRUT4 uses sea surface temperatures rather than air temperatures.

A depiction of how global temperatures calculated from models use air temperatures above the ocean surface (right frame), while observations are based on the water temperature in the top few metres (left frame). Created by Kevin Cowtan.

Thus looking at modeled air temperatures and HadCRUT4 observations isn’t quite an apples-to-apples comparison for the oceans. As it turns out, sea surface temperatures haven’t been warming as fast as marine air temperatures, so this comparison introduces a bias that makes the observations look cooler than the model simulations. As lead author Kevin Cowtan told me,

We have highlighted the fact that the planet does not warm uniformly. Air temperatures warm faster than the oceans, air temperatures over land warm faster than global air temperatures. When you put a number on global warming, that number always depends on what you are measuring. And when you do a comparison, you need to ensure you are comparing the same things.

The model projections have generally reported global air temperatures. That’s quite helpful, because we generally live in the air rather than the water. The observations, by mixing air and water temperatures, are expected to slightly underestimate the warming of the atmosphere.

The new study addresses this problem by instead blending the modeled air temperatures over land with the modeled sea surface temperatures to allow for an apples-to-apples comparison. The authors also identified another challenging issue for these model-data comparisons in the Arctic. Over sea ice, surface air temperature measurements are used, but for open ocean, sea surface temperatures are used. As co-author Michael Mann notes, as Arctic sea ice continues to melt away, this is another factor that accurate model-data comparisons must account for.

One key complication that arises is that the observations typically extrapolate land temperatures over sea ice covered regions since the sea surface temperature is not accessible in that case. But the distribution of sea ice changes seasonally, and there is a long-term trend toward decreasing sea ice in many regions. So the observations actually represent a moving target.
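The blending logic described above can be sketched in a few lines. This is an illustrative toy, not the study’s code (the function and variable names are my own): a grid cell is sampled as air temperature over land and over sea ice, and as sea surface temperature over open water.

```python
def blend_cell(tas, tos, land_frac, ice_frac):
    """Blend modeled fields the way HadCRUT4-style observations sample them.

    tas       -- near-surface air temperature anomaly for the cell
    tos       -- sea surface temperature anomaly for the cell
    land_frac -- fraction of the cell that is land (0..1)
    ice_frac  -- fraction of the ocean part covered by sea ice (0..1)
    """
    ocean_frac = 1.0 - land_frac
    # Ice-covered ocean is sampled like land: via air temperature.
    air_like = land_frac + ocean_frac * ice_frac
    sst_like = ocean_frac * (1.0 - ice_frac)
    return air_like * tas + sst_like * tos

# As sea ice retreats (ice_frac falls), the same cell shifts from air to
# water temperatures -- the "moving target" Mann describes.
print(blend_cell(tas=1.2, tos=0.9, land_frac=0.0, ice_frac=1.0))  # 1.2 (all air)
print(blend_cell(tas=1.2, tos=0.9, land_frac=0.0, ice_frac=0.0))  # 0.9 (all SST)
```

Because air warms faster than the sea surface, a cell that switches from the second sampling to the first as ice melts records a jump that has nothing to do with the underlying trend, which is exactly the bias the study corrects for.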

A depiction of how as sea ice retreats, some grid cells change from taking air temperatures to taking water temperatures. If the two are not on the same scale, this introduces a bias. Created by Kevin Cowtan.

When accounting for these factors, the study finds that the difference between observed and modeled temperatures since 1975 is smaller than previously believed. The models had projected a 0.226°C per decade global surface air warming trend for 1975–2014 (and 0.212°C per decade over the geographic area covered by the HadCRUT4 record). However, when matching the HadCRUT4 methods for measuring sea surface temperatures, the modeled trend is reduced to 0.196°C per decade. The observed HadCRUT4 trend is 0.170°C per decade.
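Those trend figures are enough to reproduce the headline comparison; a quick back-of-the-envelope check, using only the numbers quoted above:

```python
# All trends in degrees C per decade, 1975-2014, as quoted in the article.
model_air   = 0.212  # modeled air temperatures over HadCRUT4's coverage
model_blend = 0.196  # same models, blended to match HadCRUT4's methods
observed    = 0.170  # HadCRUT4 observations

old_gap = model_air - observed    # gap under the old comparison: 0.042
new_gap = model_blend - observed  # gap under the blended comparison: 0.026

reduction = 1 - new_gap / old_gap
print(f"{reduction:.0%}")  # 38%
```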

So when doing an apples-to-apples comparison, the difference between modeled global temperature simulations and observations is 38% smaller than previous estimates. Additionally, as noted in a 2014 paper led by NASA GISS director Gavin Schmidt, less energy from the sun has reached the Earth’s surface than anticipated in these model simulations, both because solar activity declined more than expected, and volcanic activity was higher than expected. Ed Hawkins, another co-author of this study, wrote about this effect.

Combined, the apparent discrepancy between observations and simulations of global temperature over the past 15 years can be partly explained by the way the comparison is done (about a third), by the incorrect radiative forcings (about a third) and the rest is either due to climate variability or because the models are slightly over sensitive on average. But, the room for the latter effect is now much smaller.

Comparison of 84 climate model simulations (using RCP8.5) against HadCRUT4 observations (black), using either air temperatures (red line and shading) or blended temperatures using the HadCRUT4 method (blue line and shading). The upper panel shows anomalies derived from the unmodified climate model results, the lower shows the results adjusted to include the effect of updated forcings from Schmidt et al. (2014).

As Hawkins notes, the remaining discrepancy between modeled and observed temperatures may come down to climate variability; namely the fact that there has been a preponderance of La Niña events over the past decade, which have a short-term cooling influence on global surface temperatures. When there are more La Niñas, we expect temperatures to fall below the average model projection, and when there are more El Niños, we expect temperatures to be above the projection, as may be the case when 2015 breaks the temperature record.

We can’t predict changes in solar activity, volcanic eruptions, or natural ocean cycles ahead of time. If we want to evaluate the accuracy of long-term global warming model projections, we have to account for the difference between the simulated and observed changes in these factors. When the authors of this study did so, they found that climate models have very accurately projected the observed global surface warming trend.

In other words, as I discussed in my book and Denial101x lecture, climate models have proven themselves reliable in predicting long-term global surface temperature changes. In fact, even more reliable than I realized.

Denial101x climate science success stories lecture by Dana Nuccitelli.

There’s a common myth that models are unreliable, often based on apples-to-oranges comparisons, like looking at satellite estimates of temperatures higher in the atmosphere versus modeled surface air temperatures. Or, some contrarians like John Christy will only consider the temperature high in the atmosphere, where satellite estimates are less reliable, and where people don’t live.

This new study has shown that when we do an apples-to-apples comparison, climate models have done a good job projecting the observed temperatures where humans live. And those models predict that unless we take serious and immediate action to reduce human carbon pollution, global warming will continue to accelerate into dangerous territory.

New technique estimates crowd sizes by analysing mobile phone activity (BBC Brasil)

3 June 2015

Crowd at an airport | Photo: Getty

Researchers are seeking more efficient ways of measuring crowd sizes without relying on images

A study from a British university has developed a new way of estimating crowds at protests and other mass events: by analysing geographic data from mobile phones and Twitter.

Researchers at Warwick University, in England, analysed the geolocation of mobile phones and Twitter messages over a two-month period in Milan, Italy.

At two sites with known visitor numbers — a football stadium and an airport — activity on social networks and mobile phones rose and fell in step with the flow of people.

The team said that, using this technique, it can take measurements at events such as protests.

Other researchers stressed that this kind of data has limitations: only part of the population uses smartphones and Twitter, for example, and not every part of a venue is well served by phone masts.

But the study’s authors say the results are “an excellent starting point” for more such estimates — with greater precision — in the future.

“These numbers are calibration examples we can build on,” said study co-author Tobias Preis.

“Obviously it would be better to have examples from other countries, other settings, other moments. Human behaviour is not uniform around the world, but this is a very good basis for obtaining initial estimates.”

The study, published in the journal Royal Society Open Science, is part of an expanding field of research exploring what online activity can reveal about human behaviour and other real-world phenomena.

Photo: F. Botta et al

Scientists compared official visitor figures for the airport and the stadium with Twitter and mobile phone activity

Federico Botta, the PhD student who led the analysis, said the phone-based methodology has important advantages over other crowd-size estimation methods, which usually rely on on-site observation or images.

“This method is very fast and does not depend on human judgement. It depends only on the data that come from mobile phones or from Twitter activity,” he told the BBC.

Margin of error

With two months of mobile phone data provided by Telecom Italia, Botta and his colleagues focused on Linate airport and the San Siro football stadium in Milan.

They compared the number of people known to be at those sites at any given time — based on flight schedules and football ticket sales — with three types of mobile phone activity: the number of calls made and text messages sent, the amount of internet data used, and the volume of tweets.

“What we saw is that these activities really did behave very similarly to the number of people at the site,” says Botta.

That may not seem so surprising, but the patterns the team observed, especially at the football stadium, were so reliable that they could even make predictions.

Ten football matches were played during the experiment. Based on the data from nine of them, it was possible to estimate how many people would attend the tenth using the phone data alone.

“Our mean absolute percentage error is about 13%. That means our estimates and the real number of people differ, in absolute terms, by about 13%,” says Botta.
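The 13% figure is a mean absolute percentage error (MAPE). A small sketch, with made-up attendance numbers rather than the study’s data, shows how the metric is computed:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    errors = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return 100 * sum(errors) / len(errors)

# Hypothetical attendance vs. phone-based estimates for three matches:
actual    = [52000, 61000, 45000]
predicted = [59000, 53000, 51000]
print(round(mape(actual, predicted)))  # 13
```

Each match’s error is taken relative to the true attendance, so large and small events contribute comparably to the average.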

According to the researchers, this margin of error compares well with traditional techniques based on imagery and human judgement.

They cited the example of the 1995 demonstration in Washington, D.C., known as the "Million Man March", in which even the most careful analyses produced estimates with 20% error – after initial counts had ranged from 400,000 to two million people.

Crowd in an Italian stadium | Photo: Getty

The accuracy of the data collected at the football stadium surprised even the research team

According to Ed Manley, of the Centre for Advanced Spatial Analysis at University College London, the technique has potential, and people should feel "optimistic but cautious" about using mobile phone data for such estimates.

"We have these huge data sets and there is a lot that can be done with them… But we need to be careful about how much we ask of the data," he said.

He also points out that such data do not represent a population evenly.

"There are important biases here. Who exactly are we measuring with these data sets?" asks Manley. Twitter, for example, has a relatively young and affluent user base.

Beyond these difficulties, the activities to be measured must be chosen carefully, because people use their phones differently in different places – more calls at the airport and more tweets at the football, for example.

Another important caveat is that the whole analysis methodology Botta advocates depends on phone and internet signal – which varies greatly from place to place, when it is available at all.

"If we are relying on these data to know where people are, what happens when there is a problem with the way the data are collected?" asks Manley.

How Facebook’s Algorithm Suppresses Content Diversity (Modestly) and How the Newsfeed Rules Your Clicks (The Message)

Zeynep Tufekci on May 7, 2015

Today, three researchers at Facebook published an article in Science on how Facebook’s newsfeed algorithm suppresses the amount of “cross-cutting” (i.e. likely to cause disagreement) news articles a person sees. I read a lot of academic research, and usually, the researchers are at pains to highlight their findings. This one buries them as deep as it could, using a mix of convoluted language and irrelevant comparisons. So, first order of business is spelling out what they found. Also, for another important evaluation — with some overlap to this one — go read this post by University of Michigan professor Christian Sandvig.

The most important finding, if you ask me, is buried in an appendix. Here’s the chart showing that the higher an item is in the newsfeed, the more likely it is clicked on.

Notice how steep the curve is. The higher the link, the more (a lot more) likely it is to be clicked on. You live and die by placement, determined by the newsfeed algorithm. (The effect, as Sean J. Taylor correctly notes, is a combination of placement, and the fact that the algorithm is guessing what you would like). This was already known, mostly, but it’s great to have it confirmed by Facebook researchers (the study was solely authored by Facebook employees).

The most important caveat that is buried is that this study is not about all Facebook users, despite language at the end that’s quite misleading. The researchers end their paper with: “Finally, we conclusively establish that on average in the context of Facebook…” No. The research was conducted on a small, skewed subset of Facebook users who chose to self-identify their political affiliation on Facebook and regularly log on to Facebook — about 4% of the population available for the study. This is super important because this sampling confounds the dependent variable.

The gold standard of sampling is random, where every unit has equal chance of selection, which allows us to do amazing things like predict elections with tiny samples of thousands. Sometimes, researchers use convenience samples — whomever they can find easily — and those can be okay, or not, depending on how typical the sample ends up being compared to the universe. Sometimes, in cases like this, the sampling affects behavior: people who self-identify their politics are almost certainly going to behave quite differently, on average, than people who do not, when it comes to the behavior in question which is sharing and clicking through ideologically challenging content. So, everything in this study applies only to that small subsample of unusual people. (Here’s a post by the always excellent Eszter Hargittai unpacking the sampling issue further.) The study is still interesting, and important, but it is not a study that can generalize to Facebook users. Hopefully that can be a future study.

What does the study actually say?

  • Here’s the key finding: Facebook researchers conclusively show that Facebook’s newsfeed algorithm decreases ideologically diverse, cross-cutting content people see from their social networks on Facebook by a measurable amount. The researchers report that exposure to diverse content is suppressed by Facebook’s algorithm by 8% for self-identified liberals and by 5% for self-identified conservatives. Or, as Christian Sandvig puts it, “the algorithm filters out 1 in 20 cross-cutting hard news stories that a self-identified conservative sees (or 5%) and 1 in 13 cross-cutting hard news stories that a self-identified liberal sees (8%).” You are seeing fewer news items that you’d disagree with, shared by your friends, because the algorithm is not showing them to you.
  • Now, here’s the part which will likely confuse everyone, but it should not. The researchers also report a separate finding that individual choice to limit exposure through clicking behavior results in exposure to 6% less diverse content for liberals and 17% less diverse content for conservatives.

Are you with me? One novel finding is that the newsfeed algorithm (modestly) suppresses diverse content, and another crucial and also novel finding is that placement in the feed strongly influences click-through rates.

Researchers then replicate and confirm a well-known, uncontested and long-established finding which is that people have a tendency to avoid content that challenges their beliefs. Then, confusingly, the researchers compare whether algorithm suppression effect size is stronger than people choosing what to click, and have a lot of language that leads Christian Sandvig to call this the “it’s not our fault” study. I cannot remember a worse apples to oranges comparison I’ve seen recently, especially since these two dynamics, algorithmic suppression and individual choice, have cumulative effects.

Comparing the individual choice to algorithmic suppression is like asking about the amount of trans fatty acids in french fries, a newly-added ingredient to the menu, and being told that hamburgers, which have long been on the menu, also have trans-fatty acids — an undisputed, scientifically uncontested and non-controversial fact. Individual self-selection in news sources long predates the Internet, and is a well-known, long-identified and well-studied phenomenon. Its scientific standing has never been in question. However, the role of Facebook’s algorithm in this process is a new — and important — issue. Just as the medical profession would be concerned about the amount of trans-fatty acids in the new item, french fries, as well as in the existing hamburgers, researchers should obviously be interested in algorithmic effects in suppressing diversity, in addition to long-standing research on individual choice, since the effects are cumulative. An addition, not a comparison, is warranted.
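To put a rough number on “cumulative”: under the simplifying assumption — not made in the paper — that the two filters act independently and multiplicatively, their combined effect can be sketched as follows:

```python
def combined_reduction(algorithmic, individual):
    """Total fraction of cross-cutting content lost when two
    independent filters are applied in sequence (a simplifying
    assumption; the paper does not model it this way)."""
    return 1 - (1 - algorithmic) * (1 - individual)

# Suppression figures reported in the study:
liberals = combined_reduction(0.08, 0.06)       # algorithm 8%, choice 6%
conservatives = combined_reduction(0.05, 0.17)  # algorithm 5%, choice 17%
print(f"liberals: {liberals:.1%}, conservatives: {conservatives:.1%}")
```

On this back-of-the-envelope model, the total loss of cross-cutting content is larger than either filter alone — about 13.5% for liberals and about 21% for conservatives — which is precisely why an addition, not a comparison, is the right frame.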

Imagine this (imperfect) analogy where many people were complaining, say, that a washing machine has a faulty mechanism that sometimes destroys clothes. Now imagine a washing machine company research paper which finds this claim is correct for a small subsample of these washing machines, and quantifies that effect, but also looks into how many people throw out their clothes before they are totally worn out, a well-established, undisputed fact in the scientific literature. The correct headline would not be “people throwing out used clothes damage more dresses than the faulty washing machine mechanism.” And if this subsample was drawn from one small factory, located somewhere other than all the other factories that manufacture the same brand, and produced only 4% of the devices, the headline would not refer to all washing machines, and the paper would not (should not) conclude with a claim about the average washing machine.

Also, in passing, the paper’s conclusion appears misstated. Even though the comparison between personal choice and algorithmic effects is not very relevant, the result is mixed, rather than “conclusively establish[ing] that on average in the context of Facebook individual choices more than algorithms limit exposure to attitude-challenging content”. For self-identified liberals, the algorithm was a stronger suppressor of diversity (8% vs. 6%), while for self-identified conservatives, it was a weaker one (5% vs. 17%).

Also, as Christian Sandvig states in this post, and Nathan Jurgenson in this important post here, and David Lazer in the introduction to the piece in Science explore deeply, the Facebook researchers are not studying some neutral phenomenon that exists outside of Facebook’s control. The algorithm is designed by Facebook, and is occasionally re-arranged, sometimes to the devastation of groups who cannot pay-to-play for that all important positioning. I’m glad that Facebook is choosing to publish such findings, but I cannot but shake my head about how the real findings are buried, and irrelevant comparisons take up the conclusion. Overall, from all aspects, this study confirms that for this slice of politically-engaged sub-population, Facebook’s algorithm is a modest suppressor of diversity of content people see on Facebook, and that newsfeed placement is a profoundly powerful gatekeeper for click-through rates. This, not all the roundabout conversation about people’s choices, is the news.

Late Addition: Contrary to some people’s impressions, I am not arguing against all uses of algorithms in making choices in what we see online. The questions that concern me are how these algorithms work, what their effects are, who controls them, and what are the values that go into the design choices. At a personal level, I’d love to have the choice to set my newsfeed algorithm to “please show me more content I’d likely disagree with” — something the researchers prove that Facebook is able to do.

Is the universe a hologram? (Science Daily)

April 27, 2015
Vienna University of Technology
The ‘holographic principle,’ the idea that a universe with gravity can be described by a quantum field theory in fewer dimensions, has been used for years as a mathematical tool in strange curved spaces. New results suggest that the holographic principle also holds in flat spaces. Our own universe could in fact be two dimensional and only appear three dimensional — just like a hologram.

Is our universe a hologram? Credit: TU Wien 

At first glance, there is not the slightest doubt: to us, the universe looks three dimensional. But one of the most fruitful theories of theoretical physics in the last two decades is challenging this assumption. The “holographic principle” asserts that a mathematical description of the universe actually requires one fewer dimension than it seems. What we perceive as three dimensional may just be the image of two dimensional processes on a huge cosmic horizon.

Up until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle even holds in a flat spacetime.

The Holographic Principle

Everybody knows holograms from credit cards or banknotes. They are two dimensional, but to us they appear three dimensional. Our universe could behave quite similarly: “In 1997, the physicist Juan Maldacena proposed the idea that there is a correspondence between gravitational theories in curved anti-de Sitter spaces on the one hand and quantum field theories in spaces with one fewer dimension on the other,” says Daniel Grumiller (TU Wien).

Gravitational phenomena are described in a theory with three spatial dimensions, the behaviour of quantum particles is calculated in a theory with just two spatial dimensions — and the results of both calculations can be mapped onto each other. Such a correspondence is quite surprising. It is like finding out that equations from an astronomy textbook can also be used to repair a CD-player. But this method has proven to be very successful. More than ten thousand scientific papers about Maldacena’s “AdS-CFT-correspondence” have been published to date.

Correspondence Even in Flat Spaces

For theoretical physics, this is extremely important, but it does not seem to have much to do with our own universe. Apparently, we do not live in such an anti-de Sitter space. These spaces have quite peculiar properties: they are negatively curved, and any object thrown away along a straight line will eventually return. “Our universe, in contrast, is quite flat — and on astronomic distances, it has positive curvature,” says Daniel Grumiller.

However, Grumiller has suspected for quite some time that a correspondence principle could also hold true for our real universe. To test this hypothesis, gravitational theories have to be constructed, which do not require exotic anti-de-sitter spaces, but live in a flat space. For three years, he and his team at TU Wien (Vienna) have been working on that, in cooperation with the University of Edinburgh, Harvard, IISER Pune, the MIT and the University of Kyoto. Now Grumiller and colleagues from India and Japan have published an article in the journal Physical Review Letters, confirming the validity of the correspondence principle in a flat universe.

Calculated Twice, Same Result

“If quantum gravity in a flat space allows for a holographic description by a standard quantum theory, then there must be physical quantities which can be calculated in both theories — and the results must agree,” says Grumiller. One key feature of quantum mechanics in particular — quantum entanglement — has to appear in the gravitational theory.

When quantum particles are entangled, they cannot be described individually. They form a single quantum object, even if they are located far apart. There is a measure for the amount of entanglement in a quantum system, called “entropy of entanglement.” Together with Arjun Bagchi, Rudranil Basu and Max Riegler, Daniel Grumiller managed to show that this entropy of entanglement takes the same value in flat quantum gravity and in a lower-dimensional quantum field theory.
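For orientation, the best-known closed form for this quantity — the entanglement entropy of an interval of length $\ell$ in an ordinary two-dimensional conformal field theory, not the Galilean case treated in the paper — is

```latex
S = \frac{c}{3}\,\ln\!\left(\frac{\ell}{\varepsilon}\right)
```

where $c$ is the central charge and $\varepsilon$ a short-distance cutoff. The flat-space check described here amounts to showing that the analogous Galilean-CFT expression is reproduced by an independent calculation on the gravity side.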

“This calculation affirms our assumption that the holographic principle can also be realized in flat spaces. It is evidence for the validity of this correspondence in our universe,” says Max Riegler (TU Wien). “The fact that we can even talk about quantum information and entropy of entanglement in a theory of gravity is astounding in itself, and would hardly have been imaginable only a few years back. That we are now able to use this as a tool to test the validity of the holographic principle, and that this test works out, is quite remarkable,” says Daniel Grumiller.

This, however, does not yet prove that we are indeed living in a hologram — but apparently there is growing evidence for the validity of the correspondence principle in our own universe.

Journal Reference:

  1. Arjun Bagchi, Rudranil Basu, Daniel Grumiller, Max Riegler. Entanglement Entropy in Galilean Conformal Field Theories and Flat Holography. Physical Review Letters, 2015; 114 (11). DOI: 10.1103/PhysRevLett.114.111602