Tag archive: Modelagem

The Water Data Drought (N.Y.Times)

Then there is water.

Water may be the most important item in our lives, our economy and our landscape about which we know the least. We not only don’t tabulate our water use every hour or every day, we don’t do it every month, or even every year.

The official analysis of water use in the United States is done every five years. It takes a tiny team of people four years to collect, tabulate and release the data. In November 2014, the United States Geological Survey issued its most current comprehensive analysis of United States water use — for the year 2010.

The 2010 report runs 64 pages of small type, reporting water use in each state by quality and quantity, by source, and by whether it’s used on farms, in factories or in homes.

It doesn’t take four years to get five years of data. All we get every five years is one year of data.

The data system is ridiculously primitive. It was an embarrassment even two decades ago. The vast gaps — we start out missing 80 percent of the picture — mean that from one side of the continent to the other, we’re making decisions blindly.

In just the past 27 months, there has been a string of high-profile water crises — poisoned water in Flint, Mich.; polluted water in Toledo, Ohio, and Charleston, W. Va.; the continued drying of the Colorado River basin — that have undermined confidence in our ability to manage water.

In the time it took to compile the 2010 report, Texas endured a four-year drought. California settled into what has become a five-year drought. The most authoritative water-use data from across the West couldn’t be less helpful: It’s from the year before the droughts began.

In the last year of the Obama presidency, the administration has decided to grab hold of this country’s water problems, water policy and water innovation. Next Tuesday, the White House is hosting a Water Summit, where it promises to unveil new ideas to galvanize the sleepy world of water.

The question White House officials are asking is simple: What could the federal government do that wouldn’t cost much but that would change how we think about water?

The best and simplest answer: Fix water data.

More than any other single step, modernizing water data would unleash an era of water innovation unlike anything in a century.

We have a brilliant model for what water data could be: the Energy Information Administration, which has every imaginable data point about energy use — solar, wind, biodiesel, the state of the heating oil market during the winter we’re living through right now — all available, free, to anyone. It’s not just authoritative, it’s indispensable. Congress created the agency in the wake of the 1970s energy crisis, when it became clear we didn’t have the information about energy use necessary to make good public policy.

That’s exactly the state of water — we’ve got crises percolating all over, but lack the data necessary to make smart policy decisions.

Congress and President Obama should pass updated legislation creating inside the United States Geological Survey a vigorous water data agency with the explicit charge to gather and quickly release water data of every kind — what utilities provide, what fracking companies and strawberry growers use, what comes from rivers and reservoirs, the state of aquifers.

Good information does three things.

First, it creates the demand for more good information. Once you know what you can know, you want to know more.

Second, good data changes behavior. The real-time miles-per-gallon gauges in our cars are a great example. Who doesn’t want to edge the M.P.G. number a little higher? Any company, community or family that starts measuring how much water it uses immediately sees ways to use less.

Finally, data ignites innovation. Who imagined that when most everyone started carrying a smartphone, we’d have instant, nationwide traffic data? The phones make the traffic data possible, and they also deliver it to us.

The truth is, we don’t have any idea what detailed water use data for the United States will reveal. But we can be certain it will create an era of water transformation. If we had monthly data on three big water users — power plants, farmers and water utilities — we’d instantly see which communities use water well, and which ones don’t.

We’d see whether tomato farmers in California or Florida do a better job. We’d have the information to make smart decisions about conservation, about innovation and about investing in new kinds of water systems.

Water’s biggest problem, in this country and around the world, is its invisibility. You don’t tackle problems that are out of sight. We need a new relationship with water, and that has to start with understanding it.

Study suggests different written languages are equally efficient at conveying meaning (Eureka/University of Southampton)

PUBLIC RELEASE: 1-FEB-2016

UNIVERSITY OF SOUTHAMPTON

A study led by the University of Southampton has found there is no difference in the time it takes people from different countries to read and process different languages.

The research, published in the journal Cognition, finds that the same amount of time is needed for a person from, for example, China to read and understand a text in Mandarin as it takes a person from Britain to read and understand a text in English – assuming both are reading their native language.

Professor of Experimental Psychology at Southampton, Simon Liversedge, says: “It has long been argued by some linguists that all languages have common or universal underlying principles, but it has been hard to find robust experimental evidence to support this claim. Our study goes at least part way to addressing this – by showing there is universality in the way we process language during the act of reading. It suggests no one form of written language is more efficient in conveying meaning than another.”

The study, carried out by the University of Southampton (UK), Tianjin Normal University (China) and the University of Turku (Finland), compared the way three groups of people in the UK, China and Finland read their own languages.

The 25 participants in each group – one group for each country – were given eight short texts to read which had been carefully translated into the three different languages. A rigorous translation process was used to make the texts as closely comparable across languages as possible. English, Finnish and Mandarin were chosen because of the stark differences they display in their written form – with great variation in visual presentation of words, for example alphabetic vs. logographic(1), spaced vs. unspaced, agglutinative(2) vs. non-agglutinative.

The researchers used sophisticated eye-tracking equipment to assess the cognitive processes of the participants in each group as they read. The equipment was set up identically in each country to measure eye movement patterns of the individual readers – recording how long they spent looking at each word, sentence or paragraph.

The results of the study showed significant and substantial differences between the three language groups in relation to the nature of eye movements of the readers and how long participants spent reading each individual word or phrase. For example, the Finnish participants spent longer concentrating on some words compared to the English readers. However, most importantly and despite these differences, the time it took for the readers of each language to read each complete sentence or paragraph was the same.

Professor Liversedge says: “This finding suggests that despite very substantial differences in the written form of different languages, at a basic propositional level, it takes humans the same amount of time to process the same information regardless of the language it is written in.

“We have shown it doesn’t matter whether a native Chinese reader is processing Chinese, or a Finnish native reader is reading Finnish, or an English native reader is processing English, in terms of comprehending the basic propositional content of the language, one language is as good as another.”

The study authors believe more research would be needed to fully understand if true universality of language exists, but that their study represents a good first step towards demonstrating that there is universality in the process of reading.


Notes for editors:

1) Logographic language systems use signs or characters to represent words or phrases.

2) Agglutinative language tends to express concepts in complex words consisting of many sub-units that are strung together.

3) The paper Universality in eye movements and reading: A trilingual investigation (Simon P. Liversedge, Denis Drieghe, Xin Li, Guoli Yan, Xuejun Bai, Jukka Hyönä) is published in the journal Cognition and can also be found at: http://eprints.soton.ac.uk/382899/1/Liversedge,%20Drieghe,%20Li,%20Yan,%20Bai,%20%26%20Hyona%20(in%20press)%20copy.pdf

 

Semantically speaking: Does meaning structure unite languages? (Eureka/Santa Fe Institute)

1-FEB-2016

Humans’ common cognitive abilities and language dependence may provide an underlying semantic order to the world’s languages

SANTA FE INSTITUTE

We create words to label people, places, actions, thoughts, and more so we can express ourselves meaningfully to others. Do humans’ shared cognitive abilities and dependence on languages naturally provide a universal means of organizing certain concepts? Or do environment and culture influence each language uniquely?

Using a new methodology that measures how closely words’ meanings are related within and between languages, an international team of researchers has revealed that for many universal concepts, the world’s languages feature a common structure of semantic relatedness.

“Before this work, little was known about how to measure [a culture’s sense of] the semantic nearness between concepts,” says co-author and Santa Fe Institute Professor Tanmoy Bhattacharya. “For example, are the concepts of sun and moon close to each other, as they are both bright blobs in the sky? How about sand and sea, as they occur close by? Which of these pairs is the closer? How do we know?”

Translation, the mapping of relative word meanings across languages, would provide clues. But examining the problem with scientific rigor called for an empirical means to denote the degree of semantic relatedness between concepts.

To get reliable answers, Bhattacharya needed to fully quantify a comparative method that is commonly used to infer linguistic history qualitatively. (He and collaborators had previously developed this quantitative method to study changes in sounds of words as languages evolve.)

“Translation uncovers a disagreement between two languages on how concepts are grouped under a single word,” says co-author and Santa Fe Institute and Oxford researcher Hyejin Youn. “Spanish, for example, groups ‘fire’ and ‘passion’ under ‘incendio,’ whereas Swahili groups ‘fire’ with ‘anger’ (but not ‘passion’).”

To quantify the problem, the researchers chose a few basic concepts that we see in nature (sun, moon, mountain, fire, and so on). Each concept was translated from English into 81 diverse languages, then back into English. Based on these translations, a weighted network was created. The structure of the network was used to compare languages’ ways of partitioning concepts.
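
To make the procedure concrete, here is a minimal sketch of how such a translation-based network could be assembled; the concept list, the toy "languages" and the weighting rule are invented for illustration and are not the data or code used in the PNAS study.

```python
from collections import defaultdict
from itertools import combinations

# Toy forward translations: concept -> {language: word}  (illustrative data only)
translations = {
    "sun":   {"lang_A": "sol",   "lang_B": "ra",    "lang_C": "hi"},
    "moon":  {"lang_A": "luna",  "lang_B": "ra",    "lang_C": "tsuki"},
    "fire":  {"lang_A": "fuego", "lang_B": "aka",   "lang_C": "hi"},
    "sea":   {"lang_A": "mar",   "lang_B": "moana", "lang_C": "umi"},
    "water": {"lang_A": "agua",  "lang_B": "moana", "lang_C": "mizu"},
}

# Two concepts become linked in a given language when they are covered by the
# same foreign word, i.e. back-translating that word returns both concepts.
weights = defaultdict(int)
for c1, c2 in combinations(translations, 2):
    shared = sum(
        1
        for lang in translations[c1]
        if translations[c1][lang] == translations[c2].get(lang)
    )
    if shared:
        weights[(c1, c2)] = shared  # edge weight = number of languages sharing a word

# The weighted edges form the semantic network; stronger edges mark concepts
# that many languages group under one word.
for (c1, c2), w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{c1} -- {c2}: weight {w}")
```

Densely connected groups of nodes in such a weighted graph correspond to the clusters of related meanings the study reports.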

The team found that the translated concepts consistently formed three theme clusters in a network, densely connected within themselves and weakly to one another: water, solid natural materials, and earth and sky.

“For the first time, we now have a method to quantify how universal these relations are,” says Bhattacharya. “What is universal – and what is not – about how we group clusters of meanings teaches us a lot about psycholinguistics, the conceptual structures that underlie language use.”

The researchers hope to expand this study’s domain, adding more concepts, then investigating how the universal structure they reveal underlies meaning shift.

Their research was published today in PNAS.

Is human behavior controlled by our genes? Richard Levins reviews ‘The Social Conquest of Earth’ (Climate & Capitalism)

“Failing to take class division into account is not simply a political bias. It also distorts how we look at human evolution as intrinsically bio-social and human biology as socialized biology.”

 

August 1, 2012

Edward O. Wilson. The Social Conquest of Earth. Liveright Publishing, New York, 2012

reviewed by Richard Levins

In the 1970s, Edward O. Wilson, Richard Lewontin, Stephen Jay Gould and I were colleagues in Harvard’s new department of Organismic and Evolutionary Biology. In spite of our later divergences, I retain grateful memories of working in the field with Ed, turning over rocks, sharing beer, breaking open twigs, putting out bait (canned tuna fish) to attract the ants we were studying.

We were part of a group that hoped to jointly write and publish articles offering a common view of evolutionary science, but that collaboration was brief, largely because Lewontin and I strongly disagreed with Wilson’s Sociobiology.

Reductionism and Sociobiology

Although Wilson fought hard against the reduction of biology to the study of molecules, his holism stopped there. He came to promote the reduction of social and behavioral science to biology. In his view:

“Our lives are restrained by two laws of biology: all of life’s entities and processes are obedient to the laws of physics and chemistry; and all of life’s entities and processes have arisen through evolution and natural selection.” [Social Conquest, p. 287]

This is true as far as it goes but fails in two important ways.

First, it ignores the reciprocal feedback between levels. The biological creates the ensemble of molecules in the cell; the social alters the spectrum of molecules in the biosphere; biological activity creates the biosphere itself and the conditions for the maintenance of life.

Second, it doesn’t consider how the social level alters the biological: our biology is a socialized biology.

Higher (more inclusive) levels are indeed constrained by the laws at lower levels of organization, but they also have their own laws that emerge from the lower level yet are distinct and that also determine which chemical and physical entities are present in the organisms. In new contexts they operate differently.

Thus for example we, like a few other animals including bears, are omnivores. For some purposes such as comparing digestive systems that’s an adequate label. But we are omnivores of a special kind: we not only acquire food by predation, but we also produce food, turning the inedible into edible, the transitory into stored food. This has had such a profound effect on our lives that it is also legitimate to refer to us as something new, productivores.

The productivore mode of sustenance opens a whole new domain: the mode of production. Human societies have experienced different modes of production and ways to organize reproduction, each with its own dynamics, relations with the rest of nature, division into classes, and processes which restore or change it when it is disturbed.

The division of society into classes changes how natural selection works, who is exposed to what diseases, who eats and who doesn’t eat, who does the dishes, who must do physical work, how long we can expect to live. It is no longer possible to prescribe the direction of natural selection for the whole species.

So failing to take class division into account is not simply a political bias. It also distorts how we look at human evolution as intrinsically bio-social and human biology as socialized biology.

The opposite of the genetic determinism of sociobiology is not “the blank slate” view that claims that our biological natures were irrelevant to behavior and society. The question is, what about our animal heritage was relevant?

We all agree that we are animals; that as animals we need food; that we are terrestrial rather than aquatic animals; that we are mammals and therefore need a lot of food to support our high metabolic rates that maintain body temperature; that for part of our history we lived in trees and acquired characteristics adapted to that habitat, but came down from the trees with a dependence on vision, hands with padded fingers, and so on. We have big brains, with regions that have different major functions such as emotions, color vision, and language.

But beyond these general capacities, there is widespread disagreement about which behaviors or attitudes are expressions of brain structure. The amygdala is a locus of emotion, but does it tell us what to be angry or rejoice about? It is an ancient part of our brains, but has it not evolved in response to what the rest of the brain is doing? There is higher intellectual function in the cortex, but does it tell us what to think about?

Every part of an organism is the environment for the rest of the organism, setting the context for natural selection. In contrast to this fluid viewpoint, phrases such as “hard-wired” have become part of the pop vocabulary, applied promiscuously to all sorts of behaviors.

In a deeper sense, asking if something is heritable is a nonsense question. Heritability is always a comparison: how much of the difference between humans and chimps is heritable? What about the differences between ourselves and Neanderthals? Between nomads and farmers?

Social Conquest of Earth

The Social Conquest of Earth, Ed Wilson’s latest book, continues his interest in the “eusocial” animals – ants, bees and others that live in groups with overlapping generations and a division of labor that includes altruistic behavior. As the title shows, he also continues to use the terminology of conquest and domination, so that social animals “conquer” the earth and their abundance makes them “dominate.”

The problem that Wilson poses in this book is first, why did eusociality arise at all, and second, why is it so rare?

Wilson is at his best when discussing the more remote past, the origins of social behavior 220 million years ago for termites, 150 million years for ants, 70-80 million years for humble bees and honey bees.

But as he gets closer to humanity the reductionist biases that informed Sociobiology reassert themselves. Once again Wilson argues that brain architecture determines what people do socially – that war, aggression, morality, honor and hierarchy are part of “human nature.”

Rejecting kin selection

A major change, and one of the most satisfying parts of the book, is his rejection of kin selection as a motive force of social evolution, a theory he once defended strongly.

Kin selection assumed that natural selection acts on genes. A gene will be favored if it results in enhancing its own survival and reproduction, but it is not enough to look at the survival of the individual. If my brother and I each have 2 offspring, a shared gene would be doubled in the next generation. But if my brother sacrifices himself so that I might leave 5 offspring while he leaves none, our shared gene will increase 250%.
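
As a toy illustration of the bookkeeping behind those numbers, the sketch below mirrors the review's simplified counting, in which every offspring of a carrier is assumed to carry the shared gene; a full inclusive-fitness calculation would instead weight offspring by relatedness.

```python
def shared_gene_copies(my_offspring, brothers_offspring):
    """Count copies of a gene shared by two brothers in the next generation,
    under the simplification that all of their offspring inherit it."""
    return my_offspring + brothers_offspring

parental_copies = 2  # two brothers, each carrying one copy

baseline = shared_gene_copies(2, 2)   # each brother raises 2 offspring
sacrifice = shared_gene_copies(5, 0)  # brother sacrifices himself, I raise 5

print(baseline / parental_copies)   # 2.0 -> the shared gene is doubled
print(sacrifice / parental_copies)  # 2.5 -> 250% of the parental count
```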

Therefore, argued the promoters of this theory, the fitness that natural selection increases has to be calculated over a whole set of kin, weighted by the closeness of their relationship. Mathematical formulations were developed to support this theory. Wilson found it attractive because it appeared to support sociobiology.

However, plausible inference is not enough to prove a theory. Empirical studies comparing different species or traits did not confirm the kin selection hypothesis, and a reexamination of its mathematical structure (such as the fuzziness of defining relatedness) showed that it could not account for the observed natural world. Wilson devotes a lot of space to refuting kin selection because of his previous support of it: it is a great example of scientific self-correction.

Does group selection explain social behaviour?

Wilson has now adopted another model in which the evolution of sociality is the result of opposing processes of ordinary individual selection acting within populations, and group selection acting between populations. He invokes this model to account for religion, morality, honor and other human behaviors.

He argues that individual selection promotes “selfishness” (that is, behavior that enhances individual survival) while group selection favors cooperative and “altruistic” behavior. The two forms of selection oppose each other, and that results in our mixed behaviors.

“We are an evolutionary chimera living on intelligence steered by the demands of animal instinct. This is the reason we are mindlessly dismantling the biosphere and with it, our own prospects for permanent existence.” [p.13]

But this simplistic reduction of environmental destruction to biology will not stand. Contrary to Wilson, the destruction of the biosphere is not “mindless.” It is the outcome of interactions in the noxious triad of greed, poverty, and ignorance, all produced by a socio-economic system that must expand to survive.

For Wilson, as for many environmentalists, the driver of ecological destruction is some generic “we,” who are all in the same boat. But since the emergence of classes after the adoption of agriculture some 8,000-10,000 years ago, it is no longer appropriate to talk of a collective “we.”

The owners of the economy are willing to use up resources, pollute the environment, debase the quality of products, and undermine the health of the producers out of a kind of perverse economic rationality. They support their policies with theories such as climate change denial or doubting the toxicity of pesticides, and buttress it with legislation and court decisions.

Evolution and religion

The beginning and end of the book, a spirited critique of religion as possibly explaining human nature, is more straightforwardly materialist than the view supported by Stephen J. Gould, who argued that religion and science are separate magisteria that play equal roles in human wellbeing.

But Wilson’s use of evidence is selective.

For example, he argues that religion demands absolute belief from its followers – but this is true only of Christianity and Islam. Judaism lets you think what you want as long as you practice the prescribed rituals, and Buddhism doesn’t care about deities or the afterlife.

Similarly he argues that creation myths are a product of evolution:

“Since paleolithic times … each tribe invented its own creation myths… No tribe could long survive without a creation myth… The creation myth is a Darwinian device for survival.” [p. 8]

But the ancient Israelites did not have an origin myth when they emerged as a people in the hills of Judea around 1250 B.C.E. Although it appears at the beginning of the Bible, the Israelites did not adapt the Book of Genesis from Babylonian mythology until four centuries after Deuteronomy was written, after they had survived 200 years as a tribal confederation, two kingdoms and the Assyrian and Babylonian conquests— by then the writing of scripture was a political act, not a “Darwinian device for survival.”

Biologizing war

In support of his biologizing of “traits,” Wilson reviews recent research that appears to show a biological basis for the way people see and interpret color, for the incest taboo, and for the startle response – and then asserts that inherited traits include war, hierarchy, honor and such. Ignoring the role of social class, he views these as universal traits of human nature.

Consider war. Wilson claims that war reflects genes for group selection. “A soldier going into battle will benefit his country but he runs a higher risk of death than one who does not.” [p. 165]

But soldiers don’t initiate conflict. We know in our own times that those who decide to make war are not those who fight the wars – but, perhaps unfortunately, sterilizing the general staff of the Pentagon and of the CIA would not produce a more peaceful America.

The evidence against war as a biological imperative is strong. Willingness to fight is situational.

Group selection can’t explain why soldiers have to be coerced into fighting, why desertion is a major problem for generals and is severely punished, or why resistance to recruitment is a major problem of armies. In the present militarist USA, soldiers are driven to join up through unemployment and the promises of benefits such as learning skills and getting an education and self-improvement. No recruitment posters offer the opportunity to kill people as an inducement for signing up.

The high rates of surrender and desertion of Italian soldiers in World War II did not reflect any innate cowardice among Italians but a lack of fascist conviction. The very rarity of surrender by Japanese soldiers in the same war was not a testimony to greater bravery on the part of the Japanese but of the inculcated combination of nationalism and religion.

As the American people turned against the Vietnam war, increased desertions and the killing of officers by the soldiers reflected their rejection of the war.

The terrifying assaults of the Vikings during the middle ages bear no resemblance to the mellow Scandinavian culture of today, too short a time for natural selection to transform national character.

The attempt to make war an inherited trait favored by natural selection reflects the sexism that has been endemic in sociobiology. It assumes that local groups differed in their propensity for aggression and prowess in war. The victorious men carry off the women of the conquered settlements and incorporate them into their own communities. Therefore the new generation has been selected for greater military success among the men. But the women, coming from a defeated, weaker group, would bring with them their genes for lack of prowess, a selection for military weakness! Such a selection process would be self-negating.

Ethnocentrism

Wilson also considers ethnocentrism to be an inherited trait: group selection leads people to favor members of their own group and reject outsiders.

The problem is that the lines between groups vary under different circumstances. For example, in Spanish America, laws governing marriage included a large number of graded racial categories, while in North America there were usually just two. What’s more, the category definitions are far from permanent: at one time, the Irish were regarded as Black, and the whiteness of Jews was questioned.

Adoption, immigration, mergers of clans also confound any possible genetic basis for exclusion.

Hierarchy

Wilson draws on the work of Herbert Simon to argue that hierarchy is a result of human nature: there will always be rulers and ruled. His argument fails to distinguish between hierarchy and leadership.

There are other forms of organization possible besides hierarchy and chaos, including democratic control by the workers who elect the operational leadership. In some labor unions, leaders’ salaries are pegged to the median wage of the members. In University departments the chairmanship is often a rotating task that nobody really wants. When Argentine factory owners closed their plants during the recession, workers in fact seized control and ran them profitably despite police sieges.

Darwinian behavior?

Wilson argues that “social traits” evolved through Darwinian natural selection. Genes that promoted behaviors that helped the individual or group to survive were passed on; genes that weakened the individual or group were not. The tension between individual and group selection decided which traits would be part of our human nature.

But a plausible claim that a trait might be good for people is not enough to explain its origin and survival. A gene may become fixed in a population even if it is harmful, just by the random genetic changes that we know occur. Or a gene may be harmful but be dragged along by an advantageous gene close to it on the same chromosome.

Selection may act in different directions in different subpopulations, or in different habitats, or under differing environmental conditions. Or the adaptive value of a gene may change with its prevalence or the distribution of ages in the population, itself a consequence of the environment and population heterogeneity.

For instance, Afro-Americans have a higher death rate from cancer than Euro-Americans. In part this reflects the carcinogenic environments they have been subjected to, but there is also a genetic factor. It is the combination of living conditions and genetics that causes higher mortality rates.

* * *

Obviously I am not arguing that evolution doesn’t happen. The point is that we need a much better argument than just a claim that some genotype might be beneficial. And we need a much more rigorous understanding of the differences and linkages between the biological and social components of humanity’s nature. Just calling some social behavior a “trait” does not make it heritable.

In a book that attempts such a wide-ranging panorama of human evolution, there are bound to be errors. But the errors in The Social Conquest of Earth form a pattern: they reduce social issues to biology, and they insist on our evolutionary continuity with other animals while ignoring the radical discontinuity that made us productivores and divided us into classes.

Impact of human activity on local climate mapped (Science Daily)

Date: January 20, 2016

Source: Concordia University

Summary: A new study pinpoints the temperature increases caused by carbon dioxide emissions in different regions around the world.


This is a map of climate change. Credit: Nature Climate Change

Earth’s temperature has increased by 1°C over the past century, and most of this warming has been caused by carbon dioxide emissions. But what does that mean locally?

A new study published in Nature Climate Change pinpoints the temperature increases caused by CO2 emissions in different regions around the world.

Using simulation results from 12 global climate models, Damon Matthews, a professor in Concordia’s Department of Geography, Planning and Environment, along with post-doctoral researcher Martin Leduc, produced a map that shows how the climate changes in response to cumulative carbon emissions around the world.

They found that temperature increases in most parts of the world respond linearly to cumulative emissions.

“This provides a simple and powerful link between total global emissions of carbon dioxide and local climate warming,” says Matthews. “This approach can be used to show how much human emissions are to blame for local changes.”

Leduc and Matthews, along with co-author Ramon de Elia from Ouranos, a Montreal-based consortium on regional climatology, analyzed the results of simulations in which CO2 emissions caused the concentration of CO2 in the atmosphere to increase by 1 per cent each year until it reached four times the levels recorded prior to the Industrial Revolution.

Globally, the researchers saw an average temperature increase of 1.7 ± 0.4°C per trillion tonnes of carbon in CO2 emissions (TtC), which is consistent with reports from the Intergovernmental Panel on Climate Change.

But the scientists went beyond these globally averaged temperature rises, to calculate climate change at a local scale.

At a glance, here are the average increases per trillion tonnes of carbon that we emit, separated geographically:

  • Western North America 2.4 ± 0.6°C
  • Central North America 2.3 ± 0.4°C
  • Eastern North America 2.4 ± 0.5°C
  • Alaska 3.6 ± 1.4°C
  • Greenland and Northern Canada 3.1 ± 0.9°C
  • North Asia 3.1 ± 0.9°C
  • Southeast Asia 1.5 ± 0.3°C
  • Central America 1.8 ± 0.4°C
  • Eastern Africa 1.9 ± 0.4°C

“As these numbers show, equatorial regions warm the slowest, while the Arctic warms the fastest. Of course, this is what we’ve already seen happen — rapid changes in the Arctic are outpacing the rest of the planet,” says Matthews.

There are also marked differences between land and ocean, with the temperature increase for the oceans averaging 1.4 ± 0.3°C per TtC, compared to 2.2 ± 0.5°C per TtC for land areas.

“To date, humans have emitted almost 600 billion tonnes of carbon,” says Matthews. “This means that land areas on average have already warmed by 1.3°C because of these emissions. At current emission rates, we will have emitted enough CO2 to warm land areas by 2°C within 3 decades.”
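
Because the response is linear, the regional coefficients quoted above can simply be multiplied by cumulative emissions. The sketch below reproduces that arithmetic using the central estimates from this article and 0.6 TtC for the roughly 600 billion tonnes emitted to date; the selection of regions is illustrative only.

```python
# Warming per trillion tonnes of carbon (TtC), central estimates from the article
warming_per_ttc = {
    "Global mean": 1.7,
    "Land areas": 2.2,
    "Oceans": 1.4,
    "Western North America": 2.4,
    "Alaska": 3.6,
    "Southeast Asia": 1.5,
}

def warming(region, cumulative_emissions_ttc):
    """Linear estimate: local warming = regional coefficient x cumulative emissions."""
    return warming_per_ttc[region] * cumulative_emissions_ttc

emitted_so_far = 0.6  # ~600 billion tonnes of carbon emitted to date, in TtC
for region in warming_per_ttc:
    print(f"{region}: {warming(region, emitted_so_far):.1f} °C")

# Land areas: 2.2 * 0.6 = 1.3 °C, matching the figure quoted by Matthews.
```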


Journal Reference:

  1. Martin Leduc, H. Damon Matthews, Ramón de Elía. Regional estimates of the transient climate response to cumulative CO2 emissions. Nature Climate Change, 2016; DOI: 10.1038/nclimate2913

The world’s greatest literature reveals multifractals and cascades of consciousness (Science Daily)

Date: January 21, 2016

Source: The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences

Summary: James Joyce, Julio Cortazar, Marcel Proust, Henryk Sienkiewicz and Umberto Eco. Regardless of the language they were working in, some of the world’s greatest writers appear to be, in some respects, constructing fractals. Statistical analysis, however, revealed something even more intriguing. The composition of works from within a particular genre was characterized by the exceptional dynamics of a cascading (avalanche) narrative structure.


Sequences of sentence lengths (as measured by number of words) in four literary works representative of various degrees of cascading character. Credit: IFJ PAN

James Joyce, Julio Cortazar, Marcel Proust, Henryk Sienkiewicz and Umberto Eco. Regardless of the language they were working in, some of the world’s greatest writers appear to be, in some respects, constructing fractals. Statistical analysis carried out at the Institute of Nuclear Physics of the Polish Academy of Sciences, however, revealed something even more intriguing. The composition of works from within a particular genre was characterized by the exceptional dynamics of a cascading (avalanche) narrative structure. This type of narrative turns out to be multifractal. That is, fractals of fractals are created.

As far as many bookworms are concerned, advanced equations and graphs are the last things which would hold their interest, but there’s no escape from the math. Physicists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, Poland, performed a detailed statistical analysis of more than one hundred famous works of world literature, written in several languages and representing various literary genres. The books, tested for revealing correlations in variations of sentence length, proved to be governed by the dynamics of a cascade. This means that the construction of these books is in fact a fractal. In the case of several works their mathematical complexity proved to be exceptional, comparable to the structure of complex mathematical objects considered to be multifractal. Interestingly, in the analyzed pool of all the works, one genre turned out to be exceptionally multifractal in nature.

Fractals are self-similar mathematical objects: when we begin to expand one fragment or another, what eventually emerges is a structure that resembles the original object. Typical fractals, especially those widely known as the Sierpinski triangle and the Mandelbrot set, are monofractals, meaning that the pace of enlargement in any place of a fractal is the same, linear: if they at some point were rescaled x number of times to reveal a structure similar to the original, the same increase in another place would also reveal a similar structure.

Multifractals are more highly advanced mathematical structures: fractals of fractals. They arise from fractals ‘interwoven’ with each other in an appropriate manner and in appropriate proportions. Multifractals are not simply the sum of fractals and cannot be divided to return back to their original components, because the way they weave is fractal in nature. The result is that in order to see a structure similar to the original, different portions of a multifractal need to expand at different rates. A multifractal is therefore non-linear in nature.

“Analyses on multiple scales, carried out using fractals, allow us to neatly grasp information on correlations among data at various levels of complexity of tested systems. As a result, they point to the hierarchical organization of phenomena and structures found in nature. So we can expect natural language, which represents a major evolutionary leap of the natural world, to show such correlations as well. Their existence in literary works, however, had not yet been convincingly documented. Meanwhile, it turned out that when you look at these works from the proper perspective, these correlations appear to be not only common, but in some works they take on a particularly sophisticated mathematical complexity,” says Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).

The study involved 113 literary works written in English, French, German, Italian, Polish, Russian and Spanish by such famous figures as Honore de Balzac, Arthur Conan Doyle, Julio Cortazar, Charles Dickens, Fyodor Dostoevsky, Alexandre Dumas, Umberto Eco, George Eliot, Victor Hugo, James Joyce, Thomas Mann, Marcel Proust, Wladyslaw Reymont, William Shakespeare, Henryk Sienkiewicz, JRR Tolkien, Leo Tolstoy and Virginia Woolf, among others. The selected works were no less than 5,000 sentences long, in order to ensure statistical reliability.

To convert the texts into numerical sequences, sentence length was measured by the number of words (an alternative method of counting characters in each sentence turned out to have no major impact on the conclusions). Dependences were then searched for in the data — beginning with the simplest, i.e. linear ones. The question posed was: if a sentence of a given length is x times longer than sentences of other lengths, is the same ratio preserved when looking at sentences respectively longer or shorter?
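
A minimal sketch of this kind of analysis is shown below; it assumes a plain-text input file (the file name novel.txt is a placeholder) and uses ordinary monofractal detrended fluctuation analysis (DFA) as a simple stand-in for the multifractal procedure (MFDFA) the authors actually applied.

```python
import re
import numpy as np

def sentence_lengths(text):
    """Split a text into sentences and return the number of words in each."""
    sentences = re.split(r"[.!?]+", text)
    return np.array([len(s.split()) for s in sentences if s.split()])

def dfa_exponent(lengths, scales=(4, 8, 16, 32, 64)):
    """Monofractal DFA: slope of log F(s) vs log s for the sentence-length series."""
    profile = np.cumsum(lengths - lengths.mean())
    fluctuations = []
    for s in scales:
        n_segments = len(profile) // s
        rms = []
        for i in range(n_segments):
            segment = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, segment, 1), t)
            rms.append(np.mean((segment - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(rms)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope

# Usage sketch: a real analysis would load a full novel (>= 5,000 sentences).
text = open("novel.txt", encoding="utf-8").read()  # hypothetical input file
lengths = sentence_lengths(text)
print("sentences:", len(lengths), "scaling exponent:", dfa_exponent(lengths))
```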

“All of the examined works showed self-similarity in terms of the organization of sentence lengths. Some were more expressive — here The Ambassadors by Henry James stood out — while others far less so, as in the case of the French seventeenth-century romance Artamene ou le Grand Cyrus. However, correlations were evident, and therefore these texts have the construction of a fractal,” comments Dr. Pawel Oswiecimka (IFJ PAN), who also noted that the fractality of a literary text will in practice never be as perfect as in the world of mathematics. Mathematical fractals can be magnified up to infinity, while the number of sentences in each book is finite, and at a certain stage of scaling there will always be a cut-off in the form of the end of the dataset.

Things took a particularly interesting turn when physicists from the IFJ PAN began tracking non-linear dependence, which in most of the studied works was present to a slight or moderate degree. However, more than a dozen works revealed a very clear multifractal structure, and almost all of these proved to be representative of one genre, that of stream of consciousness. The only exception was the Bible, specifically the Old Testament, which has so far never been associated with this literary genre.

“The absolute record in terms of multifractality turned out to be Finnegans Wake by James Joyce. The results of our analysis of this text are virtually indistinguishable from ideal, purely mathematical multifractals,” says Prof. Drozdz.

The most multifractal works also included A Heartbreaking Work of Staggering Genius by Dave Eggers, Rayuela by Julio Cortazar, The US Trilogy by John Dos Passos, The Waves by Virginia Woolf, 2666 by Roberto Bolano, and Joyce’s Ulysses. At the same time a lot of works usually regarded as stream of consciousness turned out to show little correlation to multifractality, as it was hardly noticeable in books such as Atlas Shrugged by Ayn Rand and A la recherche du temps perdu by Marcel Proust.

“It is not entirely clear whether stream of consciousness writing actually reveals the deeper qualities of our consciousness, or rather the imagination of the writers. It is hardly surprising that ascribing a work to a particular genre is, for whatever reason, sometimes subjective. We see, moreover, the possibility of an interesting application of our methodology: it may someday help in a more objective assignment of books to one genre or another,” notes Prof. Drozdz.

Multifractal analyses of literary texts carried out by the IFJ PAN have been published in Information Sciences, a journal of computer science. The publication has undergone rigorous verification: given the interdisciplinary nature of the subject, editors immediately appointed up to six reviewers.


Journal Reference:

  1. Stanisław Drożdż, Paweł Oświęcimka, Andrzej Kulig, Jarosław Kwapień, Katarzyna Bazarnik, Iwona Grabska-Gradzińska, Jan Rybicki, Marek Stanuszek. Quantifying origin and character of long-range correlations in narrative texts. Information Sciences, 2016; 331: 32. DOI: 10.1016/j.ins.2015.10.023

Quantum algorithm proves more effective than any classical analogue (Revista Fapesp)

December 11, 2015

José Tadeu Arantes | Agência FAPESP – The quantum computer may cease to be a dream and become a reality within the next 10 years. The expectation is that this will bring a drastic reduction in processing time, since quantum algorithms offer more efficient solutions for certain computational tasks than any corresponding classical algorithms.

Until now, the key to quantum computing was believed to lie in correlations between two or more systems. An example of quantum correlation is the process of “entanglement”, which occurs when pairs or groups of particles are generated or interact in such a way that the quantum state of each particle cannot be described independently, since it depends on the whole (for more information see agencia.fapesp.br/20553/).

A recent study showed, however, that even an isolated quantum system, that is, one with no correlations with other systems, is enough to implement a quantum algorithm faster than its classical analogue. A paper describing the study, Computational speed-up with a single qudit, was published in early October of this year in Scientific Reports, a Nature group journal.

The work, at once theoretical and experimental, started from an idea put forward by the physicist Mehmet Zafer Gedik of Sabanci Üniversitesi in Istanbul, Turkey, and was carried out through a collaboration between Turkish and Brazilian researchers. Felipe Fernandes Fanchini, of the Faculdade de Ciências of the Universidade Estadual Paulista (Unesp), Bauru campus, is one of the authors of the paper. His participation in the study took place within the project Controle quântico em sistemas dissipativos (Quantum control in dissipative systems), supported by FAPESP.

“This work makes an important contribution to the debate over which resource is responsible for the superior processing power of quantum computers,” Fanchini told Agência FAPESP.

“Starting from Gedik’s idea, we carried out an experiment in Brazil using the nuclear magnetic resonance (NMR) facility of the Universidade de São Paulo (USP) in São Carlos. It was a collaboration among researchers from three universities: Sabanci, Unesp and USP. We demonstrated that a quantum circuit built from a single physical system, with three or more energy levels, can determine the parity of a numerical permutation by evaluating the function only once. That is unthinkable in a classical protocol.”

According to Fanchini, what Gedik proposed was a very simple quantum algorithm that, essentially, determines the parity of a sequence. The concept of parity indicates whether a sequence is in a given order or not. For example, if we take the digits 1, 2 and 3 and establish that the sequence 1-2-3 is in order, then the sequences 2-3-1 and 3-1-2, obtained by cyclic permutations of the digits, are in the same order.

This is easy to see if we picture the digits arranged around a circle. Given the first sequence, one rotation in one direction yields the next sequence, and one more rotation yields the other. The sequences 1-3-2, 3-2-1 and 2-1-3, however, can only be produced by acyclic permutations. So if we agree to call the first three sequences “even”, the other three will be “odd”.

“In classical terms, observing a single digit, that is, making a single measurement, does not tell you whether the sequence is even or odd. At least two observations are needed. What Gedik showed is that, in quantum terms, a single measurement is enough to determine the parity. That is why the quantum algorithm is faster than any classical equivalent. And this algorithm can be realized with a single particle, which means its efficiency does not depend on any kind of quantum correlation,” Fanchini said.
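
A small numerical sketch of the idea is given below: a single three-level system (qutrit) is prepared in a Fourier-basis state, each permutation of (1, 2, 3) is applied exactly once, and a single measurement after an inverse Fourier transform separates the cyclic (“even”) from the acyclic (“odd”) permutations. This follows the published single-qudit scheme only in outline and is a simulation, not the NMR experiment described in the article.

```python
import numpy as np
from itertools import permutations

d = 3
omega = np.exp(2j * np.pi / d)
# Quantum Fourier transform on a qutrit, and the Fourier-basis state built from |1>
F = np.array([[omega ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)
psi = F @ np.eye(d)[1]

def permutation_operator(perm):
    """Unitary sending basis state |j> to |perm[j]>."""
    U = np.zeros((d, d))
    for j, target in enumerate(perm):
        U[target, j] = 1
    return U

for perm in permutations(range(d)):
    # Apply the permutation ONCE, undo the Fourier transform, then measure.
    out = F.conj().T @ (permutation_operator(perm) @ psi)
    probabilities = np.abs(out) ** 2
    outcome = int(np.argmax(probabilities))
    # Cyclic permutations always give measurement outcome 1, acyclic ones outcome 2
    # (0-based basis labels), so one measurement reveals the parity.
    label = "even (cyclic)" if outcome == 1 else "odd (acyclic)"
    print(tuple(x + 1 for x in perm), "->", label)
```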

The algorithm in question does not say what the sequence is, but it does say whether it is even or odd. This is only possible when there are three or more levels: with only two levels, something like 1-2 or 2-1, it is impossible to define an even or odd sequence. “Recently, the quantum computing community has been exploring a key concept of quantum theory, the concept of ‘contextuality’. Since contextuality also only comes into play with three or more levels, we suspect that it may be behind the effectiveness of our algorithm,” the researcher added.

The concept of contextuality

“The concept of contextuality can be better understood by comparing the ideas of measurement in classical and quantum physics. In classical physics, measurement is assumed to do nothing more than reveal characteristics already possessed by the system being measured, such as a certain length or a certain mass. In quantum physics, by contrast, the result of a measurement depends not only on the characteristic being measured but also on how the measurement was arranged and on all the measurements made before it. In other words, the result depends on the context of the experiment, and contextuality is the quantity that describes this context,” Fanchini explained.

In the history of physics, contextuality was recognized as a necessary feature of quantum theory through the famous Bell theorem. According to this theorem, published in 1964 by the Irish physicist John Stewart Bell (1928-1990), no physical theory based on local variables can reproduce all the predictions of quantum mechanics. In other words, physical phenomena cannot be described in strictly local terms, since they express the whole.

“It is important to stress that another paper [Contextuality supplies the ‘magic’ for quantum computation], published in Nature in June 2014, points to contextuality as the possible source of the power of quantum computing. Our study goes in the same direction, presenting a concrete algorithm that is more efficient than anything imaginable along classical lines.”

Preventing famine with mobile phones (Science Daily)

Date: November 19, 2015

Source: Vienna University of Technology, TU Vienna

Summary: With a mobile data collection app and satellite data, scientists will be able to predict whether a certain region is vulnerable to food shortages and malnutrition, say experts. By scanning Earth’s surface with microwave beams, researchers can measure the water content in soil. Comparing these measurements with extensive data sets obtained over the last few decades, it is possible to calculate whether the soil is sufficiently moist or whether there is danger of droughts. The method has now been tested in the Central African Republic.


Does drought lead to famine? A mobile app helps to collect information. Credit: Image courtesy of Vienna University of Technology, TU Vienna

With a mobile data collection app and satellite data, scientists will be able to predict whether a certain region is vulnerable to food shortages and malnutrition. The method has now been tested in the Central African Republic.

There are different possible causes for famine and malnutrition — not all of which are easy to foresee. Drought and crop failure can often be predicted by monitoring the weather and measuring soil moisture. But other risk factors, such as socio-economic problems or violent conflicts, can endanger food security too. For organizations such as Doctors without Borders / Médecins Sans Frontières (MSF), it is crucial to obtain information about vulnerable regions as soon as possible, so that they have a chance to provide help before it is too late.

Scientists from TU Wien in Vienna, Austria and the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria have now developed a way to monitor food security using a smartphone app, which combines weather and soil moisture data from satellites with crowd-sourced data on the vulnerability of the population, e.g. malnutrition and other relevant socioeconomic data. Tests in the Central African Republic have yielded promising results, which have now been published in the journal PLOS ONE.

Step One: Satellite Data

“For years, we have been working on methods of measuring soil moisture using satellite data,” says Markus Enenkel (TU Wien). By scanning Earth’s surface with microwave beams, researchers can measure the water content in soil. Comparing these measurements with extensive data sets obtained over the last few decades, it is possible to calculate whether the soil is sufficiently moist or whether there is danger of droughts. “This method works well and it provides us with very important information, but information about soil moisture deficits is not enough to estimate the danger of malnutrition,” says IIASA researcher Linda See. “We also need information about other factors that can affect the local food supply.” For example, political unrest may prevent people from farming, even if weather conditions are fine. Such problems can of course not be monitored from satellites, so the researchers had to find a way of collecting data directly in the most vulnerable regions.

“Today, smartphones are available even in developing countries, and so we decided to develop an app, which we called SATIDA COLLECT, to help us collect the necessary data,” says IIASA-based app developer Mathias Karner. For a first test, the researchers chose the Central African Republic – one of the world’s most vulnerable countries, suffering from chronic poverty, violent conflicts, and weak disaster resilience. Local MSF staff was trained for a day and collected data, conducting hundreds of interviews.

“How often do people eat? What are the current rates of malnutrition? Have any family members left the region recently, has anybody died? — We use the answers to these questions to statistically determine whether the region is in danger,” says Candela Lanusse, nutrition advisor from Doctors without Borders. “Sometimes all that people have left to eat is unripe fruit or the seeds they had stored for next year. Sometimes they have to sell their cattle, which may increase the chance of nutritional problems. This kind of behavior may indicate future problems, months before a large-scale crisis breaks out.”

A Map of Malnutrition Danger

The digital questionnaire of SATIDA COLLECT can be adapted to local eating habits, as the answers and the GPS coordinates of every assessment are stored locally on the phone. When an internet connection is available, the collected data are uploaded to a server and can be analyzed along with satellite-derived information about drought risk. In the end a map could be created, highlighting areas where the danger of malnutrition is high. For Doctors without Borders, such maps are extremely valuable. They help to plan future activities and provide help as soon as it is needed.
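
A toy sketch of how survey answers and a satellite-derived drought index might be merged into a per-region risk flag is given below; the field names, weights and threshold are invented for illustration and do not reflect the actual SATIDA COLLECT processing chain.

```python
from statistics import mean

# Illustrative records only: each assessment carries a region tag (from GPS),
# simple food-consumption answers and a drought index derived from satellite
# soil-moisture anomalies (the real SATIDA COLLECT fields differ).
assessments = [
    {"region": "A", "meals_per_day": 1.0, "malnutrition_cases": 3, "drought_index": 0.8},
    {"region": "A", "meals_per_day": 1.5, "malnutrition_cases": 2, "drought_index": 0.8},
    {"region": "B", "meals_per_day": 2.8, "malnutrition_cases": 0, "drought_index": 0.2},
]

def risk_score(rows):
    """Combine survey answers and drought information into one relative score."""
    few_meals = mean(3.0 - min(r["meals_per_day"], 3.0) for r in rows) / 3.0
    cases = mean(r["malnutrition_cases"] for r in rows)
    drought = mean(r["drought_index"] for r in rows)
    return 0.4 * few_meals + 0.3 * min(cases / 5.0, 1.0) + 0.3 * drought

by_region = {}
for row in assessments:
    by_region.setdefault(row["region"], []).append(row)

# Regions whose combined score crosses the (illustrative) threshold would be
# highlighted on the map as areas where the danger of malnutrition is high.
for region, rows in by_region.items():
    score = risk_score(rows)
    flag = "HIGH" if score > 0.5 else "low"
    print(f"region {region}: risk {score:.2f} ({flag})")
```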

“Testing this tool in the Central African Republic was not easy,” says Markus Enenkel. “The political situation there is complicated. However, even under these circumstances we could show that our technology works. We were able to gather valuable information.” SATIDA COLLECT has the potential to become a powerful early warning tool. It may not be able to prevent crises, but it will at least help NGOs to mitigate their impacts via early intervention.


Story Source:

The above post is reprinted from materials provided by Vienna University of Technology, TU Vienna. Note: Materials may be edited for content and length.


Journal Reference:

  1. Markus Enenkel, Linda See, Mathias Karner, Mònica Álvarez, Edith Rogenhofer, Carme Baraldès-Vallverdú, Candela Lanusse, Núria Salse. Food Security Monitoring via Mobile Data Collection and Remote Sensing: Results from the Central African Republic. PLOS ONE, 2015; 10 (11): e0142030. DOI: 10.1371/journal.pone.0142030

Doubts over El Niño delay disaster preparedness (SciDev.Net)

Image credit: Patrick Brown/Panos

27/10/15

Martín De Ambrosio

At a glance

  • The phenomenon’s effects are still unclear across the continent
  • There is no certainty, but sitting idle is not an option, according to the Pan American Health Organization
  • There is 95 percent scientific consensus on the chances of a strong El Niño

Disagreements among scientists over whether or not Central and South America will suffer a strong El Niño event are causing some delay in preparations, warn the main organizations working on the region’s climate.

Some South American researchers still have doubts about how the event is unfolding this year. This uncertainty affects officials and governments, who should act as soon as possible to prevent the worst scenarios, including deaths from natural disasters, the meteorological organizations argue.

Eduardo Zambrano, a researcher at the Centro de Investigación Internacional sobre el Fenómeno de El Niño (CIIFEN) in Ecuador, one of the World Meteorological Organization’s regional centers, says the problem is that the phenomenon’s effects have not yet been clear and evident across the whole continent.

“De todos modos podemos hablar sobre las extremas sequías en el noreste de Brasil, Venezuela y la zona del Caribe”, dice, y menciona además las inusualmente fuertes lluvias en el desierto de Atacama en Chile desde marzo y las inundaciones en zonas de Argentina, Uruguay y Paraguay.

El Niño alcanza su pico cuando una masa de aguas cálidas para los habituales parámetros del este del Océano Pacífico, se mueve de norte a sur y toca costas peruanas y ecuatorianas. Este movimiento causa efectos en cascada y estragos en todo el sistema de América Central y del Sur, convirtiendo las áridas regiones altas en lluviosas, al tiempo que se presentan sequías en las tierras bajas y tormentas sobre el Caribe.

Pero El Niño continúa siendo de difícil predicción debido a sus muy diferentes impactos. Los científicos, según Zambrano, esperaban al Niño el año pasado “cuando todas las alarmas sonaron, y luego no pasó nada demasiado extraordinario debido a un cambio en la dirección de los vientos”.

Tras ese error, muchas organizaciones prefirieron la cautela para evitar el alarmismo. “Algunas imágenes de satélite nos muestran un Océano Pacífico muy caliente, una de las características de El Niño”, dice Willian Alva León, presidente de la Sociedad Meteorológica del Perú. Pero, agrega, este calor no se mueve al sudeste, hacia las costas peruanas, como sucedería en caso del evento El Niño.

Alva León cree que los peores efectos ya sucedieron este año, lo que significa que el fenómeno está en retirada. “El Niño tiene un límite de energía y creo que ya ha sido alcanzado este año”, dice.

Este desacuerdo entre las instituciones de investigación del clima preocupa a quienes generan políticas, pues necesitan guías claras para iniciar las preparaciones necesarias del caso. Ciro Ugarte, asesor regional del área de Preparativos para Emergencia y Socorro en casos de Desastrede la Organización Panamericana de la Salud, dice que es obligatorio actuar como si El Niño en efecto estuviera en proceso para asegurar que el continente enfrente las posibles consecuencias.

“Estar preparados es importante porque reduce el impacto del fenómeno así como otras enfermedades que hoy son epidémicas”, dice.

Para asegurar el grado de probabilidad de El Niño, algunos científicos usan modelos que abstraen datos de la realidad y generan predicciones. María Teresa Martínez, subdirectora de meteorología del Instituto de Hidrología, Meteorología y Estudios Ambientales de Colombia, señala que los modelos más confiables predijeron en marzo que había entre un 50 y un 60 por ciento de posibilidad de un evento El Niño. “Ahora El Niño se desarrolla con fuerza desde su etapa de formación hacia la etapa de madurez, que será alcanzada en diciembre”, señala.
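A probability like “between 50 and 60 per cent” is typically ensemble bookkeeping: many model runs are made, and the share of runs that cross the El Niño threshold is counted. The sketch below is illustrative only; the anomaly values are invented, and the 0.5 °C Niño 3.4 threshold is a common working definition rather than anything taken from this article.

    # Illustrative only: how an ensemble probability such as "50-60%" is
    # commonly derived. The anomalies below are made up; real forecasts use
    # dozens of dynamical and statistical model runs.
    import statistics

    # Hypothetical Nino 3.4 sea-surface temperature anomalies (deg C)
    ensemble_anomalies = [0.9, 0.3, 1.1, 0.6, 0.2, 0.8, 0.4, 1.4, 0.7, 0.1]

    EL_NINO_THRESHOLD = 0.5  # common working definition of El Nino conditions

    exceeding = [a for a in ensemble_anomalies if a >= EL_NINO_THRESHOLD]
    probability = len(exceeding) / len(ensemble_anomalies)

    print(f"Ensemble mean anomaly: {statistics.mean(ensemble_anomalies):.2f} C")
    print(f"Estimated probability of El Nino conditions: {probability:.0%}")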

Ugarte admits there are no certainties, but says that for his organization “doing nothing is not an option.”

“As makers of prevention policy, what we have to do is work from the consensus among scientists, and today that consensus says there is a 95% chance of a strong or very strong El Niño event,” he says.

Warming could triple drought in the Amazon (Observatório do Clima)

15/10/2015

Drought in Silves, Amazonas state, in 2005. Photo: Ana Cintia Gazzelli/WWF

Computer models suggest that the eastern Amazon, which holds most of the forest, would see more droughts, fires and tree death, while the west would become rainier.

Climate change could increase the frequency of both droughts and extreme rainfall in the Amazon before mid-century, combining with deforestation to cause massive tree die-offs, fires and carbon emissions. That is the conclusion of an assessment of 35 climate models applied to the region, carried out by researchers from the United States and Brazil.

According to the study, led by Philip Duffy of the WHRC (Woods Hole Research Center, in the US) and Stanford University, the area affected by extreme droughts in the eastern Amazon, the region that encompasses most of the forest, could triple by 2100. Paradoxically, the frequency of extremely rainy periods, and the area subject to extreme rainfall, tends to grow across the whole region after 2040 – even in places where mean annual precipitation decreases.

The western Amazon, especially Peru and Colombia, should instead see an increase in mean annual precipitation.

A change in rainfall patterns is a long-theorized effect of global warming. With more energy in the atmosphere and more water vapor, the result of greater evaporation from the oceans, the tendency is for climate extremes to be amplified. The rainy seasons – in the Amazon, the southern-hemisphere summer, which locals call “winter” – become shorter, but the rain falls more intensely.

The forest’s response to these changes, however, has been a matter of controversy among scientists. Studies from the 1990s proposed that the Amazon’s reaction would be a broad “savannization”, with the die-off of large trees and the transformation of vast stretches of the forest into an impoverished savanna.

Other studies, however, indicated that the heat and the extra CO2 would have the opposite effect – making trees grow more and fix more carbon, offsetting any losses from drought. On average, therefore, the impact of global warming on the Amazon would be relatively small.

As it happens, the Amazon itself has taken care of giving scientists hints of how it would react. In 2005, 2007 and 2010, the forest went through historic droughts. The result was widespread tree mortality and fires in primary forest across more than 85,000 square kilometers. Duffy’s group, which also includes Paulo Brando of Ipam (the Amazon Environmental Research Institute), points out that 1% to 2% of the Amazon’s carbon was released into the atmosphere as a result of the droughts of the 2000s. Brando and colleagues at Ipam had already shown that the Amazon is becoming more flammable, probably owing to the combined effects of climate and deforestation.

The researchers simulated the region’s future climate using models from the so-called CMIP5 project, used by the IPCC (Intergovernmental Panel on Climate Change) in its latest assessment report on the global climate. One member of the group, Chris Field of Stanford, was a coordinator of that report – he was also a candidate for the IPCC presidency in the election held last week, losing to Korea’s Hoesung Lee.

The computer models were run under the worst-case emissions scenario, known as RCP 8.5, which assumes that little will be done to curb greenhouse gas emissions.

Not only did they capture well the influence of Atlantic and Pacific ocean temperatures on rainfall patterns in the Amazon – differences between the two oceans explain why the eastern Amazon will become drier and the west wetter – they also reproduced, in their simulations of future drought, a feature of the record droughts of 2005 and 2010: the far north of the Amazon saw a large increase in rainfall while the center and south were parched.

According to the researchers, the study may even be conservative, since it only took variations in precipitation into account. “For example, rainfall in the eastern Amazon depends strongly on evapotranspiration, so a reduction in tree cover could reduce precipitation,” Duffy and Brando wrote. “This suggests that, if processes related to land-use change were better represented in the CMIP5 models, drought intensity could be greater than projected here.”
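For readers curious what the multi-model arithmetic behind a statement such as “the area affected by extreme drought could triple” looks like, here is a minimal sketch with synthetic numbers. It is not the paper’s code or data, and the threshold of two standard deviations below the baseline is an assumption: for each model, compute the fraction of grid cells in extreme drought, then average across the ensemble.

    # Sketch of multi-model drought-area bookkeeping, with synthetic data
    # standing in for CMIP5 precipitation anomalies under RCP 8.5.
    import numpy as np

    rng = np.random.default_rng(0)
    n_models, n_cells = 35, 1000  # 35 models, 1000 grid cells over eastern Amazonia

    # Standardized precipitation anomalies: an invented baseline period and an
    # invented (drier, more variable) end-of-century period.
    baseline = rng.normal(0.0, 1.0, (n_models, n_cells))
    future = rng.normal(-0.6, 1.3, (n_models, n_cells))

    THRESHOLD = -2.0  # assumed definition of "extreme drought"

    def drought_area_fraction(anomalies):
        # Fraction of grid cells in extreme drought, computed per model.
        return (anomalies < THRESHOLD).mean(axis=1)

    base_frac = drought_area_fraction(baseline).mean()
    future_frac = drought_area_fraction(future).mean()

    print(f"Baseline drought area: {base_frac:.1%} of cells")
    print(f"End-of-century drought area: {future_frac:.1%} of cells "
          f"({future_frac / base_frac:.1f}x the baseline)")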

The study was published in PNAS, the journal of the US National Academy of Sciences. (Observatório do Clima/ #Envolverde)

* Originally published on the Observatório do Clima website.

‘Targeted punishments’ against countries could tackle climate change (Science Daily)

Date:
August 25, 2015
Source:
University of Warwick
Summary:
Targeted punishments could provide a path to international climate change cooperation, new research in game theory has found.

This is a diagram of two possible strategies of targeted punishment studied in the paper. Credit: Royal Society Open Science

Targeted punishments could provide a path to international climate change cooperation, new research in game theory has found.

Conducted at the University of Warwick, the research suggests that in situations such as climate change, where everyone would be better off if everyone cooperated but it may not be individually advantageous to do so, the use of a strategy called ‘targeted punishment’ could help shift society towards global cooperation.

Despite the name, the ‘targeted punishment’ mechanism can apply to positive or negative incentives. The research argues that the key factor is that these incentives are not necessarily applied to everyone who may seem to deserve them. Rather, rules should be devised according to which only a small number of players are considered responsible at any one time.

The study’s author Dr Samuel Johnson, from the University of Warwick’s Mathematics Institute, explains: “It is well known that some form of punishment, or positive incentives, can help maintain cooperation in situations where almost everyone is already cooperating, such as in a country with very little crime. But when there are only a few people cooperating and many more not doing so punishment can be too dilute to have any effect. In this regard, the international community is a bit like a failed state.”

The paper, published in Royal Society Open Science, shows that in situations of entrenched defection (non-cooperation), there exist strategies of ‘targeted punishment’ available to would-be punishers which can allow them to move a community towards global cooperation.

“The idea,” said Dr Johnson, “is not to punish everyone who is defecting, but rather to devise a rule whereby only a small number of defectors are considered at fault at any one time. For example, if you want to get a group of people to cooperate on something, you might arrange them on an imaginary line and declare that a person is liable to be punished if and only if the person to their left is cooperating while they are not. This way, those people considered at fault will find themselves under a lot more pressure than if responsibility were distributed, and cooperation can build up gradually as each person decides to fall in line when the spotlight reaches them.”

For the case of climate change, the paper suggests that countries should be divided into groups, and these groups placed in some order — ideally, according roughly to their natural tendencies to cooperate. Governments would make commitments (to reduce emissions or leave fossil fuels in the ground, for instance) conditional on the performance of the group before them. This way, any combination of sanctions and positive incentives that other countries might be willing to impose would have a much greater effect.
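A toy simulation makes the “imaginary line” rule concrete. The sketch below is a simplification and not the model in the paper: everyone except the first player starts out defecting, a defector is considered at fault only when the player to their left is cooperating, and a targeted player is assumed to fall in line to avoid punishment. Cooperation then spreads along the line one position per round, which is the “spotlight” dynamic Dr Johnson describes.

    # Toy illustration of the targeted-punishment line rule described above
    # (a simplification, not the model in the Royal Society Open Science paper).

    def step(states):
        """One round: a defector ('D') whose left-hand neighbour cooperates ('C')
        is the only player at fault, and we assume that player falls in line.
        Everyone else keeps doing what they were doing."""
        new_states = list(states)
        for i in range(1, len(states)):
            if states[i - 1] == "C" and states[i] == "D":
                new_states[i] = "C"  # the targeted player switches to cooperation
        return new_states

    # Entrenched defection: only the first player cooperates at the start.
    players = ["C"] + ["D"] * 9
    round_no = 0
    print(round_no, "".join(players))
    while "D" in players:
        players = step(players)
        round_no += 1
        print(round_no, "".join(players))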

“In the mathematical model,” said Dr Johnson, “the mechanism works best if the players are somewhat irrational. It seems a reasonable assumption that this might apply to the international community.”


Journal Reference:

  1. Samuel Johnson. Escaping the Tragedy of the Commons through Targeted Punishment. Royal Society Open Science, 2015 [link]

The Point of No Return: Climate Change Nightmares Are Already Here (Rolling Stone)

The worst predicted impacts of climate change are starting to happen — and much faster than climate scientists expected

BY  August 5, 2015

Walruses

Walruses, like these in Alaska, are being forced ashore in record numbers. Corey Accardo/NOAA/AP 

Historians may look to 2015 as the year when shit really started hitting the fan. Some snapshots: In just the past few months, record-setting heat waves in Pakistan and India each killed more than 1,000 people. In Washington state’s Olympic National Park, the rainforest caught fire for the first time in living memory. London reached 98 degrees Fahrenheit during the hottest July day ever recorded in the U.K.; The Guardian briefly had to pause its live blog of the heat wave because its computer servers overheated. In California, suffering from its worst drought in a millennium, a 50-acre brush fire swelled seventyfold in a matter of hours, jumping across the I-15 freeway during rush-hour traffic. Then, a few days later, the region was pounded by intense, virtually unheard-of summer rains. Puerto Rico is under its strictest water rationing in history as a monster El Niño forms in the tropical Pacific Ocean, shifting weather patterns worldwide.

On July 20th, James Hansen, the former NASA climatologist who brought climate change to the public’s attention in the summer of 1988, issued a bombshell: He and a team of climate scientists had identified a newly important feedback mechanism off the coast of Antarctica that suggests mean sea levels could rise 10 times faster than previously predicted: 10 feet by 2065. The authors included this chilling warning: If emissions aren’t cut, “We conclude that multi-meter sea-level rise would become practically unavoidable. Social disruption and economic consequences of such large sea-level rise could be devastating. It is not difficult to imagine that conflicts arising from forced migrations and economic collapse might make the planet ungovernable, threatening the fabric of civilization.”

Eric Rignot, a climate scientist at NASA and the University of California-Irvine and a co-author on Hansen’s study, said their new research doesn’t necessarily change the worst-case scenario on sea-level rise; it just makes it much more pressing to think about and discuss, especially among world leaders. In particular, says Rignot, the new research shows a two-degree Celsius rise in global temperature — the previously agreed upon “safe” level of climate change — “would be a catastrophe for sea-level rise.”

Hansen’s new study also shows how complicated and unpredictable climate change can be. Even as global ocean temperatures rise to their highest levels in recorded history, some parts of the ocean, near where ice is melting exceptionally fast, are actually cooling, slowing ocean circulation currents and sending weather patterns into a frenzy. Sure enough, a persistently cold patch of ocean is starting to show up just south of Greenland, exactly where previous experimental predictions of a sudden surge of freshwater from melting ice expected it to be. Michael Mann, another prominent climate scientist, recently said of the unexpectedly sudden Atlantic slowdown, “This is yet another example of where observations suggest that climate model predictions may be too conservative when it comes to the pace at which certain aspects of climate change are proceeding.”

Since storm systems and jet streams in the United States and Europe partially draw their energy from the difference in ocean temperatures, the implication of one patch of ocean cooling while the rest of the ocean warms is profound. Storms will get stronger, and sea-level rise will accelerate. Scientists like Hansen only expect extreme weather to get worse in the years to come, though Mann said it was still “unclear” whether recent severe winters on the East Coast are connected to the phenomenon.

And yet, these aren’t even the most disturbing changes happening to the Earth’s biosphere that climate scientists are discovering this year. For that, you have to look not at the rising sea levels but to what is actually happening within the oceans themselves.

Water temperatures this year in the North Pacific have never been this high for this long over such a large area — and it is already having a profound effect on marine life.

Eighty-year-old Roger Thomas runs whale-watching trips out of San Francisco. On an excursion earlier this year, Thomas spotted 25 humpbacks and three blue whales. During a survey on July 4th, federal officials spotted 115 whales in a single hour near the Farallon Islands — enough to issue a boating warning. Humpbacks are occasionally seen offshore in California, but rarely so close to the coast or in such numbers. Why are they coming so close to shore? Exceptionally warm water has concentrated the krill and anchovies they feed on into a narrow band of relatively cool coastal water. The whales are having a heyday. “It’s unbelievable,” Thomas told a local paper. “Whales are all over the place.”

Last fall, in northern Alaska, in the same part of the Arctic where Shell is planning to drill for oil, federal scientists discovered 35,000 walruses congregating on a single beach. It was the largest-ever documented “haul out” of walruses, and a sign that sea ice, their favored habitat, is becoming harder and harder to find.

Marine life is moving north, adapting in real time to the warming ocean. Great white sharks have been sighted breeding near Monterey Bay, California, the farthest north that’s ever been known to occur. A blue marlin was caught last summer near Catalina Island — 1,000 miles north of its typical range. Across California, there have been sightings of non-native animals moving north, such as Mexican red crabs.

Salmon

Salmon on the brink of dying out. Michael Quinton/Newscom

No species may be as uniquely endangered as the one most associated with the Pacific Northwest, the salmon. Every two weeks, Bill Peterson, an oceanographer and senior scientist at the National Oceanic and Atmospheric Administration’s Northwest Fisheries Science Center in Oregon, takes to the sea to collect data he uses to forecast the return of salmon. What he’s been seeing this year is deeply troubling.

Salmon are crucial to their coastal ecosystem like perhaps few other species on the planet. A significant portion of the nitrogen in West Coast forests has been traced back to salmon, which can travel hundreds of miles upstream to lay their eggs. The largest trees on Earth simply wouldn’t exist without salmon.

But their situation is precarious. This year, officials in California are bringing salmon downstream in convoys of trucks, because river levels are too low and the temperatures too warm for them to have a reasonable chance of surviving. One species, the winter-run Chinook salmon, is at a particularly increased risk of decline in the next few years, should the warm water persist offshore.

“You talk to fishermen, and they all say: ‘We’ve never seen anything like this before,’ ” says Peterson. “So when you have no experience with something like this, it gets like, ‘What the hell’s going on?’ ”

Atmospheric scientists increasingly believe that the exceptionally warm waters over the past months are the early indications of a phase shift in the Pacific Decadal Oscillation, a cyclical warming of the North Pacific that happens a few times each century. Positive phases of the PDO have been known to last for 15 to 20 years, during which global warming can increase at double the rate seen during negative phases of the PDO. It also makes big El Niños, like this year’s, more likely. The nature of PDO phase shifts is unpredictable — climate scientists simply haven’t yet figured out precisely what’s behind them and why they happen when they do. It’s not a permanent change — the ocean’s temperature will likely drop from these record highs, at least temporarily, some time over the next few years — but the impact on marine species will be lasting, and scientists have pointed to the PDO as a global-warming preview.

“The climate [change] models predict this gentle, slow increase in temperature,” says Peterson, “but the main problem we’ve had for the last few years is the variability is so high. As scientists, we can’t keep up with it, and neither can the animals.” Peterson likens it to a boxer getting pummeled round after round: “At some point, you knock them down, and the fight is over.”

India

Pavement-melting heat waves in India. Harish Tyagi/EPA/Corbis

Attendant with this weird wildlife behavior is a stunning drop in the number of plankton — the basis of the ocean’s food chain. In July, another major study concluded that acidifying oceans are likely to have a “quite traumatic” impact on plankton diversity, with some species dying out while others flourish. As the oceans absorb carbon dioxide from the atmosphere, it’s converted into carbonic acid — and the pH of seawater declines. According to lead author Stephanie Dutkiewicz of MIT, that trend means “the whole food chain is going to be different.”
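Because pH is a logarithmic measure of hydrogen-ion concentration, even a small decline represents a large change in acidity. A back-of-the-envelope check, using the commonly cited round figures of about 8.2 for pre-industrial and about 8.1 for present-day mean surface-ocean pH (approximations, not values from the study):

    # pH is -log10 of hydrogen-ion concentration, so small pH changes imply
    # large relative changes in acidity. The values are rough, commonly cited
    # approximations.
    pre_industrial_ph = 8.2  # approximate mean surface-ocean pH, pre-industrial
    present_ph = 8.1         # approximate present-day mean surface-ocean pH

    relative_h_increase = 10 ** (pre_industrial_ph - present_ph) - 1
    print(f"A pH drop of {pre_industrial_ph - present_ph:.1f} units means about "
          f"{relative_h_increase:.0%} more hydrogen ions in seawater.")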

The Hansen study may have gotten more attention, but the Dutkiewicz study, and others like it, could have even more dire implications for our future. The rapid changes Dutkiewicz and her colleagues are observing have shocked some of their fellow scientists into thinking that yes, actually, we’re heading toward the worst-case scenario. Unlike a prediction of massive sea-level rise just decades away, the warming and acidifying oceans represent a problem that seems to have kick-started a mass extinction on the same time scale.

Jacquelyn Gill is a paleoecologist at the University of Maine. She knows a lot about extinction, and her work is more relevant than ever. Essentially, she’s trying to save the species that are alive right now by learning more about what killed off the ones that aren’t. The ancient data she studies shows “really compelling evidence that there can be events of abrupt climate change that can happen well within human life spans. We’re talking less than a decade.”

For the past year or two, a persistent change in winds over the North Pacific has given rise to what meteorologists and oceanographers are calling “the blob” — a highly anomalous patch of warm water between Hawaii, Alaska and Baja California that’s thrown the marine ecosystem into a tailspin. Amid warmer temperatures, plankton numbers have plummeted, and the myriad species that depend on them have migrated or seen their own numbers dwindle.

Significant northward surges of warm water have happened before, even frequently. El Niño, for example, does this on a predictable basis. But what’s happening this year appears to be something new. Some climate scientists think that the wind shift is linked to the rapid decline in Arctic sea ice over the past few years, which separate research has shown makes weather patterns more likely to get stuck.

A similar shift in the behavior of the jet stream has also contributed to the California drought and severe polar vortex winters in the Northeast over the past two years. An amplified jet-stream pattern has produced an unusual doldrum off the West Coast that’s persisted for most of the past 18 months. Daniel Swain, a Stanford University meteorologist, has called it the “Ridiculously Resilient Ridge” — weather patterns just aren’t supposed to last this long.

What’s increasingly uncontroversial among scientists is that in many ecosystems, the impacts of the current off-the-charts temperatures in the North Pacific will linger for years, or longer. The largest ocean on Earth, the Pacific is exhibiting cyclical variability to greater extremes than other ocean basins. While the North Pacific is currently the most dramatic area of change in the world’s oceans, it’s not alone: Globally, 2014 was a record-setting year for ocean temperatures, and 2015 is on pace to beat it soundly, boosted by the El Niño in the Pacific. Six percent of the world’s reefs could disappear before the end of the decade, perhaps permanently, thanks to warming waters.

Since warmer oceans expand in volume, ocean warming is also leading to a surge in sea-level rise. One recent study showed a slowdown in Atlantic Ocean currents, perhaps linked to glacial melt from Greenland, that caused a four-inch rise in sea levels along the Northeast coast in just two years, from 2009 to 2010. To be sure, it seems like this sudden and unpredicted surge was only temporary, but scientists who studied the surge estimated it to be a 1-in-850-year event, and it’s been blamed for accelerated beach erosion “almost as significant as some hurricane events.”

Turkey

Biblical floods in Turkey. Ali Atmaca/Anadolu Agency/Getty

Possibly worse than rising ocean temperatures is the acidification of the waters. Acidification has a direct effect on mollusks and other marine animals with hard outer bodies: A striking study last year showed that, along the West Coast, the shells of tiny snails are already dissolving, with as-yet-unknown consequences on the ecosystem. One of the study’s authors, Nina Bednaršek, told Science magazine that the snails’ shells, pitted by the acidifying ocean, resembled “cauliflower” or “sandpaper.” A similarly striking study by more than a dozen of the world’s top ocean scientists this July said that the current pace of increasing carbon emissions would force an “effectively irreversible” change on ocean ecosystems during this century. In as little as a decade, the study suggested, chemical changes will rise significantly above background levels in nearly half of the world’s oceans.

“I used to think it was kind of hard to make things in the ocean go extinct,” James Barry of the Monterey Bay Aquarium Research Institute in California told the Seattle Times in 2013. “But this change we’re seeing is happening so fast it’s almost instantaneous.”

Thanks to the pressure we’re putting on the planet’s ecosystem — warming, acidification and good old-fashioned pollution — the oceans are set up for several decades of rapid change. Here’s what could happen next.

The combination of excessive nutrients from agricultural runoff, abnormal wind patterns and the warming oceans is already creating seasonal dead zones in coastal regions when algae blooms suck up most of the available oxygen. The appearance of low-oxygen regions has doubled in frequency every 10 years since 1960 and should continue to grow over the coming decades at an even greater rate.

So far, dead zones have remained mostly close to the coasts, but in the 21st century, deep-ocean dead zones could become common. These low-oxygen regions could gradually expand in size — potentially thousands of miles across — which would force fish, whales, pretty much everything upward. If this were to occur, large sections of the temperate deep oceans would suffer should the oxygen-free layer grow so pronounced that it stratifies, pushing surface ocean warming into overdrive and hindering upwelling of cooler, nutrient-rich deeper water.

Enhanced evaporation from the warmer oceans will create heavier downpours, perhaps destabilizing the root systems of forests, and accelerated runoff will pour more excess nutrients into coastal areas, further enhancing dead zones. In the past year, downpours have broken records in Long Island, Phoenix, Detroit, Baltimore, Houston and Pensacola, Florida.

Evidence for the above scenario comes in large part from our best understanding of what happened 250 million years ago, during the “Great Dying,” when more than 90 percent of all oceanic species perished after a pulse of carbon dioxide and methane from land-based sources began a period of profound climate change. The conditions that triggered the “Great Dying” took hundreds of thousands of years to develop. But humans have been emitting carbon dioxide at a much quicker rate, so the current mass extinction only took 100 years or so to kick-start.

With all these stressors working against it, a hypoxic feedback loop could wind up destroying some of the oceans’ most species-rich ecosystems within our lifetime. A recent study by Sarah Moffitt of the University of California-Davis said it could take the ocean thousands of years to recover. “Looking forward for my kid, people in the future are not going to have the same ocean that I have today,” Moffitt said.

As you might expect, having tickets to the front row of a global environmental catastrophe is taking an increasingly emotional toll on scientists, and in some cases pushing them toward advocacy. Of the two dozen or so scientists I interviewed for this piece, virtually all drifted into apocalyptic language at some point.

For Simone Alin, an oceanographer focusing on ocean acidification at NOAA’s Pacific Marine Environmental Laboratory in Seattle, the changes she’s seeing hit close to home. The Puget Sound is a natural laboratory for the coming decades of rapid change because its waters are naturally more acidified than most of the world’s marine ecosystems.

The local oyster industry here is already seeing serious impacts from acidifying waters and is going to great lengths to avoid a total collapse. Alin calls oysters, which are non-native, the canary in the coal mine for the Puget Sound: “A canary is also not native to a coal mine, but that doesn’t mean it’s not a good indicator of change.”

Though she works on fundamental oceanic changes every day, the Dutkiewicz study on the impending large-scale changes to plankton caught her off-guard: “This was alarming to me because if the basis of the food web changes, then . . . everything could change, right?”

Alin’s frank discussion of the looming oceanic apocalypse is perhaps a product of studying unfathomable change every day. But four years ago, the birth of her twins “heightened the whole issue,” she says. “I was worried enough about these problems before having kids that I maybe wondered whether it was a good idea. Now, it just makes me feel crushed.”

Katharine Hayhoe

Katharine Hayhoe speaks about climate change to students and faculty at Wayland Baptist University in 2011. Geoffrey McAllister/Chicago Tribune/MCT/Getty

Katharine Hayhoe, a climate scientist and evangelical Christian, moved from Canada to Texas with her husband, a pastor, precisely because of its vulnerability to climate change. There, she engages with the evangelical community on science — almost as a missionary would. But she’s already planning her exit strategy: “If we continue on our current pathway, Canada will be home for us long term. But the majority of people don’t have an exit strategy. . . . So that’s who I’m here trying to help.”

James Hansen, the dean of climate scientists, retired from NASA in 2013 to become a climate activist. But for all the gloom of the report he just put his name to, Hansen is actually somewhat hopeful. That’s because he knows that climate change has a straightforward solution: End fossil-fuel use as quickly as possible. If tomorrow, the leaders of the United States and China would agree to a sufficiently strong, coordinated carbon tax that’s also applied to imports, the rest of the world would have no choice but to sign up. This idea has already been pitched to Congress several times, with tepid bipartisan support. Even though a carbon tax is probably a long shot, for Hansen, even the slim possibility that bold action like this might happen is enough for him to devote the rest of his life to working to achieve it. On a conference call with reporters in July, Hansen said a potential joint U.S.-China carbon tax is more important than whatever happens at the United Nations climate talks in Paris.

One group Hansen is helping is Our Children’s Trust, a legal advocacy organization that’s filed a number of novel challenges on behalf of minors under the idea that climate change is a violation of intergenerational equity — children, the group argues, are lawfully entitled to inherit a healthy planet.

A separate challenge to U.S. law is being brought by a former EPA scientist arguing that carbon dioxide isn’t just a pollutant (which, under the Clean Air Act, can dissipate on its own), it’s also a toxic substance. In general, these substances have exceptionally long life spans in the environment, cause an unreasonable risk, and therefore require remediation. In this case, remediation may involve planting vast numbers of trees or restoring wetlands to bury excess carbon underground.

Even if these novel challenges succeed, it will take years before a bend in the curve is noticeable. But maybe that’s enough. When all feels lost, saving a few species will feel like a triumph.

From The Archives Issue 1241: August 13, 2015

Read more: http://www.rollingstone.com/politics/news/the-point-of-no-return-climate-change-nightmares-are-already-here-20150805

Stop burning fossil fuels now: there is no CO2 ‘technofix’, scientists warn (The Guardian)

Researchers have demonstrated that even if a geoengineering solution to CO2 emissions could be found, it wouldn’t be enough to save the oceans

“The chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said the report’s co-author, Hans Joachim Schellnhuber. Photograph: Doug Perrine/Design Pics/Corbis

German researchers have demonstrated once again that the best way to limit climate change is to stop burning fossil fuels now.

In a “thought experiment” they tried another option: the future dramatic removal of huge volumes of carbon dioxide from the atmosphere. This would, they concluded, return the atmosphere to the greenhouse gas concentrations that existed for most of human history – but it wouldn’t save the oceans.

That is, the oceans would stay warmer, and more acidic, for thousands of years, and the consequences for marine life could be catastrophic.

The research, published in Nature Climate Change today, delivers yet another demonstration that there is so far no feasible “technofix” that would allow humans to go on mining and drilling for coal, oil and gas (known as the “business as usual” scenario), and then geoengineer a solution when climate change becomes calamitous.

Sabine Mathesius (of the Helmholtz Centre for Ocean Research in Kiel and the Potsdam Institute for Climate Impact Research) and colleagues decided to model what could be done with an as-yet-unproven technology called carbon dioxide removal. One example would be to grow huge numbers of trees, burn them, trap the carbon dioxide, compress it and bury it somewhere. Nobody knows if this can be done, but Dr Mathesius and her fellow scientists didn’t worry about that.

They calculated that it might plausibly be possible to remove carbon dioxide from the atmosphere at the rate of 90 billion tons a year. This is twice what is spilled into the air from factory chimneys and motor exhausts right now.

The scientists hypothesised a world that went on burning fossil fuels at an accelerating rate – and then adopted an as-yet-unproven high technology carbon dioxide removal technique.

“Interestingly, it turns out that after ‘business as usual’ until 2150, even taking such enormous amounts of CO2 from the atmosphere wouldn’t help the deep ocean that much – after the acidified water has been transported by large-scale ocean circulation to great depths, it is out of reach for many centuries, no matter how much CO2 is removed from the atmosphere,” said a co-author, Ken Caldeira, who is normally based at the Carnegie Institution in the US.

The oceans cover 70% of the globe. By 2500, ocean surface temperatures would have increased by 5C (9F) and the chemistry of the ocean waters would have shifted towards levels of acidity that would make it difficult for fish and shellfish to flourish. Warmer waters hold less dissolved oxygen. Ocean currents, too, would probably change.

But while change happens in the atmosphere over tens of years, change in the ocean surface takes centuries, and in the deep oceans, millennia. So even if atmospheric temperatures were restored to pre-Industrial Revolution levels, the oceans would continue to experience climatic catastrophe.

“In the deep ocean, the chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said co-author Hans Joachim Schellnhuber, who directs the Potsdam Institute. “If we do not implement emissions reductions measures in line with the 2C (3.6F) target in time, we will not be able to preserve ocean life as we know it.”

Climate Seer James Hansen Issues His Direst Forecast Yet (The Daily Beast) + other sources, and repercussions

A polar bear walks in the snow near the Hudson Bay waiting for the bay to freeze, 13 November 2007, outside Churchill, Manitoba, Canada. Polar bears return to Churchill, the polar bear capital of the world, to hunt for seals on the icepack every year at this time and remain on the icepack feeding on seals until the spring thaw.

Paul J Richards/AFP/Getty

Mark Hertsgaard 

07.20.15 1:00 AM ET

James Hansen’s new study explodes conventional goals of climate diplomacy and warns of 10 feet of sea level rise before 2100. The good news is, we can fix it.

James Hansen, the former NASA scientist whose congressional testimony put global warming on the world’s agenda a quarter-century ago, is now warning that humanity could confront “sea level rise of several meters” before the end of the century unless greenhouse gas emissions are slashed much faster than currently contemplated. This roughly 10 feet of sea level rise—well beyond previous estimates—would render coastal cities such as New York, London, and Shanghai uninhabitable. “Parts of [our coastal cities] would still be sticking above the water,” Hansen says, “but you couldn’t live there.”

James Hansen

Columbia University

This apocalyptic scenario illustrates why the goal of limiting temperature rise to 2 degrees Celsius is not the safe “guardrail” most politicians and media coverage imply it is, argue Hansen and 16 colleagues in a blockbuster study they are publishing this week in the peer-reviewed journal Atmospheric Chemistry and Physics. On the contrary, a 2 C future would be “highly dangerous.”

If Hansen is right—and he has been right, sooner, about the big issues in climate science longer than anyone—the implications are vast and profound.

Physically, Hansen’s findings mean that Earth’s ice is melting and its seas are rising much faster than expected. Other scientists have offered less extreme findings; the United Nations Intergovernmental Panel on Climate Change (IPCC) has projected closer to 3 feet of sea level rise by the end of the century, an amount experts say will be difficult enough to cope with. (Three feet of sea level rise would put runways of all three New York City-area airports underwater unless protective barriers were erected. The same holds for airports in the San Francisco Bay Area.)

Worldwide, approximately $3 trillion worth of infrastructure vital to civilization, such as water treatment plants, power stations, and highways, is located at or below 3 feet of sea level, according to the Stern Review, a comprehensive analysis published by the British government.

Hansen’s track record commands respect. From the time the soft-spoken Iowan told the U.S. Senate in 1988 that man-made global warming was no longer a theory but had in fact begun and threatened unparalleled disaster, he has consistently been ahead of the scientific curve.

Hansen has long suspected that computer models underestimated how sensitive Earth’s ice sheets were to rising temperatures. Indeed, the IPCC excluded ice sheet melt altogether from its calculations of sea level rise. For their study, Hansen and his colleagues combined ancient paleo-climate data with new satellite readings and an improved model of the climate system to demonstrate that ice sheets can melt at a “non-linear” rate: rather than an incremental melting as Earth’s poles inexorably warm, ice sheets might melt at exponential rates, shedding dangerous amounts of mass in a matter of decades, not millennia. In fact, current observations indicate that some ice sheets already are melting this rapidly.

“Prior to this paper I suspected that to be the case,” Hansen told The Daily Beast. “Now we have evidence to make that statement based on much more than suspicion.”

The Nature Climate Change study and Hansen’s new paper give credence to the many developing nations and climate justice advocates who have called for more ambitious action.

Politically, Hansen’s new projections amount to a huge headache for diplomats, activists, and anyone else hoping that a much-anticipated global climate summit the United Nations is convening in Paris in December will put the world on a safe path. President Barack Obama and other world leaders must now reckon with the possibility that the 2 degrees goal they affirmed at the Copenhagen summit in 2009 is actually a recipe for catastrophe. In effect, Hansen’s study explodes what has long been the goal of conventional climate diplomacy.

More troubling, honoring even the conventional 2 degrees C target has so far proven extremely challenging on political and economic grounds. Current emission trajectories put the world on track towards a staggering 4 degrees of warming before the end of the century, an amount almost certainly beyond civilization’s coping capacity. In preparation for the Paris summit, governments have begun announcing commitments to reduce emissions, but to date these commitments are falling well short of satisfying the 2 degrees goal. Now, factor in the possibility that even 2 degrees is too much and many negotiators may be tempted to throw up their hands in despair.

They shouldn’t. New climate science brings good news as well as bad.  Humanity can limit temperature rise to 1.5 degrees C if it so chooses, according to a little-noticed study by experts at the Potsdam Institute for Climate Impacts (now perhaps the world’s foremost climate research center) and the International Institute for Applied Systems Analysis published in Nature Climate Change in May.

“Actions for returning global warming to below 1.5 degrees Celsius by 2100 are in many ways similar to those limiting warming to below 2 degrees Celsius,” said Joeri Rogelj, a lead author of the study. “However … emission reductions need to scale up swiftly in the next decades.” And there’s a significant catch: Even this relatively optimistic study concludes that it’s too late to prevent global temperature rising by 2 degrees C. But this overshoot of the 2 C target can be made temporary, the study argues; the total increase can be brought back down to 1.5 C later in the century.

Besides the faster emissions reductions Rogelj referenced, two additional tools are essential, the study outlines. Energy efficiency—shifting to less wasteful lighting, appliances, vehicles, building materials and the like—is already the cheapest, fastest way to reduce emissions. Improved efficiency has made great progress in recent years but will have to accelerate, especially in emerging economies such as China and India.

Also necessary will be breakthroughs in so-called “carbon negative” technologies. Call it the photosynthesis option: because plants inhale carbon dioxide and store it in their roots, stems, and leaves, one can remove carbon from the atmosphere by growing trees, planting cover crops, burying charred plant materials underground, and other kindred methods. In effect, carbon negative technologies can turn back the clock on global warming, making the aforementioned descent from the 2 C overshoot to the 1.5 C goal later in this century theoretically possible. Carbon-negative technologies thus far remain unproven at the scale needed, however; more research and deployment is required, according to the study.

Together, the Nature Climate Change study and Hansen’s new paper give credence to the many developing nations and climate justice advocates who have called for more ambitious action. The authors of the Nature Climate Change study point out that the 1.5 degrees goal “is supported by more than 100 countries worldwide, including those most vulnerable to climate change.” In May, the governments of 20 of those countries, including the Philippines, Costa Rica, Kenya, and Bangladesh, declared the 2 degrees target “inadequate” and called for governments to “reconsider” it in Paris.

Hansen too is confident that the world “could actually come in well under 2 degrees, if we make the price of fossil fuels honest.”

That means making the market price of gasoline and other products derived from fossil fuels reflect the enormous costs that burning those fuels currently externalizes onto society as a whole. Economists from left to right have advocated achieving this by putting a rising fee or tax on fossil fuels. This would give businesses, governments, and other consumers an incentive to shift to non-carbon fuels such as solar, wind, nuclear, and, best of all, increased energy efficiency. (The cheapest and cleanest fuel is the fuel you don’t burn in the first place.)

But putting a fee on fossil fuels will raise their price to consumers, threatening individual budgets and broader economic prospects, as opponents will surely point out. Nevertheless, higher prices for carbon-based fuels need not have injurious economic effects if the fees driving those higher prices are returned to the public to spend as it wishes. It’s been done that way for years with great success in Alaska, where all residents receive an annual check in compensation for the impact the Alaskan oil pipeline has on the state.

“Tax Pollution, Pay People” is the bumper sticker summary coined by activists at the Citizens Climate Lobby. Legislation to this effect has been introduced in both houses of the U.S. Congress.

Meanwhile, there are also a host of other reasons to believe it’s not too late to preserve a livable climate for young people and future generations.

The transition away from fossil fuels has begun and is gaining speed and legitimacy. In 2014, global greenhouse gas emissions remained flat even as the world economy grew—a first. There has been a spectacular boom in wind and solar energy, including in developing countries, as their prices plummet. These technologies now qualify as a “disruptive” economic force that promises further breakthroughs, said Achim Steiner, executive director of the UN Environment Programme.

Coal, the most carbon-intensive conventional fossil fuel, is in a death spiral, partly thanks to another piece of encouraging news: the historic climate agreement the U.S. and China reached last November, which envisions both nations slashing coal consumption (as China is already doing). Hammering another nail into coal’s coffin, the leaders of Great Britain’s three main political parties pledged to phase out coal, no matter who won the general elections last May.

“If you look at the long-term [for coal], it’s not getting any better,” said Standard & Poor’s Aneesh Prabhu when S&P downgraded coal company bonds to junk status. “It’s a secular decline,” not a mere cyclical downturn.

Last but not least, a vibrant mass movement has arisen to fight climate change, most visibly manifested when hundreds of thousands of people thronged the streets of New York City last September, demanding action from global leaders gathered at the UN. The rally was impressive enough that it led oil and gas giant ExxonMobil to increase its internal estimate of how likely the U.S. government is to take strong action. “That many people marching is clearly going to put pressure on government to do something,” an ExxonMobil spokesman told Bloomberg Businessweek.

The climate challenge has long amounted to a race between the imperatives of science and the contingencies of politics. With Hansen’s paper, the science has gotten harsher, even as the Nature Climate Change study affirms that humanity can still choose life, if it will. The question now is how the politics will respond—now, at Paris in December, and beyond.

Mark Hertsgaard has reported on politics, culture, and the environment from more than 20 countries and written six books, including “HOT: Living Through the Next Fifty Years on Earth.”

*   *   *

Experts make dire prediction about sea levels (CBS)

In the future, there could be major flooding along every coast. So says a new study that warns the world’s seas are rising.

Ever-warming oceans that are melting polar ice could raise sea levels 15 feet in the next 50 to 100 years, NASA’s former climate chief now says. That’s five times higher than previous predictions.

“This is the biggest threat the planet faces,” said James Hansen, the co-author of the new journal article raising that alarm scenario.

“If we get sea level rise of several meters, all coastal cities become dysfunctional,” he said. “The implications of this are just incalculable.”

If ocean levels rise just 10 feet, areas like Miami, Boston, Seattle and New York City would face flooding.

The melting ice would cool ocean surfaces at the poles even more, even as the overall climate continues to warm. The temperature difference would fuel even more volatile weather.

“As the atmosphere gets warmer and there’s more water vapor, that’s going to drive stronger thunderstorms, stronger hurricanes, stronger tornadoes, because they all get their energy from the water vapor,” said Hansen.

Nearly a decade ago, Hansen told “60 Minutes” we had 10 years to get global warming under control, or we would reach a “tipping point.”

“It will be a situation that is out of our control,” he said. “We’re essentially at the edge of that. That’s why this year is a critical year.”

Critical because of a United Nations meeting in Paris that is designed to reach legally binding agreements on carbon emissions, those greenhouse gases that create global warming.

*   *   *

Sea Levels Could Rise Much Faster than Thought (Climate Denial Crock of the Week)

with Peter Sinclair, July 21, 2015

Washington Post:

James Hansen has often been out ahead of his scientific colleagues.

With his 1988 congressional testimony, the then-NASA scientist is credited with putting the global warming issue on the map by saying that a warming trend had already begun. “It is time to stop waffling so much and say that the evidence is pretty strong that the greenhouse effect is here,” Hansen famously testified.

Now Hansen — who retired in 2013 from his NASA post, and is currently an adjunct professor at Columbia University’s Earth Institute — is publishing what he says may be his most important paper. Along with 16 other researchers — including leading experts on the Greenland and Antarctic ice sheets — he has authored a lengthy study outlining a scenario of potentially rapid sea level rise combined with more intense storm systems.

It’s an alarming picture of where the planet could be headed — and hard to ignore, given its author. But it may also meet with considerable skepticism in the broader scientific community, given that its scenarios of sea level rise occur more rapidly than those ratified by the United Nations’ Intergovernmental Panel on Climate Change in its latest assessment of the state of climate science, published in 2013.

In the new study, Hansen and his colleagues suggest that the “doubling time” for ice loss from West Antarctica — the time period over which the amount of loss could double — could be as short as 10 years. In other words, a non-linear process could be at work, triggering major sea level rise in a time frame of 50 to 200 years. By contrast, Hansen and colleagues note, the IPCC assumed more of a linear process, suggesting only around 1 meter of sea level rise, at most, by 2100.
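The arithmetic behind that contrast is easy to sketch. Assuming, purely for illustration, an ice-sheet contribution of about 1 millimeter per year today, a rate that doubles every 10 years produces roughly half a meter of accumulated rise after 50 years, about a meter after 60 and several meters within 80, whereas the same starting rate held constant delivers less than a tenth of a meter over that whole period. The starting rate and the 10-year doubling time are assumptions used only to illustrate the shape of the curve.

    # Back-of-the-envelope contrast between a constant (linear) ice-loss rate
    # and one that doubles every 10 years. The 1 mm/yr starting contribution
    # is an assumption for illustration only.
    import math

    START_RATE_MM_PER_YR = 1.0  # assumed ice-sheet contribution today
    DOUBLING_TIME_YR = 10.0     # short end of the range discussed above

    def cumulative_rise_mm(years, doubling_time=None):
        """Total rise after `years`: exponential growth if a doubling time is
        given, otherwise a constant rate."""
        if doubling_time is None:
            return START_RATE_MM_PER_YR * years
        k = math.log(2) / doubling_time
        # integral of r0 * exp(k * t) dt from 0 to `years`
        return START_RATE_MM_PER_YR * (math.exp(k * years) - 1) / k

    for horizon in (50, 60, 70, 80):
        linear_m = cumulative_rise_mm(horizon) / 1000
        doubling_m = cumulative_rise_mm(horizon, DOUBLING_TIME_YR) / 1000
        print(f"{horizon} yr: constant rate {linear_m:.2f} m, "
              f"10-yr doubling {doubling_m:.2f} m")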

Here, a clip from our extended interview with Eric Rignot in December of 2014.  Rignot is one of the co-authors of the new study.

Slate:

The study—written by James Hansen, NASA’s former lead climate scientist, and 16 co-authors, many of whom are considered among the top in their fields—concludes that glaciers in Greenland and Antarctica will melt 10 times faster than previous consensus estimates, resulting in sea level rise of at least 10 feet in as little as 50 years. The study, which has not yet been peer reviewed, brings new importance to a feedback loop in the ocean near Antarctica that results in cooler freshwater from melting glaciers forcing warmer, saltier water underneath the ice sheets, speeding up the melting rate. Hansen, who is known for being alarmist and also right, acknowledges that his study implies change far beyond previous consensus estimates. In a conference call with reporters, he said he hoped the new findings would be “substantially more persuasive than anything previously published.” I certainly find them to be.

We conclude that continued high emissions will make multi-meter sea level rise practically unavoidable and likely to occur this century. Social disruption and economic consequences of such large sea level rise could be devastating. It is not difficult to imagine that conflicts arising from forced migrations and economic collapse might make the planet ungovernable, threatening the fabric of civilization.

The science of ice melt rates is advancing so fast, scientists have generally been reluctant to put a number to what is essentially an unpredictable, non-linear response of ice sheets to a steadily warming ocean. With Hansen’s new study, that changes in a dramatic way. One of the study’s co-authors is Eric Rignot, whose own study last year found that glacial melt from West Antarctica now appears to be “unstoppable.” Chris Mooney, writing for Mother Jones, called that study a “holy shit” moment for the climate.

Daily Beast:

New climate science brings good news as well as bad. Humanity can limit temperature rise to 1.5 degrees C if it so chooses, according to a little-noticed study by experts at the Potsdam Institute for Climate Impacts (now perhaps the world’s foremost climate research center) and the International Institute for Applied Systems Analysis published in Nature Climate Change in May.

“Actions for returning global warming to below 1.5 degrees Celsius by 2100 are in many ways similar to those limiting warming to below 2 degrees Celsius,” said Joeri Rogelj, a lead author of the study. “However … emission reductions need to scale up swiftly in the next decades.” And there’s a significant catch: Even this relatively optimistic study concludes that it’s too late to prevent global temperature rising by 2 degrees C. But this overshoot of the 2 C target can be made temporary, the study argues; the total increase can be brought back down to 1.5 C later in the century.

Besides the faster emissions reductions Rogelj referenced, two additional tools are essential, the study outlines. Energy efficiency—shifting to less wasteful lighting, appliances, vehicles, building materials and the like—is already the cheapest, fastest way to reduce emissions. Improved efficiency has made great progress in recent years but will have to accelerate, especially in emerging economies such as China and India.

Also necessary will be breakthroughs in so-called “carbon negative” technologies. Call it the photosynthesis option: because plants inhale carbon dioxide and store it in their roots, stems, and leaves, one can remove carbon from the atmosphere by growing trees, planting cover crops, burying charred plant materials underground, and other kindred methods. In effect, carbon negative technologies can turn back the clock on global warming, making the aforementioned descent from the 2 C overshoot to the 1.5 C goal later in this century theoretically possible. Carbon-negative technologies thus far remain unproven at the scale needed, however; more research and deployment is required, according to the study.

*   *   *

Earth’s Most Famous Climate Scientist Issues Bombshell Sea Level Warning (Slate)

Monday’s new study greatly increases the potential for catastrophic near-term sea level rise. Here, Miami Beach, among the most vulnerable cities to sea level rise in the world. Photo by Joe Raedle/Getty Images

In what may prove to be a turning point for political action on climate change, a breathtaking new study casts extreme doubt about the near-term stability of global sea levels.

The study—written by James Hansen, NASA’s former lead climate scientist, and 16 co-authors, many of whom are considered among the top in their fields—concludes that glaciers in Greenland and Antarctica will melt 10 times faster than previous consensus estimates, resulting in sea level rise of at least 10 feet in as little as 50 years. The study, which has not yet been peer-reviewed, brings new importance to a feedback loop in the ocean near Antarctica that results in cooler freshwater from melting glaciers forcing warmer, saltier water underneath the ice sheets, speeding up the melting rate. Hansen, who is known for being alarmist and also right, acknowledges that his study implies change far beyond previous consensus estimates. In a conference call with reporters, he said he hoped the new findings would be “substantially more persuasive than anything previously published.” I certainly find them to be.

To come to their findings, the authors used a mixture of paleoclimate records, computer models, and observations of current rates of sea level rise, but “the real world is moving somewhat faster than the model,” Hansen says.

Hansen’s study does not attempt to predict the precise timing of the feedback loop, only that it is “likely” to occur this century. The implications are mindboggling: In the study’s likely scenario, New York City—and every other coastal city on the planet—may only have a few more decades of habitability left. That dire prediction, in Hansen’s view, requires “emergency cooperation among nations.”

We conclude that continued high emissions will make multi-meter sea level rise practically unavoidable and likely to occur this century. Social disruption and economic consequences of such large sea level rise could be devastating. It is not difficult to imagine that conflicts arising from forced migrations and economic collapse might make the planet ungovernable, threatening the fabric of civilization.

The science of ice melt rates is advancing so fast, scientists have generally been reluctant to put a number to what is essentially an unpredictable, nonlinear response of ice sheets to a steadily warming ocean. With Hansen’s new study, that changes in a dramatic way. One of the study’s co-authors is Eric Rignot, whose own study last year found that glacial melt from West Antarctica now appears to be “unstoppable.” Chris Mooney, writing for Mother Jones, called that study a “holy shit” moment for the climate.

One necessary note of caution: Hansen’s study comes via a nontraditional publishing decision by its authors. The study will be published in Atmospheric Chemistry and Physics, an open-access “discussion” journal, and will not have formal peer review prior to its appearance online later this week. [Update, July 23: The paper is now available.] The complete discussion draft circulated to journalists was 66 pages long, and included more than 300 references. The peer review will take place in real time, with responses to the work by other scientists also published online. Hansen said this publishing timeline was necessary to make the work public as soon as possible before global negotiators meet in Paris later this year. Still, the lack of traditional peer review and the fact that this study’s results go far beyond what’s been previously published will likely bring increased scrutiny. On Twitter, Ruth Mottram, a climate scientist whose work focuses on Greenland and the Arctic, was skeptical of such enormous rates of near-term sea level rise, though she defended Hansen’s decision to publish in a nontraditional way.

In 2013, Hansen left his post at NASA to become a climate activist because, in his words, “as a government employee, you can’t testify against the government.” In a wide-ranging December 2013 study, conducted to support Our Children’s Trust, a group advancing legal challenges to lax greenhouse gas emissions policies on behalf of minors, Hansen called for a “human tipping point”—essentially, a social revolution—as one of the most effective ways of combating climate change, though he still favors a bilateral carbon tax agreed upon by the United States and China as the best near-term climate policy. In the new study, Hansen writes, “there is no morally defensible excuse to delay phase-out of fossil fuel emissions as rapidly as possible.”

Asked whether Hansen has plans to personally present the new research to world leaders, he said: “Yes, but I can’t talk about that today.” What’s still uncertain is whether, as with so many previous dire warnings, world leaders will be willing to listen.

*   *   *

Ice Melt, Sea Level Rise and Superstorms (Climate Sciences, Awareness and Solutions / Earth Institute, Columbia University)

23 July 2015

James Hansen

The paper “Ice melt, sea level rise and superstorms: evidence from paleoclimate data, climate modeling, and modern observations that 2°C global warming is highly dangerous” has been published in Atmospheric Chemistry and Physics Discussions and is freely available here.

The paper draws on a large body of work by the research community, as indicated by the 300 references. No doubt we missed some important relevant contributions, which we may be able to rectify in the final version of the paper. I thank all the researchers who provided data or information, many of whom I may have failed to include in the acknowledgments, as the work for the paper occurred over a period of several years.

I am especially grateful to the Durst family for a generous grant that allowed me to work full time this year on finishing the paper, as well as the other supporters of our program Climate Science, Awareness and Solutions at the Columbia University Earth Institute.

In the conceivable event that you do not read the full paper plus supplement, I include the Acknowledgments here:

Acknowledgments. Completion of this study was made possible by a generous gift from The Durst Family to the Climate Science, Awareness and Solutions program at the Columbia University Earth Institute. That program was initiated in 2013 primarily via support from the Grantham Foundation for Protection of the Environment, Jim and Krisann Miller, and Gerry Lenfest and sustained via their continuing support. Other substantial support has been provided by the Flora Family Foundation, Dennis Pence, the Skoll Global Threats Fund, Alexander Totic and Hugh Perrine. We thank Anders Carlson, Elsa Cortijo, Nil Irvali, Kurt Lambeck, Scott Lehman, and Ulysses Ninnemann for their kind provision of data and related information. Support for climate simulations was provided by the NASA High-End Computing (HEC) Program through the NASA Center for Climate Simulation (NCCS) at Goddard Space Flight Center.

Climate models are even more accurate than you thought (The Guardian)

The difference between modeled and observed global surface temperature changes is 38% smaller than previously thought

Looking across the frozen sea of Ullsfjord in Norway. Melting Arctic sea ice is one complicating factor in comparing modeled and observed surface temperatures. Photograph: Neale Clark/Robert Harding World Imagery/Corbis

Global climate models aren’t given nearly enough credit for their accurate global temperature change projections. As the 2014 IPCC report showed, observed global surface temperature changes have been within the range of climate model simulations.

Now a new study shows that the models were even more accurate than previously thought. In previous evaluations like the one done by the IPCC, climate model simulations of global surface air temperature were compared to global surface temperature observational records like HadCRUT4. However, over the oceans, HadCRUT4 uses sea surface temperatures rather than air temperatures.

A depiction of how global temperatures calculated from models use air temperatures above the ocean surface (right frame), while observations are based on the water temperature in the top few metres (left frame). Created by Kevin Cowtan.

Thus looking at modeled air temperatures and HadCRUT4 observations isn’t quite an apples-to-apples comparison for the oceans. As it turns out, sea surface temperatures haven’t been warming as fast as marine air temperatures, so this comparison introduces a bias that makes the observations look cooler than the model simulations. In other words, the comparisons weren’t quite like-for-like. As lead author Kevin Cowtan told me,

We have highlighted the fact that the planet does not warm uniformly. Air temperatures warm faster than the oceans, air temperatures over land warm faster than global air temperatures. When you put a number on global warming, that number always depends on what you are measuring. And when you do a comparison, you need to ensure you are comparing the same things.

The model projections have generally reported global air temperatures. That’s quite helpful, because we generally live in the air rather than the water. The observations, by mixing air and water temperatures, are expected to slightly underestimate the warming of the atmosphere.

The new study addresses this problem by instead blending the modeled air temperatures over land with the modeled sea surface temperatures to allow for an apples-to-apples comparison. The authors also identified another challenging issue for these model-data comparisons in the Arctic. Over sea ice, surface air temperature measurements are used, but for open ocean, sea surface temperatures are used. As co-author Michael Mann notes, as Arctic sea ice continues to melt away, this is another factor that accurate model-data comparisons must account for.

One key complication that arises is that the observations typically extrapolate land temperatures over sea ice covered regions since the sea surface temperature is not accessible in that case. But the distribution of sea ice changes seasonally, and there is a long-term trend toward decreasing sea ice in many regions. So the observations actually represent a moving target.
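To make the comparison concrete, here is a minimal sketch of the blending idea described above, not the study’s actual code: use modeled near-surface air temperature over land and sea ice, and modeled sea surface temperature over open ocean, so that the model output matches what HadCRUT4 actually measures. The function name, variable names and toy grids are my own assumptions.

```python
import numpy as np

def blend_model_output(t_air, t_sst, land_frac, ice_frac):
    """Blend modeled near-surface air temperature (t_air) with modeled sea
    surface temperature (t_sst) on the same grid. Air temperature is used
    over land and sea ice; SST is used over open ocean."""
    air_weight = np.clip(land_frac + (1.0 - land_frac) * ice_frac, 0.0, 1.0)
    return air_weight * t_air + (1.0 - air_weight) * t_sst

# Toy 2x2 fields (anomalies in degrees C): one all-land cell, one half-iced ocean cell.
t_air = np.array([[1.2, 0.8], [0.5, 0.3]])
t_sst = np.array([[0.9, 0.7], [0.4, 0.2]])
land_frac = np.array([[1.0, 0.0], [0.0, 0.0]])
ice_frac = np.array([[0.0, 0.5], [0.0, 0.0]])
print(blend_model_output(t_air, t_sst, land_frac, ice_frac))
```

Because the ice fraction changes with the seasons and shrinks over time, the weighting itself is a moving target, which is exactly the complication Mann describes.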

A depiction of how as sea ice retreats, some grid cells change from taking air temperatures to taking water temperatures. If the two are not on the same scale, this introduces a bias. Created by Kevin Cowtan.

When accounting for these factors, the study finds that the difference between observed and modeled temperatures since 1975 is smaller than previously believed. The models had projected a 0.226°C per decade global surface air warming trend for 1975–2014 (and 0.212°C per decade over the geographic area covered by the HadCRUT4 record). However, when matching the HadCRUT4 methods for measuring sea surface temperatures, the modeled trend is reduced to 0.196°C per decade. The observed HadCRUT4 trend is 0.170°C per decade.

So when doing an apples-to-apples comparison, the difference between modeled global temperature simulations and observations is 38% smaller than previous estimates. Additionally, as noted in a 2014 paper led by NASA GISS director Gavin Schmidt, less energy from the sun has reached the Earth’s surface than anticipated in these model simulations, both because solar activity declined more than expected and because volcanic activity was higher than expected. Ed Hawkins, another co-author of this study, wrote about this effect.
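As a back-of-envelope check, the trends quoted above do reproduce the headline figure, assuming (my assumption) that the relevant baseline is the 0.212°C per decade modeled trend computed over the area HadCRUT4 actually covers:

```python
model_air_masked = 0.212  # modeled air-temperature trend over HadCRUT4 coverage (C/decade)
model_blended = 0.196     # modeled trend using the HadCRUT4-style blending (C/decade)
observed = 0.170          # observed HadCRUT4 trend (C/decade)

old_gap = model_air_masked - observed  # 0.042 C/decade
new_gap = model_blended - observed     # 0.026 C/decade
print(f"model-observation gap shrinks by {(1 - new_gap / old_gap) * 100:.0f}%")  # about 38%
```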

Combined, the apparent discrepancy between observations and simulations of global temperature over the past 15 years can be partly explained by the way the comparison is done (about a third), by the incorrect radiative forcings (about a third) and the rest is either due to climate variability or because the models are slightly over sensitive on average. But, the room for the latter effect is now much smaller.

Comparison of 84 climate model simulations (using RCP8.5) against HadCRUT4 observations (black), using either air temperatures (red line and shading) or blended temperatures using the HadCRUT4 method (blue line and shading). The upper panel shows anomalies derived from the unmodified climate model results, the lower shows the results adjusted to include the effect of updated forcings from Schmidt et al. (2014).

As Hawkins notes, the remaining discrepancy between modeled and observed temperatures may come down to climate variability; namely the fact that there has been a preponderance of La Niña events over the past decade, which have a short-term cooling influence on global surface temperatures. When there are more La Niñas, we expect temperatures to fall below the average model projection, and when there are more El Niños, we expect temperatures to be above the projection, as may be the case when 2015 breaks the temperature record.

We can’t predict changes in solar activity, volcanic eruptions, or natural ocean cycles ahead of time. If we want to evaluate the accuracy of long-term global warming model projections, we have to account for the difference between the simulated and observed changes in these factors. When the authors of this study did so, they found that climate models have very accurately projected the observed global surface warming trend.

In other words, as I discussed in my book and Denial101x lecture, climate models have proven themselves reliable in predicting long-term global surface temperature changes. In fact, even more reliable than I realized.

Denial101x climate science success stories lecture by Dana Nuccitelli.

There’s a common myth that models are unreliable, often based on apples-to-oranges comparisons, like looking at satellite estimates of temperatures higher in the atmosphere versus modeled surface air temperatures. Or, some contrarians like John Christy will only consider the temperature high in the atmosphere, where satellite estimates are less reliable, and where people don’t live.

This new study has shown that when we do an apples-to-apples comparison, climate models have done a good job projecting the observed temperatures where humans live. And those models predict that unless we take serious and immediate action to reduce human carbon pollution, global warming will continue to accelerate into dangerous territory.

New technique estimates crowd sizes by analyzing mobile phone activity (BBC Brasil)

3 June 2015

Crowd at an airport | Photo: Getty

Researchers are looking for more efficient ways of measuring crowd sizes without relying on images

A study from a British university has developed a new way of estimating crowds at protests and other mass events: by analyzing geographic data from mobile phones and Twitter.

Researchers at Warwick University, in England, analyzed the geolocation of mobile phones and of Twitter messages over a two-month period in Milan, Italy.

At two locations with known visitor numbers, a football stadium and an airport, activity on social networks and on mobile phones rose and fell in a pattern similar to the flow of people.

The team said that, using this technique, it can take measurements at events such as protests.

Other researchers stressed that this kind of data has limitations: only part of the population uses smartphones and Twitter, for example, and not every part of a given space is well covered by phone masts.

But the study’s authors say the results were “an excellent starting point” for more estimates of this kind, with greater precision, in the future.

“These numbers are calibration examples we can build on,” said study co-author Tobias Preis.

“Obviously it would be better to have examples from other countries, other settings, other moments. Human behavior is not uniform across the world, but this is a very good basis for arriving at initial estimates.”

The study, published in the journal Royal Society Open Science, is part of an expanding field of research exploring what online activity can reveal about human behavior and other real-world phenomena.

Photo: F. Botta et al

Scientists compared official visitor numbers at an airport and a stadium with Twitter and mobile phone activity

Federico Botta, the PhD student who led the analysis, said the phone-based methodology has important advantages over other methods of estimating crowd sizes, which usually rely on on-site observations or on images.

“This method is very fast and does not depend on human judgment. It depends only on the data coming from mobile phones or from Twitter activity,” he told the BBC.

Margin of error

With two months of mobile phone data provided by Telecom Italia, Botta and his colleagues focused on Linate airport and the San Siro football stadium, in Milan.

They compared the number of people known to be at those locations at any given moment, based on flight schedules and on ticket sales for the football matches, with three kinds of mobile phone activity: the number of calls made and text messages sent, the amount of internet data used, and the volume of tweets posted.

“What we saw is that these activities really did behave very much like the number of people at the location,” says Botta.

That may not seem so surprising, but, especially at the football stadium, the patterns the team observed were so reliable that they could even make predictions.

There were ten football matches during the period of the experiment. Based on the data from nine of them, it was possible to estimate how many people would be at the tenth match using the mobile phone data alone.

“Our mean absolute percentage error is about 13%. That means our estimates and the real number of people differ from each other, in absolute terms, by about 13%,” says Botta.
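The article does not include the researchers’ code, but the procedure it describes can be sketched roughly as follows: fit a linear relation between activity and known attendance on nine matches, predict the tenth, and score the result as a mean absolute percentage error (MAPE). The numbers below are invented for illustration.

```python
import numpy as np

activity = np.array([12.1, 15.4, 9.8, 20.3, 17.7, 11.2, 14.9, 18.5, 13.3, 16.0])    # e.g. calls, thousands
attendance = np.array([41e3, 55e3, 33e3, 72e3, 63e3, 38e3, 52e3, 66e3, 45e3, 58e3])

errors = []
for i in range(len(activity)):                       # leave one match out at a time
    train = np.delete(np.arange(len(activity)), i)
    slope, intercept = np.polyfit(activity[train], attendance[train], 1)
    predicted = slope * activity[i] + intercept
    errors.append(abs(predicted - attendance[i]) / attendance[i])

print(f"mean absolute percentage error: {100 * np.mean(errors):.1f}%")
```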

According to the researchers, this margin of error stands up well against traditional techniques based on images and human judgment.

They gave the example of the demonstration in Washington known as the “Million Man March”, in 1995, in which even the most careful analyses produced estimates with a 20% error, after initial counts had ranged from 400,000 to two million people.

Crowd at an Italian stadium | Photo: Getty

The precision of the data collected at the football stadium surprised even the research team

According to Ed Manley, of the Centre for Advanced Spatial Analysis at University College London, the technique has potential, and people should feel “optimistic but cautious” about using mobile phone data for such estimates.

“We have these enormous datasets and there is a lot that can be done with them… But we need to be careful about how much we ask of the data,” he said.

He also points out that such information does not reflect a population evenly.

“There are important biases here. Who exactly are we measuring with these datasets?” Twitter, for example, says Manley, has a relatively young and affluent user base.

Beyond these difficulties, the activities to be measured have to be chosen carefully, because people use their phones differently in different places: more calls at the airport and more tweets at the football, for example.

Another important caveat is that the whole approach advocated by Botta depends on phone and internet signal, which varies greatly from place to place, when it is available at all.

“If we are relying on these data to know where people are, what happens when there is a problem with the way the data are collected?” asks Manley.

How Facebook’s Algorithm Suppresses Content Diversity (Modestly) and How the Newsfeed Rules Your Clicks (The Message)

Zeynep Tufekci on May 7, 2015

Today, three researchers at Facebook published an article in Science on how Facebook’s newsfeed algorithm suppresses the amount of “cross-cutting” (i.e. likely to cause disagreement) news articles a person sees. I read a lot of academic research, and usually, the researchers are at pains to highlight their findings. This one buries them as deep as it could, using a mix of convoluted language and irrelevant comparisons. So, first order of business is spelling out what they found. Also, for another important evaluation — with some overlap to this one — go read this post by University of Michigan professor Christian Sandvig.

The most important finding, if you ask me, is buried in an appendix. Here’s the chart showing that the higher an item is in the newsfeed, the more likely it is clicked on.

Notice how steep the curve is. The higher the link, the more (a lot more) likely it is to be clicked on. You live and die by placement, determined by the newsfeed algorithm. (The effect, as Sean J. Taylor correctly notes, is a combination of placement, and the fact that the algorithm is guessing what you would like). This was already known, mostly, but it’s great to have it confirmed by Facebook researchers (the study was solely authored by Facebook employees).

The most important caveat that is buried is that this study is not about all Facebook users, despite language at the end that’s quite misleading. The researchers end their paper with: “Finally, we conclusively establish that on average in the context of Facebook…” No. The research was conducted on a small, skewed subset of Facebook users who chose to self-identify their political affiliation on Facebook and regularly log on to Facebook, only about 4% of the population available for the study. This is super important because this sampling confounds the dependent variable.

The gold standard of sampling is random, where every unit has equal chance of selection, which allows us to do amazing things like predict elections with tiny samples of thousands. Sometimes, researchers use convenience samples — whomever they can find easily — and those can be okay, or not, depending on how typical the sample ends up being compared to the universe. Sometimes, in cases like this, the sampling affects behavior: people who self-identify their politics are almost certainly going to behave quite differently, on average, than people who do not, when it comes to the behavior in question which is sharing and clicking through ideologically challenging content. So, everything in this study applies only to that small subsample of unusual people. (Here’s a post by the always excellent Eszter Hargittai unpacking the sampling issue further.) The study is still interesting, and important, but it is not a study that can generalize to Facebook users. Hopefully that can be a future study.

What does the study actually say?

  • Here’s the key finding: Facebook researchers conclusively show that Facebook’s newsfeed algorithm decreases ideologically diverse, cross-cutting content people see from their social networks on Facebook by a measurable amount. The researchers report that exposure to diverse content is suppressed by Facebook’s algorithm by 8% for self-identified liberals and by 5% for self-identified conservatives. Or, as Christian Sandvig puts it, “the algorithm filters out 1 in 20 cross-cutting hard news stories that a self-identified conservative sees (or 5%) and 1 in 13 cross-cutting hard news stories that a self-identified liberal sees (8%).” You are seeing fewer news items that you’d disagree with which are shared by your friends because the algorithm is not showing them to you.
  • Now, here’s the part which will likely confuse everyone, but it should not. The researchers also report a separate finding that individual choice to limit exposure through clicking behavior results in exposure to 6% less diverse content for liberals and 17% less diverse content for conservatives.

Are you with me? One novel finding is that the newsfeed algorithm (modestly) suppresses diverse content, and another crucial and also novel finding is that placement in the feed (strongly) influences click-through rates.

Researchers then replicate and confirm a well-known, uncontested and long-established finding which is that people have a tendency to avoid content that challenges their beliefs. Then, confusingly, the researchers compare whether algorithm suppression effect size is stronger than people choosing what to click, and have a lot of language that leads Christian Sandvig to call this the “it’s not our fault” study. I cannot remember a worse apples to oranges comparison I’ve seen recently, especially since these two dynamics, algorithmic suppression and individual choice, have cumulative effects.

Comparing the individual choice to algorithmic suppression is like asking about the amount of trans fatty acids in french fries, a newly-added ingredient to the menu, and being told that hamburgers, which have long been on the menu, also have trans-fatty acids — an undisputed, scientifically uncontested and non-controversial fact. Individual self-selection in news sources long predates the Internet, and is a well-known, long-identified and well-studied phenomenon. Its scientific standing has never been in question. However, the role of Facebook’s algorithm in this process is a new — and important — issue. Just as the medical profession would be concerned about the amount of trans-fatty acids in the new item, french fries, as well as in the existing hamburgers, researchers should obviously be interested in algorithmic effects in suppressing diversity, in addition to long-standing research on individual choice, since the effects are cumulative. An addition, not a comparison, is warranted.
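To see why an addition is warranted, a toy calculation helps. Assuming, purely for illustration, that the two filters act independently and multiplicatively on exposure (my simplification, not a claim from the paper), the combined loss of cross-cutting content is larger than either number alone:

```python
def remaining_exposure(algorithmic_cut, choice_cut):
    # Fraction of cross-cutting content still seen if both filters apply.
    return (1 - algorithmic_cut) * (1 - choice_cut)

# Self-identified liberals in the study: 8% algorithmic suppression, 6% individual choice.
print(remaining_exposure(0.08, 0.06))  # ~0.86, i.e. roughly 13-14% less cross-cutting content overall
# Self-identified conservatives: 5% algorithmic suppression, 17% individual choice.
print(remaining_exposure(0.05, 0.17))  # ~0.79, i.e. roughly 21% less overall
```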

Imagine this (imperfect) analogy in which many people complain that, say, a washing machine has a faulty mechanism that sometimes destroys clothes. Now imagine a research paper from the washing machine company which finds this claim is correct for a small subsample of these washing machines, and quantifies that effect, but also looks into how many people throw out their clothes before they are totally worn out, a well-established, undisputed fact in the scientific literature. The correct headline would not be “people throwing out used clothes damages more dresses than the faulty washing machine mechanism.” And if this subsample was drawn from one small factory, located somewhere other than all the other factories that manufacture the same brand, and produced only 4% of the devices, the headline would not refer to all washing machines, and the paper would not (should not) conclude with a claim about the average washing machine.

Also, in passing, the paper’s conclusion appears misstated. Even though the comparison between personal choice and algorithmic effects is not very relevant, the result is mixed, rather than “conclusively establish[ing] that on average in the context of Facebook individual choices more than algorithms limit exposure to attitude-challenging content”. For self-identified liberals, the algorithm was a stronger suppressor of diversity (8% vs. 6%) while for self-identified conservatives, it was a weaker one (5% vs. 17%).

Also, as Christian Sandvig states in this post, and Nathan Jurgenson in this important post here, and David Lazer in the introduction to the piece in Science explore deeply, the Facebook researchers are not studying some neutral phenomenon that exists outside of Facebook’s control. The algorithm is designed by Facebook, and is occasionally re-arranged, sometimes to the devastation of groups who cannot pay-to-play for that all important positioning. I’m glad that Facebook is choosing to publish such findings, but I cannot but shake my head about how the real findings are buried, and irrelevant comparisons take up the conclusion. Overall, from all aspects, this study confirms that for this slice of politically-engaged sub-population, Facebook’s algorithm is a modest suppressor of diversity of content people see on Facebook, and that newsfeed placement is a profoundly powerful gatekeeper for click-through rates. This, not all the roundabout conversation about people’s choices, is the news.

Late Addition: Contrary to some people’s impressions, I am not arguing against all uses of algorithms in making choices in what we see online. The questions that concern me are how these algorithms work, what their effects are, who controls them, and what are the values that go into the design choices. At a personal level, I’d love to have the choice to set my newsfeed algorithm to “please show me more content I’d likely disagree with” — something the researchers prove that Facebook is able to do.

Is the universe a hologram? (Science Daily)

Date: April 27, 2015

Source: Vienna University of Technology

Summary: The ‘holographic principle,’ the idea that a universe with gravity can be described by a quantum field theory in fewer dimensions, has been used for years as a mathematical tool in strange curved spaces. New results suggest that the holographic principle also holds in flat spaces. Our own universe could in fact be two dimensional and only appear three dimensional — just like a hologram.

Is our universe a hologram? Credit: TU Wien 

At first glance, there is not the slightest doubt: to us, the universe looks three dimensional. But one of the most fruitful theories of theoretical physics in the last two decades is challenging this assumption. The “holographic principle” asserts that a mathematical description of the universe actually requires one fewer dimension than it seems. What we perceive as three dimensional may just be the image of two dimensional processes on a huge cosmic horizon.

Up until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle even holds in a flat spacetime.

The Holographic Principle

Everybody knows holograms from credit cards or banknotes. They are two dimensional, but to us they appear three dimensional. Our universe could behave quite similarly: “In 1997, the physicist Juan Maldacena proposed the idea that there is a correspondence between gravitational theories in curved anti-de-sitter spaces on the one hand and quantum field theories in spaces with one fewer dimension on the other,” says Daniel Grumiller (TU Wien).

Gravitational phenomena are described in a theory with three spatial dimensions, the behaviour of quantum particles is calculated in a theory with just two spatial dimensions — and the results of both calculations can be mapped onto each other. Such a correspondence is quite surprising. It is like finding out that equations from an astronomy textbook can also be used to repair a CD-player. But this method has proven to be very successful. More than ten thousand scientific papers about Maldacena’s “AdS-CFT-correspondence” have been published to date.

Correspondence Even in Flat Spaces

For theoretical physics, this is extremely important, but it does not seem to have much to do with our own universe. Apparently, we do not live in such an anti-de-sitter space. These spaces have quite peculiar properties: they are negatively curved, and any object thrown away on a straight line will eventually return. “Our universe, in contrast, is quite flat — and on astronomic distances, it has positive curvature,” says Daniel Grumiller.

However, Grumiller has suspected for quite some time that a correspondence principle could also hold true for our real universe. To test this hypothesis, gravitational theories have to be constructed, which do not require exotic anti-de-sitter spaces, but live in a flat space. For three years, he and his team at TU Wien (Vienna) have been working on that, in cooperation with the University of Edinburgh, Harvard, IISER Pune, the MIT and the University of Kyoto. Now Grumiller and colleagues from India and Japan have published an article in the journal Physical Review Letters, confirming the validity of the correspondence principle in a flat universe.

Calculated Twice, Same Result

“If quantum gravity in a flat space allows for a holographic description by a standard quantum theory, then there must be physical quantities which can be calculated in both theories — and the results must agree,” says Grumiller. In particular, one key feature of quantum mechanics, quantum entanglement, has to appear in the gravitational theory.

When quantum particles are entangled, they cannot be described individually. They form a single quantum object, even if they are located far apart. There is a measure for the amount of entanglement in a quantum system, called “entropy of entanglement.” Together with Arjun Bagchi, Rudranil Basu and Max Riegler, Daniel Grumiller managed to show that this entropy of entanglement takes the same value in flat quantum gravity and in a low dimension quantum field theory.

“This calculation affirms our assumption that the holographic principle can also be realized in flat spaces. It is evidence for the validity of this correspondence in our universe,” says Max Riegler (TU Wien). “The fact that we can even talk about quantum information and entropy of entanglement in a theory of gravity is astounding in itself, and would hardly have been imaginable only a few years back. That we are now able to use this as a tool to test the validity of the holographic principle, and that this test works out, is quite remarkable,” says Daniel Grumiller.

This, however, does not yet prove that we are indeed living in a hologram — but apparently there is growing evidence for the validity of the correspondence principle in our own universe.


Journal Reference:

  1. Arjun Bagchi, Rudranil Basu, Daniel Grumiller, Max Riegler. Entanglement Entropy in Galilean Conformal Field Theories and Flat Holography. Physical Review Letters, 2015; 114 (11). DOI: 10.1103/PhysRevLett.114.111602

Extending climate predictability beyond El Niño (Science Daily)

Date: April 21, 2015

Source: University of Hawaii – SOEST

Summary: Tropical Pacific climate variations and their global weather impacts may be predicted much further in advance than previously thought, according to research by an international team of climate scientists. The source of this predictability lies in the tight interactions between the ocean and the atmosphere and among the Atlantic, the Pacific and the Indian Oceans. Such long-term tropical climate forecasts are useful to the public and policy makers, researchers say.


This image shows inter-basin coupling as a cause of multi-year tropical Pacific climate predictability: Impact of Atlantic warming on global atmospheric Walker Circulation (arrows). Rising air over the Atlantic subsides over the equatorial Pacific, causing central Pacific sea surface cooling, which in turn reinforces the large-scale wind anomalies. Credit: Yoshimitsu Chikamoto

Tropical Pacific climate variations and their global weather impacts may be predicted much further in advance than previously thought, according to research by an international team of climate scientists from the USA, Australia, and Japan. The source of this predictability lies in the tight interactions between the ocean and the atmosphere and among the Atlantic, the Pacific and the Indian Oceans. Such long-term tropical climate forecasts are useful to the public and policy makers.

At present computer simulations can predict the occurrence of an El Niño event at best three seasons in advance. Climate modeling centers worldwide generate and disseminate these forecasts on an operational basis. Scientists have assumed that the skill and reliability of such tropical climate forecasts drop rapidly for lead times longer than one year.

The new findings of predictable climate variations up to three years in advance are based on a series of hindcast computer modeling experiments, which included observed ocean temperature and salinity data. The results are presented in the April 21, 2015, online issue of Nature Communications.

“We found that, even three to four years after starting the prediction, the model was still tracking the observations well,” says Yoshimitsu Chikamoto at the University of Hawaii at Manoa International Pacific Research Center and lead author of the study. “This implies that central Pacific climate conditions can be predicted over several years ahead.”

“The mechanism is simple,” states co-author Shang-Ping Xie from the University of California San Diego. “Warmer water in the Atlantic heats up the atmosphere. Rising air and increased precipitation drive a large atmospheric circulation cell, which then sinks over the Central Pacific. The relatively dry air feeds surface winds back into the Atlantic and the Indian Ocean. These winds cool the Central Pacific leading to conditions, which are similar to a La Niña Modoki event. The central Pacific cooling then strengthens the global atmospheric circulation anomalies.”

“Our results present a paradigm shift,” explains co-author Axel Timmermann, climate scientist and professor at the University of Hawaii. “Whereas the Pacific was previously considered the main driver of tropical climate variability and the Atlantic and Indian Ocean its slaves, our results document a much more active role for the Atlantic Ocean in determining conditions in the other two ocean basins. The coupling between the oceans is established by a massive reorganization of the atmospheric circulation.”

The impacts of the findings are wide-ranging. “Central Pacific temperature changes have a remote effect on rainfall in California and Australia. Seeing the Atlantic as an important contributor to these rainfall shifts, which happen as far away as Australia, came to us as a great surprise. It highlights the fact that on multi-year timescales we have to view climate variability in a global perspective, rather than through a basin-wide lens,” says Jing-Jia Luo, co-author of the study and climate scientist at the Bureau of Meteorology in Australia.

“Our study fills the gap between the well-established seasonal predictions and internationally ongoing decadal forecasting efforts. We anticipate that the main results will soon be corroborated by other climate computer models,” concludes co-author Masahide Kimoto from the University of Tokyo, Japan.

Journal Reference:

  1. Yoshimitsu Chikamoto, Axel Timmermann, Jing-Jia Luo, Takashi Mochizuki, Masahide Kimoto, Masahiro Watanabe, Masayoshi Ishii, Shang-Ping Xie, Fei-Fei Jin. Skilful multi-year predictions of tropical trans-basin climate variability. Nature Communications, 2015; 6: 6869. DOI: 10.1038/ncomms7869

Geoengineering proposal may backfire: Ocean pipes ‘not cool,’ would end up warming climate (Science Daily)

Date: March 19, 2015

Source: Carnegie Institution

Summary: There are a variety of proposals that involve using vertical ocean pipes to move seawater to the surface from the depths in order to reap different potential climate benefits. One idea involves using ocean pipes to facilitate direct physical cooling of the surface ocean by replacing warm surface ocean waters with colder, deeper waters. New research shows that these pipes could actually increase global warming quite drastically.


To combat global climate change caused by greenhouse gases, alternative energy sources and other types of environmental recourse actions are needed. There are a variety of proposals that involve using vertical ocean pipes to move seawater to the surface from the depths in order to reap different potential climate benefits. A new study from a group of Carnegie scientists determines that these types of pipes could actually increase global warming quite drastically. It is published in Environmental Research Letters.

One proposed strategy–called Ocean Thermal Energy Conversion, or OTEC–involves using the temperature difference between deeper and shallower water to power a heat engine and produce clean electricity. A second proposal is to move carbon from the upper ocean down into the deep, where it wouldn’t interact with the atmosphere. Another idea, and the focus of this particular study, proposes that ocean pipes could facilitate direct physical cooling of the surface ocean by replacing warm surface ocean waters with colder, deeper waters.

“Our prediction going into the study was that vertical ocean pipes would effectively cool the Earth and remain effective for many centuries,” said Ken Caldeira, one of the three co-authors.

The team, which also included lead author Lester Kwiatkowski as well as Katharine Ricke, configured a model to test this idea and what they found surprised them. The model mimicked the ocean-water movement of ocean pipes if they were applied globally reaching to a depth of about a kilometer (just over half a mile). The model simulated the motion created by an idealized version of ocean pipes, not specific pipes. As such the model does not include real spacing of pipes, nor does it calculate how much energy they would require.

Their simulations showed that while global temperatures could be cooled by ocean pipe systems in the short term, warming would actually start to increase just 50 years after the pipes go into use. Their model showed that vertical movement of ocean water resulted in a decrease of clouds over the ocean and a loss of sea-ice.

Colder air is denser than warm air. Because of this, the air over the ocean surface that has been cooled by water from the depths has a higher atmospheric pressure than the air over land. The cool air over the ocean sinks downward reducing cloud formation over the ocean. Since more of the planet is covered with water than land, this would result in less cloud cover overall, which means that more of the Sun’s rays are absorbed by Earth, rather than being reflected back into space by clouds.

Water mixing caused by ocean pipes would also bring sea ice into contact with warmer waters, resulting in melting. What’s more, this would further decrease the reflection of the Sun’s radiation, which bounces off ice as well as clouds.

After 60 years, the pipes would cause an increase in global temperature of up to 1.2 degrees Celsius (2.2 degrees Fahrenheit). Over several centuries, the pipes put the Earth on a warming trend towards a temperature increase of 8.5 degrees Celsius (15.3 degrees Fahrenheit).

“I cannot envisage any scenario in which a large scale global implementation of ocean pipes would be advisable,” Kwiatkowski said. “In fact, our study shows it could exacerbate long-term warming and is therefore highly inadvisable at global scales.”

The authors do say, however, that ocean pipes might be useful on a small scale to help aerate ocean dead zones.


Journal Reference:

  1. Lester Kwiatkowski, Katharine L Ricke and Ken Caldeira. Atmospheric consequences of disruption of the ocean thermocline. Environmental Research Letters, 2015. DOI: 10.1088/1748-9326/10/3/034016

Kurt Vonnegut graphed the world’s most popular stories (The Washington Post)

 February 9

This post comes via Know More, Wonkblog’s social media site.

Kurt Vonnegut claimed that his prettiest contribution to culture wasn’t a popular novel like “Cat’s Cradle” or “Slaughterhouse-Five,” but a largely forgotten master’s thesis he wrote while studying anthropology at the University of Chicago. The thesis argued that a main character has ups and downs that can be graphed to reveal the taxonomy of a story, as well as something about the culture it comes from. “The fundamental idea is that stories have shapes which can be drawn on graph paper, and that the shape of a given society’s stories is at least as interesting as the shape of its pots or spearheads,” Vonnegut said.

In addition to churning out novels, Vonnegut was deeply interested in the practice of writing. The tips he wrote for other writers – including “How to write with style” and “Eight rules for writing fiction” — are concise, funny, and still very useful. The thesis shows that Vonnegut’s preoccupation with the nuts and bolts of writing started early in his career.

Vonnegut spelled out the main argument of his thesis in a hilarious lecture, where he also graphed some of the more common story types. (Vonnegut was famously funny and irreverent, and you can hear the audience losing it throughout.) He published the transcript of this talk in his memoir, “A Man Without a Country,” which includes his own drawings of the graphs.

Vonnegut plotted stories on a vertical “G-I axis,” representing the good or ill fortunes of the main character, and a horizontal “B-E” axis that represented the course of the story from beginning to end.

One of the most popular story types is what Vonnegut called “Man in Hole,” graphed here by designer Maya Eilam. Somebody gets in trouble, gets out of it again, and ends up better off than where they started. “You see this story again and again. People love it, and it is not copyrighted,” Vonnegut says in his lecture. A close variant is “Boy Loses Girl,” in which a person gets something amazing, loses it, and then gets it back again.
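For readers who want to play along at home, here is a rough sketch of how one might draw the “Man in Hole” shape on Vonnegut’s axes. The coordinates are invented; only the overall shape, a dip followed by a recovery that ends above the starting point, follows his description.

```python
import numpy as np
import matplotlib.pyplot as plt

b_e = np.linspace(0, 1, 200)  # B-E axis: beginning of the story to the end
# G-I axis: good fortune up, ill fortune down; a dip in the middle, then recovery above the start.
g_i = 0.2 - 0.9 * np.exp(-((b_e - 0.45) / 0.18) ** 2) + 0.6 * b_e

plt.plot(b_e, g_i)
plt.axhline(0, color="gray", linewidth=0.5)
plt.xlabel("B-E (beginning to end)")
plt.ylabel("G-I (good fortune up, ill fortune down)")
plt.title('"Man in Hole"')
plt.show()
```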

Creation and religious stories follow a different arc, one that feels unfamiliar to modern readers. In most creation stories, a deity delivers incremental gifts that build to form the world. The Old Testament features the same pattern, except it ends with humans getting the rug pulled out from under them.

The New Testament follows a more modern story path, according to Vonnegut. He was delighted by the similarity of that story arc with Cinderella, which he called, “The most popular story in our civilization. Every time it’s retold, someone makes a million dollars.”

Some of the most notable works of literature are more ambiguous – like Kafka’s “The Metamorphosis,” which starts off bad and gets infinitely worse, and “Hamlet,” in which story developments are deeply ambiguous.

In his lecture, Vonnegut explains why we consider Hamlet, with this ambiguous and uncomfortable story type, to be a masterpiece:

“Cinderella or Kafka’s cockroach? I don’t think Shakespeare believed in a heaven or hell any more than I do. And so we don’t know whether it’s good news or bad news.

“I have just demonstrated to you that Shakespeare was as poor a storyteller as any Arapaho.

“But there’s a reason we recognize Hamlet as a masterpiece: it’s that Shakespeare told us the truth, and people so rarely tell us the truth in this rise and fall here [indicates blackboard]. The truth is, we know so little about life, we don’t really know what the good news is and what the bad news is.

“And if I die — God forbid — I would like to go to heaven to ask somebody in charge up there, ‘Hey, what was the good news and what was the bad news?’”

Evolutionary mathematics (Folha de S.Paulo)

Hélio Schwartsman

January 26, 2015

SÃO PAULO – For anyone who enjoys mathematics, a good read is “Mathematics and the Real World”, by Zvi Artstein, a professor at the Weizmann Institute in Israel.

The author begins by splitting mathematics in two: a more natural kind, which evolution prepared us (and other animals) to grasp, and a fully abstract kind, whose comprehension demands that we rein in all of our intuitions. In the first group are arithmetic and part of geometry. In the second stand formal logic, statistics, set theory and the bulk of the material mathematicians work on today.

The Egyptians, Babylonians, Indians and other peoples of antiquity developed natural mathematics reasonably well. They did so for practical reasons, such as facilitating trade and astrological calculation. It was the Greeks, however, who, trying to escape what they regarded as the optical illusions of the sensory world, decided to rely on mathematics to uncover the “real”. It is here that mathematics gains the autonomy to flourish beyond intuition.

Artstein then traces a fascinating history of science, highlighting the transformations mathematics had to undergo for theories and models such as heliocentrism, universal gravitation, relativity, quantum mechanics and strings to take hold. He does not shy away from the philosophical implications, although he does not always develop them at length.

The author also discusses more classically mathematical topics, such as uncertainty, chaos, infinity and Gödel’s incompleteness theorems. In a concession to the practical world, he addresses, somewhat hurriedly, a few questions from sociology and computing. He closes by advocating reforms in the teaching of mathematics.

The charm of the book is that Artstein manages to turn a potentially arid subject into a text that reads with the fluidity of a novel. Not just anyone could do that.

Data monitoring and analysis – The crisis in São Paulo’s water sources (Probabit)

Status on 25.1.2015

4.2 millimeters of rain on 24.1.2015 over São Paulo’s reservoirs (weighted average).

305 billion liters (13.60%) of water in storage. In 24 hours, the volume rose by 4.4 billion liters (0.19%).

134 days until all stored water runs out, at 996 mm/year of rainfall and with the system’s current efficiency maintained.

66% is the reduction in consumption needed to balance the system under current conditions, with 33% of the water lost in distribution.


Understanding the crisis

How to read this chart

The points on the chart show 4,040 one-year intervals of accumulated rainfall and change in total water stock (from January 1 of 2003/2004 up to today). The pattern shows that more rain pushes the stock up and less rain pushes it down, as one would expect.

This and the other charts on this page always consider the total water storage capacity of São Paulo (2.24 trillion liters), that is, the sum of the reservoirs of the Cantareira, Alto Tietê, Guarapiranga, Cotia, Rio Grande and Rio Claro systems. Want to explore the data?

The band of accumulated rainfall from 1,400 mm to 1,600 mm per year concentrates most of the points observed from 2003 onward. It is for this usual rainfall pattern that the system was designed. In this band, the system operates without large deviations from equilibrium: at most 15% up or down in a year. By taking the variation over 1 year as the reference, this way of looking at the data removes the seasonal oscillation of rainfall and highlights climate variations of larger amplitude. See the year-by-year patterns.
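A sketch of the 1-year windowing described above, not Probabit’s code: for a daily series of rainfall and stored volume, each point compares the rain accumulated over the previous 365 days with the change in stock over the same period, which removes the seasonal cycle. The series below is synthetic; the real consolidated series described at the end of this page has the same two columns.

```python
import numpy as np
import pandas as pd

dates = pd.date_range("2003-01-01", "2015-01-25", freq="D")   # one row per day
rng = np.random.default_rng(0)
rain_mm = rng.gamma(shape=0.5, scale=8.0, size=len(dates))    # fake daily rainfall (mm)
stock_pct = 60 + np.cumsum(rain_mm - rain_mm.mean()) / 500.0  # fake stored volume (% of capacity)

df = pd.DataFrame({"rain_mm": rain_mm, "stock_pct": stock_pct}, index=dates)
rain_1y = df["rain_mm"].rolling("365D").sum()                 # accumulated rain, mm/year
stock_change_1y = df["stock_pct"].diff(365)                   # change vs. one year earlier, % points
print(pd.DataFrame({"rain_1y": rain_1y, "stock_change_1y": stock_change_1y}).dropna().tail())
```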

A second layer of information on the same chart is the risk zones. The red zone is bounded by the current water stock in %. All points inside this area (with the frequency indicated on the right) therefore represent situations which, if repeated, will lead the system to collapse in less than 1 year. The yellow zone shows the incidence of cases which, if repeated, will lead to a shrinking stock. The system will only truly recover if new points appear above the yellow band.

To put the present moment in context and give a sense of the trend, points connected in blue highlight the reading added today (accumulated rainfall and the change between today and the same day last year) and the readings from 30, 60 and 90 days ago (in progressively lighter shades).


Discussion based on a simple model

Fitting a linear model to the observed cases shows a reasonable correlation between accumulated rainfall and the change in the water stock, as expected.

At the same time, the large spread in the system’s behavior is clear, especially in the rainfall range between 1,400 mm and 1,500 mm. Above 1,600 mm there are two well-separated paths; the lower one corresponds to the period between 2009 and 2010, when the reservoirs were full and the excess rain could not be stored.

Besides management of the available water that is deliberately more or less efficient, combined variations in consumption, in losses and in the effectiveness of water capture may contribute to the observed fluctuations. However, there are no data for examining the effect of each of these variables separately.
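As a concrete illustration of the simple linear model discussed above (not the actual fit behind the chart), one can regress the 1-year stock change on the 1-year accumulated rainfall and read off the implied equilibrium rainfall, the level at which the predicted change is zero. The numbers below are invented.

```python
import numpy as np

rain_1y = np.array([1480, 1520, 1390, 1610, 1450, 1550, 1300, 1050, 980, 1020])            # mm/year
stock_change_1y = np.array([3.0, 6.5, -4.0, 12.0, 1.5, 8.0, -10.0, -28.0, -33.0, -30.0])   # % points

slope, intercept = np.polyfit(rain_1y, stock_change_1y, 1)
equilibrium_rain = -intercept / slope   # rainfall at which the stock neither grows nor shrinks
print(f"slope: {slope:.3f} %-points per mm, equilibrium: {equilibrium_rain:.0f} mm/year")
```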

Simulation 1: Effect of increasing the water stock

In this simulation, the additional reserve of the Billings reservoir, with a volume of 998 billion liters (already discounting the “potable” arm of the Rio Grande reservoir), was hypothetically added to the supply system.

Increasing the available stock does not change the equilibrium point, but it does change the slope of the line that represents the relationship between rainfall and stock variation. The difference in slope between the blue line (simulated) and the red one (real) shows the effect of enlarging the stock.

If the Billings reservoir were not a giant sewage deposit today, we might be out of the critical situation. It is worth emphasizing, however, that simply increasing the stock cannot stave off scarcity indefinitely if rainfall stays below the equilibrium point.

Simulation 2: Effect of improving efficiency

The only way to keep the stock stable when rain becomes scarcer is to change the system’s “efficiency curve”. In other words, it is necessary to consume less and adapt to less water entering the system.

The blue line in the chart alongside indicates the axis around which the points would need to fluctuate for the system to balance with an annual supply of 1,200 mm of rain.

Efficiency can be improved by reducing consumption, reducing losses and improving water-capture technology (for example, by restoring the riparian forests and springs around the water sources).

If the situation seen from 2013 to 2015 persists, with rainfall around 1,000 mm, it will be necessary to reach an efficiency curve far beyond anything achieved so far, above even the best cases ever observed.

With the “design” equilibrium at around 1,500 mm, the arithmetic goes roughly like this: Sabesp loses 500 mm (33% of the water distributed) and the population consumes 1,000 mm. To reach equilibrium quickly at 1,000 mm, consumption would have to be 500 mm, since the losses cannot be avoided quickly and occur before consumption.

If 1/3 of the distributed water were not systematically lost, there would be no crisis. The 500 mm of rain wasted every year by the precariousness of the distribution system is not missed when 1,500 mm falls, but at 1,000 mm every liter thrown away on one side is a liter that has to be saved on the other.

Simulation 3: Current efficiency and the savings required

To estimate the current efficiency, the last 120 observations of the system’s behavior are used.

The current efficiency curve makes it possible to estimate the system’s current equilibrium point (highlighted red point).

The blue point indicates the latest observation of annual accumulated rainfall. The difference between the two measures the size of the imbalance.

Just to stop the system losing water, the withdrawal flow must be cut by 49%. Since this flow includes all the losses, if everything depends on cutting consumption alone, the saving needs to be 66% if losses are 33%, or 56% if losses are 17%.

It seems incredible that the system’s efficiency should be so low in the middle of such a serious crisis. Is the attempt to restrain consumption increasing consumption? Do smaller, shallower volumes evaporate more? Have people still not grasped the size of the disaster?


Prognosis

Assuming that no new water stocks will be added in the short term, the prognosis of whether and when the water will run out depends on the amount of rain and on the system’s efficiency.

The chart shows how many days of water remain as a function of accumulated rainfall, considering two efficiency curves: the average one and the current one (estimated from the last 120 days).

The highlighted point takes the most recent observation of accumulated rainfall for the year and shows how many days of water remain if the current rainfall and efficiency conditions persist.

The prognosis is a reference that shifts as new observations come in and has no defined probability. It is a projection meant to make visible the conditions needed to escape collapse.
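A minimal sketch of the prognosis logic, again not the site’s code: project how many days the stored water lasts if the current rainfall and the current efficiency curve persist. The slope and intercept below are not fitted values; they were back-solved so that the example roughly reproduces the figures quoted at the top of this page (305 billion liters stored, 996 mm/year of rain, about 134 days).

```python
def days_remaining(stock_liters, rain_mm_per_year, slope, intercept, capacity_liters):
    """slope/intercept define a linear 'efficiency curve' mapping annual rainfall (mm)
    to annual stock change, expressed as a fraction of total capacity."""
    annual_change = (slope * rain_mm_per_year + intercept) * capacity_liters  # liters/year
    if annual_change >= 0:
        return float("inf")  # stock is not shrinking under these conditions
    return stock_liters / (-annual_change / 365.0)

print(days_remaining(stock_liters=305e9, rain_mm_per_year=996,
                     slope=0.001, intercept=-1.367, capacity_liters=2.24e12))  # ~134 days
```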

However, bearing in mind that the historical average rainfall in São Paulo is 1,441 mm per year, a curve that crosses this limit means a system with more than a 50% chance of collapsing in less than a year. Are we capable of avoiding the disaster?


The data

The starting point is the data released daily by Sabesp. The updated original data series is available here.

There are, however, two important limitations in these data that can distort one’s reading of reality: 1) Sabesp uses only percentages to refer to reservoirs with very different total volumes; 2) the addition of new volumes does not change the base over which these percentages are calculated.

It was therefore necessary to correct the percentages in the original series against the current total volume, since volumes that used to be inaccessible have become accessible and, let’s face it, were always there in the reservoirs. The corrected series can be obtained here. It contains an additional column with the real volumes (in billions of liters: hm3).

In addition, we decided to treat the data in consolidated form, as if all the water were in a single large reservoir. The data series used to generate the charts on this page contains only the weighted sum of the daily stock (%) and rainfall (mm), and is also available.

The corrections remove the spikes caused by the additions of the “dead volumes” and make the pattern of stock decline in 2014 much easier to see.
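A sketch of the correction described above, not the site’s code: convert the percentage reported against each day’s base volume back into a real volume, then express the daily total as a share of the full, fixed capacity of 2.24 trillion liters (2,240 hm3). The tiny table is invented; the real raw series comes from Sabesp’s daily bulletins.

```python
import pandas as pd

TOTAL_CAPACITY_HM3 = 2240.0   # 1 hm3 = 1 billion liters

raw = pd.DataFrame({
    "date": ["2014-07-01", "2014-07-01", "2014-08-01", "2014-08-01"],
    "reservoir": ["Cantareira", "Alto Tiete", "Cantareira", "Alto Tiete"],
    "pct_reported": [18.0, 30.0, 12.0, 28.0],   # % of that day's base volume
    "base_hm3": [982.0, 558.0, 1164.0, 558.0],  # base grows when a 'dead volume' is added
})
raw["volume_hm3"] = raw["pct_reported"] / 100.0 * raw["base_hm3"]   # undo the shifting base

daily = raw.groupby("date")["volume_hm3"].sum().to_frame()
daily["stock_pct"] = 100.0 * daily["volume_hm3"] / TOTAL_CAPACITY_HM3  # % of the fixed total capacity
print(daily)
```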


Year-by-year patterns


Mean and quartiles of the stock over the year


About this study

Worried about water scarcity, I began studying the problem at the end of 2014. I looked for a concise and consistent way of presenting the data, highlighting the three variables that really matter: rainfall, total stock and the system’s efficiency. The site went live on January 16, 2015. Every day, the models and charts are rebuilt with the new information.

I hope this page helps convey the real scale of the water crisis in São Paulo and encourages more action to confront it.

Mauro Zackiewicz

maurozacgmail.com

scientia probabit | essential data laboratory

Can Humanity’s ‘Great Acceleration’ Be Managed and, If So, How? (Dot Earth, New York Times)

By Andrew C. Revkin | January 15, 2015 5:00 pm

Updated below | Through three-plus decades of reporting, I’ve been seeking ways to better mesh humanity’s infinite aspirations with life on a finite planet. (Do this Google search — “infinite aspirations” “finite planet” Revkin – to get the idea. Also read the 2002 special issue of Science Times titled “Managing Planet Earth.”)

So I was naturally drawn to a research effort that surfaced in 2009 defining a “safe operating space for humanity” by estimating a set of nine “planetary boundaries” for vital-sign-style parameters like levels of greenhouse gases, flows of nitrogen and phosphorus and loss of biodiversity.

A diagram from a 2009 analysis of “planetary boundaries” showed humans were already hitting limits (red denotes danger zones).Credit Stockholm Resilience Center

The same was true for a related “Great Acceleration” dashboard showing humanity’s growth spurt (the graphs below), created by the International Geosphere-Biosphere Program.

A graphic illustrating how human social and economic trends, resource appetites and environmental impacts have surged since 1950.Credit International Geosphere-Biosphere Program

Who would want to drive a car without gauges tracking engine heat, speed and fuel levels? I use that artwork in all my talks.

Now, both the dashboard of human impacts and planetary boundaries have been updated. For more detail on the dashboard, explore the website of the geosphere-biosphere organization.

In a prepared statement, a co-author of the acceleration analysis, Lisa Deutsch, a senior lecturer at the Stockholm Resilience Center, saw little that was encouraging:

Of all the socio-economic trends only construction of new large dams seems to show any sign of the bending of the curves – or a slowing of the Great Acceleration. Only one Earth System trend indicates a curve that may be the result of intentional human intervention – the success story of ozone depletion. The leveling off of marine fisheries capture since the 1980s is unfortunately not due to marine stewardship, but to overfishing.

And all that acceleration (mostly since 1950, as I wrote yesterday) has pushed us out of four safe zones, according to the 18 authors of the updated assessment of environmental boundaries, published online today by the journal Science here: “Planetary Boundaries: Guiding human development on a changing planet.”

The paper is behind a paywall, but the Stockholm Resilience Center, which has led this work, has summarized the results, including the authors’ conclusion that we’re in the danger zone on four of the nine boundaries: climate change, loss of biosphere integrity, land-system change and alteration of biogeochemical cycles (for the nutrients phosphorus and nitrogen).

Their work has been a valuable prod to the community of scientists and policy analysts aiming to smooth the human journey, resulting in a string of additional studies. Some follow-up work has supported the concept, and even broadened it, as with a 2011 proposal by Kate Raworth of the aid group Oxfam to add social-justice boundaries as well: “A Safe and Just Space for Humanity – Can We Live Within the Doughnut?”

Photo: In 2011, Kate Raworth (http://www.oxfam.org/en/research/safe-and-just-space-humanity) at the aid group Oxfam proposed a framework for safe and just human advancement illustrated as a doughnut-shaped zone. Credit: Oxfam

But others have convincingly challenged many of the boundaries and also questioned their usefulness, given how both impacts of, and decisions about, human activities like fertilizing fields or tapping aquifers are inherently local — not planetary in scale. (You’ll hear from some critics below.)

In 2012, the boundaries work helped produce a compelling alternative framework for navigating the Anthropocene — “Planetary Opportunities: A Social Contract for Global Change Science to Contribute to a Sustainable Future.”

I hope the public (and policy makers) will realize this is not a right-wrong, win-lose science debate. A complex planet dominated by a complicated young species will never be managed neatly. All of us, including environmental scientists, will continue to learn and adjust.

I was encouraged, for instance, to see the new iteration of the boundaries analysis take a much more refined view of danger zones, including more of an emphasis on the deep level of uncertainty in many areas:

Photo: A diagram from a paper defining “planetary boundaries” for human activities shows areas of greatest risk in red. Credit: Science

The authors, led by Will Steffen of Australian National University and Johan Rockström of the Stockholm Resilience Center, have tried to refine how they approach risks related to disrupting ecosystems – not simply pointing to lost biological diversity but instead devising a measure of general “biosphere integrity.”

That measure, and the growing human influence on the climate through the buildup of long-lived greenhouse gases, are the main sources of concern, they wrote:

Two core boundaries – climate change and biosphere integrity – have been identified, each of which has the potential on its own to drive the Earth System into a new state should they be substantially and persistently transgressed.

But the bottom line has a very retro feel, adding up to the kind of ominous but generalized warnings that many environmental scientists and other scholars began giving with the “Limits to Growth” analysis in 1972. Here’s a cornerstone passage from the paper, reprising a longstanding view that the environmental conditions of the Holocene – the equable span since the end of the last ice age – are ideal:

The precautionary principle suggests that human societies would be unwise to drive the Earth System substantially away from a Holocene-like condition. A continuing trajectory away from the Holocene could lead, with an uncomfortably high probability, to a very different state of the Earth System, one that is likely to be much less hospitable to the development of human societies.

I sent the Science paper to a batch of environmental researchers who have been constructive critics of the Boundaries work. Four of them wrote a group response, posted below, which includes this total rejection of the idea that the Holocene is somehow special:

[M]ost species evolved before the Holocene and the contemporary ecosystems that sustain humanity are agroecosystems, urban ecosystems and other human-altered ecosystems….

Here’s their full response:

The Limits of Planetary Boundaries
Erle Ellis, Barry Brook, Linus Blomqvist and Ruth DeFries

Steffen et al (2015) revise the “planetary boundaries framework” initially proposed in 2009 as the “safe limits” for human alteration of Earth processes (Rockstrom et al 2009). Limiting human harm to environments is a major challenge and we applaud all efforts to increase the public utility of global-change science. Yet the planetary boundaries (PB) framework – in its original form and as revised by Steffen et al – obscures rather than clarifies the environmental and sustainability challenges faced by humanity this century.

Steffen et al concede that “not all Earth system processes included in the PB have singular thresholds at the global/continental/ocean basin level.” Such processes include biosphere integrity (see Brook et al 2013), biogeochemical flows, freshwater use, and land-system change. “Nevertheless,” they continue, “it is important that boundaries be established for these processes.” Why? Where a global threshold is unknown or lacking, there is no scientifically robust way of specifying such a boundary – determining a limit along a continuum of environmental change becomes a matter of guesswork or speculation (see e.g. Bass 2009; Nordhaus et al 2012). For instance, the land-system boundary for temperate forest is set at 50% of forest cover remaining. There is no robust justification for why this boundary should not be 40%, or 70%, or some other level.

While the stated objective of the PB framework is to “guide human societies” away from a state of the Earth system that is “less hospitable to the development of human societies”, it offers little scientific evidence to support the connection between the global state of specific Earth system processes and human well-being. Instead, the Holocene environment (the most recent 10,000 years) is assumed to be ideal. Yet most species evolved before the Holocene and the contemporary ecosystems that sustain humanity are agroecosystems, urban ecosystems and other human-altered ecosystems that in themselves represent some of the most important global and local environmental changes that characterize the Anthropocene. Contrary to the authors’ claim that the Holocene is the “only state of the planet that we know for certain can support contemporary human societies,” the human-altered ecosystems of the Anthropocene represent the only state of the planet that we know for certain can support contemporary civilization.

Human alteration of environments produces multiple effects, some advantageous to societies, such as enhanced food production, and some detrimental, like environmental pollution with toxic chemicals, excess nutrients and carbon emissions from fossil fuels, and the loss of wildlife and their habitats. The key to better environmental outcomes is not in ending human alteration of environments but in anticipating and mitigating their negative consequences. These decisions and trade-offs should be guided by robust evidence, with global-change science investigating the connections and tradeoffs between the state of the environment and human well-being in the context of the local setting, rather than by framing and reframing environmental challenges in terms of untestable assumptions about the virtues of past environments.

Even without specifying exact global boundaries, global metrics can be highly misleading for policy. For example, with nitrogen, where the majority of human emissions come from synthetic fertilizers, the real-world challenge is to apply just the right amount of nitrogen to optimize crop yields while minimizing nitrogen losses that harm aquatic ecosystems. Reducing fertilizer application in Africa might seem beneficial globally, yet the result in this region would be even poorer crop yields without any notable reduction in nitrogen pollution; Africa’s fertilizer use is already suboptimal for crop yields. What can look like a good or a bad thing globally can prove exactly the opposite when viewed regionally and locally. What use is a global indicator for a local issue? As in real estate, location is everything.

Finally, and most importantly, the planetary boundaries are burdened not only with major uncertainties and weak scientific theory – they are also politically problematic. Real world environmental challenges like nitrogen pollution, freshwater consumption and land-use change are ultimately a matter of politics, in the sense that there are losers and winners, and solutions have to be negotiated among many stakeholders. The idea of a scientific expert group determining top-down global limits on these activities and processes ignores these inevitable trade-offs and seems to preclude democratic resolution of these questions. It has been argued that (Steffen et al 2011):

Ultimately, there will need to be an institution (or institutions) operating, with authority, above the level of individual countries to ensure that the planetary boundaries are respected. In effect, such an institution, acting on behalf of humanity as a whole, would be the ultimate arbiter of the myriad trade-offs that need to be managed as nations and groups of people jockey for economic and social advantage. It would, in essence, become the global referee on the planetary playing field.

Here the planetary boundaries framework reaches its logical conclusion with a political scenario that is as unlikely as it is unpalatable. There is no ultimate global authority to rule over humanity or the environment. Science has a tremendously important role to play in guiding environmental management, not as a decider, but as a resource for deliberative, evidence-based decision making by the public, policy makers, and interest groups on the challenges, trade-offs and possible courses of action in negotiating the environmental challenges of societal development (DeFries et al 2012). Proposing that science itself can define the global environmental limits of human development is simultaneously unrealistic, hubristic, and a strategy doomed to fail.

I’ve posted the response online as a standalone document for easier downloading; there you can view the authors’ references, as well.

Update, 9:40 p.m.| Will Steffen, the lead author of the updated Planetary Boundaries analysis, sent this reply to Ellis and co-authors tonight:

Response to Ellis et al. on planetary boundaries

Of course we welcome constructive debate on and criticism of the planetary boundaries (PB) update paper. However, the comments of Ellis et al. appear to be more of a knee-jerk reaction to the original 2009 paper than a careful analysis of the present paper. In fact, one wonders if they have even read the paper, including the Supplementary Online Material (SOM) where much methodological detail is provided.

One criticism seems to be based on a rather bizarre conflation of a state of the Earth System with (i) the time when individual biological species evolved, and (ii) the nature and distribution of human-altered terrestrial ecosystems. This makes no sense from an Earth System science perspective. The state of the Earth System (a single system at the planetary level) also involves the oceans, the atmosphere, the cryosphere and very important processes like the surface energy balance and the flows and transformation of elements. It is the state of this single complex system, which provides the planetary life support system for humanity, that the PB framework is concerned with, not with fragmentary bits of it in isolation.

In particular, the PB framework is based on the fact – and I emphasise the word “fact” – that the relatively stable Holocene state of the Earth System (the past approximately 11,700 years) is the only state of the System that has allowed the development of agriculture, urban settlements and complex human societies. Some argue that humanity can now survive, and even thrive, in a rapidly destabilizing planetary environment, but that is a belief system based on supreme technological optimism, and is not a reasoned scientifically informed judgment. Also, Ellis et al. seem to conflate human alteration of terrestrial environments with human alteration of the fundamental state of the Earth System as a whole. These are two vastly different things.

The criticisms show further misunderstanding of the nature of complex systems like the Earth System and how they operate. For example, Ellis et al. claim that a process is not important unless it has a threshold. Even a cursory understanding of the carbon cycle, for example, shows that this is nonsense. Neither the terrestrial nor the marine carbon sinks have known large-scale thresholds, yet they are exceedingly important for the functioning of the climate system, which does indeed have known large-scale thresholds such as the melting of the Greenland ice sheet. Sure, it is more challenging to define boundaries for processes that are very important for the resilience of the Earth System but don’t have large-scale thresholds, but it is not impossible. The zone of uncertainty tends to be larger for these boundaries, but as scientific understanding improves, this zone will narrow.

An important misrepresentation of our paper is the assertion that we are somehow suggesting that fertilizer application in Africa be reduced. Nothing could be further from the truth. In fact, if Ellis et al had taken the time to read the SOM, the excellent paper by Carpenter and Bennett (2011) on the P boundary, the equally excellent paper by de Vries et al. (2013) on the N boundary, and the paper by Steffen and Stafford Smith (2013) on the distribution and equity issues for many of the PBs, including N and P, they wouldn’t have made such a misrepresentation.

Finally, the Steffen et al. (2011) paper seems to have triggered yet another misrepresentation. The paragraph of the paper quoted by Ellis et al. is based on contributions from two of the authors who are experts in institutions and governance issues, and does not come from the natural science community. Nowhere in the paragraph quoted, nor in the Steffen et al. (2011) paper as a whole, is there the proposal for a “scientific expert group determining top-down global limits…”. The paragraph reprinted by Ellis et al. doesn’t mention scientists at all. That is a complete misrepresentation of our work.

We reiterate that we very much welcome careful and constructive critiques of the PB update paper, preferably in the peer-reviewed literature. In fact, such critiques of the 2009 PB paper were very helpful in developing the 2015 paper. Knee-jerk reactions in the blogosphere make for interesting reading, but they are far less useful in advancing the science.

Update, Jan. 16, 2:09 p.m. | Johan Rockström and Katherine Richardson, authors of the boundaries analysis, sent these additional reactions to the Ellis et al. critique:

We are honored that Erle Ellis, Barry Brook, Linus Blomqvist and Ruth DeFries (Ellis et al.) show such strong interest in our Planetary Boundaries research. The 2015 science update draws upon the over 60 scientific articles that have been published specifically scrutinizing different aspects of the Planetary Boundaries framework (amongst them the contributions by all these four researchers), and the most recent advancements in Earth System science. This new paper scientifically addresses and clarifies all of the natural science related aspects of Ellis et al.’s critique. It can also be noted that Ellis et al.’s critique simply echoes the standpoints regarding Planetary Boundaries research that the same group (Blomqvist et al., 2012) brought forward in 2012. Now, as then, their criticisms seem largely to be based on misunderstandings and their own viewpoints:

(1) We have never argued that there are planetary scale tipping points for all Planetary Boundary processes. Furthermore, there does not need to be a tipping point for these processes and systems in order for them to function as key regulators of the stability of the Earth system. A good example here is the carbon sink in the biosphere (approximately 4.5 Gt/year) which has doubled over the past 50 years in response to human emissions of CO2 and, thus, provides a good example of Earth resilience at play;

(2) Establishing the Planetary Boundaries, i.e. identifying Earth System scale boundaries for environmental processes that regulate the stability of the planet, does not (of course) contradict or replace the need for local action, transparency and democratic processes. Our society has long accepted the need for local – and to some extent regional – environmental management. Scientific evidence has now accumulated that indicates a further need for management of some environmental challenges at the global level. Many years of multi-lateral climate negotiation indicate a recognized need for global management of the CO2 emissions that occur locally. Our Planetary Boundaries research identifies that there are also other processes critical to the functioning of the Earth System that are so impacted by human activities that they, too, demand management at the global level. Ours is a positive – not a doomsday – message. It will come as no surprise to any reader that there are environmental challenges associated with all of the 9 Earth System functions we examine. Through our research, we offer a framework that can be useful in developing management at a global level.

It is important to emphasize that Ellis et al. associate socio-political attributes to our work that do not exist. The Science paper published today (16th January 2015), is a natural science update and advancement of the planetary boundaries framework. It makes no attempt to enter the (very important) social science realm of equity, institutions or global governance. The implications attributed to the PB framework must, then, reflect Ellis et al.’s own normative values. Furthermore, Ellis et al. argue that the “key to better environmental outcomes is not ending human alteration” but “anticipating and mitigating the negative consequences” of human environmental perturbation. While Planetary Boundaries research does not dictate how societies should use the insights it provides, “anticipating negative consequences” is at the absolute core of our approach!

Regarding Earth system tipping points. As Will Steffen points out in his earlier response, it would have been scientifically more correct for Ellis et al. to refer not only to their own assessment of uncertainties regarding a potential biosphere tipping point but also to the response to their article by Terry Hughes et al. (2014). These researchers presented the current state of empirical evidence concerning changes in interactions and feedbacks and how they can (in several cases do!) trigger tipping points at ecosystem and biome scale, and that such non-linear dynamics at local to regional scale can add up to impacts at the Earth system scale.

A different worldview. The Ellis et al. critique appears not to be a scientific criticism per se but rather is based on their own interpretation of differences in worldview. They do not substantively put in question the stability of the Earth system as a basis for human development – see Will Steffen’s response. Thus, it appears that we and Ellis et al. are in agreement here. Of course species and ecosystems have evolved prior to the Holocene but only in the stable environment of the Holocene have humans been able to exploit the Earth system at scale (e.g., by inventing agriculture as a response to a stable hydro-climate in the Holocene).

Ellis et al. argue that the only constructive avenue is to “investigate the connections and trade-offs between the state of the environment and human well-being in the context of the local setting…”. This is clearly not aligned with current scientific evidence. In the Anthropocene, there is robust evidence showing that we need to address global environmental change at the global level, as well as in regional, national and local contexts, and in particular to understand cross-scale interactions between them.

On global governance. It is hardly surprising, given Ellis et al.’s misunderstanding of the Planetary Boundaries framework, that their interpretation of the implications of operationalizing the framework also rests on misunderstandings. They claim the Planetary Boundaries framework translates into an “ultimate global authority to rule over humanity”. No one would argue that the current multi-lateral climate negotiations are an attempt to establish “ultimate global authority over humanity”, and this has certainly never been suggested by the Planetary Boundaries research. In essence, the Planetary Boundary analysis simply identifies Earth System processes that – in the same manner as climate – regulate the stability of the Earth System and that, if impacted too far by human activities, can potentially disrupt the functioning of the Earth System. The Planetary Boundaries framework is, then, nothing more than a natural science contribution to an important societal discussion, presenting evidence that can support the definition of planetary boundaries to safeguard a stable and resilient Earth system. How this then translates into governance is another issue entirely, and important social science contributions have addressed it (Galaz et al 2012).

As our research shows, there is natural science evidence that global management of some environmental challenges is necessary. From the social science literature (Biermann et al., 2012), as well as from real-world policy making, we see that such global-scale regulation can be constructed in a democratic manner and can establish a safe operating space, e.g. the Montreal protocol, a global agreement that addresses one of the identified planetary boundaries and which, to our knowledge, is never referred to as a “global authority ruling over humanity”. As noted above, the UNFCCC process is also fundamentally concerned with establishing the global “rules of the game” by which society can continue to develop within a climate planetary boundary. The Aichi targets (within the UN Convention on Biological Diversity) of setting aside marine and terrestrial areas for conservation are also good examples of the political translation of a science-based concern over global loss of biodiversity. The coming SDG (Sustainable Development Goals) framework includes a proposed set of four goals (oceans, climate, biodiversity and freshwater), a de facto example of applying planetary boundary thinking to create a global framework for safeguarding a stable environment on the planet for societies and communities across the world.

We find it interesting – and encouraging – that societies and the world community are already developing management tools within several “planetary boundary domains”. In all cases, this is happening in good democratic order, building on bottom-up processes and informed by science. This ought to be reassuring for Ellis et al., who portray implementation of Planetary Boundary thinking as a dark force of planetary rule.

*   *   *

[Reaction]

The Limits of Planetary Boundaries 2.0 (Brave New Climate)

Back in 2013, I led some research that critiqued the ‘Planetary Boundaries‘ concept (my refereed paper, Does the terrestrial biosphere have planetary tipping points?, appeared in Trends in Ecology & Evolution). I also blogged about this here: Worrying about global tipping points distracts from real planetary threats.

Today a new paper appeared in the journal Science, called “Planetary boundaries: Guiding human development on a changing planet”, which attempts to refine and clarify the concept. It states that four of nine planetary boundaries have been crossed, re-imagines the biodiversity boundary as one of ‘biosphere integrity’, and introduces the concept of ‘novel entities’. A popular summary in the Washington Post can be read here. At the invitation of New York Times “Dot Earth” reporter Andy Revkin, my colleagues and I wrote a short response, which is reproduced in full in the Dot Earth post above. The full Dot Earth article can be read here.
