Tag archive: Modeling

Climate Change – Catastrophic or Linear Slow Progression? (Armstrong Economics)

Indeed, science was turned on its head after the discovery in 1772, near Vilui, Siberia, of an intact frozen woolly rhinoceros, which was followed by the more famous discovery of a frozen mammoth in 1787. You may be shocked, but these discoveries of frozen animals with grass still in their stomachs set in motion these two schools of thought, since the evidence implied you could be eating lunch and suddenly find yourself frozen, only to be discovered by posterity.


The discovery of the woolly rhinoceros in 1772, and then of frozen mammoths, sparked the imagination that things were not linear after all. These major discoveries contributed to the “Age of Enlightenment,” when a burst of knowledge erupted in every field of inquiry. Finds of frozen mammoths in Siberia continue to this day and have challenged theories on both sides of this debate to explain such catastrophic events. These frozen animals suggest that sudden events are possible, not unlike the casts of the victims buried alive by the volcanic eruption of 79 AD at Pompeii in Roman Italy. Animals can be grazing and then freeze abruptly. That climate change took place long before man invented the combustion engine.

Even the field of geology began to see great debates over whether the earth had undergone catastrophic convulsions and whether the planet was indeed cyclical, not linear. This view of sequential destructive upheavals at irregular intervals, or cycles, emerged during the 1700s. The school of thought was perhaps best expressed by a forgotten contributor to the knowledge of mankind, George Hoggart Toulmin, in his rare 1785 book, “The Eternity of the World”:

“… convulsions and revolutions violent beyond our experience or conception, yet unequal to the destruction of the globe, or the whole of the human species, have both existed and will again exist … [terminating] … an astonishing succession of ages.”

Id., pp. 3, 110


In 1832, Professor A. Bernhardi argued that the North Polar ice cap had once extended into the plains of Germany. To support this theory, he pointed to the huge boulders known as “erratics,” which he suggested had been pushed along by the advancing ice. This was a shocking theory, for it was certainly a nonlinear view of natural history; Bernhardi was thinking outside the box. In natural science, however, people listen to and review theories, unlike in social science, where theories are ignored if they challenge what people want to believe. In 1834, Johann von Charpentier (1786-1855) pointed to deep grooves cut into Alpine rock and concluded, as did Karl Schimper, that they were caused by an advancing Ice Age.

This body of knowledge has been completely ignored by the global warming/climate change religious cult. They know nothing about nature or cycles, and they are completely ignorant of history, or even of the fact that it was the discovery of these ancient creatures, frozen with food still in their mouths, that set this debate in motion. They cannot explain these events, nor the vast body of knowledge written by people who actually did research instead of trying to cloak an agenda in pretend science.

Glaciologists have their own word, jökulhlaup (from Icelandic), to describe the spectacular outbursts when water builds up behind a glacier and then breaks loose. An example was the 1922 jökulhlaup in Iceland: some seven cubic kilometers of water, melted by a volcano under a glacier, rushed out in a few days. Still grander, almost unimaginable, were the floods that swept across Washington state toward the end of the last ice age, when a vast lake dammed behind a glacier broke loose. Catastrophic geologic events are not generally part of the uniformitarian geologist’s thinking. Rather, the normal view tends to be linear, including events that are local or regional in size.

One example of a regional event would be the 15,000 square miles of the Channeled Scablands in eastern Washington. Initially, this spectacular erosion was thought to be the product of slow, gradual processes. In 1923, J Harlen Bretz presented a paper to the Geological Society of America suggesting the Scablands were eroded catastrophically. During the 1940s, after decades of arguing, geologists admitted that high ridges in the Scablands were the equivalent of the little ripples one sees in mud on a streambed, magnified ten thousand times. Finally, by the 1950s, glaciologists had become accustomed to thinking about catastrophic regional floods. The Scablands are now accepted to have been catastrophically eroded by the “Spokane Flood,” the result of the breaching of an ice dam that had created glacial Lake Missoula. The United States Geological Survey now estimates the flood released 500 cubic miles of water, which drained in as little as 48 hours. That rush of water gouged out millions of tons of solid rock.
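For a sense of scale, here is a rough back-of-the-envelope conversion of the figures quoted above (500 cubic miles draining in roughly 48 hours) into an average discharge. This is only an illustrative calculation, not a number taken from the sources cited:

```python
# Rough scale check of the figures quoted above: 500 cubic miles draining in ~48 hours.
CUBIC_MILE_IN_M3 = 1609.344 ** 3          # one mile is 1609.344 m
volume_m3 = 500 * CUBIC_MILE_IN_M3        # ~2.08e12 cubic metres
seconds = 48 * 3600                       # 48 hours in seconds
print(f"average discharge ~ {volume_m3 / seconds:.2e} m^3/s")  # ~1.2e7 m^3/s
```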

When Mount St. Helens erupted in 1980, this too was a catastrophic process: two hundred million cubic yards of material were deposited by volcanic flows at the base of the mountain in just a matter of hours. Less than two years later, a minor eruption created a mudflow that carved channels through the recently deposited material. These channels, 1/40th the size of the Grand Canyon, exposed flat segments between the catastrophically deposited layers, which is what we see between the layers exposed in the walls of the Grand Canyon. What is clear is that these events were relatively minor compared to a global flood. The eruption of Mount St. Helens involved only 0.27 cubic miles of material, whereas other eruptions have produced as much as 950 cubic miles. That is over 2,000 times the size of Mount St. Helens!

With respect to the Grand Canyon, the specific geologic processes and timing of its formation have always sparked lively debate among geologists. The general scientific consensus, updated at a 2010 conference, holds that the Colorado River began carving the Grand Canyon 5 million to 6 million years ago. This thinking is still linear and by no means catastrophic: the Grand Canyon is believed to have been eroded gradually. However, there is an example of cyclical behavior in nature which demonstrates that water can very rapidly erode even solid rock. It took place in the Grand Canyon region on June 28, 1983, when an overflow of Lake Powell required the use of the Glen Canyon Dam’s 40-foot diameter spillway tunnels for the first time. As the volume of water increased, the entire dam started to vibrate and large boulders spewed from one of the spillways. The spillway was immediately shut down, and an inspection revealed that catastrophic erosion had cut through the three-foot-thick reinforced concrete walls and carved a hole 40 feet wide, 32 feet deep, and 150 feet long in the sandstone beneath the dam. Nobody thought such rapid catastrophic erosion was even possible.

Some have speculated that the end of the Ice Age released a flood of water that had been contained by an ice dam. As with the Scablands, it is possible that a sudden catastrophic release of water originally carved the Grand Canyon. Both the formation of the Scablands and the evidence from Mount St. Helens may support catastrophic formation rather than nice, slow, linear processes.

Then there is the Biblical account of the Great Flood and Noah. Noah is also considered a prophet in Islam. Darren Aronofsky’s film Noah was based on the biblical story in Genesis. Some Christians were angry because the film strayed from biblical Scripture, and several Muslim-majority countries banned the film outright because Noah is a prophet of God in the Koran and they considered it blasphemous to make a film about a prophet.

The story of Noah predates the Bible. There is a legend of the Great Flood rooted in the ancient civilizations of Mesopotamia. The Sumerian Epic of Gilgamesh dates back nearly 5,000 years and is believed to be perhaps the oldest written tale on Earth. Here too we find an account of the great sage Utnapishtim, who is warned of an imminent flood to be unleashed by wrathful gods. He builds a vast circular boat, reinforced with tar and pitch, and carries his relatives, grain, and animals. After enduring days of storms, Utnapishtim, like Noah in Genesis, releases a bird in search of dry land. Since there is evidence that there were survivors in different parts of the world, it is only logical that there was more than just one group of survivors.

Archaeologists generally agree that there was a historical deluge between 5,000 and 7,000 years ago which hit lands ranging from the Black Sea to what many call the cradle of civilization, which was the floodplain between the Tigris and Euphrates rivers. The translation of ancient cuneiform tablets in the 19th century confirmed the Mesopotamian Great Flood myth as an antecedent of the Noah story in the Bible.

The problem that remained was the question of just how “great” the Great Flood was. Was it regional or worldwide? The stories of the Great Flood in Western culture clearly date back before the Bible. The region implicated has long been considered to be the Black Sea, and it has been suggested that water broke through the land near Istanbul and flooded a fertile valley on the other side, much as we just saw in the Scablands. Robert Ballard, one of the world’s best-known underwater archaeologists, who found the Titanic, set out to test that theory by searching for an underwater civilization. He discovered that some four hundred feet below the surface there was an ancient shoreline, proof that a catastrophic event did happen in the Black Sea. By carbon dating shells found along the underwater shoreline, Ballard dated the event to around 5,000 BC. This roughly matches the time when Noah’s flood could have occurred.

Given that it is impossible for enough water to submerge the entire Earth for 40 days and 40 nights and then simply vanish, we are probably looking at a Great Flood that was at the very least regional. However, there are tales of the Great Flood that spring from many other sources. Various ancient cultures have their own legends of a Great Flood and salvation. According to Vedic lore, a fish tells the mythic Indian king Manu of a Great Flood that will wipe out humanity. In turn, Manu builds a ship to withstand the epic rains and is later led to a mountaintop by the same fish.

We also find an Aztec story that tells of a devout couple hiding in the hollow of a vast tree with two ears of corn as divine storms drown the wicked of the land. Creation myths from Egypt to Scandinavia also involve tidal floods of all sorts of substances purging and remaking the earth. That we have Great Flood stories from India is not really a surprise, since there was contact between the Middle East and India throughout recorded history. The Aztec story lacks the ship but still contains the punishment of the wicked, and here there was certainly no direct contact, although evidence of cocaine use in Egypt implies there was some trade route, probably island-hopping across the Pacific to the shores of India and on to Egypt. Obviously, we cannot rule out that the story of the Great Flood made it even to South America.

Then again, there is the story of Atlantis, the island that sank beneath the sea. The Atlantic Ocean covers approximately one-fifth of Earth’s surface and is second in size only to the Pacific Ocean. The ocean’s name, derived from Greek mythology, means the “Sea of Atlas.” The origins of names often offer interesting clues as well. For example, New Jersey is the English translation of the Latin Nova Caesarea, which appeared even on colonial coins of the 18th century. Hence, the state of New Jersey is named after the Island of Jersey, which in turn was named in honor of Julius Caesar. So we actually have an American state named after the man who changed the world on a par with Alexander the Great, for whom Alexandria, Virginia, is named, the location of the famous veterans’ cemetery where John F. Kennedy is buried.

So here the Atlantic Ocean is named after Atlas and the story of Atlantis. The original story of Atlantis comes to us from two Socratic dialogues called Timaeus and Critias, both written about 360 BC by the Greek philosopher Plato. According to the dialogues, Socrates asked three men to meet him: Timaeus of Locri, Hermocrates of Syracuse, and Critias of Athens. Socrates asked the men to tell him stories about how ancient Athens interacted with other states. Critias was the first to tell the story. Critias explained how his grandfather had met with the Athenian lawgiver Solon, who had been to Egypt where priests told the Egyptian story about Atlantis. According to the Egyptians, Solon was told that there was a mighty power based on an island in the Atlantic Ocean. This empire was called Atlantis and it ruled over several other islands and parts of the continents of Africa and Europe.

Atlantis was arranged in concentric rings of alternating water and land. The soil was rich and the engineers were technically advanced. The architecture was said to be extravagant with baths, harbor installations, and barracks. The central plain outside the city was constructed with canals and an elaborate irrigation system. Atlantis was ruled by kings but also had a civil administration. Its military was well organized. Their religious rituals were similar to that of Athens with bull-baiting, sacrifice, and prayer.

Plato told us about the metals found in Atlantis: gold, silver, copper, tin, and the mysterious Orichalcum. Plato said that the city walls were plated with Orichalcum (brass). This was a rare alloy back then, found both in Crete and in the Andes of South America. An ancient shipwreck discovered off the coast of Sicily in 2015 contained 39 ingots of Orichalcum, and many claimed this proved the story of Atlantis. Orichalcum was believed to have been a gold/copper alloy that was cheaper than gold but twice the value of copper. In reality, Orichalcum was a copper-tin or copper-zinc brass. In Virgil’s Aeneid, the breastplate of Turnus is described as “stiff with gold and white orichalc.”

The monetary reform of Augustus in 23 BC reintroduced bronze coinage, which had vanished after 84 BC. Here we see the introduction of Orichalcum for the Roman sestertius and the dupondius, while the Roman as was struck in nearly pure copper. Therefore, about 300 years after Plato, we do see Orichalcum being introduced as part of the monetary system of Rome. It is clear that Orichalcum was rare at the time Plato wrote. Consequently, this is similar to the stories of America claiming there was so much gold that the streets were paved with it.

As the story is told, Atlantis was located in the Atlantic Ocean. Bronze Age anchors have been discovered at the Pillars of Hercules (Strait of Gibraltar), and many people proclaimed this proved Atlantis was real. However, what these proponents fail to take into account is the Minoans. The Minoans were perhaps the first international economy. They traded far and wide, even with Britain, seeking tin to make bronze – hence the Bronze Age. Theirs was a Bronze Age civilization that arose on the island of Crete and flourished from approximately the 27th century BC to the 15th century BC – nearly 1,200 years. Their trading range and colonization extended to Spain, Egypt, Israel (Canaan), Syria (the Levant), Greece, Rhodes, and of course Turkey (Anatolia). Many other cultures referred to them as the people from the islands in the middle of the sea. However, the Minoans had no mineral deposits. They lacked gold and silver, and even the ability to mine copper on any large scale. They appear to have had copper mines in colonized cities in Anatolia (Turkey). What has survived are examples of copper ingots that served as MONEY in trade. Keep in mind that gold at this point was rare, too rare to truly serve as MONEY; it is found largely as jewelry in the tombs of royal dignitaries.

The Bronze Age emerged at different times globally, appearing in Greece and China around 3,000 BC, but it came late to Britain, reaching there about 1,900 BC. It is known that copper emerged as a valuable material in Anatolia (Turkey) as early as 6,500 BC, where it began to replace stone in the making of tools. The development of copper casting also appears to have aided the urbanization of man in Mesopotamia. By 3,000 BC, copper was in wide use throughout the Middle East and began to move up into Europe. Copper in its pure state appears first; tin was eventually added to create true bronze, and a bronze sword would break a copper sword. It was this addition of tin that really propelled the transition from copper to bronze, and the tin was coming from England, where vast deposits existed at Cornwall. We know that the Minoans traveled into the Atlantic for trade. Anchors are not conclusive evidence of Atlantis.

As the legend unfolds, Atlantis waged an unprovoked imperialistic war on the remainder of Asia and Europe. When Atlantis attacked, Athens showed its excellence as the leader of the Greeks, the much smaller city-state being the only power to stand against Atlantis. Alone, Athens triumphed over the invading Atlantean forces, defeating the enemy, preventing the free from being enslaved, and freeing those who had been enslaved. This part may certainly be embellished and remains doubtful at best. However, following this battle there were violent earthquakes and floods; Atlantis sank into the sea, and all the Athenian warriors were swallowed up by the earth. This appears to be almost certainly a fiction based on some ancient political realities. Still, some have argued that the explosive disappearance of an island is a reference to the Minoan eruption of Santorini. The story of Atlantis does closely correlate with Plato’s notions in The Republic, examining the deteriorating cycle of life in a state.

 

There have been theories that Atlantis was the Azores, and still others argue it was actually South America, which would explain to some extent the cocaine mummies in Egypt. Yet despite all these theories, when there is an ancient story, despite embellishment, there is often a grain of truth hidden deep within. In this case, Atlantis may not have completely submerged; it could have partially submerged in an earthquake, at least, with some people surviving. Survivors could have made it either to the Americas or to Africa/Europe. What is clear is that a sudden event could have sent a tsunami into the Mediterranean, which then broke the land mass at Istanbul and flooded the valley below, transforming the region into the Black Sea and becoming the story of Noah.

We also have evidence which has surfaced that the Earth was struck by a comet around 12,800 years ago. Scientific American has reported that sediments from six sites across North America – Murray Springs, Ariz.; Bull Creek, Okla.; Gainey, Mich.; Topper, S.C.; Lake Hind, Manitoba; and Chobot, Alberta – have yielded tiny diamonds, which only occur in sediment exposed to extreme temperatures and pressures. The evidence implies that the Earth moved into an Ice Age, killing off large mammals and setting the course for Global Cooling for the next 1,300 years. This may indeed explain the catastrophic freezing of woolly mammoths in Siberia. Such an event could also have been responsible for the legend of Atlantis, where the survivors migrated, taking their stories with them.

There is also evidence surfacing from stone carvings at one of the oldest recorded sites, located in Anatolia (Turkey). Using a computer program to show where the constellations would have appeared above Turkey thousands of years ago, researchers were able to pinpoint the comet strike to 10,950 BC, the onset of the Younger Dryas, a return to glacial conditions and Global Cooling that temporarily reversed the gradual climatic warming under way since the Last Glacial Maximum began to recede around 20,000 BC, as dated using ice core data from Greenland.

There is also a very large asteroid that passed by the Earth on September 16, 2013. What is most disturbing is that its cycle is 19 years, so it will return in 2032. Astronomers cannot swear it will not hit the Earth on its next pass in 2032. It was discovered by Ukrainian astronomers only 10 days before its 2013 approach, which came within 4.2 million miles (6.7 million kilometers). If anything alters its orbit, it will get closer and closer. It just so happens to line up on a cyclical basis that suggests we should begin to look at how to deflect asteroids, and soon.

It definitely appears that catastrophic cooling may also be linked to the Earth being struck by meteors, asteroids, or comets. We are clearly headed into a period of Global Cooling, and this will get worse as we head into 2032. The question becomes: is our model also reflecting that it is once again time for an Earth change caused by an asteroid encounter? Such events are not DOOMSDAY and the end of the world; they do seem to be regional. However, a comet striking North America would have altered the climate, freezing the animals in Siberia.

If there is a tiny element of truth in the story of Atlantis, the one thing it certainly proves is clear – there are ALWAYS survivors. Based upon a review of the history of civilization as well as climate, what resonates profoundly is that events follow the cyclical model of catastrophic occurrences rather than the linear steady slow progression of evolution.


Distant tropical storms have ripple effects on weather close to home (Science Daily)

Researchers describe a breakthrough in making accurate predictions of weather weeks ahead

Date:
February 20, 2018
Source:
Colorado State University
Summary:
Researchers report a breakthrough in making accurate predictions of weather weeks ahead. They’ve created an empirical model fed by careful analysis of 37 years of historical weather data. Their model centers on the relationship between two well-known global weather patterns: the Madden-Julian Oscillation and the quasi-biennial oscillation.


The famously intense tropical rainstorms along Earth’s equator occur thousands of miles from the United States. But atmospheric scientists know that, like ripples in a pond, tropical weather creates powerful waves in the atmosphere that travel all the way to North America and have major impacts on weather in the U.S.

These far-flung, interconnected weather processes are crucial to making better, longer-term weather predictions than are currently possible. Colorado State University atmospheric scientists, led by professors Libby Barnes and Eric Maloney, are hard at work to address these longer-term forecasting challenges.

In a new paper in npj Climate and Atmospheric Science, the CSU researchers describe a breakthrough in making accurate predictions of weather weeks ahead. They’ve created an empirical model fed by careful analysis of 37 years of historical weather data. Their model centers on the relationship between two well-known global weather patterns: the Madden-Julian Oscillation and the quasi-biennial oscillation.

According to the study, led by former graduate researcher Bryan Mundhenk, the model, using both these phenomena, allows skillful prediction of the behavior of major rain storms, called atmospheric rivers, three and in some cases up to five weeks in advance.

“It’s impressive, considering that current state-of-the-art numerical weather models, such as NOAA’s Global Forecast System, or the European Centre for Medium-Range Weather Forecasts’ operational model, are only skillful up to one to two weeks in advance,” says paper co-author Cory Baggett, a postdoctoral researcher in the Barnes and Maloney labs.

The researchers’ chief aim is improving forecast capabilities within the tricky no-man’s land of “subseasonal to seasonal” timescales: roughly three weeks to three months out. Predictive capabilities that far in advance could save lives and livelihoods, from sounding alarms for floods and mudslides to preparing farmers for long dry seasons. Barnes also leads a federal NOAA task force for improving subseasonal to seasonal forecasting, with the goal of sharpening predictions for hurricanes, heat waves, the polar vortex and more.

Atmospheric rivers aren’t actual waterways, but “rivers in the sky,” according to researchers. They’re intense plumes of water vapor that cause extreme precipitation, plumes so large they resemble rivers in satellite pictures. These “rivers” are responsible for more than half the rainfall in the western U.S.

The Madden-Julian Oscillation is a cluster of rainstorms that moves east along the Equator over 30 to 60 days. The location of the oscillation determines where atmospheric waves will form, and their eventual impact on, say, California. In previous work, the researchers have uncovered key stages of the Madden-Julian Oscillation that affect far-off weather, including atmospheric rivers.

Sitting above the Madden-Julian Oscillation is a very predictable wind pattern called the quasi-biennial oscillation. Over two- to three-year periods, the winds shift east, west and back east again, and almost never deviate. This pattern directly affects the Madden-Julian Oscillation, and thus indirectly affects weather all the way to California and beyond.

The CSU researchers created a model that can accurately predict atmospheric river activity in the western U.S. three weeks from now. Its inputs include the current state of the Madden-Julian Oscillation and the quasi-biennial oscillation. Using information on how atmospheric rivers have previously behaved in response to these oscillations, they found that the quasi-biennial oscillation matters — a lot.
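As a rough illustration of what such an empirical, phase-conditioned forecast can look like, here is a minimal Python sketch. The data layout, phase labels, and 21-day lead time are assumptions made for illustration; this is not the CSU model itself:

```python
# Illustrative sketch (not the CSU model): condition historical atmospheric-river
# (AR) activity on the MJO phase (1-8) and QBO phase ("easterly"/"westerly"), then
# forecast the AR-activity anomaly at a fixed lead time from today's phases.
from collections import defaultdict

def build_lookup(records, lead_days=21):
    """records: list of dicts with 'day', 'mjo_phase', 'qbo_phase', 'ar_activity'.
    Returns the mean AR-activity anomaly observed lead_days after each state."""
    by_day = {r["day"]: r for r in records}
    sums, counts = defaultdict(float), defaultdict(int)
    mean_ar = sum(r["ar_activity"] for r in records) / len(records)
    for r in records:
        target = by_day.get(r["day"] + lead_days)
        if target is None:
            continue
        key = (r["mjo_phase"], r["qbo_phase"])
        sums[key] += target["ar_activity"] - mean_ar
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in counts}

def predict(lookup, mjo_phase, qbo_phase):
    """Predicted AR-activity anomaly ~3 weeks out for the current oscillation state."""
    return lookup.get((mjo_phase, qbo_phase), 0.0)  # fall back to climatology
```

In practice, the skill of such a lookup would be judged against a plain climatological baseline over held-out years, which is broadly the spirit of the comparison with numerical weather models described above.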

Armed with their model, the researchers want to identify and understand deficiencies in state-of-the-art numerical weather models that prevent them from predicting weather on these subseasonal time scales.

“It would be worthwhile to develop a good understanding of the physical relationship between the Madden-Julian Oscillation and the quasi-biennial oscillation, and see what can be done to improve models’ simulation of this relationship,” Mundhenk said.

Another logical extension of their work would be to test how well their model can forecast actual rainfall and wind or other severe weather, such as tornadoes and hail.


Journal Reference:

  1. Bryan D. Mundhenk, Elizabeth A. Barnes, Eric D. Maloney, Cory F. Baggett. Skillful empirical subseasonal prediction of landfalling atmospheric river activity using the Madden–Julian oscillation and quasi-biennial oscillation. npj Climate and Atmospheric Science, 2018; 1 (1) DOI: 10.1038/s41612-017-0008-2

What happens to language as populations grow? It simplifies, say researchers (Cornell)

PUBLIC RELEASE: 

CORNELL UNIVERSITY

ITHACA, N.Y. – Language presents an intriguing paradox. Languages with many speakers, such as English and Mandarin, have large vocabularies with relatively simple grammar. Yet the opposite is also true: languages with fewer speakers have fewer words but more complex grammars.

Why does the size of a population of speakers have opposite effects on vocabulary and grammar?

Through computer simulations, a Cornell University cognitive scientist and his colleagues have shown that ease of learning may explain the paradox. Their work suggests that language, and other aspects of culture, may become simpler as our world becomes more interconnected.

Their study was published in the Proceedings of the Royal Society B: Biological Sciences.

“We were able to show that whether something is easy to learn – like words – or hard to learn – like complex grammar – can explain these opposing tendencies,” said co-author Morten Christiansen, professor of psychology at Cornell University and co-director of the Cognitive Science Program.

The researchers hypothesized that words are easier to learn than aspects of morphology or grammar. “You only need a few exposures to a word to learn it, so it’s easier for words to propagate,” he said.

But learning a new grammatical innovation requires a lengthier learning process. And that’s going to happen more readily in a smaller speech community, because each person is likely to interact with a large proportion of the community, he said. “If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

Conversely, in a large community, like a big city, one person will talk to only a small proportion of the population. This means that only a few people might be exposed to that complex grammar rule, making it harder for it to survive, he said.
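A minimal agent-based sketch of this exposure-threshold argument in Python follows. The population sizes, contact counts, and thresholds below are illustrative assumptions, not the parameters of the published simulations:

```python
# Illustrative sketch (not the published model): an innovation is adopted only after
# an agent has heard it 'threshold' times. Each knower makes the same number of
# contacts per round, so in a small community repeat exposures accumulate quickly,
# while in a large community they are spread too thin to cross a high threshold.
import random

def spread(pop_size, threshold, rounds=200, contacts_per_round=10, seed=1):
    random.seed(seed)
    exposures = [0] * pop_size
    knows = [False] * pop_size
    knows[0] = True                      # one initial innovator
    for _ in range(rounds):
        for speaker in range(pop_size):
            if not knows[speaker]:
                continue
            for _ in range(contacts_per_round):
                listener = random.randrange(pop_size)
                if listener == speaker or knows[listener]:
                    continue
                exposures[listener] += 1
                if exposures[listener] >= threshold:
                    knows[listener] = True
    return sum(knows) / pop_size         # fraction of the community that adopted

for size in (50, 5000):
    easy = spread(size, threshold=1)     # word-like item: one exposure suffices
    hard = spread(size, threshold=15)    # grammar-like item: many exposures needed
    print(f"population {size}: easy item {easy:.2f}, hard item {hard:.2f}")
```

With these illustrative settings, the easy (word-like) item saturates both communities, while the hard (grammar-like) item takes hold only in the small one, mirroring the qualitative pattern described above.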

This mechanism can explain why all sorts of complex cultural conventions emerge in small communities. For example, bebop developed in the intimate jazz world of 1940s New York City, and the Lindy Hop came out of the close-knit community of 1930s Harlem.

The simulations suggest that language, and possibly other aspects of culture, may become simpler as our world becomes increasingly interconnected, Christiansen said. “This doesn’t necessarily mean that all culture will become overly simple. But perhaps the mainstream parts will become simpler over time.”

Not all hope is lost for those who want to maintain complex cultural traditions, he said: “People can self-organize into smaller communities to counteract that drive toward simplification.”

His co-authors on the study, “Simpler Grammar, Larger Vocabulary: How Population Size Affects Language,” are Florencia Reali of Universidad de los Andes, Colombia, and Nick Chater of University of Warwick, England.

Social media algorithms promote prejudice and inequality, says Harvard mathematician (BBC Brasil)

For Cathy O’Neil, behind the apparent impartiality of algorithms lie murky criteria that aggravate injustice. GETTY IMAGES

They are everywhere. In the forms we fill out for job openings. In the risk assessments we are subjected to in contracts with banks and insurers. In the services we request through our smartphones. In the personalized ads and news that flood our social networks. And they are deepening the gulf of social inequality and putting democracies at risk.

It is definitely not with enthusiasm that the American Cathy O’Neil views the algorithm revolution: systems capable of organizing an ever more staggering amount of information available on the internet, the so-called Big Data.

A mathematician trained at Harvard and the Massachusetts Institute of Technology (MIT), two of the most prestigious universities in the world, she abandoned a successful career in finance and the tech startup scene in 2012 to study the subject in depth.

Four years later, she published the book Weapons of Math Destruction (a pun on “weapons of mass destruction”) and became one of the most respected voices in the United States on the side effects of the Big Data economy.

The book is packed with examples of current mathematical models that rank the potential of human beings as students, workers, criminals, voters, and consumers. According to the author, behind the apparent impartiality of these systems lie murky criteria that aggravate injustice.

Car insurance in the United States is a case in point. Drivers who had never gotten a single ticket, but who had poor credit because they lived in poor neighborhoods, paid considerably higher premiums than drivers with good credit who had already been convicted of drunk driving. “For the insurer, it’s a win-win. A good driver with bad credit represents low risk and a very high return,” she explains.

Below are the main excerpts from the interview:

BBC Brasil – For centuries researchers have analyzed data to understand behavior patterns and predict events. What is new about Big Data?

Cathy O’Neil – What sets Big Data apart is the quantity of data available. There is a gigantic mountain of correlated data that can be mined to produce so-called “incidental information.” It is incidental in the sense that a given piece of information is not provided directly – it is indirect. That is why people analyzing Twitter data can figure out which politician I would vote for. Or discover that I am gay just by analyzing the posts I like on Facebook, even if I never say that I am gay.

‘This idea that robots will replace human labor is very fatalistic. We need to react and show that this is a political battle,’ says the author. GETTY IMAGES

The point is that this process is cumulative. Now that it is possible to infer a person’s sexual orientation from their behavior on social media, that will not be “unlearned.” So one of the things that worries me most is that these technologies will only get better over time. Even if access to the information were to be restricted – which I don’t think will happen – that accumulated knowledge will not be lost.

BBC Brasil – The main warning in your book is that algorithms are not neutral, objective tools. On the contrary: they are biased by the worldviews of their programmers and, in general, reinforce prejudice and harm the poorest. Is the dream that the internet could make the world a better place over?

O’Neil – It’s true that the internet has made the world a better place in some contexts. But if we weigh the pros and cons, is the balance positive? It’s hard to say. It depends on who is answering. There are obviously many problems. But many of the examples in my book, it is important to stress, have nothing to do with the internet. Arrests made by the police or the personality evaluations applied to teachers are not strictly about the internet. There is no way to avoid them, even if people avoid using the internet. But they were fueled by Big Data technology.

For example: personality tests in job applications. People used to apply for a job by going to a particular store that needed an employee. But today everyone applies online. That is what gives rise to personality tests. There are so many people applying for jobs that some kind of filter becomes necessary.

BBC Brasil – What is the future of work under algorithms?

O’Neil – Personality tests and résumé-screening programs are a couple of examples of how algorithms are affecting the world of work. That is not to mention the algorithms that keep watch over people while they work, as is the case with teachers and truck drivers. Surveillance is advancing. If things keep going the way they are, it will turn us into robots.

Reproduction of a Facebook ad used to influence the US elections: ‘personalized, customized ads should not be allowed,’ the author argues

But I don’t want to think of this as inevitable – that algorithms will turn people into robots or that robots will replace human labor. I refuse to accept that. It is something we can decide will not happen. It is a political decision. This idea that robots will replace human labor is very fatalistic. We need to react and show that this is a political battle. The problem is that we are so intimidated by the advance of these technologies that we feel there is no way to fight back.

BBC Brasil – And what about technology companies like Uber? Some scholars use the term “gig economy” to refer to the organization of work by companies that use algorithms.

O’Neil – That is a great example of how we have handed power to these gig-economy companies, as if it were an inevitable process. They are certainly doing very well at circumventing labor laws, but that does not mean they should be allowed to act that way. These companies should pay better wages and guarantee better working conditions.

However, the movements that represent workers have not yet managed to absorb the changes that are taking place. But this is not essentially an algorithmic issue. What we should be asking is: how are these people being treated? And if they are not being treated well, we should create laws to guarantee that they are.

I am not saying algorithms have nothing to do with it – they do. They are a way for these companies to claim that they cannot be considered the “bosses” of these workers. Uber, for example, says its drivers are independent contractors and the algorithm is the boss. That is a great example of how we still do not understand what “accountability” means in the world of algorithms. This is a question I have been working on for some time: which people will be held accountable for the errors of algorithms?

BBC Brasil – In the book you argue that it is possible to create algorithms for good – the main challenge is ensuring transparency. Yet the secret of many companies’ success is precisely keeping how their algorithms work secret. How do you resolve the contradiction?

O’Neil – I don’t think transparency is necessary for an algorithm to be good. What I need to know is whether it works well. I need indicators that it works well, but that does not mean I need to see the algorithm’s source code. The indicators can be of another kind – it is more a question of auditing than of opening up the code.

The best way to solve this is to have algorithms audited by third parties. It is not advisable to trust the very companies that created the algorithms. It would need to be a legitimate third party to determine whether they are operating fairly – based on the definition of certain fairness criteria – and proceeding within the law.
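As a concrete illustration of the kind of outcome-based indicator a third-party auditor might compute without ever seeing the model’s code, here is a minimal sketch. The group labels, sample decisions, and the 80-percent rule of thumb are illustrative assumptions, not something O’Neil prescribes in the interview:

```python
# Illustrative sketch of an outcome-based audit: compare a system's approval rates
# across groups using only its observed decisions, with no access to its source code.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group_label, approved: bool) pairs from the audited system."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    return {g: approved[g] / total[g] for g in total}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's rate.
    A common (and debatable) rule of thumb flags ratios below 0.8."""
    rates = approval_rates(decisions)
    return {g: rates[g] / rates[reference_group] for g in rates}

# Hypothetical audit data: (group, was the applicant approved?)
sample = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
print(disparate_impact(sample, reference_group="A"))  # e.g. {'A': 1.0, 'B': 0.5}
```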

Cathy O'NeilPara Cathy O’Neil, polarização política e fake news só vão parar se “fecharmos o Facebook”. DIVULGAÇÃO

BBC Brasil – Recentemente, você escreveu um artigo para o jornal New York Times defendendo que a comunidade acadêmica participe mais dessa discussão. As universidades poderiam ser esse terceiro de que você está falando?

O’Neil – Sim, com certeza. Eu defendo que as universidades sejam o espaço para refletir sobre como construir confiabilidade, sobre como requerer informações para determinar se os algoritmos estão funcionando.

BBC Brasil – Quando vieram a público as revelações de Edward Snowden de que o governo americano espionava a vida das pessoas através da internet, muita gente não se surpreendeu. As pessoas parecem dispostas a abrir mão da sua privacidade em nome da eficiência da vida virtual?

O’Neil – Eu acho que só agora estamos percebendo quais são os verdadeiros custos dessa troca. Com dez anos de atraso, estamos percebendo que os serviços gratuitos na internet não são gratuitos de maneira alguma, porque nós fornecemos nossos dados pessoais. Há quem argumente que existe uma troca consentida de dados por serviços, mas ninguém faz essa troca de forma realmente consciente – nós fazemos isso sem prestar muita atenção. Além disso, nunca fica claro para nós o que realmente estamos perdendo.

Mas não é pelo fato de a NSA (sigla em inglês para a Agência de Segurança Nacional) nos espionar que estamos entendendo os custos dessa troca. Isso tem mais a ver com os empregos que nós arrumamos ou deixamos de arrumar. Ou com os benefícios de seguros e de cartões de crédito que nós conseguimos ou deixamos de conseguir. Mas eu gostaria que isso estivesse muito mais claro.

No nível individual ainda hoje, dez anos depois, as pessoas não se dão conta do que está acontecendo. Mas, como sociedade, estamos começando a entender que fomos enganados por essa troca. E vai ser necessário um tempo para saber como alterar os termos desse acordo.

Aplicativo do Uber‘A Uber, por exemplo, diz que os motoristas são autônomos e que o algoritmo é o chefe. Esse é um ótimo exemplo de como nós ainda não entendemos o que se entende por “responsabilidade” no mundo dos algoritmos’, diz O’Neil. EPA

BBC Brasil – O último capítulo do seu livro fala sobre a vitória eleitoral de Donald Trump e avalia como as pesquisas de opinião e as redes sociais influenciaram na corrida à Casa Branca. No ano que vem, as eleições no Brasil devem ser as mais agitadas das últimas três décadas. Que conselho você daria aos brasileiros?

O’Neil – Meu Deus, isso é muito difícil! Está acontecendo em todas as partes do mundo. E eu não sei se isso vai parar, a não ser que fechem o Facebook – o que, a propósito, eu sugiro que façamos. Agora, falando sério: as campanhas políticas na internet devem ser permitidas, mas não deveriam ser permitidos anúncios personalizados, customizados – ou seja, todo mundo deveria receber os mesmos anúncios. Eu sei que essa ainda não é uma proposta realista, mas acho que deveríamos pensar grande porque esse problema é grande. E eu não consigo pensar em outra maneira de resolver essa questão.

É claro que isso seria um elemento de um conjunto maior de medidas porque nada vai impedir pessoas idiotas de acreditar no que elas querem acreditar – e de postar sobre isso. Ou seja, nem sempre é um problema do algoritmo. Às vezes, é um problema das pessoas mesmo. O fenômeno das fake news é um exemplo. Os algoritmos pioram a situação, personalizando as propagandas e amplificando o alcance, porém, mesmo que não existisse o algoritmo do Facebook e que as propagandas políticas fossem proibidas na internet, ainda haveria idiotas disseminando fake news que acabariam viralizando nas redes sociais. E eu não sei o que fazer a respeito disso, a não ser fechar as redes sociais.

Eu tenho três filhos, eles têm 17, 15 e 9 anos. Eles não usam redes sociais porque acham que são bobas e eles não acreditam em nada do que veem nas redes sociais. Na verdade, eles não acreditam em mais nada – o que também não é bom. Mas o lado positivo é que eles estão aprendendo a checar informações por conta própria. Então, eles são consumidores muito mais conscientes do que os da minha geração. Eu tenho 45 anos, a minha geração é a pior. As coisas que eu vi as pessoas da minha idade compartilhando após a eleição de Trump eram ridículas. Pessoas postando ideias sobre como colocar Hilary Clinton na presidência mesmo sabendo que Trump tinha vencido. Foi ridículo. A esperança é ter uma geração de pessoas mais espertas.

The new astrology (Aeon)

By fetishising mathematical models, economists turned economics into a highly paid pseudoscience

04 April, 2016

Alan Jay Levinovitz is an assistant professor of philosophy and religion at James Madison University in Virginia. His most recent book is The Gluten Lie: And Other Myths About What You Eat (2015). Edited by Sam Haselby.

 

What would make economics a better discipline?

Since the 2008 financial crisis, colleges and universities have faced increased pressure to identify essential disciplines, and cut the rest. In 2009, Washington State University announced it would eliminate the department of theatre and dance, the department of community and rural sociology, and the German major – the same year that the University of Louisiana at Lafayette ended its philosophy major. In 2012, Emory University in Atlanta did away with the visual arts department and its journalism programme. The cutbacks aren’t restricted to the humanities: in 2011, the state of Texas announced it would eliminate nearly half of its public undergraduate physics programmes. Even when there’s no downsizing, faculty salaries have been frozen and departmental budgets have shrunk.

But despite the funding crunch, it’s a bull market for academic economists. According to a 2015 sociological study in the Journal of Economic Perspectives, the median salary of economics teachers in 2012 increased to $103,000 – nearly $30,000 more than sociologists. For the top 10 per cent of economists, that figure jumps to $160,000, higher than the next most lucrative academic discipline – engineering. These figures, stress the study’s authors, do not include other sources of income such as consulting fees for banks and hedge funds, which, as many learned from the documentary Inside Job (2010), are often substantial. (Ben Bernanke, a former academic economist and ex-chairman of the Federal Reserve, earns $200,000-$400,000 for a single appearance.)

Unlike engineers and chemists, economists cannot point to concrete objects – cell phones, plastic – to justify the high valuation of their discipline. Nor, in the case of financial economics and macroeconomics, can they point to the predictive power of their theories. Hedge funds employ cutting-edge economists who command princely fees, but routinely underperform index funds. Eight years ago, Warren Buffett made a 10-year, $1 million bet that a portfolio of hedge funds would lose to the S&P 500, and it looks like he’s going to collect. In 1998, a fund that boasted two Nobel Laureates as advisors collapsed, nearly causing a global financial crisis.

The failure of the field to predict the 2008 crisis has also been well-documented. In 2003, for example, only five years before the Great Recession, the Nobel Laureate Robert E Lucas Jr told the American Economic Association that ‘macroeconomics […] has succeeded: its central problem of depression prevention has been solved’. Short-term predictions fare little better – in April 2014, for instance, a survey of 67 economists yielded 100 per cent consensus: interest rates would rise over the next six months. Instead, they fell. A lot.

Nonetheless, surveys indicate that economists see their discipline as ‘the most scientific of the social sciences’. What is the basis of this collective faith, shared by universities, presidents and billionaires? Shouldn’t successful and powerful people be the first to spot the exaggerated worth of a discipline, and the least likely to pay for it?

In the hypothetical worlds of rational markets, where much of economic theory is set, perhaps. But real-world history tells a different story, of mathematical models masquerading as science and a public eager to buy them, mistaking elegant equations for empirical accuracy.

As an extreme example, take the extraordinary success of Evangeline Adams, a turn-of-the-20th-century astrologer whose clients included the president of Prudential Insurance, two presidents of the New York Stock Exchange, the steel magnate Charles M Schwab, and the banker J P Morgan. To understand why titans of finance would consult Adams about the market, it is essential to recall that astrology used to be a technical discipline, requiring reams of astronomical data and mastery of specialised mathematical formulas. ‘An astrologer’ is, in fact, the Oxford English Dictionary’s second definition of ‘mathematician’. For centuries, mapping stars was the job of mathematicians, a job motivated and funded by the widespread belief that star-maps were good guides to earthly affairs. The best astrology required the best astronomy, and the best astronomy was done by mathematicians – exactly the kind of person whose authority might appeal to bankers and financiers.

In fact, when Adams was arrested in 1914 for violating a New York law against astrology, it was mathematics that eventually exonerated her. During the trial, her lawyer Clark L Jordan emphasised mathematics in order to distinguish his client’s practice from superstition, calling astrology ‘a mathematical or exact science’. Adams herself demonstrated this ‘scientific’ method by reading the astrological chart of the judge’s son. The judge was impressed: the plaintiff, he observed, went through a ‘mathematical process to get at her conclusions… I am satisfied that the element of fraud… is absent here.’

Romer compares debates among economists to those between 16th-century advocates of heliocentrism and geocentrism

The enchanting force of mathematics blinded the judge – and Adams’s prestigious clients – to the fact that astrology relies upon a highly unscientific premise, that the position of stars predicts personality traits and human affairs such as the economy. It is this enchanting force that explains the enduring popularity of financial astrology, even today. The historian Caley Horan at the Massachusetts Institute of Technology described to me how computing technology made financial astrology explode in the 1970s and ’80s. ‘Within the world of finance, there’s always a superstitious, quasi-spiritual trend to find meaning in markets,’ said Horan. ‘Technical analysts at big banks, they’re trying to find patterns in past market behaviour, so it’s not a leap for them to go to astrology.’ In 2000, USA Today quoted Robin Griffiths, the chief technical analyst at HSBC, the world’s third largest bank, saying that ‘most astrology stuff doesn’t check out, but some of it does’.

Ultimately, the problem isn’t with worshipping models of the stars, but rather with uncritical worship of the language used to model them, and nowhere is this more prevalent than in economics. The economist Paul Romer at New York University has recently begun calling attention to an issue he dubs ‘mathiness’ – first in the paper ‘Mathiness in the Theory of Economic Growth’ (2015) and then in a series of blog posts. Romer believes that macroeconomics, plagued by mathiness, is failing to progress as a true science should, and compares debates among economists to those between 16th-century advocates of heliocentrism and geocentrism. Mathematics, he acknowledges, can help economists to clarify their thinking and reasoning. But the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone’s work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority.

‘I’ve come to the position that there should be a stronger bias against the use of math,’ Romer explained to me. ‘If somebody came and said: “Look, I have this Earth-changing insight about economics, but the only way I can express it is by making use of the quirks of the Latin language”, we’d say go to hell, unless they could convince us it was really essential. The burden of proof is on them.’

Right now, however, there is widespread bias in favour of using mathematics. The success of math-heavy disciplines such as physics and chemistry has granted mathematical formulas decisive authoritative force. Lord Kelvin, the 19th-century mathematical physicist, expressed this quantitative obsession:

When you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it… in numbers, your knowledge is of a meagre and unsatisfactory kind.

The trouble with Kelvin’s statement is that measurement and mathematics do not guarantee the status of science – they guarantee only the semblance of science. When the presumptions or conclusions of a scientific theory are absurd or simply false, the theory ought to be questioned and, eventually, rejected. The discipline of economics, however, is presently so blinkered by the talismanic authority of mathematics that theories go overvalued and unchecked.

Romer is not the first to elaborate the mathiness critique. In 1886, an article in Science accused economics of misusing the language of the physical sciences to conceal ‘emptiness behind a breastwork of mathematical formulas’. More recently, Deirdre N McCloskey’s The Rhetoric of Economics (1998) and Robert H Nelson’s Economics as Religion (2001) both argued that mathematics in economic theory serves, in McCloskey’s words, primarily to deliver the message ‘Look at how very scientific I am.’

After the Great Recession, the failure of economic science to protect our economy was once again impossible to ignore. In 2009, the Nobel Laureate Paul Krugman tried to explain it in The New York Times with a version of the mathiness diagnosis. ‘As I see it,’ he wrote, ‘the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.’ Krugman named economists’ ‘desire… to show off their mathematical prowess’ as the ‘central cause of the profession’s failure’.

The mathiness critique isn’t limited to macroeconomics. In 2014, the Stanford financial economist Paul Pfleiderer published the paper ‘Chameleons: The Misuse of Theoretical Models in Finance and Economics’, which helped to inspire Romer’s understanding of mathiness. Pfleiderer called attention to the prevalence of ‘chameleons’ – economic models ‘with dubious connections to the real world’ that substitute ‘mathematical elegance’ for empirical accuracy. Like Romer, Pfleiderer wants economists to be transparent about this sleight of hand. ‘Modelling,’ he told me, ‘is now elevated to the point where things have validity just because you can come up with a model.’

The notion that an entire culture – not just a few eccentric financiers – could be bewitched by empty, extravagant theories might seem absurd. How could all those people, all that math, be mistaken? This was my own feeling as I began investigating mathiness and the shaky foundations of modern economic science. Yet, as a scholar of Chinese religion, it struck me that I’d seen this kind of mistake before, in ancient Chinese attitudes towards the astral sciences. Back then, governments invested incredible amounts of money in mathematical models of the stars. To evaluate those models, government officials had to rely on a small cadre of experts who actually understood the mathematics – experts riven by ideological differences, who couldn’t even agree on how to test their models. And, of course, despite collective faith that these models would improve the fate of the Chinese people, they did not.

Astral Science in Early Imperial China, a forthcoming book by the historian Daniel P Morgan, shows that in ancient China, as in the Western world, the most valuable type of mathematics was devoted to the realm of divinity – to the sky, in their case (and to the market, in ours). Just as astrology and mathematics were once synonymous in the West, the Chinese spoke of li, the science of calendrics, which early dictionaries also glossed as ‘calculation’, ‘numbers’ and ‘order’. Li models, like macroeconomic theories, were considered essential to good governance. In the classic Book of Documents, the legendary sage king Yao transfers the throne to his successor with mention of a single duty: ‘Yao said: “Oh thou, Shun! The li numbers of heaven rest in thy person.”’

China’s oldest mathematical text invokes astronomy and divine kingship in its very title – The Arithmetical Classic of the Gnomon of the Zhou. The title’s inclusion of ‘Zhou’ recalls the mythic Eden of the Western Zhou dynasty (1045–771 BCE), implying that paradise on Earth can be realised through proper calculation. The book’s introduction to the Pythagorean theorem asserts that ‘the methods used by Yu the Great in governing the world were derived from these numbers’. It was an unquestioned article of faith: the mathematical patterns that govern the stars also govern the world. Faith in a divine, invisible hand, made visible by mathematics. No wonder that a newly discovered text fragment from 200 BCE extolls the virtues of mathematics over the humanities. In it, a student asks his teacher whether he should spend more time learning speech or numbers. His teacher replies: ‘If my good sir cannot fathom both at once, then abandon speech and fathom numbers, [for] numbers can speak, [but] speech cannot number.’

Modern governments, universities and businesses underwrite the production of economic theory with huge amounts of capital. The same was true for li production in ancient China. The emperor – the ‘Son of Heaven’ – spent astronomical sums refining mathematical models of the stars. Take the armillary sphere, such as the two-metre cage of graduated bronze rings in Nanjing, made to represent the celestial sphere and used to visualise data in three dimensions. As Morgan emphasises, the sphere was literally made of money. Bronze being the basis of the currency, governments were smelting cash by the metric ton to pour it into li. A divine, mathematical world-engine, built of cash, sanctifying the powers that be.

The enormous investment in li depended on a huge assumption: that good government, successful rituals and agricultural productivity all depended upon the accuracy of li. But there were, in fact, no practical advantages to the continued refinement of li models. The calendar rounded off decimal points such that the difference between two models, hotly contested in theory, didn’t matter to the final product. The work of selecting auspicious days for imperial ceremonies thus benefited only in appearance from mathematical rigour. And of course the comets, plagues and earthquakes that these ceremonies promised to avert kept on coming. Farmers, for their part, went about business as usual. Occasional governmental efforts to scientifically micromanage farm life in different climes using li ended in famine and mass migration.

Like many economic models today, li models were less important to practical affairs than their creators (and consumers) thought them to be. And, like today, only a few people could understand them. In 101 BCE, Emperor Wudi tasked high-level bureaucrats – including the Great Director of the Stars – with creating a new li that would glorify the beginning of his path to immortality. The bureaucrats refused the task because ‘they couldn’t do the math’, and recommended the emperor outsource it to experts.

The debates of these ancient li experts bear a striking resemblance to those of present-day economists. In 223 CE, a petition was submitted to the emperor asking him to approve tests of a new li model developed by the assistant director of the astronomical office, a man named Han Yi.

At the time of the petition, Han Yi’s model, and its competitor, the so-called Supernal Icon, had already been subjected to three years of ‘reference’, ‘comparison’ and ‘exchange’. Still, no one could agree which one was better. Nor, for that matter, was there any agreement on how they should be tested.

In the end, a live trial involving the prediction of eclipses and heliacal risings was used to settle the debate. With the benefit of hindsight, we can see this trial was seriously flawed. The heliacal rising (first visibility) of planets depends on non-mathematical factors such as eyesight and atmospheric conditions. That’s not to mention the scoring of the trial, which was modelled on archery competitions. Archers scored points for proximity to the bullseye, with no consideration for overall accuracy. The equivalent in economic theory might be to grant a model high points for success in predicting short-term markets, while failing to deduct for missing the Great Recession.

None of this is to say that li models were useless or inherently unscientific. For the most part, li experts were genuine mathematical virtuosos who valued the integrity of their discipline. Despite being based on inaccurate assumptions – that the Earth was at the centre of the cosmos – their models really did work to predict celestial motions. Imperfect though the live trial might have been, it indicates that superior predictive power was a theory’s most important virtue. All of this is consistent with real science, and Chinese astronomy progressed as a science, until it reached the limits imposed by its assumptions.

However, there was no science to the belief that accurate li would improve the outcome of rituals, agriculture or government policy. No science to the Hall of Light, a temple for the emperor built on the model of a magic square. There, by numeric ritual gesture, the Son of Heaven was thought to channel the invisible order of heaven for the prosperity of man. This was quasi-theology, the belief that heavenly patterns – mathematical patterns – could be used to model every event in the natural world, in politics, even the body. Macro- and microcosm were scaled reflections of one another, yin and yang in a unifying, salvific mathematical vision. The expensive gadgets, the personnel, the bureaucracy, the debates, the competition – all of this testified to the divinely authoritative power of mathematics. The result, then as now, was overvaluation of mathematical models based on unscientific exaggerations of their utility.

In ancient China it would have been unfair to blame li experts for the pseudoscientific exploitation of their theories. These men had no way to evaluate the scientific merits of assumptions and theories – ‘science’, in a formalised, post-Enlightenment sense, didn’t really exist. But today it is possible to distinguish, albeit roughly, science from pseudoscience, astronomy from astrology. Hypothetical theories, whether those of economists or conspiracists, aren’t inherently pseudoscientific. Conspiracy theories can be diverting – even instructive – flights of fancy. They become pseudoscience only when promoted from fiction to fact without sufficient evidence.

Romer believes that fellow economists know the truth about their discipline, but don’t want to admit it. ‘If you get people to lower their shield, they’ll tell you it’s a big game they’re playing,’ he told me. ‘They’ll say: “Paul, you may be right, but this makes us look really bad, and it’s going to make it hard for us to recruit young people.”’

Demanding more honesty seems reasonable, but it presumes that economists understand the tenuous relationship between mathematical models and scientific legitimacy. In fact, many assume the connection is obvious – just as in ancient China, the connection between li and the world was taken for granted. When reflecting in 1999 on what makes economics more scientific than the other social sciences, the Harvard economist Richard B Freeman explained that economics ‘attracts stronger students than [political science or sociology], and our courses are more mathematically demanding’. In Lives of the Laureates (2004), Robert E Lucas Jr writes rhapsodically about the importance of mathematics: ‘Economic theory is mathematical analysis. Everything else is just pictures and talk.’ Lucas’s veneration of mathematics leads him to adopt a method that can only be described as a subversion of empirical science:

The construction of theoretical models is our way to bring order to the way we think about the world, but the process necessarily involves ignoring some evidence or alternative theories – setting them aside. That can be hard to do – facts are facts – and sometimes my unconscious mind carries out the abstraction for me: I simply fail to see some of the data or some alternative theory.

Even for those who agree with Romer, conflict of interest still poses a problem. Why would skeptical astronomers question the emperor’s faith in their models? In a phone conversation, Daniel Hausman, a philosopher of economics at the University of Wisconsin, put it bluntly: ‘If you reject the power of theory, you demote economists from their thrones. They don’t want to become like sociologists.’

George F DeMartino, an economist and an ethicist at the University of Denver, frames the issue in economic terms. ‘The interest of the profession is in pursuing its analysis in a language that’s inaccessible to laypeople and even some economists,’ he explained to me. ‘What we’ve done is monopolise this kind of expertise, and we of all people know how that gives us power.’

Every economist I interviewed agreed that conflicts of interest were highly problematic for the scientific integrity of their field – but only tenured ones were willing to go on the record. ‘In economics and finance, if I’m trying to decide whether I’m going to write something favourable or unfavourable to bankers, well, if it’s favourable that might get me a dinner in Manhattan with movers and shakers,’ Pfleiderer said to me. ‘I’ve written articles that wouldn’t curry favour with bankers but I did that when I had tenure.’

Then there’s the additional problem of sunk-cost bias. If you’ve invested in an armillary sphere, it’s painful to admit that it doesn’t perform as advertised. When confronted with their profession’s lack of predictive accuracy, some economists find it difficult to admit the truth. Easier, instead, to double down, like the economist John H Cochrane at the University of Chicago. The problem isn’t too much mathematics, he writes in response to Krugman’s 2009 post-Great-Recession mea culpa for the field, but rather ‘that we don’t have enough math’. Astrology doesn’t work, sure, but only because the armillary sphere isn’t big enough and the equations aren’t good enough.

If overhauling economics depended solely on economists, then mathiness, conflict of interest and sunk-cost bias could easily prove insurmountable. Fortunately, non-experts also participate in the market for economic theory. If people remain enchanted by PhDs and Nobel Prizes awarded for the production of complicated mathematical theories, those theories will remain valuable. If they become disenchanted, the value will drop.

Economists who rationalise their discipline’s value can be convincing, especially with prestige and mathiness on their side. But there’s no reason to keep believing them. The pejorative verb ‘rationalise’ itself warns of mathiness, reminding us that we often deceive each other by making prior convictions, biases and ideological positions look ‘rational’, a word that confuses truth with mathematical reasoning. To be rational is, simply, to think in ratios, like the ratios that govern the geometry of the stars. Yet when mathematical theory is the ultimate arbiter of truth, it becomes difficult to see the difference between science and pseudoscience. The result is people like the judge in Evangeline Adams’s trial, or the Son of Heaven in ancient China, who trust the mathematical exactitude of theories without considering their performance – that is, who confuse math with science, rationality with reality.

There is no longer any excuse for making the same mistake with economic theory. For more than a century, the public has been warned, and the way forward is clear. It’s time to stop wasting our money and recognise the high priests for what they really are: gifted social scientists who excel at producing mathematical explanations of economies, but who fail, like astrologers before them, at prophecy.

What Did Neanderthals Leave to Modern Humans? Some Surprises (New York Times)

Geneticists tell us that somewhere between 1 and 5 percent of the genome of modern Europeans and Asians consists of DNA inherited from Neanderthals, our prehistoric cousins.

At Vanderbilt University, John Anthony Capra, an evolutionary genomics professor, has been combining high-powered computation and a medical records databank to learn what a Neanderthal heritage — even a fractional one — might mean for people today.

We spoke for two hours when Dr. Capra, 35, recently passed through New York City. An edited and condensed version of the conversation follows.

Q. Let’s begin with an indiscreet question. How did contemporary people come to have Neanderthal DNA on their genomes?

A. We hypothesize that roughly 50,000 years ago, when the ancestors of modern humans migrated out of Africa and into Eurasia, they encountered Neanderthals. Matings must have occurred then. And later.

One reason we deduce this is because the descendants of those who remained in Africa — present day Africans — don’t have Neanderthal DNA.

What does that mean for people who have it? 

At my lab, we’ve been doing genetic testing on the blood samples of 28,000 patients at Vanderbilt and eight other medical centers across the country. Computers help us pinpoint where on the human genome this Neanderthal DNA is, and we run that against information from the patients’ anonymized medical records. We’re looking for associations.

What we’ve been finding is that Neanderthal DNA has a subtle influence on risk for disease. It affects our immune system and how we respond to different immune challenges. It affects our skin. You’re slightly more prone to a condition where you can get scaly lesions after extreme sun exposure. There’s an increased risk for blood clots and tobacco addiction.

To our surprise, it appears that some Neanderthal DNA can increase the risk for depression; however, there are other Neanderthal bits that decrease the risk. Roughly 1 to 2 percent of one’s risk for depression is determined by Neanderthal DNA. It all depends on where on the genome it’s located.
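The kind of genotype-to-medical-record association hunt described here can be illustrated with a toy calculation. The sketch below is not the Vanderbilt pipeline; the counts, the variant and the phenotype are all invented, and a real analysis would adjust for ancestry, other covariates and massive multiple testing. It only shows the basic question being asked: do carriers of a Neanderthal-derived variant show a phenotype more often than non-carriers?

```python
# A minimal sketch (not the Vanderbilt pipeline) of the kind of association test
# described above: does carrying a Neanderthal-derived variant co-occur with a
# phenotype coded in medical records more often than chance would predict?
# All counts below are invented for illustration.
from scipy.stats import fisher_exact

#                          phenotype present, phenotype absent
carriers_of_variant      = [220, 4780]   # hypothetical counts among carriers
non_carriers_of_variant  = [180, 5820]   # hypothetical counts among non-carriers

table = [carriers_of_variant, non_carriers_of_variant]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
# In a real study this test would be repeated for thousands of variant/phenotype
# pairs, with correction for multiple testing and for confounding covariates.
```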

Was there ever an upside to having Neanderthal DNA?

It probably helped our ancestors survive in prehistoric Europe. When humans migrated into Eurasia, they encountered unfamiliar hazards and pathogens. By mating with Neanderthals, they gave their offspring needed defenses and immunities.

That trait for blood clotting helped wounds close up quickly. In the modern world, however, this trait means greater risk for stroke and pregnancy complications. What helped us then doesn’t necessarily now.

Did you say earlier that Neanderthal DNA increases susceptibility to nicotine addiction?

Yes. Neanderthal DNA can mean you’re more likely to get hooked on nicotine, even though there were no tobacco plants in archaic Europe.

We think this might be because there’s a bit of Neanderthal DNA right next to a human gene involved in neurotransmitter signalling that has been implicated in a generalized risk for addiction. In this case and probably others, we think the Neanderthal bits on the genome may serve as switches that turn human genes on or off.

Aside from the Neanderthals, do we know if our ancestors mated with other hominids?

We think they did. Sometimes when we’re examining genomes, we can see the genetic afterimages of hominids who haven’t even been identified yet.

A few years ago, the Swedish geneticist Svante Paabo received an unusual fossilized bone fragment from Siberia. He extracted the DNA, sequenced it and realized it was neither human nor Neanderthal. What Paabo found was a previously unknown hominid he named Denisovan, after the cave where it had been discovered. It turned out that Denisovan DNA can be found on the genomes of modern Southeast Asians and New Guineans.

Have you long been interested in genetics?

Growing up, I was very interested in history, but I also loved computers. I ended up majoring in computer science at college and going to graduate school in it; however, during my first year in graduate school, I realized I wasn’t very motivated by the problems that computer scientists worked on.

Fortunately, around that time — the early 2000s — it was becoming clear that people with computational skills could have a big impact in biology and genetics. The human genome had just been mapped. What an accomplishment! We now had the code to what makes you, you, and me, me. I wanted to be part of that kind of work.

So I switched over to biology. And it was there that I heard about a new field where you used computation and genetics research to look back in time — evolutionary genomics.

There may be no written records from prehistory, but genomes are a living record. If we can find ways to read them, we can discover things we couldn’t know any other way.

Not long ago, the two top editors of The New England Journal of Medicine published an editorial questioning “data sharing,” a common practice where scientists recycle raw data other researchers have collected for their own studies. They labeled some of the recycling researchers, “data parasites.” How did you feel when you read that?

I was upset. The data sets we used were not originally collected to specifically study Neanderthal DNA in modern humans. Thousands of patients at Vanderbilt consented to have their blood and their medical records deposited in a “biobank” to find genetic diseases.

Three years ago, when I set up my lab at Vanderbilt, I saw the potential of the biobank for studying both genetic diseases and human evolution. I wrote special computer programs so that we could mine existing data for these purposes.

That’s not being a “parasite.” That’s moving knowledge forward. I suspect that most of the patients who contributed their information are pleased to see it used in a wider way.

What has been the response to your Neanderthal research since you published it last year in the journal Science?

Some of it’s very touching. People are interested in learning about where they came from. Some of it is a little silly. “I have a lot of hair on my legs — is that from Neanderthals?”

But I received racist inquiries, too. I got calls from all over the world from people who thought that since Africans didn’t interbreed with Neanderthals, this somehow justified their ideas of white superiority.

It was illogical. Actually, Neanderthal DNA is mostly bad for us — though that didn’t bother them.

As you do your studies, do you ever wonder about what the lives of the Neanderthals were like?

It’s hard not to. Genetics has taught us a tremendous amount about that, and there’s a lot of evidence that they were much more human than apelike.

They’ve gotten a bad rap. We tend to think of them as dumb and brutish. There’s no reason to believe that. Maybe those of us of European heritage should be thinking, “Let’s improve their standing in the popular imagination. They’re our ancestors, too.”

A mysterious 14-year cycle has been controlling our words for centuries (Science Alert)

Some of your favourite science words are making a comeback.

DAVID NIELD
2 DEC 2016

Researchers analysing several centuries of literature have spotted a strange trend in our language patterns: the words we use tend to fall in and out of favour in a cycle that lasts around 14 years.

Scientists ran computer scripts to track patterns stretching back to the year 1700 through the Google Ngram Viewer database, which monitors language use across more than 4.5 million digitised books. In doing so, they identified a strange oscillation across 5,630 common nouns.
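As a rough illustration of how such an oscillation can be pulled out of this kind of data, the sketch below detrends one word’s yearly frequency series and inspects its autocorrelation; a peak near a lag of 14 years would be consistent with the reported cycle. This is not the authors’ method, and the input file name and column names are assumptions (for example, a series exported by hand from the Google Ngram Viewer).

```python
# A minimal sketch, not the study's method: look for a multi-year oscillation in
# one word's usage frequency. Assumes a file "word_frequency.csv" with columns
# "year" and "frequency"; the filename and column names are assumptions.
import csv
import numpy as np

years, freqs = [], []
with open("word_frequency.csv") as f:
    for row in csv.DictReader(f):
        years.append(int(row["year"]))
        freqs.append(float(row["frequency"]))

freqs = np.asarray(freqs)

# Remove the slow trend so only fluctuations around it remain.
trend = np.polyval(np.polyfit(years, freqs, deg=3), years)
residual = freqs - trend
residual -= residual.mean()

# Autocorrelation of the detrended series; a peak near lag 14 would be
# consistent with the ~14-year cycle reported in the paper.
acf = np.correlate(residual, residual, mode="full")[len(residual) - 1:]
acf /= acf[0]

for lag in range(10, 19):
    print(f"lag {lag:2d} years: autocorrelation {acf[lag]:+.2f}")
```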

The team says the discovery not only shows how writers and the population at large use words to express themselves – it also affects the topics we choose to discuss.

“It’s very difficult to imagine a random phenomenon that will give you this pattern,” Marcelo Montemurro from the University of Manchester in the UK told Sophia Chen at New Scientist.

“Assuming these patterns reflect some cultural dynamics, I hope this develops into better understanding of why we change the topics we discuss,” he added. “We might learn why writers get tired of the same thing and choose something new.”

The 14-year pattern of words coming into and out of widespread use was surprisingly consistent, although the researchers found that in recent years the cycles have begun to get longer by a year or two. The cycles are also more pronounced when it comes to certain words.

What’s interesting is how related words seem to rise and fall together in usage. For example, royalty-related words like “king”, “queen”, and “prince” appear to be on the crest of a usage wave, which means they could soon fall out of favour.

By contrast, a number of scientific terms, including “astronomer”, “mathematician”, and “eclipse” could soon be on the rebound, having dropped in usage recently.

According to the analysis, the same phenomenon happens with verbs as well, though not to the same extent as with nouns, and the academics found similar 14-year patterns in French, German, Italian, Russian, and Spanish, so this isn’t exclusive to English.

The study suggests that words get a certain momentum, causing more and more people to use them, before reaching a saturation point, where writers start looking for alternatives.

Montemurro and fellow researcher Damián Zanette from the National Council for Scientific and Technical Research in Argentina aren’t sure what’s causing this, although they’re willing to make some guesses.

“We expect that this behaviour is related to changes in the cultural environment that, in turn, stir the thematic focus of the writers represented in the Google database,” the researchers write in their paper.

“It’s fascinating to look for cultural factors that might affect this, but we also expect certain periodicities from random fluctuations,” biological scientist Mark Pagel, from the University of Reading in the UK, who wasn’t involved in the research, told New Scientist.

“Now and then, a word like ‘apple’ is going to be written more, and its popularity will go up,” he added. “But then it’ll fall back to a long-term average.”

It’s clear that language is constantly evolving over time, but a resource like the Google Ngram Viewer gives scientists unprecedented access to word use and language trends across the centuries, at least as far as the written word goes.

You can try it out for yourself, and search for any word’s popularity over time.

But if there are certain nouns you’re fond of, make the most of them, because they might not be in common use for much longer.

The findings have been published in Palgrave Communications.

Global climate models do not easily downscale for regional predictions (Science Daily)

Date:
August 24, 2016
Source:
Penn State
Summary:
One size does not always fit all, especially when it comes to global climate models, according to climate researchers who caution users of climate model projections to take into account the increased uncertainties in assessing local climate scenarios.

One size does not always fit all, especially when it comes to global climate models, according to Penn State climate researchers.

“The impacts of climate change rightfully concern policy makers and stakeholders who need to make decisions about how to cope with a changing climate,” said Fuqing Zhang, professor of meteorology and director, Center for Advanced Data Assimilation and Predictability Techniques, Penn State. “They often rely upon climate model projections at regional and local scales in their decision making.”

Zhang and Michael Mann, Distinguished professor of atmospheric science and director, Earth System Science Center, were concerned that the direct use of climate model output at local or even regional scales could produce inaccurate information. They focused on two key climate variables, temperature and precipitation.

They found that projections of temperature changes with global climate models became increasingly uncertain at scales below roughly 600 horizontal miles, a distance equivalent to the combined widths of Pennsylvania, Ohio and Indiana. While climate models might provide useful information about the overall warming expected for, say, the Midwest, predicting the difference between the warming of Indianapolis and Pittsburgh might prove futile.

Regional changes in precipitation were even more challenging to predict, with estimates becoming highly uncertain at scales below roughly 1200 miles, equivalent to the combined width of all the states from the Atlantic Ocean through New Jersey across Nebraska. The difference between changing rainfall totals in Philadelphia and Omaha due to global warming, for example, would be difficult to assess. The researchers report the results of their study in the August issue of Advances in Atmospheric Sciences.
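The scale-dependence the researchers describe can be illustrated with a toy calculation: average an ensemble of projection maps over boxes of decreasing size and watch the across-model spread grow. The sketch below is not the Penn State analysis; the “ensemble” is synthetic data, and every number in it is an assumption chosen only to make the effect visible.

```python
# A minimal sketch (not the published analysis): illustrate how the spread among
# an ensemble of projections grows as the spatial averaging scale shrinks.
# The "ensemble" is synthetic data standing in for warming maps from 20 models.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
n_models, ny, nx = 20, 96, 96          # hypothetical ensemble on a 96 x 96 grid

def smooth(field, k):
    """Box-smooth a 2-D field with a k x k kernel."""
    return convolve2d(field, np.ones((k, k)) / (k * k), mode="same", boundary="symm")

shared = smooth(rng.normal(size=(ny, nx)), 48)       # large-scale signal common to all models
ensemble = np.array([shared + 0.7 * smooth(rng.normal(size=(ny, nx)), 8)
                     for _ in range(n_models)])      # plus model-specific regional detail

# Across-model spread of the box-averaged projection, for boxes of decreasing size.
for box in (96, 48, 24, 12, 6):
    box_means = ensemble[:, :box, :box].mean(axis=(1, 2))   # one value per model
    print(f"{box:3d} x {box:3d} box: ensemble spread = {box_means.std():.3f}")
# The spread is smallest for the whole domain and grows as the box shrinks,
# mirroring the point that uncertainty rises at regional and local scales.
```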

“Policy makers and stakeholders use information from these models to inform their decisions,” said Mann. “It is crucial they understand the limitation in the information the model projections can provide at local scales.”

Climate models provide useful predictions of the overall warming of the globe and the largest-scale shifts in patterns of rainfall and drought, but are considerably harder pressed to predict, for example, whether New York City will become wetter or drier, or to deal with the effects of mountain ranges like the Rocky Mountains on regional weather patterns.

“Climate models can meaningfully project the overall global increase in warmth, rises in sea level and very large-scale changes in rainfall patterns,” said Zhang. “But they are uncertain about the potential significant ramifications on society in any specific location.”

The researchers believe that further research may lead to a reduction in the uncertainties. They caution users of climate model projections to take into account the increased uncertainties in assessing local climate scenarios.

“Uncertainty is hardly a reason for inaction,” said Mann. “Moreover, uncertainty can cut both ways, and we must be cognizant of the possibility that impacts in many regions could be considerably greater and more costly than climate model projections suggest.”

Theoretical tiger chases statistical sheep to probe immune system behavior (Science Daily)

Physicists update predator-prey model for more clues on how bacteria evade attack from killer cells

Date:
April 29, 2016
Source:
IOP Publishing
Summary:
Studying the way that solitary hunters such as tigers, bears or sea turtles chase down their prey turns out to be very useful in understanding the interaction between individual white blood cells and colonies of bacteria. Researchers have created a numerical model that explores this behavior in more detail.

Studying the way that solitary hunters such as tigers, bears or sea turtles chase down their prey turns out to be very useful in understanding the interaction between individual white blood cells and colonies of bacteria. Reporting their results in the Journal of Physics A: Mathematical and Theoretical, researchers in Europe have created a numerical model that explores this behaviour in more detail.

Using mathematical expressions, the group can examine the dynamics of a single predator hunting a herd of prey. The routine splits the hunter’s motion into a diffusive part and a ballistic part, which represent the search for prey and then the direct chase that follows.

“We would expect this to be a fairly good approximation for many animals,” explained Ralf Metzler, who led the work and is based at the University of Potsdam in Germany.

Obstructions included

To further improve its analysis, the group, which includes scientists from the National Institute of Chemistry in Slovenia, and Sorbonne University in France, has incorporated volume effects into the latest version of its model. The addition means that prey can now inadvertently get in each other’s way and endanger their survival by blocking potential escape routes.

Thanks to this update, the team can study not just animal behaviour, but also gain greater insight into the way that killer cells such as macrophages (large white blood cells patrolling the body) attack colonies of bacteria.

One of the key parameters determining the life expectancy of the prey is the so-called ‘sighting range’ — the distance at which the prey is able to spot the predator. Examining this in more detail, the researchers found that the hunter profits more from the poor eyesight of the prey than from the strength of its own vision.
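A stripped-down version of such a model is easy to write down. The sketch below is not the published model; it simply follows the recipe described above, with a predator that diffuses until it detects prey and then chases ballistically, and prey that flee only once the predator enters their sighting range. All parameter values are assumptions.

```python
# A minimal 2-D sketch in the spirit of the model described above, not the
# published one: a lone predator searches diffusively, switches to a straight
# ("ballistic") chase once it detects prey, and prey flee only when the predator
# comes within their sighting range. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
L = 50.0                      # side of the (periodic) square arena
n_prey = 30
pred_detect = 6.0             # distance at which the predator notices prey
prey_sight = 4.0              # distance at which prey spot the predator and flee
v_pred, v_prey, dt = 1.2, 1.0, 0.1

pred = rng.uniform(0, L, 2)
prey = rng.uniform(0, L, (n_prey, 2))
caught = np.zeros(n_prey, dtype=bool)

def wrap(d):                  # shortest displacement on the periodic arena
    return (d + L / 2) % L - L / 2

for step in range(20000):
    d = wrap(prey - pred)                         # vectors from predator to each prey
    dist = np.hypot(d[:, 0], d[:, 1])
    dist[caught] = np.inf
    target = np.argmin(dist)

    if dist[target] < pred_detect:                # ballistic chase toward nearest prey
        pred += v_pred * dt * d[target] / dist[target]
    else:                                         # diffusive search
        pred += np.sqrt(dt) * rng.normal(size=2)

    fleeing = (dist < prey_sight) & ~caught       # prey run directly away
    prey[fleeing] += v_prey * dt * d[fleeing] / dist[fleeing][:, None]

    pred %= L
    prey %= L
    caught |= dist < 0.2                          # capture radius
    if caught.all():
        break

print(f"{caught.sum()} of {n_prey} prey caught after {step + 1} steps")
```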

Long tradition with a new dimension

The analysis of predator-prey systems has a long tradition in statistical physics and today offers many opportunities for cooperative research, particularly in fields such as biology, biochemistry and movement ecology.

“With the ever more detailed experimental study of systems ranging from molecular processes in living biological cells to the motion patterns of animal herds and humans, the need for cross-fertilisation between the life sciences and the quantitative mathematical approaches of the physical sciences has reached a new dimension,” Metzler comments.

To help support this cross-fertilisation, he heads up a new section of the Journal of Physics A: Mathematical and Theoretical that is dedicated to biological modelling and examines the use of numerical techniques to study problems in the interdisciplinary field connecting biology, biochemistry and physics.


Journal Reference:

  1. Maria Schwarzl, Aljaz Godec, Gleb Oshanin, Ralf Metzler. A single predator charging a herd of prey: effects of self volume and predator–prey decision-making. Journal of Physics A: Mathematical and Theoretical, 2016; 49 (22): 225601. DOI: 10.1088/1751-8113/49/22/225601

Mathematical model helps plan the operation of water reservoirs (Fapesp)

Computational system developed by researchers at USP and Unicamp establishes water-supply rationing rules for drought periods

Researchers at the Polytechnic School of the University of São Paulo (Poli-USP) and at the School of Civil Engineering, Architecture and Urbanism of the University of Campinas (FEC-Unicamp) have developed new mathematical and computational models designed to optimize the management and operation of complex water-supply and electric-power systems, such as those that exist in Brazil.

The models, whose development began in the early 2000s, were refined through the Thematic Project “HidroRisco: Risk management technologies applied to water supply and electric power systems”, carried out with Fapesp support.

“The idea is that the mathematical and computational models we developed can help the managers of water and electricity supply and distribution systems make decisions that have enormous social and economic impacts, such as declaring rationing,” Paulo Sérgio Franco Barbosa, a professor at FEC-Unicamp and coordinator of the project, told Agência Fapesp.

According to Barbosa, many of the technologies used today in Brazil’s water and energy sectors to manage supply, demand and the risk of shortages during extreme climate events, such as severe drought, were developed in the 1970s, when Brazilian cities were smaller and the country did not have a water and hydropower system as complex as today’s.

For these reasons, he says, these management systems have shortcomings such as failing to account for the connections between different basins and failing to allow for climate events more extreme than those already seen in the past when planning the operation of a system of reservoirs and water distribution.

“There was a failure in sizing the supply capacity of the Cantareira reservoir, for example, because no one imagined a drought worse than the one that hit the basin in 1953, considered the driest year in the reservoir’s history before 2014,” Barbosa said.

To improve today’s risk-management systems, the researchers developed new mathematical and computational models that simulate the operation of a water-supply or energy system in an integrated way and under different scenarios of rising water supply and demand.

“Using statistical and computational techniques, the models we developed can run better simulations and give a water-supply or electric-power system more protection against climate risks,” Barbosa said.

Sisagua

One of the models developed by the researchers, in collaboration with colleagues at the University of California, Los Angeles, in the United States, is Sisagua, an optimization and simulation modelling platform for water-supply systems.

The computational platform integrates and represents all the supply sources of a system of reservoirs and water distribution for large cities such as São Paulo, including the reservoirs, canals, pipelines, and treatment and pumping stations.

“Sisagua makes it possible to plan operations, study supply capacity and evaluate alternatives for expanding or reducing the delivery of a water-supply system in an integrated way,” Barbosa noted.

One of the computational model’s distinguishing features, according to the researcher, is that it establishes rationing rules for a large system of reservoirs and water distribution during droughts, like the one São Paulo went through in 2014, so as to minimize the damage that any rationing causes to the population and the economy.

When one of the system’s reservoirs falls below normal levels and approaches its minimum operating volume, the computational model indicates a first stage of rationing, reducing the supply of stored water by 10%, for example.

If the reservoir’s supply crisis persists, the mathematical model indicates alternatives to minimize the severity of the rationing, distributing the water cuts more evenly over the period of scarcity and among the system’s other reservoirs.

“Sisagua has a computational intelligence that indicates where and when to cut the supply in a water system, so as to minimize the damage to the system and to a city’s population and economy,” Barbosa said.
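The staged-rationing logic Barbosa describes can be illustrated with a toy simulation. The sketch below is not Sisagua and does not use its rules or data; the thresholds, cut levels and inflow series are assumptions chosen only to show how deliveries are scaled back as storage drops through successive stages.

```python
# A minimal sketch of the staged-rationing idea described above; the thresholds,
# cut levels and inflow series are assumptions for illustration, not Sisagua's
# actual rules or data. Volumes in hm3 (million m3), one step per month.
capacity = 1000.0
storage  = 600.0
demand   = 60.0                        # desired monthly delivery
inflows  = [80, 70, 50, 30, 20, 15, 10, 15, 25, 40, 60, 90] * 2   # two dry-ish years

# (storage-fraction threshold, fraction of demand actually delivered)
stages = [(0.50, 1.00),   # above 50% of capacity: full supply
          (0.35, 0.90),   # first stage of rationing: 10% cut
          (0.20, 0.80),   # second stage: 20% cut
          (0.00, 0.60)]   # emergency stage: 40% cut

for month, inflow in enumerate(inflows, start=1):
    frac = storage / capacity
    delivered = next(d for threshold, d in stages if frac >= threshold) * demand
    storage = min(capacity, max(0.0, storage + inflow - delivered))
    print(f"month {month:2d}: storage {storage:7.1f} hm3, delivered {delivered:5.1f} hm3")
```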

The Cantareira system

The researchers applied Sisagua to simulate the operation and management of the water-distribution system of the São Paulo metropolitan region, which supplies about 18 million people and is considered one of the largest in the world, with an average flow of 67 cubic metres per second (m³/s).

The São Paulo distribution system comprises eight supply subsystems, the largest of which is the Cantareira, which provides water to 5.3 million people at an average flow of 33 m³/s.

To assess the Cantareira’s supply capacity in a scenario of water scarcity combined with rising demand, the researchers used Sisagua to run a planning simulation of the subsystem’s use over a ten-year period.

For this, they used inflow data for the Cantareira from 1950 to 1960, provided by the Companhia de Saneamento Básico do Estado de São Paulo (Sabesp).

“This period was chosen as the basis for the Sisagua projections because it recorded severe droughts, when inflows stayed significantly below average for four consecutive years, between 1952 and 1956,” Barbosa explained.

From the inflow data of this historical series, the mathematical and computational model analysed scenarios with Cantareira water demand varying between 30 and 40 m³/s.

Among the model’s findings: the Cantareira can meet a demand of up to 34 m³/s in a scarcity scenario like that of 1950 to 1960 with a negligible risk of shortage. Above that value, scarcity, and consequently the risk of rationing in the reservoir, increases exponentially.

For the Cantareira to meet a demand of 38 m³/s during a period of water scarcity, the model indicated that rationing would have to begin 40 months (3 years and 4 months) before the basin reached its critical point, below normal volume and close to the minimum operating limit.

In this way, it would be possible to meet between 85% and 90% of the reservoir’s water demand during the drought until it recovered its ideal volume, avoiding more severe rationing than would occur if full supply from the reservoir were maintained.

“The earlier rationing begins in a water-supply system, the better the loss is spread over time,” Barbosa said. “The population can cope better with a 15% water cut over two years, for example, than with a 40% cut over just two months.”

Integrated systems

In another study, the researchers used Sisagua to assess whether the Cantareira, Guarapiranga, Alto Tietê and Alto Cotia subsystems could meet current water demands in a scarcity scenario.

For this, they also used inflow data for the four subsystems from 1950 to 1960.

The results of the analyses performed with the mathematical and computational method indicated that the Cotia subsystem hit a critical rationing limit several times during the simulated ten-year period.

In contrast, the Alto Tietê subsystem frequently held water volumes above its target.

Based on these findings, the researchers propose new interconnections for transfers among these four supply subsystems.

Part of the Cotia subsystem’s demand could be supplied by the Guarapiranga and Cantareira subsystems. These two subsystems, in turn, could also receive water from the Alto Tietê subsystem, the Sisagua projections indicated.

“Transferring water among the subsystems would provide greater flexibility and result in better distribution, efficiency and reliability for the water-supply system of the São Paulo metropolitan region,” Barbosa said.

According to the researcher, the Sisagua projections also pointed to the need for investment in new water-supply sources for the São Paulo metropolitan region.

He notes that the main basins supplying São Paulo suffer from problems such as urban concentration.

Around the Alto Tietê basin, for example, which occupies only 2.7% of São Paulo state’s territory, nearly 50% of the state’s population is concentrated, a population density five times higher than that of countries such as Japan, Korea and the Netherlands.

The Piracicaba, Paraíba do Sul, Sorocaba and Baixada Santista basins, which represent 20% of São Paulo’s area, concentrate 73% of the state’s population, with a population density higher than that of countries such as Japan, the Netherlands and the United Kingdom, the researchers point out.

“It will be unavoidable to consider other water-supply sources for the São Paulo metropolitan region, such as the Juquiá system in the interior of the state, which has water of excellent quality and in large volumes,” Barbosa said.

“Because of the distance, the project will be expensive and has been postponed. But it can no longer be put off,” he said.

Besides São Paulo, Sisagua has also been used to model the water-supply systems of Los Angeles, in the United States, and of Taiwan.

The article “Planning and operation of large-scale water distribution systems with preemptive priorities” (doi: 10.1061/(ASCE)0733-9496(2008)134:3(247)), by Barros and others, is available to subscribers of the Journal of Water Resources Planning and Management at ascelibrary.org/doi/abs/10.1061/%28ASCE%290733-9496%282008%29134%3A3%28247%29.

Agência Fapesp

Sexually transmitted diseases explain monogamy (El País)

As societies grew larger, sexual infections became endemic and affected those who had many partners

DANIEL MEDIAVILLA

13 APR 2016 – 02:29 CEST

The origin of imposed monogamy is still a mystery. At some point in human history, when the advent of agriculture and animal husbandry began to transform societies, ideas about what was acceptable in relations between men and women started to change. Throughout history, most societies have allowed polygamy. Research on hunter-gatherers suggests that, in prehistoric societies, it was common for a relatively small group of men to monopolize the tribe’s women in order to increase their offspring.

However, something happened that led many of the groups that came out on top to adopt a system for organizing sex as distant from human inclinations as monogamy. As several passages of the Bible show, the recommended way to resolve conflicts often consisted of putting adulterers to death by stoning.

A group of researchers from the University of Waterloo (Canada) and the Max Planck Institute for Evolutionary Anthropology (Germany), who published a paper on the subject this Tuesday in the journal Nature Communications, believe that sexually transmitted diseases played a key role. According to the hypothesis, which was tested with computer models, when agriculture allowed the emergence of populations in which more than 300 people lived together, our relationship with bacteria such as gonorrhoea or syphilis changed.

Syphilis and gonorrhoea reduced fertility in a society without antibiotics or condoms

In the small groups of the Pleistocene, outbreaks caused by these microbes were short-lived and had a limited impact on the population. When the number of individuals in a society is larger, however, outbreaks become endemic and the impact on those who practise polygamy is greater. In a society without latex condoms or antibiotics, bacterial infections have a major impact on fertility.

This biological condition would have given an advantage to people who mated monogamously and, in addition, would have made punishments like those described in the Bible more acceptable for individuals who broke the norm. Eventually, in the growing agrarian societies of early human history, the interaction between monogamy and the enforcement of norms to sustain it would end up conferring an advantage, in the form of higher fertility, on the societies that practised them.
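The mechanism can be illustrated with a toy calculation, shown below. It is not the model published in Nature Communications; every parameter is an assumption. It only makes the asymmetry concrete: when a fertility-reducing infection is rare, having many partners costs little, but once it is endemic the penalty falls mainly on the polygynous.

```python
# A toy illustration of the mechanism described above, not the published model:
# in a group where a fertility-reducing infection circulates, individuals with
# many partners are infected more often and leave fewer offspring.
# Every parameter value below is an assumption chosen only for illustration.
import random

random.seed(42)

def expected_offspring(n_partners, prevalence, p_transmit, fertility_penalty,
                       base_offspring=4.0, trials=20000):
    """Average offspring for someone with n_partners, by simple Monte Carlo."""
    total = 0.0
    for _ in range(trials):
        infected = any(random.random() < prevalence * p_transmit
                       for _ in range(n_partners))
        total += base_offspring * (fertility_penalty if infected else 1.0)
    return total / trials

for prevalence, label in [(0.02, "small group, rare infection"),
                          (0.30, "large group, endemic infection")]:
    mono = expected_offspring(1, prevalence, p_transmit=0.5, fertility_penalty=0.3)
    poly = expected_offspring(8, prevalence, p_transmit=0.5, fertility_penalty=0.3)
    print(f"{label}: monogamous ~{mono:.2f} offspring, polygynous ~{poly:.2f}")
# When infection is rare, many partners cost little; once it is endemic, the
# fertility penalty falls mainly on those with many partners, which is the
# asymmetry the study argues favoured socially imposed monogamy in larger societies.
```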

The study’s authors believe that approaches like this, which test premises in an attempt to understand the interaction between social and natural dynamics, can help explain not only the emergence of socially imposed monogamy, but also other social norms related to physical contact between human beings.

“Our social norms did not develop in isolation from what was happening in our natural environment,” Chris Bauch, a professor of applied mathematics at the University of Waterloo and one of the study’s authors, said in a statement. “On the contrary, we cannot understand social norms without understanding their origin in our natural environment,” he added. “Norms were shaped by our natural environment,” he concluded.

The Water Data Drought (N.Y.Times)

Then there is water.

Water may be the most important item in our lives, our economy and our landscape about which we know the least. We not only don’t tabulate our water use every hour or every day, we don’t do it every month, or even every year.

The official analysis of water use in the United States is done every five years. It takes a tiny team of people four years to collect, tabulate and release the data. In November 2014, the United States Geological Survey issued its most current comprehensive analysis of United States water use — for the year 2010.

The 2010 report runs 64 pages of small type, reporting water use in each state by quality and quantity, by source, and by whether it’s used on farms, in factories or in homes.

It doesn’t take four years to get five years of data. All we get every five years is one year of data.

The data system is ridiculously primitive. It was an embarrassment even two decades ago. The vast gaps — we start out missing 80 percent of the picture — mean that from one side of the continent to the other, we’re making decisions blindly.

In just the past 27 months, there have been a string of high-profile water crises — poisoned water in Flint, Mich.; polluted water in Toledo, Ohio, and Charleston, W. Va.; the continued drying of the Colorado River basin — that have undermined confidence in our ability to manage water.

In the time it took to compile the 2010 report, Texas endured a four-year drought. California settled into what has become a five-year drought. The most authoritative water-use data from across the West couldn’t be less helpful: It’s from the year before the droughts began.

In the last year of the Obama presidency, the administration has decided to grab hold of this country’s water problems, water policy and water innovation. Next Tuesday, the White House is hosting a Water Summit, where it promises to unveil new ideas to galvanize the sleepy world of water.

The question White House officials are asking is simple: What could the federal government do that wouldn’t cost much but that would change how we think about water?

The best and simplest answer: Fix water data.

More than any other single step, modernizing water data would unleash an era of water innovation unlike anything in a century.

We have a brilliant model for what water data could be: the Energy Information Administration, which has every imaginable data point about energy use — solar, wind, biodiesel, the state of the heating oil market during the winter we’re living through right now — all available, free, to anyone. It’s not just authoritative, it’s indispensable. Congress created the agency in the wake of the 1970s energy crisis, when it became clear we didn’t have the information about energy use necessary to make good public policy.

That’s exactly the state of water — we’ve got crises percolating all over, but lack the data necessary to make smart policy decisions.

Congress and President Obama should pass updated legislation creating inside the United States Geological Survey a vigorous water data agency with the explicit charge to gather and quickly release water data of every kind — what utilities provide, what fracking companies and strawberry growers use, what comes from rivers and reservoirs, the state of aquifers.

Good information does three things.

First, it creates the demand for more good information. Once you know what you can know, you want to know more.

Second, good data changes behavior. The real-time miles-per-gallon gauges in our cars are a great example. Who doesn’t want to edge the M.P.G. number a little higher? Any company, community or family that starts measuring how much water it uses immediately sees ways to use less.

Finally, data ignites innovation. Who imagined that when most everyone started carrying a smartphone, we’d have instant, nationwide traffic data? The phones make the traffic data possible, and they also deliver it to us.

The truth is, we don’t have any idea what detailed water use data for the United States will reveal. But we can be certain it will create an era of water transformation. If we had monthly data on three big water users — power plants, farmers and water utilities — we’d instantly see which communities use water well, and which ones don’t.

We’d see whether tomato farmers in California or Florida do a better job. We’d have the information to make smart decisions about conservation, about innovation and about investing in new kinds of water systems.

Water’s biggest problem, in this country and around the world, is its invisibility. You don’t tackle problems that are out of sight. We need a new relationship with water, and that has to start with understanding it.

Study suggests different written languages are equally efficient at conveying meaning (Eureka/University of Southampton)

PUBLIC RELEASE: 1-FEB-2016

UNIVERSITY OF SOUTHAMPTON

A study led by the University of Southampton has found there is no difference in the time it takes people from different countries to read and process different languages.

The research, published in the journal Cognition, finds the same amount of time is needed for a person, from for example China, to read and understand a text in Mandarin, as it takes a person from Britain to read and understand a text in English – assuming both are reading their native language.

Professor of Experimental Psychology at Southampton, Simon Liversedge, says: “It has long been argued by some linguists that all languages have common or universal underlying principles, but it has been hard to find robust experimental evidence to support this claim. Our study goes at least part way to addressing this – by showing there is universality in the way we process language during the act of reading. It suggests no one form of written language is more efficient in conveying meaning than another.”

The study, carried out by the University of Southampton (UK), Tianjin Normal University (China) and the University of Turku (Finland), compared the way three groups of people in the UK, China and Finland read their own languages.

The 25 participants in each group – one group for each country – were given eight short texts to read which had been carefully translated into the three different languages. A rigorous translation process was used to make the texts as closely comparable across languages as possible. English, Finnish and Mandarin were chosen because of the stark differences they display in their written form – with great variation in visual presentation of words, for example alphabetic vs. logographic(1), spaced vs. unspaced, agglutinative(2) vs. non-agglutinative.

The researchers used sophisticated eye-tracking equipment to assess the cognitive processes of the participants in each group as they read. The equipment was set up identically in each country to measure eye movement patterns of the individual readers – recording how long they spent looking at each word, sentence or paragraph.

The results of the study showed significant and substantial differences between the three language groups in relation to the nature of eye movements of the readers and how long participants spent reading each individual word or phrase. For example, the Finnish participants spent longer concentrating on some words compared to the English readers. However, most importantly and despite these differences, the time it took for the readers of each language to read each complete sentence or paragraph was the same.

Professor Liversedge says: “This finding suggests that despite very substantial differences in the written form of different languages, at a basic propositional level, it takes humans the same amount of time to process the same information regardless of the language it is written in.

“We have shown it doesn’t matter whether a native Chinese reader is processing Chinese, or a Finnish native reader is reading Finnish, or an English native reader is processing English, in terms of comprehending the basic propositional content of the language, one language is as good as another.”
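A small, invented example may make the distinction between the two measures clearer. The sketch below is not the study’s analysis and its numbers are made up; it simply shows how per-word averages can differ across languages while per-sentence totals come out roughly the same.

```python
# A minimal sketch, not the study's analysis: from per-word reading times for
# readers of three languages, compare the average time per word (which the study
# found to differ) with the average time per sentence (which it found to match).
# The numbers below are invented for illustration.
import numpy as np

# reading_times[language][sentence] = list of per-word fixation times (ms)
reading_times = {
    "English":  [[210, 190, 230, 250, 220, 240, 200, 215],   # more, shorter words
                 [205, 225, 215, 235, 245, 210, 230, 220]],
    "Finnish":  [[340, 360, 330, 355, 345],                   # fewer, longer words
                 [350, 335, 365, 340, 355]],
    "Mandarin": [[280, 300, 290, 310, 295, 305],
                 [285, 315, 290, 300, 310, 295]],
}

for lang, sentences in reading_times.items():
    per_word = np.mean([t for s in sentences for t in s])
    per_sentence = np.mean([sum(s) for s in sentences])
    print(f"{lang:8s}: mean per word {per_word:6.1f} ms, "
          f"mean per sentence {per_sentence:7.1f} ms")
```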

The study authors believe more research would be needed to fully understand if true universality of language exists, but that their study represents a good first step towards demonstrating that there is universality in the process of reading.

Notes for editors:

1) Logographic language systems use signs or characters to represent words or phrases.

2) Agglutinative language tends to express concepts in complex words consisting of many sub-units that are strung together.

3) The paper Universality in eye movements and reading: A trilingual investigation, (Simon P. Liversedge, Denis Drieghe, Xin Li, Guoli Yan, Xuejun Bai, Jukka Hyönä) is published in the journal Cognition and can also be found at: http://eprints.soton.ac.uk/382899/1/Liversedge,%20Drieghe,%20Li,%20Yan,%20Bai,%20%26%20Hyona%20(in%20press)%20copy.pdf

 

Semantically speaking: Does meaning structure unite languages? (Eureka/Santa Fe Institute)

1-FEB-2016

Humans’ common cognitive abilities and language dependence may provide an underlying semantic order to the world’s languages

SANTA FE INSTITUTE

We create words to label people, places, actions, thoughts, and more so we can express ourselves meaningfully to others. Do humans’ shared cognitive abilities and dependence on languages naturally provide a universal means of organizing certain concepts? Or do environment and culture influence each language uniquely?

Using a new methodology that measures how closely words’ meanings are related within and between languages, an international team of researchers has revealed that for many universal concepts, the world’s languages feature a common structure of semantic relatedness.

“Before this work, little was known about how to measure [a culture’s sense of] the semantic nearness between concepts,” says co-author and Santa Fe Institute Professor Tanmoy Bhattacharya. “For example, are the concepts of sun and moon close to each other, as they are both bright blobs in the sky? How about sand and sea, as they occur close by? Which of these pairs is the closer? How do we know?”

Translation, the mapping of relative word meanings across languages, would provide clues. But examining the problem with scientific rigor called for an empirical means to denote the degree of semantic relatedness between concepts.

To get reliable answers, Bhattacharya needed to fully quantify a comparative method that is commonly used to infer linguistic history qualitatively. (He and collaborators had previously developed this quantitative method to study changes in sounds of words as languages evolve.)

“Translation uncovers a disagreement between two languages on how concepts are grouped under a single word,” says co-author and Santa Fe Institute and Oxford researcher Hyejin Youn. “Spanish, for example, groups ‘fire’ and ‘passion’ under ‘incendio,’ whereas Swahili groups ‘fire’ with ‘anger’ (but not ‘passion’).”

To quantify the problem, the researchers chose a few basic concepts that we see in nature (sun, moon, mountain, fire, and so on). Each concept was translated from English into 81 diverse languages, then back into English. Based on these translations, a weighted network was created. The structure of the network was used to compare languages’ ways of partitioning concepts.

The team found that the translated concepts consistently formed three theme clusters in a network, densely connected within themselves and weakly to one another: water, solid natural materials, and earth and sky.
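The network construction described here can be sketched in a few lines. The example below is not the authors’ dataset or clustering method; the toy “dictionary” of shared words is invented, and an off-the-shelf community-detection routine stands in for their partitioning of the network.

```python
# A minimal sketch of the kind of polysemy network described above, not the
# authors' data or method. The toy "dictionary" below is invented: for each
# language it lists sets of concepts that happen to share a single word.
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

colexified = {
    "lang_A": [{"sun", "day"}, {"sea", "lake"}],
    "lang_B": [{"moon", "month"}, {"earth", "dust"}],
    "lang_C": [{"sun", "day"}, {"sea", "salt"}],
    "lang_D": [{"moon", "month"}, {"earth", "ground"}],
}

G = nx.Graph()
for groups in colexified.values():
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            # Edge weight counts the languages in which the two concepts share a word.
            w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)

clusters = greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters, 1):
    print(f"cluster {i}: {sorted(cluster)}")
```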

“For the first time, we now have a method to quantify how universal these relations are,” says Bhattacharya. “What is universal – and what is not – about how we group clusters of meanings teaches us a lot about psycholinguistics, the conceptual structures that underlie language use.”

The researchers hope to expand this study’s domain, adding more concepts, then investigating how the universal structure they reveal underlies meaning shift.

Their research was published today in PNAS.

Is human behavior controlled by our genes? Richard Levins reviews ‘The Social Conquest of Earth’ (Climate & Capitalism)

“Failing to take class division into account is not simply a political bias. It also distorts how we look at human evolution as intrinsically bio-social and human biology as socialized biology.”

 

August 1, 2012

Edward O. Wilson. The Social Conquest of Earth. Liveright Publishing, New York, 2012

reviewed by Richard Levins

In the 1970s, Edward O. Wilson, Richard Lewontin, Stephen Jay Gould and I were colleagues in Harvard’s new department of Organismic and Evolutionary Biology. In spite of our later divergences, I retain grateful memories of working in the field with Ed, turning over rocks, sharing beer, breaking open twigs, putting out bait (canned tuna fish) to attract the ants we were studying.

We were part of a group that hoped to jointly write and publish articles offering a common view of evolutionary science, but that collaboration was brief, largely because Lewontin and I strongly disagreed with Wilson’s Sociobiology.

Reductionism and Sociobiology

Although Wilson fought hard against the reduction of biology to the study of molecules, his holism stopped there. He came to promote the reduction of social and behavioral science to biology. In his view:

“Our lives are restrained by two laws of biology: all of life’s entities and processes are obedient to the laws of physics and chemistry; and all of life’s entities and processes have arisen through evolution and natural selection.” [Social Conquest, p. 287]

This is true as far as it goes but fails in two important ways.

First, it ignores the reciprocal feedback between levels. The biological creates the ensemble of molecules in the cell; the social alters the spectrum of molecules in the biosphere; biological activity creates the biosphere itself and the conditions for the maintenance of life.

Second, it doesn’t consider how the social level alters the biological: our biology is a socialized biology.

Higher (more inclusive) levels are indeed constrained by the laws at lower levels of organization, but they also have their own laws that emerge from the lower level yet are distinct and that also determine which chemical and physical entities are present in the organisms. In new contexts they operate differently.

Thus for example we, like a few other animals including bears, are omnivores. For some purposes such as comparing digestive systems that’s an adequate label. But we are omnivores of a special kind: we not only acquire food by predation, but we also produce food, turning the inedible into edible, the transitory into stored food. This has had such a profound effect on our lives that it is also legitimate to refer to us as something new, productivores.

The productivore mode of sustenance opens a whole new domain: the mode of production. Human societies have experienced different modes of production and ways to organize reproduction, each with its own dynamics, relations with the rest of nature, division into classes, and processes which restore or change it when it is disturbed.

The division of society into classes changes how natural selection works, who is exposed to what diseases, who eats and who doesn’t eat, who does the dishes, who must do physical work, how long we can expect to live. It is no longer possible to prescribe the direction of natural selection for the whole species.

So failing to take class division into account is not simply a political bias. It also distorts how we look at human evolution as intrinsically bio-social and human biology as socialized biology.

The opposite of the genetic determinism of sociobiology is not “the blank slate” view that claims that our biological natures were irrelevant to behavior and society. The question is, what about our animal heritage was relevant?

We all agree that we are animals; that as animals we need food; that we are terrestrial rather than aquatic animals; that we are mammals and therefore need a lot of food to support our high metabolic rates that maintain body temperature; that for part of our history we lived in trees and acquired characteristics adapted to that habitat, but came down from the trees with a dependence on vision, hands with padded fingers, and so on. We have big brains, with regions that have different major functions such as emotions, color vision, and language.

But beyond these general capacities, there is widespread disagreement about which behaviors or attitudes are expressions of brain structure. The amygdala is a locus of emotion, but does it tell us what to be angry or rejoice about? It is an ancient part of our brains, but has it not evolved in response to what the rest of the brain is doing? There is higher intellectual function in the cortex, but does it tell us what to think about?

Every part of an organism is the environment for the rest of the organism, setting the context for natural selection. In contrast to this fluid viewpoint, phrases such as “hard-wired” have become part of the pop vocabulary, applied promiscuously to all sorts of behaviors.

In a deeper sense, asking if something is heritable is a nonsense question. Heritability is always a comparison: how much of the difference between humans and chimps is heritable? What about the differences between ourselves and Neanderthals? Between nomads and farmers?

Social Conquest of Earth

The Social Conquest of Earth, Ed Wilson’s latest book, continues his interest in the “eusocial” animals – ants, bees and others that live in groups with overlapping generations and a division of labor that includes altruistic behavior. As the title shows, he also continues to use the terminology of conquest and domination, so that social animals “conquer” the earth and their abundance makes them “dominate” it.

The problem that Wilson poses in this book is first, why did eusociality arise at all, and second, why is it so rare?

Wilson is at his best when discussing the more remote past, the origins of social behavior 220 million years ago for termites, 150 million years for ants, 70-80 million years for humble bees and honey bees.

But as he gets closer to humanity the reductionist biases that informed Sociobiology reassert themselves. Once again Wilson argues that brain architecture determines what people do socially – that war, aggression, morality, honor and hierarchy are part of “human nature.”

Rejecting kin selection

A major change, and one of the most satisfying parts of the book, is his rejection of kin selection as a motive force of social evolution, a theory he once defended strongly.

Kin selection assumed that natural selection acts on genes. A gene will be favored if it results in enhancing its own survival and reproduction, but it is not enough to look at the survival of the individual. If my brother and I each have 2 offspring, a shared gene would be doubled in the next generation. But if my brother sacrifices himself so that I might leave 5 offspring while he leaves none, our shared gene will increase 250%.

Therefore, argued the promoters of this theory, the fitness that natural selection increases has to be calculated over a whole set of kin, weighted by the closeness of their relationship. Mathematical formulations were developed to support this theory. Wilson found it attractive because it appeared to support sociobiology.
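
For reference, the “mathematical formulations” alluded to here are usually stated as Hamilton’s rule and the notion of inclusive fitness. The following is the textbook form, not a formula taken from Wilson’s book or from this review:

    \[
      r\,b > c,
      \qquad
      W_{\text{inclusive}} = w_{\text{self}} + \sum_{i} r_i\, w_i ,
    \]

where c is the fitness cost of an act to the actor, b the benefit to the recipient, and r the coefficient of relatedness (r = 1/2 for full siblings); an allele for the behavior is predicted to spread when the first inequality holds.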

However, plausible inference is not enough to prove a theory. Empirical studies comparing different species or traits did not confirm the kin selection hypothesis, and a reexamination of its mathematical structure (such as the fuzziness of defining relatedness) showed that it could not account for the observed natural world. Wilson devotes a lot of space to refuting kin selection because of his previous support of it: it is a great example of scientific self-correction.

Does group selection explain social behaviour?

Wilson has now adopted another model in which the evolution of sociality is the result of opposing processes of ordinary individual selection acting within populations, and group selection acting between populations. He invokes this model to account for religion, morality, honor and other human behaviors.

He argues that individual selection promotes “selfishness” (that is, behavior that enhances individual survival) while group selection favors cooperative and “altruistic” behavior. The two forms of selection oppose each other, and that results in our mixed behaviors.

“We are an evolutionary chimera living on intelligence steered by the demands of animal instinct. This is the reason we are mindlessly dismantling the biosphere and with it, our own prospects for permanent existence.” [p.13]

But this simplistic reduction of environmental destruction to biology will not stand. Contrary to Wilson, the destruction of the biosphere is not “mindless.” It is the outcome of interactions in the noxious triad of greed, poverty, and ignorance, all produced by a socio-economic system that must expand to survive.

For Wilson, as for many environmentalists, the driver of ecological destruction is some generic “we,” who are all in the same boat. But since the emergence of classes after the adoption of agriculture some 8-10,000 years ago it is no longer appropriate to talk of a collective “we.”

The owners of the economy are willing to use up resources, pollute the environment, debase the quality of products, and undermine the health of the producers out of a kind of perverse economic rationality. They support their policies with theories such as climate change denial or doubting the toxicity of pesticides, and buttress it with legislation and court decisions.

Evolution and religion

The beginning and end of the book, a spirited critique of religion as possibly explaining human nature, is more straightforwardly materialist than the view supported by Stephen J. Gould, who argued that religion and science are separate magisteria that play equal roles in human wellbeing.

But Wilson’s use of evidence is selective.

For example, he argues that religion demands absolute belief from its followers – but this is true only of Christianity and Islam. Judaism lets you think what you want as long as you practice the prescribed rituals, and Buddhism doesn’t care about deities or the afterlife.

Similarly he argues that creation myths are a product of evolution:

“Since paleolithic times … each tribe invented its own creation myths… No tribe could long survive without a creation myth… The creation myth is a Darwinian device for survival.” [p. 8]

But the ancient Israelites did not have an origin myth when they emerged as a people in the hills of Judea around 1250 B.C.E. Although the Book of Genesis appears at the beginning of the Bible, the Israelites did not adapt it from Babylonian mythology until four centuries after Deuteronomy was written, after they had survived 200 years as a tribal confederation, two kingdoms and the Assyrian and Babylonian conquests. By then the writing of scripture was a political act, not a “Darwinian device for survival.”

Biologizing war

In support of his biologizing of “traits,” Wilson reviews recent research that appears to show a biological basis for the way people see and interpret color, for the incest taboo, and for the startle response – and then asserts that inherited traits include war, hierarchy, honor and such. Ignoring the role of social class, he views these as universal traits of human nature.

Consider war. Wilson claims that war reflects genes for group selection. “A soldier going into battle will benefit his country but he runs a higher risk of death than one who does not.” [p. 165]

But soldiers don’t initiate conflict. We know in our own times that those who decide to make war are not those who fight the wars – but, perhaps unfortunately, sterilizing the general staff of the Pentagon and of the CIA would not produce a more peaceful America.

The evidence against war as a biological imperative is strong. Willingness to fight is situational.

Group selection can’t explain why soldiers have to be coerced into fighting, why desertion is a major problem for generals and is severely punished, or why resistance to recruitment is a major problem of armies. In the present militarist USA, soldiers are driven to join up through unemployment and the promises of benefits such as learning skills and getting an education and self-improvement. No recruitment posters offer the opportunity to kill people as an inducement for signing up.

The high rates of surrender and desertion of Italian soldiers in World War II did not reflect any innate cowardice among Italians but a lack of fascist conviction. The very rarity of surrender by Japanese soldiers in the same war was not a testimony to greater bravery on the part of the Japanese but of the inculcated combination of nationalism and religion.

As the American people turned against the Vietnam war, increased desertions and the killing of officers by the soldiers reflected their rejection of the war.

The terrifying assaults of the Vikings during the middle ages bear no resemblance to the mellow Scandinavian culture of today, too short a time for natural selection to transform national character.

The attempt to make war an inherited trait favored by natural selection reflects the sexism that has been endemic in sociobiology. It assumes that local groups differed in their propensity for aggression and prowess in war. The victorious men carry off the women of the conquered settlements and incorporate them into their own communities. Therefore the new generation has been selected for greater military success among the men. But the women, coming from a defeated, weaker group, would bring with them their genes for lack of prowess, a selection for military weakness! Such a selection process would be self-negating.

Ethnocentrism

Wilson also considers ethnocentrism to be an inherited trait: group selection leads people to favor members of their own group and reject outsiders.

The problem is that the lines between groups vary under different circumstances. For example, in Spanish America, laws governing marriage included a large number of graded racial categories, while in North America there were usually just two. What’s more, the category definitions are far from permanent: at one time, the Irish were regarded as Black, and the whiteness of Jews was questioned.

Adoption, immigration, mergers of clans also confound any possible genetic basis for exclusion.

Hierarchy

Wilson draws on the work of Herbert Simon to argue that hierarchy is a result of human nature: there will always be rulers and ruled. His argument fails to distinguish between hierarchy and leadership.

There are other forms of organization possible besides hierarchy and chaos, including democratic control by the workers who elect the operational leadership. In some labor unions, leaders’ salaries are pegged to the median wage of the members. In University departments the chairmanship is often a rotating task that nobody really wants. When Argentine factory owners closed their plants during the recession, workers in fact seized control and ran them profitably despite police sieges.

Darwinian behavior?

Wilson argues that “social traits” evolved through Darwinian natural selection. Genes that promoted behaviors that helped the individual or group to survive were passed on; genes that weakened the individual or group were not. The tension between individual and group selection decided which traits would be part of our human nature.

But a plausible claim that a trait might be good for people is not enough to explain its origin and survival. A gene may become fixed in a population even if it is harmful, just by the random genetic changes that we know occur. Or a gene may be harmful but be dragged along by an advantageous gene close to it on the same chromosome.
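
A standard population-genetics result makes the point about drift quantitative (this is textbook material, Kimura’s diffusion approximation, not something stated in the review): a new mutant allele with selection coefficient s in a diploid population of effective size N_e, starting from a single copy, fixes with probability roughly

    \[
      u(s) \;\approx\; \frac{1 - e^{-2s}}{1 - e^{-4 N_e s}} ,
    \]

which tends to 1/(2N_e) as s approaches zero, so even a mildly harmful allele (s slightly negative) can drift to fixation in a finite population.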

Selection may act in different directions in different subpopulations, or in different habitats, or in differing environments. Or the adaptive value of a gene may change with its prevalence or the distribution of ages in the population, itself a consequence of the environment and population heterogeneity.

For instance, Afro-Americans have a higher death rate from cancer than Euro-Americans. In part this reflects the carcinogenic environments they have been subjected to, but there is also a genetic factor. It is the combination of living conditions and genetics that causes higher mortality rates.

* * *

Obviously I am not arguing that evolution doesn’t happen. The point is that we need a much better argument than just a claim that some genotype might be beneficial. And we need a much more rigorous understanding of the differences and linkages between the biological and social components of humanity’s nature. Just calling some social behavior a “trait” does not make it heritable.

In a book that attempts such a wide-ranging panorama of human evolution, there are bound to be errors. But the errors in The Social Conquest of Earth form a pattern: they reduce social issues to biology, and they insist on our evolutionary continuity with other animals while ignoring the radical discontinuity that made us productivores and divided us into classes.

Impact of human activity on local climate mapped (Science Daily)

Date: January 20, 2016

Source: Concordia University

Summary: A new study pinpoints the temperature increases caused by carbon dioxide emissions in different regions around the world.


This is a map of climate change. Credit: Nature Climate Change

Earth’s temperature has increased by 1°C over the past century, and most of this warming has been caused by carbon dioxide emissions. But what does that mean locally?

A new study published in Nature Climate Change pinpoints the temperature increases caused by CO2 emissions in different regions around the world.

Using simulation results from 12 global climate models, Damon Matthews, a professor in Concordia’s Department of Geography, Planning and Environment, along with post-doctoral researcher Martin Leduc, produced a map that shows how the climate changes in response to cumulative carbon emissions around the world.

They found that temperature increases in most parts of the world respond linearly to cumulative emissions.

“This provides a simple and powerful link between total global emissions of carbon dioxide and local climate warming,” says Matthews. “This approach can be used to show how much human emissions are to blame for local changes.”

Leduc and Matthews, along with co-author Ramon de Elia from Ouranos, a Montreal-based consortium on regional climatology, analyzed the results of simulations in which CO2 emissions caused the concentration of CO2 in the atmosphere to increase by 1 per cent each year until it reached four times the levels recorded prior to the Industrial Revolution.

Globally, the researchers saw an average temperature increase of 1.7 ±0.4°C per trillion tonnes of carbon in CO2 emissions (TtC), which is consistent with reports from the Intergovernmental Panel on Climate Change.

But the scientists went beyond these globally averaged temperature rises, to calculate climate change at a local scale.

At a glance, here are the average increases per trillion tonnes of carbon that we emit, separated geographically:

  • Western North America 2.4 ± 0.6°C
  • Central North America 2.3 ± 0.4°C
  • Eastern North America 2.4 ± 0.5°C
  • Alaska 3.6 ± 1.4°C
  • Greenland and Northern Canada 3.1 ± 0.9°C
  • North Asia 3.1 ± 0.9°C
  • Southeast Asia 1.5 ± 0.3°C
  • Central America 1.8 ± 0.4°C
  • Eastern Africa 1.9 ± 0.4°C

“As these numbers show, equatorial regions warm the slowest, while the Arctic warms the fastest. Of course, this is what we’ve already seen happen — rapid changes in the Arctic are outpacing the rest of the planet,” says Matthews.

There are also marked differences between land and ocean, with the temperature increase for the oceans averaging 1.4 ± 0.3°C per TtC, compared to 2.2 ± 0.5°C per TtC for land areas.

“To date, humans have emitted almost 600 billion tonnes of carbon,” says Matthews. “This means that land areas on average have already warmed by 1.3°C because of these emissions. At current emission rates, we will have emitted enough CO2 to warm land areas by 2°C within 3 decades.”
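
The arithmetic behind those last figures can be checked directly from the numbers quoted above; the sketch below does so, with the current emission rate (roughly 10 billion tonnes of carbon per year) supplied as an outside assumption rather than a figure from the study.

    # Back-of-envelope check of the land-warming figures quoted above.
    TCRE_LAND = 2.2        # deg C of land warming per trillion tonnes of carbon (TtC)
    emitted_so_far = 0.6   # ~600 billion tonnes of carbon emitted to date, in TtC

    warming_land = TCRE_LAND * emitted_so_far
    print(f"Land warming so far: {warming_land:.1f} °C")        # ~1.3 °C, as quoted

    ASSUMED_RATE = 0.010   # TtC emitted per year (assumption, not from the article)
    remaining_ttc = (2.0 - warming_land) / TCRE_LAND            # TtC left before 2 °C
    print(f"Years until 2 °C over land: {remaining_ttc / ASSUMED_RATE:.0f}")  # ~3 decades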


Journal Reference:

  1. Martin Leduc, H. Damon Matthews, Ramón de Elía. Regional estimates of the transient climate response to cumulative CO2 emissions. Nature Climate Change, 2016; DOI: 10.1038/nclimate2913

The world’s greatest literature reveals multi fractals and cascades of consciousness (Science Daily)

Date: January 21, 2016

Source: The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences

Summary: James Joyce, Julio Cortazar, Marcel Proust, Henryk Sienkiewicz and Umberto Eco. Regardless of the language they were working in, some of the world’s greatest writers appear to be, in some respects, constructing fractals. Statistical analysis, however, revealed something even more intriguing. The composition of works from within a particular genre was characterized by the exceptional dynamics of a cascading (avalanche) narrative structure.


Sequences of sentence lengths (as measured by number of words) in four literary works representative of various degrees of cascading character. Credit: IFJ PAN

James Joyce, Julio Cortazar, Marcel Proust, Henryk Sienkiewicz and Umberto Eco. Regardless of the language they were working in, some of the world’s greatest writers appear to be, in some respects, constructing fractals. Statistical analysis carried out at the Institute of Nuclear Physics of the Polish Academy of Sciences, however, revealed something even more intriguing. The composition of works from within a particular genre was characterized by the exceptional dynamics of a cascading (avalanche) narrative structure. This type of narrative turns out to be multifractal. That is, fractals of fractals are created.

As far as many bookworms are concerned, advanced equations and graphs are the last things which would hold their interest, but there’s no escape from the math. Physicists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, Poland, performed a detailed statistical analysis of more than one hundred famous works of world literature, written in several languages and representing various literary genres. The books, examined for correlations in the variation of sentence length, proved to be governed by the dynamics of a cascade. This means that the construction of these books is in fact a fractal. In the case of several works their mathematical complexity proved to be exceptional, comparable to the structure of complex mathematical objects considered to be multifractal. Interestingly, in the analyzed pool of all the works, one genre turned out to be exceptionally multifractal in nature.

Fractals are self-similar mathematical objects: when we expand one fragment or another, what eventually emerges is a structure that resembles the original object. Typical fractals, especially the widely known Sierpinski triangle and Mandelbrot set, are monofractals, meaning that the scaling is the same everywhere and linear: if one region must be rescaled x times to reveal a structure similar to the original, the same rescaling in any other region will also reveal a similar structure.

Multifractals are more highly advanced mathematical structures: fractals of fractals. They arise from fractals ‘interwoven’ with each other in an appropriate manner and in appropriate proportions. Multifractals are not simply the sum of fractals and cannot be divided to return back to their original components, because the way they weave is fractal in nature. The result is that in order to see a structure similar to the original, different portions of a multifractal need to expand at different rates. A multifractal is therefore non-linear in nature.
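
In the multifractal formalism typically used for analyses of this kind (the release itself gives no formulas, so this is background rather than a statement of the paper’s exact method), fluctuation functions of the series obey power laws

    \[
      F_q(s) \;\propto\; s^{\,h(q)} ,
    \]

where s is the scale, q the order of the fluctuation function and h(q) the generalized Hurst exponent. A monofractal has h(q) constant in q; a multifractal has h(q) varying with q, equivalently a singularity spectrum f(α) of non-zero width.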

“Analyses on multiple scales, carried out using fractals, allow us to neatly grasp information on correlations among data at various levels of complexity of tested systems. As a result, they point to the hierarchical organization of phenomena and structures found in nature. So we can expect natural language, which represents a major evolutionary leap of the natural world, to show such correlations as well. Their existence in literary works, however, had not yet been convincingly documented. Meanwhile, it turned out that when you look at these works from the proper perspective, these correlations appear to be not only common, but in some works they take on a particularly sophisticated mathematical complexity,” says Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).

The study involved 113 literary works written in English, French, German, Italian, Polish, Russian and Spanish by such famous figures as Honore de Balzac, Arthur Conan Doyle, Julio Cortazar, Charles Dickens, Fyodor Dostoevsky, Alexandre Dumas, Umberto Eco, George Eliot, Victor Hugo, James Joyce, Thomas Mann, Marcel Proust, Wladyslaw Reymont, William Shakespeare, Henryk Sienkiewicz, JRR Tolkien, Leo Tolstoy and Virginia Woolf, among others. The selected works were no less than 5,000 sentences long, in order to ensure statistical reliability.

To convert the texts to numerical sequences, sentence length was measured by the number of words (an alternative method of counting characters in the sentence turned out to have no major impact on the conclusions). Dependences were then sought in the data, beginning with the simplest, i.e. linear, kind. The question posed was: if a sentence of a given length is x times longer than sentences of other lengths, is the same proportion preserved when looking at correspondingly longer or shorter sentences?
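
For readers who want to see the kind of quantity involved, here is a minimal sketch: it turns a text into a sentence-length series and estimates a single scaling exponent with ordinary (q = 2) detrended fluctuation analysis. The published study used the multifractal generalization of this procedure; the splitting rule, file name and scale choices below are illustrative assumptions only.

    import re
    import numpy as np

    def sentence_lengths(text):
        # Crude sentence splitter: count words between ., ! and ? marks.
        return [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]

    def dfa_exponent(series, scales=(4, 8, 16, 32, 64, 128)):
        # Ordinary (q = 2) detrended fluctuation analysis of a 1-D series.
        x = np.cumsum(np.asarray(series, dtype=float) - np.mean(series))   # profile
        used, flucts = [], []
        for s in scales:
            n_seg = len(x) // s
            if n_seg < 4:                 # need enough windows for a stable average
                continue
            rms = []
            for i in range(n_seg):
                seg = x[i * s:(i + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            used.append(s)
            flucts.append(np.mean(rms))
        # Slope of log F(s) vs log s: ~0.5 for shuffled lengths, larger for
        # long-range correlated ("fractal") sentence organization.
        return np.polyfit(np.log(used), np.log(flucts), 1)[0]

    # Example with a hypothetical file:
    # print(dfa_exponent(sentence_lengths(open("book.txt").read())))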

“All of the examined works showed self-similarity in terms of the organization of sentence lengths. In some it was more pronounced — here The Ambassadors by Henry James stood out — while in others it was far weaker, as in the case of the French seventeenth-century romance Artamene ou le Grand Cyrus. In every case, however, correlations were evident, and therefore these texts had the construction of a fractal,” comments Dr. Pawel Oswiecimka (IFJ PAN), who also noted that the fractality of a literary text will in practice never be as perfect as in the world of mathematics. Mathematical fractals can be magnified to infinity, while the number of sentences in each book is finite, and at a certain stage of scaling there will always be a cut-off in the form of the end of the dataset.

Things took a particularly interesting turn when physicists from the IFJ PAN began tracking non-linear dependence, which in most of the studied works was present to a slight or moderate degree. However, more than a dozen works revealed a very clear multifractal structure, and almost all of these proved to be representative of one genre, that of stream of consciousness. The only exception was the Bible, specifically the Old Testament, which has so far never been associated with this literary genre.

“The absolute record in terms of multifractality turned out to be Finnegans Wake by James Joyce. The results of our analysis of this text are virtually indistinguishable from ideal, purely mathematical multifractals,” says Prof. Drozdz.

The most multifractal works also included A Heartbreaking Work of Staggering Genius by Dave Eggers, Rayuela by Julio Cortazar, the U.S.A. trilogy by John Dos Passos, The Waves by Virginia Woolf, 2666 by Roberto Bolano, and Joyce’s Ulysses. At the same time, many works usually regarded as stream of consciousness turned out to show little multifractality; it was hardly noticeable in books such as Atlas Shrugged by Ayn Rand and A la recherche du temps perdu by Marcel Proust.

“It is not entirely clear whether stream of consciousness writing actually reveals the deeper qualities of our consciousness, or rather the imagination of the writers. It is hardly surprising that ascribing a work to a particular genre is, for whatever reason, sometimes subjective. We see, moreover, the possibility of an interesting application of our methodology: it may someday help in a more objective assignment of books to one genre or another,” notes Prof. Drozdz.

Multifractal analyses of literary texts carried out by the IFJ PAN have been published in Information Sciences, a journal of computer science. The publication has undergone rigorous verification: given the interdisciplinary nature of the subject, editors immediately appointed up to six reviewers.


Journal Reference:

  1. Stanisław Drożdż, Paweł Oświęcimka, Andrzej Kulig, Jarosław Kwapień, Katarzyna Bazarnik, Iwona Grabska-Gradzińska, Jan Rybicki, Marek Stanuszek. Quantifying origin and character of long-range correlations in narrative texts. Information Sciences, 2016; 331: 32 DOI: 10.1016/j.ins.2015.10.023

Quantum algorithm proves more effective than any classical analogue (Revista Fapesp)

December 11, 2015

José Tadeu Arantes | Agência FAPESP – The quantum computer may cease to be a dream and become reality within the next 10 years. The expectation is that it will bring a drastic reduction in processing time, since quantum algorithms offer more efficient solutions for certain computational tasks than any corresponding classical algorithms.

Until now, it was believed that the key to quantum computing lay in correlations between two or more systems. An example of quantum correlation is the process of “entanglement,” which occurs when pairs or groups of particles are generated or interact in such a way that the quantum state of each particle cannot be described independently, since it depends on the ensemble as a whole (for more information see agencia.fapesp.br/20553/).

A recent study has shown, however, that even an isolated quantum system, that is, one with no correlations with other systems, is enough to implement a quantum algorithm faster than its classical analogue. A paper describing the study, Computational speed-up with a single qudit, was published in early October of this year in Scientific Reports, a Nature group journal.

The work, at once theoretical and experimental, grew out of an idea put forward by the physicist Mehmet Zafer Gedik of Sabanci Üniversitesi in Istanbul, Turkey, and was carried out through a collaboration between Turkish and Brazilian researchers. Felipe Fernandes Fanchini, of the Faculdade de Ciências of the Universidade Estadual Paulista (Unesp) at the Bauru campus, is one of the authors of the paper. His participation in the study took place within the scope of the project Controle quântico em sistemas dissipativos (Quantum control in dissipative systems), supported by FAPESP.

“This work makes an important contribution to the debate over which resource is responsible for the superior processing power of quantum computers,” Fanchini told Agência FAPESP.

“Starting from Gedik’s idea, we carried out an experiment in Brazil using the nuclear magnetic resonance (NMR) facility of the Universidade de São Paulo (USP) in São Carlos. It was a collaboration among researchers from three universities: Sabanci, Unesp and USP. We demonstrated that a quantum circuit endowed with a single physical system, with three or more energy levels, can determine the parity of a numerical permutation by evaluating the function only once. That is unthinkable in a classical protocol.”

According to Fanchini, what Gedik proposed was a very simple quantum algorithm that, basically, determines the parity of a sequence. The concept of parity is used to indicate whether a sequence is in a given order or not. For example, if we take the digits 1, 2 and 3 and establish that the sequence 1-2-3 is in order, then the sequences 2-3-1 and 3-1-2, which result from cyclic permutations of the digits, are in the same order.

This is easy to see if we imagine the digits arranged around a circle. Given the first sequence, one rotation in one direction yields the next sequence, and one more rotation yields the third. The sequences 1-3-2, 3-2-1 and 2-1-3, however, can only be produced by acyclic permutations. So if we agree to call the first three sequences “even,” the other three will be “odd.”

“In classical terms, observing a single digit, that is, making a single measurement, does not allow one to say whether the sequence is even or odd. At least two observations are needed. What Gedik showed was that, in quantum terms, a single measurement is enough to determine the parity. That is why the quantum algorithm is faster than any classical equivalent. And this algorithm can be realized with a single particle, which means that its efficiency does not depend on any kind of quantum correlation,” Fanchini said.
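
A purely classical sketch makes the counting argument concrete (this illustrates the parity notion only, not the quantum circuit used in the experiment):

    # The "even" sequences are the cyclic rotations of (1, 2, 3); the rest are "odd".
    def parity(seq):
        return "even" if tuple(seq) in {(1, 2, 3), (2, 3, 1), (3, 1, 2)} else "odd"

    for s in [(1, 2, 3), (2, 3, 1), (3, 1, 2), (1, 3, 2), (3, 2, 1), (2, 1, 3)]:
        print(s, parity(s))

    # Knowing a single entry is never enough: if seq[0] == 1, both the even (1, 2, 3)
    # and the odd (1, 3, 2) remain possible, so a classical observer needs at least
    # two queries, whereas the single-qudit algorithm needs only one evaluation.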

The algorithm in question does not say what the sequence is, only whether it is even or odd. This is possible only when there are three or more levels, because with just two levels, something like 1-2 or 2-1, an even or odd sequence cannot be defined. “In recent years the quantum computing community has been exploring a key concept of quantum theory, the concept of ‘contextuality.’ Since ‘contextuality’ also only comes into play with three or more levels, we suspect it may be behind the effectiveness of our algorithm,” the researcher added.

The concept of contextuality

“The concept of ‘contextuality’ is best understood by comparing the ideas of measurement in classical and quantum physics. In classical physics, measurement is assumed to do nothing more than reveal characteristics the system being measured already possessed, for example a certain length or a certain mass. In quantum physics, by contrast, the result of a measurement depends not only on the characteristic being measured, but also on how the measurement was set up and on all the previous measurements. In other words, the result depends on the context of the experiment, and ‘contextuality’ is the quantity that describes this context,” Fanchini explained.

In the history of physics, “contextuality” was recognized as a necessary feature of quantum theory through the famous Bell’s theorem. According to that theorem, published in 1964 by the Irish physicist John Stewart Bell (1928-1990), no physical theory based on local variables can reproduce all the predictions of quantum mechanics. In other words, physical phenomena cannot be described in strictly local terms, since they express the whole.

“It is important to stress that another article [Contextuality supplies the ‘magic’ for quantum computation], published in Nature in June 2014, points to contextuality as the possible source of the power of quantum computation. Our study goes in the same direction, presenting a concrete algorithm that is more efficient than anything imaginable along classical lines.”

Preventing famine with mobile phones (Science Daily)

Date: November 19, 2015

Source: Vienna University of Technology, TU Vienna

Summary: With a mobile data collection app and satellite data, scientists will be able to predict whether a certain region is vulnerable to food shortages and malnutrition, say experts. By scanning Earth’s surface with microwave beams, researchers can measure the water content in soil. Comparing these measurements with extensive data sets obtained over the last few decades, it is possible to calculate whether the soil is sufficiently moist or whether there is danger of droughts. The method has now been tested in the Central African Republic.


Does drought lead to famine? A mobile app helps to collect information. Credit: Image courtesy of Vienna University of Technology, TU Vienna

With a mobile data collection app and satellite data, scientists will be able to predict whether a certain region is vulnerable to food shortages and malnutrition. The method has now been tested in the Central African Republic.

There are different possible causes for famine and malnutrition — not all of which are easy to foresee. Drought and crop failure can often be predicted by monitoring the weather and measuring soil moisture. But other risk factors, such as socio-economic problems or violent conflicts, can endanger food security too. For organizations such as Doctors without Borders / Médecins Sans Frontières (MSF), it is crucial to obtain information about vulnerable regions as soon as possible, so that they have a chance to provide help before it is too late.

Scientists from TU Wien in Vienna, Austria and the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria have now developed a way to monitor food security using a smartphone app, which combines weather and soil moisture data from satellites with crowd-sourced data on the vulnerability of the population, e.g. malnutrition and other relevant socioeconomic data. Tests in the Central African Republic have yielded promising results, which have now been published in the journal PLOS ONE.

Step One: Satellite Data

“For years, we have been working on methods of measuring soil moisture using satellite data,” says Markus Enenkel (TU Wien). By scanning Earth’s surface with microwave beams, researchers can measure the water content in soil. Comparing these measurements with extensive data sets obtained over the last few decades, it is possible to calculate whether the soil is sufficiently moist or whether there is danger of droughts. “This method works well and it provides us with very important information, but information about soil moisture deficits is not enough to estimate the danger of malnutrition,” says IIASA researcher Linda See. “We also need information about other factors that can affect the local food supply.” For example, political unrest may prevent people from farming, even if weather conditions are fine. Such problems can of course not be monitored from satellites, so the researchers had to find a way of collecting data directly in the most vulnerable regions.
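
Stripped to its essentials, the satellite side of the system compares the current soil moisture retrieval for a location with the long-term record for the same place and time of year. A minimal sketch of such an anomaly calculation follows; the numbers are invented, and the actual TU Wien processing chain is of course far more elaborate.

    import numpy as np

    # Hypothetical soil moisture retrievals (m3/m3) for one grid cell, same calendar
    # month over past decades, plus the current retrieval.
    historical = np.array([0.24, 0.27, 0.22, 0.30, 0.25, 0.28, 0.23, 0.26])
    current = 0.17

    # Standardized anomaly: how many standard deviations below the local norm?
    z = (current - historical.mean()) / historical.std(ddof=1)
    print(f"soil moisture anomaly: {z:.1f} sigma")   # strongly negative -> drought signal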

“Today, smartphones are available even in developing countries, and so we decided to develop an app, which we called SATIDA COLLECT, to help us collect the necessary data,” says IIASA-based app developer Mathias Karner. For a first test, the researchers chose the Central African Republic, one of the world’s most vulnerable countries, suffering from chronic poverty, violent conflicts, and weak disaster resilience. Local MSF staff was trained for a day and collected data, conducting hundreds of interviews.

“How often do people eat? What are the current rates of malnutrition? Have any family members left the region recently, has anybody died? — We use the answers to these questions to statistically determine whether the region is in danger,” says Candela Lanusse, nutrition advisor from Doctors without Borders. “Sometimes all that people have left to eat is unripe fruit or the seeds they had stored for next year. Sometimes they have to sell their cattle, which may increase the chance of nutritional problems. This kind of behavior may indicate future problems, months before a large-scale crisis breaks out.”

A Map of Malnutrition Danger

The digital questionnaire of SATIDA COLLECT can be adapted to local eating habits, as the answers and the GPS coordinates of every assessment are stored locally on the phone. When an internet connection is available, the collected data are uploaded to a server and can be analyzed along with satellite-derived information about drought risk. In the end a map could be created, highlighting areas where the danger of malnutrition is high. For Doctors without Borders, such maps are extremely valuable. They help to plan future activities and provide help as soon as it is needed.
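
In the simplest possible terms, the final mapping step could look like the sketch below: uploaded assessments are averaged per area and blended with the satellite drought signal into a single risk score. The field names, rescaling and equal weighting are invented for illustration and are not SATIDA COLLECT's actual scoring scheme.

    from collections import defaultdict

    # Hypothetical uploads: (area, malnutrition indicator 0-1), plus a per-area
    # satellite drought anomaly in sigma units (negative = drier than normal).
    assessments = [("A", 0.6), ("A", 0.7), ("B", 0.2), ("B", 0.3)]
    drought_anomaly = {"A": -1.8, "B": -0.2}

    by_area = defaultdict(list)
    for area, score in assessments:
        by_area[area].append(score)

    for area, scores in sorted(by_area.items()):
        survey = sum(scores) / len(scores)                           # mean survey score
        drought = min(max(-drought_anomaly[area], 0.0) / 2.0, 1.0)   # rescaled to 0-1
        risk = 0.5 * survey + 0.5 * drought                          # invented weighting
        print(area, round(risk, 2))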

“Testing this tool in the Central African Republic was not easy,” says Markus Enenkel. “The political situation there is complicated. However, even under these circumstances we could show that our technology works. We were able to gather valuable information.” SATIDA COLLECT has the potential to become a powerful early warning tool. It may not be able to prevent crises, but it will at least help NGOs to mitigate their impacts via early intervention.


Story Source:

The above post is reprinted from materials provided by Vienna University of Technology, TU Vienna. Note: Materials may be edited for content and length.


Journal Reference:

  1. Markus Enenkel, Linda See, Mathias Karner, Mònica Álvarez, Edith Rogenhofer, Carme Baraldès-Vallverdú, Candela Lanusse, Núria Salse. Food Security Monitoring via Mobile Data Collection and Remote Sensing: Results from the Central African Republic. PLOS ONE, 2015; 10 (11): e0142030 DOI: 10.1371/journal.pone.0142030

Doubts over El Niño delay disaster preparedness (SciDev.Net)

Image credit: Patrick Brown/Panos

27/10/15

Martín De Ambrosio

At a glance

  • The phenomenon’s effects are still unclear across the continent
  • There is no certainty, but sitting idle is not an option, according to the Pan American Health Organization
  • There is 95 per cent scientific consensus on the chances of a strong El Niño

Disagreements among scientists over whether or not Central and South America will suffer a strong El Niño event are causing some delay in preparations, warn the main organizations working on the region’s climate.

Some South American researchers still have doubts about how the event is unfolding this year. This uncertainty affects officials and governments, who should act as soon as possible to prevent the worst scenarios, including deaths from natural disasters, the meteorological organizations argue.

Eduardo Zambrano, a researcher at the Centro de Investigación Internacional sobre el Fenómeno de El Niño (CIIFEN) in Ecuador, one of the regional centres of the World Meteorological Organization, says the problem is that the phenomenon’s effects have not yet been clear and evident across the whole continent.

“Some satellite images show us a very warm Pacific Ocean, one of the hallmarks of El Niño.”

Willian Alva León, president of the Sociedad Meteorológica del Perú

“Even so, we can point to the extreme droughts in northeastern Brazil, Venezuela and the Caribbean region,” he says, also mentioning the unusually heavy rains in Chile’s Atacama Desert since March and the floods in parts of Argentina, Uruguay and Paraguay.

El Niño peaks when a mass of water that is warm by the usual standards of the eastern Pacific Ocean moves from north to south and reaches the coasts of Peru and Ecuador. This movement causes cascading effects and havoc throughout the Central and South American system, turning arid highlands rainy while droughts hit the lowlands and storms strike the Caribbean.

But El Niño remains hard to predict because of its widely varying impacts. According to Zambrano, scientists expected El Niño last year, “when all the alarms went off, and then nothing too extraordinary happened because of a change in the direction of the winds.”

After that error, many organizations opted for caution to avoid alarmism. “Some satellite images show us a very warm Pacific Ocean, one of the hallmarks of El Niño,” says Willian Alva León, president of the Sociedad Meteorológica del Perú. But, he adds, this warm water is not moving southeast towards the Peruvian coast, as it would in an El Niño event.

Alva León believes the worst effects have already happened this year, which means the phenomenon is in retreat. “El Niño has an energy limit and I think it has already been reached this year,” he says.

This disagreement among climate research institutions worries policymakers, who need clear guidance to begin the necessary preparations. Ciro Ugarte, regional advisor for Emergency Preparedness and Disaster Relief at the Pan American Health Organization, says it is imperative to act as if El Niño were indeed under way, to ensure the continent can face the possible consequences.

“Being prepared is important because it reduces the impact of the phenomenon as well as of other diseases that are now epidemic,” he says.

To establish the probability of El Niño, some scientists use models that abstract data from reality and generate predictions. María Teresa Martínez, deputy director of meteorology at Colombia’s Instituto de Hidrología, Meteorología y Estudios Ambientales, notes that in March the most reliable models predicted a 50 to 60 per cent chance of an El Niño event. “El Niño is now developing strongly from its formation stage towards its mature stage, which will be reached in December,” she says.

Ugarte admits there are no certainties, but says that for his organization “doing nothing is not an option.”

“As makers of prevention policy, what we have to do is rely on the consensus among scientists, and today that consensus says there is a 95% chance of having a strong or very strong El Niño event,” he says.

Warming could triple drought in the Amazon (Observatório do Clima)

15/10/2015

Drought in Silves, Amazonas state, in 2005. Photo: Ana Cintia Gazzelli/WWF

Computer models suggest that the eastern Amazon, which holds most of the forest, would see more droughts, fires and tree death, while the west would become rainier.

Climate change could increase the frequency of both droughts and extreme rains in the Amazon before mid-century, compounding deforestation to cause massive tree die-offs, fires and carbon emissions. That is the conclusion of an assessment of 35 climate models applied to the region, carried out by researchers from the US and Brazil.

According to the study, led by Philip Duffy of the WHRC (Woods Hole Research Center, in the US) and Stanford University, the area affected by extreme droughts in the eastern Amazon, the region that encompasses most of the Amazon, could triple by 2100. Paradoxically, the frequency of extremely rainy periods and the area subject to extreme rainfall tend to grow across the whole region after 2040, even in places where mean annual precipitation decreases.

The western Amazon, especially Peru and Colombia, is expected to see an increase in mean annual precipitation.

A change in rainfall patterns is a long-theorized effect of global warming. With more energy in the atmosphere and more water vapor, the result of greater evaporation from the oceans, the tendency is for climate extremes to be amplified. The rainy seasons (in the Amazon, the southern-hemisphere summer, which locals call “winter”) become shorter, but the rain falls with greater intensity.

How the forest responds to these changes, however, has been a matter of controversy among scientists. Studies from the 1990s proposed that the Amazon’s reaction would be a widespread “savannization,” a die-off of large trees and the transformation of vast portions of the forest into an impoverished savanna.

Other studies, however, indicated that the heat and the extra CO2 would have the opposite effect, making trees grow more and fix more carbon, offsetting any losses from drought. On average, therefore, the impact of global warming on the Amazon would be relatively small.

As it happens, the Amazon itself has given scientists hints of how it would react. In 2005, 2007 and 2010, the forest went through historic droughts. The result was widespread tree mortality and fires in primary forests across more than 85,000 square kilometers. Duffy’s group, which also includes Paulo Brando of Ipam (Instituto de Pesquisa Ambiental da Amazônia), points out that 1% to 2% of the Amazon’s carbon was released into the atmosphere as a result of the droughts of the 2000s. Brando and colleagues at Ipam had also already shown that the Amazon is becoming more flammable, probably because of the combined effects of climate and deforestation.

The researchers simulated the region’s future climate using models from the so-called CMIP5 project, used by the IPCC (Intergovernmental Panel on Climate Change) in its latest global climate assessment report. One of the group’s members, Chris Field of Stanford, was one of the report’s coordinators; he was also a candidate for the IPCC presidency in the election held last week, losing to South Korea’s Hoesung Lee.

The computer models were run under the worst-case emissions scenario, the so-called RCP 8.5, in which it is assumed that little will be done to control greenhouse gas emissions.

Not only did the models capture well the influence of Atlantic and Pacific ocean temperatures on the Amazon’s rainfall pattern (differences between the two oceans explain why the eastern Amazon will become drier and the west wetter), but their simulations of future drought also reproduced a feature of the record droughts of 2005 and 2010: the far north of the Amazon saw a large increase in rainfall while the center and south baked.

According to the researchers, the study may even be conservative, since it only took precipitation changes into account. “For example, rainfall in the eastern Amazon depends strongly on evapotranspiration, so a reduction in tree cover could reduce precipitation,” wrote Duffy and Brando. “This suggests that if processes related to land-use change were better represented in the CMIP5 models, drought intensity could be greater than projected here.”

The study was published in PNAS, the journal of the US National Academy of Sciences. (Observatório do Clima/ #Envolverde)

* Originally published on the Observatório do Clima website.

‘Targeted punishments’ against countries could tackle climate change (Science Daily)

Date: August 25, 2015

Source: University of Warwick

Summary: Targeted punishments could provide a path to international climate change cooperation, new research in game theory has found.

This is a diagram of two possible strategies of targeted punishment studied in the paper. Credit: Royal Society Open Science

Targeted punishments could provide a path to international climate change cooperation, new research in game theory has found.

Conducted at the University of Warwick, the research suggests that in situations such as climate change, where everyone would be better off if everyone cooperated but it may not be individually advantageous to do so, the use of a strategy called ‘targeted punishment’ could help shift society towards global cooperation.

Despite the name, the ‘targeted punishment’ mechanism can apply to positive or negative incentives. The research argues that the key factor is that these incentives are not necessarily applied to everyone who may seem to deserve them. Rather, rules should be devised according to which only a small number of players are considered responsible at any one time.

The study’s author Dr Samuel Johnson, from the University of Warwick’s Mathematics Institute, explains: “It is well known that some form of punishment, or positive incentives, can help maintain cooperation in situations where almost everyone is already cooperating, such as in a country with very little crime. But when there are only a few people cooperating and many more not doing so punishment can be too dilute to have any effect. In this regard, the international community is a bit like a failed state.”

The paper, published in Royal Society Open Science, shows that in situations of entrenched defection (non-cooperation), there exist strategies of ‘targeted punishment’ available to would-be punishers which can allow them to move a community towards global cooperation.

“The idea,” said Dr Johnson, “is not to punish everyone who is defecting, but rather to devise a rule whereby only a small number of defectors are considered at fault at any one time. For example, if you want to get a group of people to cooperate on something, you might arrange them on an imaginary line and declare that a person is liable to be punished if and only if the person to their left is cooperating while they are not. This way, those people considered at fault will find themselves under a lot more pressure than if responsibility were distributed, and cooperation can build up gradually as each person decides to fall in line when the spotlight reaches them.”
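
A toy simulation shows why the spotlight moves down the line. The decision rule and payoff numbers below are a deliberate simplification for illustration, not the model analysed in the Royal Society Open Science paper: each defector cooperates as soon as the punishment it is liable for exceeds its cost of cooperating.

    # Toy version of the "imaginary line" rule: player i is liable to punishment
    # only if player i-1 cooperates while player i defects.
    N_PLAYERS, COST, PUNISHMENT = 10, 1.0, 2.0    # illustrative values only

    coop = [False] * N_PLAYERS
    coop[0] = True                                # a single initial cooperator
    print("start  : " + "".join("C" if c else "D" for c in coop))

    for rnd in range(1, N_PLAYERS):
        at_fault = [i > 0 and coop[i - 1] and not coop[i] for i in range(N_PLAYERS)]
        coop = [coop[i] or (at_fault[i] and PUNISHMENT > COST) for i in range(N_PLAYERS)]
        print(f"round {rnd}: " + "".join("C" if c else "D" for c in coop))
        if all(coop):
            break

Run as written, cooperation advances one position per round until the whole line cooperates, which is the gradual build-up Johnson describes.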

For the case of climate change, the paper suggests that countries should be divided into groups, and these groups placed in some order — ideally, according roughly to their natural tendencies to cooperate. Governments would make commitments (to reduce emissions or leave fossil fuels in the ground, for instance) conditional on the performance of the group before them. This way, any combination of sanctions and positive incentives that other countries might be willing to impose would have a much greater effect.

“In the mathematical model,” said Dr Johnson, “the mechanism works best if the players are somewhat irrational. It seems a reasonable assumption that this might apply to the international community.”


Journal Reference:

  1. Samuel Johnson. Escaping the Tragedy of the Commons through Targeted Punishment. Royal Society Open Science, 2015 [link]

The Point of No Return: Climate Change Nightmares Are Already Here (Rolling Stone)

The worst predicted impacts of climate change are starting to happen — and much faster than climate scientists expected

BY  August 5, 2015

Walruses

Walruses, like these in Alaska, are being forced ashore in record numbers. Corey Accardo/NOAA/AP 

Historians may look to 2015 as the year when shit really started hitting the fan. Some snapshots: In just the past few months, record-setting heat waves in Pakistan and India each killed more than 1,000 people. In Washington state’s Olympic National Park, the rainforest caught fire for the first time in living memory. London reached 98 degrees Fahrenheit during the hottest July day ever recorded in the U.K.; The Guardian briefly had to pause its live blog of the heat wave because its computer servers overheated. In California, suffering from its worst drought in a millennium, a 50-acre brush fire swelled seventyfold in a matter of hours, jumping across the I-15 freeway during rush-hour traffic. Then, a few days later, the region was pounded by intense, virtually unheard-of summer rains. Puerto Rico is under its strictest water rationing in history as a monster El Niño forms in the tropical Pacific Ocean, shifting weather patterns worldwide.

On July 20th, James Hansen, the former NASA climatologist who brought climate change to the public’s attention in the summer of 1988, issued a bombshell: He and a team of climate scientists had identified a newly important feedback mechanism off the coast of Antarctica that suggests mean sea levels could rise 10 times faster than previously predicted: 10 feet by 2065. The authors included this chilling warning: If emissions aren’t cut, “We conclude that multi-meter sea-level rise would become practically unavoidable. Social disruption and economic consequences of such large sea-level rise could be devastating. It is not difficult to imagine that conflicts arising from forced migrations and economic collapse might make the planet ungovernable, threatening the fabric of civilization.”

Eric Rignot, a climate scientist at NASA and the University of California-Irvine and a co-author on Hansen’s study, said their new research doesn’t necessarily change the worst-case scenario on sea-level rise, it just makes it much more pressing to think about and discuss, especially among world leaders. In particular, says Rignot, the new research shows a two-degree Celsius rise in global temperature — the previously agreed upon “safe” level of climate change — “would be a catastrophe for sea-level rise.”

Hansen’s new study also shows how complicated and unpredictable climate change can be. Even as global ocean temperatures rise to their highest levels in recorded history, some parts of the ocean, near where ice is melting exceptionally fast, are actually cooling, slowing ocean circulation currents and sending weather patterns into a frenzy. Sure enough, a persistently cold patch of ocean is starting to show up just south of Greenland, exactly where previous experimental predictions of a sudden surge of freshwater from melting ice expected it to be. Michael Mann, another prominent climate scientist, recently said of the unexpectedly sudden Atlantic slowdown, “This is yet another example of where observations suggest that climate model predictions may be too conservative when it comes to the pace at which certain aspects of climate change are proceeding.”

Since storm systems and jet streams in the United States and Europe partially draw their energy from the difference in ocean temperatures, the implication of one patch of ocean cooling while the rest of the ocean warms is profound. Storms will get stronger, and sea-level rise will accelerate. Scientists like Hansen only expect extreme weather to get worse in the years to come, though Mann said it was still “unclear” whether recent severe winters on the East Coast are connected to the phenomenon.

And yet, these aren’t even the most disturbing changes happening to the Earth’s biosphere that climate scientists are discovering this year. For that, you have to look not at the rising sea levels but to what is actually happening within the oceans themselves.

Water temperatures this year in the North Pacific have never been this high for this long over such a large area — and it is already having a profound effect on marine life.

Eighty-year-old Roger Thomas runs whale-watching trips out of San Francisco. On an excursion earlier this year, Thomas spotted 25 humpbacks and three blue whales. During a survey on July 4th, federal officials spotted 115 whales in a single hour near the Farallon Islands — enough to issue a boating warning. Humpbacks are occasionally seen offshore in California, but rarely so close to the coast or in such numbers. Why are they coming so close to shore? Exceptionally warm water has concentrated the krill and anchovies they feed on into a narrow band of relatively cool coastal water. The whales are having a heyday. “It’s unbelievable,” Thomas told a local paper. “Whales are all over the place.”

Last fall, in northern Alaska, in the same part of the Arctic where Shell is planning to drill for oil, federal scientists discovered 35,000 walruses congregating on a single beach. It was the largest-ever documented “haul out” of walruses, and a sign that sea ice, their favored habitat, is becoming harder and harder to find.

Marine life is moving north, adapting in real time to the warming ocean. Great white sharks have been sighted breeding near Monterey Bay, California, the farthest north that’s ever been known to occur. A blue marlin was caught last summer near Catalina Island — 1,000 miles north of its typical range. Across California, there have been sightings of non-native animals moving north, such as Mexican red crabs.

Salmon

Salmon on the brink of dying out. Michael Quinton/Newscom

No species may be as uniquely endangered as the one most associated with the Pacific Northwest, the salmon. Every two weeks, Bill Peterson, an oceanographer and senior scientist at the National Oceanic and Atmospheric Administration’s Northwest Fisheries Science Center in Oregon, takes to the sea to collect data he uses to forecast the return of salmon. What he’s been seeing this year is deeply troubling.

Salmon are crucial to their coastal ecosystem like perhaps few other species on the planet. A significant portion of the nitrogen in West Coast forests has been traced back to salmon, which can travel hundreds of miles upstream to lay their eggs. The largest trees on Earth simply wouldn’t exist without salmon.

But their situation is precarious. This year, officials in California are bringing salmon downstream in convoys of trucks, because river levels are too low and the temperatures too warm for them to have a reasonable chance of surviving. One species, the winter-run Chinook salmon, is at particular risk of decline in the next few years should the warm water persist offshore.

“You talk to fishermen, and they all say: ‘We’ve never seen anything like this before,’ ” says Peterson. “So when you have no experience with something like this, it gets like, ‘What the hell’s going on?’ ”

Atmospheric scientists increasingly believe that the exceptionally warm waters over the past months are the early indications of a phase shift in the Pacific Decadal Oscillation, a cyclical warming of the North Pacific that happens a few times each century. Positive phases of the PDO have been known to last for 15 to 20 years, during which global warming can proceed at double the rate seen during negative phases. A positive phase also makes big El Niños, like this year’s, more likely. The nature of PDO phase shifts is unpredictable — climate scientists simply haven’t yet figured out precisely what’s behind them and why they happen when they do. It’s not a permanent change — the ocean’s temperature will likely drop from these record highs, at least temporarily, some time over the next few years — but the impact on marine species will be lasting, and scientists have pointed to the PDO as a global-warming preview.

“The climate [change] models predict this gentle, slow increase in temperature,” says Peterson, “but the main problem we’ve had for the last few years is the variability is so high. As scientists, we can’t keep up with it, and neither can the animals.” Peterson likens it to a boxer getting pummeled round after round: “At some point, you knock them down, and the fight is over.”

India

Pavement-melting heat waves in India. Harish Tyagi/EPA/Corbis

Attendant with this weird wildlife behavior is a stunning drop in the number of plankton — the basis of the ocean’s food chain. In July, another major study concluded that acidifying oceans are likely to have a “quite traumatic” impact on plankton diversity, with some species dying out while others flourish. As the oceans absorb carbon dioxide from the atmosphere, it’s converted into carbonic acid — and the pH of seawater declines. According to lead author Stephanie Dutkiewicz of MIT, that trend means “the whole food chain is going to be different.”
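Because the pH scale is logarithmic, a seemingly small drop hides a large change in hydrogen-ion concentration. Below is a minimal back-of-envelope sketch, assuming the widely cited fall in average surface-ocean pH from about 8.2 before the Industrial Revolution to about 8.1 today; these reference values are illustrative and are not taken from the Dutkiewicz study itself.

```python
# Commonly cited reference values, used here purely for illustration:
# average surface-ocean pH of roughly 8.2 before the Industrial Revolution
# and roughly 8.1 today.
pH_preindustrial = 8.2
pH_today = 8.1

# pH is defined as -log10 of the hydrogen-ion concentration, so [H+] = 10**(-pH).
h_pre = 10 ** (-pH_preindustrial)
h_now = 10 ** (-pH_today)

increase_pct = (h_now / h_pre - 1) * 100
print(f"Hydrogen-ion concentration is up roughly {increase_pct:.0f}%")  # ~26%
```

A drop of just 0.1 pH units works out to roughly a quarter more hydrogen ions in surface water, which is why a number that looks tiny on paper matters so much for plankton and other organisms sensitive to seawater chemistry.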

The Hansen study may have gotten more attention, but the Dutkiewicz study, and others like it, could have even more dire implications for our future. The rapid changes Dutkiewicz and her colleagues are observing have shocked some of their fellow scientists into thinking that yes, actually, we’re heading toward the worst-case scenario. Unlike a prediction of massive sea-level rise just decades away, the warming and acidifying oceans represent a problem that seems to have kick-started a mass extinction on the same time scale.

Jacquelyn Gill is a paleoecologist at the University of Maine. She knows a lot about extinction, and her work is more relevant than ever. Essentially, she’s trying to save the species that are alive right now by learning more about what killed off the ones that aren’t. The ancient data she studies shows “really compelling evidence that there can be events of abrupt climate change that can happen well within human life spans. We’re talking less than a decade.”

For the past year or two, a persistent change in winds over the North Pacific has given rise to what meteorologists and oceanographers are calling “the blob” — a highly anomalous patch of warm water between Hawaii, Alaska and Baja California that’s thrown the marine ecosystem into a tailspin. Amid warmer temperatures, plankton numbers have plummeted, and the myriad species that depend on them have migrated or seen their own numbers dwindle.

Significant northward surges of warm water have happened before, even frequently. El Niño, for example, does this on a predictable basis. But what’s happening this year appears to be something new. Some climate scientists think that the wind shift is linked to the rapid decline in Arctic sea ice over the past few years, which separate research has shown makes weather patterns more likely to get stuck.

A similar shift in the behavior of the jet stream has also contributed to the California drought and severe polar vortex winters in the Northeast over the past two years. An amplified jet-stream pattern has produced an unusual doldrum off the West Coast that’s persisted for most of the past 18 months. Daniel Swain, a Stanford University meteorologist, has called it the “Ridiculously Resilient Ridge” — weather patterns just aren’t supposed to last this long.

What’s increasingly uncontroversial among scientists is that in many ecosystems, the impacts of the current off-the-charts temperatures in the North Pacific will linger for years, or longer. The largest ocean on Earth, the Pacific, is exhibiting cyclical variability to greater extremes than other ocean basins. While the North Pacific is currently the most dramatic area of change in the world’s oceans, it’s not alone: Globally, 2014 was a record-setting year for ocean temperatures, and 2015 is on pace to beat it soundly, boosted by the El Niño in the Pacific. Six percent of the world’s reefs could disappear before the end of the decade, perhaps permanently, thanks to warming waters.

Because warm water expands in volume, the warming ocean is also driving a surge in sea-level rise. One recent study showed a slowdown in Atlantic Ocean currents, perhaps linked to glacial melt from Greenland, that caused a four-inch rise in sea levels along the Northeast coast in just two years, from 2009 to 2010. To be sure, this sudden and unpredicted surge appears to have been only temporary, but the scientists who studied it estimated it to be a 1-in-850-year event, and it has been blamed for accelerated beach erosion “almost as significant as some hurricane events.”
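Thermal expansion alone accounts for part of that surge, and the physics can be sketched with one line of arithmetic: a column of water of depth H warmed by an amount ΔT expands by roughly α × ΔT × H, where α is the thermal expansion coefficient of seawater. The numbers below are illustrative assumptions, not values from the study cited above.

```python
# Back-of-envelope thermosteric (thermal-expansion) sea-level rise.
# All three numbers below are illustrative assumptions, not measurements:
# alpha varies with temperature, salinity and depth; ~2e-4 per deg C is a
# typical near-surface value.
alpha = 2e-4            # thermal expansion coefficient of seawater, 1/degC
layer_depth_m = 700.0   # depth of the warming layer, metres
delta_T_degC = 0.1      # warming of that layer, degC

rise_mm = alpha * delta_T_degC * layer_depth_m * 1000
print(f"~{rise_mm:.0f} mm of sea-level rise from expansion alone")  # ~14 mm
```

Even a tenth of a degree of warming spread over the upper few hundred metres yields centimetre-scale rise before any meltwater is added; regional shifts in currents, like the Atlantic slowdown, then pile that extra water unevenly against particular coastlines.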

Turkey

Biblical floods in Turkey. Ali Atmaca/Anadolu Agency/Getty

Possibly worse than rising ocean temperatures is the acidification of the waters. Acidification has a direct effect on mollusks and other marine animals with hard outer bodies: A striking study last year showed that, along the West Coast, the shells of tiny snails are already dissolving, with as-yet-unknown consequences for the ecosystem. One of the study’s authors, Nina Bednaršek, told Science magazine that the snails’ shells, pitted by the acidifying ocean, resembled “cauliflower” or “sandpaper.” A similarly striking study by more than a dozen of the world’s top ocean scientists this July said that the current pace of increasing carbon emissions would force an “effectively irreversible” change on ocean ecosystems during this century. In as little as a decade, the study suggested, chemical changes will rise significantly above background levels in nearly half of the world’s oceans.

“I used to think it was kind of hard to make things in the ocean go extinct,” James Barry of the Monterey Bay Aquarium Research Institute in California told the Seattle Times in 2013. “But this change we’re seeing is happening so fast it’s almost instantaneous.”

Thanks to the pressure we’re putting on the planet’s ecosystem — warming, acidification and good old-fashioned pollution — the oceans are set up for several decades of rapid change. Here’s what could happen next.

The combination of excessive nutrients from agricultural runoff, abnormal wind patterns and the warming oceans is already creating seasonal dead zones in coastal regions, where decomposing algae blooms consume most of the available oxygen. The appearance of low-oxygen regions has doubled in frequency every 10 years since 1960 and should continue to grow over the coming decades at an even greater rate.
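Doubling every decade compounds faster than intuition suggests. Here is a minimal sketch of what that stated rate implies, with the caveat that extending the trend beyond the observed record is purely illustrative.

```python
# The article's figure: low-oxygen events have doubled in frequency every
# 10 years since 1960. Compounding that doubling forward is simple arithmetic;
# projecting it past the observed record is purely illustrative.
baseline_year = 1960
for year in range(1970, 2031, 10):
    decades = (year - baseline_year) // 10
    print(f"{year}: {2 ** decades}x the 1960 frequency")
# 1970: 2x, 1980: 4x, ..., 2020: 64x, 2030: 128x if the trend held
```

Six doublings since 1960 would already mean a sixty-four-fold increase, which is why the trajectory matters far more than any single season’s dead zone.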

So far, dead zones have remained mostly close to the coasts, but in the 21st century, deep-ocean dead zones could become common. These low-oxygen regions could gradually expand in size — potentially to thousands of miles across — forcing fish, whales, pretty much everything upward. If that were to occur, large sections of the temperate deep oceans would suffer from oxygen deprivation should the low-oxygen layer grow so pronounced that the water column stratifies, pushing surface ocean warming into overdrive and hindering the upwelling of cooler, nutrient-rich deep water.

Enhanced evaporation from the warmer oceans will create heavier downpours, perhaps destabilizing the root systems of forests, and accelerated runoff will pour more excess nutrients into coastal areas, further enhancing dead zones. In the past year, downpours have broken records in Long Island, Phoenix, Detroit, Baltimore, Houston and Pensacola, Florida.

Evidence for the above scenario comes in large part from our best understanding of what happened 250 million years ago, during the “Great Dying,” when more than 90 percent of all oceanic species perished after a pulse of carbon dioxide and methane from land-based sources began a period of profound climate change. The conditions that triggered the “Great Dying” took hundreds of thousands of years to develop. But humans have been emitting carbon dioxide at a much quicker rate, so the current mass extinction has taken only 100 years or so to kick-start.

With all these stressors working against it, a hypoxic feedback loop could wind up destroying some of the oceans’ most species-rich ecosystems within our lifetime. A recent study by Sarah Moffitt of the University of California-Davis said it could take the ocean thousands of years to recover. “Looking forward for my kid, people in the future are not going to have the same ocean that I have today,” Moffitt said.

As you might expect, having tickets to the front row of a global environmental catastrophe is taking an increasingly emotional toll on scientists, and in some cases pushing them toward advocacy. Of the two dozen or so scientists I interviewed for this piece, virtually all drifted into apocalyptic language at some point.

For Simone Alin, an oceanographer focusing on ocean acidification at NOAA’s Pacific Marine Environmental Laboratory in Seattle, the changes she’s seeing hit close to home. The Puget Sound is a natural laboratory for the coming decades of rapid change because its waters are naturally more acidified than most of the world’s marine ecosystems.

The local oyster industry here is already seeing serious impacts from acidifying waters and is going to great lengths to avoid a total collapse. Alin calls oysters, which are non-native, the canary in the coal mine for the Puget Sound: “A canary is also not native to a coal mine, but that doesn’t mean it’s not a good indicator of change.”

Though she works on fundamental oceanic changes every day, the Dutkiewicz study on the impending large-scale changes to plankton caught her off-guard: “This was alarming to me because if the basis of the food web changes, then . . . everything could change, right?”

Alin’s frank discussion of the looming oceanic apocalypse is perhaps a product of studying unfathomable change every day. But four years ago, the birth of her twins “heightened the whole issue,” she says. “I was worried enough about these problems before having kids that I maybe wondered whether it was a good idea. Now, it just makes me feel crushed.”

Katharine Hayhoe

Katharine Hayhoe speaks about climate change to students and faculty at Wayland Baptist University in 2011. Geoffrey McAllister/Chicago Tribune/MCT/Getty

Katharine Hayhoe, a climate scientist and evangelical Christian, moved from Canada to Texas with her husband, a pastor, precisely because of its vulnerability to climate change. There, she engages with the evangelical community on science — almost as a missionary would. But she’s already planning her exit strategy: “If we continue on our current pathway, Canada will be home for us long term. But the majority of people don’t have an exit strategy. . . . So that’s who I’m here trying to help.”

James Hansen, the dean of climate scientists, retired from NASA in 2013 to become a climate activist. But for all the gloom of the report he just put his name to, Hansen is actually somewhat hopeful. That’s because he knows that climate change has a straightforward solution: End fossil-fuel use as quickly as possible. If, tomorrow, the leaders of the United States and China agreed to a sufficiently strong, coordinated carbon tax that was also applied to imports, the rest of the world would have no choice but to sign on. This idea has already been pitched to Congress several times, with tepid bipartisan support. A carbon tax is probably a long shot, but for Hansen even the slim possibility that bold action like this might happen is enough to devote the rest of his life to working toward it. On a conference call with reporters in July, Hansen said a potential joint U.S.-China carbon tax is more important than whatever happens at the United Nations climate talks in Paris.

One group Hansen is helping is Our Children’s Trust, a legal advocacy organization that’s filed a number of novel challenges on behalf of minors under the idea that climate change is a violation of intergenerational equity — children, the group argues, are lawfully entitled to inherit a healthy planet.

A separate challenge to U.S. law is being brought by a former EPA scientist arguing that carbon dioxide isn’t just a pollutant (which, under the Clean Air Act, can dissipate on its own) but also a toxic substance. In general, such substances have exceptionally long life spans in the environment and pose an unreasonable risk, and they therefore require remediation. In this case, remediation may involve planting vast numbers of trees or restoring wetlands to bury excess carbon underground.

Even if these novel challenges succeed, it will take years before a bend in the curve is noticeable. But maybe that’s enough. When all feels lost, saving a few species will feel like a triumph.

From The Archives Issue 1241: August 13, 2015

Read more: http://www.rollingstone.com/politics/news/the-point-of-no-return-climate-change-nightmares-are-already-here-20150805#ixzz3iRVjFBme

Stop burning fossil fuels now: there is no CO2 ‘technofix’, scientists warn (The Guardian)

Researchers have demonstrated that even if a geoengineering solution to CO2 emissions could be found, it wouldn’t be enough to save the oceans

“The chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said the report’s co-author, Hans Joachim Schellnhuber. Photograph: Doug Perrine/Design Pics/Corbis

German researchers have demonstrated once again that the best way to limit climate change is to stop burning fossil fuels now.

In a “thought experiment” they tried another option: the future dramatic removal of huge volumes of carbon dioxide from the atmosphere. This would, they concluded, return the atmosphere to the greenhouse gas concentrations that existed for most of human history – but it wouldn’t save the oceans.

That is, the oceans would stay warmer, and more acidic, for thousands of years, and the consequences for marine life could be catastrophic.

The research, published in Nature Climate Change today, delivers yet another demonstration that there is so far no feasible “technofix” that would allow humans to go on mining and drilling for coal, oil and gas (known as the “business as usual” scenario) and then geoengineer a solution when climate change becomes calamitous.

Sabine Mathesius (of the Helmholtz Centre for Ocean Research in Kiel and the Potsdam Institute for Climate Impact Research) and colleagues decided to model what could be done with an as-yet-unproven technology called carbon dioxide removal. One example would be to grow huge numbers of trees, burn them, trap the carbon dioxide, compress it and bury it somewhere. Nobody knows if this can be done, but Dr Mathesius and her fellow scientists didn’t worry about that.

They calculated that it might plausibly be possible to remove carbon dioxide from the atmosphere at the rate of 90 billion tons a year. This is twice what is spilled into the air from factory chimneys and motor exhausts right now.
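To get a feel for the scale of that number, it helps to convert it into parts per million of atmospheric CO2, using the standard factor of roughly 7.8 billion tonnes of CO2 per ppm. The sketch below deliberately ignores the ocean and land carbon-cycle feedbacks that would offset much of the removal, which is precisely the complication the study highlights.

```python
# Standard conversion: ~2.13 billion tonnes of carbon per ppm of atmospheric CO2;
# CO2 weighs 44/12 as much as the carbon it contains, giving ~7.8 GtCO2 per ppm.
GTCO2_PER_PPM = 2.13 * 44 / 12       # ~7.81

removal_gt_per_year = 90.0           # removal rate assumed in the study
ppm_per_year = removal_gt_per_year / GTCO2_PER_PPM
print(f"~{ppm_per_year:.0f} ppm of atmospheric CO2 removed per year")  # ~12 ppm/yr
```

Even at that implausibly fast drawdown rate, the paper’s conclusion stands: CO2 that has already been mixed into the deep ocean stays there for centuries, beyond the reach of any atmospheric scrubbing.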

The scientists hypothesised a world that went on burning fossil fuels at an accelerating rate – and then adopted an as-yet-unproven high technology carbon dioxide removal technique.

“Interestingly, it turns out that after ‘business as usual’ until 2150, even taking such enormous amounts of CO2 from the atmosphere wouldn’t help the deep ocean that much – after the acidified water has been transported by large-scale ocean circulation to great depths, it is out of reach for many centuries, no matter how much CO2 is removed from the atmosphere,” said a co-author, Ken Caldeira, who is normally based at the Carnegie Institution in the US.

The oceans cover 70% of the globe. By 2500, ocean surface temperatures would have increased by 5C (9F) and the chemistry of the ocean waters would have shifted towards levels of acidity that would make it difficult for fish and shellfish to flourish. Warmer waters hold less dissolved oxygen. Ocean currents, too, would probably change.

But while change happens in the atmosphere over tens of years, change in the ocean surface takes centuries, and in the deep oceans, millennia. So even if atmospheric temperatures were restored to pre-Industrial Revolution levels, the oceans would continue to experience climatic catastrophe.

“In the deep ocean, the chemical echo of this century’s CO2 pollution will reverberate for thousands of years,” said co-author Hans Joachim Schellnhuber, who directs the Potsdam Institute. “If we do not implement emissions reductions measures in line with the 2C (3.6F) target in time, we will not be able to preserve ocean life as we know it.”