Tag archive: Mediação tecnológica (technological mediation)

For Insurers, No Doubts on Climate Change (N.Y.Times)

Master Sgt. Mark Olsen/U.S. Air Force, via Associated Press. Damage in Mantoloking, N.J., after Hurricane Sandy. Natural disasters caused $35 billion in private property losses last year.

By EDUARDO PORTER

Published: May 14, 2013

If there were one American industry that would be particularly worried about climate change it would have to be insurance, right?

From Hurricane Sandy’s devastating blow to the Northeast to the protracted drought that hit the Midwest Corn Belt, natural catastrophes across the United States pounded insurers last year, generating $35 billion in privately insured property losses, $11 billion more than the average over the last decade.

And the industry expects the situation will get worse. “Numerous studies assume a rise in summer drought periods in North America in the future and an increasing probability of severe cyclones relatively far north along the U.S. East Coast in the long term,” said Peter Höppe, who heads Geo Risks Research at the reinsurance giant Munich Re. “The rise in sea level caused by climate change will further increase the risk of storm surge.”

Most insurers, including the reinsurance companies that bear much of the ultimate risk in the industry, have little time for the arguments heard in some right-wing circles that climate change isn’t happening, and are quite comfortable with the scientific consensus that burning fossil fuels is the main culprit of global warming.

“Insurance is heavily dependent on scientific thought,” Frank Nutter, president of the Reinsurance Association of America, told me last week. “It is not as amenable to politicized scientific thought.”

Yet when I asked Mr. Nutter what the American insurance industry was doing to combat global warming, his answer was surprising: nothing much. “The industry has really not been engaged in advocacy related to carbon taxes or proposals addressing carbon,” he said. While some big European reinsurers like Munich Re and Swiss Re support efforts to reduce CO2 emissions, “in the United States the household names really have not engaged at all.” Instead, the focus of insurers’ advocacy efforts is zoning rules and disaster mitigation.

Last week, scientists announced that the concentration of heat-trapping carbon dioxide in the atmosphere had reached 400 parts per million — its highest level in at least three million years, before humans appeared on the scene. Back then, mastodons roamed the earth, the polar ice caps were smaller and the sea level was as much as 60 to 80 feet higher.

The milestone puts the earth nearer a point of no return, many scientists think, when vast, disruptive climate change is baked into our future. Pieter P. Tans, who runs the monitoring program at the National Oceanic and Atmospheric Administration, told my colleague Justin Gillis: “It symbolizes that so far we have failed miserably in tackling this problem.” And it raises a perplexing question: why hasn’t corporate America done more to sway its allies in the Republican Party to try to avert a disaster that would clearly be devastating to its own interests?

Mr. Nutter argues that the insurance industry’s reluctance is born of hesitation to become embroiled in controversies over energy policy. But perhaps its executives simply don’t feel so vulnerable. Like farmers, who are largely protected from the ravages of climate change by government-financed crop insurance, insurers also have less to fear than it might at first appear.

The federal government covers flood insurance, among the riskiest kind in this time of crazy weather. And insurers can raise premiums or even drop coverage to adjust to higher risks. Indeed, despite Sandy and drought, property and casualty insurance in the United States was more profitable in 2012 than in 2011, according to the Property Casualty Insurers Association of America.

But the industry’s analysis of the risks it faces is evolving. One sign of that is how some top American insurers responded to a billboard taken out by the conservative Heartland Institute, a prominent climate change denier that has received support from the insurance industry.

The billboard had a picture of Theodore Kaczynski, the Unabomber, alongside the words: “I still believe in global warming. Do you?”

Concerned about global warming and angry at being equated with a murderous psychopath, insurance companies like Allied World, Renaissance Re, State Farm and XL Group dropped their support for Heartland.

Even more telling, Eli Lehrer, a Heartland vice president who at the time led an insurance-financed project, left the group and helped start the R Street Institute, a standard conservative organization in all respects but one: it believes in climate change and supports a carbon tax to combat it. And it is financed largely with insurance industry money.

Mr. Lehrer points out that a carbon tax fits conservative orthodoxy. It is a broad and flat tax, whose revenue can be used to do away with the corporate income tax — a favorite target of the right. It provides a market-friendly signal, forcing polluters to bear the cost imposed on the rest of us and encouraging them to pollute less. And it is much preferable to a parade of new regulations from the Environmental Protection Agency.

“We are having a debate on the right about a carbon tax for the first time in a long time,” Mr. Lehrer said.

Bob Inglis, a former Republican congressman from South Carolina who lost his seat in the 2010 primary to a Tea Party-supported challenger, is another member of this budding coalition. Before he left Congress, he proposed a revenue-neutral bill to create a carbon tax and cut payroll taxes.

Changing the political economy of a carbon tax remains an uphill slog, especially in a stagnant economy. But Mr. Inglis notices a thaw. “The best way to do this is in the context of a grand bargain on tax reform,” he said. “It could happen in 2015 or 2016, but probably not before.”

He lists a dozen Republicans in the House and eight in the Senate who would be open to legislation to help avert climate change. He notes that Exelon, the gas and electricity giant, is sympathetic to his efforts — perhaps not least because a carbon tax would give an edge to gas over its dirtier rival, coal. Exxon, too, has said a carbon tax would be the most effective way to reduce emissions. So why hasn’t the insurance industry come on board?

Robert Muir-Wood is the chief research officer of Risk Management Solutions, one of two main companies the insurance industry relies on to crunch data and model future risks. He argues that insurers haven’t changed their tune because — with the exception of 2004 and 2005, when a string of hurricanes from Ivan to Katrina caused damage worth more than $200 billion — they haven’t yet experienced hefty, sustained losses attributable to climate change.

“Insurers were ready to sign up to all sorts of actions against climate change,” Mr. Muir-Wood told me from his office in London. Then the weather calmed down.

Still, Mr. Muir-Wood notes that the insurance industry faces a different sort of risk: political action. “That is the biggest threat,” he said. When insurers canceled policies and raised premiums in Florida in 2006, politicians jumped on them. “Insurers in Florida,” he said, “became Public Enemy No. 1.”

And that’s the best hope for those concerned about climate change: that global warming isn’t just devastating for society, but also bad for business.

Climate slowdown means extreme rates of warming ‘not as likely’ (BBC)

19 May 2013 Last updated at 17:31 GMT

By Matt McGrath – Environment correspondent, BBC News

The impacts of rising temperature are being felt particularly keenly in the polar regions

Scientists say the recent downturn in the rate of global warming will lead to lower temperature rises in the short-term.

Since 1998, there has been an unexplained “standstill” in the heating of the Earth’s atmosphere.

Writing in Nature Geoscience, the researchers say this will reduce predicted warming in the coming decades.

But in the long term, the expected temperature rises will not alter significantly.


The slowdown in the expected rate of global warming has been studied for several years now. Earlier this year, the UK Met Office lowered its five-year temperature forecast.

But this new paper gives the clearest picture yet of how any slowdown is likely to affect temperatures in both the short-term and long-term.

An international team of researchers looked at how the last decade would affect estimates of long-term equilibrium climate sensitivity and the shorter-term transient climate response.

Transient nature

Equilibrium climate sensitivity asks what would happen if we doubled concentrations of CO2 in the atmosphere and let the Earth’s oceans and ice sheets respond to it over several thousand years.

Transient climate response is a much shorter-term calculation, again based on a doubling of CO2.

The Intergovernmental Panel on Climate Change reported in 2007 that the short-term temperature rise would most likely be 1-3C (1.8-5.4F).

But in this new analysis, which includes only the temperatures from the last decade, the projected range is 0.9-2.0C.

The report suggests that warming in the near term will be less than forecast

“The hottest of the models in the medium-term, they are actually looking less likely or inconsistent with the data from the last decade alone,” said Dr Alexander Otto from the University of Oxford.

“The most extreme projections are looking less likely than before.”

The authors calculate that over the coming decades global average temperatures will warm about 20% more slowly than expected.

But when it comes to the longer term picture, the authors say their work is consistent with previous estimates. The IPCC said that climate sensitivity was in the range of 2.0-4.5C.

Ocean storage

This latest research, including the decade of stalled temperature rises, produces a range of 0.9-5.0C.

“It is a bigger range of uncertainty,” said Dr Otto.

“But it still includes the old range. We would all like climate sensitivity to be lower but it isn’t.”

The researchers say the difference between the lower short-term estimate and the more consistent long-term picture can be explained by the fact that the heat from the last decade has been absorbed into and is being stored by the world’s oceans.
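The distinction between the two measures can be made concrete with the simple energy-budget relations often used in studies of this kind. The following is a sketch of that standard approach, not the paper's own derivation, and the value of the doubling forcing is the commonly cited approximation:

% Sketch of the standard energy-budget approach (assumed, not quoted from the paper).
% \Delta T: observed change in global mean surface temperature
% \Delta F: change in radiative forcing over the same period
% \Delta Q: change in the Earth's heat uptake, mostly ocean storage
% F_{2\times}: forcing from a doubling of CO2, commonly taken as about 3.7 W m^{-2}
\[
  \mathrm{TCR} \approx F_{2\times}\,\frac{\Delta T}{\Delta F},
  \qquad
  \mathrm{ECS} \approx F_{2\times}\,\frac{\Delta T}{\Delta F - \Delta Q}.
\]

A decade in which much of the heat goes into the oceans subdues the surface warming \Delta T, which pulls down the transient estimate; but it also raises the heat-uptake term \Delta Q, shrinking the denominator of the equilibrium estimate, so the long-term sensitivity range stays broadly where it was — the pattern the study reports.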

Not everyone agrees with this perspective.

Prof Steven Sherwood, from the University of New South Wales, says the conclusion about the oceans needs to be taken with a grain of salt for now.

“There is other research out there pointing out that this storage may be part of a natural cycle that will eventually reverse, either due to El Nino or the so-called Atlantic Multidecadal Oscillation, and therefore may not imply what the authors are suggesting,” he said.

The authors say there are ongoing uncertainties surrounding the role of aerosols in the atmosphere and around the issue of clouds.

“We would expect a single decade to jump around a bit but the overall trend is independent of it, and people should be exactly as concerned as before about what climate change is doing,” said Dr Otto.

Is there any succour in these findings for climate sceptics, who say the slowdown over the past 14 years means that global warming is not real?

“None. No comfort whatsoever,” he said.

“Scientists can tell a story without losing accuracy” (Fapesp)

Dan Fay, a director at Microsoft Research Connections, says technology can help researchers showcase their papers and attract more readers, whether peers, policy makers or funding agencies (photo: Edu Cesar/FAPESP)

Interviews

21/05/2013

By Frances Jones

Agência FAPESP – With more than 20 years at Microsoft, the engineer Dan Fay is part of a select group of professionals who scour the scientific world for partnerships between the computing giant and the authors of projects related to Earth sciences, energy and the environment.

These are projects like the World Wide Telescope software, developed in partnership with researchers at Johns Hopkins University, which lets scientists in different regions of the world access images of celestial objects collected by space telescopes, observatories and research institutions, and manipulate and share those data.

“Our challenge is to find researchers to whose research we can add value, rather than just supplying more machines,” said Fay, who, in addition to directing the Earth, Energy and Environment division of Microsoft Research Connections, Microsoft’s research arm, sits on the industrial advisory board for Computing and Information Technology at Purdue University, in the US state of Indiana.

Fay was in São Paulo to take part in the Latin American eScience Workshop 2013, held by FAPESP and Microsoft Research from May 13 to 15, where he gave two talks to researchers and students from several countries on the advances, across many fields of knowledge, made possible by improvements in the capacity to analyze large volumes of information. In one, he spoke about how science can use cloud computing; in the other, about “how to publicize your research internationally.” Afterwards, he spoke with Agência FAPESP. Excerpts from the interview follow.

Agência FAPESP – What do you expect for the future of eScience, the use of computing technologies and tools in doing science?
Fay – The interesting thing about eScience is combining computational data and new techniques across the most diverse fields. What we see now is greater use of computing, but the next step will be crossing different domains and using that information in combination — for example, biology and the environment together, in a cross-cutting analysis. The two domains speak different languages. Moving between fields will be one of the next challenges. You cannot assume a piece of data means something. You have to be sure.

Agência FAPESP – How can researchers use cloud computing in their studies?
Fay – Cloud computing provides a new paradigm for tackling the challenges of computation and data analysis across the fields of scientific knowledge. Unlike traditional supercomputers, which are isolated and centralized, the cloud is everywhere and can support different styles of computing suited to data analysis and collaboration. Over the last three years we have worked with academic researchers to explore the potential of this new platform. We have more than 90 research projects using Windows Azure [Microsoft’s cloud platform] and we have learned a lot. As with any new technology, there are always people who start earlier and others later. Several researchers have begun using it to understand how it can change the way they do research.

Agência FAPESP – Do you think the way science is communicated will change in the future?
Dan Fay – I told the students at the workshop that there will always be traditional, peer-reviewed journal articles. With the amount of information we have today, however, to make it more visible to other people and to the general public, the data can also be presented in other ways. An article that digs into the details can also be accessible to people from different domains. Producing photos and videos, among other resources, can also help.

Agência FAPESP – Is that a role for scientists?
Fay – You can take the same accurate scientific data and tell stories about them. And let people hear that story directly from you, in a visual form. That interaction is very powerful. It is like going to a museum and seeing works of art: people connect with them. Scientists want their colleagues and people in general to connect with their information, data or papers in that way. I think it is good to find other mechanisms that can explain the data. We are in a fascinating period in which there is real concern with communicating information in an interesting way. That is true not only when you want to speak to your peers, but especially if you want policy makers, the government or funding agencies to understand what your work is. Sometimes it has to be presented in a form that can be consumed more easily.

Agência FAPESP – Does technology help on that front?
Fay – Yes. Partly because these formats lead people to read the papers in more depth. There is a researcher at Harvard, an astronomer, who created one of those virtual tours we have about galaxies. She estimates that more people have watched her tours than have read her scientific papers. That kind of understanding is growing. If I can help someone read a paper by having them see it somewhere else first, that is progress.

Agência FAPESP – Do you think scientists should join digital social networks, such as Facebook, to showcase their work?
Fay – Yes. They should always rest on scientific rigor, but it helps to employ design or marketing techniques. Even in their posters. It is a way of communicating the information better.

Agência FAPESP – And will new technologies promote the opening up of scientific data?
Fay – Beyond open data, there is a social problem, in which people feel they own their ideas or information. Finding ways to share data and give due credit to the people who collected, processed and made them available is important. That, I believe, is where several of the challenges lie. There is also the issue of medical data, or other data you do not want floating around.

American scientists succeed in cloning human embryos (O Globo)

The work is the first to succeed in humans with the technique that gave rise to Dolly the sheep

The authors say the aim is not to make human clones, but only to advance to the blastocyst stage in order to obtain stem cells

In 2004, a South Korean researcher announced the same feat, but it was discredited a year later

ROBERTA JANSEN

Published: 15/05/13, 4:03 p.m.; updated: 15/05/13, 8:56 p.m.

Cloned embryo obtained in the study. Photo: Divulgação

OREGON. Sixteen years after the cloning of the first mammal, Dolly the sheep, scientists have for the first time cloned a human embryo in its earliest stages of development. The proto-embryos were used to produce embryonic stem cells — capable of turning into any tissue in the body — in a highly significant and long-awaited advance for the treatment of injuries and serious diseases such as Parkinson’s, multiple sclerosis and heart conditions. Specialists involved in the work insist the goal is not to clone human beings, but rather to create new personalized therapies.

So much so that the cloned human embryos used in the research were destroyed at very early stages of development, soon after the stem cells were extracted, rather than being allowed to develop further, as in the case of Dolly and the many other animals cloned since. The technique used, however, was very similar to the one that created the sheep. Skin cells from an individual were placed in an egg previously emptied of its genetic material and stimulated to develop. When they reached the blastocyst stage, the embryonic stem cells were extracted and the embryos destroyed. The study was published in the journal “Cell”.

Generating large quantities of stem cells from the patient’s own body was a kind of Holy Grail of modern medical science, as the journalist Steve Connor put it in the “Independent”. Although the procedure had been carried out in animals, it had never before been achieved with human material, despite numerous attempts. The difficulty apparently lay in the greater fragility of the human egg.

In 2004, a group led by Woo Suk Hwang, of Seoul National University, announced it had produced the first cloned human embryo and then obtained embryonic stem cells from it. Less than a year later, however, the group, which had already cloned a dog, was accused of fraud and retracted its results. Other groups have tried, but their embryos did not develop past the 6-to-12-cell stage.

The race to obtain embryonic stem cells makes complete sense. Grown in the laboratory, these cells can give rise to any tissue in the human body. In theory, at least, they could therefore heal spinal cord injuries, rebuild organs and treat serious vision problems, offering unprecedented treatments for many diseases that are incurable today. Because the tissues would be made from the patient’s own genetic material (in this case, donated skin cells), there would be no risk of rejection. Personalized medicine would reach its apex.

— Our finding offers new ways of generating embryonic stem cells for patients with damaged tissues and organs — said the study’s lead author, Shoukhrat Mitalipov, of Oregon Health & Science University, in the US, in an official statement about the study. — These stem cells can regenerate organs or replace damaged tissue, leading to cures for diseases that today affect millions of people.

The group was also able to observe the capacity of the cells obtained to differentiate into specific tissues.

— A close examination of the stem cells obtained through this technique demonstrated their ability to convert, like any normal embryonic stem cell, into different cell types, among them nerve cells, liver cells and heart cells — Mitalipov told the “Independent”.

The study nevertheless raises serious ethical concerns, above all regarding the creation of human clones. There is fear that the technique could be added to those offered by in vitro fertilization clinics, as an alternative for infertile couples, for example. Other groups argue that manipulating human embryos is simply unethical.

— The sole aim of the research is to generate embryonic stem cells to treat serious diseases, not to increase the chances of producing cloned babies — Mitalipov insisted. — That is not our focus, and we do not believe our findings will be used by other groups as a step toward human reproductive cloning.

Read more on this subject at http://oglobo.globo.com/ciencia/cientistas-americanos-conseguem-clonar-embrioes-humanos-8399684

Global Networks Must Be Redesigned, Experts Urge (Science Daily)

May 1, 2013 — Our global networks have generated many benefits and new opportunities. However, they have also established highways for failure propagation, which can ultimately result in human-made disasters. For example, today’s quick spreading of emerging epidemics is largely a result of global air traffic, with serious impacts on global health, social welfare, and economic systems.

(Credit: © Angie Lingnau / Fotolia)

Dirk Helbing’s publication illustrates how cascade effects and complex dynamics amplify the vulnerability of networked systems. For example, just a few long-distance connections can largely decrease our ability to mitigate the threats posed by global pandemics. Initially beneficial trends, such as globalization, increasing network densities, higher complexity, and an acceleration of institutional decision processes may ultimately push human-made or human-influenced systems towards systemic instability, Helbing finds. Systemic instability refers to a system that will get out of control sooner or later, even if everybody involved is well skilled, highly motivated and behaving properly. Crowd disasters are shocking examples illustrating that many deaths may occur even when everybody tries hard not to hurt anyone.

Our Intuition of Systemic Risks Is Misleading

Networking system components that are well-behaved in isolation may create counter-intuitive emergent system behaviors, which are not well-behaved at all. For example, cooperative behavior might unexpectedly break down as the connectivity of interaction partners grows. “Applying this to the global network of banks, this might actually have caused the financial meltdown in 2008,” believes Helbing.

Globally networked risks are difficult to identify, map and understand, since there are often no evident, unique cause-effect relationships. Failure rates may change depending on the random path taken by the system, with the consequence of increasing risks as cascade failures progress, thereby decreasing the capacity of the system to recover. “In certain cases, cascade effects might reach any size, and the damage might be practically unbounded,” says Helbing. “This is quite disturbing and hard to imagine.” All of these features make strongly coupled, complex systems difficult to predict and control, such that our attempts to manage them go astray.
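To make the cascade mechanism concrete, here is a minimal sketch of a load-redistribution cascade on a random network. It illustrates the generic effect only, not the model from Helbing’s paper; the network size, loads and capacity margins are all invented for illustration.

# Minimal sketch of a load-redistribution cascade on a random network.
# Illustrative only -- not the model from Helbing's Nature paper.
import random
import networkx as nx

random.seed(42)
G = nx.gnp_random_graph(200, 0.03, seed=42)  # sparse "global" network
load = {v: 1.0 for v in G}                   # initial load per node
capacity = {v: 1.6 for v in G}               # uniform safety margin

failed = {random.choice(list(G))}            # one local failure to start
frontier = set(failed)
while frontier:
    nxt = set()
    for v in frontier:
        nbrs = [u for u in G.neighbors(v) if u not in failed]
        if not nbrs:
            continue
        share = load[v] / len(nbrs)          # redistribute the failed load
        for u in nbrs:
            load[u] += share
            if load[u] > capacity[u]:        # overload -> next-round failure
                nxt.add(u)
    failed |= nxt
    frontier = nxt

print(f"cascade size: {len(failed)} of {G.number_of_nodes()} nodes")

In this toy setting, raising the connectivity or shaving the capacity margin can flip the outcome from a handful of failed nodes to most of the network, which is the kind of counter-intuitive sensitivity the article describes.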

“Take the financial system,” says Helbing. “The financial crisis hit regulators by surprise.” But back in 2003, the legendary investor Warren Buffett warned of mega-catastrophic risks created by large-scale investments into financial derivatives. It took five years until the “investment time bomb” exploded, causing losses of trillions of dollars to our economy. “The financial architecture is not properly designed,” concludes Helbing. “The system lacks breaking points, as we have them in our electrical system.” This allows local problems to spread globally, thereby reaching catastrophic dimensions.

A Global Ticking Time Bomb?

Have we unintentionally created a global time bomb? If so, what kinds of global catastrophic scenarios might humans face in complex societies? A collapse of the world economy or of our information and communication systems? Global pandemics? Unsustainable growth or environmental change? A global food or energy crisis? A cultural clash or global-scale conflict? Or will we face a combination of these contagious phenomena — a scenario that the World Economic Forum calls the “perfect storm”?

“While analyzing such global risks,” says Helbing, “one must bear in mind that the propagation speed of destructive cascade effects might be slow, but nevertheless hard to stop. It is time to recognize that crowd disasters, conflicts, revolutions, wars, and financial crises are the undesired result of operating socio-economic systems in the wrong parameter range, where systems are unstable.” In the past, these social problems seemed to be puzzling, unrelated, and almost “God-given” phenomena one had to live with. Nowadays, thanks to new complexity science models and large-scale data sets (“Big Data”), one can analyze and understand the underlying mechanisms, which let complex systems get out of control.

Disasters should not be considered “bad luck.” They are a result of inappropriate interactions and institutional settings, caused by humans. Even worse, they are often the consequence of a flawed understanding of counter-intuitive system behaviors. “For example, it is surprising that we didn’t have sufficient precautions against a financial crisis and well-elaborated contingency plans,” states Helbing. “Perhaps, this is because there should not be any bubbles and crashes according to the predominant theoretical paradigm of efficient markets.” Conventional thinking can cause fateful decisions and the repetition of previous mistakes. “In other words: While we want to do the right thing, we often do wrong things,” concludes Helbing. This obviously calls for a paradigm shift in our thinking. “For example, we may try to promote innovation, but suffer economic decline, because innovation requires diversity more than homogenization.”

Global Networks Must Be Re-Designed

Helbing’s publication explores why today’s risk analysis falls short. “Predictability and controllability are design issues,” stresses Helbing. “And uncertainty, which means the impossibility to determine the likelihood and expected size of damage, is often man-made.” Many systems could be better managed with real-time data. These would allow one to avoid delayed response and to enhance the transparency, understanding, and adaptive control of systems. However, even all the data in the world cannot compensate for ill-designed systems such as the current financial system. Such systems will sooner or later get out of control, causing catastrophic human-made failure. Therefore, a re-design of such systems is urgently needed.

Helbing’s Nature paper on “Globally Networked Risks” also calls attention to strategies that make systems more resilient, i.e. able to recover from shocks. For example, setting up backup systems (e.g. a parallel financial system), limiting the system size and connectivity, building in breaking points to stop cascade effects, or reducing complexity may be used to improve resilience. In the case of financial systems, there is still much work to be done to fully incorporate these principles.

Contemporary information and communication technologies (ICT) are also far from being failure-proof. They are based on principles that are 30 or more years old and not designed for today’s use. The explosion of cyber risks is a logical consequence. This includes threats to individuals (such as privacy intrusion, identity theft, or manipulation through personalized information), to companies (such as cybercrime), and to societies (such as cyberwar or totalitarian control). To counter this, Helbing recommends an entirely new ICT architecture inspired by principles of decentralized self-organization as observed in immune systems, ecology, and social systems.

Coming Era of Social Innovation

A better understanding of the success principles of societies is urgently needed. “For example, when systems become too complex, they cannot be effectively managed top-down,” explains Helbing. “Guided self-organization is a promising alternative to manage complex dynamical systems bottom-up, in a decentralized way.” The underlying idea is to exploit, rather than fight, the inherent tendency of complex systems to self-organize and thereby create a robust, ordered state. For this, it is important to have the right kinds of interactions, adaptive feedback mechanisms, and institutional settings, i.e. to establish proper “rules of the game.” The paper offers the example of an intriguing “self-control” principle, where traffic lights are controlled bottom-up by the vehicle flows rather than top-down by a traffic center.

Creating and Protecting Social Capital

“One man’s disaster is another man’s opportunity. Therefore, many problems can only be successfully addressed with transparency, accountability, awareness, and collective responsibility,” underlines Helbing. Moreover, social capital such as cooperativeness or trust is important for economic value generation, social well-being and societal resilience, but it may be damaged or exploited. “Humans must learn how to quantify and protect social capital. A warning example is the loss of trillions of dollars in the stock markets during the financial crisis.” This crisis was largely caused by a loss of trust. “It is important to stress that risk insurances today do not consider damage to social capital,” Helbing continues. However, it is known that large-scale disasters have a disproportionate public impact, in part because they destroy social capital. As we neglect social capital in risk assessments, we are taking excessive risks.

Journal Reference:

  1. Dirk Helbing. Globally networked risks and how to respond. Nature, 2013; 497 (7447): 51. DOI: 10.1038/nature12047

Computer Scientists Suggest New Spin On Origins of Evolvability: Competition to Survive Not Necessary? (Science Daily)

Apr. 26, 2013 — Scientists have long observed that species seem to have become increasingly capable of evolving in response to changes in the environment. But computer science researchers now say that the popular explanation of competition to survive in nature may not actually be necessary for evolvability to increase.

The average evolvability of organisms in each niche at the end of a simulation is shown. The lighter the color, the more evolvable individuals are within that niche. The overall result is that, as in the first model, evolvability increases with increasing distance from the starting niche in the center. (Credit: Joel Lehman, Kenneth O. Stanley. Evolvability Is Inevitable: Increasing Evolvability without the Pressure to Adapt. PLoS ONE, 2013; 8 (4): e62186 DOI: 10.1371/journal.pone.0062186)

In a paper published this week in PLOS ONE, the researchers report that evolvability can increase over generations regardless of whether species are competing for food, habitat or other factors.

Using a simulated model they designed to mimic how organisms evolve, the researchers saw increasing evolvability even without competitive pressure.

“The explanation is that evolvable organisms separate themselves naturally from less evolvable organisms over time simply by becoming increasingly diverse,” said Kenneth O. Stanley, an associate professor at the College of Engineering and Computer Science at the University of Central Florida. He co-wrote the paper about the study along with lead author Joel Lehman, a post-doctoral researcher at the University of Texas at Austin.

The finding could have implications for the origins of evolvability in many species.

“When new species appear in the future, they are most likely descendants of those that were evolvable in the past,” Lehman said. “The result is that evolvable species accumulate over time even without selective pressure.”

During the simulations, the team’s simulated organisms became more evolvable without any pressure from other organisms out-competing them. The simulations were based on a conceptual algorithm.

“The algorithms used for the simulations are abstractly based on how organisms are evolved, but not on any particular real-life organism,” explained Lehman.
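The flavor of such an algorithm can be sketched in a few lines. The toy model below is only in the spirit of the paper’s niche-based simulations; the niche layout, mutation rates and parameter values are all invented for illustration. Lineages drift across a line of niches, a child can only found an empty niche, and nothing ever competes or dies.

# Toy model in the spirit of Lehman & Stanley's niche simulation
# (mechanics and parameters invented for illustration, not theirs).
import random

random.seed(1)
NICHES = 201                   # one-dimensional line of niches
occupant = {NICHES // 2: 1.0}  # niche index -> evolvability of its founder

for generation in range(500):
    for niche, evolvability in list(occupant.items()):
        # Offspring phenotype drifts in proportion to parental evolvability,
        # and evolvability itself is heritable with small mutations.
        child_niche = niche + round(random.gauss(0, evolvability))
        child_evo = max(0.1, evolvability + random.gauss(0, 0.05))
        # No competition: a child may only settle an unoccupied niche.
        if 0 <= child_niche < NICHES and child_niche not in occupant:
            occupant[child_niche] = child_evo

center = NICHES // 2
far = [e for n, e in occupant.items() if abs(n - center) > 60]
near = [e for n, e in occupant.items() if abs(n - center) <= 20]
if far and near:
    print(f"mean evolvability far from the origin: {sum(far) / len(far):.2f}")
    print(f"mean evolvability near the origin:     {sum(near) / len(near):.2f}")

Because only the more mutable lineages keep reaching fresh niches, average evolvability in this sketch tends to rise with distance from the starting niche, echoing the figure caption above, even though no selective pressure is coded in.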

The team’s hypothesis is unique and stands in contrast to the most popular theories for why evolvability increases.

“An important implication of this result is that traditional selective and adaptive explanations for phenomena such as increasing evolvability deserve more scrutiny and may turn out to be unnecessary in some cases,” Stanley said.

Stanley is an associate professor at UCF. He has a bachelor of science in engineering from the University of Pennsylvania and a doctorate in computer science from the University of Texas at Austin. He serves on the editorial boards of several journals. He has over 70 publications in competitive venues and has secured grants worth more than $1 million. His works in artificial intelligence and evolutionary computation have been cited more than 4,000 times.

Journal Reference:

  1. Joel Lehman, Kenneth O. Stanley. Evolvability Is Inevitable: Increasing Evolvability without the Pressure to Adapt. PLoS ONE, 2013; 8 (4): e62186. DOI: 10.1371/journal.pone.0062186

Mathematical Models Out-Perform Doctors in Predicting Cancer Patients’ Responses to Treatment (Science Daily)

Apr. 19, 2013 — Mathematical prediction models are better than doctors at predicting the outcomes and responses of lung cancer patients to treatment, according to new research presented today (Saturday) at the 2nd Forum of the European Society for Radiotherapy and Oncology (ESTRO).

These differences hold even after the doctor has seen the patient, a consultation that can provide extra information, and knows what the treatment plan and radiation dose will be.

“The number of treatment options available for lung cancer patients are increasing, as well as the amount of information available to the individual patient. It is evident that this will complicate the task of the doctor in the future,” said the presenter, Dr Cary Oberije, a postdoctoral researcher at the MAASTRO Clinic, Maastricht University Medical Center, Maastricht, The Netherlands. “If models based on patient, tumour and treatment characteristics already out-perform the doctors, then it is unethical to make treatment decisions based solely on the doctors’ opinions. We believe models should be implemented in clinical practice to guide decisions.”

Dr Oberije and her colleagues in The Netherlands used mathematical prediction models that had already been tested and published. The models use information from previous patients to create a statistical formula that can be used to predict the probability of outcome and responses to treatment using radiotherapy with or without chemotherapy for future patients.

Having obtained predictions from the mathematical models, the researchers asked experienced radiation oncologists to predict the likelihood of lung cancer patients surviving for two years, or suffering from shortness of breath (dyspnea) and difficulty swallowing (dysphagia), at two points in time: 1) after they had seen the patient for the first time, and 2) after the treatment plan was made.

At the first time point, the doctors predicted two-year survival for 121 patients, dyspnea for 139 and dysphagia for 146 patients. At the second time point, predictions were only available for 35, 39 and 41 patients respectively.

For all three predictions and at both time points, the mathematical models substantially outperformed the doctors’ predictions, with the doctors’ predictions being little better than those expected by chance.

The researchers plotted the results on a special graph [1] on which the area below the plotted line is used for measuring the accuracy of predictions; 1 represents a perfect prediction, while 0.5 represents predictions that were right in 50% of cases, i.e. the same as chance. They found that the model predictions at the first time point were 0.71 for two-year survival, 0.76 for dyspnea and 0.72 for dysphagia. In contrast, the doctors’ predictions were 0.56, 0.59 and 0.52 respectively.

The models had a better positive predictive value (PPV) — a measure of the proportion of patients who were correctly assessed as being at risk of dying within two years or suffering from dyspnea and dysphagia — than the doctors. The negative predictive value (NPV) — a measure of the proportion of patients that would not die within two years or suffer from dyspnea and dysphagia — was comparable between the models and the doctors.
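For readers who want to see how these three numbers relate, here is a small sketch on made-up predictions (toy values, not the study’s data), using scikit-learn for the AUC calculation.

# Sketch of the metrics described above, on invented toy data.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])  # 1 = event occurred
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.3, 0.8, 0.5, 0.4, 0.1])

auc = roc_auc_score(y_true, y_prob)    # 0.5 = chance, 1.0 = perfect

y_pred = (y_prob >= 0.5).astype(int)   # threshold probabilities at 0.5
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
ppv = tp / (tp + fp)  # share of predicted-positives that are true positives
npv = tn / (tn + fn)  # share of predicted-negatives that are true negatives

print(f"AUC={auc:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")

A model and a doctor can have similar NPV while differing sharply in AUC and PPV, which is exactly the pattern the study reports.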

“This indicates that the models were better at identifying high risk patients that have a very low chance of surviving or a very high chance of developing severe dyspnea or dysphagia,” said Dr Oberije.

The researchers say it is important that further research is carried out into how prediction models can be integrated into standard clinical care. It is also important to improve the models further by incorporating the latest advances in areas such as genetics and imaging. This will make it possible to tailor treatment to the individual patient’s biological make-up and tumour type.

“In our opinion, individualised treatment can only succeed if prediction models are used in clinical practice. We have shown that current models already outperform doctors. Therefore, this study can be used as a strong argument in favour of using prediction models and changing current clinical practice,” said Dr Oberije.

“Correct prediction of outcomes is important for several reasons,” she continued. “First, it offers the possibility to discuss treatment options with patients. If survival chances are very low, some patients might opt for a less aggressive treatment with fewer side-effects and better quality of life. Second, it could be used to assess which patients are eligible for a specific clinical trial. Third, correct predictions make it possible to improve and optimise the treatment. Currently, treatment guidelines are applied to the whole lung cancer population, but we know that some patients are cured while others are not and some patients suffer from severe side-effects while others don’t. We know that there are many factors that play a role in the prognosis of patients and prediction models can combine them all.”

At present, prediction models are not used as widely as they could be by doctors. Dr Oberije says there are a number of reasons: some models lack clinical credibility; others have not yet been tested; the models need to be available and easy to use by doctors; and many doctors still think that seeing a patient gives them information that cannot be captured in a model. “Our study shows that it is very unlikely that a doctor can outperform a model,” she concluded.

President of ESTRO, Professor Vincenzo Valentini, a radiation oncologist at the Policlinico Universitario A. Gemelli, Rome, Italy, commented: “The booming growth of biological, imaging and clinical information will challenge the decision capacity of every oncologist. The understanding of the knowledge management sciences is becoming a priority for radiation oncologists in order for them to tailor their choices to cure and care for individual patients.”

[1] For the mathematicians among you, the graph is known as an Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC).

[2] This work was partially funded by grants from the Dutch Cancer Society (KWF), the European Fund for Regional Development (INTERREG/EFRO), and the Center for Translational Molecular Medicine (CTMM).

Carbon bubble will plunge the world into another financial crisis – report (The Guardian)

Trillions of dollars at risk as stock markets inflate value of fossil fuels that may have to remain buried forever, experts warn

Damian Carrington – The Guardian, Friday 19 April 2013

Global stock markets are betting on countries failing to adhere to legally binding carbon emission targets. Photograph: Robert Nickelsberg/Getty Images

The world could be heading for a major economic crisis as stock markets inflate an investment bubble in fossil fuels to the tune of trillions of dollars, according to leading economists.

“The financial crisis has shown what happens when risks accumulate unnoticed,” said Lord (Nicholas) Stern, a professor at the London School of Economics. He said the risk was “very big indeed” and that almost all investors and regulators were failing to address it.

The so-called “carbon bubble” is the result of an over-valuation of oil, coal and gas reserves held by fossil fuel companies. According to a report published on Friday, at least two-thirds of these reserves will have to remain underground if the world is to meet existing internationally agreed targets to avoid the threshold for “dangerous” climate change. If the agreements hold, these reserves will be in effect unburnable and so worthless – leading to massive market losses. But the stock markets are betting on countries’ inaction on climate change.

The stark report is by Stern and the thinktank Carbon Tracker. Their warning is supported by organisations including HSBC, Citi, Standard and Poor’s and the International Energy Agency. The Bank of England has also recognised that a collapse in the value of oil, gas and coal assets as nations tackle global warming is a potential systemic risk to the economy, with London being particularly at risk owing to its huge listings of coal.

Stern said that far from reducing efforts to develop fossil fuels, the top 200 companies spent $674bn (£441bn) in 2012 to find and exploit even more new resources, a sum equivalent to 1% of global GDP, which could end up as “stranded” or valueless assets. Stern’s landmark 2006 report on the economic impact of climate change – commissioned by the then chancellor, Gordon Brown – concluded that spending 1% of GDP would pay for a transition to a clean and sustainable economy.

The world’s governments have agreed to restrict the global temperature rise to 2C, beyond which the impacts become severe and unpredictable. But Stern said the investors clearly did not believe action to curb climate change was going to be taken. “They can’t believe that and also believe that the markets are sensibly valued now.”

“They only believe environmental regulation when they see it,” said James Leaton, from Carbon Tracker and a former PwC consultant. He said short-termism in financial markets was the other major reason for the carbon bubble. “Analysts say you should ride the train until just before it goes off the cliff. Each thinks they are smart enough to get off in time, but not everyone can get out of the door at the same time. That is why you get bubbles and crashes.”

Paul Spedding, an oil and gas analyst at HSBC, said: “The scale of ‘listed’ unburnable carbon revealed in this report is astonishing. This report makes it clear that ‘business as usual’ is not a viable option for the fossil fuel industry in the long term. [The market] is assuming it will get early warning, but my worry is that things often happen suddenly in the oil and gas sector.”

HSBC warned that 40-60% of the market capitalisation of oil and gas companies was at risk from the carbon bubble, with the top 200 fossil fuel companies alone having a current value of $4tn, along with $1.5tn debt.

Lord McFall, who chaired the Commons Treasury select committee for a decade, said: “Despite its devastating scale, the banking crisis was at its heart an avoidable crisis: the threat of significant carbon writedown has the unmistakable characteristics of the same endemic problems.”

The report calculates that the world’s currently indicated fossil fuel reserves equate to 2,860bn tonnes of carbon dioxide, but that just 31% could be burned for an 80% chance of keeping below a 2C temperature rise. For a 50% chance of 2C or less, just 38% could be burned.
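The reserves arithmetic is simple enough to check directly; the sketch below just restates the paragraph’s figures.

# Back-of-envelope version of the report's reserves arithmetic
# (figures taken from the paragraph above).
reserves_gt_co2 = 2860                    # listed fossil fuel reserves, Gt CO2

burnable_80pct = 0.31 * reserves_gt_co2   # 80% chance of staying below 2C
burnable_50pct = 0.38 * reserves_gt_co2   # 50% chance of staying below 2C

print(f"burnable for an 80% chance: {burnable_80pct:.0f} Gt CO2")
print(f"burnable for a 50% chance:  {burnable_50pct:.0f} Gt CO2")
print(f"stranded in the 80% case:   {reserves_gt_co2 - burnable_80pct:.0f} Gt CO2")

Roughly 890 of the 2,860 Gt would be burnable in the 80% case, leaving nearly 2,000 Gt of booked reserves stranded.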

Carbon capture and storage technology, which buries emissions underground, can play a role in the future, but even an optimistic scenario which sees 3,800 commercial projects worldwide would allow only an extra 4% of fossil fuel reserves to be burned. There are currently no commercial projects up and running. The normally conservative International Energy Agency has also concluded that a major part of fossil fuel reserves is unburnable.

Citi bank warned investors in Australia’s vast coal industry that little could be done to avoid the future loss of value in the face of action on climate change. “If the unburnable carbon scenario does occur, it is difficult to see how the value of fossil fuel reserves can be maintained, so we see few options for risk mitigation.”

Ratings agencies have expressed concerns, with Standard and Poor’s concluding that the risk could lead to the downgrading of the credit ratings of oil companies within a few years.

Steven Oman, senior vice-president at Moody’s, said: “It behoves us as investors and as a society to know the true cost of something so that intelligent and constructive policy and investment decisions can be made. Too often the true costs are treated as unquantifiable or even ignored.”

Jens Peers, who manages €4bn (£3bn) for Mirova, part of €300bn asset managers Natixis, said: “It is shocking to see the report’s numbers, as they are worse than people realise. The risk is massive, but a lot of asset managers think they have a lot of time. I think they are wrong.” He said a key moment will come in 2015, the date when the world’s governments have pledged to strike a global deal to limit carbon emissions. But he said that fund managers need to move now. If they wait till 2015, “it will be too late for them to take action.”

Pension funds are also concerned. “Every pension fund manager needs to ask themselves have we incorporated climate change and carbon risk into our investment strategy? If the answer is no, they need to start to now,” said Howard Pearce, head of pension fund management at the Environment Agency, which holds £2bn in assets.

Stern and Leaton both point to China as evidence that carbon cuts are likely to be delivered. China’s leaders have said its coal use will peak in the next five years, said Leaton, but this has not been priced in. “I don’t know why the market does not believe China,” he said. “When it says it is going to do something, it usually does.” He said the US and Australia were banking on selling coal to China but that this “doesn’t add up”.

Jeremy Grantham, a billionaire fund manager who oversees $106bn of assets, said his company was on the verge of pulling out of all coal and unconventional fossil fuels, such as oil from tar sands. “The probability of them running into trouble is too high for me to take that risk as an investor.” He said: “If we mean to burn all the coal and any appreciable percentage of the tar sands, or other unconventional oil and gas then we’re cooked. [There are] terrible consequences that we will lay at the door of our grandchildren.”

Mathematics Provides a Shortcut to Timely, Cost-Effective Interventions for HIV (Science Daily)

Apr. 15, 2013 — Mathematical estimates of treatment outcomes can cut costs and provide faster delivery of preventative measures.

South Africa is home to the largest HIV epidemic in the world with a total of 5.6 million people living with HIV. Large-scale clinical trials evaluating combination methods of prevention and treatment are often prohibitively expensive and take years to complete. In the absence of such trials, mathematical models can help assess the effectiveness of different HIV intervention combinations, as demonstrated in a new study by Elisa Long and Robert Stavert from Yale University in the US. Their findings appear in the Journal of General Internal Medicine, published by Springer.

Currently 60 percent of individuals in need of treatment for HIV in South Africa do not receive it. The allocation of scant resources to fight the HIV epidemic means each strategy must be measured in terms of cost versus benefit. A number of new clinical trials have presented evidence supporting a range of biomedical interventions that reduce transmission of HIV. These include voluntary male circumcision — now recommended by the World Health Organization and Joint United Nations Programme on HIV/AIDS as a preventive strategy — as well as vaginal microbicides and oral pre-exposure prophylaxis, all of which confer only partial protection against HIV. Long and Stavert show that a combination portfolio of multiple interventions could not only prevent up to two-thirds of future HIV infections, but is also cost-effective in a resource-limited setting such as South Africa.

The authors developed a mathematical model accounting for disease progression, mortality, morbidity and the heterosexual transmission of HIV to help forecast future trends in the disease. Using data specific for South Africa, the authors estimated the health benefits and cost-effectiveness of a “combination approach” using all three of the above methods in tandem with current levels of antiretroviral therapy, screening and counseling.
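The kind of model involved can be sketched with a two-compartment susceptible/infected system stepped forward in time. The rates below are invented for illustration and are not the calibrated South African parameters of Long and Stavert; the efficacy argument stands in for the aggregate effect of a prevention portfolio.

# Toy susceptible/infected sketch of the kind of transmission model
# described above; all rates are invented, NOT the study's parameters.
def simulate(years=10, beta=0.12, mortality=0.08, efficacy=0.0):
    """Euler-step a toy SI model; `efficacy` scales down transmission."""
    S, I = 0.81, 0.19          # population fractions, ~19% prevalence
    dt = 1.0 / 52              # weekly time steps
    for _ in range(int(years / dt)):
        new_inf = (1 - efficacy) * beta * S * I * dt
        S -= new_inf
        I += new_inf - mortality * I * dt
    return I / (S + I)         # prevalence after `years`

print(f"baseline prevalence after 10 years: {simulate():.1%}")
print(f"with a 50%-effective portfolio:     {simulate(efficacy=0.5):.1%}")

The published model additionally accounts for disease progression, mortality and morbidity, as described above, but the cost-effectiveness logic is the same: run the projection once per intervention mix and compare infections averted against cost.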

For each intervention, they calculated the HIV incidence and prevalence over 10 years. At present rates of screening and treatment, the researchers predict that HIV prevalence will decline from 19 percent to 14 percent of the population in the next 10 years. However, they calculate that their combination approach including male circumcision, vaginal microbicides and oral pre-exposure prophylaxis could further reduce HIV prevalence to 10 percent over that time scale — preventing 1.5 million HIV infections over 10 years — even if screening and antiretroviral therapy are kept at current levels. Increasing antiretroviral therapy use and HIV screening frequency in addition could avert more than 2 million HIV infections over 10 years, or 60 percent of the projected total.

The researchers also determined a hierarchy of effectiveness versus cost for these intervention strategies. Where budgets are limited, they suggest money should be allocated first to increasing male circumcision, then to more frequent HIV screening, use of vaginal microbicides and increasing antiretroviral therapy. Additionally, they calculate that omitting pre-exposure prophylaxis from their combination strategy could offer 90 percent of the benefits of treatment for less than 25 percent of the costs.

The authors conclude: “In the absence of multi-intervention randomized clinical or observational trials, a mathematical HIV epidemic model provides useful insights about the aggregate benefit of implementing a portfolio of biomedical, diagnostic and treatment programs. Allocating limited available resources for HIV control in South Africa is a key priority, and our study indicates that a multi-intervention HIV portfolio could avert nearly two-thirds of projected new HIV infections, and is a cost-effective use of resources.”

Journal Reference:

  1. Long, E.F. and Stavert, R.R. Portfolios of biomedical HIV interventions in South Africa: a cost-effectiveness analysis. Journal of General Internal Medicine, 2013. DOI: 10.1007/s11606-013-2417-1

Carbon Dioxide Removal Can Lower Costs of Climate Protection (Science Daily)

Apr. 12, 2013 — Directly removing CO2 from the air has the potential to alter the costs of climate change mitigation. It could allow prolonging greenhouse-gas emissions from sectors like transport that are difficult, and thus expensive, to wean off fossil fuels. And it may help to constrain the financial burden on future generations, a study now published by the Potsdam Institute for Climate Impact Research (PIK) shows. It focuses on the use of biomass for energy generation, combined with carbon capture and storage (CCS). According to the analysis, carbon dioxide removal could be used under certain requirements to alleviate the most costly components of mitigation, but it would not replace the bulk of actual emissions reductions.

(Credit: © Jürgen Fälchle / Fotolia)

“Carbon dioxide removal from the atmosphere allows to separate emissions control from the time and location of the actual emissions. This flexibility can be important for climate protection,” says lead-author Elmar Kriegler. “You don’t have to prevent emissions in every factory or truck, but could for instance plant grasses that suck CO2 out of the air to grow — and later get processed in bioenergy plants where the CO2 gets stored underground.”

In economic terms, this flexibility lowers costs by compensating for the emissions that would be most costly to eliminate. “This means that a phase-out of global emissions by the end of the century — that we would need to hold the 2 degree line adopted by the international community — does not necessarily require to eliminate each and every source of emissions,” says Kriegler. “Decisions whether and how to protect future generations from the risks of climate change have to be made today, but the burden of achieving these targets will increase over time. The costs for future generations can be substantially reduced if carbon dioxide removal technologies become available in the long run.”

Balancing the financial burden across generations

The study now published is the first to quantify this. If bioenergy plus CCS is available, aggregate mitigation costs over the 21st century might be halved. In the absence of such a carbon dioxide removal strategy, costs for future generations rise significantly, up to a quadrupling of mitigation costs in the period of 2070 to 2090. The calculation was carried out using a computer simulation of the economic system, energy markets, and climate, covering a range of scenarios.

Options for carbon dioxide removal from the atmosphere include afforestation and chemical approaches like direct air capture of CO2 or reactions of CO2 with minerals to form carbonates. But the use of biomass for energy generation combined with carbon capture and storage is less costly than the chemical options, as long as sufficient biomass feedstock is available, the scientists point out.

Serious concerns about large-scale biomass use combined with CCS

“Of course, there are serious concerns about the sustainability of large-scale biomass use for energy,” says co-author Ottmar Edenhofer, chief economist of PIK. “We therefore considered the bioenergy-with-CCS option only as an example of the role that carbon dioxide removal could play in climate change mitigation.” The exploitation of bioenergy can conflict with land use for food production or ecosystem protection. To account for sustainability concerns, the study restricts bioenergy production to a medium level, one that could be realized mostly on abandoned agricultural land.

Still, important uncertainties remain: global population growth and changing dietary habits increase the demand for land, while improvements in agricultural productivity decrease it. Furthermore, CCS technology is not yet available for industrial-scale use and, due to environmental concerns, is controversial in countries like Germany. The study nevertheless assumes that it will become available in the near future.

“CO2 removal from the atmosphere could enable humankind to keep the window of opportunity open for low-stabilization targets despite a likely delay in international cooperation, but only under certain conditions,” says Edenhofer. “The risks of scaling up bioenergy use need to be better understood, and safety concerns about CCS have to be thoroughly investigated. Still, carbon dioxide removal technologies are no science fiction and need to be further explored.” In no way should they be seen as a pretext to neglect emissions reductions now, notes Edenhofer. “By far the biggest share of climate change mitigation has to come from a large effort to reduce greenhouse-gas emissions globally.”

Journal Reference:

  1. Elmar Kriegler, Ottmar Edenhofer, Lena Reuster, Gunnar Luderer, David Klein. Is atmospheric carbon dioxide removal a game changer for climate change mitigation? Climatic Change, 2013. DOI: 10.1007/s10584-012-0681-4

Brazilian Indigenous Leader Wins UN ‘Forest Hero’ Award (G1; Globo Natureza)

JC e-mail 4703, April 11, 2013.

Almir Suruí, from Rondônia, has partnered with Google to monitor the forest. He is in Turkey to receive the international title.

Almir Suruí, an indigenous leader from Rondônia, is one of this year’s winners of the “Forest Hero” award, a title conferred by the United Nations.

The official award ceremony was scheduled for Wednesday evening (April 10), local time, in Istanbul, where the UN Forum on Forests, which brings together representatives of 197 countries, is taking place.

This year’s four other “Forest Heroes” are from the United States, Rwanda, Thailand, and Turkey; Almir is the winner for Latin America and the Caribbean. As leader of the Paiter Suruí people, Almir has created several initiatives to protect and develop the Sete de Setembro Indigenous Territory in Rondônia, where he lives.

The best-known project uses the internet to promote his people’s culture and to fight illegal deforestation. Through a partnership with Google and several NGOs, the Suruí have made available to web users a “cultural map” with information about their culture and history.

They also use mobile phones to photograph illegal forest clearing, pinpointing the exact location of the environmental crime with GPS and sending reports to the competent authorities.

Last year, other Brazilians were named “Forest Heroes” by the UN: Paulo Adário, Greenpeace’s Amazon director, and the activist couple José Cláudio Ribeiro and Maria do Espírito Santo, murdered in Pará in May 2011, who were honored posthumously.

Major Depression: Great Success With Pacemaker Electrodes, Small Study Suggests (Science Daily)

Apr. 9, 2013 — Researchers from the Bonn University Hospital implanted pacemaker electrodes into the medial forebrain bundle in the brains of patients suffering from major depression with amazing results: In six out of seven patients, symptoms improved both considerably and rapidly. The method of Deep Brain Stimulation had already been tested on various structures within the brain, but with clearly lesser effect.

The medial forebrain bundle is highlighted in green. (Credit: Volker Arnd Coenen/Uni Freiburg)

The results of this new study have now been published in the international journal Biological Psychiatry.

After months of deep sadness, a first smile appears on a patient’s face. For many years, she had suffered from major depression and had tried several times to end her life. She had spent the past years mostly in a passive state on her couch; even watching TV was too much effort for her. Now this young woman has found her joie de vivre again, and enjoys laughing and travelling. She and six other patients with treatment-resistant depression participated in a study at the Bonn University Hospital involving a novel method for addressing major depression.

Considerable amelioration of depression within days

Prof. Dr. Volker Arnd Coenen, neurosurgeon at the Department of Neurosurgery (Klinik und Poliklinik für Neurochirurgie), implanted electrodes into the medial forebrain bundle in the brains of subjects suffering from major depression, connecting the electrodes to a brain pacemaker. The nerve cells were then stimulated by means of a weak electrical current, a method called Deep Brain Stimulation. Within a matter of days, symptoms such as anxiety, despondence, listlessness and joylessness had improved considerably in six out of seven patients. “Such sensational success, both in terms of the strength of the effects and the speed of the response, has so far not been achieved with any other method,” says Prof. Dr. Thomas E. Schläpfer from the Bonn University Hospital Department of Psychiatry and Psychotherapy (Bonner Uniklinik für Psychiatrie und Psychotherapie).

Central part of the reward circuit

The medial forebrain bundle is a bundle of nerve fibers running from the deep-seated limbic system to the prefrontal cortex. In one particular place the bundle is especially narrow, because the individual nerve fibers lie close together. “This is exactly the location in which we can have maximum effect using a minimum of current,” explains Prof. Coenen, who is now the head of the Freiburg University Hospital’s Department of Stereotactic and Functional Neurosurgery (Abteilung Stereotaktische und Funktionelle Neurochirurgie am Universitätsklinikum Freiburg). The medial forebrain bundle is a central part of a euphoria circuit belonging to the brain’s reward system. Exactly what effect stimulation has on nerve cells is not yet known, but it evidently changes metabolic activity in different brain centers.

Success clearly increased over that of earlier studies

The researchers have already shown in several studies that deep brain stimulation produces an amazing and, given the severity of the symptoms, unexpected degree of amelioration in major depression. In those studies, however, the physicians had implanted the electrodes not into the medial forebrain bundle but into the nucleus accumbens, another part of the brain’s reward system. This had resulted in clear and sustained improvements in about 50 percent of subjects. “But in this new study, our results were even better,” says Prof. Schläpfer. A clear improvement in symptoms was found in 85 percent of patients, instead of the earlier 50 percent. In addition, stimulation was performed with lower current levels, and the effects showed within a few days instead of after weeks.

Method’s long-term success

“Obviously, we have now come closer to a critical structure within the brain that is responsible for major depression,” says the psychiatrist from the Bonn University Hospital. Another cause for optimism among the group of physicians is that, since the study’s completion, an eighth patient has also been treated successfully. The patients have been observed for a period of up to 18 months after the intervention. Prof. Schläpfer reports, “The anti-depressive effect of deep brain stimulation within the medial forebrain bundle has not decreased during this period.” This clearly indicates that the effects are not temporary. The method gives those who suffer from major depression reason to hope, though it will take quite a bit of time for the new procedure to become part of standard therapy.

Journal Reference:

  1. Thomas E. Schlaepfer, Bettina H. Bewernick, Sarah Kayser, Burkhard Mädler, Volker A. Coenen. Rapid Effects of Deep Brain Stimulation for Treatment-Resistant Major Depression. Biological Psychiatry, 2013. DOI: 10.1016/j.biopsych.2013.01.034

Maya Long Count Calendar Calibrated to Modern European Calendar Using Carbon-14 Dating (Science Daily)

Apr. 11, 2013 — The Maya are famous for their complex, intertwined calendric systems, and now one calendar, the Maya Long Count, is empirically calibrated to the modern European calendar, according to an international team of researchers.

Elaborately carved wooden lintel or ceiling from a temple in the ancient Maya city of Tikal, Guatemala, that carries a carving and dedication date in the Maya calendar. (Credit: Courtesy of the Museum der Kulturen)

“The Long Count calendar fell into disuse before European contact in the Maya area,” said Douglas J. Kennett, professor of environmental archaeology, Penn State.

“Methods of tying the Long Count to the modern European calendar used known historical and astronomical events, but when looking at how climate affects the rise and fall of the Maya, I began to question how accurately the two calendars correlated using those methods.”

The researchers found that their new measurements matched the most widely used correlation, the Goodman-Martinez-Thompson (GMT) correlation, initially put forth by Joseph Goodman in 1905 and subsequently modified by others. In the 1950s scientists tested this correlation using early radiocarbon dating, but the large error range left the GMT correlation’s validity in question.

“With only a few dissenting voices, the GMT correlation is widely accepted and used, but it must remain provisional without some form of independent corroboration,” the researchers report in today’s (April 11) issue of Scientific Reports.

A combination of high-resolution accelerator mass spectrometry carbon-14 dates and a calibration using tree growth rates showed the GMT correlation is correct.

The Long Count counts days from a mythological starting point. A date is composed of five components, each a multiplier of a fixed unit: the Bak’tun (144,000 days), the K’atun (7,200 days), the Tun (360 days), the Winal (20 days), and the K’in (1 day), separated in standard notation by dots.
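
In other words, converting a Long Count date to elapsed days is simple positional arithmetic, and the commonly used GMT constant then anchors day zero to Julian Day Number 584,283. A minimal sketch: the unit values come from the article, and the Gregorian conversion uses the standard Fliegel-Van Flandern integer algorithm.

```python
LONG_COUNT_UNITS = (144_000, 7_200, 360, 20, 1)   # Bak'tun, K'atun, Tun, Winal, K'in
GMT_CORRELATION = 584_283                          # Julian Day Number of 0.0.0.0.0

def long_count_to_days(date):
    """Total days since the Long Count's mythological starting point."""
    return sum(mult * unit for mult, unit in zip(date, LONG_COUNT_UNITS))

def jdn_to_gregorian(jdn):
    """Julian Day Number -> proleptic Gregorian (Fliegel & Van Flandern, 1968)."""
    l = jdn + 68_569
    n = 4 * l // 146_097
    l -= (146_097 * n + 3) // 4
    i = 4_000 * (l + 1) // 1_461_001
    l += 31 - 1_461 * i // 4
    j = 80 * l // 2_447
    day = l - 2_447 * j // 80
    l = j // 11
    return 100 * (n - 49) + i + l, j + 2 - 12 * l, day   # (year, month, day)

# 13.0.0.0.0 -- the Bak'tun turn widely hyped in December 2012.
days = long_count_to_days((13, 0, 0, 0, 0))
print(jdn_to_gregorian(GMT_CORRELATION + days))   # -> (2012, 12, 21)
```

Run on 13.0.0.0.0, the arithmetic recovers the widely publicized date of December 21, 2012, which is a quick sanity check that the correlation constant is doing its job.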

Archaeologists want to place the Long Count dates into the European calendar so there is an understanding of when things happened in the Maya world relative to historic events elsewhere. Correlation also allows the rich historical record of the Maya to be compared with other sources of environmental, climate and archaeological data calibrated using the European calendar.

The samples came from an elaborately carved wooden lintel or ceiling from a temple in the ancient Maya city of Tikal, Guatemala, that carries a carving and dedication date in the Maya calendar. This same lintel was one of three analyzed in the previous carbon-14 study.

Researchers measured tree growth by tracking annual changes in calcium uptake by the trees, which is greater during the rainy season.

The amount of carbon-14 in the atmosphere is incorporated into a tree’s incremental growth. Atmospheric carbon-14 changes through time, and during the Classic Maya period oscillated up and down.

The researchers took four samples from the lintel and used the annually fluctuating calcium concentrations evident in the tree’s incremental growth to determine the true time separation between the samples, by counting the number of elapsed rainy seasons. They then used this information to fit the four radiocarbon dates to the wiggles in the calibration curve. Wiggle-matching the carbon-14 dates provided a more accurate age for linking the Maya Long Count dates to the European calendars.
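
Wiggle-matching works because the known spacing between samples rigidly links their radiocarbon ages: the whole set can only slide along the calibration curve as a unit, which pins the date down far more tightly than any single measurement could. The toy sketch below illustrates the idea with a synthetic calibration curve and made-up numbers; it is not the study’s code, and real work would use the IntCal calibration data.

```python
# A toy illustration of wiggle-matching, not the study's code or data: four
# samples whose spacing is fixed by counted rainy seasons are slid along a
# synthetic calibration curve, and the best calendar date minimizes the misfit.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(600, 800)                 # candidate calendar years (AD)
calib = 1300 + 40 * np.sin(years / 15.0)    # made-up 14C-age curve with wiggles

offsets = np.array([0, 12, 25, 40])         # ring-counted years between samples
true_start = 700
measured = calib[true_start - 600 + offsets] + rng.normal(0, 8, size=4)
sigma = 10.0                                # assumed 14C measurement error

def misfit(start):
    """Chi-square mismatch if the first sample grew in calendar year `start`."""
    ages = calib[start - 600 + offsets]
    return np.sum(((measured - ages) / sigma) ** 2)

candidates = range(600, 760)                # keep start + 40 inside the curve
best = min(candidates, key=misfit)
print(f"best-fit start year: AD {best} (true: AD {true_start})")
```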

These calculations were further complicated by known differences in atmospheric radiocarbon content between the northern and southern hemispheres.

“The complication is that radiocarbon concentrations differ between the southern and northern hemisphere,” said Kennett. “The Maya area lies on the boundary, and the atmosphere is a mixture of the southern and northern hemispheres that changes seasonally. We had to factor that into the analysis.”

The researchers’ results mirror the GMT European date correlations, indicating that the GMT was on the right track for linking the Long Count and European calendars.

Events recorded in various Maya locations “can now be harmonized with greater assurance to other environmental, climatic and archaeological datasets from this and adjacent regions and suggest that climate change played an important role in the development and demise of this complex civilization,” the researchers wrote.

Journal Reference:

  1. Douglas J. Kennett, Irka Hajdas, Brendan J. Culleton, Soumaya Belmecheri, Simon Martin, Hector Neff, Jaime Awe, Heather V. Graham, Katherine H. Freeman, Lee Newsom, David L. Lentz, Flavio S. Anselmetti, Mark Robinson, Norbert Marwan, John Southon, David A. Hodell, Gerald H. Haug. Correlating the Ancient Maya and Modern European Calendars with High-Precision AMS 14C Dating. Scientific Reports, 2013; 3. DOI: 10.1038/srep01597

In Big Data, We Hope and Distrust (Huffington Post)

By Robert Hall

Posted: 04/03/2013 6:57 pm

“In God we trust. All others must bring data.” — W. Edwards Deming, statistician, quality guru

Big data helped reelect a president, helped find Osama bin Laden, and contributed to the meltdown of our financial system. We are in the midst of a data revolution in which social media introduces new terms like Arab Spring, Facebook Depression and Twitter anxiety that reflect a new reality: Big data is changing the social and relationship fabric of our culture.

We spend hours installing and learning how to use the latest versions of our ever-expanding technology while enduring a never-ending battle to protect our information. Then we labor while developing practices to rid ourselves of technology — rules for turning devices off during meetings or movies, legislation to outlaw texting while driving, restrictions in classrooms to prevent cheating, and scheduling meals or family time where devices are turned off. Information and technology: We love it, hate it, can’t live with it, can’t live without it, use it voraciously, and distrust it immensely. I am schizophrenic and so am I.

Big data is not only big but growing rapidly. According to IBM, we create 2.5 quintillion bytes a day and that “ninety percent of the data in the world has been created in the last two years.” Vast new computing capacity can analyze Web-browsing trails that track our every click, sensor signals from every conceivable device, GPS tracking and social network traffic. It is now possible to measure and monitor people and machines to an astonishing degree. How exciting, how promising. And how scary.

This is not our first data rodeo. The early stages of the customer relationship management movement were filled with hope and with hype. Large data warehouses were going to provide the kind of information that would make companies masters of customer relationships. There were just two problems. First, getting the data out of the warehouse wasn’t nearly as hard as getting it into the person or device interacting with the customers in a way that added value, built trust and expanded relationships. We seem to always underestimate the speed of technology and overestimate the speed at which we can absorb it and socialize around it.

Second, unfortunately the customers didn’t get the memo and mostly decided in their own rich wisdom they did not need or want “masters.” In fact as providers became masters of knowing all the details about our lives, consumers became more concerned. So while many organizations were trying to learn more about customer histories, behaviors and future needs — customers and even their governments were busy trying to protect privacy, security, and access. Anyone attempting to help an adult friend or family member with mental health issues has probably run into well-intentioned HIPAA rules (regulations that ensure privacy of medical records) that unfortunately also restrict the ways you can assist them. Big data gives and the fear of big data takes away.

Big data does not big relationships make. Over the last 20 years, as our data keeps getting stronger, our customer relationships keep getting weaker. Eighty-six percent of consumers trust corporations less than they did five years ago. Customer retention across industries has fallen about 30 percent in recent years. Is it actually possible that we have unwittingly contributed to the undermining of our customer relationships? How could that be? For one thing, as companies keep getting better at targeting messages to specific groups, those groups keep getting better at blocking those messages. As usual, the power to resist trumps the power to exert.

No matter how powerful big data becomes, if it is to realize its potential, it must build trust on three levels. First, customers must trust our intentions. Data that can be used for us can also be used against us. There is growing fear that institutions will become part of a “surveillance state.” While organizations have gone to great lengths to promote the protection of our data, the numbers reflect a fair amount of doubt. For example, according to MainStreet, “87 percent of Americans do not feel large banks are transparent and 68 percent do not feel their bank is on their side.”

Second, customers must trust our actions. Even if they trust our intentions, they might still fear that our actions put them at risk. Our private information can be hacked, then misused and disclosed in damaging and embarrassing ways. After the Sandy Hook tragedy, a New York newspaper published the names and addresses of over 33,000 licensed gun owners along with an interactive map that showed exactly where they lived. In response, the names and addresses of the newspaper’s editor and writers were published online, along with information about their children. No one, including retired judges, law enforcement officers and FBI agents, expected their private information to be published in the midst of a very high-decibel controversy.

Third, customers must trust the outcome — that sharing data will benefit them. Even with positive intentions and constructive actions, the results may range from disappointing to damaging. Most of us have provided email addresses or other contact data, around a customer service issue or the like, and then started receiving email, phone or online solicitations. I know a retired executive who helps hard-to-hire people. She spent one evening surfing the Internet to research expunging criminal records for released felons. Years later, Amazon still greets her with books targeted to the felon it believes she is. Even with opt-out options, we feel used. Or we provide specific information, only to have to repeat it in the next transaction or interaction, never getting the hoped-for benefit of saving our time.

It will be challenging to grow trust at anywhere near the rate we grow the data. Information develops rapidly; competence and trust develop slowly. Investing heavily in big data while scrimping on trust will have the opposite of the desired effect. To quote Dolly Parton, who knows a thing or two about big: “It costs a lot of money to look this cheap.”

How Big Could a Man-Made Earthquake Get? (Popular Mechanics)

Scientists have found evidence that wastewater injection induced a record-setting quake in Oklahoma two years ago. How big can a man-made earthquake get, and will we see more of them in the future?

By Sarah Fecht – April 2, 2013 5:00 PM


Hydraulic fracking drilling illustration. Brandon Laufenberg/Getty Images

In November 2011, a magnitude-5.7 earthquake rattled Prague, Okla., and was felt in 16 other states. It flattened 14 homes and many other buildings, injured two people, and set the record as the state’s largest recorded earthquake. And according to a new study in the journal Geology, the event can also claim the title of Largest Earthquake That’s Ever Been Induced by Fluid Injection.

In the paper, a team of geologists pinpoints the quake’s starting point at less than 200 meters (about 650 feet) from an injection well where wastewater from oil drilling was being pumped into the ground at high pressure. At magnitude 5.7, the Prague earthquake was about 10 times stronger than the previous record holder: the magnitude-4.8 Rocky Mountain Arsenal earthquake in Colorado in 1967, caused by the U.S. Army injecting a deep well with 148,000 gallons per day of fluid waste from chemical-weapons testing. So how big can these man-made earthquakes get?
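
That "about 10 times stronger" follows from the logarithmic magnitude scale: each whole magnitude step is a tenfold increase in ground-motion amplitude, and roughly a 32-fold increase in radiated energy. A quick back-of-envelope check, using the standard relations rather than any figures from the Geology paper:

```python
# Back-of-envelope check on "about 10 times stronger," using the standard
# logarithmic magnitude relations (illustrative, not from the Geology paper).
m_prague, m_arsenal = 5.7, 4.8
amplitude_ratio = 10 ** (m_prague - m_arsenal)        # ground-motion amplitude
energy_ratio = 10 ** (1.5 * (m_prague - m_arsenal))   # radiated seismic energy
print(f"amplitude: ~{amplitude_ratio:.0f}x, energy: ~{energy_ratio:.0f}x")
# -> amplitude: ~8x (roughly "10 times"), energy: ~22x
```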

The short answer is that scientists don’t really know yet, but it’s possible that fluid injection could cause some big ones on very rare occasions. “We don’t see any reason that there should be any upper limit for an earthquake that is induced,” says Bill Ellsworth, a geophysicist with the U.S. Geological Survey, who wasn’t involved in the new study.

As with natural earthquakes, most man-made earthquakes have been small to moderate in size, and most are detected only by seismometers. Larger quakes are orders of magnitude rarer than small ones: for every 1000 magnitude-1.0 earthquakes that occur, expect to see 100 magnitude-2.0s, 10 magnitude-3.0s, just 1 magnitude-4.0, and so on. And just as with natural earthquakes, the strength of an induced earthquake depends on the size of the nearby fault and the amount of stress acting on it. Some faults just don’t have the capacity to cause big earthquakes, whether natural or induced.
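
The tenfold drop-off per magnitude step described above is the empirical Gutenberg-Richter relation, log10(N) = a − b·M, with b close to 1. A minimal sketch with illustrative values (a = 4 reproduces the counts quoted above):

```python
# The tenfold drop-off per magnitude step is the Gutenberg-Richter relation,
# log10(N) = a - b * M, with b close to 1. Values here are illustrative;
# a = 4 reproduces the counts quoted above.
def expected_count(magnitude, a=4.0, b=1.0):
    """Expected number of earthquakes near a given magnitude, per unit time."""
    return 10 ** (a - b * magnitude)

for m in (1.0, 2.0, 3.0, 4.0):
    print(f"magnitude {m}: ~{expected_count(m):g} quakes")  # 1000, 100, 10, 1
```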

How Do Humans Trigger Earthquakes?

Faults are subject to two major kinds of stress: shear stress, which makes the two sides slide past each other along the fault line, and normal stress, which pushes the two sides together. Usually the normal stress keeps the fault from slipping sideways. But when a fluid is injected into the ground, as in Prague, it can reduce the normal stress and make it easier for the fault to slip. It’s as if you had a tall stack of books on a table, Ellsworth says: if you take half the books away, it’s easier to slide the stack across the table.

“Water increases the fluid pressure in pores of rocks, which acts against the pressure across the fault,” says Geoffrey Abers, a Columbia University geologist and one of the new study’s authors. “By increasing the fluid pressure, you’re decreasing the strength of the fault.”
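
In textbook terms this is the Mohr-Coulomb failure criterion: a fault slips when shear stress exceeds the frictional resistance μ(σn − p), where p is the pore-fluid pressure. The sketch below is illustrative only; the friction coefficient and stress values are made-up numbers, not figures from the study.

```python
# A textbook Mohr-Coulomb sketch of the mechanism Abers describes: injected
# fluid raises pore pressure, lowering the effective normal stress that
# clamps the fault. All numbers (MPa) are made up for illustration.
def fault_slips(shear, normal, pore_pressure, friction=0.6, cohesion=0.0):
    """Slip occurs when shear stress exceeds frictional resistance."""
    effective_normal = normal - pore_pressure   # pore pressure "unclamps" the fault
    return shear > cohesion + friction * effective_normal

# Same fault, same tectonic stresses; only the fluid pressure changes.
print(fault_slips(30, 60, pore_pressure=5))     # False: fault stays locked
print(fault_slips(30, 60, pore_pressure=15))    # True: injection triggers slip
```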

A similar mechanism may be behind earthquakes induced by large water reservoirs. In those instances, the artificial lake behind a dam causes water to seep into the pore spaces in the ground. In 1967, India’s Koyna Dam caused a magnitude-6.5 earthquake that killed 177 people, injured more than 2000, and left 50,000 homeless. Unprecedented seasonal fluctuations in the water level behind a dam in Oroville, Calif., are believed to be behind the magnitude-6.1 earthquake that occurred there in 1975.

Extracting a fluid from the ground can also contribute to triggering a quake. “Think about filling a balloon with water and burying it at the beach,” Ellsworth says. “If you let the water out, the sand will collapse inward.” Similarly, when humans remove large amounts of oil and natural gas from the ground, it can put additional stress on a fault line. “In this case it may be the shear stresses that are being increased, rather than normal stresses,” Ellsworth says.

Take the example of the Gazli gas field in Uzbekistan, thought to be located in a seismically inactive area when drilling began in 1962. As drillers removed the natural gas, the pressure in the gas field dropped from 1030 psi in 1962 to 515 psi in 1976, then down to 218 psi in 1985. Meanwhile, three large magnitude-7.0 earthquakes struck: two in 1976 and one in 1984. Each quake had an epicenter within 12 miles of Gazli and caused a surface uplift of some 31 inches. Because the quakes occurred in Soviet-era Uzbekistan, information about their exact locations, magnitudes, and causes is not available. However, a report by the National Research Council concludes that “observations of crustal uplift and the proximity of these large earthquakes to the Gazli gas field in a previously seismically quiet region strongly suggest that they were induced by hydrocarbon extraction.” Extraction of oil is believed to have caused at least three big earthquakes in California, with magnitudes of 5.9, 6.1, and 6.5.

Some people worry that hydraulic fracturing, or fracking — wherein high-pressure fluids are used to crack through rock layers to extract oil and natural gas — will lead to an increased risk of earthquakes. However, the National Research Council report points out that there are tens of thousands of hydrofracking wells in existence today, and there has been only one case in which a “felt” tremor was linked to fracking. That was a magnitude-2.3 earthquake in Blackpool, England, in 2011, which didn’t cause any significant damage. Although scientists have known since the 1920s that humans can trigger earthquakes, experts caution that it’s not always easy to determine whether a specific event was induced.

Are Human Activities Making Quakes More Common?

Human activities have been linked to increased earthquake frequencies in certain areas. For instance, researchers have shown a strong correlation between the volume of fluid injected into the Rocky Mountain Arsenal well and the frequency of earthquakes in that area.

Geothermal-energy sites can also induce many earthquakes, possibly due to pressure, heat, and volume changes. The Geysers in California is the largest geothermal field in the U.S., generating 725 megawatts of electricity using steam from deep within the earth. Before The Geysers began operating in 1960, seismic activity was low in the area. Now the area experiences hundreds of earthquakes per year. Researchers have found correlations between the volume of steam production and the number of earthquakes in the region. In addition, as the area of the steam wells increased over the years, so did the spatial distribution of earthquakes.

Whether human activity is increasing the magnitude of earthquakes, however, is more of a gray area. When it comes to injection wells, evidence suggests that earthquake magnitudes rise along with the volume of injected wastewater, and possibly with injection pressure and injection rate as well, according to a statement from the Department of the Interior.

The vast majority of earthquakes induced at The Geysers are microseismic events — too small for humans to feel. However, researchers from Lawrence Berkeley National Laboratory note that magnitude-4.0 earthquakes, which can cause minor damage, seem to be increasing in frequency.

The new study says that though earthquakes with a magnitude of 5.0 or greater are rare east of the Rockies, scientists have observed an 11-fold increase between 2008 and 2011, compared with 1976 through 2007. But the increase hasn’t been tied to human activity. “We do not really know what is causing this increase, but it is remarkable,” Abers says. “It is reasonable that at least some may be natural.”

Historic First Weather Satellite Image (Discovery)

By Tom Yulsman | April 2, 2013 7:46 pm

The first image ever transmitted back to Earth from a weather satellite. It was captured by TIROS-1. (Image: CIMSS Satellite Blog)

The awesome folks over at the satellite blog of the Cooperative Institute for Meteorological Satellite Studies posted this historic image yesterday — and I just couldn’t let it go without giving it more exposure.

The first weather satellite image ever, it was captured by TIROS-1 on April 1, 1960 — meaning yesterday was the 53rd anniversary of the event.

Okay, that may not be as significant as, say, the 50th anniversary was. But this is still a great opportunity to see how far we’ve come with remote sensing of the home planet.

On the same day that the satellite sent back this image, the U.S. Census Bureau determined that the resident population of the United States was 179,245,000. As I write this post, the bureau estimates the population to be 315,602,806. (By the time you read this, the population will be even larger!)

Thanks in part to the pioneering efforts of TIROS-1, and the weather satellites that followed, today we have access to advance warning of extreme events like hurricanes — a capability that has saved many lives.

“TIROS” stands for Television Infrared Observation Satellite Program. Here’s how NASA describes its mission:

The TIROS Program . . . was NASA’s first experimental step to determine if satellites could be useful in the study of the Earth. At that time, the effectiveness of satellite observations was still unproven. Since satellites were a new technology, the TIROS Program also tested various design issues for spacecraft: instruments, data and operational parameters. The goal was to improve satellite applications for Earth-bound decisions, such as “should we evacuate the coast because of the hurricane?”.

Here is the second image taken by TIROS-1 — 53 years ago today:

TIROS-1 transmitted a second weather image on April 2, 1960, 53 years ago today.

Head over to the CIMSS satellite blog for more details. The post there includes spectacular comparison images from the Suomi NPP satellite of the same general area: Maine and the Canadian Maritime provinces. One of them is a “visual image at night,” meaning it was shot under moonlight. You can see the sparkling lights of cities.

Futuristic predictions from 1988 LA Times Magazine come true… mostly (Singularity Hub)


Posted: 03/28/13 8:52 AM


In 2013, a day in the life of a Los Angeles family of four is an amazing testament to technological progress and the idealistic society that can be achieved… or at least that’s what the Los Angeles Times Magazine was hoping for 25 years ago. Back in April 1988, the magazine ran a special cover story called “L.A. 2013” and presented what a typical day would be like for a family living in the city.

The author of the story, Nicole Yorkin, spoke with over 30 experts and futurists to forecast daily life in 2013 and then wove their predictions into a story akin to those “World of Tomorrow” MGM cartoons from the mid-20th century. But unlike the cartoons, which often included far-fetched technologies for humor, what’s most remarkable about the 1988 article is just how many of the predictions have actually come to pass, allowing some leeway in how accurately the future can be imagined.

For anyone considering what will happen in the next 25 years, the article is worth a read as it serves as an amazing window into how well the future can be predicted in addition to what technology is able to achieve in a short period of time.


Just consider the section on ‘smart cars,’ speculated to be “smaller, more efficient, more automated and more personalized” than the cars of 25 years ago. While experts envisioned that cars would have Transformer-like abilities to change from a sports car to a beach buggy, they correctly predicted that the key development in automobile technology would be “a central computer in the car that will control a number of devices.” Furthermore, cars were expected to be equipped with “electronic navigation or map systems,” that is, GPS. Although modern cars don’t have a ‘sonar shield’ that would cause a car to slow down when it came closer to another, parking sensors are becoming common and rearview cameras may soon be required by law.

Though the article doesn’t explicitly predict the Internet and all its consequences, computers were implicit in some of the predictions, such as telecommuting, virtual shopping, smart cards for health monitoring, a personalized ‘home newspaper,’ and video chatting. Integrated computers were also expected in the form of smart appliances, wall-to-ceiling computer displays in classrooms, and 3D video conferencing. These technologies exist today thanks to the networked computer revolution that was only in its infancy in 1988.


‘The Ultimate Appliance’ is the mobile robot expected to be a ‘fixture’ in today’s homes.

But of all the technologies expected to be part of daily life in 2013, the biggest miss by the article comes with robots.

In fact, the mobile robot “Billy Rae” is depicted as an integral component of the household, much like Rosie the Robot was in The Jetsons. In the story, the family communicates with Billy Rae naturally as the mother reads a list of chores for cleaning the house and preparing meals. There’s even a pet canine robot named Max that helps the son learn to read and do math. The robots aren’t necessarily depicted as being super intelligent, but they were still expected to be vital, even being referred to as the “ultimate appliance.”

In recent years, great strides have been made with robots and artificial intelligence, but we are years away from the maid-like robot that the article hoped for. We’re all familiar with cleaning robots like the Roomba, and hospitals are starting to utilize healthcare robots. Personal assistants like Siri show that we’re getting closer to the day when people and computers can communicate verbally. But bringing all these technologies together is one of the most challenging problems to be solved, even with the high expectations and huge market potential that these bots enjoy.

In light of this, it’s interesting to compare the predictions in this article to those in French illustrations drawn around 1900, which also include a fair share of robotic automation.

The piece is peppered with utopian speculation, but already on the radar were concerns about the shifting job market, increasing pollution, and the need for quality schooling, public transportation, and affordable housing, issues that have reached or are nearing crisis levels. It’s comforting to know that many of the problems that modern cities face were understood fairly well a quarter of a century ago, but it is sobering to recognize how slow technologies have been, in some cases, at addressing these problems.

Perhaps the greatest lesson from reading the article is that few of the predictions are completely wrong, but the timescale was ambitious. Almost all of the technologies described will get here sooner or later. The real issue then is, what is preventing rapid innovation or broad-scale adoption of technologies?

Not surprisingly, the answers today are the same as they were 25 years ago: time and money.


[images: kla4067/Flickr, LA Times]

Brain scans can now tell who you’re thinking about (Singularity Hub)


Posted: 03/23/13 7:48 AM


[Source: Listal]

Beware stalkers, these neuroscientists can tell who you’re thinking of. Or, at least, the kind of personality he or she might have.

As a social species humans are highly attuned to the behavior of others around them. It’s a survival mechanism, helping us to safely navigate the social world. That awareness involves both evaluating people and predicting how they will behave in different situations in the future (“Uh oh, don’t get him started!”). But just how does the brain represent another person’s personality?

To answer this question, a group of scientists at Cornell’s College of Human Ecology (whatever that means) used functional magnetic resonance imaging (fMRI) to measure neuronal activity while people thought about different types of personalities. The 19 participants – all young adults – learned about four protagonists with considerably different personalities, varying in agreeableness (e.g., “Likes to cooperate with others”) and extraversion (“Is sometimes shy”). They were then presented with different scenarios (such as sitting on a bus with no empty seats and watching an elderly person get on) and asked to imagine how each of the four protagonists would react.


Varying degrees of a person’s deemed “agreeableness” and “extraversion” combine to produce different brain activation patterns in the brain. [Source: Cerebral Cortex]

The study’s lead author, Nathan Spreng, said they were “shocked” when they saw the results. The brain scans revealed that each of the four distinct personalities elicited a distinct activity pattern in the medial prefrontal cortex, an area at the front of the brain known to be involved in decision making. In essence, the researchers had succeeded in extracting the mental pictures – the personalities of others – that people were thinking of. The study was published in the March 5 issue of Cerebral Cortex.
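
The decoding step here is multi-voxel pattern analysis: each imagined personality evokes a reproducible spatial pattern of activity, and a classifier matches new scans against those patterns. The sketch below is a deliberately simplified illustration run on simulated data; it is not the authors’ pipeline, and every number and name in it is made up.

```python
# Simplified sketch of this kind of pattern decoding on simulated data (not
# the authors' pipeline). Each personality gets a template pattern of mPFC
# voxel activity, and new trials are matched to the most-correlated template.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 200
personalities = ["agreeable extravert", "agreeable introvert",
                 "disagreeable extravert", "disagreeable introvert"]
templates = {p: rng.normal(size=n_voxels) for p in personalities}

def simulate_trial(personality, noise=1.0):
    """A noisy activity pattern evoked by imagining this protagonist."""
    return templates[personality] + rng.normal(scale=noise, size=n_voxels)

def decode(pattern):
    """Guess the imagined personality: highest template correlation wins."""
    scores = {p: np.corrcoef(pattern, t)[0, 1] for p, t in templates.items()}
    return max(scores, key=scores.get)

hits = sum(decode(simulate_trial(p)) == p
           for p in personalities for _ in range(25))
print(f"decoding accuracy: {hits}/100 (chance would be ~25)")
```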

Sizing up another person’s personality, or thinking what they’re thinking, is unique to social animals, and doing so was until recently thought to be uniquely human. But there’s now reason to believe the network involved – called the ‘default network’ – is a fundamental feature of social mammals in general. As Spreng explained in an email, “Macaque [monkeys] clearly have a similar network, observable even in the rat. All of these mammalian species are highly social.”

The fact that the mental snapshot of others was seen in the neurons of the medial prefrontal cortex means the current study may have implications for autism, Spreng said in a Cornell University news release. “Prior research has implicated the anterior mPFC in social cognition disorders such as autism, and our results suggest people with such disorders may have an inability to build accurate personality models. If further research bears this out, we may ultimately be able to identify specific brain activation biomarkers not only for diagnosing such diseases, but for monitoring the effects of interventions.”

Previous work has shown that brain scans can tell us a lot about what a person is thinking. With an array of electrodes placed directly on the brain, researchers were able to decode specific words that people were thinking. In another experiment, fMRI scans of the visual cortex were used to reconstruct movie trailers that participants were watching.

Much of neuroscience explores how the brain processes the sensory information that guides us through our physical environment. But, for many species, navigating the social environment can be just as important to survival. “For me, an important feature of the work is that our emotions and thoughts about other people are felt to be private experiences,” Spreng said. “In our life, we may choose to share our thoughts and feelings with peers, friends and loved ones. However, [thoughts and feelings] are also physical and biological processes that can be observed. Considering how important our social world is, we know very little about the brain processes that support social knowledge. The objective of this work is to understand the physical mechanisms that allow us to have an inner world, and a part of that is how we represent other people in our mind.”

Everybody Knows. Climate Denialism has peaked. Now what are we going to do? (EcoEquity)

– Tom Athanasiou (toma@ecoequity.org).  April 2, 2013.

It was never going to be easy to face the ecological crisis.  Even back in the 1970s, before climate took center stage, it was clear that we the prosperous were walking far too heavily.  And that “environmentalism,” as it was called, was only going to be a small beginning.  But it was only when the climate crisis pushed fossil energy into the spotlight that the real stakes were widely recognized.  Fossil fuels are the meat and potatoes of industrial civilization, and the need to rapidly and radically reduce their emissions cut right through to the heart of the great American dream.  And the European dream.  And, inevitably, the Chinese dream as well.

Decades later, 81% of global energy is still supplied by the fossil fuels: coal, gas, and oil.[1]  And though the solar revolution is finally beginning, the day is late.  The Arctic is melting, and, soon, as each year the northern ocean lies bare beneath the summer sun, the warming will accelerate.  Moreover, our plight is becoming visible.  We have discovered, to our considerable astonishment, that most of the fossil fuel on the books of our largest corporations is “unburnable” – in the precise sense that, if we burn it, we are doomed.[2]  Not that we know what to do with this rather strange knowledge.  Also, even as China rises, it’s obvious that it’s not the last in line for the promised land.  Billions of people, all around the world, watch the wealthy on TV, and most all of them want a drink from the well of modern prosperity.  Why wouldn’t they?  Life belongs to us all, as does the Earth.

The challenge, in short, is rather daunting.

The denial of the challenge, on the other hand, always came ready-made.  As Francis Bacon said so long ago, “what a man would rather were true, he more readily believes.”  And we really did want to believe that ours was still a boundless world.  The alternative – an honest reckoning – was just too challenging.  For one thing, there was no obvious way to reconcile the Earth’s finitude with the relentless expansion of the capitalist market.  And as long as we believed in a world without limits, there was no need to see that economic stratification would again become a fatal issue.  Sure, our world was bitterly riven between haves and have-nots, but this problem, too, would fade in time.  With enough growth – the universal balm – redistribution would never be necessary.  In time, every man would be a king.

The denial had many cheerleaders.  The chemical-company flacks who derided Rachel Carson as a “hysterical woman” couldn’t have known that they were pioneering a massive trend.  Also, and of course, big money always has plenty of mouthpieces.  But it’s no secret that, during the 20th Century, the “engineering of consent” reached new levels of sophistication.  The composed image of benign scientific competence became one of its favorite tools, and somewhere along the way tobacco-industry science became a founding prototype of anti-environmental denialism.  On this front, I’m happy to say that the long and instructive history of today’s denialist pseudo-science has already been expertly deconstructed.[3]  Given this, I can safely focus on the new world, the post-Sandy world of manifest climatic disruption in which the denialists have lost any residual aura of scientific legitimacy, and have ceased to be a decisive political force.  A world in which climate denialism is increasingly seen, and increasingly ridiculed, as the jibbering of trolls.

To be clear, I’m not claiming that the denialists are going to shut up anytime soon.  Or that they’ll call off their suicidal, demoralizing campaigns.  Or that their fogs and poisons are not useful to the fossil-fuel cartel.  But the battle of the science is over, at least as far as the scientists are concerned.  And even on the street, hard denialism is looking pretty ridiculous.  To be sure, the core partisans of the right will fight on, for the win and, of course, for the money.[4]  And they’ll continue to have real weight too, for just as long as people do not believe that life beyond carbon is possible.  But for all this, their influence has peaked, and their position is vulnerable.  They are – and visibly now – agents of a mad and dangerous ideology.  They are knaves, and often they are fools.[5]

As for the rest of us, we can at least draw conclusions, and make plans.

As bad as the human prospect may be – and it is quite bad – this is not “game over.”  We have the technology we need to save ourselves, or most of it in any case; and much of it is ready to go.  Moreover, the “clean tech” revolution is going to be disruptive indeed.  There will be cascades of innovation, delivering opportunities of all kinds, all around the world.  Also, our powers of research and development are strong.  Also, and contrary to today’s vogue for austerity and “we’re broke” political posturing, we have the money to rebuild, quickly and on a global scale.  Also, we know how to cooperate, at least when we have to.  All of which is to say that we still have options.  We are not doomed.

But we are in extremely serious danger, and it is too late to pretend otherwise.  So allow me to tip my hand by noting Jorgen Randers’ new book, 2052: A Global Forecast for the Next Forty Years.[6]  Randers is a Norwegian modeler, futurist, professor, executive, and consultant who made his name as co-author of 1972’s landmark The Limits to Growth.  Limits, of course, was a global blockbuster; it remains the best-selling environmental title of all time.  Also, Limits has been relentlessly ridiculed (the early denialists cut their teeth by distorting it[7]), so it must be said that – very much contrary to the mass-produced opinions of the denialist age – its central, climate-related projections are holding up depressingly well.[8]

By 2012 (when he published 2052) Randers had decided to step away from the detached exploration of multiple scenarios that was the methodological core of Limits, and to make actual predictions.  After a lifetime of frustrated efforts, these predictions are vivid, pessimistic and bitter.  In a nutshell, Randers doesn’t expect anything beyond what he calls “progress as usual,” and while he expects it to yield a “light green” buildout (e.g., solar on a large scale) he doesn’t think it will suffice to stabilize the climate system.  Such stabilization, he grants, is still possible, but it would require concerted global action on a scale that neither he nor Dennis Meadows, the leader of the old Limits team, sees on today’s horizon.  Let’s call that kind of action global emergency mobilization.  Meadows, when he peers forward, sees instead “many decades of uncontrolled climatic disruption and extremely difficult decline.”[9]  Randers is more precise, and predicts that we will by 2052 wake to find ourselves on a dark and frightening shore, knowing full well that our planet is irrevocably “on its way towards runaway climate change in the last third of the twenty-first century.”

This is an extraordinary claim, and it requires extraordinary evidence.[10]  Such evidence, unfortunately, is readily available, but for the moment let me simply state the public secret of this whole discussion.  To wit: we (and I use this pronoun advisedly) can still avoid a global catastrophe, but it’s not at all obvious that we will do so.  What is obvious is that stabilizing the global climate is going to be very, very hard.  Which is a real problem, because we don’t do hard anymore.  Rather, when confronted with a serious problem, we just do what we can, hoping that it will be enough and trying our best not to offend the rich.  In truth, and particularly in America, we count ourselves lucky if we can manage governance at all.

This essay is about climate politics after legitimate skepticism.  Climate politics in a world where, as Leonard Cohen put it, “everybody knows.”  What does this mean?  In the first place, it means that we’ve reached the end of what might be called “environmentalism-as-usual.”  This point is widely understood and routinely granted, as when people say something like “climate is not a merely environmental problem,” but my concern is a more particular one.  As left-green writer Eddie Yuen astutely noted in a recent book on “catastrophism,” the problems of the environmental movement are to a very large degree rooted in “the pairing of overwhelmingly bleak analysis with inadequate solutions.”[11]  This is exactly right.

The climate crisis demands a “new environmentalism,” and such a thing does seem to be emerging.  Its final shape is unknowable, but one thing is certain – the environmentalism that we need will only exist when its solutions and strategies stand up to its own analyses.  The problem is that this requires us to take our “overwhelmingly bleak” analyses straight, rather than soft-pedaling them so that our “inadequate solutions” might look good.  Pessimism, after all, is closely related to realism.  It cannot just be wished away.

Soft-pedaling, alas, has long been standard practice, on both the scientific and the political sides of the climate movement.  Examples abound, but the best would have to be the IPCC itself, the U.N.’s Intergovernmental Panel on Climate Change.  The world’s premier climate-science clearinghouse, the IPCC is often attacked from the right, and has developed a shy and reticent culture.  Even more importantly, though, and far more rarely noted, is that the IPCC is conservative by definition and by design.[12]  It almost has to be conservative to do its job, which is to herd the planet’s decision makers towards scientific realism.  The wrinkle is that, at this point, this isn’t even close to being good enough, not at least in the larger scheme.  At this point, we need strategic realism as well as baseline scientific realism, and it demands a brutal honesty in which underlying scientific and political truths are clearly drawn and publicly expressed.

Yet when it comes to strategic realism, we balk.  The first impulse of the “messaging” experts is always to repeat their perennial caution that sharp portraits of the danger can be frightening, and disempowering, and thus lead to despair and passivity.  This is an excellent point, but it’s only the beginning of the truth, not the end.  The deeper problem is that the physical impacts of climate disruption – the destruction and the suffering – will continue to escalate.  “Superstorm Sandy” was bad, but the future will be much worse.  Moreover, the most severe suffering will be far away, and easy for the good citizens of the wealthy world to ignore.  Imagine, for example, a major failure of the Indian Monsoon, and a subsequent South Asian famine.  Imagine it against a drumbeat background in which food is becoming progressively more expensive.  Imagine the permanence of such droughts, and increasing evidence of tipping points on the horizon, and a world in which ever more scientists take it upon themselves to deliver desperate warnings.  The bottom line will not be the importance of communications strategies, but rather the manifest reality, no longer distant and abstract, and the certain knowledge that we are in deep trouble.  And this is where the dangers of soft-pedaling lie.  For as people come to see the scale of the danger, and then to look about for commensurate strategies and responses, the question will be if such strategies are available, and if they are known, and if they are plausible.  If they’re not, then we’ll all go, together, down the road “from aware to despair.”

Absent the public sense of a future in which human resourcefulness and cooperation can make a decisive difference, we assuredly face an even more difficult future in which denial fades into a sense of pervasive hopelessness.  The last third of the century (when Randers is predicting “runaway climate change”) is not so very far away.  Which is to say that, as denialism collapses – and it will – the challenge of working out a large and plausible response to the climate crisis will become overwhelmingly important.  If we cannot imagine such a response, and explain how it would actually work, then people will draw their own conclusions.  And, so far, it seems that we cannot.  Even those of us who are now climate full-timers don’t have a shared vision, not in any meaningful detail, nor do we have a common sense of the strategic initiatives that could make such a vision cohere.

The larger landscape is even worse.  For though many scientists are steeling themselves to speak, the elites themselves are still stiff and timid, and show few signs of rising to the occasion.  Each month, it seems, there’s another major report on the approaching crisis – the World Bank, the National Intelligence Council, and the International Energy Agency have all recently made hair-raising contributions – but they never quite get around to the really important questions.  How should we contrive the necessary global mobilization?  What conditions are needed to absolutely maximize the speed of the clean-tech revolution?  By what strategy will we actually manage to keep the fossil-fuels in the ground?  What kind of international treaties are necessary, and how shall we establish them?  What would a fast-enough global transition cost, and how shall we pay for it?  What about all those who are forced to retreat from rising waters and drying lands?  How shall they live, and where?  How shall we talk about rights and responsibilities in the Greenhouse Century?  And what about the poor?  How shall they find futures in a climate-constrained world?  Can we even imagine a world in which they do?

In the face of such questions, you have a choice.  You can conclude that we’ll just have to do the best we can, and then you can have a drink.  Or maybe two.  Or you can conclude that, despite all evidence to the contrary, enough of us will soon awaken to reality.  What’s certain is that, all around us, there is a vast potentiality – for reinvention, for resistance, for redistribution, and for renewal of all kinds – and that it could at any time snap into solidity.  And into action.

Forget about “hope.”  What we need now is intention.

***

About a decade ago, in San Francisco, I was on a PBS talk show with, among others, Myron Ebell, chief of climate propaganda at the Competitive Enterprise Institute.  Ebell is an aggressive professional, and given the host’s commitment to phony balance he was easily able to frame the conversation.[13]  The result was a travesty, but not an entirely wasted time, at least not for me.  It was instructive to speak, tentatively, of the need for global climate justice, and to hear, in response, that I was a non-governmental fraud that was only in it for the money.  Moreover, as the hour wore on, I came to appreciate the brutal simplicity of the denialist strategy.  The whole point is to suck the oxygen out of the room, to weave such a tangle of confusionism and pseudo-debate that the Really Big Question – What is to be done? – becomes impossible to even ask, let alone discuss.

When Superstorm Sandy slammed into the New York City region, Ebell’s style of hard denialism took a body blow, though obviously it has not dropped finally to the mat.  Had it done so, the Big Question, in all its many forms, would be buzzing constantly around us.  Clearly, that great day has not yet come.  Still, back in November of 2012, when Bloomberg Businessweek blared “It’s Global Warming, Stupid” from its front cover, this was widely welcomed as an overdue milestone.  It may even be that Michael Tobis, the editor of the excellent Planet 3.0, will prove correct in his long-standing, half-facetious prediction that 2015 will be the date when “the Wall Street Journal will acknowledge the indisputable and apparent fact of anthropogenic climate change; the year in which it will simply be ridiculous to deny it.”[14]  Or maybe not.  Maybe that day will never come.  Maybe Ebell’s style of well-funded, front-group denialism will live on, zombie-like, forever.  Or maybe (and this is my personal prediction) hard climate denialism will soon go the way of creationism and far-right Christianity, becoming a kind of political lifestyle choice, one that’s dangerous but contained.  One that’s ultimately more dangerous to the right than it is to the reality-based community.

If so, then at some point we’re going to have to ask ourselves if we’ve been so long distracted by the hard denialists that we’ve missed the parallel danger of a “soft denialism.”  By which I mean the denialism of a world in which, though the dangers of climate change are simply too ridiculous to deny, they still – somehow – are not taken to imply courage, and reckoning, and large-scale mobilization.  This is a long story, but the point is that, now that the Big Question is finally on the table, we’re going to have to answer it.  Which is to say that we’re going to have to face the many ways in which political timidity and small-bore realism have trained us to calibrate our sense of what must be done by our sense of what can be done, which these days is inadequate by definition.

And not just because of the denialists.

George Orwell once said that “To see what is in front of one’s nose needs a constant struggle.”[15]  As we hurtle forward, this struggle will rage as never before.  The Big Question, after all, changes everything.  Another way of saying this is that our futures will be shaped by the effort to avoid a full-on global climate catastrophe.  Despite all the rest of the geo-political and geo-economic commotion that will mark the 21st Century (and there’ll be plenty) it will be most fundamentally the Greenhouse Century.  We know this now, if we care to, though still only in preliminary outline.  The details, inevitably, will surprise us all.

The core problem, of course, will be “ambition” – action on the scale that’s actually necessary, rather than the scale that is or appears to be possible.  And here, the legacies of the denialist age – the long-ingrained habits of soft-pedaling and strained optimism – will weigh heavily.  Consider the quasi-official global goal (codified, for example, in the Copenhagen Accord) to hold total planetary warming to 2°C (Earth surface average) above pre-industrial levels.  This is the so-called “2°C target.”  What are we to do with it in the post-denialist age?  Let me count the complications: One, all sorts of Very Important People are now telling us it’s going to be all but impossible to avoid overshooting 2°C.[16]  Two, in so doing, they are making a political and not a scientific judgment, though they’re not always clear on this point.  (It’s probably still technically possible to hold the 2°C line – if we’re not too unlucky – though it wouldn’t be easy under the best of circumstances.)[17]  Three, the 2°C line, which was once taken to be reasonably safe, is now widely seen (at least among the scientists) to mark the approximate point of transition from “dangerous” to “extremely dangerous,” and possibly to altogether unmanageable levels of warming.[18]  Four, and finally, it’s now widely recognized that any future in which we approach the 2°C line (which we will do) is one in which we also have a real possibility of pushing the average global temperature up by 3°C, and if this were to come to pass we’d be playing a very high-stakes game indeed, one in which uncontrolled positive feedbacks and worst-case scenarios surrounded us on every side.

The bottom line is today as it was decades ago.  Greenhouse-gas emissions were increasing then, and they are increasing now.  In late 2012, the authoritative Global Carbon Project reported that, since 1990, they had risen by an astonishing 58 percent.[19]  The climate system has unsurprisingly responded with storms, droughts, ice-melt, conflagrations and floods.  The weather has become “extreme,” and may finally be getting our attention.  In Australia, according to the acute Mark Thomson of the Institute for Backyard Studies in Adelaide, the crushing heatwave of early 2013 even pushed aside “the idiot commentariat” and cleared the path for a bit of 11th-hour optimism: “Another year of this trend will shift public opinion wholesale.  We’re used to this sort of temperature now and then and even take a perverse pride in dealing with it, but there seems to be a subtle shift in mood that ‘This Could Be Serious.’”  Let’s hope he’s right.  Let’s hope, too, that the mood shift that swept through America after Sandy lasts, and leads us to conclude that ‘This Could Be Serious.’  Not that this alone would be enough to support a real mobilization – the “moral equivalent of war” that we need – but it would be something.  It might even lead us to wonder about our future, and about the influence of money and power on our lives, and to ask how serious things will have to get before it becomes possible to imagine a meaningful change of direction.

The wrinkle is that, before we can advocate for a meaningful change of direction, we have to have one we believe in, one that we’re willing to explain in global terms that actually scale to the problem.  None of which is going to be easy, given that we’re fast approaching a point where only tales of existential danger ring true (cf. the zombie apocalypse).  The Arctic ice, as noted above, offers an excellent marker.  In fact, the first famous photos of Earth from space – the “blue marble” photos taken in 1972 by the crew of the Apollo 17 – allow us to anchor our predicament in time and in memory.  For these are photos of an old Earth now passed away; they must be, because they show great expanses of ice that are today nowhere to be found.  By August of 2012 the Arctic Sea’s ice cover had declined by 40%,[20] a melt that’s easily large enough to be visible from space.  Moreover, beneath the surface, ice volume is dropping even more precipitously.  The polar researchers who are now feverishly evaluating the great melting haven’t yet pushed the entire scientific community to the edge of despair, though they have managed to inspire a great deal of dark muttering about positive feedbacks and tipping points.  Soon, it seems, that muttering will become louder.  Perhaps as early as 2015, the Arctic Ocean will become virtually ice-free for the first time in recorded history.[21]  When it does, the solar absorptivity of the Arctic waters will increase, shifting the planetary heat balance by a surprisingly large amount and thereby increasing the rate of planetary warming.  And this, of course, will not be the end of it.  The feedbacks will continue.  The cycles will go on.

Should we remain silent about such matters, for fear of inflaming the “idiot commentariat”?  It’s absurd even to ask.  The suffering is already high, and if you know the science, you also know that the real surprise would be an absence of positive feedbacks.  The ice melt, the methane plumes, the drying of the rainforests – they’re all real.  Which is to say that there are obviously tipping points before us, though we do not and cannot know how much time will pass before they force themselves upon our attention.  The real question is what we must do if we would talk of them in good earnest, while at the same time speaking, without despair and effectively, about the human future.


[1] Jorgen Randers, 2052: A Global Forecast for the Next Forty Years, Chelsea Green, 2012, page 99.

[2] Begin at the Carbon Tracker Initiative’s website: http://www.carbontracker.org/

[3] Two excellent examples: Naomi Oreskes and Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming, Bloomsbury Press, 2011; and Chris Mooney, The Republican War on Science, Basic Books, 2006.

[4] See, for example, Suzanne Goldenberg, “Secret funding helped build vast network of climate denial thinktanks,” February 14, 2013, The Guardian.

[5] “Lord Monckton,” in particular, is fantastic.  See http://www.youtube.com/watch?v=w833cAs9EN0

[6] Randers, 2012.  See also Randers’ essay and video at the University of Cambridge 2013 “State of Sustainability Leadership,” at http://www.cpsl.cam.ac.uk/About-Us/What-is-Sustainability-Leadership/The-State-of-Sustainability-Leadership.aspx

[7] Ugo Bardi, in The Limits to Growth Revisited (Springer Briefs, 2011) offers this summary:

“If, at the beginning, the debate on LTG had seemed to be balanced, gradually the general attitude on the study became more negative. It tilted decisively against the study when, in 1989, Ronald Bailey published a paper in “Forbes” where he accused the authors of having predicted that the world’s economy should have already run out of some vital mineral commodities whereas that had not, obviously, occurred.

Bailey’s statement was only the result of a flawed reading of the data in a single table of the 1972 edition of LTG. In reality, none of the several scenarios presented in the book showed that the world would be running out of any important commodity before the end of the twentieth century and not even of the twenty-first. However, the concept of the “mistakes of the Club of Rome” caught on. With the 1990s, it became commonplace to state that LTG had been a mistake if not a joke designed to tease the public, or even an attempt to force humankind into a planet-wide dictatorship, as it had been claimed in some earlier appraisals (Golub and Townsend 1977; Larouche 1983). By the end of the twentieth century, the victory of the critics of LTG seemed to be complete. But the debate was far from being settled.”

[8] See, for example, Graham Turner, “A Comparison of The Limits to Growth with Thirty Years of Reality.” Global Environmental Change, Volume 18, Issue 3, August 2008, Pages 397–411.  An unprotected copy (without the graphics) can be downloaded at www.csiro.au/files/files/plje.pdf.

[9] In late 2012, Dennis Meadows said that “In the early 1970s, it was possible to believe that maybe we could make the necessary changes.  But now it is too late.  We are entering a period of many decades of uncontrolled climatic disruption and extremely difficult decline.”  See Christian Parenti, “‘The Limits to Growth’: A Book That Launched a Movement,” The Nation, December 24, 2012.

[11] Eddie Yuen, “The Politics of Failure Have Failed: The Environmental Movement and Catastrophism,” in Catastrophism: The Apocalyptic Politics of Collapse and Rebirth, Sasha Lilley, David McNally, Eddie Yuen, James Davis, with a foreword by Doug Henwood. PM Press 2012.  Yuen’s whole line is “the main reasons that [it] has not led to more dynamic social movements; these include catastrophe fatigue, the paralyzing effects of fear; the pairing of overwhelmingly bleak analysis with inadequate solutions, and a misunderstanding of the process of politicization.” 

[12] See Glenn Scherer, “Special Report: IPCC, assessing climate risks, consistently underestimates,” The Daily Climate, December 6, 2012.   More formally (and more interestingly) see Brysse, Oreskes, O’Reilly, and Oppenheimer, “Climate change prediction: Erring on the side of least drama?,” Global Environmental Change 23 (2013), 327-337.

[13] KQED-FM, Forum, July 22, 2003.

[14] Michael Tobis, editor of Planet 3.0, is amusing on this point.  He notes that “many data-driven climate skeptics are reassessing the issue,” and that “In 1996 I defined the turning point of the discussion about climate science (the point where we could actually start talking about policy) as the date when the Wall Street Journal would acknowledge the indisputable and apparent fact of anthropogenic climate change; the year in which it would simply be ridiculous to deny it.  My prediction was that this would happen around 2015… I’m not sure the WSJ has actually accepted reality yet.  It’s just starting to squint in its general direction.  2015 still looks like a good bet.”  See http://planet3.org/2012/08/07/is-the-tide-turning/

[15] The Collected Essays, Journalism and Letters of George Orwell: In Front of Your Nose, 1945–1950, Sonia Orwell and Ian Angus, eds., Harcourt Brace Jovanovich, 1968, p. 125.

[16] See, for example, Fatih Birol and Nicholas Stern, “Urgent steps to stop the climate door closing,” The Financial Times, March 9, 2011.  And see Sir Robert Watson’s Union Frontiers of Geophysics Lecture at the 2012 meeting of the American Geophysical Union, at http://fallmeeting.agu.org/2012/events/union-frontiers-of-geophysics-lecture-professor-sir-bob-watson-cmg-frs-chief-scientific-adviser-to-defra/

[17] I just wrote “probably still technically possible.”  I could have written “Excluding the small probability of a very bad case, and the even smaller probability of a very good case, it’s probably still technically possible to hold the 2°C line, though it wouldn’t be easy.”  This, however, is a pretty ugly sentence.  I could also have written “Unless we’re unlucky, and the climate sensitivity turns out to be on the high side of the expected range, it’s still technically possible to hold the 2°C line, though it wouldn’t be easy, unless we’re very lucky, and the climate sensitivity turns out to be on the low side.”  Saying something like this, though, kind of puts the cart before the horse, since I haven’t said anything about “climate sensitivity,” or about how the scientists think about probability – and of course it’s even uglier.  The point, at least for now, is that climate projections are probabilistic by nature, which does not mean that they are merely “uncertain.”  We know a lot about the probabilities.

[18] See Kevin Anderson, a former director of Britain’s Tyndall Center, who has been unusually frank on this point.  His views are clearly laid out in a (non-peer-reviewed) essay published by the Dag Hammarskjold Foundation in Sweden.  See “Climate change going beyond dangerous – Brutal numbers and tenuous hope” in Development Dialog #61, September 2012, available at http://www.dhf.uu.se/wordpress/wp-content/uploads/2012/10/dd61_art2.pdf.  For a peer-reviewed paper, see Anderson and Bows, “Beyond ‘dangerous’ climate change: emission scenarios for a new world.”  Philosophical Transactions of The Royal Society, (2011) 369, 20-44 and for a lecture, see “Are climate scientists the most dangerous climate skeptics?” a Tyndall Centre video lecture (September 2010) at http://www.tyndall.ac.uk/audio/are-climate-scientist-most-dangerous-climate-sceptics.

[19] Glen P. Peters et al., “The challenge to keep global warming below 2°C,” Nature Climate Change 3, 4–6 (2013), doi:10.1038/nclimate1783 (published online December 2, 2012).  This figure might actually be revised upward, as 2012 saw the second-largest annual concentration increase on record (http://climatedesk.org/2013/03/large-rise-in-co2-emissions-sounds-climate-change-alarm/).

[20] The story of the photos is on Wikipedia – see “blue marble.”  For the latest on the Arctic ice, see the “Arctic Sea Ice News and Analysis” page maintained by the National Snow and Ice Data Center: http://nsidc.org/arcticseaicenews/

[21] Climate Progress is covering the “Arctic Death Spiral” in detail.  See for example Joe Romm, “NOAA: Climate Change Driving Arctic Into A ‘New State’ With Rapid Ice Loss And Record Permafrost Warming,” Climate Progress, Dec 6, 2012.  Give yourself a few hours and follow the links.

When Animals Learn to Control Robots, You Know We’re in Trouble (Wired)

BY WIRED SCIENCE

03.21.13 – 6:30 AM

Unless an asteroid or deadly pandemic wipes us out first, the force we are most afraid will rob us of our place as rulers of Earth is robots. The warnings range from sarcastic to nervous to dead serious, but they all describe the same scenario: Robots become sentient, join forces and turn on us en masse.

But with all the paranoia about machines, we’ve ignored another possibility: Animals learn to control robots and decide it’s their turn to rule the planet. This would be even more dangerous than dolphins evolving opposable thumbs. And the first signs of this coming threat are already starting to appear in laboratories around the world where robots are being driven by birds, trained by moths and controlled by the minds of monkeys.

The Mathematics of Averting the Next Big Network Failure (Wired)

BY NATALIE WOLCHOVER, SIMONS SCIENCE NEWS

03.19.13 – 9:30 AM

Data: Courtesy of Marc Imhoff of NASA GSFC and Christopher Elvidge of NOAA NGDC; Image: Craig Mayhew and Robert Simmon of NASA GSFC

Gene Stanley never walks down stairs without holding the handrail. For a fit 71-year-old, he is deathly afraid of breaking his hip. In the elderly, such breaks can trigger fatal complications, and Stanley, a professor of physics at Boston University, thinks he knows why.

“Everything depends on everything else,” he said.

Original story reprinted with permission from Simons Science News, an editorially independent division of SimonsFoundation.org whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Three years ago, Stanley and his colleagues discovered the mathematics behind what he calls “the extreme fragility of interdependency.” In a system of interconnected networks like the economy, city infrastructure or the human body, their model indicates that a small outage in one network can cascade through the entire system, touching off a sudden, catastrophic failure.

First reported in 2010 in the journal Nature, the finding spawned more than 200 related studies, including analyses of the nationwide blackout in Italy in 2003, the global food-price crisis of 2007 and 2008, and the “flash crash” of the United States stock market on May 6, 2010.

“In isolated networks, a little damage will only lead to a little more,” said Shlomo Havlin, a physicist at Bar-Ilan University in Israel who co-authored the 2010 paper. “Now we know that because of dependency between networks, you can have an abrupt collapse.”

While scientists remain cautious about using the results of simplified mathematical models to re-engineer real-world systems, some recommendations are beginning to emerge. Based on data-driven refinements, new models suggest interconnected networks should have backups, mechanisms for severing their connections in times of crisis, and stricter regulations to forestall widespread failure.

“There’s hopefully some sweet spot where you benefit from all the things that networks of networks bring you without being overwhelmed by risk,” said Raissa D’Souza, a complex systems theorist at the University of California, Davis.

Power, gas, water, telecommunications and transportation networks are often interlinked. When nodes in one network depend on nodes in another, node failures in any of the networks can trigger a system-wide collapse. (Illustration: Leonardo Dueñas-Osorio)

To understand the vulnerability in having nodes in one network depend on nodes in another, consider the “smart grid,” an infrastructure system in which power stations are controlled by a telecommunications network that in turn requires power from the network of stations. In isolation, removing a few nodes from either network would do little harm, because signals could route around the outage and reach most of the remaining nodes. But in coupled networks, downed nodes in one automatically knock out dependent nodes in the other, which knock out other dependent nodes in the first, and so on. Scientists model this cascading process by calculating the size of the largest cluster of connected nodes in each network, where the answer depends on the size of the largest cluster in the other network. With the clusters interrelated in this way, a decrease in the size of one of them sets off a back-and-forth cascade of shrinking clusters.
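
To make the mechanics concrete, here is a minimal Python sketch of that back-and-forth pruning, written in the spirit of the 2010 mutual-percolation model (an editorial illustration, not the researchers’ code; the network sizes, degrees, and attack fractions are assumptions):

```python
import random
import networkx as nx

N = 1000                             # nodes per network (assumption)
A = nx.erdos_renyi_graph(N, 4 / N)   # network A, mean degree ~4
B = nx.erdos_renyi_graph(N, 4 / N)   # network B, mean degree ~4
# One-to-one dependency: node i of A depends on node i of B, and vice versa.

def giant(G, alive):
    """Nodes of G, restricted to `alive`, that lie in the largest cluster."""
    H = G.subgraph(alive)
    if H.number_of_nodes() == 0:
        return set()
    return set(max(nx.connected_components(H), key=len))

def cascade(attack_fraction, seed=0):
    """Remove a fraction of A's nodes, then iterate the mutual pruning."""
    random.seed(seed)
    alive_A = set(random.sample(range(N), int(N * (1 - attack_fraction))))
    alive_B = set(range(N))
    while True:
        alive_A = giant(A, alive_A)   # only A's giant cluster keeps working
        alive_B &= alive_A            # B nodes lose their A-side support
        new_B = giant(B, alive_B)     # only B's giant cluster keeps working
        new_A = alive_A & new_B       # A nodes lose their B-side support
        if new_A == alive_A and new_B == alive_B:
            return len(new_A) / N     # surviving fraction at the fixed point
        alive_A, alive_B = new_A, new_B

for f in (0.10, 0.30, 0.40, 0.50):
    print(f"attack {f:.0%}: surviving fraction {cascade(f):.2f}")
```

As the attack fraction increases, the surviving fraction does not shrink gracefully: past a critical fraction (which depends on the topology, and in this toy version is much higher than the 8 percent quoted below for a more realistic model) it collapses at once to nearly zero.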

When damage to a system reaches a “critical point,” Stanley, Havlin and their colleagues find that the failure of one more node drops all the network clusters to zero, instantly killing connectivity throughout the system. This critical point will vary depending on a system’s architecture. In one of the team’s most realistic coupled-network models, an outage of just 8 percent of the nodes in one network — a plausible level of damage in many real systems — brings the system to its critical point. “The fragility that’s implied by this interdependency is very frightening,” Stanley said.

However, in another model recently studied by D’Souza and her colleagues, sparse links between separate networks actually help suppress large-scale cascades, demonstrating that network models are not one-size-fits-all. To assess the behavior of smart grids, financial markets, transportation systems and other real interdependent networks, “we have to start from the data-driven, engineered world and come up with the mathematical models that capture the real systems instead of using models because they are pretty and analytically tractable,” D’Souza said.

In a series of papers in the March issue of Nature Physics, economists and physicists used the science of interconnected networks to pinpoint risk within the financial system. In one study, an interdisciplinary group of researchers including the Nobel Prize-winning economist Joseph Stiglitz found inherent instabilities within the highly complex, multitrillion-dollar derivatives market and suggested regulations that could help stabilize it.

Irena Vodenska, a professor of finance at Boston University who collaborates with Stanley, custom-fit a coupled network model around data from the 2008 financial crisis. The analysis that she and her colleagues published in February in Scientific Reports showed that modeling the financial system as a network of two networks — banks and bank assets, where each bank is linked to the assets it held in 2007 — correctly predicted which banks would fail 78 percent of the time.

“We consider this model as potentially useful for systemic risk stress testing for financial systems,” said Vodenska, whose research is financially supported by the European Union’s Forecasting Financial Crisis program. As globalization further entangles financial networks, she said, regulatory agencies must monitor “sources of contagion” — concentrations in certain assets, for example — before they can cause epidemics of failure. To identify these sources, “it’s imperative to think in the sense of networks of networks,” she said.
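
A rough sense of how such a bank-asset model behaves can be had from a toy simulation. The sketch below is written in the spirit of the two-network approach described above, not from Vodenska’s published code; the balance sheets, the initial shock, and the fire-sale parameter are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_banks, n_assets = 50, 10
# holdings[b, a] = amount of asset a on bank b's book (sparse random book)
holdings = rng.uniform(0, 1, (n_banks, n_assets)) * (rng.random((n_banks, n_assets)) < 0.3)
equity = 0.05 * holdings.sum(axis=1)   # each bank holds a 5% equity buffer
price = np.ones(n_assets)

price[0] = 0.8                         # initial shock: one asset class falls 20%
alive = np.ones(n_banks, dtype=bool)
fire_sale_impact = 0.2                 # price hit per unit sold (assumption)

while True:
    losses = holdings @ (1 - price)    # mark-to-market loss per bank
    newly_failed = alive & (losses > equity)
    if not newly_failed.any():
        break
    alive &= ~newly_failed
    # Failed banks liquidate; their holdings depress each asset's price further,
    # which is the channel coupling the bank network to the asset network.
    sold = holdings[newly_failed].sum(axis=0)
    exposure = holdings.sum(axis=0).clip(min=1e-9)
    price = np.clip(price - fire_sale_impact * sold / exposure, 0, 1)

print(f"{(~alive).sum()} of {n_banks} banks fail after the cascade")
```

The real model is calibrated to 2007 balance-sheet data rather than random numbers, but the feedback loop is the same: asset losses fail banks, and failing banks deepen asset losses.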

Leonardo Dueñas-Osorio, a civil engineer at Rice, visited a damaged high-voltage substation in Chile after a major earthquake in 2010 to gather information about the power grid’s response to the crisis. (Photo: Courtesy of Leonardo Dueñas-Osorio)

Scientists are applying similar thinking to infrastructure assessment. Leonardo Dueñas-Osorio, a civil engineer at Rice University, is analyzing how lifeline systems responded to recent natural disasters. When a magnitude 8.8 earthquake struck Chile in 2010, for example, most of the power grid was restored after just two days, aiding emergency workers. The swift recovery, Dueñas-Osorio’s research suggests, occurred because Chile’s power stations immediately decoupled from the centralized telecommunications system that usually controlled the flow of electricity through the grid, but which was down in some areas. Power stations were operated locally until the damage in other parts of the system subsided.

“After an abnormal event, the majority of the detrimental effects occur in the very first cycles of mutual interaction,” said Dueñas-Osorio, who is also studying New York City’s response to Hurricane Sandy last October. “So when something goes wrong, we need to have the ability to decouple networks to prevent the back-and-forth effects between them.”

D’Souza and Dueñas-Osorio are collaborating to build accurate models of infrastructure systems in Houston, Memphis and other American cities in order to identify system weaknesses. “Models are useful for helping us explore alternative configurations that could be more effective,” Dueñas-Osorio explained. And as interdependency between networks naturally increases in many places, “we can model that higher integration and see what happens.”

Scientists are also looking to their models for answers on how to fix systems when they fail. “We are in the process of studying what is the optimal way to recover a network,” Havlin said. “When networks fail, which node do you fix first?”
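
One toy way to make that question concrete is a greedy repair rule: at each step, restore whichever failed node most enlarges the surviving giant cluster. The single-network Python sketch below is only an editorial illustration of the idea, with assumed sizes; serious recovery models would couple networks as above:

```python
import random
import networkx as nx

random.seed(3)
G = nx.erdos_renyi_graph(300, 4 / 300)   # assumed network size and degree
failed = set(random.sample(list(G.nodes), 150))
alive = set(G.nodes) - failed

def giant_size(nodes):
    """Size of the largest connected cluster among `nodes`."""
    H = G.subgraph(nodes)
    return max((len(c) for c in nx.connected_components(H)), default=0)

for _ in range(5):                       # five greedy repair steps
    best = max(failed, key=lambda v: giant_size(alive | {v}))
    failed.remove(best)
    alive.add(best)
    print(f"restored node {best}: giant cluster now {giant_size(alive)}")
```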

The hope is that networks of networks might be unexpectedly resilient for the same reason that they are vulnerable. As Dueñas-Osorio put it, “By making strategic improvements, can we have what amounts to positive cascades, where a small improvement propagates much larger benefits?”

These open questions have the attention of governments around the world. In the U.S., the Defense Threat Reduction Agency, an organization tasked with safeguarding national infrastructure against weapons of mass destruction, considers the study of interdependent networks its “top mission priority” in the category of basic research. Some defense applications have emerged already, such as a new design for electrical network systems at military bases. But much of the research aims at sorting through the mathematical subtleties of network interaction.

“We’re not yet at the ‘let’s engineer the internet differently’ level,” said Robin Burk, an information scientist and former DTRA program manager who led the agency’s focus on interdependent networks research. “A fair amount of it is still basic science — desperately needed science.”


Brain Scans Predict Which Criminals Are Most Likely to Reoffend (Wired)

BY GREG MILLER

03.26.13 – 3:40 PM

Photo: Erika Kyte/Getty Images

Brain scans of convicted felons can predict which ones are most likely to get arrested after they get out of prison, scientists have found in a study of 96 male offenders.

“It’s the first time brain scans have been used to predict recidivism,” said neuroscientist Kent Kiehl of the Mind Research Network in Albuquerque, New Mexico, who led the new study. Even so, Kiehl and others caution that the method is nowhere near ready to be used in real-life decisions about sentencing or parole.

Generally speaking, brain scans or other neuromarkers could be useful in the criminal justice system if the benefits in terms of better accuracy outweigh the likely higher costs of the technology compared to conventional pencil-and-paper risk assessments, says Stephen Morse, a legal scholar specializing in criminal law and neuroscience at the University of Pennsylvania. The key questions to ask, Morse says, are: “How much predictive accuracy does the marker add beyond usually less expensive behavioral measures? How subject is it to counter-measures if a subject wishes to ‘defeat’ a scan?”

Those are still open questions with regard to the new method, which Kiehl and colleagues, including postdoctoral fellow Eyal Aharoni, describe in a paper to be published this week in the Proceedings of the National Academy of Sciences.

The test targets impulsivity. In a mobile fMRI scanner the researchers trucked in to two state prisons, they scanned inmates’ brains as they did a simple impulse control task. Inmates were instructed to press a button as quickly as possible whenever they saw the letter X pop up on a screen inside the scanner, but not to press it if they saw the letter K. The task is rigged so that X pops up 84 percent of the time, which predisposes people to hit the button and makes it harder to suppress the impulse to press the button on the rare trials when a K pops up.
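
The logic of the task is easy to state in code. The sketch below simulates the 84/16 go/no-go structure just described, with a hypothetical “inhibition” parameter standing in for impulse control; it illustrates the task design only, not the study’s imaging analysis:

```python
import random

def run_session(n_trials=240, p_go=0.84, inhibition=0.7, seed=0):
    """Simulate a prepotent responder: it always presses on X (go trials);
    on K (no-go trials) it withholds the press only with probability
    `inhibition`. Returns commission errors and the number of no-go trials."""
    random.seed(seed)
    commission_errors = nogo_trials = 0
    for _ in range(n_trials):
        if random.random() < p_go:
            continue                      # go trial: press, always correct here
        nogo_trials += 1
        if random.random() > inhibition:  # failure to suppress the impulse
            commission_errors += 1
    return commission_errors, nogo_trials

for strength in (0.9, 0.7, 0.5):          # strong -> weak impulse control
    errs, total = run_session(inhibition=strength)
    print(f"inhibition {strength}: {errs}/{total} false alarms")
```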

Based on previous studies, the researchers focused on the anterior cingulate cortex, one of several brain regions thought to be important for impulse control. Inmates with relatively low activity in the anterior cingulate made more errors on the task, suggesting a correlation with poor impulse control.

They were also more likely to get arrested after they were released. Inmates with relatively low anterior cingulate activity were roughly twice as likely as inmates with high anterior cingulate activity to be rearrested for a felony offense within 4 years of their release, even after controlling for other behavioral and psychological risk factors.

“This is an exciting new finding,” said Essi Viding, a professor of developmental psychopathology at University College London. “Interestingly this brain activity measure appears to be a more robust predictor, in particular of non-violent offending, than psychopathy or drug use scores, which we know to be associated with a risk of reoffending.” However, Viding notes that Kiehl’s team hasn’t yet tried to compare their fMRI test head to head against pencil-and-paper tests specifically designed to assess the risk of recidivism. “It would be interesting to see how the anterior cingulate cortex activity measure compares against these measures,” she said.

“It’s a great study because it brings neuroimaging into the realm of prediction,” said clinical psychologist Dustin Pardini of the University of Pittsburgh. The study’s design is an improvement over previous neuroimaging studies that compared groups of offenders with groups of non-offenders, he says. All the same, he’s skeptical that brain scans could be used to predict the behavior of a given individual. “In general we’re horrible at predicting human behavior, and I don’t see this as being any different, at least not in the near future.”

Even if the findings hold up in a larger study, there would be limitations, Pardini adds. “In a practical sense, there are just too many ways an offender could get around having an accurate representation of his brain activity taken,” he said. For example, if an offender moves his head while inside the scanner, that would render the scan unreadable. Even more subtle strategies, such as thinking about something unrelated to the task, or making mistakes on purpose, could also thwart the test.

Kiehl isn’t convinced either that this type of fMRI test will ever prove useful for assessing the risk to society posed by individual criminals. But his group is collecting more data — lots more — as part of a much larger study in the New Mexico state prisons. “We’ve scanned 3,000 inmates,” he said. “This is just the first 100.”

Kiehl hopes this work will point to new strategies for reducing criminal behavior. If low activity in the anterior cingulate does in fact turn out to be a reliable predictor of recidivism, perhaps therapies that boost activity in this region would improve impulse control and prevent future crimes, Kiehl says. He admits it’s speculative, but his group is already thinking up experiments to test the idea. “Cognitive exercises is where we’ll start,” he said. “But I wouldn’t rule out pharmaceuticals.”

‘Networked Minds’ Require Fundamentally New Kind of Economics (Science Daily)

Mar. 20, 2013 — In their computer simulations of human evolution, scientists at ETH Zurich find the emergence of the “homo socialis” with “other-regarding” preferences. The results explain some intriguing findings in experimental economics and call for a new economic theory of “networked minds”.

(Credit: © violetkaipa / Fotolia)

Economics has a beautiful body of theory. But does it describe real markets? Doubts have come up, not least in the wake of the financial crisis, since financial crashes should not occur according to the then-established theories. For ages, economic theory has been based on concepts such as efficient markets and the “homo economicus”, i.e. the assumption of competitively optimizing individuals and firms. It was believed that any behavior deviating from this would create disadvantages and, hence, be eliminated by natural selection. But experimental evidence from behavioral economics shows that, on average, people behave in a more fairness-oriented and other-regarding way than expected. A new theory by scientists from ETH Zurich now explains why.

“We have simulated interactions of individuals facing social dilemma situations, where it would be favorable for everyone to cooperate, but non-cooperative behavior is tempting,” explains Dr. Thomas Grund, one of the authors of the study. “Hence, cooperation tends to erode, which is bad for everyone.” This may create tragedies of the commons such as over-fishing, environmental pollution, or tax evasion.

Evolution of “friendliness”

Prof. Dirk Helbing of ETH Zurich, who coordinated the study, adds: “Compared to conventional models for the evolution of social cooperation, we have distinguished between the actual behavior – cooperation or not – and an inherited character trait, describing the degree of other-regarding preferences, which we call the friendliness.” An agent’s actual behavior considers not only its own advantage (“payoff”) but also gives a weight to the payoffs of its interaction partners, a weight set by the agent’s friendliness. For the “homo economicus”, that weight is zero. Friendliness spreads from one generation to the next by natural selection, which acts on the agent’s own payoff alone – but mutations happen.

For most parameter combinations, the model predicts the evolution of a payoff-maximizing “homo economicus” with selfish preferences, as assumed by a great share of the economic literature. Very surprisingly, however, biological selection may create a “homo socialis” with other-regarding preferences, namely if offspring tend to stay close to their parents. In such a case, clusters of friendly people, who are “conditionally cooperative”, may evolve over time.

If an unconditionally cooperative individual is born by chance, it may be exploited by everyone and not leave any offspring. However, if born in a favorable, conditionally cooperative environment, it may trigger cascade-like transitions to cooperative behavior, such that other-regarding behavior pays off. Consequently, a “homo socialis” spreads.
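
For readers who want to see the moving parts, here is a toy re-implementation of this kind of model: agents on a lattice best-respond to their neighbors using a friendliness-weighted utility, while reproduction is local and driven by material payoff alone. The payoff values, grid size, and mutation rate are assumptions for illustration, not the paper’s parameters:

```python
import random

SIZE = 20                               # 20x20 lattice of agents
T, R, P, S = 1.3, 1.0, 0.1, -0.5        # prisoner's dilemma payoffs (assumed)

friendliness = [[0.0] * SIZE for _ in range(SIZE)]  # inherited trait rho
action = [[False] * SIZE for _ in range(SIZE)]      # True = cooperate

def neighbors(i, j):
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def payoff(me, other):
    """Material prisoner's-dilemma payoff to `me`."""
    if me:
        return R if other else S
    return T if other else P

def utility(i, j, act):
    """Other-regarding utility: own payoff plus rho times neighbors' payoffs.
    With these payoffs, agents with rho between ~0.2 and ~0.5 cooperate with
    cooperators but defect against defectors: conditional cooperation."""
    rho = friendliness[i][j]
    return sum(payoff(act, action[a][b]) + rho * payoff(action[a][b], act)
               for a, b in neighbors(i, j))

def step():
    # Behavior: each agent best-responds given its friendliness.
    for i in range(SIZE):
        for j in range(SIZE):
            action[i][j] = utility(i, j, True) >= utility(i, j, False)
    # Selection: a random agent dies; a neighbor reproduces into the empty
    # site with probability proportional to material payoff (shifted positive).
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    nbrs = neighbors(i, j)
    raw = [sum(payoff(action[a][b], action[x][y]) for x, y in neighbors(a, b))
           for a, b in nbrs]
    weights = [w - min(raw) + 1e-6 for w in raw]
    r, acc, parent = random.uniform(0, sum(weights)), 0.0, nbrs[-1]
    for (a, b), w in zip(nbrs, weights):
        acc += w
        if r <= acc:
            parent = (a, b)
            break
    rho = friendliness[parent[0]][parent[1]]   # offspring inherits friendliness
    if random.random() < 0.05:                 # ...with occasional mutation
        rho = min(1.0, max(0.0, rho + random.gauss(0, 0.1)))
    friendliness[i][j] = rho

random.seed(2)
for _ in range(20000):
    step()
print("mean friendliness:", sum(map(sum, friendliness)) / SIZE**2)
print("cooperation rate:", sum(v for row in action for v in row) / SIZE**2)
```

Because offspring replace agents next to their parents, any friendliness that arises by mutation stays spatially clustered, which is exactly the condition the authors identify for the “homo socialis” to gain a foothold.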

Networked minds create a cooperative human species

“This has fundamental implications for the way economic theories should look,” underlines Professor Helbing. Most of today’s economic knowledge is for the “homo economicus”, but people wonder whether that theory really applies. A comparable body of work for the “homo socialis” still needs to be written.

“While the ‘homo economicus’ optimizes its utility independently, the ‘homo socialis’ puts himself or herself into the shoes of others to consider their interests as well,” explains Grund, and Helbing adds: “This establishes something like ‘networked minds’. Everyone’s decisions depend on the preferences of others.” This becomes even more important in our networked world.

A participatory kind of economy

How will this change our economy? Today, many customers doubt that they get the best service from people who are driven by their own profits and bonuses. “Our theory predicts that the level of other-regarding preferences is distributed broadly, from selfish to altruistic. Academic education in economics has largely promoted the selfish type. Perhaps our economic thinking needs to fundamentally change, and our economy should be run by different kinds of people,” suggests Grund. “The true capitalist has other-regarding preferences,” adds Helbing, “as the ‘homo socialis’ earns much more payoff.” This is because the “homo socialis” manages to overcome the downward spiral that tends to drive the “homo economicus” towards tragedies of the commons. The breakdown of trust and cooperation in the financial markets back in 2008 might be seen as a good example.

“Social media will promote a new kind of participatory economy, in which competition goes hand in hand with cooperation,” believes Helbing. Indeed, the digital economy’s paradigm of the “prosumer” states that the Internet, social platforms, 3D printers and other developments will enable the co-producing consumer. “It will be hard to tell who is consumer and who is producer”, says Christian Waloszek. “You might be both at the same time, and this creates a much more cooperative perspective.”

Journal Reference:

  1. Thomas Grund, Christian Waloszek, Dirk Helbing. How Natural Selection Can Create Both Self- and Other-Regarding Preferences, and Networked Minds. Scientific Reports, 2013; 3. DOI: 10.1038/srep01480

Cracking the Semantic Code: Half a Word’s Meaning Is 3-D Summary of Associated Rewards (Science Daily)

Feb. 13, 2013 — We make choices about pretty much everything, all the time — “Should I go for a walk or grab a coffee?”; “Shall I look at who just came in or continue to watch TV?” — and to do so we need something common as a basis to make the choice.

Half of a word’s meaning is simply a three-dimensional summary of the rewards associated with it, according to an analysis of millions of blog entries. (Credit: © vlorzor / Fotolia)

Dr John Fennell and Dr Roland Baddeley of Bristol’s School of Experimental Psychology followed a hunch that the common quantity, often referred to simply as reward, was a representation of what could be gained, together with how risky and uncertain it is. They proposed that these dimensions would be a unique feature of all objects and be part of what those things mean to us.

Over 50 years ago, psychologist Charles Osgood developed an influential method, known as the ‘semantic differential’, that attempts to measure the connotative, emotional meaning of a word or concept. Osgood found that about 50 per cent of the variation in a large number of ratings that people made about words and concepts could be captured using just three summary dimensions: ‘evaluation’ (how nice or good the object is), ‘potency’ (how strong or powerful an object is) and ‘activity’ (whether the object is active, unpredictable or chaotic). So, half of a concept’s meaning is simply a measure of how nice, strong, and active it is. The main problem is that, until now, no one knew why.
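
Osgood’s reduction is, in modern terms, a dimensionality-reduction result, and the standard recipe is easy to show. The sketch below runs an SVD on a centered word-by-scale rating matrix; the ratings are synthetic, with a planted three-factor structure standing in for the human judgments real studies collect:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_scales = 200, 20   # scales like good-bad, strong-weak, active-passive

# Synthetic ratings: three latent factors plus noise, so that a few
# components carry much of the variance (as Osgood found with real ratings).
latent = rng.normal(size=(n_words, 3))
loadings = rng.normal(size=(3, n_scales))
ratings = latent @ loadings + 0.8 * rng.normal(size=(n_words, n_scales))

X = ratings - ratings.mean(axis=0)        # center each rating scale
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / (s**2).sum()           # variance share per component
print(f"variance explained by the first three components: {explained[:3].sum():.0%}")
```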

Dr Baddeley explained: “Over time, we keep a running tally of all the good and bad things associated with a particular object. Later, when faced with a decision, we can simply choose the option that in the past has been associated with more good things than bad. This dimension of choice sounds very much like the ‘evaluation’ dimension of the semantic differential.”

To test this, the researchers needed to estimate the number of good or bad things happening. At first sight, estimating this across a wide range of contexts and concepts seems impossible; someone would need to be observed throughout his or her lifetime and, for each of a large range of contexts and concepts, the number of times good and bad things happened recorded. Fortunately, a more practical solution is provided by the recent phenomenon of internet blogs, which describe aspects of people’s lives and are also searchable. Sure enough, after analysing millions of blog entries, the researchers found that the evaluation dimension was a very good predictor of whether a particular word was found in blogs describing good situations or bad.
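
The corpus statistic at work here can be illustrated in a few lines: score each word by the log-odds of its appearing in blog posts describing good versus bad situations. The miniature “corpus” below is invented purely for illustration; the study did the analogous computation over millions of entries:

```python
import math
from collections import Counter

good_posts = ["had a lovely walk in the sun", "great coffee with friends"]
bad_posts = ["terrible day stuck in traffic", "spilled coffee everywhere awful"]

good_counts = Counter(w for post in good_posts for w in post.split())
bad_counts = Counter(w for post in bad_posts for w in post.split())
good_total, bad_total = sum(good_counts.values()), sum(bad_counts.values())

def evaluation_score(word):
    """Log-odds of the word under good vs. bad contexts (add-one smoothing)."""
    p_good = (good_counts[word] + 1) / (good_total + 1)
    p_bad = (bad_counts[word] + 1) / (bad_total + 1)
    return math.log(p_good / p_bad)

for w in ("lovely", "coffee", "terrible"):
    print(w, f"{evaluation_score(w):+.2f}")
```

A word’s score should track the ‘evaluation’ dimension of the semantic differential: “lovely” comes out positive, “terrible” negative, and “coffee”, which appears in both kinds of post, lands near zero.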

Interestingly, they also found that how frequently a word was used was also a good predictor of how much we like it. This is a well-known effect — the ‘mere exposure effect’ — and a mainstay of the multi-billion dollar advertising industry. When comparing two options we just choose the option we like the most — and we like it because in the past it has been associated with more good things.

Analysing the data showed that ‘potency’ was a very good predictor of the probability of bad situations being associated with a given object: it measured one kind of risk.

Dr Fennell said: “This kind of way of quantifying risk is called ‘value at risk’ in financial circles, and the perils of ignoring it have been plain to see. Russian Roulette may be, on average, associated with positive rewards, but the risks associated with it are not for everyone!”

It is not the only kind of risk, though. In many situations, ‘activity’ — that is, unpredictability, or more importantly uncontrollability — is a highly relevant measure of risk: a knife in the hands of a highly trained sushi chef is probably safe, a knife in the hands of a drunk, erratic stranger is definitely not.

Dr Fennell continued: “Again, this different kind of risk is relevant in financial dealings and is often called volatility. It seems that the mistake that was made in the credit crunch was not ignoring this kind of risk, but to assume that you could perfectly guess it based on how unpredictable it had been in the past.”

Thus, the researchers propose that half of meaning is simply a summary of how rewarding, and importantly, how much of two kinds of risk is associated with an object. Being sensitive not only to rewards, but also to risks, is so important to our survival, that it appears that its representation has become wrapped up in the very nature of the language we use to represent the world.

Journal Reference:

  1. John G. Fennell, Roland J. Baddeley. Reward Is Assessed in Three Dimensions That Correspond to the Semantic Differential. PLoS ONE, 2013; 8 (2): e55588. DOI: 10.1371/journal.pone.0055588