Tag archive: Technological Mediation (Mediação tecnológica)

We Are All Mosaics (National Geographic)

by Virginia Hughes, 21 December 2012

Here’s something you probably learned once in a biology class, more or less. There’s this molecule called DNA. It contains a long code that created you and is unique to you. And faithful copies of the code live inside the nucleus of every one of the trillions of cells in your body.

In a later class you may have learned a few exceptions to that “faithful copies” bit. Sometimes, especially during development, when cells are dividing into more cells, a mutation pops up in the DNA of a daughter cell. This makes the daughter cell and all of its progeny genetically distinct. The phenomenon is called ‘somatic mosaicism’, and it tends to happen in sperm cells, egg cells, immune cells, and cancer cells. But it’s pretty infrequent and, for most healthy people, inconsequential.

That’s what the textbooks say, anyway, and it’s also a common assumption in medical research. For instance, genetic studies of living people almost always collect DNA from blood draws or cheek swabs, even if investigating the tangled roots of, say, heart disease or diabetes or autism. The assumption is that whatever genetic blips show up in blood or saliva will recapitulate what’s in the (far less accessible) cells of the heart, pancreas, or brain.

Two recent reports suggest that somatic mosaicism is far more common than anybody ever realized — and that might be a good thing.

 

Colored bars show the locations of genetic glitches in tissues from each of the six subjects (inner vertical numbers). The numbers on the outer edge of the circle correspond to each of our 23 chromosomes, and each color represents a different organ. Image courtesy of PNAS

In the first study, Michael Snyder and colleagues looked at cells from 11 different organs and tissues obtained from routine autopsies of six unrelated people who had not died of cancer or any hereditary disease.

Then the scientists screened each tissue for small deletions or duplications of DNA, called copy number variations, or CNVs. These are fairly common in all of us.

In order to do genetic screens, researchers have to mash up a bunch of cells and pull DNA out of the aggregate. That makes research on somatic mutations tricky, because you can’t tell how some cells in the tissue might be different from others. The researchers got around that problem by doing side-by-side comparisons of the tissues from each person. If one tissue has a CNV and the other one doesn’t, they reasoned, then it must be a somatic glitch.
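The tissue-versus-tissue logic is simple enough to sketch in code. Assuming each tissue's screen yields a set of CNV calls (the tissue names and variant labels below are invented for illustration), a variant present in one tissue but missing from another tissue of the same person cannot be inherited, so it is flagged as a candidate somatic CNV:

```python
# Sketch of the side-by-side comparison idea: a CNV present in one tissue
# but absent from another tissue of the same person cannot be germline,
# so it is flagged as a candidate somatic variant. All data are made up.

def candidate_somatic_cnvs(tissue_cnvs):
    """tissue_cnvs: dict mapping tissue name -> set of CNV identifiers.

    Returns a dict mapping each candidate somatic CNV to the sorted list
    of tissues in which it was detected.
    """
    all_cnvs = set().union(*tissue_cnvs.values())
    # Inherited (germline) CNVs should appear in every tissue.
    shared = set.intersection(*tissue_cnvs.values())
    # Anything missing from at least one tissue is a candidate somatic CNV.
    return {cnv: sorted(t for t, calls in tissue_cnvs.items() if cnv in calls)
            for cnv in all_cnvs - shared}

# Hypothetical CNV calls from two tissues of one donor:
calls = {
    "liver":    {"chr1:del:1.2Mb", "chr7:dup:0.4Mb"},
    "pancreas": {"chr1:del:1.2Mb"},
}
# The chr7 duplication is liver-only, hence a candidate somatic CNV.
print(candidate_somatic_cnvs(calls))
```

A real pipeline would of course have to handle noisy calls, detection thresholds, and partially overlapping variants; this sketch only captures the reasoning described above.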

As they reported in October in the Proceedings of the National Academy of Sciences, Snyder’s team found a total of 73 somatic CNVs in the six people, cropping up in tissues all over the body, including the brain, liver, pancreas and small intestine. “Your genome is not static — it does change through development,” says Snyder, chair of the genetics department at Stanford. “People knew that, but it had never been systematically studied.”

OK, but do somatic mutations do anything? It’s hard to tell, particularly because postmortem studies offer no living person to observe. Still, the scientists showed that 79 percent of the somatic mutations fell inside of genes, and most of those genes play a role in the cell’s everyday regulatory processes, like metabolism, phosphorylation, and turning genes on. So the somatic mutations could very well have had an impact.

In the last paragraph of their paper the researchers mention that the findings could also have big implications for studies of induced pluripotent stem (iPS) cells. This line of research is getting increasingly popular, for good reason. With iPS technology, researchers start with a small piece of skin (or…) from a living person. They then expose those skin cells to a certain chemical concoction that reprograms them back into a primordial state. Once the stem cells are created, researchers can put them in yet another chemical soup that coaxes them to differentiate into whatever type of cell the scientists want to study. You can see why it’s cool: The technique allows scientists to create cells — each holding an individual’s unique DNA code, remember — in a Petri dish. Researchers can study neurons of children with autism, for example, without ever touching their brains.

Trouble is, several groups have reported that iPS cells carry mutations that the original skin cells don’t have. This suggests that something screwy is happening during the reprogramming process, defeating the whole purpose of making the cells. (Fellow Phenomena contributor Ed Yong wrote a fantastic post about the hoopla last year.)

But that last paragraph of Snyder’s study offers a bit of hope. What if the mutations that crop up in iPS cells actually were in the skin cells they came from, but just didn’t get picked up because those skin cells were mixed with other skin cells that didn’t have the mutations? In other words, what if skin cells, like all those other tissues they looked at in the paper, are mosaics?

The second new study, published last month in Nature, finds exactly that.

Flora Vaccarino's team at Yale sequenced the entire genomes of 21 iPS cell lines, three each from seven people, as well as the skin cells that the iPS cells originated from. It turns out that each iPS line has an average of two CNVs and that at least half of these come from somatic mutations in the skin cells. (The researchers used special techniques for amplifying the DNA of the skin cells, so that they could detect CNVs present in only a fraction of the cells.)

That means two things. First, researchers using iPS cells can exhale. Their freaky reprogramming process doesn’t seem to create too much genetic havoc in the iPS cells. And second, somatic mosaicism happens a lot. Vaccarino’s study estimates that a full 30 percent of the skin cells carry somatic mutations.

Our widespread mosaicism may have implications for certain diseases. Somatic mutations have been strongly linked to tumors, for example, so it could be that people who have a lot of mosaicism are at a higher risk of cancer. But there’s also a positive way to spin it. Somatic mutations give our genomes an extra layer of flexibility, in a sense, that can come in handy. Snyder gives a good example in his study. If you have a group of cells that are constantly exposed to viruses, say, then it might be beneficial to have a somatic mutation pop up that damages receptors on the cell that viruses can latch on to.

But there’s likely a more parsimonious explanation for all of those genetic copying mistakes. “When you’re replicating DNA, there’s a certain expense to keep everything perfect,” Snyder says, meaning that it would cost the cell a lot of energy to ensure that every new cell was identical to the last. And in the end, he adds, that extra expense may not be worth it. “Having imperfections could just be an economically beneficial way for organisms to do things.”

Photos from Shannon O’Hara and James Diin, courtesy of National Geographic’s My Shot

State Assembly Approves Law Establishing the State REDD+ System in Mato Grosso (ICV)

André Alves – Special to the Instituto Centro de Vida – ICV

21/12/2012

The Legislative Assembly of Mato Grosso approved on Wednesday (19/12) a bill creating the State REDD+ System in Mato Grosso. The bill, authored by the executive branch, now goes to Governor Silval Barbosa (PMDB) for signing and is not expected to undergo changes to its text. The system aims to promote the reduction of greenhouse gas emissions from deforestation and forest degradation, to encourage sustainable forest management, and to increase carbon stocks in the state.

"The approval of this law represents a regulatory milestone for the state, because we will share the benefits of environmental conservation," declared state Secretary of the Environment Vicente Falcão. "It is an achievement of the government, but also of civil society, which spent two years discussing a proposal that arrived at the right maturity," he added.

The text approved by the Assembly also provides for the effective participation of the different social groups involved in or affected by REDD actions. In other words, avoided-deforestation projects and programs in settlement areas or indigenous lands, for example, will have to meet the demands of those communities and include a mechanism for the fair distribution of benefits.

For the secretary, implementing a REDD+ system consolidates the state's environmental policies and marks an important step toward the goal of reducing deforestation in the state by 89% by 2020. "Now there is a new reading: beyond command and control, we will have incentive instruments to curb deforestation," he concluded.

Laurent Micol, executive coordinator of the Instituto Centro de Vida – ICV, the organization that coordinates the REDD Working Group (GT REDD) within the Mato Grosso Forum on Climate Change, explains that with the law's approval, Mato Grosso takes on a national leadership role in avoided-deforestation instruments. "Deforestation-reduction projects and programs already under way will be able to fit within the law, and future projects will have to safeguard the social and environmental requirements it establishes," he explained. "There is also greater security for investors and donors in these projects and programs," he added. Micol cited as an example the recent donation from the German bank KFW, which transferred 8 million reais to the government of Acre, the first state in the Amazon to have legislation of this kind, as payment for environmental services.

Discussion of the bill began with the creation of the REDD Working Group in March 2009, within the Mato Grosso Forum on Climate Change. The group spent two years drafting the proposal, which was debated in public consultations and received suggested amendments over the internet. In all, 171 proposals were analyzed before the final version of the draft was validated by the Forum.

Once the law is signed, the government is expected to establish the Managing Council of the State REDD+ System, which will have deliberative powers. The council will have 12 representatives, with equal representation for the state and federal governments on one side and civil society on the other. Meanwhile, the GT REDD is working on a proposal for a sectoral forest-management program to be presented to the State Secretariat of the Environment (Sema).

About the GT REDD

The GT REDD MT has 78 members, including Sema and other state secretariats, the State Attorney General's Office, the Legislative Assembly, representatives of organizations from the agriculture, livestock and forestry sectors, civil society organizations and social movements, the Brazilian Bar Association (Ordem dos Advogados do Brasil) and the Federal University of Mato Grosso. The ICV was elected to coordinate and facilitate the group's work.

REDD+

REDD+ stands for Reducing Emissions from Deforestation and Forest Degradation, plus forest conservation and management and the enhancement of carbon stocks.

More information: ICV, 65 3621-3148

Will we ever have cyborg brains? (IO9)
DEC 19, 2012 2:40 PM

By George Dvorsky

Over at BBC Future, computer scientist Martin Angler has put together a provocative piece about humanity’s collision course with cybernetic technologies. Today, says Angler, we’re using neural interface devices and other assistive technologies to help the disabled. But in short order we’ll be able to radically enhance human capacities — prompting him to wonder about the extent to which we might cyborgize our brains.

Angler points to two recent and equally remarkable breakthroughs: a paralyzed stroke victim who was able to guide a robot arm to deliver a hot drink, and a thought-controlled prosthetic hand that could grasp a variety of objects.

Admitting that it’s still early days, Angler speculates about the future:

Yet it’s still a far cry from the visions of man fused with machine, or cyborgs, that grace computer games or sci-fi. The dream is to create the type of brain augmentations we see in fiction that provide cyborgs with advantages or superhuman powers. But the ones being made in the lab only aim to restore lost functionality – whether it’s brain implants that restore limb control, or cochlear implants for hearing.

Creating implants that improve cognitive capabilities, such as an enhanced vision “gadget” that can be taken from a shelf and plugged into our brain, or implants that can restore or enhance brain function, is understandably a much tougher task. But some research groups are beginning to make some inroads.

For instance, neuroscientists Matti Mintz from Tel Aviv University and Paul Verschure from Universitat Pompeu Fabra in Barcelona, Spain, are trying to develop an implantable chip that can restore lost movement through the ability to learn new motor functions, rather than regaining limb control. Verschure’s team has developed a mathematical model that mimics the flow of signals in the cerebellum, the region of the brain that plays an important role in movement control. The researchers programmed this model onto a circuit and connected it with electrodes to a rat’s brain. If they tried to teach the rat a conditioned motor reflex – to blink its eye when it sensed an air puff – while its cerebellum was “switched off” by being anaesthetised, it couldn’t respond. But when the team switched the chip on, this recorded the signal from the air puff, processed it, and sent electrical impulses to the rat’s motor neurons. The rat blinked, and the effect lasted even after it woke up.

Be sure to read the entire article, as Angler discusses uplifted monkeys, the tricky line that divides a human brain from a cybernetic one, and the all-important question of access.

Image: BBC/Science Photo Library.

Bullying by Childhood Peers Leaves a Trace That Can Change the Expression of a Gene Linked to Mood (Science Daily)

Dec. 18, 2012 — A recent study by a researcher at the Centre for Studies on Human Stress (CSHS) at the Hôpital Louis-H. Lafontaine and professor at the Université de Montréal suggests that bullying by peers changes the structure surrounding a gene involved in regulating mood, making victims more vulnerable to mental health problems as they age.

The study published in the journal Psychological Medicine seeks to better understand the mechanisms that explain how difficult experiences disrupt our response to stressful situations. “Many people think that our genes are immutable; however this study suggests that environment, even the social environment, can affect their functioning. This is particularly the case for victimization experiences in childhood, which change not only our stress response but also the functioning of genes involved in mood regulation,” says Isabelle Ouellet-Morin, lead author of the study.

A previous study by Ouellet-Morin, conducted at the Institute of Psychiatry in London (UK), showed that bullied children secrete less cortisol — the stress hormone — but had more problems with social interaction and aggressive behaviour. The present study indicates that the reduction of cortisol, which occurs around the age of 12, is preceded two years earlier by a change in the structure surrounding a gene (SERT) that regulates serotonin, a neurotransmitter involved in mood regulation and depression.

To achieve these results, 28 pairs of identical twins with a mean age of 10 years were analyzed separately according to their experiences of bullying by peers: one twin had been bullied at school while the other had not. “Since they were identical twins living in the same conditions, changes in the chemical structure surrounding the gene cannot be explained by genetics or family environment. Our results suggest that victimization experiences are the source of these changes,” says Ouellet-Morin. According to the author, it would now be worthwhile to evaluate the possibility of reversing these psychological effects, in particular, through interventions at school and support for victims.

Journal Reference:

  1. I. Ouellet-Morin, C. C. Y. Wong, A. Danese, C. M. Pariante, A. S. Papadopoulos, J. Mill, L. Arseneault. Increased serotonin transporter gene (SERT) DNA methylation is associated with bullying victimization and blunted cortisol response to stress in childhood: a longitudinal study of discordant monozygotic twins. Psychological Medicine, 2012; DOI: 10.1017/S0033291712002784

Emerging Ethical Dilemmas in Science and Technology (Science Daily)

Dec. 17, 2012 — As a new year approaches, the University of Notre Dame’s John J. Reilly Center for Science, Technology and Values has announced its inaugural list of emerging ethical dilemmas and policy issues in science and technology for 2013.

The Reilly Center explores conceptual, ethical and policy issues where science and technology intersect with society from different disciplinary perspectives. Its goal is to promote the advancement of science and technology for the common good.

The center generated its inaugural list with the help of Reilly fellows, other Notre Dame experts and friends of the center.

The center aimed to present a list of items for scientists and laypeople alike to consider in the coming months and years as new technologies develop. It will feature one of these issues on its website each month in 2013, giving readers more information, questions to ask and resources to consult.

The ethical dilemmas and policy issues are:

Personalized genetic tests/personalized medicine

Within the last 10 years, the creation of fast, low-cost genetic sequencing has given the public direct access to genome sequencing and analysis, with little or no guidance from physicians or genetic counselors on how to process the information. What are the potential privacy issues, and how do we protect this very personal and private information? Are we headed toward a new era of therapeutic intervention to increase quality of life, or a new era of eugenics?

Hacking into medical devices

Implanted medical devices, such as pacemakers, are susceptible to hackers. Barnaby Jack, of security vendor IOActive, recently demonstrated the vulnerability of a pacemaker by breaching the security of the wireless device from his laptop and reprogramming it to deliver an 830-volt shock. How do we make sure these devices are secure?

Driverless Zipcars

In three states — Nevada, Florida, and California — it is now legal for Google to operate its driverless cars. Google’s goal is to create a fully automated vehicle that is safer and more effective than a human-operated vehicle, and the company plans to marry this idea with the concept of the Zipcar. The ethics of automation and equality of access for people of different income levels are just a taste of the difficult ethical, legal and policy questions that will need to be addressed.

3-D printing

Scientists are attempting to use 3-D printing to create everything from architectural models to human organs, but we could be looking at a future in which we can print personalized pharmaceuticals or home-printed guns and explosives. For now, 3-D printing is largely the realm of artists and designers, but we can easily envision a future in which 3-D printers are affordable, patterns abound for products both benign and malicious, and the manufacturing sector is cut out completely.

Adaptation to climate change

The differential susceptibility of people around the world to climate change warrants an ethical discussion. We need to identify effective and safe ways to help people deal with the effects of climate change, as well as learn to manage and manipulate wild species and nature in order to preserve biodiversity. Some of these adaptation strategies might be highly technical (e.g., building sea walls to hold back sea-level rise), but others are social and cultural (e.g., changing agricultural practices).

Low-quality and counterfeit pharmaceuticals

Until recently, detecting low-quality and counterfeit pharmaceuticals required access to complex testing equipment, often unavailable in developing countries where these problems abound. The enormous amount of trade in pharmaceutical intermediaries and active ingredients raises a number of issues, from the technical (improvements in manufacturing practices and analytical capabilities) to the ethical and legal (for example, India ruled in favor of manufacturing life-saving drugs, even if it violates U.S. patent law).

Autonomous systems

Machines (both for peaceful purposes and for war fighting) are increasingly evolving from human-controlled, to automated, to autonomous, with the ability to act on their own without human input. As these systems operate without human control and are designed to function and make decisions on their own, the ethical, legal, social and policy implications have grown exponentially. Who is responsible for the actions undertaken by autonomous systems? If robotic technology can potentially reduce the number of human fatalities, is it the responsibility of scientists to design these systems?

Human-animal hybrids (chimeras)

So far scientists have kept human-animal hybrids on the cellular level. According to some, even more modest experiments involving animal embryos and human stem cells violate human dignity and blur the line between species. Is interspecies research the next frontier in understanding humanity and curing disease, or a slippery slope, rife with ethical dilemmas, toward creating new species?

Ensuring access to wireless and spectrum

Mobile wireless connectivity is having a profound effect on society in both developed and developing countries. These technologies are completely transforming how we communicate, conduct business, learn, form relationships, navigate and entertain ourselves. At the same time, government agencies increasingly rely on the radio spectrum for their critical missions. This confluence of wireless technology developments and societal needs presents numerous challenges and opportunities for making the most effective use of the radio spectrum. We now need to have a policy conversation about how to make the most effective use of the precious radio spectrum, and to close the digital access divide for underserved (rural, low-income, developing areas) populations.

Data collection and privacy

How often do we consider the massive amounts of data we give to commercial entities when we use social media, store discount cards or order goods via the Internet? Now that microprocessors and permanent memory are inexpensive technology, we need to think about the kinds of information that should be collected and retained. Should we create a diabetic insulin implant that could notify your doctor or insurance company when you make poor diet choices, and should that decision make you ineligible for certain types of medical treatment? Should cars be equipped to monitor speed and other measures of good driving, and should this data be subpoenaed by authorities following a crash? These issues require appropriate policy discussions in order to bridge the gap between data collection and meaningful outcomes.

Human enhancements

Pharmaceutical, surgical, mechanical and neurological enhancements are already available for therapeutic purposes. But these same enhancements can be used to magnify human biological function beyond the societal norm. Where do we draw the line between therapy and enhancement? How do we justify enhancing human bodies when so many individuals still lack access to basic therapeutic medicine?

Scientists Pioneer Method to Predict Environmental Collapse (Science Daily)

Researcher Enlou Zhang takes a core sample from the bed of Lake Erhai in China. (Credit: University of Southampton)

Nov. 19, 2012 — Scientists at the University of Southampton are pioneering a technique to predict when an ecosystem is likely to collapse, which may also have potential for foretelling crises in agriculture, fisheries or even social systems.

The researchers have applied a mathematical model to a real world situation, the environmental collapse of a lake in China, to help prove a theory which suggests an ecosystem ‘flickers’, or fluctuates dramatically between healthy and unhealthy states, shortly before its eventual collapse.

Head of Geography at Southampton, Professor John Dearing explains: “We wanted to prove that this ‘flickering’ occurs just ahead of a dramatic change in a system — be it a social, ecological or climatic one — and that this method could potentially be used to predict future critical changes in other impacted systems in the world around us.”

A team led by Dr Rong Wang extracted core samples from sediment at the bottom of Lake Erhai in Yunnan province, China, and charted the levels and variation of fossilised algae (diatoms) over a 125-year period. Analysis of the core sample data showed the algae communities remained relatively stable up until about 30 years before the lake’s collapse into a turbid or polluted state. However, the core samples for these last three decades showed much fluctuation, indicating there had been numerous dramatic changes in the types and concentrations of algae present in the water — evidence of the ‘flickering’ before the lake’s final definitive change of state.

Rong Wang comments: “By using the algae as a measure of the lake’s health, we have shown that its eco-system ‘wobbled’ before making a critical transition — in this instance, to a turbid state.

“Dramatic swings can be seen in other data, suggesting large external impacts on the lake over a long time period — for example, pollution from fertilisers, sewage from fields and changes in water levels — caused the system to switch back and forth rapidly between alternate states. Eventually, the lake’s ecosystem could no longer cope or recover — losing resilience and reaching what is called a ‘tipping point’ and collapsing altogether.”
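The statistical signature behind "flickering" can be illustrated with a toy calculation: in a time series approaching a tipping point, the variance within a sliding window rises well before the transition itself. The series below is synthetic, generated purely to show the computation; it is not the Lake Erhai diatom data:

```python
import random

# Illustrative only: a synthetic "algae abundance" series that is stable
# for the first half and flickers (much noisier) in the second half,
# loosely mimicking the kind of lake-sediment record described above.
random.seed(42)
stable  = [10 + random.gauss(0, 0.5) for _ in range(100)]
flicker = [10 + random.gauss(0, 3.0) for _ in range(100)]
series = stable + flicker

def rolling_variance(xs, window):
    """Variance of xs computed over a sliding window of the given length."""
    out = []
    for i in range(len(xs) - window + 1):
        w = xs[i:i + window]
        mean = sum(w) / window
        out.append(sum((x - mean) ** 2 for x in w) / window)
    return out

var = rolling_variance(series, window=25)
# Variance in the flickering half dwarfs the stable half -- the kind of
# rising early-warning signal the researchers looked for in the diatom data.
print(var[0], var[-1])
```

Real early-warning analyses also track rising autocorrelation and skewness, and must detrend the data first; this sketch shows only the simplest ingredient of the method.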

The researchers hope the method they have trialled in China could be applied to other regions and landscapes.

Co-author Dr Pete Langdon comments: “In this case, we used algae as a marker of how the lake’s ecosystem was holding-up against external impacts — but who’s to say we couldn’t use this method in other ways? For example, perhaps we should look for ‘flickering’ signals in climate data to try and foretell impending crises?”

Journal Reference:

  1. Rong Wang, John A. Dearing, Peter G. Langdon, Enlou Zhang, Xiangdong Yang, Vasilis Dakos, Marten Scheffer. Flickering gives early warning signals of a critical transition to a eutrophic lake state. Nature, 2012; DOI: 10.1038/nature11655

Do We Live in a Computer Simulation Run by Our Descendants? Researchers Say Idea Can Be Tested (Science Daily)

The conical (red) surface shows the relationship between energy and momentum in special relativity, a fundamental theory concerning space and time developed by Albert Einstein, and is the expected result if our universe is not a simulation. The flat (blue) surface illustrates the relationship between energy and momentum that would be expected if the universe is a simulation with an underlying cubic lattice. (Credit: Martin Savage)

Dec. 10, 2012 — A decade ago, a British philosopher put forth the notion that the universe we live in might in fact be a computer simulation run by our descendants. While that seems far-fetched, perhaps even incomprehensible, a team of physicists at the University of Washington has come up with a potential test to see if the idea holds water.

The concept that current humanity could possibly be living in a computer simulation comes from a 2003 paper published in Philosophical Quarterly by Nick Bostrom, a philosophy professor at the University of Oxford. In the paper, he argued that at least one of three possibilities is true:

  • The human species is likely to go extinct before reaching a “posthuman” stage.
  • Any posthuman civilization is very unlikely to run a significant number of simulations of its evolutionary history.
  • We are almost certainly living in a computer simulation.

He also held that “the belief that there is a significant chance that we will one day become posthumans who run ancestor simulations is false, unless we are currently living in a simulation.”

With current limitations and trends in computing, it will be decades before researchers will be able to run even primitive simulations of the universe. But the UW team has suggested tests that can be performed now, or in the near future, that are sensitive to constraints imposed on future simulations by limited resources.

Currently, supercomputers using a technique called lattice quantum chromodynamics and starting from the fundamental physical laws that govern the universe can simulate only a very small portion of the universe, on the scale of one 100-trillionth of a meter, a little larger than the nucleus of an atom, said Martin Savage, a UW physics professor.

Eventually, more powerful simulations will be able to model on the scale of a molecule, then a cell and even a human being. But it will take many generations of growth in computing power to be able to simulate a large enough chunk of the universe to understand the constraints on physical processes that would indicate we are living in a computer model.

However, Savage said, there are signatures of resource constraints in present-day simulations that are likely to exist as well in simulations in the distant future, including the imprint of an underlying lattice if one is used to model the space-time continuum.

The supercomputers performing lattice quantum chromodynamics calculations essentially divide space-time into a four-dimensional grid. That allows researchers to examine what is called the strong force, one of the four fundamental forces of nature and the one that binds subatomic particles called quarks and gluons together into neutrons and protons at the core of atoms.
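A toy calculation hints at why such a grid would leave a detectable imprint. The comparison below uses the standard free-field lattice dispersion relation rather than the full lattice-QCD machinery, with an arbitrary lattice spacing; it shows only the qualitative effect: continuum and lattice energies agree at low momentum and diverge near the lattice cutoff:

```python
import math

# Sketch: for a free massless particle the continuum relation is E = p,
# while a lattice with spacing `a` effectively replaces p by
# (2/a)*sin(p*a/2) -- the standard free-field lattice dispersion relation
# (not the full QCD calculation). Units are arbitrary.
a = 1.0  # lattice spacing, arbitrary units

def continuum_energy(p):
    return p

def lattice_energy(p, spacing=a):
    return (2.0 / spacing) * math.sin(p * spacing / 2.0)

# At low momentum the two agree; near the lattice cutoff (p ~ pi/a) they
# deviate strongly -- the kind of distortion the UW team proposes to look
# for in the highest-energy cosmic rays.
for p in (0.1, 1.0, 3.0):
    print(p, continuum_energy(p), round(lattice_energy(p), 4))
```

The deviation grows with momentum, which is why the highest-energy cosmic rays, not everyday physics, would carry the signature.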

“If you make the simulations big enough, something like our universe should emerge,” Savage said. Then it would be a matter of looking for a “signature” in our universe that has an analog in the current small-scale simulations.

Savage and colleagues Silas Beane of the University of New Hampshire, who collaborated while at the UW’s Institute for Nuclear Theory, and Zohreh Davoudi, a UW physics graduate student, suggest that the signature could show up as a limitation in the energy of cosmic rays.

In a paper they have posted on arXiv, an online archive for preprints of scientific papers in a number of fields, including physics, they say that the highest-energy cosmic rays would not travel along the edges of the lattice in the model but would travel diagonally, and they would not interact equally in all directions as they otherwise would be expected to do.

“This is the first testable signature of such an idea,” Savage said.

If such a concept turned out to be reality, it would raise other possibilities as well. For example, Davoudi suggests that if our universe is a simulation, then those running it could be running other simulations as well, essentially creating other universes parallel to our own.

“Then the question is, ‘Can you communicate with those other universes if they are running on the same platform?'” she said.

Journal References:

  1. Silas R. Beane, Zohreh Davoudi, Martin J. Savage. Constraints on the Universe as a Numerical Simulation. arXiv, 2012 [link]
  2. Nick Bostrom. Are You Living in a Computer Simulation? Philosophical Quarterly, 2003; Vol. 53, No. 211, pp. 243-255 [link]

‘Missing’ Polar Weather Systems Could Impact Climate Predictions (Science Daily)

Intense but small-scale polar storms could make a big difference to climate predictions according to new research. (Credit: NEODAAS / University of Dundee)

Dec. 16, 2012 — Intense but small-scale polar storms could make a big difference to climate predictions, according to new research from the University of East Anglia and the University of Massachusetts.

Difficult-to-forecast polar mesoscale storms occur frequently over the polar seas; however, they are missing in most climate models.

Research published Dec. 16 in Nature Geoscience shows that their inclusion could paint a different picture of climate change in years to come.

Polar mesoscale storms are capable of producing hurricane-strength winds which cool the ocean and lead to changes in its circulation.

Prof Ian Renfrew, from UEA’s School of Environmental Sciences, said: “These polar lows are typically under 500 km in diameter and over within 24-36 hours. They’re difficult to predict, but we have shown they play an important role in driving large-scale ocean circulation.

“There are hundreds of them a year in the North Atlantic, and dozens of strong ones. They create a lot of stormy weather, strong winds and snowfall — particularly over Norway, Iceland, and Canada, and occasionally over Britain, such as in 2003 when a massive dump of snow brought the M11 to a standstill for 24 hours.

“We have shown that adding polar storms into computer-generated models of the ocean results in significant changes in ocean circulation — including an increase in heat travelling north in the Atlantic Ocean and more overturning in the Sub-polar seas.

“At present, climate models don’t have a high enough resolution to account for these small-scale polar lows.

“As Arctic Sea ice continues to retreat, polar lows are likely to migrate further north, which could have consequences for the ‘thermohaline’ or northward ocean circulation — potentially leading to it weakening.”

Alan Condron from the University of Massachusetts said: “By simulating polar lows, we find that the area of the ocean that becomes denser and sinks each year increases and causes the amount of heat being transported towards Europe to intensify.

“The fact that climate models are not simulating these storms is a real problem because these models will incorrectly predict how much heat is being moved northward towards the poles. This will make it very difficult to reliably predict how the climate of Europe and North America will change in the near-future.”

Prof Renfrew added: “Climate models are always improving, and there is a trade-off between the resolution of the model, the complexity of the model, and the number of simulations you can carry out. Our work suggests we should put some more effort into resolving such storms.”

‘The impact of polar mesoscale storms on Northeast Atlantic ocean circulation’ by Alan Condron from the University of Massachusetts (US) and Ian Renfrew from UEA (UK), is published in Nature Geoscience on December 16, 2012.

Journal Reference:

  1. Alan Condron, Ian A. Renfrew. The impact of polar mesoscale storms on northeast Atlantic Ocean circulation. Nature Geoscience, 2012; DOI: 10.1038/ngeo1661

Water Resources Management and Policy in a Changing World: Where Do We Go from Here? (Science Daily)

Nov. 26, 2012 — Visualize a dusty place where stream beds are sand and lakes are flats of dried mud. Are we on Mars? In fact, we’re on arid parts of Earth, a planet where water covers some 70 percent of the surface.

How long will water be readily available to nourish life here?

Scientists funded by the National Science Foundation’s (NSF) Dynamics of Coupled Natural and Human Systems (CNH) program are finding new answers.

NSF-supported CNH researchers will address water resources management and policy in a changing world at the fall meeting of the American Geophysical Union (AGU), held in San Francisco from Dec. 3-7, 2012.

In the United States, more than 36 states face water shortages. Other parts of the world are faring no better.

What are the causes? Do the reasons lie in climate change, population growth or still other factors?

Among the topics to be covered at AGU are sociohydrology, patterns in coupled human-water resource systems and the resilience of coupled natural and human systems to global change.

Researchers will report, for example, that human population growth in the Andes outweighs climate change as the culprit in the region’s dwindling water supplies. Does the finding apply in other places, and perhaps around the globe?

Scientists presenting results are affiliated with CHANS-Net, an international network of researchers who study coupled natural and human systems.

NSF’s CNH program supports CHANS-Net, with coordination from the Center for Systems Integration and Sustainability at Michigan State University.

CHANS-Net facilitates communication and collaboration among scientists, engineers and educators striving to find sustainable solutions that benefit the environment while enabling people to thrive.

“For more than a decade, NSF’s CNH program has supported projects that explore the complex ways people and natural systems interact with each other,” says Tom Baerwald, NSF CNH program director.

“CHANS-Net and its investigators represent a broad range of projects. They’re developing a new, better understanding of how our planet works. CHANS-Net researchers are finding practical answers for how people can prosper while maintaining environmental quality.”

CNH and CHANS-Net are part of NSF’s Science, Engineering and Education for Sustainability (SEES) investment. NSF’s Directorates for Geosciences; Social, Behavioral and Economic Sciences; and Biological Sciences support the CNH program.

“CHANS-Net has grown to more than 1,000 members who span generations of natural and social scientists from around the world,” says Jianguo “Jack” Liu, principal investigator of CHANS-Net and Rachel Carson Chair in Sustainability at Michigan State University.

“CHANS-Net is very happy to support another 10 CHANS Fellows–outstanding young scientists–to attend AGU, give presentations there, and learn from leaders in CHANS research and build professional networks. We’re looking forward to these exciting annual CHANS-Net events.”

Speakers at AGU sessions organized by CHANS-Net will discuss such subjects as the importance of water conservation in the 21st century; the Gila River and whether its flows might reduce the risk of water shortages in the Colorado River Basin; and historical evolution of the hydrological functioning of the old Lake Xochimilco in the southern Mexico Basin.

Other topics to be addressed include water conflicts in a changing world; system modeling of the Great Salt Lake in Utah to improve the hydro-ecological performance of diked wetlands; and integrating economics into water resources systems analysis.

“Of all our natural resources, water has become the most precious,” wrote Rachel Carson in 1962 in Silent Spring. “By a strange paradox, most of the Earth’s abundant water is not usable for agriculture, industry, or human consumption because of its heavy load of sea salts, and so most of the world’s population is either experiencing or is threatened with critical shortages.”

Fifty years later, more than 100 scientists will present research reflecting Rachel Carson’s conviction that “seldom if ever does nature operate in closed and separate compartments, and she has not done so in distributing Earth’s water supply.”

Why Sandy Has Meteorologists Scared, in 4 Images (The Atlantic)

By Alexis Madrigal

Oct. 28, 2012, 12:23 PM ET

She’s huge. She’s strong and might get stronger. She’s strange. She’s directing the might of her storm surge right at New York City.

[Image: Sandy approaches the East Coast]

Update 10/29, 4:49pm: The Eastern seaboard has battened down the hatches. Hurricane Sandy is expected to make landfall in New Jersey in the next few hours, but flooding has already been reported in Atlantic City and parts of New York during this morning's high-tide cycle. The Metropolitan Transportation Authority has already shut down rail, bus, and subway service in NYC, as did Washington, DC's authorities. All eyes are on the 8 o'clock hour, when the storm surge from Sandy will combine with a very high tide to create maximum water levels. In the worst-case scenario, the storm surge will hit precisely at the moment the tide peaks, at 8:53pm. In that scenario, New York City in particular could sustain substantial damage, especially to its transportation infrastructure.

The good news, if there is any, is that the forecast hasn’t worsened much. It is what it has been, which is grim. Meteorologist Jeff Masters put it in simple terms. “As the core of Sandy moves ashore, the storm will carry with it a gigantic bulge of water that will raise waters levels to the highest storm tides ever seen in over a century of record keeping, along much of the coastline of New Jersey and New York,” Masters wrote today. “The peak danger will be between 7 pm – 10 pm, when storm surge rides in on top of the high tide.”

Here’s the latest map of the prospective storm surge tonight. You can compare it to the image at the bottom, which shows what the forecast was yesterday.

[Image: probability of storm surge, Oct. 29 forecast]

* * *

Hurricane Sandy has already caused her first damage in New York: the subway system will be shut as of 7pm tonight. Meteorologists are scared, so city planners are scared.

For many, the hullabaloo raises memories of Irene, which despite causing $15.6 billion worth of damages in the United States, did not live up to its pre-arrival hype.

By almost all measures, this storm looks like it could be worse: higher winds, a path through a more populated area, worse storm surge, and a greater chance it’ll linger. The atmospherics, you might say, all point to this being the worst storm in recent history.

I’ve been watching weather nerds freak out about a few different graphs over the last several days, which they’ve sent around like sports fans would tweet a particularly vicious hit in the NFL. You don’t want to look, but you also can’t help it.

Dr. Ryan Maue, a meteorologist at WeatherBELL, put out this animated GIF of the storm's approach yesterday. “This is unprecedented – absolutely stunning upper-level configuration pinwheeling #Sandy on-shore like ping-pong ball,” he tweeted. It shows how cold air to the north and west of the storm spins Sandy into the mid-Atlantic coastline. (Nota bene: his models also show very high winds at skyscraper altitudes.)

[Animation: cold air pinwheeling Sandy onshore]

This morning, the Wall Street Journal's Eric Holthaus (@WSJweather) tweeted the following map. “Oh my…. I have never seen so much purple on this graphic. By far. Never,” he said. “Folks, please take this storm seriously.” The storm is strong *and* huge. And when it encounters the cold air from the north and west, it will develop renewed strength thanks to that interaction, a process known as “baroclinic enhancement.”

[Image: forecast map tweeted by Eric Holthaus]

This last graphic I created from National Oceanic and Atmospheric Administration data that has weather watchers worried. It shows the probability of a greater-than-six-foot storm surge in and around New York City. Hurricane Irene, by comparison, caused a four-foot surge.

[Image: probability of a greater-than-six-foot storm surge]

Note that the highest probabilities are focused tightly around New York City, which also happens to be the most densely populated area in the country. That's a very bad combination. Jeff Masters, author of the must-read storm blog Wunderground, laid out the general problem.

“[According to last night's forecast], the destructive potential of the storm surge was exceptionally high: 5.7 on a scale of 0 to 6,” he wrote. “This is a higher destructive potential than any hurricane observed between 1969 – 2005, including Category 5 storms like Katrina, Rita, Wilma, Camille, and Andrew.”

Specifically, New York City's infrastructure may take an unprecedented hit. The subway narrowly escaped flooding during Irene, and Sandy (for all the reasons above) is expected to be worse. So…

“According to the latest storm surge forecast for NYC from NHC, Sandy’s storm surge is expected to be several feet higher than Irene’s. If the peak surge arrives near Monday evening’s high tide at 9 pm EDT, a portion of New York City’s subway system could flood, resulting in billions of dollars in damage,” Masters concluded. “I give a 50% chance that Sandy’s storm surge will end up flooding a portion of the New York City subway system.”

Update 1:06pm: To get a taste of how forecasters are feeling, here is The Weather Channel’s senior meteorologist, Stu Ostro:

History is being written as an extreme weather event continues to unfold, one which will occupy a place in the annals of weather history as one of the most extraordinary to have affected the United States.

On Twitter, Alan Robinson pointed out that I left out another scary map, the rainfall forecast, which shows the storm “sitting over the Delaware and Susquehanna watersheds.” Much of the damage that Irene caused came from flooding rivers. However, there is one key factor militating against similar damage, Jeff Masters of Wunderground says. Irene hit when the ground was already very wet. Sandy is striking when ground moisture is roughly average. Here's Masters's whole statement:

Hurricane Irene caused $15.8 billion in damage, most of it from river flooding due to heavy rains. However, the region most heavily impacted by Irene’s heavy rains had very wet soils and very high river levels before Irene arrived, due to heavy rains that occurred in the weeks before the hurricane hit. That is not the case for Sandy; soil moisture is near average over most of the mid-Atlantic, and is in the lowest 30th percentile in recorded history over much of Delaware and Southeastern Maryland. One region of possible concern is the Susquehanna River Valley in Eastern Pennsylvania, where soil moisture is in the 70th percentile, and river levels are in the 76th – 90th percentile. This area is currently expected to receive 3 – 6 inches of rain (Figure 4), which is probably not enough to cause catastrophic flooding like occurred for Hurricane Irene. I expect that river flooding from Sandy will cause less than $1 billion in damage.

When data prediction is a game, the experts lose out (New Scientist)

Specialist Knowledge Is Useless and Unhelpful

By Peter Aldhous | Posted Saturday, Dec. 8, 2012, at 7:45 AM ET

Airplanes at an airport. iStockphoto/Thinkstock.

Jeremy Howard founded email company FastMail and the Optimal Decisions Group, which helps insurance companies set premiums. He is now president and chief scientist of Kaggle, which has turned data prediction into sport.

Peter Aldhous: Kaggle has been described as “an online marketplace for brains.” Tell me about it.
Jeremy Howard: It’s a website that hosts competitions for data prediction. We’ve run a whole bunch of amazing competitions. One asked competitors to develop algorithms to mark students’ essays. One that finished recently challenged competitors to develop a gesture-learning system for the Microsoft Kinect. The idea was to show the controller a gesture just once, and the algorithm would recognize it in future. Another competition predicted the biological properties of small molecules being screened as potential drugs.

PA: How exactly do these competitions work?
JH: They rely on techniques like data mining and machine learning to predict future trends from current data. Companies, governments, and researchers present data sets and problems, and offer prize money for the best solutions. Anyone can enter: We have nearly 64,000 registered users. We've discovered that creative data scientists can solve problems in every field better than experts in those fields can.

PA: These competitions deal with very specialized subjects. Do experts enter?
JH: Oh yes. Every time a new competition comes out, the experts say: “We’ve built a whole industry around this. We know the answers.” And after a couple of weeks, they get blown out of the water.

PA: So who does well in the competitions?
JH: People who can just see what the data is actually telling them without being distracted by industry assumptions or specialist knowledge. Jason Tigg, who runs a pretty big hedge fund in London, has done well again and again. So has Xavier Conort, who runs a predictive analytics consultancy in Singapore.

PA: You were once on the leader board yourself. How did you get involved?
JH: It was a long and strange path. I majored in philosophy in Australia, worked in management consultancy for eight years, and then in 1999 I founded two start-ups—one an email company, the other helping insurers optimize risks and profits. By 2010, I had sold them both. I started learning Chinese and building amplifiers and speakers because I hadn’t made anything with my hands. I travelled. But it wasn’t intellectually challenging enough. Then, at a meeting of statistics users in Melbourne, somebody told me about Kaggle. I thought: “That looks intimidating and really interesting.”

PA: How did your first competition go?
JH: Setting my expectations low, my goal was to not come last. But I actually won it. It was on forecasting tourist arrivals and departures at different destinations. By the time I went to the next statistics meeting I had won two out of the three competitions I entered. Anthony Goldbloom, the founder of Kaggle, was there. He said: “You’re not Jeremy Howard, are you? We’ve never had anybody win two out of three competitions before.”

PA: How did you become Kaggle’s chief scientist?
JH: I offered to become an angel investor. But I just couldn’t keep my hands off the business. I told Anthony that the site was running slowly and rewrote all the code from scratch. Then Anthony and I spent three months in America last year, trying to raise money. That was where things got really serious, because we raised $11 million. I had to move to San Francisco and commit to doing this full-time.

PA: Do you still compete?
JH: I am allowed to compete, but I can’t win prizes. In practice, I’ve been too busy.

PA: What explains Kaggle’s success in solving problems in predictive analytics?
JH: The competitive aspect is important. The more people who take part in these competitions, the better they get at predictive modeling. There is no other place in the world I’m aware of, outside professional sport, where you get such raw, harsh, unfettered feedback about how well you’re doing. It’s clear what’s working and what’s not. It’s a kind of evolutionary process, accelerating the survival of the fittest, and we’re watching it happen right in front of us. More and more, our top competitors are also teaming up with each other.

PA: Which statistical methods work best?
JH: One that crops up again and again is called the random forest. This takes multiple small random samples of the data and makes a “decision tree” for each one, which branches according to the questions asked about the data. Each tree, by itself, has little predictive power. But take an “average” of all of them and you end up with a powerful model. It’s a totally black-box, brainless approach. You don’t have to think—it just works.
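
Howard's description can be sketched in a few lines of Python. This is a toy illustration, not Kaggle's or anyone's production code: the “trees” are single-threshold stumps over a one-dimensional invented dataset, and every name here is made up; the point is only the bootstrap-and-average mechanism he describes.

```python
import random

def train_stump(sample):
    """Pick the threshold that best separates this bootstrap sample's labels."""
    best_t, best_err = None, float("inf")
    for t, _ in sample:
        err = sum((x > t) != y for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def train_forest(data, n_trees=25, seed=0):
    """Each 'tree' is fitted to a different random resample of the data."""
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data])
            for _ in range(n_trees)]

def predict(forest, x):
    """Majority-vote the weak stumps into one strong prediction."""
    votes = sum(x > t for t in forest)
    return votes > len(forest) / 2

# Toy labels: True exactly when x > 5.
data = [(x, x > 5) for x in range(11)]
forest = train_forest(data)
print(predict(forest, 8), predict(forest, 2))
```

Each stump alone has little predictive power, as Howard says; averaging across the resampled stumps is what makes the ensemble reliable. Real entries would use full decision trees over many features rather than single-threshold stumps.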

PA: What separates the winners from the also-rans?
JH: The difference between the good participants and the bad is the information they feed to the algorithms. You have to decide what to abstract from the data. Winners of Kaggle competitions tend to be curious and creative people. They come up with a dozen totally new ways to think about the problem. The nice thing about algorithms like the random forest is that you can chuck as many crazy ideas at them as you like, and the algorithms figure out which ones work.

PA: That sounds very different from the traditional approach to building predictive models. How have experts reacted?
JH: The messages are uncomfortable for a lot of people. It’s controversial because we’re telling them: “Your decades of specialist knowledge are not only useless, they’re actually unhelpful; your sophisticated techniques are worse than generic methods.” It’s difficult for people who are used to that old type of science. They spend so much time discussing whether an idea makes sense. They check the visualizations and noodle over it. That is all actively unhelpful.

PA: Is there any role for expert knowledge?
JH: Some kinds of experts are required early on, for when you’re trying to work out what problem you’re trying to solve. The expertise you need is strategy expertise in answering these questions.

PA: Can you see any downsides to the data-driven, black-box approach that dominates on Kaggle?
JH: Some people take the view that you don’t end up with a richer understanding of the problem. But that’s just not true: The algorithms tell you what’s important and what’s not. You might ask why those things are important, but I think that’s less interesting. You end up with a predictive model that works. There’s not too much to argue about there.

Reading history through genetics (Columbia University)

5-Dec-2012, by Holly Evarts

New method analyzes recent history of Ashkenazi and Masai populations, paving the way to personalized medicine

New York, NY—December 5, 2012—Computer scientists at Columbia’s School of Engineering and Applied Science have published a study in the November 2012 issue of The American Journal of Human Genetics (AJHG) that demonstrates a new approach used to analyze genetic data to learn more about the history of populations. The authors are the first to develop a method that can describe in detail events in recent history, over the past 2,000 years. They demonstrate this method in two populations, the Ashkenazi Jews and the Masai people of Kenya, who represent two kinds of histories and relationships with neighboring populations: one that remained isolated from surrounding groups, and one that grew from frequent cross-migration across nearby villages.

“Through this work, we’ve been able to recover very recent and refined demographic history, within the last few centuries, in contrast to previous methods that could only paint broad brushstrokes of the much deeper past, many thousands of years ago,” says Computer Science Associate Professor Itsik Pe’er, who led the research. “This means that we can now use genetics as an objective source of information regarding history, as opposed to subjective written texts.”

Pe’er’s group uses computational genetics to develop methods to analyze DNA sequence variants. Understanding the history of a population, knowing which populations had a shared origin and when, which groups have been isolated for a long time, or resulted from admixture of multiple original groups, and being able to fully characterize their genetics is, he explains, “essential in paving the way for personalized medicine.”

For this study, the team developed the mathematical framework and software tools to describe and analyze the histories of the two populations and discovered that, for instance, Ashkenazi Jews are descendants of a small number—in the hundreds—of individuals from the late medieval times, and since then have remained genetically isolated while their population has expanded rapidly to several millions today.

“Knowing that the Ashkenazi population has expanded so recently from a very small number has practical implications,” notes Pe’er. “If we can obtain data on only a few hundreds of individuals from this population, a perfectly feasible task in today’s technology, we will have effectively collected the genomes of millions of current Ashkenazim.” He and his team are now doing just that, and have already begun to analyze a first group of about 150 Ashkenazi genomes.
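
The arithmetic behind that expansion is easy to sketch. The numbers below are illustrative assumptions chosen only to match the article's qualitative description (founders “in the hundreds,” “several millions today,” late medieval times), not the study's actual estimates:

```python
# Exponential growth from N0 founders to N individuals over g generations:
# N = N0 * r**g, so the implied per-generation growth factor is r = (N/N0)**(1/g).
N0 = 350         # assumed founder count ("in the hundreds")
N = 8_000_000    # assumed present-day size ("several millions")
g = 25           # assumed generations since late medieval times
r = (N / N0) ** (1 / g)
print(f"implied growth factor per generation: {r:.2f}")
```

Under these assumptions the population only needs to grow by roughly half again each generation to span that range, which is why a very small founder group is consistent with millions of descendants today.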

The genetic data of the Masai, a semi-nomadic people, indicates the village-by-village structure of their population. Unlike the isolated Ashkenazi group, the Masai live in small villages but regularly interact and intermarry across village boundaries. The ancestors of each village therefore typically come from many different places, and a single village hosts an effective gene pool that is much larger than the village itself.

Previous work in population genetics was focused on mutations that occurred very long ago, say the researchers, and therefore able to only describe population changes that occurred at that timescale, typically before the agricultural revolution. Pe’er’s research has changed that, enabling scientists to learn more about recent changes in populations and start to figure out, for instance, how to pinpoint severe mutations in personal genomes of specific individuals—mutations that are more likely to be associated with disease.

“This is a thrilling time to be working in computational genetics,” adds Pe’er, citing the speed in which data acquisition has been accelerating; much faster than the ability of computing hardware to process such data. “While the deluge of big data has forced us to develop better algorithms to analyze them, it has also rewarded us with unprecedented levels of understanding.”

###

Pe’er’s team worked closely on this research with study co-authors Ariel Darvasi, PhD, of the Hebrew University of Jerusalem, who was responsible for collecting most of the study samples, and Todd Lencz, PhD, of the Feinstein Institute for Medical Research, who handled genotyping of the DNA samples. The team’s computing and analysis took place in the Columbia Initiative in Systems Biology (CISB).

This research is supported by the National Science Foundation (NSF). The computing facility of CISB is supported by the National Institutes of Health (NIH).

Lully & nós (Valor)

23/11/2012 às 00h00

Por Joselia Aguiar | Para o Valor, de São Paulo

Daryan Dornelles/FolhapressCosta Lima ou Bruno Negri, que homenageia sua shitzu branca e preta: livro traz as reflexões filosófico-caninas capturadas por uma máquina inventada para traduzir “auês”

A obra podia entrar na prateleira reservada aos livros fofos, se tal existisse. O que existe, de verdade, é a chance de parar na lista de best-sellers como um “Marley & Eu” à brasileira. “Confesso minha ignorância: não sei que livro é esse, ‘Marley & Eu'”, responde o ficcionista novato Bruno Negri sobre uma possível influência ao escrever “Me chamo Lully”, seu relato de uma vidinha de cachorro que chega agora às livrarias – o lançamento, pela Book Makers, será no dia 5, na Livraria Argumento, no Leblon, no Rio. Vai ter jazz, MPB e coquetel para gente e bichos.

A ignorância confessada, que seria fatal em alguém que pretendesse fazer sucesso no metier dos livros comerciais – afinal, “Marley & Eu” foi lido por milhões no mundo inteiro -, pode ser vista como um divertido alheamento intencional ou uma saudável distância técnica, quando se conhece enfim a identidade de quem está disfarçado pelo pseudônimo. Bruno Negri é Luiz Costa Lima, de 75 anos, um dos críticos literários mais importantes do país, há quatro décadas em atividade, mais de 20 títulos publicados, obra premiada aqui e no exterior.

De parecido com o livro do jornalista americano John Grogan, que narrou as peripécias de seu Marley, há, além do tema, uma capa com seu apelo emotivo: em close, um cãozinho se apresenta com olhar sedutor. Aí param as semelhanças. A grande diferença se configura pelo ponto de vista. Marley teve a história contada por Grogan, seu dono. Lully, ao contrário, é autora da própria história. Pois um laboratório nos Estados Unidos desenvolveu equipamento ainda em fase de testes que captura o pensamento de animais e o decodifica em linguagem de humanos. Aparentemente, só com alguns o experimento parece funcionar, com outros o resultado não é o mesmo. Com Lully, cachorra filósofa, funciona. Não com Benjy, seu filho e companheiro, incapaz da concentração necessária, muito menos raciocínio organizado para ter o pensamento capturado ou decodificado. Benjy é, por assim dizer, um cão atávico – uma de suas raras preocupações é impedir que Lully brinque com uma bolinha, enquanto ele mesmo não parece saber o motivo de cultivar tal hábito, já que nunca aproveita o objeto furtado.

O grau de autoconsciência de Lully é evidente desde o título, retirado da primeira frase que diz à máquina, “Me chamo”, e não “Me chamam”. Lully sobretudo compreende que os fios que a conectam da cabeça ao computador transmitem seu “auês”, a língua que domina. A seu jeito canino – filosófico, mas ainda canino -, ela narra dos primeiros dias no canil até os oito anos na casa de Pedro, Joana e o filho, Dani. Lully pensa não só sobre as coisas que observa como também as coisas que sente: medo, um tipo de afeição que não sabe dar nome (seria amor? paixão? decerto não é cio), a maternidade e a finitude. O que ela nunca consegue compreender é a passagem do tempo – o que são mesmo os dias, semanas, passado e futuro – e a divisão de classe social – o que se nota pela dificuldade de entender o que é uma princesa, título que lhe atribuem, e o que são os mendigos catadores no pós-Carnaval do Rio.

Cachorrinha que inspirou o crítico é “faceira e sedutora como uma teenager”, apesar de já ter oito anos; sugestão para livro foi da mulher dele

As perguntas ao crítico se encaminham com a devida vênia. Das fábulas de La Fontaine às de Orwell, os livros protagonizados por bichos, o “Flush” de Virginia Woolf ou o “Timbuktu” de Paul Auster, o que um crítico conhecido pelo rigor e exigência pensou em fazer ao publicar um livro fofo? Algum experimento? “Não pensei em coisa alguma, senão em dar alguma verossimilhança à história que queria fosse de minha querida Lully.”

Eis que Lully existe mesmo. É a shitzu de oito anos da família. “Não pense que é brincadeira ou fingimento. Embora saiba de ficções sobre animais de estimação, nunca li nenhuma delas”, prossegue Costa Lima. “Só lhe garanto que não quis brincar com Lully. Ela nos é muito querida para sujeitá-la a uma brincadeira. Seria explorar sua admirável ingenuidade canina.”

O campo dos estudos animais, da animalidade, dos limites do humano tem crescido nas universidades: trata-se de área multidisciplinar, que combina filosofia, literatura, ciências sociais. Uma nova pergunta quer identificar se houve, da parte do professor emérito da PUC-SP, uma tentativa de se aproximar desse tipo de reflexão a partir da própria experiência. “Sei disso, de livros escritos há décadas por Günter Lorenz. Mas lhe confesso que nunca li nada a respeito.” O processo da escrita? A resposta não dá mais margem para teorizações previsíveis: “Simplesmente não houve”.

A sugestão veio da mulher, a psicanalista Rebeca Schwartz. Então ele se sentou à mesa e, como diz, escreveu como sempre faz: primeiro à mão, depois no computador. “Creio que as emendas foram mínimas. Era como se a história estivesse amadurecida dentro de mim.” De que modo o crítico agiu no escritor, desmontando e remontando a maquinaria ficcional? “Alguém já disse que a crítica que se limite a ser o julgamento de um livro é algo bastante chato. O crítico seria uma espécie semelhante aos juízes do nosso STF que têm seu instante de glória à custa do que outros fizeram”, pondera. Temos algo diferente, portanto. “Embora a crítica não seja e não deva ser ficção, ela só presta quando traz consigo um ‘impulso ficcional’. No “Me chamo Lully”, a máquina ficcional pôde se mostrar explicitamente, sem disfarces ou transformações.”

É aqui, leitor, nessa parte da conversa, que você se lembra que o tal aparelho recém-inventado nos Estados Unidos, aquele que captura e decodifica as reflexões filosófico-caninas de Lully, é pura ficção. Não por outra, críticos costumam ser vistos pelos leitores como “desmancha-prazeres”, como nota Costa Lima. As engrenagens se expõem para quem quiser ver.

A perspectiva de atrair um leitor quase oposto ao seu parece animadora, horrorosa ou engraçada? “Alguns por certo me dizem que o livro atrairá muitos leitores, algo bem diferente do que conheço com meus livros de teoria e crítica literária. Se isso se der, ficarei muito contente. Em vez de engraçada, a hipótese me parece surpreendente. Mas não creio que seja possível.”

Lully tem longuíssimos pelos lisos – por sua pelagem, a raça é identificada no nome original em chinês como o “cão leão” -, é pequenina – a espécie nunca ultrapassa os 25 cm – e, na descrição de seu dono, “faceira e sedutora como uma teenager”, apesar dos oito anos, idade da maturidade em sua categoria. “Melhor, mais do que a maioria das que vejo frequentar a PUC.”

Benjy, who also exists, bears a false name in the book. His real name is Billy. The dog is “a bit of a dolt, whiny and far from Lully’s charm,” in the description of his rather critical owner (the word “critical” in its everyday sense), who later concedes he may have been unfair in his account of the male dog out of an unconscious rivalry over the female.

If writing the first manuscript cost him two hours, it was only after the reading by Rebeca, more meticulous and attentive to additions, that the vicissitudes of a little dog’s life could be recorded, from the hypoallergenic food to the homeopathic pellets, along with many episodes recalled in precise detail, from advanced martial-arts training for dogs to Billy’s panic attacks on getting into a car, which forced the couple to give up a beach house. Almost everything recounted Lully actually lived, except for a kidnapping, which is entirely fictitious. And a thing or two more was recreated. “The scene of the passion for the mutt has a kernel of truth, but it is somewhat stylized,” says Costa Lima. That one, indeed, was easy to spot.

It remains to be asked why he chose the name Bruno Negri. “I don’t know myself!” he says. “Perhaps because I immediately thought of the title as ‘Me Chiamo Lully.’ I only know that both Bruno and Negri were meant to stress, directly or indirectly, the color of the ‘autobiographee’: white with black patches. But, deep down, the name has no larger reasons.” There is a reason, though, for adopting a pseudonym, as he explains: “I feared the critic’s name would hurt the book’s circulation.”

His career as a critic goes on uninterrupted. Months ago he published A Ficção e o Poema, with Companhia das Letras, an outgrowth of two earlier books, História. Ficção. Literatura and O Controle do Imaginário e o Romance. Costa Lima is now finishing a new volume, to be called Frestas, due out only in 2014. More recent news comes from abroad: one of his classic books, Mímesis: Desafio ao Pensamento, has just been translated for the German-language market. And does Bruno Negri have a literary future? John Grogan wrote several books along the lines of Marley & Me. “No, I don’t think so. It may sound crazy, but I have many projects for long, laborious books. Me Chamo Lully was a most fortunate accident. Even though it would not be hard to continue the fictional adventure, I suppose my choice in life lies elsewhere.”

© 2000–2012. All rights reserved to Valor Econômico S.A. See our Terms of Use at http://www.valor.com.br/termos-de-uso. This material may not be published, rewritten, redistributed or broadcast without authorization from Valor Econômico. Read more at: http://www.valor.com.br/cultura/2914400/lully-nos#ixzz2E5M8kORc

Origin of intelligence and mental illness linked to ancient genetic accident (University of Edinburgh)

2-Dec-2012 – By Tara Womersley, University of Edinburgh

Scientists have discovered for the first time how humans – and other mammals – have evolved to have intelligence.

Researchers have identified the moment in history when the genes that enabled us to think and reason evolved.

This moment, 500 million years ago, gave rise to our ability to learn complex skills, analyse situations and think flexibly.

Professor Seth Grant, of the University of Edinburgh, who led the research, said: “One of the greatest scientific problems is to explain how intelligence and complex behaviours arose during evolution.”

The research, which is detailed in two papers in Nature Neuroscience, also shows a direct link between the evolution of behaviour and the origins of brain diseases.

Scientists believe that the same genes that improved our mental capacity are also responsible for a number of brain disorders.

“This groundbreaking work has implications for how we understand the emergence of psychiatric disorders and will offer new avenues for the development of new treatments,” said John Williams, Head of Neuroscience and Mental Health at the Wellcome Trust, one of the study’s funders.

The study shows that intelligence in humans developed as the result of an increase in the number of brain genes in our evolutionary ancestors.

The researchers suggest that a simple invertebrate animal living in the sea 500 million years ago experienced a ‘genetic accident’, which resulted in extra copies of these genes being made.

This animal’s descendants benefited from these extra genes, leading to behaviourally sophisticated vertebrates – including humans.

The research team studied the mental abilities of mice and humans, using comparative tasks that involved identifying objects on touch-screen computers.

Researchers then combined results of these behavioural tests with information from the genetic codes of various species to work out when different behaviours evolved.

They found that higher mental functions in humans and mice were controlled by the same genes.

The study also showed that when these genes were mutated or damaged, they impaired higher mental functions.

“Our work shows that the price of higher intelligence and more complex behaviours is more mental illness,” said Professor Grant.

The researchers had previously shown that more than 100 childhood and adult brain diseases are caused by gene mutations.

“We can now apply genetics and behavioural testing to help patients with these diseases”, said Dr Tim Bussey from Cambridge University, which was also involved in the study.

The study was funded by the Wellcome Trust, the Medical Research Council and European Union.

The State of Climate Science (scienceprogress.org)

CLIMATE SCIENCE

A Thorough Review of the Scientific Literature on Global Warming

By Dr. James Powell | Thursday, November 15th, 2012

Polls show that many members of the public believe that scientists substantially disagree about human-caused global warming. The gold standard of science is the peer-reviewed literature. If there is disagreement among scientists, based not on opinion but on hard evidence, it will be found in the peer-reviewed literature.

I searched the Web of Science, an online database of peer-reviewed science publications, for scientific articles published between 1 January 1991 and 9 November 2012 that have the keyword phrases “global warming” or “global climate change.” The search produced 13,950 articles. See methodology.

I read whatever combination of titles, abstracts, and entire articles was necessary to identify articles that “reject” human-caused global warming. To be classified as rejecting, an article had to clearly and explicitly state that the theory of global warming is false or, as happened in a few cases, that some other process better explains the observed warming. Articles that merely claimed to have found some discrepancy, some minor flaw, some reason for doubt, I did not classify as rejecting global warming.

Articles about methods, paleoclimatology, mitigation, adaptation, and effects at least implicitly accept human-caused global warming and were usually obvious from the title alone. John Cook and Dana Nuccitelli also reviewed and assigned some of these articles; John provided invaluable technical expertise.

This work follows that of Oreskes (Science, 2004), who searched for articles published between 1993 and 2003 with the keyword phrase “global climate change.” She found 928, read the abstracts of each and classified them. None rejected human-caused global warming. Using her criteria and time span, I get the same result. Deniers attacked Oreskes, but her findings have held up.

Some articles on global warming may use other keywords, for example, “climate change” without the “global” prefix. But there is no reason to think that the proportion rejecting global warming would be any higher.

By my definition, 24 of the 13,950 articles, 0.17 percent or 1 in 581, clearly reject global warming or endorse a cause other than CO2 emissions for observed warming. The list of articles that reject global warming is here.

The 24 articles have been cited a total of 113 times over the nearly 21-year period, for an average of close to 5 citations each. That compares to an average of about 19 citations for articles matching “global warming,” for example. Four of the rejecting articles have never been cited; four have citations in the double digits. The most-cited has 17.
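Powell’s headline ratios follow directly from the counts he reports. A minimal sketch, using only the figures quoted in this article and plain arithmetic, confirms them:

```python
# Reproduce the summary statistics from the counts quoted in the text.
total_articles = 13_950    # peer-reviewed articles matched by the search
rejecting = 24             # articles classified as rejecting human-caused warming
rejecting_citations = 113  # total citations received by those 24 articles

share = rejecting / total_articles           # fraction of rejecting articles
one_in = total_articles / rejecting          # the "1 in N" form of the same ratio
avg_cites = rejecting_citations / rejecting  # mean citations per rejecting article

print(f"rejecting share: {share:.2%}")   # -> rejecting share: 0.17%
print(f"one in {one_in:.0f}")            # -> one in 581
print(f"avg citations: {avg_cites:.1f}") # -> avg citations: 4.7
```

The numbers match the text: 0.17 percent of articles, roughly 1 in 581, averaging close to 5 citations each.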

Of one thing we can be certain: had any of these articles presented the magic bullet that falsifies human-caused global warming, that article would be on its way to becoming one of the most-cited in the history of science.

The 13,950 articles have a total of 33,690 individual authors. The top ten countries represented, in order, are the USA, England, China, Germany, Japan, Canada, Australia, France, Spain, and the Netherlands. (The chart shows results through November 9th, 2012.)

Global warming deniers often claim that bias prevents them from publishing in peer-reviewed journals. But 24 articles in 18 different journals, collectively making several different arguments against global warming, expose that claim as false. Articles rejecting global warming can be published, but those that have been have earned little support or notice, even from other deniers.

A few deniers have become well known from newspaper interviews, Congressional hearings, conferences of climate change critics, books, lectures, websites and the like. Their names are conspicuously rare among the authors of the rejecting articles. Like those authors, the prominent deniers must have no evidence that falsifies global warming.

Anyone can repeat this search and post their findings. Another reviewer would likely have slightly different standards than mine and get a different number of rejecting articles. But no one will be able to reach a different conclusion, for only one conclusion is possible: Within science, global warming denial has virtually no influence. Its influence is instead on a misguided media, politicians all-too-willing to deny science for their own gain, and a gullible public.

Scientists do not disagree about human-caused global warming. It is the ruling paradigm of climate science, in the same way that plate tectonics is the ruling paradigm of geology. We know that continents move. We know that the earth is warming and that human emissions of greenhouse gases are the primary cause. These are known facts about which virtually all publishing scientists agree.

James Lawrence Powell is the author of The Inquisition of Climate Science. Powell is also the executive director of the National Physical Science Consortium, a partnership among government agencies and laboratories, industry, and higher education dedicated to increasing the number of American citizens with graduate degrees in the physical sciences and related engineering fields. This article is cross-posted with permission from the Columbia University Press blog.

This article is a cross-post with our partners at DeSmogBlog.

Wallerstein: capitalism’s structural crisis will continue (Revista Fórum)

13/11/2012 8:05 pm

(Photo: http://www.flickr.com/photos/buridan/)

For the sociologist, what he calls the world-system has problems of such magnitude that its survival will not be possible, but what comes after is still entirely uncertain

Interview by Lee Su-hoon | Portuguese translation by Hugo Albuquerque and Inês Castilho for Outras Palavras

In at least two senses, the American sociologist Immanuel Wallerstein seems willing to contradict the ideas that still prevail about the crisis that began in 2007. First, in his diagnosis of the phenomenon. For him, we face something much deeper than mere financial turbulence. What has been shaken are the foundations of capitalism itself or, to use a concept dear to Wallerstein, of the “world-system” that took shape in parts of Europe from the 16th century onward and became globally hegemonic from the 1800s. That system has reached “the limit of its possibilities” and will be unable to survive the current crisis. If we still struggle to grasp the scope of the transformations under way, it is because, bound by inertia, we are slow to accept that “there are some insoluble dilemmas.” “Nothing lasts forever, not even the Universe,” Wallerstein remarks, somewhat ironically.

The second unconventional point of view of this sociologist, who is also a researcher of enormous international influence in the fields of History and Geopolitics, concerns what will come after the eventual collapse of the current world-system. He diverges from those who think, on the basis of an unrefined reading of Marxism, that we can rest easy, since the decline of the present system will necessarily give way to a fraternal, socialist order.

No, says Wallerstein: the future is more open than ever. The decline of capitalism may even make room for a more inhumane system, as suggested by the strong presence, all over the world, of authoritarian and xenophobic currents of thought.

We are therefore condemned to action, suggests this thinker, whose work is crowned by the tetralogy The Modern World-System. If the meaning of the 21st century is unpredictable, that is because it is being built at this very moment, “in an infinity of nano-actions, performed by an infinity of nano-actors, in multiple nano-moments.” In other words, Wallerstein urges, the point is not to predict the future but to build it, including through everyday actions and attitudes.

To transform, however, one must first understand. Perhaps that is why, at 83 and already consecrated by a vast theoretical oeuvre, Wallerstein devotes himself on his website to fortnightly analyses of very concrete contemporary issues. Much of the material he has produced over the past two years has been translated and published by Outras Palavras. Interviewed a few weeks ago by the Korean political scientist Lee Su-hoon, he pushes this examination further, often expressing unusual points of view.

Asked about Europe, where cuts to social rights and public services seem to have no end, he proposes seeking alternatives by looking, for example, at Argentina and Malaysia. These countries came out of their crises because, in the 1990s and 2000s, they defied the prescriptions imposed on them. Now, Wallerstein thinks, the room to do so is even greater, but political courage is required.

Will the world become safer if Iran is prevented from developing atomic energy? The answer is “no,” insists this Yale University professor: the current nuclear Non-Proliferation Treaty (NPT) is utterly hypocritical and will be ever less effective. Against what it preaches, Wallerstein predicts, several countries of the global South will develop atomic weapons in the coming years, including Brazil…

Are China and the United States on the way to becoming enemy global powers? Nothing supports this hypothesis, he stresses. Despite the rhetoric, and the need to satisfy domestic audiences, in practice Washington and Beijing share ever more common interests. The full interview, published by the excellent South Korean newspaper Hankyoreh, follows. (Antonio Martins)

Lee Su-hoon: You have said: “Over the next 50 years the world will plunge into serious economic turbulence and, later, capitalism will face a tremendous crisis, like that of the Great Depression.” People say the crisis is due to Wall Street’s greed, the housing bubble and so on. How do you analyze this crisis?

Wallerstein: I have not changed my mind in five years. Basically, as I see it, we have been in a structural crisis of the capitalist world-economy since the 1970s, and it will continue. It will not be fully resolved until perhaps 2040 or 2050. The exact date is hard to predict, but it will take a long time. At the moment, the world-system is bifurcated. It has problems of such magnitude that it will not be able to survive; it is so far from equilibrium that there is no going back. But where it is headed is entirely uncertain because, as I said, this bifurcation means that, technically, there are two ways of solving the same equation, which is not normal.

In lay terms, this simply means that the future world-system, or world-systems (because we do not know whether there will be only one), that will emerge at the end of this process may come in at least two fundamentally different varieties. So no one can predict which system we will have, because it will be the consequence of an infinity of nano-actions, performed by an infinity of nano-actors, in multiple nano-moments, and nobody can work out that much. But it will happen. So here we are, in the middle of it all. It is chaotic, as they say.

And what does it mean to say “it is chaotic”? It means the fluctuations are enormous, so there is uncertainty even in the very short term. It means that anyone who predicts what the relationship among the yen, the dollar, the euro and the pound will be a year from now is a very brave person. There is no way to know. But businesspeople need that information. They must have a minimum of stability, or they risk enormous losses. This leaves them paralyzed, very afraid to commit to any kind of investment, which is one of the things happening all over the world. That is why unemployment has exploded. And it is also why governments are in such financial difficulty: without that additional production there are no tax revenues, and without revenues governments are badly squeezed. Then unemployment rises further, putting more pressure on governments. That is what is happening today in practically every country in the world. Governments have less money and face demands to spend more. That, of course, is impossible: you cannot have less and spend more. So they come up with all kinds of solutions. None seems to work. That is where we find ourselves today.

Lee: And many European countries are facing a fiscal crisis, a kind of default, which leads them to seek help from the EU (European Union) and the ECB (European Central Bank).

Wallerstein: The Europeans have a basic problem. They have at least nine currencies, and 17 countries share the euro. But they have no federal government. It is a very complicated situation, because it means governments cannot intervene in their own currency. One of the instruments governments traditionally use to deal with their difficulties is raising or lowering the value of the currency. By lowering its value you can sell more; by raising it, you can buy more. The eurozone countries do not have that option, because no member country has a currency of its own. And they face the same problems as everyone else: growing demands, because rising unemployment generates more claims on the government, while at the same time government revenue falls, because there are no jobs.

Their only option (Greece’s, Spain’s, Portugal’s or Ireland’s) is to obtain help, some kind of solidarity. They then run into the reluctance of the richer countries to “bail out” the poorer ones. This overlooks the fact that the single greatest beneficiary of the eurozone is, in fact, Germany. And it is precisely Germany that is making the biggest fuss about not wanting to help other countries unless they do X, Y or Z, measures that in fact only make the situation worse. That is the eurozone question. It is the problem the whole world faces, compounded by the fact that these countries cannot individually manipulate their own currencies. But the basic problem is no different from that of the US, Russia, Egypt or anywhere else under the squeeze.

Lee: Here in Korea, experts and the media put forward two different arguments. Ireland, Greece and the others spend too much money on social benefits: that is one line of argument. The other is the contagion effect, owing to the ease of migration within the eurozone.

Wallerstein: Let us deal with both arguments. The first is “Greece is in trouble because it overdid social welfare.” That is exactly what the Republican Party says about the US. It is one and the same argument for the whole world, not a special argument about Greece. The reaction of the more conservative forces to this crisis is to say “cut benefits,” which means “reduce government spending.” But if you cut benefits you also reduce people’s purchasing power, and so you create less effective demand. Someone who makes T-shirts, for example, has fewer customers. So that does not seem to be the solution. To my mind, it only worsens the problem. In any case, the point is that this is not a problem specific to Greece, Spain or Portugal. It is every country’s problem.

Now, the contagion effect. What happens is that, since governments are out of funds, they need to borrow money, and to obtain it they depend on the market. People lend money more readily when they see prospects of being repaid. So yes, there is a contagion effect in Europe: Greece starts having problems, Portugal and Ireland start having problems, Spain and Italy start having problems. And now it is France getting into trouble, then the Netherlands and Germany itself. It is the contagion effect, partly created by the rating agencies, which are not neutral, but it is also a very real problem. The contagion effect runs from Europe to the US, and from Europe to the rest of the world. It leaves people paralyzed. When they see things going so badly, they say “well, it could go wrong elsewhere too, so let’s not lend the money,” or “let’s demand higher interest rates.”

But if money is borrowed at higher interest rates, even less is left to spend on other things. That is exactly the world’s problem. So again, I do not see this as an especially European problem. The question in Europe right now is whether the forces who say “European countries would be better off if there were no euro” will manage to abolish the euro and return to their national currencies. There is some movement in that direction, from the right and from some sectors of the left.

The European left dislikes the fact that Brussels, with so much influence, has such a strong neoliberal bias. One hears (in some Scandinavian countries and even in France): “we would be better off free of Brussels’ control,” as opposed to the still dominant view that the euro strengthens Europe’s position vis-à-vis the rest of the world and, more specifically, vis-à-vis the United States.

A political struggle is under way, no doubt. I tend to believe that, in general, one must separate political rhetoric from reality and from geopolitical pressures. Political rhetoric is usually a response to a country’s immediate political circumstances. If Chancellor Angela Merkel says certain things in Germany, it is not necessarily because she believes them, but because she judges that they will win her votes in the next election, which may come very soon. The same goes for Obama. It also goes, I am sure, for the president of Korea. Politicians have to worry about the next election. This does not mean that (a) they really mean what they say, or that (b) what they say matters. I do not think it matters much.

Even so, in a very volatile situation, stupidity can prevail. In general, what happens follows from geopolitical pressures. So I think the pressure to keep the euro, the benefits in geopolitical terms, is much greater than the pressure to return to individual currencies.

Chancellor Merkel is telling people across Europe: “let me do this, and then I will have the political clout to persuade German politicians and voters to go along with me.” I think Europe will agree to greater federalism, even if they do not call it federalism, because they dislike the word: a strengthening of central power and, as a consequence, an increase in the flow of money. In the US, a state like Mississippi avoids bankruptcy only because the federal government can redirect money there. That is what Europe needs. That is what the people calling for “solidarity” really mean.

If you ask me for predictions, I think the probability that in three years we will see not just a euro but a strengthened euro is much greater than the opposite. And some mechanism that allows less emphasis on austerity and more on the return of resources, on getting money flowing again, is the only short-term solution to Europe’s problems, as it is to those of the US.

Lee: I would like to add something to your analysis of the eurozone situation. You mentioned the Scandinavian countries, which are stronger on social benefits. They spend the most on social welfare and pay the highest taxes. Yet they are not in crisis, even though it is argued that so-called “welfare populism” is entirely wrong.

Wallerstein: Yes, obviously. This can be demonstrated in several ways. Of course, there are five different Nordic countries, each in a somewhat different situation, including those inside and outside the eurozone, and those inside and outside NATO. But in general you are quite right to say that those five Nordic countries are still strong welfare states, with relatively high taxes.

Lee: Yes, in fact Europe’s fiscal problem is a worldwide problem. When you look at specific countries, there are differences. In some countries corruption is more serious than in others.

Wallerstein: Let us dwell on corruption a little. I think corruption is more serious in the US, Britain, France and Germany than in some of the cases most often cited around the world. Those are small change next to real corruption. We have scandals all the time in the US, France and Britain. When you look into these scandals, you suddenly discover they involve trillions of dollars. When something of the kind happens in Myanmar or Iraq, by contrast, we are dealing with millions, not even billions, of dollars.

So corruption is a deeply ethnocentric weapon. The countries of the North tend to say the countries of the South are immoral because they are corrupt. But they do not say that we are immoral because we are corrupt. Corruption is general in our system. It is general because, if you have a system whose main objective is the accumulation of capital, corruption is simply a rent that people in the right place charge on the endless accumulation of capital. To say “they shouldn’t” is a morally correct but rhetorical position, because they will go as far as they can, since public opinion prefers not to see corruption. Perhaps one or two people are jailed for a relatively short time, but basically nothing more is done about corruption. When was the last time one of these corrupt people was sent to a real prison, for a really long term, and had to return all the money they took? It simply does not happen.

Lee: When I listened to the speech Obama gave on standing for re-election, I noted what he presented as recipes for saving the US from hard times: creating more manufacturing jobs, rebuilding the middle class, emphasizing education, cutting taxes on wealth, a new energy policy, reducing imports, and social benefits including health care, always a highly controversial issue in American elections. But I was surprised to hear the same things from the presidential candidates here in South Korea. Of course, Korea is in a peculiar situation, the division of the peninsula, which is why the question of peace and the nuclear question are important. Beyond that, the socioeconomic programs and policies were more or less identical. That made me wonder whether South Korea has become like the US socioeconomically. About twenty years ago South Korea was hailed as a model for Third World countries, since it achieved economic growth with relative equality. But after the crises of 1997 and 2008 South Korea proved very similar to the US, and so the policy recipes are almost identical in the two countries, I think.

Wallerstein: Well, I don’t disagree. Among the richest countries in the world, South Korea is not at the top, but it is not doing badly. Opinions on social welfare seem to be divided between conservatives and people on the left. But I think the divide may in fact be wider. When you look at the role of government in the world’s poorer countries, there is still the question of how much they provide in social benefits. One of the things that neoliberalism, as a movement active since the 1980s, has prescribed for the countries of the South is: “Look, you have all these economic problems. You want to borrow money from us? Then cut social benefits, because that is money thrown away.” The theory acts as a conservative force against the local government, which is acting further to the left. It is the same kind of debate.

Do you remember the so-called “Asian debt crisis” of 1997? Suddenly a number of East and Southeast Asian countries found themselves in economic trouble. That is, the money disappeared. Governments were in dire straits. Some sought help, saying: “lend us money.” And those governments reported that the answer they generally received was: “lend you money? Yes, provided you do such and such.”

The only country that refused to borrow money on those terms was Malaysia, and it was the one that recovered fastest, precisely because it refused. By accepting the demands, Indonesia brought about the fall of Suharto. And I would like to cite this episode. It is a famous intervention by Henry Kissinger, a politician squarely of the right. After Suharto’s fall, he wrote: “How can you (the IMF and the US government) be so stupid? You prescribe for Suharto’s government measures that bring about its fall and put in its place a government to its left. Keeping Suharto in power is more important than denying him money. You have misunderstood your priorities. The priority is geopolitical, not economic.” He rebuked them for doing what they had been doing for ten or twenty years in countries less important than Indonesia.

Korea ended up in the middle, given how it responded. It did better than the countries that surrendered completely to the IMF, but not as well as Malaysia. One of the lessons of this, and of what later happened in Argentina, is that these countries have more geopolitical power than they believe and are more capable of pushing back against agencies like the IMF. Naturally, the IMF and the World Bank learned the lesson. They began talking about anti-poverty programs. Suddenly their language changed, as a result of the Asian debt crisis, because they realized what Kissinger had been telling them: they needed to be politically shrewder; they could not be strictly economic in their demands.

Lee: At this year’s US Democratic Party convention, Joseph Biden said repeatedly that “the US is not in decline,” and Obama said that “the US is a Pacific country.” This can be interpreted as a US return to the Asia-Pacific region, even suggesting the containment of China.

Wallerstein: There are two questions here. One is the claim that the US is not in decline. The other is what they are trying to do with this emphasis on Asia and the Pacific.

“The US is not in decline” is a mantra in the United States. No politician can say the US is in decline. In fact, they all strain to deny that reality, because the American public is not prepared to accept that the US is no longer “Number One,” an example admired the world over. They will not say so publicly. That is a pity because, to my mind, one of the important tasks is to make the US population more aware of geopolitical reality and of the fact that the US is a very strong country, but no longer, in any sense, above the rest. Several countries rate better than the US on particular issues. And the capacity of the US to influence the situation in various parts of the world has diminished enormously. So I think one must separate political rhetoric from political reality.

And now, what was the United States doing in Asia? The first thing to note is that the US no longer has enough economic and military strength to engage fully, as it used to, in both Europe and Asia. If it says publicly "we are going to do this in Asia," it is saying at the same time that it is not going to do this in Europe. That has not gone unnoticed by the Europeans. It is being ignored by public opinion in the United States. In other words, this is, in part, an admission of decline.

Now, the second part is "containing" China. The Communists came to power in 1949. China has not been politically popular in the United States. The Korean War, between the North and the South of the peninsula, was also a war between the US and China. We do not call it that, but that is the reality. And the armistice line is not very different from the prewar line. I consider it a military draw between China and the US. Neither side won. Yet the rhetoric was very strong on both sides, with China and the US denouncing each other in every possible way, until Nixon went to China, guided by his geopolitical instincts and those of Henry Kissinger. The combination was quite powerful. Both were very cynical and very intelligent. At that moment, China was locked in a major dispute with the Soviet Union. They had common ground. They united against the Soviet Union; it is as simple as that.

Now the Cold War is over, the Soviet Union no longer exists, and there is something called Russia, which is the same country and at the same time an extremely different one. China has grown stronger than it was before – militarily and economically. But one should not exaggerate. China is asserting itself geopolitically as the leader of Asia. Thirty years ago, though, nobody in Africa or Latin America gave China a thought. China simply was not part of the scene. Now that has changed. China aspires to be a power, and a world power has to interest itself in every part of the world, just as the US and Great Britain, as world powers, are interested in every part of the world. In that sense, the Soviet Union was a world power.

China and the United States have many differences over immediate issues, and from time to time they throw them in each other's faces, in the wrong way. And these days there are plenty of China-bashers in the US. Politicians like to blame China for everything. That irritates the Chinese, but it is a game. If you look at the reality of US policies and the reality of Chinese policies over the last thirty years, you will see that neither has ever done anything that crossed the other's limits. They have been very careful to maintain good geopolitical relations.

So I do not consider the new US emphasis on Asia and the Pacific all that significant. First, I see it as a rhetorical show, aimed partly at the US itself and partly at the other countries of Asia, because there is South Korea, Japan, Vietnam and the Philippines to worry about. These countries are ambivalent about the US. They like the US, because Washington helps them with certain things. On the other hand, they do not really want the US around. So they have complicated relationships. And the US felt it needed to reassure these allies that it had not written them off completely. I do not think it is anything more than that. On this, I think, neither side will cross the line – at most the rhetorical line.

Now, the Korean peninsula really is one of the crucial issues in US-China relations, because we have a country called North Korea and another called South Korea. Both are very Korean, and Korean nationalism is very strong. The geopolitical pressure for reunification is enormous. And now the US and China have to worry about it. If American troops have to leave, does that mean a reunified Korea would possess nuclear weapons? And if it had nuclear weapons, what would the Japanese say about that? And Taiwan? I think the pressure to nuclearize – to end the renunciation of nuclear weapons in South Korea, Japan and Taiwan – is very strong. I do not think the US is happy about that. Neither is China. Which pushes the US and China closer together, not further apart. And both are trying to figure out: "can we stop this process?"

I cannot see into their minds, but I suspect this is at the top of their list of concerns. The fact is that they anticipate not that North Korea will denuclearize, but that South Korea, Japan and Taiwan will nuclearize. If you ask me again for a prediction, I would say that within ten years all of them will be nuclearized. And I do not think that would be disastrous. The fact that the US and the Soviet Union both had nuclear weapons was an important factor in guaranteeing there would be no war between them. It was a positive thing, not a negative one.

Now, of course, with nuclear weapons there is always the possibility of disaster. The weapons sit in a particular place, under a military commander. He can press some button and launch them. Our bet is that he, as an individual, will obey his country's commander-in-chief. Nine hundred ninety-nine times out of a thousand, you can count on that. But there is always a one-in-a-thousand chance of an officer out of control. Moreover, it is quite true that, with more nuclear weapons in the world, people can steal them. This has been discussed with regard to Pakistan. People keep saying: "You know, Pakistan has 70 to 80 nuclear weapons and bombs," and "Are the places where they are stored really well protected?", "Could someone affiliated with Al Qaeda, or perhaps some other group, attack them and steal them?"

So I do not rule out the negative potential of generalized nuclearization. But I do not think it means Iran is going to bomb anyone. In reality, governments use nuclear weapons as a defensive mechanism, not an aggressive one. They use them as a way to avoid being bombed. The US went into Iraq not because Iraq had nuclear weapons, but because it did not. The US knew that Baghdad therefore could not respond with a nuclear weapon.

I think that is the lesson Iran and North Korea drew immediately from what happened in Iraq. Indeed, from North Korea's point of view, it is the only real military protection they have at the moment. My prediction is that within ten years every country in East Asia will have these weapons. And so will many other countries, such as Brazil and Argentina. Sweden, Egypt and Saudi Arabia will have them. Always for the same reasons: to avoid being bombed by the others.

Lee: And what if everyone gave up nuclear weapons, including those who already have them?

Wallerstein: That would be ideal, if you think it possible to convince the US, or Pakistan, India, Israel, France and Great Britain. But there is no policy that can persuade these countries to reduce their nuclear arsenals to zero. You may persuade them to reduce the number of bombs they have, under certain conditions. But going back to zero would not be practical, for the simple reason that it is hard to verify that the others have in fact gone down to zero. There are many ways of hiding these things. That is why they will not accept it.

But that is also why the nuclear non-proliferation treaty is a farce, because what it basically says is that nobody may possess nuclear weapons except the five permanent members of the UN Security Council. The rest of you, the whole world, must renounce any attempt to acquire nuclear weapons, and in exchange we promise two things: (1) we will significantly reduce our stockpile, and (2) we will allow you to develop nuclear energy for peaceful purposes.

Since the treaty came into force, there has been no significant reduction, and now everyone is once again talking about renewing and expanding. The only three countries that refused to sign the treaty are India, Pakistan and Israel. And that is by now practically accepted. They defied the world, defied all the rules, and are now members of the club. The US has good relations with all three, and none has been penalized for having nuclear weapons.

Lee: So what do you say about our attempt to persuade North Korea to give up nuclear weapons…

Wallerstein: That it is impossible. If I were running North Korea, I certainly would not agree.

Lee: If that is the case, do you think the current standoff between the US and North Korea will continue? And what about China?

Wallerstein: Once again, there is the rhetoric and there is the reality. In fact, American diplomats all know that this prohibition is impossible. But they do not know what to do. For domestic political reasons, they certainly cannot say "there is no hope." So they imagine that by putting pressure on China they are, indirectly, pressuring North Korea. And they use a delaying mechanism, not a serious one. The US military says "under no circumstances will we send troops into Iran." On the other hand, the US is committed to Israel, and Israel, for its part, keeps saying: "We have to bomb Iran." So what does the US do? It operates its delaying mechanism. This reflects the essential limitations of US power, which reveals part of its decline. There was a time when it did not need to stall. There was a time when it could make forceful decisions about other countries. It no longer can. Here we are. Let us separate the rhetoric from the geopolitical reality.

Lee: That leaves many progressive Koreans – pro-alliance, pro-negotiation, pro-diplomacy, pro-peace-process – very pessimistic.

Wallerstein: Why? There are many possible agreements between North and South Korea, starting with economic issues. Look, if you are running a regime like North Korea's, you have to take geopolitical reality into account. At the same time, you want to stay in power. Until now, they have relied on a heavy-handed, very repressive regime and the support of the army. They can try to go on repressing the majority, the hungry; they can try to fool them with ideology, trying to make them believe they live wonderfully well. But today it is increasingly difficult to make them believe that. So you have to give them some social welfare – which means there must be some changes in North Korea's economic policy, along the lines of those made by China and Vietnam. Both China and Vietnam have shown them a model in which a single party can stay in power and still carry out an economic opening. And I think the new leader is tempted by the idea, but it is a difficult path. He has the same difficulties negotiating with his domestic public that Chancellor Merkel has, that Obama has; certainly everyone has to worry about keeping the rhetoric satisfactory at home. So he may manage something equivalent to what the Chinese did, such as the Special Economic Zones.

Lee: If you were the president of South Korea, interested in developing good relations with North Korea, would you try harder to help it in that effort?

Wallerstein: If I were the president of South Korea, that is what I would do, as far as politically possible. You have to strike a balance between keeping the political support of your base and the geopolitical demands. But I think that will be the way forward. I know the response of the more conservative forces in South Korea would be to say "well, we tried a policy of dialogue and it did not work." And the answer is: "Yes, it did not work, partly because the times were different and the leader was different, with a different attitude. And secondly because things were done half-heartedly. Perhaps we have to do even more." That kind of debate goes on all the time in politics.

Lee: We have touched on many issues today. One last question, about fundamentalist capitalism. After the 2008 crisis, there was a return to the Keynesian approach to the market. Personally, I do not think they are right, but this raises the question of the future of capitalism.

Wallerstein: Some reforms, people say, will solve this problem. But people are far too reformist in their approach to problems. It is very hard for them to accept the fact that there are some insoluble dilemmas. When I say something is insoluble, they say, "oh, we liked your argument up to here, but this point bothers us." Systems have lives. No system lasts forever. Whether it is the universe, the largest system we can know, or the smallest nano-system we cannot see, none of them will last forever. Over their lives, systems move gradually further and further from equilibrium until they reach a point where they can no longer restore their balance. And we are a system: the modern world-system, as it is called. It was a successful system, but it has reached the limit of its possibilities. When I started saying this thirty years ago, people laughed. Now they do not laugh, they argue back. That is already progress. I think that twenty years from now people will be well aware of it. At least I hope so, because it is very hard to pursue intelligent policies to try to push the world in the right direction without being aware of reality.

Government, Industry Can Better Manage Risks of Very Rare Catastrophic Events, Experts Say (Science Daily)

ScienceDaily (Nov. 15, 2012) — Several potentially preventable disasters have occurred during the past decade, including the recent outbreak of rare fungal meningitis linked to steroid shots given to 13,000 patients to relieve back pain. Before that, the 9/11 terrorist attacks in 2001, the Space Shuttle Columbia accident in 2003, the financial crisis that started in 2008, the Deepwater Horizon accident in the Gulf of Mexico in 2010, and the Fukushima tsunami and ensuing nuclear accident in 2011 were among rare and unexpected disasters that were considered extremely unlikely or even unthinkable.

A Stanford University engineer and risk management expert has analyzed the phenomenon of government and industry waiting for rare catastrophes to happen before taking risk management steps. She concluded that a different approach to these events would go far towards anticipating them, preventing them or limiting the losses.

To examine the risk management failures discernible in several major catastrophes, the research combines systems analysis and probability, as used, for example, in engineering risk analysis. When relevant statistics are not available, it presents systemic risk analysis as a powerful alternative for anticipating and managing the risks of highly uncertain, rare events. The paper by Stanford University researcher Professor Elisabeth Paté-Cornell recommends “a systematic risk analysis anchored in history and fundamental knowledge,” as opposed to industry and regulators waiting until after a disaster occurs to take safety measures, as happened, for example, with the Deepwater Horizon accident in 2010. Her paper, “On ‘Black Swans’ and ‘Perfect Storms’: Risk Analysis and Management When Statistics Are Not Enough,” appears in the November 2012 issue of Risk Analysis, published by the Society for Risk Analysis.

Paté-Cornell’s paper draws upon two commonly cited images representing different types of uncertainty — “black swans” and “perfect storms” — that are used both to describe extremely unlikely but high-consequence events and often to justify inaction until after the fact. The uncertainty in “perfect storms” derives mainly from the randomness of rare but known events occurring together. The uncertainty in “black swans” stems from the limits of fundamental understanding of a phenomenon, including in extreme cases, a complete lack of knowledge about its very existence.

Given these two extreme types of uncertainties, Paté-Cornell asks what has been learned about rare events in engineering risk analysis that can be incorporated in other fields such as finance or medicine. She notes that risk management often requires “an in-depth analysis of the system, its functions, and the probabilities of its failure modes.” The discipline confronts uncertainties by systematic identification of failure “scenarios,” including rare ones, using “reasoned imagination,” signals (new intelligence information, medical alerts, near-misses and accident precursors) and a set of analytical tools to assess the chances of events that have not happened yet. A main emphasis of systemic risk analysis is on dependencies (of failures, human errors, etc.) and on the role of external factors, such as earthquakes and tsunamis that become common causes of failure.

The “risk of no risk analysis” is illustrated by the case of the 14-meter Fukushima tsunami resulting from a magnitude-9 earthquake. Historical records showed that large tsunamis had occurred at least twice before in the same area. The first was the Sanriku earthquake of the year 869, estimated at magnitude 8.6, whose tsunami penetrated 4 kilometers inland. The second was the Sanriku earthquake of 1611, estimated at magnitude 8.1, which caused a tsunami with an estimated maximum wave height of about 20 meters. Yet those previous events were not factored into the design of the Fukushima Dai-ichi nuclear reactor, which was built for a maximum wave height of 5.7 meters, based simply on the tidal wave caused in that area by the 1960 earthquake in Chile. Similar failures to capture historical data and various “signals” occurred in the cases of the 9/11 attacks, the Columbia Space Shuttle accident and other examples analyzed in the paper.

The risks of truly unimaginable events that have never been seen before (such as the AIDS epidemic) cannot be assessed a priori, but careful and systematic monitoring, observation of signals and a concerted response are key to limiting the losses. Other rare events that place heavy pressure on human or technical systems are the result of convergences of known events (“perfect storms”) that can and should be anticipated. Their probabilities can be assessed using analytical tools that capture dependencies and dynamics in scenario analysis. Given the results of such models, there should be no excuse for failing to take measures against rare but predictable events that have damaging consequences, or for failing to react to signals, even imperfect ones, that something new may be unfolding.
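The arithmetic behind dependencies becoming common causes of failure can be sketched in a few lines. The numbers below are invented for illustration and are not from the paper: two redundant subsystems that look independently reliable share a rare external event, such as an earthquake, that disables both at once, and the joint failure probability is then dominated by that common cause.

```python
# Toy common-cause failure model: two subsystems that fail
# independently with small probability, plus a shared external
# event (e.g., an earthquake) that fails both at once.
# All probabilities are illustrative, not from the paper.

def joint_failure_prob(p_a, p_b, p_common):
    """P(both subsystems fail) when a common-cause event with
    probability p_common disables both, and the subsystems
    otherwise fail independently."""
    return p_common + (1 - p_common) * p_a * p_b

p_a = p_b = 1e-3                 # per-year failure probability of each subsystem
naive = p_a * p_b                # what an independence assumption predicts
with_cc = joint_failure_prob(p_a, p_b, p_common=1e-4)

print(f"independence assumption: {naive:.2e}")
print(f"with common cause:       {with_cc:.2e}")
```

Here the independence assumption predicts a one-in-a-million joint failure, while the common cause alone makes the true figure roughly a hundred times larger, which is the kind of error a scenario analysis of dependencies is meant to catch.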

Journal Reference:

  1. Elisabeth Paté-Cornell. On “Black Swans” and “Perfect Storms”: Risk Analysis and Management When Statistics Are Not Enough. Risk Analysis, 2012; DOI: 10.1111/j.1539-6924.2011.01787.x

Mathematical Counseling for All Who Wonder Why Their Relationship Is Like a Sinus Wave (Science Daily)

ScienceDaily (Nov. 15, 2012) — Neuroinformaticians from Radboud University Nijmegen provide a mathematical model for efficient communication in relationships. Love affair dynamics can look like a sine wave: a smooth, repetitive oscillation of highs and lows. For some couples these waves grow out of control, leading to breakup, while for others they smooth into a state of peace and quiet. Natalia Bielczyk and her colleagues show that the ‘relationship sine wave’ depends on the time partners take to form their emotional reactions towards each other.

The publication in Applied Mathematics and Computation is now available online.

An example of a modeled relationship, in this case between Romeo (solid lines) and Juliet (dashed lines). The tau (τ) above the individual figures indicates the delay in reactivity. Delays that are too short (<0.83) cause instability, just like delays that are too long (>2.364). Delays in the range 0.83–2.364 stabilize Romeo and Juliet’s relationship. (Credit: Image courtesy of Radboud University Nijmegen)

In 1988, Steven Strogatz was the first to describe romantic relationships with mathematical dynamical systems. He constructed a two-dimensional model describing two hypothetical partners who interact emotionally, using a well-known example: the changes in Romeo’s and Juliet’s love (and hate) over time. His model became famous and inspired others to analyze (fictional) relationship case studies, such as Jack and Rose in the film Titanic. However, the Strogatz model does not include delays in the partners’ responses to one another, so it is only a starting point for fruitful studies of human emotions and relationships.

That is why Natalia Bielczyk adjusted Strogatz’s model into a more lifelike one by considering the time needed to process and form the complex emotions in relationships. Reactivity in the relationship model is based on four parameters: both partners have a personal history (their ‘past’) and a certain reactivity to their partner and his or her history. Depending on these parameters, different classes of relationships emerge: some seem doomed to break regardless of the partners’ promptness with one another, while others are solid enough to always be stable. In the calculated models, stability occurs when both partners reach a stable level of satisfaction and the sine wave disappears. The paper concludes that, for a broad class of relationships, delays in reactivity can bring stability to couples that are originally unstable.

These results are quite intuitive: responses that are too prompt or too delayed invite trouble. Below a certain value, delays caused instability, and above it they caused stability, showing that some minimum level of sloth can be beneficial for a relationship. The fact that overly fast emotional reactivity can lead to destabilization shows that mirroring each other’s moods is not enough for a stable relationship: a certain time range is necessary for compound emotions to form. In summary, the publication offers mathematical justification for intuitive phenomena in social psychology. Working on good communication, studying each other’s emotions and working out the right timing can improve your relationship, even without trying to change your partner’s traits (which is harder and takes more time).
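The general mechanism, that a delay in the feedback between two coupled states can turn growing oscillations into decaying ones, can be sketched with a generic delayed-feedback model. This is not the authors’ Romeo-and-Juliet system; the equation, the feedback form and all parameter values below are invented for illustration.

```python
# Sketch: delayed feedback changing the stability of an oscillation.
# An unstable spiral z' = (sigma + i*omega) z is given the feedback
# term -K (z(t) - z(t - tau)), integrated by forward Euler.
# With tau = 0 the feedback term vanishes and the oscillation grows;
# a half-period delay damps it; a full-period delay is again
# ineffective, so the oscillation grows once more.
import math

def simulate(tau, sigma=0.1, omega=math.pi, K=0.2, dt=0.01, t_end=40.0):
    d = int(round(tau / dt))                 # delay expressed in steps
    z = [complex(1.0, 0.0)] * (d + 1)        # constant history for t <= 0
    growth = complex(sigma, omega)
    for _ in range(int(t_end / dt)):
        feedback = -K * (z[-1] - z[-1 - d])  # zero when d == 0
        z.append(z[-1] + dt * (growth * z[-1] + feedback))
    return z

def tail_amplitude(z, dt=0.01, window=5.0):
    """Largest |z| over the last `window` time units."""
    return max(abs(v) for v in z[-int(window / dt):])

no_delay    = tail_amplitude(simulate(tau=0.0))  # grows: unstable
half_period = tail_amplitude(simulate(tau=1.0))  # decays: stabilized
full_period = tail_amplitude(simulate(tau=2.0))  # grows again: delay too long
```

The qualitative pattern, instability at zero delay, stability in an intermediate window, instability again when the delay is too long, mirrors the stable range reported for the paper’s model, though the numbers here have no connection to its 0.83–2.364 window.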

Journal Reference:

  1. Natalia Bielczyk, Marek Bodnar, Urszula Foryś. Delay can stabilize: Love affairs dynamics. Applied Mathematics and Computation, 2012; DOI: 10.1016/j.amc.2012.10.028

Brazilian Mediums Shed Light On Brain Activity During a Trance State (Science Daily)

ScienceDaily (Nov. 16, 2012) — Researchers at Thomas Jefferson University and the University of Sao Paulo in Brazil analyzed the cerebral blood flow (CBF) of Brazilian mediums during the practice of psychography, described as a form of writing whereby a deceased person or spirit is believed to write through the medium’s hand. The new research revealed intriguing findings of decreased brain activity during the mediums’ dissociative state which generated complex written content. Their findings will appear in the November 16th edition of the online journal PLOS ONE.

The 10 mediums — five less expert and five experienced — were injected with a radioactive tracer to capture their brain activity during normal writing and during the practice of psychography which involves the subject entering a trance-like state. The subjects were scanned using SPECT (single photon emission computed tomography) to highlight the areas of the brain that are active and inactive during the practice.

“Spiritual experiences affect cerebral activity, this is known. But, the cerebral response to mediumship, the practice of supposedly being in communication with, or under the control of the spirit of a deceased person, has received little scientific attention, and from now on new studies should be conducted,” says Andrew Newberg, MD, director of Research at the Jefferson-Myrna Brind Center of Integrative Medicine and a nationally-known expert on spirituality and the brain, who collaborated with Julio F. P. Peres, Clinical Psychologist, PhD in Neuroscience and Behavior, Institute of Psychology at the University of Sao Paulo in Brazil, and colleagues on the research.

The mediums ranged from 15 to 47 years of automatic writing experience, performing up to 18 psychographies per month. All were right-handed, in good mental health, and not currently using any psychiatric drugs. All reported that during the study, they were able to reach their usual trance-like state during the psychography task and were in their regular state of consciousness during the control task.

The researchers found that the experienced psychographers showed lower levels of activity in the left hippocampus (limbic system), right superior temporal gyrus, and the frontal lobe regions of the left anterior cingulate and right precentral gyrus during psychography compared to their normal (non-trance) writing. The frontal lobe areas are associated with reasoning, planning, generating language, movement, and problem solving, perhaps reflecting an absence of focus, self-awareness and consciousness during psychography, the researchers hypothesize.

Less expert psychographers showed just the opposite — increased levels of CBF in the same frontal areas during psychography compared to normal writing. The difference was significant compared to the experienced mediums. This finding may be related to their more purposeful attempt at performing the psychography. The absence of current mental disorders in the groups is in line with current evidence that dissociative experiences are common in the general population and not necessarily related to mental disorders, especially in religious/spiritual groups. Further research should address criteria for distinguishing between healthy and pathological dissociative expressions in the scope of mediumship.

The writing samples produced were also analyzed and it was found that the complexity scores for the psychographed content were higher than those for the control writing across the board. In particular, the more experienced mediums showed higher complexity scores, which typically would require more activity in the frontal and temporal lobes, but this was not the case. Content produced during psychographies involved ethical principles, the importance of spirituality, and bringing together science and spirituality.

Several possible hypotheses for these many differences have been considered. One speculation is that as frontal lobe activity decreases, the areas of the brain that support mediumistic writing are further disinhibited (similar to alcohol or drug use) so that the overall complexity can increase. In a similar manner, improvisational music performance is associated with lower levels of frontal lobe activity which allows for more creative activity. However, improvisational music performance and alcohol/drug consumption states are quite peculiar and distinct from psychography. “While the exact reason is at this point elusive, our study suggests there are neurophysiological correlates of this state,” says Newberg.

“This first-ever neuroscientific evaluation of mediumistic trance states reveals some exciting data to improve our understanding of the mind and its relationship with the brain. These findings deserve further investigation both in terms of replication and explanatory hypotheses,” states Newberg.

Journal Reference:

  1. Julio Fernando Peres, Alexander Moreira-Almeida, Leonardo Caixeta, Frederico Leao, Andrew Newberg. Neuroimaging during Trance State: A Contribution to the Study of Dissociation. PLoS ONE, 2012; 7 (11): e49360. DOI: 10.1371/journal.pone.0049360

Far from random, evolution follows a predictable genetic pattern, Princeton researchers find (Princeton)

Posted October 25, 2012; 12:00 p.m.

by Morgan Kelly, Office of Communications

Evolution, often perceived as a series of random changes, might in fact be driven by a simple and repeated genetic solution to an environmental pressure that a broad range of species happen to share, according to new research.

Princeton University research published in the journal Science suggests that knowledge of a species’ genes — and how certain external conditions affect the proteins encoded by those genes — could be used to determine a predictable evolutionary pattern driven by outside factors. Scientists could then pinpoint how the diversity of adaptations seen in the natural world developed even in distantly related animals.

The Princeton researchers sequenced the expression of a poison-resistant protein in insect species that feed on plants such as milkweed and dogbane that produce a class of steroid-like cardiotoxins called cardenolides as a natural defense. The insects surveyed spanned three orders: butterflies and moths (Lepidoptera); beetles and weevils (Coleoptera); and aphids, bed bugs, milkweed bugs and other sucking insects (Hemiptera). Above: Dogbane beetle. (Photo courtesy of Peter Andolfatto)

“Is evolution predictable? To a surprising extent the answer is yes,” said senior researcher Peter Andolfatto, an assistant professor in Princeton’s Department of Ecology and Evolutionary Biology and the Lewis-Sigler Institute for Integrative Genomics. He worked with lead author and postdoctoral research associate Ying Zhen, and graduate students Matthew Aardema and Molly Schumer, all from Princeton’s ecology and evolutionary biology department, as well as Edgar Medina, a biological sciences graduate student at the University of the Andes in Colombia.

The researchers carried out a survey of DNA sequences from 29 distantly related insect species, the largest sample of organisms yet examined for a single evolutionary trait. Fourteen of these species have evolved a nearly identical characteristic due to one external influence — they feed on plants that produce cardenolides, a class of steroid-like cardiotoxins that are a natural defense for plants such as milkweed and dogbane.

Though separated by 300 million years of evolution, these diverse insects — which include beetles, butterflies and aphids — experienced changes to a key protein called sodium-potassium adenosine triphosphatase, or the sodium-potassium pump, which regulates a cell’s crucial sodium-to-potassium ratio. The protein in these insects eventually evolved a resistance to cardenolides, which usually cripple the protein’s ability to “pump” potassium into cells and excess sodium out.

Lead author Ying Zhen (foreground), Andolfatto (far left), fourth author and graduate student Molly Schumer (near left), and their co-authors sequenced and assembled all the expressed genes in 29 distantly related insect species, the largest sample of organisms yet examined for a single evolutionary trait. They used these sequences to predict how a certain protein would be encoded in the genes of 14 distantly related species that evolved a similar resistance to toxic plants. Similar techniques could be used to trace protein changes in a species’ DNA to understand how many diverse organisms evolved as a result of environmental factors. At right is research assistant Ilona Ruhl, who was not involved in the research. (Photo by Denise Applewhite)

Andolfatto and his co-authors first sequenced and assembled all the expressed genes in the studied species. They used these sequences to predict how the sodium-potassium pump would be encoded in each of the species’ genes based on cardenolide exposure.

Scientists using similar techniques could trace protein changes in a species’ DNA to understand how many diverse organisms evolved as a result of environmental factors, Andolfatto said. “To apply this approach more generally a scientist would have to know something about the genetic underpinnings of a trait and investigate how that trait evolves in large groups of species facing a common evolutionary problem,” Andolfatto said.

“For instance, the sodium-potassium pump also is a candidate gene location related to salinity tolerance,” he said. “Looking at changes to this protein in the right organisms could reveal how organisms have or may respond to the increasing salinization of oceans and freshwater habitats.”
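The screening logic described above (sequence the pump gene across many species, then look for changes shared by the cardenolide-feeding ones) can be sketched as a toy scan over a made-up alignment. The sequences and species names below are invented, and the real analysis relies on statistical models of substitution across a phylogeny, not exact matching.

```python
# Toy convergence scan (illustrative only; not the authors' pipeline).
# Given aligned protein fragments and a label for cardenolide-feeding
# species, report alignment positions where every toxin-exposed species
# carries the same amino acid and no unexposed species carries it.

aligned = {
    # hypothetical 10-residue fragments of the sodium-potassium pump
    "beetle_exposed": "QAATNEKVLG",
    "moth_exposed":   "QAATNEKVLG",
    "aphid_exposed":  "QAATNDKVLG",   # a harmless private variant at site 5
    "fly_unexposed":  "QAQTNEKVLG",
    "bug_unexposed":  "QAQTNEKVLG",
}
exposed = {name for name in aligned if name.endswith("_exposed")}

def convergent_sites(aligned, exposed):
    length = len(next(iter(aligned.values())))
    hits = []
    for i in range(length):
        exp = {seq[i] for name, seq in aligned.items() if name in exposed}
        une = {seq[i] for name, seq in aligned.items() if name not in exposed}
        # one shared residue in all exposed species, never seen in the others
        if len(exp) == 1 and not (exp & une):
            hits.append(i)
    return hits

print(convergent_sites(aligned, exposed))  # site 2: A in exposed, Q in unexposed
```

In this toy alignment only site 2 qualifies: all exposed species share an A there while the unexposed species have Q, whereas the private E-to-D variant at site 5 is correctly ignored because it is not shared by all exposed species.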

Milkweed tussock moth (Photo courtesy of Peter Andolfatto)

Jianzhi Zhang, a University of Michigan professor of ecology and evolutionary biology, said that the Princeton-based study shows that certain traits have a limited number of molecular mechanisms, and that numerous, distinct species can share the few mechanisms there are. As a result, it is likely that a cross-section of certain organisms can provide insight into the development of other creatures, he said.

“The finding of parallel evolution in not two, but numerous herbivorous insects increases the significance of the study because such frequent parallelism is extremely unlikely to have happened simply by chance,” said Zhang, who is familiar with the study but had no role in it.

“It shows that a common molecular mechanism is used by many different insects to defend themselves against the toxins in their food, suggesting that perhaps the number of potential mechanisms for achieving this goal is very limited,” he said. “That many different insects independently evolved the same molecular tricks to defend themselves against the same toxin suggests that studying a small number of well-chosen model organisms can teach us a lot about other species. Yes, evolution is predictable to a certain degree.”

Andolfatto and his co-authors examined the sodium-potassium pump protein because of its well-known sensitivity to cardenolides. In order to function properly in a wide variety of physiological contexts, cells must be able to control levels of potassium and sodium. Situated on the cell membrane, the protein generates a desired potassium to sodium ratio by “pumping” three sodium atoms out of the cell for every two potassium atoms it brings in.
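The 3-for-2 exchange described above lends itself to a quick bookkeeping sketch (illustrative only; the function below is hypothetical and not from the study). One standard piece of physiology the article leaves implicit: because three positive charges leave the cell for every two that enter, each cycle exports one net positive charge.

```python
# Toy bookkeeping of the pump's 3-Na-out / 2-K-in cycle (illustrative only).
def pump(cycles):
    na_out = 3 * cycles              # sodium ions pumped out of the cell
    k_in = 2 * cycles                # potassium ions brought in
    net_charge_out = na_out - k_in   # one net positive charge per cycle
    return na_out, k_in, net_charge_out

print(pump(100))  # (300, 200, 100)
```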

Cardenolides disrupt the exchange of potassium and sodium, essentially shutting down the protein, Andolfatto said. The human genome contains four copies of the pump protein, and it is a candidate gene for a number of human genetic disorders, including salt-sensitive hypertension and migraines. In addition, humans have long used low doses of cardenolides medicinally for purposes such as controlling heart arrhythmia and congestive heart failure.

Large milkweed bugs (Photo courtesy of Peter Andolfatto)

The Princeton researchers used the DNA microarray facility in the University’s Lewis-Sigler Institute for Integrative Genomics to sequence the expression of the sodium-potassium pump protein in insect species spanning three orders: butterflies and moths (Lepidoptera); beetles and weevils (Coleoptera); and aphids, bed bugs, milkweed bugs and other sucking insects (Hemiptera).

The researchers found that the genes of cardenolide-resistant insects incorporated various mutations that allowed them to resist the toxin. During the evolutionary timeframe examined, the sodium-potassium pump of insects feeding on dogbane and milkweed underwent 33 mutations at sites known to affect sensitivity to cardenolides. These mutations often involved similar or identical amino-acid changes that reduced susceptibility to the toxin. By contrast, the sodium-potassium pump mutated just once in insects that do not feed on these plants.

Significantly, the researchers found that multiple gene duplications occurred in the ancestors of several of the resistant species. These insects essentially wound up with one conventional sodium-potassium pump protein and one “experimental” version, Andolfatto said. In these insects, the newer, hardier versions of the sodium-potassium pump are mostly expressed in gut tissue where they are likely needed most.

“These gene duplications are an elegant solution to the problem of adapting to environmental changes,” Andolfatto said. “In species with these duplicates, the organism is free to experiment with one copy while keeping the other constant, avoiding the risk that the new version of the protein will not perform its primary job as well.”

The researchers’ findings unify the generally separate ideas of what predominantly drives genetic evolution: protein evolution, the evolution of the elements that control protein expression, or gene duplication. This study shows that all three mechanisms can be used to solve the same evolutionary problem, Andolfatto said.

Central to the work is the breadth of species the researchers were able to examine using modern gene sequencing equipment, Andolfatto said.

“Historically, studying genetic evolution at this level has been conducted on just a handful of ‘model’ organisms such as fruit flies,” Andolfatto said. “Modern sequencing methods allowed us to approach evolutionary questions in a different way and come up with more comprehensive answers than had we examined one trait in any one organism.

“The power of what we’ve done is to survey diverse organisms facing a similar problem and find striking evidence for a limited number of possible solutions,” he said. “The fact that many of these solutions are used over and over again by completely unrelated species suggests that the evolutionary path is repeatable and predictable.”

The paper, “Parallel Molecular Evolution in an Herbivore Community,” was published Sept. 28 by Science. The research was supported by grants from the Centre for Genetic Engineering and Biotechnology, the National Science Foundation and the National Institutes of Health.

Italy convicts seven scientists for failing to predict earthquake (Folha de São Paulo)

JC e-mail 4609, 23 October 2012.

In 2009, the earthquake in L’Aquila killed more than 300 people and left some 65,000 homeless. The court held that the experts were negligent.

An Italian court yesterday (22 October) sentenced seven scientists to six years in prison for failing to predict the earthquake that struck the city of L’Aquila, in the Abruzzo region, in 2009. More than 300 people died.

All of the scientists, who will remain free pending appeal, were members of the National Commission for the Forecast and Prevention of Major Risks. They were accused of negligence for failing to properly assess the chances of the earthquake occurring and thus alert the authorities.

Among the seven convicted are leading figures of Italian science, such as Professor Enzo Boschi, former president of the National Institute of Geophysics and Volcanology, and the deputy head of the Civil Protection agency, Bernardo De Bernardinis.

Scientists around the world protested against the court’s decision to convict them of manslaughter. In protest, a letter bearing more than 5,000 scientists’ signatures was delivered to the Italian president, Giorgio Napolitano, arguing that science has no means of predicting earthquakes and that the case could deter experts from advising governments on seismic risk in the future.

Unpredictable – According to Célia Fernandes, a seismology technician at the University of São Paulo’s Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG-USP), it is very difficult to pinpoint the exact moment an earthquake will strike. “All seismology professionals work toward predicting earthquakes, but nature follows no rules. Even the recurrence of tremors is no guarantee that a high-magnitude earthquake is about to happen,” she says.

The scientists met in the city of L’Aquila on 31 March 2009, six days before the earthquake, and gave no warning of the chance of a major tremor. In the court’s view, they failed by underestimating the risks, limiting the response of the public authorities, which did not have enough time to take the measures needed to protect the population.

According to prosecutors, a series of low-level tremors had struck the region in the months before the earthquake, and the experts should have interpreted this as a sign of what was to come.

The magnitude 6.3 earthquake struck L’Aquila in April 2009. Besides the deaths, it injured another 1,500 people. An estimated 65,000 people were left homeless. The scientists’ conviction is not yet final; they are expected to appeal.

*   *   *

Artigos:

David Alexander. An evaluation of medium-term recovery processes after the 6 April 2009 earthquake in L’Aquila, Central Italy. Environmental Hazards, iFirst.

Abstract

This article uses the earthquake of 6 April 2009 at L’Aquila, central Italy (magnitude 6.3) as a case history of processes of recovery from disaster. These are evaluated according to criteria linked to both vulnerability analysis and disaster risk-reduction processes. The short- and medium-term responses to the disaster are evaluated, and 11 criticisms are made of the Italian Government’s policy on transitional shelter, which has led to isolation, social fragmentation and deprivation of services. Government policy on disaster risk is further evaluated in the light of the UNISDR Hyogo Framework for Action. Lack of governance and democratic participation is evident in the response to disasters. It is concluded that without an adequately planned strategy for managing the long-term recovery process, events such as the L’Aquila earthquake open up Pandora’s box of unwelcome consequences, including economic stagnation, stalled reconstruction, alienation of the local population, fiscal deprivation and corruption. Such phenomena tend to perpetuate rather than reduce vulnerability to disasters.

“[…] science and scientists were not on trial. The hypothesis of culpability being tested in the courts referred to the failure to adopt a precautionary approach in the face of clear indications of impending seismic impact, not failure to predict an earthquake, and this is amply documented in official records”.

David E. Alexander. The L’Aquila Earthquake of 6 April 2009 and Italian Government Policy on Disaster Response. Journal of Natural Resources Policy Research, Vol. 2, Iss. 4, 2010

Abstract

This paper describes the impact of the earthquake that struck the central Italian city of L’Aquila on 6 April 2009, killing 308 people and leaving 67 500 homeless. The pre-impact, emergency, and early recovery phases are discussed in terms of the nature and effectiveness of government policy. Disaster risk reduction (DRR) in Italy is evaluated in relation to the structure of civil protection and changes wrought by both the L’Aquila disaster and public scandals connected with the misappropriation of funds. Six of the most important lessons are derived from this analysis and related to DRR needs both in Italy and elsewhere in the world.

“As articulated at the meeting of the Commission on Major Risks on 31 March 2009, the Italian Government’s position was unequivocal: there was no cause for alarm. This attitude permeated its way down the ranks of the civil protection system. Then, at 00:30 hrs on Monday 6 April 2009, a tremor that was larger than usual shook L’Aquila. Residents rushed out of their houses in alarm. The strategy adopted by civil protection authorities was to tour the streets with loudspeakers advising people to calm down and return home. In the town of Pagánica, less than 10 km northeast of L’Aquila, residents did exactly that: in the ensuing main shock three hours later, eight of them died and 40 were seriously injured. In L’Aquila city I investigated one case in which a young lady had decided to remain out of doors after the foreshock, while her parents returned home. Their bodies were recovered by firemen from a space barely 15 cm wide into which the building had compressed as it collapsed”.

L’Aquila quake: Italy scientists guilty of manslaughter (BBC)

22 October 2012

The BBC’s Alan Johnston in Rome says the prosecution argued that the scientists were “just too reassuring”

Six Italian scientists and an ex-government official have been sentenced to six years in prison over the 2009 deadly earthquake in L’Aquila.

A regional court found them guilty of multiple manslaughter.

Prosecutors said the defendants gave a falsely reassuring statement before the quake, while the defence maintained there was no way to predict major quakes.

The 6.3 magnitude quake devastated the city and killed 309 people.

Many smaller tremors had rattled the area in the months before the quake that destroyed much of the historic centre.

It took Judge Marco Billi slightly more than four hours to reach the verdict in the trial, which had begun in September 2011.

Lawyers have said that they will appeal against the sentence. As convictions are not definitive until after at least one level of appeal in Italy, it is unlikely any of the defendants will immediately face prison.

‘Alarming’ case

The seven – all members of the National Commission for the Forecast and Prevention of Major Risks – were accused of having provided “inaccurate, incomplete and contradictory” information about the danger of the tremors felt ahead of the 6 April 2009 quake, Italian media report.

In addition to their sentences, all have been barred from ever holding public office again, La Repubblica reports.

In the closing statement, the prosecution quoted one of its witnesses, whose father died in the earthquake.

It described how Guido Fioravanti had called his mother at about 11:00 on the night of the earthquake – straight after the first tremor.

“I remember the fear in her voice. On other occasions they would have fled but that night, with my father, they repeated to themselves what the risk commission had said. And they stayed.”

‘Hasty sentence’

The judge also ordered the defendants to pay court costs and damages.

Reacting to the verdict against him, Bernardo De Bernardinis said: “I believe myself to be innocent before God and men.”

“My life from tomorrow will change,” the former vice-president of the Civil Protection Agency’s technical department said, according to La Repubblica.

“But, if I am judged by all stages of the judicial process to be guilty, I will accept my responsibility.”

Another, Enzo Boschi, described himself as “dejected” and “desperate” after the verdict was read.

“I thought I would have been acquitted. I still don’t understand what I was convicted of.”

One of the lawyers for the defence, Marcello Petrelli, described the sentences as “hasty” and “incomprehensible”.

‘Inherently unpredictable’

The case has alarmed many in the scientific community, who feel science itself has been put on trial.

Some scientists have warned that the case might set a damaging precedent, deterring experts from sharing their knowledge with the public for fear of being targeted in lawsuits, the BBC’s Alan Johnston in Rome reports.

Among those convicted were some of Italy’s most prominent and internationally respected seismologists and geological experts.

Earlier, more than 5,000 scientists signed an open letter to Italian President Giorgio Napolitano in support of the group in the dock.

After the verdict was announced, David Rothery, of the UK’s Open University, said earthquakes were “inherently unpredictable”.

“The best estimate at the time was that the low-level seismicity was not likely to herald a bigger quake, but there are no certainties in this game,” he said.

Malcolm Sperrin, director of medical physics at the UK’s Royal Berkshire Hospital said that the sentence was surprising and could set a worrying precedent.

“If the scientific community is to be penalised for making predictions that turn out to be incorrect, or for not accurately predicting an event that subsequently occurs, then scientific endeavour will be restricted to certainties only and the benefits that are associated with findings from medicine to physics will be stalled.”

Analysis

by Jonathan Amos – Science correspondent

The Apennines, the belt of mountains that runs down through the centre of Italy, is riddled with faults, and the “Eagle” city of L’Aquila has been hammered time and time again by earthquakes. Its glorious old buildings have had to be patched up and re-built on numerous occasions.

Sadly, the issue is not “if” but “when” the next tremor will occur in L’Aquila. But it is simply not possible to be precise about the timing of future events. Science does not possess that power. The best it can do is talk in terms of risk and of probabilities, the likelihood that an event of a certain magnitude might occur at some point in the future.

The decision to prosecute some of Italy’s leading geophysicists drew condemnation from around the world. The scholarly bodies said it had been beyond anyone to predict exactly what would happen in L’Aquila on 6 April 2009.

But the authorities who pursued the seven defendants stressed that the case was never about the power of prediction – it was about what was interpreted to be an inadequate characterisation of the risks; of being misleadingly reassuring about the dangers that faced their city.

Nonetheless, the verdicts will come as a shock to all researchers in Italy whose expertise lies in the field of assessing natural hazards. Their pronouncements will be scrutinised as never before, and their fear will be that they too could find themselves embroiled in legal action over statements that are inherently uncertain.

THOSE CONVICTED

Franco Barberi, head of Serious Risks Commission

Enzo Boschi, former president of the National Institute of Geophysics

Giulio Selvaggi, director of National Earthquake Centre

Gian Michele Calvi, director of European Centre for Earthquake Engineering

Claudio Eva, physicist

Mauro Dolce, director of the Civil Protection Agency’s earthquake risk office

Bernardo De Bernardinis, former vice-president of Civil Protection Agency’s technical department

 

*   *   *

Scientists in the dock over L’Aquila earthquake

By Susan Watts 

BBC Newsnight Science editor

20 September 2011

Next week six scientists and an official go on trial in Italy for manslaughter over the earthquake in L’Aquila that killed 309 people two years ago.

This extraordinary case has attracted international attention because science itself seemed to be on trial, with the seven defendants apparently charged for failing to predict the magnitude 6.3 earthquake that struck on the night of 6 April 2009.

Scientists cannot yet say when an earthquake is going to happen with any precision, even in a seismically active zone. And over 5,000 scientists from around the world have signed a letter supporting those on trial.

Quake-damaged buildings in Onna. The earthquake was felt throughout central Italy.

Yet the lawyer for one of the scientists, in an interview with Newsnight, said it is possible his client will be convicted: “I’m afraid that like an earthquake, nothing in this case is predictable. Let’s not forget, this trial is happening in L’Aquila, where the entire population has been personally affected, and awaiting a sentence that should not happen, but could happen,” Marcello Milandri said.

Seismologists can assess only the probability that a quake may happen, and then with a large degree of uncertainty about its properties.

In some circumstances, they may be able to say that the likelihood of an event has gone up, to help authorities prepare for an emergency, perhaps by concentrating on particularly vulnerable buildings or sectors of the population, such as school-children.

Weighing the risks

The signatories to the letter say the authorities should focus on earthquake protection, instead of pursuing scientists in what some feel is a Galileo-style inquisition.

Newsnight went to L’Aquila to find out why this case has come about.

The prosecution team said they never intended to put science on trial, that they know it is not possible to predict an earthquake.

What they are questioning is whether the six scientists and the official on trial, who together constitute Italy’s Commission of Grand Risks, did their jobs properly.

That is, did they weigh up all the risks, and communicate these clearly to the authorities seeking their advice?

The local investigator, Inspector Lorenzo Cavallo, said: “The Commission calmed the local population down following a number of earth tremors. After the quake, we heard people’s accounts and they told us they changed their behaviour following the advice of the commission.

“It is our duty to investigate what has been said in each case and pass it on to the legal authority.”

Radon gas claims

A local journalist, Giustino Parisse, who lived in Onna, a small hamlet outside L’Aquila at the time, is one of those bringing the case.

In the weeks leading up to the major quake there had been a series of tremors. On the night of 5 April, several large shocks kept his children awake.

They were anxious, but he told them to go back to bed, that there was no need to worry, the scientists had said so.

Rescuers carrying a body. The quake was the deadliest to hit Italy since 1980.

His 16-year-old daughter and 17-year-old son both died in the earthquake that night, along with his father, when the family home collapsed.

He told Newsnight that people had been becoming increasingly anxious, in part because of warnings from a local nuclear scientist, Giampaolo Giuliani, that raised levels of radon gas in the area suggested to him an earthquake might be imminent.

How valuable this is as an indicator is widely disputed, and most experts in this field believe it is unreliable.

At the time the head of Italy’s civil protection agency, Guido Bertolaso, took the unusual step of asking his Commission of Grand Risks to fly to L’Aquila to discuss the situation.

They held a meeting that lasted only an hour or so, then the official now on trial, Bernardo De Bernardinis, who was then deputy director of the civil protection department, held a hurried press briefing, in reassuring tones.

Two of those on trial are linked to Italy’s National Institute of Geophysics and Volcanology (INGV).

The institute’s head of public affairs, Pasquale de Santis, told Newsnight that the trial is a distraction, that seismologists have been saying since 1998 that this is a high risk area, and that people should instead be focussing on those who failed properly to enforce building codes in L’Aquila.

Funding needed

We put this to the mayor of L’Aquila, Massimo Cialente. He hopes the trial will prompt a national debate, and make it easier for him to raise the funds and support he needs to protect people against future earthquakes.

He said six days before the major quake he moved local children from a school damaged in an earlier tremor. He said he had no official budget to do that, because prevention is not a national priority.

“We closed the school and we had to transfer 500 pupils. I needed money, but I started the work without the money. If the quake did not happen I would be charged for that.”

Those bringing the case say the people of L’Aquila have a right to know what happened. Many hope the trial will bring some peace of mind.

But some of those who signed the letter of support told Newsnight they fear the case will dissuade scientists from leaving their labs to engage with politicians and the public.

John McCloskey, professor of geophysics at Ulster University, said these scientists have spent their lives producing some of the most sophisticated seismic maps in the world.

He said it is an “outrage” that they are now on trial for manslaughter, adding that he signed the letter because “their peril is our peril”.

*   *   *

Can we predict when and where quakes will strike?

By Leila Battison – Science reporter

20 September 2011

l'Aquila earthquakeSeismologists try to manage the risk of building damage and loss of life

This week, six seismologists go on trial for the manslaughter of 309 people, who died as a result of the 2009 earthquake in L’Aquila, Italy.

The prosecution holds that the scientists should have advised the population of L’Aquila of the impending earthquake risk.

But is it possible to pinpoint the time and location of an earthquake with enough accuracy to guide an effective evacuation?

There are continuing calls for seismologists to predict where and when a large earthquake will occur, to allow complete evacuation of threatened areas.

Predicting an earthquake with this level of precision is extremely difficult, because of the variation in geology and other factors that are unique to each location.

Attempts have been made, however, to look for signals that indicate a large earthquake is about to happen, with variable success.

Historically, animals have been thought to be able to sense impending earthquakes.

Noticeably erratic behaviour of pets, and mass movement of wild animals like rats, snakes and toads have been observed prior to several large earthquakes in the past.

Following the L’Aquila quake, researchers published a study in the Journal of Zoology documenting the unusual movement of toads away from their breeding colony.

But scientists have been unable to use this anecdotal evidence to predict events.

The behaviour of animals is affected by too many factors, including hunger, territory and weather, and so their erratic movements can only be attributed to earthquakes in hindsight.

Precursor events

When a large amount of stress is built up in the Earth’s crust, it will mostly be released in a single large earthquake, but some smaller-scale cracking in the build-up to the break will result in precursor earthquakes.

These small quakes precede around half of all large earthquakes, and can continue for days to months before the big break.

Some scientists have even gone so far as to try to predict the location of the large earthquake by mapping the small tremors.

The “Mogi Doughnut Hypothesis” suggests that a circular pattern of small precursor quakes will precede a large earthquake emanating from the centre of that circle.

While half of the large earthquakes have precursor tremors, only around 5% of small earthquakes are associated with a large quake.

So even if small tremors are felt, this cannot be a reliable prediction that a large, devastating earthquake will follow.
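A back-of-the-envelope tally makes the asymmetry concrete (the episode count is hypothetical; the percentage is the one quoted above):

```python
# If only ~5% of small-tremor episodes are followed by a large quake,
# warning on every episode yields mostly false alarms.
episodes = 1000                      # hypothetical observed tremor episodes
true_warnings = episodes * 5 // 100  # episodes actually followed by a big quake
false_alarms = episodes - true_warnings

print(true_warnings, false_alarms)   # 50 950
```

Evacuating on every episode would therefore mean acting needlessly 19 times out of 20, which is the false-alarm burden discussed later in the article.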

“There is no scientific basis for making a prediction”, said Dr Richard Walker of the University of Oxford.

In several cases, increased levels of radon gas have been observed in association with rock cracking that causes earthquakes.

Leaning building. Small ground movements sometimes precede a large quake.

Radon is a natural and relatively harmless gas in the Earth’s crust; when rock breaks, it is released and dissolves into groundwater.

Similarly, when rock cracks, it can create new spaces in the crust, into which groundwater can flow.

Measurements of groundwater levels around earthquake-prone areas see sudden changes in the level of the water table as a result of this invisible cracking.

Unfortunately for earthquake prediction, both the radon emissions and water level changes can occur before, during, or after an earthquake, or not at all, depending on the particular stresses a rock is put under.

Advance warning systems

The minute changes in the movement, tilt, and the water, gas and chemical content of the ground associated with earthquake activity can be monitored on a long term scale.

Measuring devices have been integrated into early warning systems that can trigger an alarm when a certain amount of activity is recorded.

Such early warning systems have been installed in Japan, Mexico and Taiwan, where the population density and high earthquake risk pose a huge threat to people’s lives.

But because of the nature of all of these precursor reactions, the systems may only be able to provide up to 30 seconds’ advance warning.

“In the history of earthquake study, only one prediction has been successful”, explains Dr Walker.

The magnitude 7.3 earthquake in 1975 in Haicheng, North China was predicted one day before it struck, allowing authorities to order evacuation of the city, saving many lives.

But the pattern of seismic activity that this prediction was based on has not resulted in a large earthquake since, and just a year later, in 1976, a completely unanticipated magnitude 7.8 earthquake struck nearby Tangshan, killing more than a quarter of a million people.

The “prediction” of the Haicheng quake was therefore just a lucky unrepeatable coincidence.

A major problem in the prediction of earthquake events that will require evacuation is the threat of issuing false alarms.

Scientists could warn of a large earthquake every time a potential precursor event is observed, however this would result in huge numbers of false alarms which put a strain on public resources and might ultimately reduce the public’s trust in scientists.

“Earthquakes are complex natural processes with thousands of interacting factors, which makes accurate prediction of them virtually impossible,” said Dr Walker.

Seismologists agree that the best way to limit the damage and loss of life resulting from a large earthquake is to predict and manage the longer-term risks in an earthquake-prone area. These include assessing the likelihood of buildings collapsing and implementing emergency plans.

“Detailed scientific research has told us that each earthquake displays almost unique characteristics: some are preceded by foreshocks or small tremors, whereas others occur without warning. There simply are no rules to utilise in order to predict earthquakes,” said Dr Dan Faulkner, senior lecturer in rock mechanics at the University of Liverpool.

“Earthquake prediction will only become possible with a detailed knowledge of the earthquake process. Even then, it may still be impossible.”

What causes an earthquake?

An earthquake is caused when rocks in the Earth’s crust fracture suddenly, releasing energy in the form of shaking and rolling, radiating out from the epicentre.

The rocks are put under stress mostly by friction during the slow, 1-10 cm per year shuffling of tectonic plates.

The release of this friction can happen at any time, either through small frequent fractures, or rarer breaks that release a lot more energy, causing larger earthquakes.

It is these large earthquakes that have devastating consequences when they strike in heavily populated areas.

Attempts to limit the destruction of buildings and the loss of life mostly focus on preventative measures and well-communicated emergency plans.

*   *   *

Long-range earthquake prediction – really?

By Megan Lane – BBC News

11 May 2011

In Italy, Asia and New Zealand, long-range earthquake predictions from self-taught forecasters have recently had people on edge. But is it possible to pinpoint when a quake will strike?

It’s a quake prediction based on the movements of the moon, the sun and the planets, and made by a self-taught scientist who died in 1979.

But on 11 May 2011, many people planned to stay away from Rome, fearing a quake forecast by the late Raffaele Bendandi – even though his writings contained no geographical location, nor a day or month.

In New Zealand too, the quake predictions of a former magician who specialises in fishing weather forecasts have caused unease.

“The date is not there, nor is the place” – Paola Lagorio, of the foundation that honours Bendandi

After a 6.3 quake scored a direct hit on Christchurch in February, Ken Ring forecast another on 20 March, caused by a “moon-shot straight through the centre of the earth”. Rattled residents fled the city.

Predicting quakes is highly controversial, says Brian Baptie, head of seismology at the British Geological Survey. Many scientists believe it is impossible because of the quasi-random nature of earthquakes.

“Despite huge efforts and great advances in our understanding of earthquakes, there are no good examples of an earthquake being successfully predicted in terms of where, when and how big,” he says.

Many of the methods previously applied to earthquake prediction have been discredited, he says, adding that predictions such as that in Rome “have little basis and merely cause public alarm”.

A woman holding her pet cat in a tsunami-devastated street in Japan. Can animals pick up quake signals?

Seismologists do monitor rock movements around fault lines to gauge where pressure is building up, and this can provide a last-minute warning in the literal sense, says BBC science correspondent Jonathan Amos.

“In Japan and California, there are scientists looking for pre-cursor signals in rocks. It is possible to get a warning up to 30 seconds before an earthquake strikes your location. That’s enough time to get the doors open on a fire station, so the engines can get out as soon as it is over.”

But any longer-range prediction is much harder.

“It’s like pouring sand on to a pile, and trying to predict which grain of sand on which side of the pile will cause it to collapse. It is a classic non-linear system, and people have been trying to model it for centuries,” says Amos.

In Japan, all eyes are on the faults that lace its shaky islands.

On Monday, Trade and Industry Minister Banri Kaieda urged that the Hamaoka nuclear plant near a fault line south-west of Tokyo be shut down, pending the construction of new tsunami defences.

Seismologists have long warned that a major earthquake is overdue in this region.

But overdue earthquakes can be decades, if not centuries, in coming. And this makes it hard to prepare, beyond precautions such as construction standards and urging the populace to lay in emergency supplies that may never be needed.

Later this year, a satellite is due to launch to test the as-yet unproven theory that there is a link between electrical disturbances on the edge of our atmosphere and impending quakes on the ground below.

Toad warning

Then there are the hypotheses that animals may be able to sense impending earthquakes.

Last year, the Journal of Zoology published a study into a population of toads that left their breeding colony three days before a 6.3 quake struck L’Aquila, Italy, in 2009. This was highly unusual behaviour.

But it is hard to objectively and quantifiably study how animals respond to seismic activity, in part because earthquakes are rare and strike without warning.

A man carrying a young girl through stricken streets in Christchurch. Countries in the Pacific’s “Ring of Fire”, like New Zealand, are regularly shaken by quakes.

“At the moment, we know the parts of the world where earthquakes happen and how often they happen on average in these areas,” says Dr Baptie.

This allows seismologists to make statistical estimates of probable ground movements that can be used to plan for earthquakes and mitigate their effects. “However, this is still a long way from earthquake prediction,” he says.
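Such statistical estimates typically start from the Gutenberg–Richter frequency–magnitude law, log10(N) = a − b·M, combined with a Poisson occurrence model. A minimal sketch, assuming illustrative values for the regional parameters a and b (real values are fitted from an earthquake catalogue):

```python
import math

def annual_rate(magnitude, a=4.0, b=1.0):
    """Gutenberg-Richter law: expected number of earthquakes per year with
    magnitude >= M is N(M) = 10**(a - b*M).
    The a and b values here are illustrative, not fitted."""
    return 10 ** (a - b * magnitude)

def prob_at_least_one(magnitude, years, a=4.0, b=1.0):
    """Poisson probability of at least one quake of magnitude >= M
    occurring within the given time window."""
    lam = annual_rate(magnitude, a, b) * years
    return 1 - math.exp(-lam)

# With these example parameters a magnitude-6+ quake is expected about
# once a century, yet over 50 years the odds are substantial:
print(round(prob_at_least_one(6.0, years=50), 2))  # 0.39
```

As the article stresses, such hazard rates only say where and how often on average, never when.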

And what of the “prophets” who claim to predict these natural disasters?

“Many regions, such as Indonesia and Japan, experience large earthquakes on a regular basis, so vague predictions of earthquakes in these places requires no great skill.”

 

Who was Raffaele Bendandi?

  • Born in 1893 in central Italy
  • In November 1923, he predicted a quake would strike on January 2, 1924
  • Two days after this date, it did, in the Italian province of Le Marche
  • Mussolini made him a Knight of the Order of the Crown of Italy
  • But he also banned Bendandi from making public predictions, on pain of exile

‘Alternatives to development’: an interview with Arturo Escobar (transitionculture.org)

28 Sep 2012

At the 2012 Degrowth conference in Venice one of the highlights for me was the talk by Arturo Escobar (my notes from which can be found here). He is the author of Encountering Development and Territories of Difference, among others. His talk looked at how Transition might look in the context of the Global South, and held many fascinating insights. Here is the interview I did with him, first as an audio file, and below as a transcript.

So, Arturo, could you tell us a little bit about yourself please?

My name is Arturo Escobar, I was born and grew up in Colombia and I teach in the US, at the University of North Carolina in Chapel Hill. I teach anthropology and most of my work as an anthropologist is also in Colombia, especially the rainforest region, the Pacific region of Colombia, with African descendant movements and communities.

So Arturo, you gave a presentation yesterday about what Degrowth would look like in the context of the developed world and the developing world, the Global North, the Global South. Could you set out what you see as the prime motivation in each of those places – what’s distinct between those two?

OK. One of the points I was trying to make is a parallel with the Degrowth movement as a set of ideas and political and social projects for transformation or transition in the Global North, especially in Europe (the US is still way south, as you probably know better than me).

The parallel movement in Latin America at least, maybe not so much in the Global South as a whole but in Latin America in particular, which is the region of the world I know best because I am from there and have been working there for a long time as an anthropologist, ecologist and activist, is what I call ‘Alternatives to Development’.

When you talk about Degrowth, I think it was Marcelo the theologian who referred to this in our session: when he speaks about Degrowth in Brazil, people laugh at him: “why do we need Degrowth, with all this poverty and all these problems and all these possibilities for growing? We Brazilians are growing like crazy; Degrowth doesn’t make any sense”.

I think that’s a mistaken perception of what Degrowth is in Latin America, because people who have looked at Degrowth and Transition Town initiatives in South America, including some environmentalists, find them appealing but not sufficient for tackling the issues in South America.

One of the main ones – and he might be a great person for you to interview as well – if I wanted to point you to one single source in the South American debates on Transition, alternatives to development and Buen Vivir, would be the Uruguayan ecologist Eduardo Gudynas. He knows about Transition Towns, he’s read your books, he has a great outfit in Montevideo, but he spends most of his time in the Andean region, specifically Nicaragua, Bolivia, Peru, Ecuador and Colombia.

Not Chile, not Brazil, not Venezuela, especially the four countries in the Andes. The other person who is really focussing on this is an Ecuadorian named Alberto Acosta, who was the president of the constituent assembly that wrote the new constitution for Ecuador, where there is a huge section on Buen Vivir and the rights of nature. Both of them have been writing about alternatives to development and about the other concept that I didn’t get to explain yesterday, which is transitions to a post-extractivist model of society and economy.

What they say – and they have some differences with Degrowth – is that here in Latin America we still have to grow in some ways. People’s livelihoods have to improve, and it’s difficult to do that without some growth. Health, education, housing – there are some sectors where the economy still has to grow.

But the second point they say is that growth has to be subordinated to a different vision of development, which is the Buen Vivir.

Could you tell us a bit more about what that is?

Yes, the Buen Vivir is a concept that has been coming out strongly over the past 10 years, especially in South America, in the context of the emergence of the left-leaning regimes in many South American countries, almost all South American countries with the exception of Colombia and Peru now, well it’s difficult to say what Peru’s current regime is.

In that context, it is the search for a different way of thinking about development, pushed by indigenous peoples and to some extent by peasants and African descendants, in collaboration with ecologists, sometimes feminists, sometimes activists from different social movements. They started to say that this is the moment to change our development model, from one oriented to growth and the extraction of natural resources to something more holistic, something that really speaks to the cosmo-visions of indigenous peoples, in which this notion of prosperity based only on material well-being and material consumption does not exist. What has traditionally been cultivated among indigenous communities is not even a notion of development; that is the key, because people are saying Buen Vivir is the new theory of development.

No, it’s not a theory of development. It’s a theory of something else that is not development. People translate it as ‘the good life’. I prefer to translate it as collective well-being. But it’s a collective well-being of both humans and non-humans. Humans, human communities and the natural world, all living beings.

And what does that look like in practice? What are the elements of it?

That’s the key question: the practice, the implementation of the Buen Vivir. That’s the struggle, especially in Ecuador and Bolivia, which have governments that were put in power mostly by coalitions of social movements, especially indigenous movements. They were elected in 2006 with the promise that they would carry out this mandate of the Buen Vivir in the constitutions of both Bolivia and Ecuador, with different notions of Buen Vivir in each constitution.

The constitutions say that the goal of state policies should be to promote Buen Vivir, which involves social justice, a new notion of rights that includes the rights of nature, ecological sustainability, and the elimination or reduction of poverty. The reduction of poverty and the protection of nature are its two main dimensions.

So there are two sides to the Buen Vivir: the social, economic and political side, and the rights of nature, which is the ecological side. Those are the aims of the constitutions and development plans. I’ve looked at the development plans of both governments and they are very contradictory, because they say “we have to carry out this mandate” but keep falling back on the old ideas about growth, extraction of natural resources, and planning as a top-down exercise: “we the experts have decided the plan for the Buen Vivir”. Communities feel excluded.

So they clash now in both countries. The clash, as in southern Colombia and southern Mexico (Chiapas and Oaxaca), is between indigenous, peasant and black movements on the one hand – movements that are for the Buen Vivir, for a different vision of development – and on the other hand the state approach, which is still what Gudynas and Acosta in particular call ‘neo-extractivist’.

They are neo-extractivist because they are still based on the extraction of natural resources: oil, natural gas, lithium, soy beans, sugar cane, agro-fuels of all kinds, gold, minerals. They are Left regimes that are transacting with corporations – Canadian, American, European, South African, Chinese – to take out natural resources. It is not traditional extractivism of the kind seen under the older Venezuelan regimes, for instance, where there was so much oil but it benefited only a small elite.

Now the idea of these Left regimes, which is a very good idea obviously, is that they are going to use the revenues – far larger than under the previous regimes, which basically gave everything to the corporations – for social redistribution, to reduce poverty and inequality, and to some extent they are doing it. But in the process they have become neo-developmentalist models, pretty much the same as in the past but with a better social policy.

It’s interesting that the starting point was the idea of social justice linked to environmental protection, whereas in England at the moment, for example, the British government is basically saying we have to go for economic growth at all costs and environmental protection is optional. It’s interesting to see how, with Buen Vivir, that’s been there from the beginning.

Exactly, and that is happening in the US as well, with policies like hydro-fracking which has been given carte blanche all over the place.

So in Transition we get asked about what Transition should look like in the Global South, and we say it’s about building resilience in both places, that the process of globalising food production has reduced food resilience in the Global North because we’ve become so dependent on imports and moving stuff around, and in the Global South it’s about the destruction of small farming and so on and so on. What’s your sense of that balance of how we build resilience in both places?  Also what Transition groups who are working in the Global North can do through their actions to support what’s happening in the South?

I think the concept of resilience is very good and I know that you emphasise it from the very first book, the concept of resilience.  I think it is a concept that could cut across Global North and Global South. I would have to go and look more carefully to see if it is being used now in Latin America, but it is a very fruitful concept, and actually that would be a very good question for Eduardo Gudynas who is a very good friend of mine, so I am going to ask him the question.

There are some parallels that I think could be thought about for both the Global North and the Global South in principle. In practice they would have their own specificities as you yourself said yesterday in your presentation on the first night, because every town basically has its own specificities. Local food, I think is a very important one in the Global North. It is increasingly important in the Global South, under a different umbrella.

The different umbrella is that of food sovereignty, food autonomy. In Colombia, for instance, movements prefer to use autonomia alimentaria (food autonomy), which is somewhat different from food sovereignty. Food sovereignty tends to put the emphasis on the national level, so a country might say we basically produce food for the population blah blah blah; that’s not good enough. There has to be food autonomy locally, regionally, nationally.

So peasant movements like Via Campesina, a very important movement in Latin America and worldwide, are focussing on food sovereignty, and food autonomy to a lesser extent. So the question of food is crucial as an entry point to Transition.

Energy? Energy is so important in the Global North. I see it as less important in the Global South, and that doesn’t necessarily mean something good: we should be thinking more about energy. Actually, now that I recall, one of Gudynas’s co-workers has a programme on energy, in particular for South America. He talks about the transformations that have to take place at the level of energy for transitions to take place.

There are people in the Global North who say ‘oh, you can’t talk about local food, because if you talk about local food you’re condemning farmers in Kenya and Chile to poverty and unemployment’. How do you respond to that argument?

I don’t think it makes any sense! If you look carefully, sure, there’s a lot of food being grown in Africa, Asia and South America for the European and American markets, but who’s benefiting from that? Most times it’s not local peasants. It ceased to be local peasants at least two or three decades ago.

Take even some of the agro-fuels that are touted as big environmental solutions, like African palm, which I know very well because it has been planted all over Colombia. It’s being done at the expense of local communities and local ecosystems, by large Colombian capitalists or by large corporations.

I know that in parts of Africa and the Middle East it’s mostly German and European corporations that are planting food in these countries, with local cheap labour, to be exported to European markets. So on the contrary, I think local food in the north is going to be good for local food in the south. It’s going to stop this idea that the south will have to grow luxury crops for the Global North.

So if a Transition initiative in the Global North is actively working to localise its food supply, to reduce its carbon footprint, to put in place renewable energy infrastructure, to localise its economy, is your sense that by default that is helping the movement towards alternative development in the Global South, or could they be doing something more mindfully, more intentionally, to support that struggle at the same time?

I think that the first option that you outlined is the better way to think about it. That doesn’t mean that we shouldn’t do it thinking about the Global South as well, and how the Global South is affected. There might be cases in which particular groups in the Global South are hurt by practices that emerge around Transition initiatives in the Global North. For instance, one of the speakers this morning, Antonella Picchio, a feminist economist, said we should always think from the perspective of women.

In principle that’s very good. How do we ask the question – how might our activities in Transition initiatives in the Global North benefit, or hurt, particular vulnerable groups in the Global South.  Women, indigenous peoples, black peoples, ethnic minorities and peasants in particular.  I think that’s always a very good question to ask. It’s not such a huge question to answer, you sort of follow the threads of the actions.

But as a whole I would tend to think Transition activities in the Global North would contribute, if not immediately then at some point, to alternatives to development and local autonomies in the Global South, to the extent that they continue to erode corporate power, which is what unites us and which is really screwing up everybody, including people in the Global North.

My Finnish and Canadian friends tell me that the same corporations that have been screwing up the Global South for so many decades are now doing the same in northern Canada and Finland. So it’s not even going to be the north that’s going to be spared anymore.  In that sense I think the alliances have to be built. The conversations between Transition activists in the north and Transition activists in the south have to be cultivated. They will be somewhat difficult conversations and I think the questions you are asking are the ones we have to start with.

The concept, the practice of Transition that we use for different parts of the world, we have to take into account that they will be inter-cultural conversations, inter-epistemic conversations, different knowledge is going to be involved, and those require translation.  Translation across knowledges, across cultures, across histories, across different ways of being negatively affected by globalisation, across levels of privilege and so forth.

Is just applying the concept of localisation going to generate the kind of employment that these countries need?

Probably not. I think it has to be layered: certainly a lot of emphasis on local actions and local solutions, but there also has to be some degree of thinking and policy implementation at the regional and national levels. The state has to become more part of the solution than part of the problem, which is what it is now.

With some of these progressive regimes the state has tried to become part of the solution as well, in terms of connecting with social movements. But the give and take between the social movements, which push more for local autonomy, the protection of territories and the preservation of cultural and biological diversity, and the state, which has the national or transnational level in mind, is getting really tight again, and ruptures are beginning to happen, even in countries like Bolivia and Ecuador where there has been more closeness between the state and the movements.

What’s the role of technology here? There are some people who would say if we could do open-source genetic modification then that would have a role. There are all these technologies like nuclear power, these kinds of things.  In your take on alternatives to development what constitutes good technology and what constitutes a technology that doesn’t have a place?

I think technology is super important. In the Buen Vivir, indigenous communities, Afro-descendant communities and peasant communities are not opposed to technology per se. If they can be connected to the internet, if they can have technologies that improve the productivity of the land, if they can have technologies that improve their living standards, that’s all great.

What they are opposed to is having those technologies come in at the expense of their autonomy, their territories, their cultural traditions, their world-views and ways of living. And when you read that the Buen Vivir, because it has been promoted mostly by indigenous movements and intellectuals, is something about going back to the past – I think this is a misconception. It’s not at all. It’s not about going back.

Someone said that here today too: Degrowth is not about going back, it’s about moving forwards. The same with indigenous communities; it’s about moving forwards, but how? The difference is the “how”. The way we’re moving forwards today, on the basis of growth and extractivism and profit and the dominance of one particular model, which is capitalism and modernity – for many communities and movements, that is the end and it has to stop.

But it’s not anti-technology and it’s not anti-modern. For me the criterion is to weaken or lessen the dominance of the growth model, the hi-tech model, the conventional neo-liberal economic model, and the dominance of one particular cultural framework, which is the cultural framework of modernity, and to allow for many different world-views and frameworks.

Calculated risk (Fapesp)

Workshop on climate extremes exposes the challenge of turning scientific information into disaster prevention

FABRÍCIO MARQUES | Issue 199 – September 2012

Flooding at a New Orleans amusement park after Hurricane Katrina in 2005: the tragedy awakened American awareness. © BOB MCMILLAN / FEMA PHOTO

It is practically certain – the certainty, in this case, reaches 99% – that the frequency of warm days and nights will increase in different regions of the planet by 2100. As for the intensity of rainfall, which has indeed worsened in several areas, there is still doubt about whether the phenomenon is global – the available data give predictions along these lines a confidence level of 66%. Released last March, the Special Report on Managing the Risks of Climate Extremes and Disasters (SREX) pointed to these trends, among several others, based on recent scientific knowledge compiled by the Intergovernmental Panel on Climate Change (IPCC). Its results were discussed at a meeting held in the Moise Safra auditorium of the Albert Einstein Convention Centre in São Paulo on 16 and 17 August, at which researchers from several countries also debated strategies for managing the impacts and for bringing the knowledge to decision-makers. The workshop, “Managing the risks of climate extremes and disasters in Central and South America – what can we learn from the IPCC Special Report on extremes?”, was organised by FAPESP and the National Institute for Space Research (Inpe).
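The 99% and 66% figures correspond to the IPCC’s calibrated likelihood scale, in which verbal terms map to probability thresholds (“virtually certain” at 99% or above, “very likely” at 90%, “likely” at 66%, and so on). A small sketch of that mapping (the function and list are ours, not IPCC code):

```python
# IPCC calibrated likelihood terms and their lower probability bounds,
# as used in the SREX and related assessment reports.
LIKELIHOOD_SCALE = [
    ("virtually certain", 0.99),
    ("extremely likely", 0.95),
    ("very likely", 0.90),
    ("likely", 0.66),
    ("more likely than not", 0.50),
]

def likelihood_term(p):
    """Return the strongest IPCC likelihood term satisfied by probability p."""
    for term, lower_bound in LIKELIHOOD_SCALE:
        if p >= lower_bound:
            return term
    return "about as likely as not, or less"

print(likelihood_term(0.99))  # virtually certain
print(likelihood_term(0.66))  # likely
```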

“It became clear in the discussions that the interface between scientists, managers and local communities is a critical point. There is a lot of noise in that communication,” climatologist José Marengo, the workshop’s coordinator and a member of the SREX organising committee, told Agência FAPESP. Perhaps the most important recommendation to emerge from the debates was this: new channels of dialogue must be established between scientists and authorities in order to confront the disaster risks arising from extreme climate events and to reduce the damage they cause. The researchers at the workshop also stressed the need for more active government participation in decisions on issues such as vulnerability to climate change and adaptation strategies. “Governments are poorly prepared and continue to be caught off guard by meteorological events that are increasing in frequency and intensity, as the reports show, and that will increase even further in the future,” said Marengo, who coordinates Inpe’s Earth System Science Centre and leads a thematic project, within the FAPESP Research Program on Global Climate Change (PFPMCG), on the impact of climate extremes on ecosystems and human health in Brazil.

According to the researcher, funds for risk mapping and for relocating populations in vulnerable areas often exist, but the money ends up being transferred elsewhere. “This shows a failure in our dialogue with local governments. It is no secret that the climate is changing, and every year people die in disasters that could be avoided if these funds were applied,” he said.

The way scientific information reaches society is often quite different from what researchers imagine. “Our debates included discussions of terms such as ‘uncertainty’, which comes from the field of climate modelling and whose meaning we scientists understand, but which has not yet been adequately translated for the public,” said Marengo. Another confusion involves the very concept of a disaster. “It is not the rain that kills people. It is the combination of rain with families living on slopes and in precarious housing. Intense rainfall cannot be stopped, but with planning it is possible to reduce the number of deaths,” he said. Society’s perception of climate change sometimes follows a logic different from that of scientists. Marengo cites the example of Hurricane Katrina, which devastated the southern United States in 2005 and flooded the city of New Orleans. “There is no way to claim that Katrina, taken in isolation, was a result of global change. But it was this event that awakened the American public to the problem,” he said.

Scarcity of data

One of the main conclusions of the SREX report, which the IPCC prepared at the request of the Norwegian government and the United Nations International Strategy for Disaster Reduction, is that the frequency of extreme climate events worldwide has been increasing in recent decades as a result of climate change. Based on present evidence, the report indicates that an increase in the frequency of warm days and nights in different regions of the planet in the coming years is highly likely. But whether some extreme climate phenomena tend to occur on a global scale remains uncertain, owing to the scarcity of data. The document points to doubts about an increase in the frequency of intense rainfall worldwide, identifying regions where it has increased and others where it has decreased. Evidence is also lacking that tropical cyclones have become more frequent, although the rainfall associated with them has indeed become more intense. Likewise, droughts may strike certain regions of the planet, such as the Brazilian Northeast or Mexico, with greater frequency and intensity without representing a planet-wide phenomenon.

© PHOTOS 1 AND 2: LÉO RAMOS; 3: BIDGEE / WIKICOMMONS; 4: NASA; 5: TOMAS CASTELAZO

For the researchers who produced the report, one of the main challenges was harmonising the discourse of specialists from different fields. “It was the first effort to exchange knowledge in a multidisciplinary way,” said Úrsula Oswald Spring, a physician and professor at the Universidad Nacional Autónoma de México (Unam), who took part in drafting the SREX and attended the São Paulo workshop. “Without building a common language, it is not possible to advance solutions to the problems posed by climate change.”

Despite the uncertainties about the extent and frequency of extreme climate phenomena in the future, their impact today is already palpable. Data presented by Úrsula Spring showed that women and children are the main victims of hurricanes, earthquakes, tsunamis, floods and other extreme events, climatic or not. They account for 68% to 89% of the deaths in such events worldwide. Women make up 72% of people living in extreme poverty, which makes them more vulnerable in disaster situations. “Women’s role is to care, so they save children, parents and animals and do not see the risk they themselves are running,” said Úrsula, who has been researching the subject for 10 years. The toll is also far greater in poor countries: 95% of deaths from natural disasters occur in developing countries. “For major disasters to occur, the population must be vulnerable and exposed,” said Sebastián Vicuña, a professor at the Universidad Católica de Chile.

Landslides

Climatologist Carlos Nobre, Secretary of Research and Development Policies and Programs at the Ministry of Science, Technology and Innovation (MCTI) and a member of the steering committee of the FAPESP Research Program on Global Climate Change (PFPMCG) and of the IPCC, listed studies published by researchers in the state of São Paulo on the risks posed by more frequent intense rainfall. One of them pointed to an increase in the number of areas in the state capital susceptible to flooding and at greater risk of landslides. Another study showed that, with urbanisation, areas of intense rainfall expand, raising the risk of leptospirosis infection, a disease transmitted mainly through rodent urine. Research carried out at the Department of Ecology of the Universidade Estadual Paulista (Unesp), Rio Claro campus, in partnership with Inpe, showed that Campinas and Ribeirão Preto are the two regions of São Paulo state most vulnerable to climate change. The population concentration in Campinas amplifies the consequences of a flood, while Ribeirão Preto is expected to record higher temperatures in the coming decades. “In some regions we can already discern the socioeconomic impacts caused by the acceleration of climate events, which are associated with the greater vulnerability of populations due to the world’s growing urbanisation, in particular in the cities of Latin America, where this process has occurred chaotically in recent decades,” Nobre told Agência FAPESP. In Brazil, funds for rebuilding regions devastated by disasters caused by extreme climate events have grown very rapidly over the past 10 years, exceeding R$ 1.6 billion in 2011, Nobre noted.

If there are uncertainties about the trend towards more frequent rainfall on a global scale, in the case of São Paulo there is no doubt that intense rainfall has increased sharply in the city over the past 50 to 70 years, Nobre observed. “Today we have three times as much intense rainfall as 70 years ago. And the evidence that this kind of event occurs more frequently in the state capital is very well documented,” he said.

The results of the SREX report will be used and updated in the next reports the IPCC will release in 2013. According to Marengo, there is still a shortage of studies on vulnerability to climate change in Brazilian regions. To produce the SREX, the unwritten rule that a good scientific study is only one published in English-language specialist journals was set aside. “We managed to reach a good level in some Brazilian publications, but more scientific literature published in the country is still lacking,” the researcher said. The researchers identified the need to increase funding for climate change studies, with support from governmental and non-governmental institutions. The groups also recommended strengthening local risk-management institutions. “There is no need to create new institutions, but to strengthen those that already exist,” said Marengo.