Tag archive: Risk

Panel Urges Research on Geoengineering as a Tool Against Climate Change (New York Times)

Piles at a CCI Energy Solutions coal handling plant in Shelbiana, Ky. Geoengineering proposals might counteract the effects of climate change caused by burning fossil fuels such as coal. Credit: Luke Sharrett/Getty Images

With the planet facing potentially severe impacts from global warming in coming decades, a government-sponsored scientific panel on Tuesday called for more research on geoengineering — technologies to deliberately intervene in nature to counter climate change.

The panel said the research could include small-scale outdoor experiments, which many scientists say are necessary to better understand whether and how geoengineering would work.

Some environmental groups and others say that such projects could have unintended damaging effects, and could set society on an unstoppable path to full-scale deployment of the technologies.

But the National Academy of Sciences panel said that with proper governance, which it said needed to be developed, and other safeguards, such experiments should pose no significant risk.

In two widely anticipated reports, the panel — which was supported by NASA and other federal agencies, including what the reports described as the “U.S. intelligence community” — noted that drastically reducing emissions of carbon dioxide and other greenhouse gases was by far the best way to mitigate the effects of a warming planet.

A device being developed by a company called Global Thermostat is made to capture carbon dioxide from the air. It may be one solution for counteracting climate change. Credit: Henry Fountain/The New York Times

But the panel, in making the case for more research into geoengineering, said, “It may be prudent to examine additional options for limiting the risks from climate change.”

“The committee felt that the need for information at this point outweighs the need for shoving this topic under the rug,” Marcia K. McNutt, chairwoman of the panel and the editor in chief of the journal Science, said at a news conference in Washington.

Geoengineering options generally fall into two categories: capturing and storing some of the carbon dioxide that has already been emitted so that the atmosphere traps less heat, or reflecting more sunlight away from the earth so there is less heat to start with. The panel issued separate reports on each.

The panel said that while the first option, called carbon dioxide removal, was relatively low risk, it was expensive, and that even if it was pursued on a planetwide scale, it would take many decades to have a significant impact on the climate. But the group said research was needed to develop efficient and effective methods to both remove the gas and store it so it remains out of the atmosphere indefinitely.

The second option, called solar radiation management, is far more controversial. Most discussions of the concept focus on the idea of dispersing sulfates or other chemicals high in the atmosphere, where they would reflect sunlight, in some ways mimicking the effect of a large volcanic eruption.

The process would be relatively inexpensive and should quickly lower temperatures, but it would have to be repeated indefinitely and would do nothing about another carbon dioxide-related problem: the acidification of oceans.

This approach might also have unintended effects on weather patterns around the world — bringing drought to once-fertile regions, for example. Or it might be used unilaterally as a weapon by governments or even extremely wealthy individuals.

Opponents of geoengineering have long argued that even conducting research on the subject presents a moral hazard that could distract society from the necessary task of reducing the emissions that are causing warming in the first place.

“A geoengineering ‘technofix’ would take us in the wrong direction,” Lisa Archer, food and technology program director of the environmental group Friends of the Earth, said in a statement. “Real climate justice requires dealing with root causes of climate change, not launching risky, unproven and unjust schemes.”

But the panel said that society had “reached a point where the severity of the potential risks from climate change appears to outweigh the potential risks from the moral hazard” of conducting research.

Ken Caldeira, a geoengineering researcher at the Carnegie Institution for Science and a member of the committee, said that while the panel felt that it was premature to deploy any sunlight-reflecting technologies today, “it’s worth knowing more about them,” including any problems that might make them unworkable.

“If there’s a real showstopper, we should know about it now,” Dr. Caldeira said, rather than discovering it later when society might be facing a climate emergency and desperate for a solution.

Dr. Caldeira is part of a small community of scientists who have researched solar radiation management concepts. Almost all of the research has been done on computers, simulating the effects of the technique on the climate. One attempt in Britain in 2011 to conduct an outdoor test of some of the engineering concepts provoked a public outcry. The experiment was eventually canceled.

David Keith, a researcher at Harvard University who reviewed the reports before they were released, said in an interview, “I think it’s terrific that they made a stronger call than I expected for research, including field research.” Along with other researchers, Dr. Keith has proposed a field experiment to test the effect of sulfate chemicals on atmospheric ozone.

Unlike some European countries, the United States has never had a separate geoengineering research program. Dr. Caldeira said establishing a separate program was unlikely, especially given the dysfunction in Congress. But he said that because many geoengineering research proposals might also help in general understanding of the climate, agencies that fund climate research might start to look favorably upon them.

Dr. Keith agreed, adding that he hoped the new reports would “break the logjam” and “give program managers the confidence they need to begin funding.”

At the news conference, Waleed Abdalati, a member of the panel and a professor at the University of Colorado, said that geoengineering research would have to be subject to governance that took into account not just the science, “but the human ramifications, as well.”

Dr. Abdalati said that, in general, the governance needed to precede the research. “A framework that addresses what kinds of activities would require governance is a necessary first step,” he said.

Raymond Pierrehumbert, a geophysicist at the University of Chicago and a member of the panel, said in an interview that while he thought that a research program that allowed outdoor experiments was potentially dangerous, “the report allows for enough flexibility in the process to follow that it could be decided that we shouldn’t have a program that goes beyond modeling.”

Above all, he said, “it’s really necessary to have some kind of discussion among broader stakeholders, including the public, to set guidelines for an allowable zone for experimentation.”

The Risks of Climate Engineering (New York Times)

Credit: Sarah Jacoby 

THE Republican Party has long resisted action on climate change, but now that much of the electorate wants something done, it needs to find a way out of the hole it has dug for itself. A committee appointed by the National Research Council may just have handed the party a ladder.

In a two-volume report, the council is recommending that the federal government fund a research program into geoengineering as a response to a warming globe. The study could be a watershed moment because reports from the council, an arm of the National Academies that provides advice on science and technology, are often an impetus for new scientific research programs.

Sometimes known as “Plan B,” geoengineering covers a variety of technologies aimed at deliberate, large-scale intervention in the climate system to counter global warming.

Despairing at global foot-dragging, some climate scientists now believe that a turn to Plan B is inevitable. They see it as inscribed in the logic of the situation. The council’s study begins with the assertion that the “likelihood of eventually considering last-ditch efforts” to address climate destabilization grows every year.

The report is balanced in its assessment of the science. Yet by bringing geoengineering from the fringes of the climate debate into the mainstream, it legitimizes a dangerous approach.

Beneath the identifiable risks is a gut reaction not only to the hubris of it all — the idea that humans could set out to regulate the Earth system, perhaps in perpetuity — but also to what it says about where we are today. As the committee’s chairwoman, Marcia McNutt, told The Associated Press: The public should read this report “and say, ‘This is downright scary.’ And they should say, ‘If this is our Hail Mary, what a scary, scary place we are in.’ ”

Even scarier is the fact that, while most geoengineering boosters see these technologies as a means of buying time for the world to get its act together, others promote them as a substitute for cutting emissions. In 2008, Newt Gingrich, the former House speaker, later Republican presidential candidate and an early backer of geoengineering, said: “Instead of penalizing ordinary Americans, we would have an option to address global warming by rewarding scientific invention,” adding: “Bring on the American ingenuity.”

The report, considerably more cautious, describes geoengineering as one element of a “portfolio of responses” to climate change and examines the prospects of two approaches — removing carbon dioxide from the atmosphere, and enveloping the planet in a layer of sulfate particles to reduce the amount of solar radiation reaching the Earth’s surface.

At the same time, the council makes clear that there is “no substitute for dramatic reductions in the emissions” of greenhouse gases to slow global warming and acidifying oceans.

The lowest-risk strategies for removing carbon dioxide are “currently limited by cost and at present cannot achieve the desired result of removing climatically important amounts,” the report said. On the second approach, the council said that at present it was “opposed to climate-altering deployment” of technologies to reflect radiation back into space.

Still, the council called for research programs to fill the gaps in our knowledge on both approaches, evoking a belief that we can understand enough about how the Earth system operates in order to take control of it.

Expressing interest in geoengineering has been taboo for politicians worried about climate change for fear they would be accused of shirking their responsibility to cut carbon emissions. Yet in some congressional offices, interest in geoengineering is strong. And Congress isn’t the only place where there is interest. Russia in 2013 unsuccessfully sought to insert a pro-geoengineering statement into the latest report of the Intergovernmental Panel on Climate Change.

Early work on geoengineering has given rise to one of the strangest paradoxes in American politics: enthusiasm for geoengineering from some who have attacked the idea of human-caused global warming. The Heartland Institute, infamous for its billboard comparing those who support climate science to the Unabomber, Theodore J. Kaczynski, featured an article in one of its newsletters from 2007 describing geoengineering as a “practical, cost-effective global warming strategy.”

Some scholars associated with conservative think tanks like the Hoover Institution and the Hudson Institute have written optimistically about geoengineering.

Oil companies, too, have dipped their toes into the geoengineering waters: Shell, for instance, has funded research into a scheme to put lime into seawater so that it absorbs more carbon dioxide.

With half of Republican voters favoring government action to tackle global warming, any Republican administration would be tempted by the technofix to beat all technofixes.

For some, instead of global warming’s being proof of human failure, engineering the climate would represent the triumph of human ingenuity. While climate change threatens to destabilize the system, geoengineering promises to protect it. If there is such a thing as a right-wing technology, geoengineering is it.

President Obama has been working assiduously to persuade the world that the United States is at last serious about Plan A — winding back its greenhouse gas emissions. The suspicions of much of the world would be reignited if the United States were the first major power to invest heavily in Plan B.

Is a climate disaster inevitable? (Book Forum)

From De Ethica, Michel Bourban (Lausanne): Climate Change, Human Rights and the Problem of Motivation; Robert Heeger (Utrecht): Climate Change and Responsibility to Future Generations: Reflections on the Normative Questions; Casey Rentmeester (Finlandia): Do No Harm: A Cross-Disciplinary, Cross-Cultural Climate Ethics; and Norbert Campagna (Luxembourg): Climate Migration and the State’s Duty to Protect. Harvard’s David Keith knows how to dial down the Earth’s thermostat — is it time to try? Renzo Taddei (UNIFESP): Alter Geoengineering. Tobias Boes and Kate Marshall on writing the Anthropocene. People don’t work as hard on hot days — or on a warming planet. James West on how 2014 was the year we finally started to do something about climate change. How much is climate change going to cost us? David Roberts investigates. Is a climate disaster inevitable? Adam Frank on what astrobiology can tell us about the fate of the planet. If we’re all headed for extinction anyway—AND WE ARE—won’t it be a lot more enjoyable to run out the clock with everyone looking a little more pleasant? Welcome to the latest exciting opportunity in the sights of investors: the collapse of planet Earth. You can download Tropic of Chaos: Climate Change and the New Geography of Violence by Christian Parenti (2011). You can download Minimal Ethics for the Anthropocene by Joanna Zylinska (2014).

[Emphasis added]

R.I.P. Ulrich Beck (PopAnth)

Sociology loses one of its most important voices

by John McCreery on January 16, 2015

Ulrich Beck. Photo by International Students’ Committee via Wikimedia Commons.

The death of Ulrich Beck on January 1, 2015 stilled one of sociology’s most important voices.

Beck has long been one of my favourite sociologists. That is because the world he describes in his book Risk Society reminds me very much of the world of Chinese popular religion that I studied in Taiwan.

There are two basic similarities. First, in the risk society as Beck describes it, public pomp and ceremony and ostentatious displays of wealth recede. Wealth is increasingly privatized, concealed in gated communities, its excesses hidden from public view. Second, social inequality not only increases but increasingly takes the form of differential exposure to many forms of invisible risks.

In the world that Beck describes, signs of wealth continue to exist. Coronations and royal births, celebrity weddings, CEO yachts, the massive homes of the rich and famous and their McMansion imitators are all visible evidence that wealth still counts.

But, says Beck, inequality’s deeper manifestations now lie in differences in institutions that shelter the rich and expose the poor to risks that include not only economic fluctuations but also extreme weather and climate change, chemical and biological pollution, and mutating and drug-resistant diseases. The hidden plots of terrorists and of those who combat them might also be added to this list.


When I visualize what Beck is talking about when he says that wealth is becoming invisible, I imagine an airport. In the main concourse there is little visible difference between those checking in at the First or Business Class counters and those checking in for the cattle car seats in Economy. All will pass the same array of Duty Free shops on their way to their planes.

But while the masses wait at the gates, the elite relax in comfortable, concealed spaces, plied with food, drink and WiFi, in lounges whose entrances are deliberately understated. This is not, however, the height of luxury.

Keiko Yamaki, a former airline stewardess turned applied anthropologist, observes in her study of airline service culture that the real elite, the super rich, no longer fly with commercial airlines. They prefer their private jets. Even those in First Class are more likely to be from the mere 1% than from the 0.01%, who are now never seen checking in or boarding with the rest of us.

What, then, of invisible risks? The transactions that dominate the global economy are rarely, if ever, to be seen, negotiated in private and executed via encrypted digital networks. Financial institutions and the 1% who own them are protected from economic risk. The 99%, and especially those who live in the world’s poorest nations and slums, are not.

The invisible threats of nuclear, chemical and biological waste are concentrated where the poor live. Drug-resistant diseases spread like wildfire through modern transportation systems, but the wealthy are protected by advanced technology and excellent health care. The poor are not.

At the end of the day, however, all must face misfortune and death, and here is where the similarity to Chinese popular religion comes in.

My business is failing. My daughter is acting crazy. My son was nearly killed in a motorcycle accident. He’s been married for three years and his wife still hasn’t had a baby. I feel sick all the time. I sometimes feel faint or pass out.

Why? The world of Chinese popular religion has answers. Impersonal factors, the alignment of your birth date with the current configuration of the stars, Yin and Yang and the Five Elements, may mean that this is a bad time for you.

Worse still, you may have offended one of the gods, ghosts or ancestors who inhabit the invisible Yin world that exists alongside the Yang world in which we live. The possibilities are endless. You need to find experts, mediums, magicians or priests, who can identify the source of your problem and prescribe remedies for it. You know that most who claim to be experts are charlatans but hope nonetheless to find the real thing.

Note how similar this is to the world that Beck describes, where the things that we fear most are said to be caused by invisible powers, the market, the virus, pollution or climate change, for example. Most of us don’t understand these things. We turn to experts for advice; but so many claim to be experts and say so many different things.

How do we find those who “really know”? The rich may have access to experts with bigger reputations in finance, law, medicine, science or personal protection. But what does this really mean?

As I see it, all forms of consulting are magic. People with problems attribute them to invisible causes. They turn for help to those who claim special powers to diagnose and prescribe, and random chance alone will lead to identification of some who claim such powers as having “It,” that special something that produces desired results. Negative evidence will disappear in a context where most who claim special powers are known to be frauds.

The primary question for those looking for “It” is how to find the golden needle in a huge and constantly growing haystack. People turn to their social networks for recommendations by trusted others, whose trust may, however, be grounded in nothing more than having found someone whose recommendations are, by sheer random chance, located in the tail of the normal curve where “success” is concentrated.
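The role of chance here can be made concrete with a toy simulation (purely illustrative; the advisor counts and round numbers are invented for the sketch): give thousands of advisors nothing but coin-flip predictions and see how many nonetheless compile a “perfect” track record.

```python
import random

random.seed(42)

N_ADVISORS = 10_000  # people claiming special predictive powers
N_ROUNDS = 10        # consecutive yes/no calls a client observes

# Every advisor guesses at random: by construction, no one has real skill.
perfect = 0
for _ in range(N_ADVISORS):
    calls = [random.random() < 0.5 for _ in range(N_ROUNDS)]
    if all(calls):  # advisor got every call right
        perfect += 1

# Expected count by chance alone: N_ADVISORS * 0.5**N_ROUNDS, i.e. about 10.
print(f"{perfect} of {N_ADVISORS} advisors look infallible by luck alone")
```

A handful of random guessers will always emerge looking like they have “It,” and the thousands of failed guessers quietly disappear from view, which is exactly the survivorship effect described above.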

I read Beck’s Risk Society long before I read Nassim Taleb’s Fooled by Randomness and The Black Swan. Taleb’s accounts of how traders who place lucky bets in the bond market are seen as geniuses with mystical insights into market mechanisms — at least until their funds collapse — seem to me to strongly support my theory of how all consulting works.

I read the words of “experts” who clamour for my attention and think of Taleb’s parable, the one in which a turkey has a perfectly consistent set of longitudinal data stretching over nearly a year, demonstrating the existence of a perfectly predictable world in which the sun will rise every morning and the farmer will feed the turkey. Then comes the day before Thanksgiving, and the farmer turns up with an axe.

Be warned: reading books like those by Beck and Taleb may reinforce skepticism of claims to scientific and other expertise. But think about it. Which world would you rather live in: One where careful scientists slowly develop hypotheses and look systematically for evidence to test them? Or a world in which our natural human tendency to magical thinking has no brake at all?

For his leading me to these thoughts, I do, indeed, mourn the death of Ulrich Beck.

Ulrich Beck obituaries by Lash and Latour (Artforum)

Ulrich Beck. Photo: Augsburger Allgemeine.

I FIRST ENCOUNTERED Ulrich Beck as a (superannuated) postdoc. I was a Humboldt Stipendiat in Berlin, where in 1987, I heard the sociologist Helmuth Berking give a paper on Beck’s “Reflexive Modernisierung” (Reflexive Modernization) at a Freie Universität colloquium. I had already published a paper called “Postmodernity and Desire” in the journal Theory and Society, and Beck’s notion of reflexive modernization seemed to point to an opening beyond the modern/postmodern impasse. Today, Foucault, Deleuze, and even Lebenssoziologie (Life sociology) are all present in German intellectual life. But in 1987, this kind of stuff was beyond the pale. Habermas and Enlightenment modernism ruled. And rightly so: It is largely thanks to Habermas that Germany now is a land rooted less in fiercely nationalistic Blut und Boden (Blood-and-Soil) than in a more pluralistic Verfassungspatriotismus (Constitutional Patriotism).

Beck’s foundational Risikogesellschaft (Risk Society), however, abandoned the order of Habermas’s “ideal speech situation” for contingency and unintended consequences. This was hardly a celebration of contingency; Beckian contingency was rooted in the Chernobyl disaster; it was literally a poison, or in German a Gift. Hence Beck’s subsequent book was entitled Gegengift, or “Counter-poison.” It was subtitled Die organisierte Unverantwortlichkeit (The Organized Irresponsibility). Beck’s point was that institutions needed to be responsible for a politics of antidote that would address the unintentional generation of environmental crises. This was a critique of systematic institutional irresponsibility—or more literally “un-responsibility”—for ecological disaster. Beck’s thinking became more broadly accepted in Germany over the years. Yet the radically original themes of contingency and unintended consequences remained central to Beck’s own vision of modernity and inspired a generation of scholars.

Beck’s influence has been compared by Joan Subirats, writing in El País, to that of Zygmunt Bauman and Richard Sennett. Yet there is little in Bauman’s idea of liquidity to match the power of Beck’s understanding of reflexivity. It was based in a sociology of knowledge in which the universal of the concept could never subsume the particular of the empirical. At the same time, Beck’s subject was still knowledge, not the impossibility of knowledge and inevitability of the irrational (not, in other words, the “known unknowns” and the “unknown unknowns” that have proved so damaging to contemporary political thought). Beck’s reflexivity, then, was not just about Kant’s What can I know? — it was just as much a question of the Kantian What should I do? and especially What can I hope?

For Beck, “un-responsible” institutions were still situated in what he referred to as “simple modernity.” They would need to deal with modernity’s ecological contingency in order to be reflexive. They would need to be aware of unintended consequences, of what environmental economists (and later the theory of cognitive capitalism) would understand as “externalities.” Beck’s reflexivity extended to his later work on cosmopolitanism and Europe. For him, Europe is not an ordering of states as atoms, in which one is very much like the other. It is instead a collection of singularities. Hence his criticism of German Europe’s “Merkiavelli”-ism in treating Greece and the European South as if all were uniform Teutonic entities to be subject to the principle of austerity.

Though Beck has remained highly influential, Bruno Latour’s “actor-network” theory has outstripped his ideas in terms of popularity, establishing a dominant paradigm among sociologists. Yet the instrumentalist assumptions of actor-network theory do not open up the ethical or hopeful dimension of Beck’s work. The latter has been a counter-poison, an antidote to the instrumentalism at the heart of today’s neoliberal politics, in which our singularity has been eroded under the banner of a uniform and possessive individualism. Because of the contingency at its heart, Beck’s work could never become a dominant paradigm.

Beck’s ideas clearly drove the volume Reflexive Modernization, which he, Anthony Giddens, and I published in 1994. There, I developed a notion of “aesthetic reflexivity,” and although in some ways I am more of a Foucault, Deleuze, and perhaps Walter Benjamin guy, Beck’s ideas still drive my own work today. Thus we should extend Beckian reflexivity to speak of a reflexive community, and of a necessary risk-sharing that must be at the heart of any contemporary politics of the commons.

I was offered the post of Ulrich’s Nachfolger (successor) at the University of Bamberg when he moved to Munich in 1992. In the end, I decided to stay in the UK, but we kept in touch. Although to a certain extent I’ve become a cultural theorist, Ulrich always treated me as a sociologist, and he was right: When I attended his seventieth birthday party in April 2014, all of cultural Munich was there, from newspaper editors to museum directors. Every February, when he was based at the London School of Economics, Ulrich and his wife Elisabeth would spend a Sunday afternoon with Celia Lury and me at our house in Finsbury Park/Highbury, enjoying a lunch of Kaffee und Kuchen (coffee and cake) and deli cheeses and hams. No more than a fortnight before his death Ulrich emailed me about February 2015. I replied sadly that I would be in Asia and for the first time would miss this annual Sunday gathering. At his seventieth birthday Ulrich was in rude health. I was honestly looking forward to his eightieth. Now neither the Islington Sundays nor the eightieth birthday will happen. It is sad.

Scott Lash is the Research Director at the Center for Cultural Studies at Goldsmiths, University of London.

*  *  *

Ulrich Beck, 2007.

THE DEATH OF ULRICH BECK is terrible news. It is a tragedy for his family, for his research team, and for his many colleagues and friends, but it is also a tragedy for European thought.

Ulrich was a public intellectual of the infinitely rare kind in Germany, one that was thought only to exist in France. But he had a very individual way—and not at all French—of exercising this authority of thought: There was nothing of the intellectual critic in him. All his energy, his generosity, his infinite kindness, were put in the service of discovering what actors were in the midst of changing about their way of producing the social world. So for him, it was not about discovering the existing laws of such a world or about verifying, under new circumstances, the stability of old conceptions of sociology. No: It was the innovations in ways of being in the world that interested him above all. What’s more, he didn’t burden himself with a unified, seemingly scientific apparatus in order to locate those innovations. Objectivity, in his eyes, was going to come from his ability to modify the explanatory framework of sociology at the same time as actors modified their way of connecting to one another. His engagement consisted of simply prolonging the innovations he observed in them, innovations whose power he was able to draw out.

This ability to modify the explanatory framework was something that Ulrich would first manifest in his invention of the concept of Risikogesellschaft (risk society), which was initially so difficult to comprehend. By the term risk, he didn’t mean that life was more dangerous than before, but that the production of risks was henceforth a constituent part of modern life and that it was foolhardy to pretend that we were going to take control of them. To the contrary, it was necessary to replace the question of the mode of production and of the unequal distribution of wealth with the symmetrical question of the mode of production and the unequal distribution of ills. Coincidentally, the same year that he proposed the term Risikogesellschaft, the catastrophe of Chernobyl lent his diagnostic an indisputable significance—a diagnostic that current ecological transformations have only reinforced.

In turning the uneven division of ills into the common thread of his inquiries, Ulrich would gradually change the vocabulary of the social sciences. And, first and foremost, he changed the understanding of the relationship between societies and their environment. Everything that had seemed to be outside of culture—and outside of sociology—he would gradually reintegrate, because the consequences of industrial, scientific, and military actions were henceforth part of the very definition of communal life. Everything that modernity had decided to put off until later, or simply to deny, needed to become the very content of collective existence. Hence the delicate and intensely discussed expression “reflexive modernity” or “second modernity.”

This attention to risk would, in turn, modify all the usual ingredients of the social sciences: First, politics—its conventional definition gradually being emptied of its content while Ulrich’s notion of “subpolitics” spread everywhere—but also psychology, the elements of which never ceased to change, along with the limits of collectives. Even love, to which he devoted two books with his wife Elisabeth Beck-Gernsheim, who is so grief-stricken today. Yes, Ulrich Beck went big. Perhaps this is why, on a visit to Munich, he was keen to take me on a pilgrimage to Max Weber’s house. The magnitude of Beck’s conceptions, the audacity of trying to rethink—with perfect modesty and without any pretension of style, without considering himself to be the great innovator that he was—truly made him a descendant of Weber. Like him, Beck wanted sociology to encompass everything.

What makes Beck’s death all the harder to accept, for everyone following his work, is that for many years he was making the social sciences undergo a kind of de-nationalization of its methods and theoretical frameworks. Like the question of risk, the question of cosmopolitism (or better, of cosmopolitanism) was one of his great concerns. By this venerable term, he was not designating some call for the universal human, but the redefinition of humans belonging to something other than nation-states. Because his investigations constantly butted against the obstacle of collected facts managed, conceived of, and diffused by and for states—which clearly made impossible any objective approach to the new kinds of associations that the empty term globalization fails to capture—the methods of examination themselves had to be radically modified. In this, he was succeeding, as can be seen in the impressive expansion of his now leaderless research group.

Beck manifested this mistrust of the nation-state framework in a series of books, articles, and even pamphlets on the incredible experience of the construction of Europe, a phenomenon so admirable and yet so constantly disdained. He imagined a Europe of new affiliations, as opposed to a Europe of nation-states (and, in particular, in contrast to a uniquely Germanic or French conception of the state). How sad it is to think that such an essential question, yet one that is of interest to so few thinkers, can no longer be discussed with him.

I cannot imagine a sadder way to greet the new year, especially considering that Beck’s many research projects (we were just talking about them again in Paris a few weeks ago) addressed the most urgent questions of 2015: How to react to the world’s impotence on the question of climate change? How to find an adequate response to the resurgences of nationalisms? How to reconsider Europe through conceptions of territory and identity that are not a crude and completely obsolete reprise of sovereignty? That European thought has lost at this precise moment such a source of intelligence, innovation, and method is a true tragedy. When Beck asked, in a recent interview, “How does the transformative power of global risk (Weltrisikogesellschaft) transform politics?” no one could have suspected that he was going to leave us with the anxiety of finding the answer alone.

Bruno Latour is professor at Sciences Po Paris and Centennial Professor at the London School of Economics.

Translated from French by Molly Stevens.

A version of this text was published in German on January 5 in the Frankfurter Allgemeine Zeitung.

Pope Francis Says No to Fracking (Eco Watch)

January 12, 2015 9:07 am

We’ve been busy lately providing news on all the great ways Pope Francis is working to create a healthy, sustainable planet. In July 2014, Pope Francis called destruction of nature a modern sin. In November 2014, Pope Francis said “unbridled consumerism” is destroying our planet and we are “stewards, not masters” of the Earth. In December 2014, he said he will increase his call this year to address climate change. And, last week we announced that Pope Francis is opening his Vatican farm to the public.

Now, we learn from Nicolás Fedor Sulcic that Pope Francis is supportive of the anti-fracking movement. Watch this interview by Fernando Solanas, who met with Pope Francis soon after finishing a film about fracking in Argentina.

The movie, La Guerra del Fracking or The Fracking War, was banned in cinemas by the Argentinian government, so the filmmakers decided to post it on YouTube. We are awaiting translation of the film and then we’ll feature it on EcoWatch.

“When I was doing research for the film, every time I’d ask someone if they knew what fracking was they had no idea,” said Sulcic. The problem was that “the government didn’t call it fracking, they called it ‘non-conventional gas,’ so no one was making the link between what was happening in Argentina and what was happening in America. I got really mad and knew something had to be done to make people aware of what was going on. I saw the website Artists Against Fracking and felt that was a very good example of what needed to be done here to take the cause to more people rather than just environmental activists.”

With support from Nobel Peace Prize laureate Adolfo Perez Esquivel, Oscar-winning director Juan Jose Campanella and other well-known Argentinian intellectuals and social leaders, a website was launched to help raise awareness about the dangers of fracking in Argentina.

Risk analysis for a complex world (Science Daily)

Date: November 18, 2014

Source: International Institute for Applied Systems Analysis

Summary: Developing adaptable systems for finance and international relations could help reduce the risk of major systemic collapses such as the 2008 financial crisis, according to a new analysis.

Developing adaptable systems for finance and international relations could help reduce the risk of major systemic collapses such as the 2008 financial crisis, according to a new analysis.

The increasing complexity and interconnection of socioeconomic and environmental systems leaves them more vulnerable to seemingly small risks that can spiral out of control, according to the new study, published in the journal Proceedings of the National Academy of Sciences.

The study examines risks that are perceived as extremely unlikely or small but that, because of interconnections or changes in systems, can lead to major collapses or crises. These risks, which the researchers term “femtorisks,” can include individuals such as terrorists, dissidents, or rogue traders, or factors like climate change, technologies, or globalization.

“A femtorisk is a seemingly small-scale event that can trigger, often through complex chains of events, consequences at much higher levels of organization,” says Princeton University professor and IIASA Distinguished Visiting Fellow Simon Levin, who adopted the term (originally suggested by co-organizer Joshua Ramo) together with an international group of experts during a 2011 IIASA conference on risk modeling in complex adaptive systems.

Levin explains, “A complex adaptive system is a system made up of individual agents that interact locally, with consequences at much higher levels of organization, which feed back in turn to affect individual behaviors. The individual agents can be anything from cells and molecules, to birds in a flock, to traders in a market, to each and every one of us in the global environment.”
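Levin’s definition can be made concrete with a toy model. The sketch below is my own illustration, not a model from the PNAS study: agents sit on a ring, each watching its four nearest neighbours and failing once enough of them have failed, with a single initial failure playing the role of the femtorisk.

```python
def cascade_size(n=1000, threshold=1):
    """Agents sit on a ring, each watching its four nearest neighbours.
    An agent fails once at least `threshold` of its neighbours have
    failed. Agent 0's failure is the 'femtorisk': one small local event."""
    neighbors = [[(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n]
                 for i in range(n)]
    failed = [False] * n
    failed[0] = True
    changed = True
    while changed:  # keep sweeping until no new failures occur
        changed = False
        for i in range(n):
            if not failed[i] and sum(failed[j] for j in neighbors[i]) >= threshold:
                failed[i] = True
                changed = True
    return sum(failed)

# A fragile system lets the single failure propagate everywhere,
# while a slightly more robust one contains it completely.
print(cascade_size(threshold=1))  # 1000: system-wide collapse
print(cascade_size(threshold=2))  # 1: the event stays local
```

With a failure threshold of one neighbour the lone event sweeps the entire system; raising it to two contains the event completely, the kind of built-in damping of contagious spread the researchers argue systems should be designed for.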

The complexity of such systems makes it difficult or even impossible to model the outcomes of specific changes or risks, particularly very small or seemingly insignificant ones. The study examines several examples of such femtorisks that set off major crises, including the credit default swaps that led to the 2008 financial crisis, the recent protests in the Middle East and Ukraine that led to the broad upheavals in both regions’ political systems, and the warming temperatures in the Arctic that have led to massive international interest in the region for mining and economic development.

Risk management for an unpredictable world 

In light of such unpredictable risks, the researchers say, the most resilient management systems are those that can adapt to sudden threats that have not been explicitly foreseen. In particular, the researchers suggest a model drawing on biological systems such as the vertebrate immune system, which have evolved to respond to unpredictable threats and adapt to new situations.

“In practice it is generally impossible to identify which of these risks will end up being the important ones,” says Levin. “That is why flexible and adaptive governance is essential.”

The general principles of such management include: effective surveillance, generalized and immediate initial responses, learning and adaptive responses, and memory, say the researchers. Levin says, “We need to design systems to automatically limit the potential for catastrophic contagious spread of damage, and to complement that with effective and flexible adaptive responses.”

Journal Reference:

  1. Aaron Benjamin Frank, Margaret Goud Collins, Simon A. Levin, Andrew W. Lo, Joshua Ramo, Ulf Dieckmann, Victor Kremenyuk, Arkady Kryazhimskiy, JoAnne Linnerooth-Bayer, Ben Ramalingam, J. Stapleton Roy, Donald G. Saari, Stefan Thurner, Detlof von Winterfeldt. Dealing with femtorisks in international relations. Proceedings of the National Academy of Sciences, 2014; 201400229. DOI: 10.1073/pnas.1400229111

Lack of rain reinforces the need for nuclear plants, experts say (Agência Brasil)

Experts took part in the 3rd Seminar on Nuclear Energy at the Rio de Janeiro State University (UERJ)

The lack of rain in several regions of the country, especially the Southeast, points to the need to keep investing in nuclear plants. Besides affecting the population’s water supply, the drought also compromises power generation at hydroelectric plants, which increases the importance of nuclear ones. That is the assessment of experts taking part in the 3rd Seminar on Nuclear Energy at the Rio de Janeiro State University (UERJ), which began yesterday, the 7th, and closes this Wednesday, the 8th.

The president of Indústrias Nucleares do Brasil (INB), Aquilino Senra, stressed that Brazil’s energy mix relies heavily on hydroelectricity, which has been hit by repeated and prolonged droughts in recent years.

“In Brazil, hydro generation contributes 92 percent of all the energy produced. The remaining 8 percent comes from thermal backup, in which nuclear plays a 4 percent role. This situation of low reservoirs will force a faster decision on expanding nuclear power generation. Growth in nuclear capacity over the coming decades is inevitable,” Senra said.

The supervisor of Eletronuclear’s Nuclear Safety Analysis Department, Edson Kuramoto, said that the lower rainfall of recent years forced the government to run the thermal plants, including the nuclear ones, at full capacity to guarantee supply. “Today it is clear that Brazil’s energy mix is hydrothermal. Since 2012, with the drop in rainfall, the reservoirs have been low and the thermal plants have been dispatched precisely to make up for the shortfall in hydro generation. Nuclear energy has to be kept in mind, because Brazil masters the fuel cycle and we have large reserves of the fuel,” said Kuramoto.

According to Kuramoto, besides the Angra 1 and 2 plants, already in operation, and Angra 3, under construction, the country will need at least four more nuclear plants, two in the Northeast and two in the Southeast. “The hydroelectric potential we still have is in the North of the country, but licensing new plants with reservoirs has become difficult. In the past our hydroelectric plants could ride out a rain shortfall of six or seven months; today it is three months. So the country will have to invest in thermal plants. By 2030 our hydro potential will be exhausted. From then on, Brazil will have to build new thermal plants, whether nuclear, gas, fuel oil or coal.”

According to the president of INB, Brazil has guaranteed uranium reserves for at least the next 120 years. This ensures a low fuel cost, with the added advantage of emitting no greenhouse gases. For Senra, the safety question, much debated because of the accident at the Fukushima plant in Japan, has already been solved by the new generations of plants.

“The Fukushima reactors are second generation. Those beginning to be installed now are third generation, and in them accidents like those that have already occurred, whether in 1979 in the United States [at Three Mile Island, Pennsylvania], in 1986 at Chernobyl [Ukraine], or in 2011 at Fukushima, would not happen,” Senra explained.

(Vladimir Platonow/Agência Brasil)

Inside the teenage brain: New studies explain risky behavior (Science Daily)

Date: August 27, 2014

Source: Florida State University

Summary: It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.

Young man (stock image). Credit: © iko / Fotolia

It’s common knowledge that teenage boys seem predisposed to risky behaviors. Now, a series of new studies is shedding light on specific brain mechanisms that help to explain what might be going on inside juvenile male brains.

Florida State University College of Medicine Neuroscientist Pradeep Bhide brought together some of the world’s foremost researchers in a quest to explain why teenagers — boys, in particular — often behave erratically.

The result is a series of 19 studies that approached the question from multiple scientific domains, including psychology, neurochemistry, brain imaging, clinical neuroscience and neurobiology. The studies are published in a special volume of Developmental Neuroscience, “Teenage Brains: Think Different?”

“Psychologists, psychiatrists, educators, neuroscientists, criminal justice professionals and parents are engaged in a daily struggle to understand and solve the enigma of teenage risky behaviors,” Bhide said. “Such behaviors impact not only the teenagers who obviously put themselves at serious and lasting risk but also families and societies in general.

“The emotional and economic burdens of such behaviors are quite huge. The research described in this book offers clues to what may cause such maladaptive behaviors and how one may be able to devise methods of countering, avoiding or modifying these behaviors.”

Examples of findings published in the book that provide new insights into the inner workings of a teenage boy’s brain:

• Unlike children or adults, teenage boys show enhanced activity in the part of the brain that controls emotions when confronted with a threat. Magnetic resonance scanner readings in one study revealed that the level of activity in the limbic brain of adolescent males reacting to threat, even when they’ve been told not to respond to it, was strikingly different from that in adult men.

• Using brain activity measurements, another team of researchers found that teenage boys were mostly immune to the threat of punishment but hypersensitive to the possibility of large gains from gambling. The results question the effectiveness of punishment as a deterrent for risky or deviant behavior in adolescent boys.

• Another study demonstrated that a molecule known to be vital in developing fear of dangerous situations is less active in adolescent male brains. These findings point towards neurochemical differences between teenage and adult brains, which may underlie the complex behaviors exhibited by teenagers.

“The new studies illustrate the neurobiological basis of some of the more unusual but well-known behaviors exhibited by our teenagers,” Bhide said. “Stress, hormonal changes, complexities of psycho-social environment and peer-pressure all contribute to the challenges of assimilation faced by teenagers.

“These studies attempt to isolate, examine and understand some of these potential causes of a teenager’s complex conundrum. The research sheds light on how we may be able to better interact with teenagers at home or outside the home, how to design educational strategies and how best to treat or modify a teenager’s maladaptive behavior.”

Bhide conceived and edited “Teenage Brains: Think Different?” His co-editors were Barry Kasofsky and B.J. Casey, both of Weill Medical College at Cornell University. The book was published by Karger Medical and Scientific Publisher of Basel, Switzerland. More information on the book can be found at:

The table of contents to the special journal volume can be found at:

Scientists call for limits on the creation of deadly viruses in the laboratory (O Globo)

JC e-mail 4991, July 17, 2014

Failures at American facilities raise the risk of outbreaks

A multidisciplinary group of scientists from leading universities in several countries published a warning yesterday about the handling, in US laboratories, of viruses that can spread and infect humans and other mammals. The concern follows a string of news reports about failures involving potentially dangerous microorganisms.

“Recent incidents with smallpox, anthrax and avian flu at some of the most important laboratories in the US remind us of the fallibility of even the most secure facilities, reinforcing the urgent need for a thorough reassessment of biosafety,” wrote the self-styled “Cambridge Working Group,” made up of researchers from the universities of Harvard, Yale and Ottawa, among others.

In the warning, they report that incidents with pathogens have been increasing, occurring on average twice a week in the country’s private and public laboratories. The information comes from a 2012 study in the journal Applied Biosafety.

“When a case appears in the press, it gives the impression that these are rare episodes, but they are not,” commented Amir Attaran, of the University of Ottawa, one of the scientists who signed the document. “We are worried about the dangerous experiments being carried out to engineer the most infectious and deadly viruses of influenza and of severe acute respiratory syndrome (SARS). We think this reckless and senseless science can hurt or kill a large number of people.” The Centers for Disease Control and Prevention admitted last week that high-security laboratories had lost track of some samples.

In the latest case, vials of smallpox were found by chance in a disused storage room at a federal laboratory in Washington. They are estimated to have been there for more than 50 years.

Attaran compared the group’s statement to the one scientists made in 1943, before the bombing of Hiroshima in the Second World War. And he said the risks of these experiments outweigh the possible benefits of the research, citing the in vitro recreation of the 1918 Spanish flu virus, which killed 40 million people. Scientists performed the experiment in an American laboratory in 2006.

“This is no cause for alarm here,” assures Volnei Garrafa, coordinator of the Graduate Program in Bioethics at UnB and a member of UNESCO’s International Bioethics Committee. “But their concern is valid, there are always risks, and the government would need to take a position.”

Recent incidents involving smallpox, anthrax and avian flu at some of the most important laboratories in the United States remind us of the fallibility of even the most secure facilities, reinforcing the urgent need for a thorough reassessment of biosafety. Such incidents have been increasing, occurring on average twice a week with regulated pathogens in the country’s private and public laboratories. An accidental infection with any pathogen is concerning. But the risk of an accident with the newly created “potential pandemic pathogens” raises grave new concerns.

The laboratory creation of new strains of dangerous, highly transmissible viruses, especially but not only of influenza, poses substantially greater risks. An accidental infection in such a setting could trigger outbreaks that would be difficult or impossible to control. Historically, new strains of influenza, once they begin to spread in the human population, have infected a quarter or more of the world’s population within two years.

For any experiment, the expected benefits should outweigh the risks. Experiments involving the creation of potential pandemic pathogens should be curtailed until there is a quantitative, objective and credible assessment of the potential benefits and of the opportunities for risk mitigation, as well as a comparison against safer experimental approaches.

A modern version of the Asilomar process, which set rules for research with recombinant DNA, could be a starting point for identifying the best measures for achieving the global public health goals of fighting pandemic disease and for ensuring the highest levels of safety. Whenever possible, safety should take priority over actions that carry a risk of accidental pandemic.

(Flávia Milhorance / O Globo)

Tourism, Construction and an Ongoing Nuclear Crisis at Chernobyl (Newsweek)

April 17, 2014 12:11 PM EDT


From high-end tourism to one of the world’s most ambitious engineering projects, strange things are happening at the site of the worst nuclear disaster in history, which could still kill plenty of people. Stephan Vanfleteren/Panos

We climb eight flights of stairs. Eight more remain. This is sturdy Soviet concrete, dusty as death, but solid. So I hope, anyway. My guide, Katya, who is in her early 20s, has informed me that the administrators of the Exclusion Zone that encompasses Chernobyl do not want tourists entering the buildings of Pripyat for what appears to be an unimpeachable reason: Some of them could collapse.

But the roof of this apartment building on the edge of Pripyat, the city where Chernobyl’s employees lived until the spring of 1986, will provide what Katya says is the best panorama of this Ukrainian Pompeii and the infamous nuclear power plant, 1.9 miles away, that 28 years ago this week rendered the surrounding landscape uninhabitable for at least the next 20,000 years. So we climb on, higher into the honey-colored vernal light, even as it occurs to me that Katya is not a structural engineer. And that the adjective Soviet is essentially synonymous with collapse.

And what do I know? Nothing. I am just a curious ethnic hyphenate, Russian-born and largely American-raised. In 1986 we lived in Leningrad, about 700 miles north of the radioactive sore that burst on what should have been an ordinary spring night less than a week before the annual May Day celebration. Considering that Communist Party General Secretary Mikhail Gorbachev wasn’t told for many hours what, exactly, had transpired at Chernobyl (“Not a word about an explosion,” he said later), you can safely extrapolate to what the Soviet populace learned on April 26: absolutely nothing. But a couple of days after the disaster, a family friend from Kiev called and said we had better cancel our planned vacation in the Ukrainian countryside.

Then details started falling into place, as workers at a Swedish nuclear power plant detected radiation, eventually determining that it came from the Soviet Union. That forced the ever-defensive Kremlin’s hand, which admitted on April 28 that an accident had happened at Chernobyl. “A government commission has been set up,” a statement from Moscow assured. My father, a nervous physicist himself, was not mollified. I remember, as clearly as I remember anything of my Soviet youth, his telling me to stay out of the rain.

The narrative of Chernobyl has been told so many times, there is no point in regurgitating all of it here. Very briefly: a shoddy Soviet reactor, moderated by graphite instead of water; a turbine generator coastdown test that senselessly called for the disabling of all emergency systems; the reactor’s fall into an “iodine valley” and the consequent poisoning of the reactor by xenon-135; the incompetence and impatience of the plant’s managers, especially of Anatoly Dyatlov, a supervising engineer who stubbornly drove the test forward and would later serve prison time for his role in the night’s events; the indefensible lifting of all but six of the 211 control rods; the reactor going prompt supercritical; the inability to fully reinsert the control rods, leading to steam explosions and graphite fires; a biblical pillar of radioactive flame surging into the sky.

A cross with a crucifix is seen in the deserted Ukrainian town of Pripyat November 27, 2012. The town’s population was evacuated following the disaster at the nearby Chernobyl nuclear reactor in 1986.

Through it all, two off-the-clock workers fished in a nearby coolant pond. They continued to fish until the morning, receiving enormous doses of radiation yet somehow surviving. Theirs may be the only feel-good story of the night.

The toxic cloud that enveloped much of Europe that spring has intrigued me ever since. I can name all of the radionuclides it contained: cesium-137, iodine-131, zirconium-95, strontium-90, ruthenium-103…. But I longed to know its origins, the way a naturalist might yearn to see the source of a river somewhere high in the mountains, simply to fulfill the human need to discover beginnings and pay homage to them.

I also happen to be a journalist and now find myself in Ukraine when it is at the center of world events, as opposed to the periphery where most former Soviet states languish (when was the last time CNN did a gripping live remote from Uzbekistan?). Except I am about 90 miles north of Kiev, the site of the Maidan uprising, the epicenter of a conflict that has Russian President Vladimir Putin sharpening his swords again. Everyone else is reporting on Crimea, possible NATO retributions, a new Cold War…and here I am, in the midst of this “weirdish wild space” (h/t Dr. Seuss).

Katya is right. Not only do the stairs hold, but the view from the roof, 16 floors above Pripyat, is spectacular. Winter singes the air; nothing yet blooms. There is a severe beauty that is particularly Slavic, the earth at once fecund and stark. The white quadrangles of Pripyat seem to have risen up between the trees that grow thickly right up into Belarus, encompassing a forbidden zone of a thousand square miles. The V.I. Lenin Chernobyl Atomic Energy Station (the official name of what the world knows as Chernobyl) is visible in the distance as a squat collection of shapes, emitting equal parts radioactivity and mystery.

That apartment building was part of my two-day excursion into Chernobyl, one that quickly dispelled any notions that this swath of Eastern Europe is a radioactive wasteland. Or, rather, only a radioactive wasteland. I can’t quite believe that I am saying this, but tourism to Chernobyl is booming. There were 870 visitors in 2004, two years after the Ukrainian government allowed (some) access to the Exclusion Zone. Today, the Kiev-based tour company SoloEast says it takes 12,000 tourists to Chernobyl a year, which accounts for 70 percent of the pleasure-visitors heading there (including myself). I even stayed at a luxury hotel of sorts, a neo-rustic cottage that featured towel warmers and a sign that said, “Please keep your radioactive shoes outside.”

For the most part, the defunct station of reactors (the first went live in 1977; the last, the one that blew, in 1983) looks like a tidy industrial park in central Ohio: shorn green lawns, a smattering of abstract art, half-empty parking lots, a canal rife with fish. Nothing indicates that this is the site of the worst nuclear disaster in human history.

Yet as tourists Instagram away at Pripyat’s ruins, Chernobyl is undergoing one of the most challenging engineering feats in the world, as a French consortium called Novarka tries to replace the aging sarcophagus that contains the reactor, a concrete shell hastily and heroically built in the direct aftermath of the meltdown. The place remains a half-opened tinderbox of potential nuclear horrors, and just because much of the world has forgotten about Chernobyl doesn’t mean catastrophe won’t visit here again.

But don’t let that detract from your sightseeing.

Pictures of Soviet era politicians in an abandoned building in Pripyat, the abandoned town which was built to house workers at the Chernobyl nuclear power plant. Pripyat, Ukraine 2006. Stephan Vanfleteren/Panos


Of the many atrocities committed against this swath of north-central Ukrainian soil, the most recent may be the American horror movie Chernobyl Diaries (2012), which meticulously sticks to every outworn convention of the horror genre, as if deviating from such would be a terror of its own. The poor viewer is presented with a group of happy-go-lucky young travelers, mostly American, respectively buxom and bro-ish; a goonish Ukrainian tour guide with the locution of a Neanderthal; and a Pripyat rendered in such an unrelentingly grim color palette that I thought the director (one Bradley Parker) may have smeared dirt and moss over his camera lenses.

The characters, wishing to “see some cool s**t,” embark on a tour of Pripyat. All fine so far, just a little atmospheric unease. As night falls and the familiar, beery comforts of Kiev beckon, their van (surprise!) refuses to start. There follow many expressions of misplaced machismo, terror/wonder and good old animal fear, expressed in the purest clichés imaginable:

“We paid for this tour, bro.”

“This looks pretty f**king sketchy.”

And, inevitably, “Oh, s**t.”

At one point, a character asks the question that is central to all hackneyed horror movies: “Are you sure we are out here alone?” You can figure out what happens from there. In any case, I certainly can’t tell you, as I stopped watching about three quarters of the way through, having completed what I felt were my journalistic duties and not wishing to subject myself to this cinematic torture any longer. I do remember a pack of feral hamsters. Or something.

Katya, my tour guide, told me that American visitors are afraid of mutants lurking in the tenebrous alleys and dilapidated buildings of Pripyat. She finds this misguided concern easier to manage, however, than the fearless attitude of Polish and Russian visitors, who she says will climb into and over everything without any of the corporeal concerns one might harbor when exploring an abandoned, radioactive metropolis.

School books and papers in an abandoned preschool in the deserted city of Pripyat on January 25, 2006 in Chernobyl, Ukraine. Daniel Berehulak/Panos

Igor, our driver, a Baptist with a Hebraically world-weary sense of humor, found it especially amusing that one American visitor thought that a covered walkway between two buildings was an elevated subway. Igor made several comments about the general naivete of Americans, perhaps suspecting that I enjoyed them. Most of the time, he simply remained in the car sleeping or listening to religious radio, including at one point a lengthy sermon on marriage that he did not turn down for my benefit. He has been to Chernobyl 500 times, and it bores him, he says.

Pripyat did not bore me. It is often called a ghost city because, after the Chernobyl explosion (though not immediately after it, tragically), the majority of the town’s 49,000 residents, 17,000 of whom were children, were ordered onto 1,216 buses and 300 trucks that had come from Kiev, without the basic explanation any neophyte emergency-management student would know to provide.

Of the many books written about Chernobyl, the only one I can confidently say you have to read is Voices From Chernobyl: The Oral History of a Nuclear Disaster. It is the ordinary voices that make this book extraordinary. For example, this is how Lyudmilla Ignatenko describes the evacuation from Pripyat:

It’s night. On one side of the street there are buses, hundreds of buses, they’re already preparing the town for evacuation, and on the other side, hundreds of fire trucks. They came from all over…. Over the radio they tell us they might evacuate the city for three to five days, take your warm clothes with you, you’ll be living in the forest. In tents. People were even glad: a camping trip!

Ignatenko’s husband, Vasily, was one of the firemen sent immediately after the explosions right into the reactor’s maw, where the radiation was far above the lethal dose. More than 20 would die from the exposure. In Voices From Chernobyl, she recalls someone telling her, as she watches Vasily expire in a Moscow hospital, that “this is not your husband anymore, not a beloved person, but a radioactive object with a strong density of poisoning.”

Pripyat is less a ghost town than a museum in handsome disarray. An excellent museum all the same, surely the most authentic record of the Soviet debacle that remains (other than Russia itself). A pretty good one of nuclear energy, too. I have been back to my native Leningrad twice. I have stood in front of the plain cinderblock building where I was raised; have squeezed into a desk in the very same classroom where I was once a Pioneer and where, as I bathed in nostalgia, bored post-Soviet teenagers texted away; have posed humorously in front of the Lenin statue at Finland Station with the native Californian who would become my wife. And these were all fine pricks of memory. Pripyat, though, was a hammer. With sickle.

A view of the control center of the damaged fourth reactor at the Chernobyl nuclear power plant February 24, 2011. Gleb Garanich/Reuters

Not to get all William Wordsworth-at-Tintern-Abbey on you, but there was immense power in walking through a graveyard of gas masks on a classroom floor, or the fresh-meat station of what had once been a bustling supermarket, or the natal unit of a hospital, rusted cribs still looking, after all these years, as if they had just been robbed of their newborn contents. I don’t want to claim to have heard the same “still, sad music of humanity” that famously played to Wordsworth on the banks of the River Wye, but, well, Pripyat is the most life-affirming place that I have ever been to, despite all the suffering that lingers there. For all the cancers, deaths, irradiations and lives broken, the place remains, and there is something to be said for brute rage-against-the-dying-of-the-light survival.

Pripyat is not receding in my mind, the way so many great museums have. Sometimes, what the soul needs is not a masterpiece. And so dusty Pripyat seems to have lodged, like a radioactive particle, into some deep neural fold: slippers on a hospital floor, a rusted circuit box, a piano that can still manage a plangent note or two. Outside the music school, a colorful chaos of mosaic tiles littered the pavement. Katya leaned down, then hesitated. “I would give you some to take, but you have a daughter.”

We carried a dosimeter with us at all times; Igor, my driver, also had a beta-ray detector in his car, which looked like an ancient remote control and remained largely inert. The dosimeter, meanwhile, would make its anxious clicks, but other than in a hot zone in front of a kindergarten, it rarely exceeded 3 or 4 microsieverts per hour; it read 3.88 µSv/h several hundred feet from the ruined reactor. That’s less than what you get bombarded with on a round-trip flight between San Francisco and Paris (6.4 µSv/h). Igor especially delighted in pointing out this fact; he shares that proclivity with a great number of individuals on the Internet, where numerous websites are devoted to gleefully chronicling the radioactivity of bananas (pretty high; it’s the potassium), Brazil nuts (the most radioactive food on Earth) and simply having a loved one sleep next to you (0.05 microsieverts per night). I assault you with all these facts, in the manner of my Chernobyl guides, simply to point out that we are no more screwed in Pripyat than we are in Monterey or Omaha or Manhattan.
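
The arithmetic behind these comparisons is easy to check. A quick sketch, using the dose rates quoted above plus the commonly cited banana-equivalent dose of roughly 0.1 µSv per banana (the banana figure is an outside assumption, not from the article):

```python
# Back-of-envelope dose arithmetic for the figures quoted above.
# These are the article's numbers plus one commonly cited reference
# dose; treat all of it as illustrative, not dosimetry.

pripyat_rate = 3.88     # µSv/h, reading near the ruined reactor
flight_rate = 6.4       # µSv/h, quoted cruise-altitude rate
banana_dose = 0.1       # µSv per banana (assumed "banana equivalent dose")

# A two-hour walk at the hottest reading mentioned:
walk_dose = 2 * pripyat_rate              # 7.76 µSv
# ...is roughly equivalent to this much time at cruise altitude:
hours_flying = walk_dose / flight_rate    # ~1.2 h
# ...or this many bananas:
bananas = walk_dose / banana_dose         # ~78 bananas

print(f"2 h in Pripyat ≈ {walk_dose:.2f} µSv "
      f"≈ {hours_flying:.1f} h at altitude ≈ {bananas:.0f} bananas")
```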

After our forays into Pripyat and the power plant, we would leave the Exclusion Zone, which one is allowed to do only after passing several dosimeter checks, conducted via ancient-looking olive machines that appeared (to my admittedly inexpert eye) to be as effective at detecting radiation as Mr. Magoo is at driving. Anyway, I passed. There was also a lot of handing over of paperwork to surly Ukrainian guards, who would probably rather be battling Russian invaders than inspecting the passports of American journalists. After several needlessly tense moments, the guards would allow us to pass, and Igor would speed down the empty roads of northern Ukraine, often while furiously texting. He did not wear a seat belt, and neither did I. It would have been a grave insult to do so.

Until very recently, the only places to stay while visiting Chernobyl were two small motels in the Exclusion Zone, which would have been reason enough not to come, at least for a spoiled American used to Western comforts (i.e., me). One tour company, in a heartwarming but ominous display of honesty, describes one of these motels, unimaginatively named “Pripyat,” as “Soviet-style simplistic,” which is probably the worst hospitality-industry endorsement imaginable.

This yuppie reporter’s savior proved to be Countryside Cottages, a pleasantly rustic cabin-cum-hotel set on a bucolic and fenced-in landscape in the village of Orane, on the banks of the Teteriv River. The cottage lies outside the Exclusion Zone and its strange currents of tranquillity and unease: You can walk about the village freely without having to undergo dosimetry checks. By my count, Countryside Cottages, which has now been open for about two years, is the closest, and the only, good place to stay near Chernobyl. The best adjective to describe it is Western, and if you have ever traveled beyond the West, you will know what I mean. Yes, the electricity did go out one evening, but only briefly, certainly not long enough to steal the chill from the horseradish vodka in the fridge. There was also a fancy coffee machine, though, alas, no organic milk. SoloEast, which owns Countryside Cottages, boasts on its web page for the hotel, “We can also teach you to plant or dig potato.” This agricultural instruction was neither offered nor, I can assure you, requested. I have already praised the towel warmers.

At the behest of my driver, Igor, I did purchase the Slavic trinity of smoked meat, alcohol and bread before leaving Kiev. In the evening, I would sit with these, watching the swift and surly Teteriv, listening to the incessant crowing of roosters. For all the discordances of modern travel, from a McDonald’s in the Latin Quarter to “eco resorts” in Haiti, perhaps nothing is quite as surreal as the cozy country comforts of the Countryside Cottages, where you are supposed to forget, as you watch gaudy Russian cable on a flat-screen, the residual wreckage you have come to see.

A destroyed school in the ghost town of Smersk, in an area where the radioactive fallout was greater than in Chernobyl itself. Stefan Boness/Ipon/Panos


Ruin porn is a thing. Trust me. It has made Detroit a destination, as there are apparently legions of tourists who’d rather behold the shell of the Michigan Central Station than guzzle piña coladas at a Sandals resort. The popularity of ruin porn is responsible for listicles like “The 38 Most Haunting Abandoned Places on Earth.” Pripyat is first on this list, which also includes the creepy dagger blade of the Ryugyong Hotel in Pyongyang, North Korea, and Bannerman Castle in the Hudson River Valley.

While I was in Pripyat, the Tate Britain in London was staging a show called Ruin Lust, whose catalog includes a quote from the 18th-century French philosopher and encyclopedist Denis Diderot: “The ideas ruins evoke in me are grand. Everything comes to nothing, everything perishes, everything passes, only the world remains, only time endures.”

Ruin porn has even been the subject of an entire book: Andrew Blackwell’s Visit Sunny Chernobyl: And Other Adventures in the World’s Most Polluted Places (2012). The thing is amusing but ultimately too ironic and glib, though Blackwell does get credit for visiting, and dutifully chronicling his exploits at, the Canadian oil sands of Alberta; the refineries of Port Arthur, Texas; and the sewage canals of India. His section on Chernobyl promises to reveal “one weird old tip for repelling gamma rays.”

For some, though, ruin porn is exploitative, a version of poverty tourism: gang tours of Los Angeles, jaunts through Soweto, that sort of thing. On the topic of her city having become a hot spot for urban explorers and gonzo pornographers, one Detroit cultural official has complained that “people here are very sensitive to treating Detroit like it’s a big cemetery and our ruins are beautiful headstones. Those of us who live here don’t like to be seen that way.”

There is nobody in Pripyat to object to your voracious voyeurism. There are, however, some samosels in the Exclusion Zone, elderly settlers who returned to live on the land they had known and worked for decades. There had been about 180 villages here, and some people had survived both Stalin and Hitler. Rogue neutrons weren’t going to keep them away. So they came back, illegally. Nobody bothered to expel them.

Visiting the samosels was uncomfortable in precisely the way that detractors of ruin porn suggest. It was like touring a decrepit zoo where the animals are in obvious distress. I met two villagers, Ded Ivan and Babushka Maria, in front of a homestead in the village of Paryshiv. Many of the surrounding buildings seemed to be little more than wooden slats that accidentally, and only occasionally, formed right angles. Both Ivan and Maria were born in the 1930s, a decade that began with widespread starvation brought upon Ukraine by Stalin. The following decade commenced just as grimly, with the invasion of the Wehrmacht: Ivan remembers being bitten by a German dog that jumped out of a tank.

You are supposed to bring the samosels gifts when you visit on tours such as the one I was taking, but we had forgotten this detail, so I simply handed Maria 200 hryvnias ($16.91 as of this writing), which she placed into the pocket of a filthy light blue coat. Ivan was trying to fix a chainsaw, and my driver Igor helped. Meanwhile, Maria brought me over to see the couple’s pig, and I was coaxed into feeding the snarling, smelly animal a rotten apple, which was the single most frightening and disgusting thing I did while visiting Chernobyl.

This was not a museum of Soviet history; this was Turgenev and Dostoyevsky, the Russian peasant in his element, with a sprinkling of radionuclides thrown in for modernity’s sake. “One tragedy after another,” Ivan bemoaned. He tried to explain further, but he spoke with an exceedingly heavy Ukrainian-Belorussian accent, and so we left things on that melancholy note.


While the samosels live in dishearteningly primitive conditions, the power station itself has the attention of the West’s finest engineers. Much of the Exclusion Zone can be allowed to remain in ruin, except, paradoxically, the thing that caused the devastation.

Sarcophagus comes from the Greek σαρκοφάγος, which roughly means “flesh-eating,” a reference to the limestone tomb within which decayed one’s earthly remains. The one that was erected around the reactor in the seven months following the meltdown is a brutally wondrous thing to behold: about 400,000 cubic meters of concrete and 7,300 metric tons of steel, all of it as gray as a November sky. Remarkably, it has held a radioactive crypt whose contents we don’t fully know and never want to see. Most everyone is sure that the sarcophagus can’t hold much longer, having weathered nearly 30 winters of the sort that sapped the armies of both Hitler and Napoleon (the summers aren’t exactly clement, either).

In the winter of 2013, a portion of the turbine hall collapsed. With brazen nonchalance straight from the Brezhnev years, a spokeswoman for the plant deemed the event “unpleasant.”

James Mahaffey, a nuclear engineer and the author of the recent book Atomic Accidents, told me that while the sarcophagus was necessary, it was “all wrong. You don’t just drop concrete on a burning reactor.” Not that there were many options (or any aesthetic considerations) in the wake of the catastrophe, but the concrete sarcophagus erected under hellish conditions in seven months essentially serves as a thermal blanket, keeping warm the radioactive elements inside (some of these have melted into a nuclear lava called corium, the most notorious deposit of which is called the Elephant’s Foot). It has been upgraded, but you can only do so much with an ’81 Lada. Everyone knows the sarcophagus has to go.

Mahaffey is not circumspect about what worries him most: “Russian concrete. Russian this and Russian that.” He lists a variety of dangers: wind blowing through gaps in the reactor, dispersing radionuclides; rain leaching off same. He later wrote, “I left out birds, insects, migrating animals, tourists, changing of the guard, and sporing bacteria.”

“It wouldn’t take much of a seismic event to knock it down,” a civil engineer recently explained to Scientific American. The Federation of American Scientists says, “If the sarcophagus were to collapse due to decay or geologic disturbance, the resulting radioactive dust storm would cause an international catastrophe on par with or worse than the 1986 accident.” Eater of flesh indeed.

Nor is the land surrounding the reactor quite the pristine preserve that some have celebrated in nature-has-triumphed-over-our-thoughtlessness-and-incompetence fashion. Earlier this year, a study by University of South Carolina biologist Timothy Mousseau and others indicated that fallen trees weren’t decomposing because, in Mousseau’s words, “the radiation inhibited microbial decomposition of the leaf litter on the top layer of the soil,” turning the ground into a vast firetrap at whose center sits the aged sarcophagus.

So, at best, Chernobyl is merely dormant. To extend that dormancy for a lot longer, Novarka was contracted in 2007 to build the New Safe Confinement. Though it is sometimes described as a gigantic hangar, having seen the NSC I think of it as something more elegant, its hopeful parabolic curves recalling the smooth grace of the Gateway Arch in St. Louis. In cross section, it is two layers of steel with a 39-foot layer of latticework in between. Its combined shapes and angles are so fluid and simple, you want to put them on a ninth-grade geometry quiz.

Currently being built in two pieces, it will rise 30 stories, weigh 30,000 tons, and cost perhaps as much as $2 billion. When completed, the steel contraption will slide along Teflon rails on top of Reactor No. 4 (a process that will take several days). It is believed to be the largest movable structure on Earth. The NSC will be so enormous that, according to the British technology journal The Engineer, it “is one of a handful of buildings that will enclose a volume of air large enough to create its own weather.”

Chernobyl is on the border with Belarus, far from both Crimea and the eastern borderlands where Russian forces have belligerently gathered. And yet the conflict between Kiev and Moscow could have repercussions here. One report, for example, surmised that Western nations funding the NSC “may be leery of investing amid political instability.” The article has an economist wondering if Russia will “use completion as yet another bullying point to continue their moves on Ukraine.”

This may be a pure linguistic accident, but Novarka sounds like a Slavicized contraction of Noah’s ark. Yes, I am acutely aware that ark and arch might for some seem to be homophones, and not even good ones at that. Yet the more I think about the association, the more sense it makes: This arch, like that ark, is supposed to save us from our own sins and folly. Though, admittedly, the metaphor only goes so far. It would not be water, this time, prevailing upon the earth, but a pestilence invisible and unlikely to ever recede.


Katya, my Virgil through the Exclusion Zone, estimated that 90 percent of the tourists who come to Chernobyl are just “checking a box.” I was checking a box, too, one that had remained empty ever since my father made his strange warnings 28 years ago about the Leningrad sky, which was as overcast that spring as it was every spring for which my memory was available. What was up there, all of a sudden, that I needed to avoid?

“This is a lesson for humanity,” Katya told me as we walked through town. But what lessons, exactly, Ukraine has learned from Chernobyl are not clear. Some people put the death toll in the mere dozens, these being mostly of the first responders who entered the reactor without the benefit of proper protection. Others think that, when all the cancers have run their course, the fatalities will be in the six figures. The World Health Organization says that Chernobyl claimed 4,000 souls. But nobody truly knows.

Nor did Chernobyl put an end to nuclear energy in Ukraine. According to the World Nuclear Association, Ukraine “is heavily dependent on nuclear energy; it has 15 reactors generating about half of its electricity.” And the hostilities with Russia have renewed calls for Ukraine to regain its status as a nuclear superpower. As one Ukrainian politician explained, in what seems to be textbook realpolitik, “If you have nuclear weapons, people don’t invade you.” Yeah, maybe. But yikes.

“Humanity learns mostly by disasters,” Hans Blix told me when I reached him by phone at his home in Stockholm. As head of the International Atomic Energy Agency, he was the first Westerner to see the ruined reactor, flying over it in a helicopter about a week after the disaster. “It was a sad sight,” he recalls. The graphite moderator was still aflame; he jokingly likens it, today, to “burning pencils.”

Pliny the Younger, writing of the destruction of Pompeii in A.D. 79 by the eruption of Mount Vesuvius, described how “a dense black cloud was coming up behind us, spreading over the earth like a flood…. We had scarcely sat down to rest when darkness fell, not the dark of a moonless or cloudy night, but as if the lamp had been put out in a closed room.”

Yet the most curious aspect of Pliny’s letter to Tacitus is the following: “There were people, too, who added to the real perils by inventing fictitious dangers: some reported that part of Misenum had collapsed or another part was on fire, and though their tales were false they found others to believe them. A gleam of light returned, but we took this to be a warning of the approaching flames rather than daylight.” It is almost as if Pliny is offering a rebuke against excessive despair at the moment that Pompeii was facing certain doom. It’s hope against hope.

Chernobyl is a similar amalgam of fears real and imagined, of Chernobyl Diaries alarmism combined with sobering tales about the limits of human power. You are reminded of the latter by a statue of Prometheus that today stands at the power station. Originally, that statue stood in front of the movie theater in Pripyat, which was also called Prometheus, the metallic lettering (Прометей) still affixed to the facade, a three-syllable battalion weary and weathered by battle.

Prometheus! It’s like they knew.

Transgenic mosquito for dengue control approved by CTNBio (Portal do Meio Ambiente)

APRIL 17, 2014

Brasília – CTNBio approved the request for commercial release of a transgenic variety of Aedes aegypti (the mosquito that transmits the dengue virus and a newer virus, Chikungunya), developed by the British company Oxitec. A. aegypti OX513A carries a conditional lethality gene, which is activated in the absence of tetracycline. The males, separated from the females while still at the pupal stage, can be mass-produced in a biofactory and then released into the environment. For details see .

The roll-call vote in the Plenary resulted in 16 votes in favor (one of them conditional) and one against.

Before the vote, the review opinion on the application was read. The reporting member argued that the application should be sent back for further inquiry, citing several flaws that, in his view, prevented a sound conclusion. His main argument was that eliminating A. aegypti quickly and extensively would open space for recolonization by another mosquito, such as Aedes albopictus. His opinion was roundly rejected by the Commission.

Also before the vote, some members proposed a public instruction hearing, which was rejected by 11 votes to 4.

The discussion immediately before the vote dwelt less on the mosquito's direct risks to human and animal health and to the environment, turning instead to the technology's benefits. This shift reflected CTNBio's consensus on the product's safety and on the urgency of new techniques for controlling the dengue vector. It also reflected CTNBio's confidence in the technology's potential to reduce A. aegypti populations without risk of a resurgence of other diseases, the appearance of new endemic diseases, or replacement of the vector mosquito, in complete opposition to the isolated view of the member who filed the request for review. A detailed discussion of that member's view is available at

With these results, CTNBio opens to the country the possibility of using a transgenic mosquito for dengue control. The commercial release of this mosquito is also the first commercial release of a transgenic insect in the world. Brazil, applying efficient and serious legislation to the risk assessment of genetically modified organisms, sets an example of seriousness and maturity both for countries that already conduct GMO risk assessment and for those that still hesitate to adopt this technology.

Source: GenPeace.

*   *   *

April 17, 2014 – 12:13 pm

Transgenic mosquitoes approved, but researchers fear the risks (Adital)

by Mateus Ramos, Adital


An important, and dangerous, step was taken last week by the National Technical Biosafety Commission (CTNBio), which approved the release of genetically modified mosquitoes in Brazil. The transgenic mosquitoes will be used for research and for fighting dengue in the country. The project, which allows the British company Oxitec to sell the mosquitoes, was deemed technically safe by CTNBio and now only needs registration with the National Health Surveillance Agency (Anvisa) to be, in fact, released.

For José Maria Ferraz, a professor at the Federal University of São Carlos (SP) and former CTNBio member, speaking to Adital, the Commission's approval is a strong indication that Anvisa will follow suit. "It will certainly be approved; the Ministry of Health's own representative was there and said that, in the face of the dengue epidemics, he favored approving the project."

Ferraz sharply criticizes both CTNBio's approval and the project itself. "There is no single policy for confronting dengue, but rather a set of actions. Beyond that, there is no guarantee that the released mosquitoes will not also carry the disease. In other words, they are going to release millions of mosquitoes across the country without a serious prior study of the project. What was done is utterly absurd. It is insanity; I have never seen so many things wrong in a single project."

Another major problem Ferraz points to is the risk of drastically altering the number of Aedes aegypti mosquitoes. A possible reduction could increase the proliferation of an even more harmful mosquito, Aedes albopictus, which he says transmits not only dengue but other diseases, such as malaria. He also warns that flaws in the project could lead to the release of non-sterile males and of females, making control of the species difficult. "The country is serving as a guinea pig for an experiment never before attempted anywhere in the world. We approved this project very quickly, irresponsibly."

The project's promised results could be undermined, for example, if the mosquitoes come into contact with the antibiotic tetracycline, which is found in many cat and dog feeds. "It is enough for the mosquitoes to come into contact with the feces of animals fed rations containing this antibiotic for the whole experiment to fail," says Ferraz.

Understanding the project

According to Oxitec, the technique consists of introducing two new genes into male mosquitoes which, upon mating with females in the wild, produce larvae unable to reach adulthood; that is, the offspring never reach the stage at which they can transmit the disease to humans. The offspring also inherit a marker that makes them visible under a specific light, making them easier to monitor.
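
The approach is, in effect, a numbers game: released males carrying the lethal construct compete with wild males for mates, and every mating they win produces no surviving adults. A toy discrete-generation model (all parameters invented for illustration, not Oxitec's figures) shows why sustained releases can crash a population:

```python
# Toy discrete-generation model of population suppression by releasing
# males carrying a dominant lethal gene. All numbers are invented for
# illustration; real release programs are calibrated in the field.

def simulate(n0=1000, release=5000, daughters_per_female=1.2, generations=12):
    """Return the female population per generation.

    Each generation, a female mates a wild male with probability
    n / (n + release); matings with transgenic males leave no adults.
    """
    n = float(n0)
    history = [n]
    for _ in range(generations):
        wild_fraction = n / (n + release) if (n + release) > 0 else 0.0
        n = daughters_per_female * n * wild_fraction
        history.append(n)
    return history

suppressed = simulate()            # sustained releases each generation
untreated = simulate(release=0)    # no releases: population grows ~20%/gen

print(f"without releases: {untreated[-1]:.0f} females after 12 generations")
print(f"with releases:    {suppressed[-1]:.6f} females after 12 generations")
```

The key feature is the feedback: as the wild population shrinks, a fixed release size makes up an ever larger share of the mating pool, so suppression accelerates rather than stalls.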

* Originally published on the Adital website.

How does radioactive waste interact with soil and sediments? (Science Daily)

Date: February 3, 2014

Source: Sandia National Laboratories

Summary: Scientists are developing computer models that show how radioactive waste interacts with soil and sediments, shedding light on waste disposal and how to keep contamination away from drinking water.

Sandia National Laboratories geoscientist Randall Cygan uses computers to build models showing how contaminants interact with clay minerals. Credit: Lloyd Wilson

Sandia National Laboratories is developing computer models that show how radioactive waste interacts with soil and sediments, shedding light on waste disposal and how to keep contamination away from drinking water.

“Very little is known about the fundamental chemistry and whether contaminants will stay in soil or rock or be pulled off those materials and get into the water that flows to communities,” said Sandia geoscientist Randall Cygan.

Researchers have studied the geochemistry of contaminants such as radioactive materials and toxic heavy metals, including lead, arsenic and cadmium. But laboratory testing of soils is difficult. “The tricky thing about soils is that the constituent minerals are hard to characterize by traditional methods,” Cygan said. “In microscopy there are limits on how much information can be extracted.”

He said soils are often dominated by clay minerals with ultra-fine grains less than 2 microns in diameter. “That’s pretty small,” he said. “We can’t slap these materials on a microscope or conventional spectrometer and see if contaminants are incorporated into them.”

Cygan and his colleagues turned to computers. “On a computer we can build conceptual models,” he said. “Such molecular models provide a valuable way of testing viable mechanisms for how contaminants interact with the mineral surface.”

He describes clay minerals as the original nanomaterial, the final product of the weathering process of deep-seated rocks. “Rocks weather chemically and physically into clay minerals,” he said. “They have a large surface area that can potentially adsorb many different types of contaminants.”

Clay minerals are made up of aluminosilicate layers held together by electrostatic forces. Water and ions can seep between the layers, causing them to swell, pull apart and adsorb contaminants. “That’s an efficient way to sequester radionuclides or heavy metals from ground waters,” Cygan said. “It’s very difficult to analyze what’s going on in the interlayers at the molecular level through traditional experimental methods.”

Molecular modeling describes the characteristics and interaction of the contaminants in and on the clay minerals. Sandia researchers are developing the simulation tools and the critical energy force field needed to make the tools as accurate and predictive as possible. “We’ve developed a foundational understanding of how the clay minerals interact with contaminants and their atomic components,” Cygan said. “That allows us to predict how much of a contaminant can be incorporated into the interlayer and onto external surfaces, and how strongly it binds to the clay.”
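
The "energy force field" in question is, at bottom, a recipe for pair energies: each ion-mineral interaction is scored as an electrostatic term plus a short-range van der Waals term. A minimal sketch of that bookkeeping, using generic Lennard-Jones and Coulomb forms with invented parameters (not Sandia's actual force-field values):

```python
# Minimal pair-energy bookkeeping of the kind a clay force field performs:
# total energy = Coulomb (electrostatics) + Lennard-Jones (short range).
# The charges, epsilon, and sigma below are invented for illustration.

COULOMB_K = 332.0637  # kcal*A/(mol*e^2), standard conversion constant

def pair_energy(q1, q2, r, epsilon, sigma):
    """Interaction energy (kcal/mol) of two sites a distance r (angstroms) apart."""
    coulomb = COULOMB_K * q1 * q2 / r
    lj = 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return coulomb + lj

# A hypothetical monovalent cation above a negatively charged surface oxygen:
for r in (2.5, 3.2, 4.0, 6.0):
    e = pair_energy(q1=+1.0, q2=-1.0, r=r, epsilon=0.1, sigma=3.0)
    print(f"r = {r:3.1f} A  ->  E = {e:9.2f} kcal/mol")
```

A real simulation sums millions of such terms over every pair of atoms in the clay layers, interlayer water, and contaminant ions; the art Cygan describes is in fitting the parameters so those sums reproduce measured adsorption behavior.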

The computer models quantify how well a waste repository might perform. “It allows us to develop performance assessment tools the Environmental Protection Agency and Nuclear Regulatory Commission need to technically and officially say, ‘Yes, let’s go ahead and put nuclear waste in these repositories,'” Cygan said.

Molecular modeling methods also are used by industry and government to determine the best types of waste treatment and mitigation. “We’re providing the fundamental science to improve performance assessment models to be as accurate as possible in understanding the surface chemistry of natural materials,” Cygan said. “This work helps provide quantification of how strongly or weakly uranium, for example, may adsorb to a clay surface, and whether one type of clay over another may provide a better barrier to radionuclide transport from a waste repository. Our molecular models provide a direct way of making this assessment to better guide the design and engineering of the waste site. How cool is that?”

Proposal annuls the oil exploration auction for the Libra field (Agência Câmara)

JC e-mail 4883, January 29, 2014

SBPC and ABC call for more research on the potential environmental damage of shale gas extraction

Bill of Legislative Decree (PDC) 1289/13, by Deputy Chico Alencar (Psol-RJ), now before the Chamber of Deputies, would suspend the authorization for the auction of oil and gas exploration rights in the Libra field (RJ), held in October 2013.

The deputy wants to annul four rules that enabled the auction of the field, where Brazil's pre-salt reserves will be exploited: resolutions 4/13 and 5/13 of the National Energy Policy Council; Ordinance 218/13 of the Ministry of Mines and Energy; and the Libra Field bidding notice.

With a projected output of 8 to 12 billion barrels of oil, the Libra field was auctioned amid protests and under heavy police protection. Despite expectations that up to four consortia would compete, only one took part, formed by Petrobras, Shell, Total, CNPC and CNOOC. It won the auction by offering to transfer 41.65% of the surplus oil extracted to the federal government, the minimum percentage set in the bidding notice.

Alencar opposes concessions for oil exploration because he believes Petrobras can develop the Brazilian fields on its own. He also argues that the rules that enabled the auction are flawed. "The National Petroleum Agency published the final text of the bidding notice and the contract for the Libra auction before the opinion of the Federal Court of Accounts (TCU)," he noted.

The deputy also stressed that allegations of foreign espionage against Petrobras cast suspicion over the auction. "Illegally obtaining Petrobras's strategic information benefits its competitors in the market and compromises the auction," he criticized.

The bill will be examined by the committees on Mines and Energy; Finance and Taxation; and Constitution, Justice and Citizenship. It must then be approved by the full Chamber.

Full text of the bill:


(Carol Siqueira / Agência Câmara)

Manifesto of the scientific community
SBPC and ABC call for more research on the potential environmental damage of shale gas extraction –

Climate Engineering: What Do the Public Think? (Science Daily)

Jan. 12, 2014 — Members of the public have a negative view of climate engineering, the deliberate large-scale manipulation of the environment to counteract climate change, according to a new study.

The results are from researchers from the University of Southampton and Massey University (New Zealand) who have undertaken the first systematic large-scale evaluation of the public reaction to climate engineering.

The work is published in Nature Climate Change this week (12 January 2014).

Some scientists think that climate engineering approaches will be required to combat the inexorable rise in atmospheric CO2 due to the burning of fossil fuels. Climate engineering could involve techniques that reduce the amount of CO2 in the atmosphere or approaches that slow temperature rise by reducing the amount of sunlight reaching the Earth’s surface.

Co-author Professor Damon Teagle of the University of Southampton said: “Because even the concept of climate engineering is highly controversial, there is pressing need to consult the public and understand their concerns before policy decisions are made.”

Lead author, Professor Malcolm Wright of Massey University, said: “Previous attempts to engage the public with climate engineering have been exploratory and small scale. In our study, we have drawn on commercial methods used to evaluate brands and new product concepts to develop a comparative approach for evaluating the public reaction to a variety of climate engineering concepts.”

The results show that the public has strong negative views towards climate engineering. Where there are positive reactions, they favour approaches that reduce carbon dioxide over those that reflect sunlight.

“It was a striking result and a very clear pattern,” said Professor Wright. “Interventions such as putting mirrors in space or fine particles into the stratosphere are not well received. More natural processes of cloud brightening or enhanced weathering are less likely to raise objections, but the public react best to creating biochar (making charcoal from vegetation to lock in CO2) or capturing carbon directly from the air.”

Nonetheless, even the best-regarded techniques still carry a net negative perception.

The work consulted large representative samples in both Australia and New Zealand. Co-author Pam Feetham said: “The responses are remarkably consistent from both countries, with surprisingly few variations except for a slight tendency for older respondents to view climate engineering more favourably.”

Professor Wright noted that giving the public a voice so early in technological development was unusual, but increasingly necessary. “If these techniques are developed the public must be consulted. Our methods can be employed to evaluate the responses in other countries and reapplied in the future to measure how public opinion changes as these potential new technologies are discussed and developed,” he said.

Journal Reference:

  1. Malcolm J. Wright, Damon A. H. Teagle, Pamela M. Feetham. A quantitative evaluation of the public response to climate engineering. Nature Climate Change, 2014; DOI: 10.1038/nclimate2087

Our singularity future: should we hack the climate? (Singularity Hub)

Written By: 

Posted: 01/8/14 8:31 AM


Even the most adamant techno-optimists among us must admit that new technologies can introduce hidden dangers: Fire, as the adage goes, can cook the dinner, but it can also burn the village down.

The most powerful example of unforeseen disadvantages stemming from technology is climate change. Should we attempt to fix a problem caused by technology, using more novel technology to hack the climate? The question has spurred heated debate.

Those in favor point to failed efforts to curb carbon dioxide emissions and insist we need other options. What if a poorly understood climatic tipping point tips and the weather becomes dangerous overnight; how will slowing emissions help us then?

“If you look at the projections for how much the Earth’s air temperature is supposed to warm over the next century, it is frightening. We should at least know the options,” said Rob Wood, a University of Washington climatologist who edited a recent special issue of the journal Climatic Change devoted to geoengineering.

Wood’s view is gaining support, as the predictions about the effects of climate change continue to grow more dire, and the weather plays its part to a tee.

But big, important questions need answers before geoengineering projects take off. Critics point to science’s flimsy understanding of the complex systems that drive the weather. And even supporters lament the lack of any experimental framework to contain disparate experiments on how to affect it.

“Proposed projects have been protested or canceled, and calls for a governance framework abound,” Lisa Dilling and Rachel Hauser wrote in a paper that appears in the special issue. “Some have argued, even, that it is difficult if not impossible to answer some research questions in geoengineering at the necessary scale without actually implementing geoengineering itself.”

Most proposed methods of geoengineering derive from pretty basic science, but questions surround how to deploy them at a planetary scale and how to measure desired and undesired effects on complex weather and ocean cycles. Research projects that would shed light on those questions would be big enough themselves potentially to affect neighboring populations, raising ethical questions as well.

Earlier efforts to test fertilizing the ocean with iron, to feed algae that would suck carbon dioxide from the air, and to spray the pollutant sulfur dioxide, which reflects solar radiation, into the atmosphere were mired in controversy. A reputable UK project abandoned its plans to test its findings in the field.

But refinements on those earlier approaches are percolating. They include efforts both to remove previously emitted carbon dioxide from the atmosphere and to reduce the portion of the sun’s radiation that enters the atmosphere.

One method of carbon dioxide removal (or CDR) would expose large quantities of carbon-reactive minerals to the air and then store the resulting compounds underground; another would use large CO2 vacuums to suck the greenhouse gas directly from the air into underground storage.

Solar radiation management (or SRM) methods include everything from painting roofs white to seeding the clouds with salt crystals to make them more reflective and mimicking the climate-cooling effects of volcanic eruptions by spraying  sulfur compounds into the atmosphere.

The inevitable impact of geoengineering research on the wider population has led many scientists to compare geoengineering to genetic research. The comparison to genetic research also hints at the huge benefits geoengineering could have if it successfully wards off the most savage effects of climate change.

As with genetic research, principles have been developed to shape the ethics of the research. Still, the principles remain vague, according to a 2012 Nature editorial, and flawed, according to a philosophy-of-science take in the recent journal issue. Neither the U.S. government nor international treaties have addressed geoengineering per se, though many treaties would influence its testing and implementation.

The hottest research now explores how long climate-hacks would take to work, lining up their timelines with the slow easing of global warming that would result from dramatically lowered carbon dioxide emissions, and how to weigh the costs of geoengineering projects and accommodate public debate.

Proceeding with caution won’t get fast answers, but it seems a wise way to address an issue as thorny as readjusting the global thermostat.

On shale exploration in Brazil

JC e-mail 4855, November 13, 2013

Scientists want to postpone shale exploration

Environmentalists and researchers fear environmental damage. The position of SBPC and ABC was set out in a letter

Shale gas extraction in Brazil’s river basins, particularly in the Amazon region, runs counter to the course taken by European countries such as France and Germany, and by some parts of the United States, such as New York State, which have been banning the activity for fear of environmental damage despite its economic viability. The damage arises because, to extract the gas, the various types of metamorphic rock known as shale are broken up by hydraulic pumping or by a series of chemical additives.

While the National Petroleum Agency (ANP) is holding to its decision to auction shale gas blocks on November 28 and 29, authorities in New York, a pioneer in exploiting this resource since 2007, are beginning to revise their internal policies. More radically, France recently ratified a ban on hydraulic fracturing of shale rock before extraction had even begun, according to specialists.

Scientifically dubbed “folhelho” (shale) gas, shale gas is also known as “unconventional” gas. Although it has the same origin and uses as conventional gas, shale gas differs in how it is extracted. The gas cannot leave the rock on its own, unlike conventional or natural gas, which migrates naturally through the rock layers. Extracting gas from shale, that is, completing the production process, requires artificial means, such as fracturing the rock by hydraulic pumping or with various chemical additives.

In confirming the auctions, ANP states, through its press office, that the initiative complies with CNPE Resolution No. 6 (of June 23 of this year), published in the Diário Oficial da União. A total of 240 onshore exploration blocks with natural gas potential will be offered in seven sedimentary basins, located in the states of Amazonas, Acre, Tocantins, Alagoas, Sergipe, Piauí, Mato Grosso, Goiás, Bahia, Maranhão, Paraná and São Paulo, covering 168,348.42 km² in all.


The shale gas extracted from these basins will have the same destination as oil, that is, it will be sold as an energy source. In Brazil, shale gas could supply mainly Rio Grande do Sul, Santa Catarina and Paraná, where demand for natural gas, a product these states import from Bolivia, is growing.

Despite the economic potential, chemist Jailson Bitencourt de Andrade, a council member of the Brazilian Society for the Advancement of Science (SBPC), reiterates his view that the ANP auctions should be postponed and research into the negative impacts of shale gas extraction expanded, in order to prevent harm to the environment. “This needs close attention,” warns the researcher, who is also a member of the Brazilian Chemical Society (SBQ) and the Brazilian Academy of Sciences (ABC). “Even in the United States, where a good logistics chain keeps the cost of shale gas extraction down, and even though the cost-benefit ratio is very high, some states are already revising their policies and erecting barriers to the exploitation of this resource.”

The position of SBPC and ABC

In a letter (available at ), released in August, SBPC and ABC express their concern over ANP’s decision to include shale gas, obtained by fracturing the rock, in the next licensing round. One reason is that the extraction technology relies on processes that are “invasive of the gas-bearing geological layer, through the technique of hydraulic fracturing, with the injection of water and chemical substances, which can cause leaks and contamination of the freshwater aquifers that lie above the shale.”

Given this scenario, Andrade again argues that Brazil needs to invest more in scientific knowledge of the basins slated for exploration, “if only to establish the current condition of the rocks so that any future impacts on these basins can be compared.” He added that the government, through the Ministry of Science, Technology and Innovation (MCTI) and the Brazilian Innovation Agency (Finep), is assembling a research network to study the impacts of shale gas.

Researcher Hernani Aquini Fernandes Chaves, deputy coordinator of the National Oil and Gas Institute (INOG), who advocates studying every alternative for producing gas to eventually replace oil, stresses, on the other hand, that despite possible damage to the shale rock, using this gas “is environmentally sounder” than oil itself. “It emits less gas,” he maintains. “We need to know all the production possibilities because, besides irrigating the economy, oil is a finite good that will run out one day. The country is large, so we have to look at ways of bringing progress to every area.” He was referring to the interior of Maranhão, one of the poorest regions of Brazil and one with shale gas potential.

Without wishing to compare U.S. and Brazilian shale gas potential, Chaves considers the U.S. Energy Information Administration’s estimates for Brazil, reserves on the order of 7.35 trillion m³, “very optimistic.” According to Chaves, INOG has not yet made its own estimates for shale gas production in Brazil, and the gas-producing basins, he said, have not yet been confirmed. On an experimental basis, however, shale gas is already produced by Petrobras at the São Mateus do Sul plant.

Speaking of the environmental damage caused by shale gas extraction, Chaves acknowledges that this is “a controversial point.” For now, he explains that in Europe, notably in France and Germany, shale gas extraction is not allowed because the process consumes a great deal of water and harms aquifers. In New York, where production had begun, extraction has also come under question. “Environmentalists are not happy with the production of this gas,” he acknowledges. “In France, for example, they would not let the rock be drilled, even knowing the estimates for shale gas production.”

Clarifications from ANP

According to the statement from ANP’s press office, the areas offered in ANP’s licensing rounds are analyzed in advance for environmental viability by the Brazilian Institute of the Environment and Renewable Natural Resources (Ibama) and by the competent state environmental agencies. “The aim of this joint work is to exclude, where necessary, areas under environmental restrictions because they overlap conservation units or other sensitive areas, where oil and natural gas exploration and production (E&P) activities are not possible or advisable.”

For all the blocks offered in the 12th licensing round, according to the statement, there was the “due favorable opinion” of the competent state environmental agency. “Although ANP does not regulate environmental matters, it is attentive to developments on this topic as they relate to oil and natural gas production in Brazil. To that end, the best practices used in the oil and natural gas industry around the world are constantly monitored and adopted by ANP,” the document states.

ANP adds: “As the regulatory process is dynamic, ANP will take the necessary measures, whenever pertinent, to adapt its rules to the issues that arise in the coming years in order to guarantee operational safety.”

(Viviane Monteiro / Jornal da Ciência)

* * *

JC e-mail 4856, November 14, 2013

Seminar promotes debate on the environmental impacts of shale gas extraction

With the participation of Jailson de Andrade, a council member of SBPC, the meeting also discussed the Brazilian energy sector’s need for this energy source

The Brazilian Institute of Social and Economic Analyses (Ibase), Greenpeace, ISA, Fase and CTI held a public seminar in São Paulo yesterday (the 13th) on the socio-environmental impacts of shale gas extraction in Brazil. With the participation of Jailson de Andrade, a council member of SBPC, the meeting debated the environmental questions involved in this type of mineral extraction and discussed its viability. Also discussed was the Brazilian energy sector’s need for this energy source, with a focus on the basins of Acre and Mato Grosso and on the Guarani aquifer. Ibase researcher Carlos Bittencourt warned that the auction must be postponed so that the necessary studies can be carried out before extraction is authorized.

The licensing process for conventional and unconventional natural gas areas is due to take place at the end of this month, when the National Petroleum Agency (ANP) will offer 240 onshore exploration blocks spread across 12 Brazilian states. Shale gas, an unconventional gas used by power plants and industry, is an energy source that, although long known, remained unexploited for many years for lack of technology capable of making its extraction viable.

Speakers included Ricardo Baitelo (Greenpeace), Bianca Dieile (FAPP-BG), Conrado Octavio (CTI) and Angel Matsés (Comunidad Nativa Matsés), with moderation by Padre Nelito (CNBB). The seminar was supported by Norwegian Church Aid.

(With information from Ibase)

Fukushima Forever (Huff Post)

Charles Perrow

Posted: 09/20/2013 2:49 pm

Recent disclosures of tons of radioactive water from the damaged Fukushima reactors spilling into the ocean are just the latest evidence of the continuing incompetence of the Japanese utility, TEPCO. The announcement that the Japanese government will step in is also not reassuring, since it was the Japanese government that failed to regulate the utility for decades. But, bad as it is, the current contamination of the ocean should be the least of our worries. The radioactive poisons are expected to form a plume that will be carried by currents to the coast of North America. But the effects will be small, adding an unfortunate bit to our background radiation. Fish swimming through the plume will be affected, but we can avoid eating them.

Much more serious is the danger that the spent fuel rod pool at the top of nuclear plant number four will collapse in a storm or an earthquake, or in a failed attempt to carefully remove each of the 1,535 rods and safely transport them to the common storage pool 50 meters away. Conditions in the unit 4 pool, 100 feet above the ground, are perilous, and if any two of the rods touch it could cause a nuclear reaction that would be uncontrollable. The radiation emitted from all these rods, if they are not continually cooled and kept separate, would require the evacuation of surrounding areas, including Tokyo. Because of the radiation at the site, the 6,375 rods in the common storage pool could not be continuously cooled; they would fission, and all of humanity would be threatened for thousands of years.

Fukushima is just the latest episode in a dangerous dance with radiation that has been going on for 68 years. Since the atomic bombing of Nagasaki and Hiroshima in 1945 we have repeatedly let loose plutonium and other radioactive substances on our planet, and authorities have repeatedly denied or trivialized their dangers. The authorities include national governments (the U.S., Japan, the Soviet Union/ Russia, England, France and Germany); the worldwide nuclear power industry; and some scientists both in and outside of these governments and the nuclear power industry. Denials and trivialization have continued with Fukushima. (Documentation of the following observations can be found in my piece in the Bulletin of the Atomic Scientists, upon which this article is based.) (Perrow 2013)

In 1945, shortly after the bombing of two Japanese cities, the New York Times headline read: “Survey Rules Out Nagasaki Dangers”; soon after the 2011 Fukushima disaster it read “Experts Foresee No Detectable Health Impact from Fukushima Radiation.” In between these two we had experts reassuring us about the nuclear bomb tests, plutonium plant disasters at Windscale in northern England and Chelyabinsk in the Ural Mountains, and the nuclear power plant accidents at Three Mile Island in the United States and Chernobyl in what is now Ukraine, as well as the normal operation of nuclear power plants.

Initially the U.S. Government denied that low-level radiation experienced by thousands of Japanese people in and near the two cities was dangerous. In 1953, the newly formed Atomic Energy Commission insisted that low-level exposure to radiation “can be continued indefinitely without any detectable bodily change.” Biologists and other scientists took exception to this, and a 1956 report by the National Academy of Sciences, examining data from Japan and from residents of the Marshall Islands exposed to nuclear test fallout, successfully established that all radiation was harmful. The Atomic Energy Commission then promoted a statistical or population approach that minimized the danger: the damage would be so small that it would hardly be detectable in a large population and could be due to any number of other causes. Nevertheless, the Radiation Research Foundation detected it in 1,900 excess deaths among the Japanese exposed to the two bombs. (The Department of Homeland Security estimated only 430 cancer deaths.)

Besides the uproar about the worldwide fallout from testing nuclear weapons, another problem with nuclear fission soon emerged: a fire in a British plant making plutonium for nuclear weapons sent radioactive material over a large area of Cumbria, resulting in an estimated 240 premature cancer deaths, though the link is still disputed. The event was not made public and no evacuations were ordered. Also kept secret, for over 25 years, was a much larger explosion and fire, also in 1957, at the Chelyabinsk nuclear weapons processing plant in the eastern Ural Mountains of the Soviet Union. One estimate is that 272,000 people were irradiated; lakes and streams were contaminated; 7,500 people were evacuated; and some areas still are uninhabitable. The CIA knew of it immediately, but they too kept it secret. If a plutonium plant could do that much damage it would be a powerful argument for not building nuclear weapons.

Powerful arguments were needed, due to the fallout from the fallout from bombs and tests. Peaceful use became the mantra. Project Plowshares, initiated in 1958, conducted 27 “peaceful nuclear explosions” from 1961 until the costs, as well as public pressure from unforeseen consequences, ended the program in 1975. The Chairman of the Atomic Energy Commission indicated Plowshares’ close relationship to the increasing opposition to nuclear weapons, saying that peaceful applications of nuclear explosives would “create a climate of world opinion that is more favorable to weapons development and tests” (emphasis supplied). A Pentagon official was equally blunt, saying in 1953, “The atomic bomb will be accepted far more readily if at the same time atomic energy is being used for constructive ends.” The minutes of a National Security Council meeting in 1953 spoke of destroying the taboo associated with nuclear weapons and “dissipating” the feeling that we could not use an A-bomb.

More useful than peaceful nuclear explosions were nuclear power plants, which would produce the plutonium necessary for atomic weapons as well as legitimating them. Nuclear power plants, the daughter of the weapons program (actually its “bad seed”), were born and soon bore first fruit with the 1979 Three Mile Island accident. Increases in cancer were found, but the Columbia University study declared that the level of radiation from TMI was too low to have caused them, and the “stress” hypothesis made its first appearance as the explanation for rises in cancer. Another university study disputed this, arguing that radiation caused the increase, and since a victim suit was involved, it went to a Federal judge, who ruled in favor of stress. A third, larger study found “slight” increases in cancer mortality and an increased risk of breast and other cancers, but found “no consistent evidence” of a “significant impact.” Indeed, it would be hard to find such an impact when so many other things can cause cancer, and it is so widespread. And since stress can cause it, there is ample ambiguity that can be mobilized to defend nuclear power plants.

Ambiguity was mobilized by the Soviet Union after the 1986 Chernobyl disaster. Medical studies by Russian scientists were suppressed, and doctors were told not to use the designation of leukemia in health reports. Only after a few years had elapsed did any serious studies acknowledge that the radiation was serious. The Soviet Union forcefully argued that the large drops in life expectancy in the affected areas were due not just to stress, but to lifestyle changes. The International Atomic Energy Agency (IAEA), charged with both promoting nuclear power and helping make it safe, agreed, and mentioned such things as obesity, smoking, and even unprotected sex, arguing that the affected population should not be treated as “victims” but as “survivors.” The count of premature deaths has varied widely: UN agencies put it at 4,000 in the contaminated areas of Ukraine, Belarus and Russia, while Greenpeace puts it at 200,000. We also have the controversial worldwide estimate of 985,000 from Russian scientists with access to thousands of publications from the affected regions.

Even when nuclear power plants are running normally they are expected to release some radiation, but so little as to be harmless. Numerous studies have now challenged that. When eight U.S. nuclear plants were closed in 1987, they provided the opportunity for a field test. Two years later strontium-90 levels in local milk had declined sharply, as had birth defects and death rates of infants within 40 miles of the plants. A 2007 study of all German nuclear power plants found that childhood leukemia more than doubled among children living less than 3 miles from the plants, but the researchers held that the plants could not have caused it because their radiation levels were so low. A French study found similar results and reached a similar conclusion: it could not be low-level radiation, though the researchers had no other explanation. A meta-study published in 2007 of 136 reactor sites in seven countries, extended to include children up to age 9, found childhood leukemia increases of 14 percent to 21 percent.

Epidemiological studies of children and adults living near the Fukushima Daiichi nuclear plant will face the same obstacles as earlier studies. About 40 percent of the aging population of Japan will die of some form of cancer; how can one be sure it was not caused by one of the multiple other causes? It took decades for the effects of the atomic bombs and Chernobyl to clearly emblazon the word “CANCER” on these events. Almost all scientists finally agree that the dose effects are linear, that is, any radiation added to natural background radiation, even low-levels of radiation, is harmful. But how harmful?

University professors have declared that the health effects of Fukushima are “negligible,” will cause “close to no deaths,” and that much of the damage was “really psychological.” Extensive and expensive follow-up on citizens from the Fukushima area, the experts say, is not worth it. There is doubt a direct link will ever be definitively made, one expert said. The head of the U.S. National Council on Radiation Protection and Measurements said: “There’s no opportunity for conducting epidemiological studies that have any chance of success….The doses are just too low.” We have heard this in 1945, at TMI, at Chernobyl, and for normally running power plants. It is surprising that respected scientists refuse to make another test of such an important null hypothesis: that there are no discernible effects of low-level radiation.

Not surprisingly, a nuclear power trade group announced shortly after the March 2011 meltdown at Fukushima (the meltdown started with the earthquake, well before the tsunami hit) that “no health effects are expected” as a result of the events. UN agencies agree with them and the U.S. Council. The leading UN organization on the effects of radiation concluded that “Radiation exposure following the nuclear accident at Fukushima-Daiichi did not cause any immediate health effects. It is unlikely to be able to attribute any health effects in the future among the general public and the vast majority of workers.” The World Health Organization stated that while people in the United States receive about 6.5 millisieverts per year from sources including background radiation and medical procedures, only two Japanese communities had effective dose rates of 10 to 50 millisieverts, a bit more than normal.

However, other data contradict the WHO and other UN agencies. The Japanese science and technology ministry (MEXT) indicated that a child in one community would have an exposure 100 times the natural background radiation in Japan, rather than a bit more than normal. A hospital reported that more than half of the 527 children examined six months after the disaster had internal exposure to cesium-137, an isotope that poses great risk to human health. A French radiological institute found ambient dose rates 20 to 40 times that of background radiation, and in the most contaminated areas rates 10 times even those elevated levels. The institute predicts an excess cancer rate of 2 percent in the first year alone. Experts not associated with the nuclear industry or the UN agencies have estimated 1,000 to 3,000 cancer deaths. Nearly two years after the disaster the WHO was still declaring that any increase in human disease “is likely to remain below detectable levels.” (It is worth noting that the WHO still only releases reports on radiation impacts in consultation with the International Atomic Energy Agency.)

In March 2013, the Fukushima Prefecture Health Management Survey reported examining 133,000 children using new, highly sensitive ultrasound equipment. The survey found that 41 percent of the children examined had cysts of up to 2 centimeters in size and lumps measuring up to 5 millimeters on their thyroid glands, presumably from inhaled and ingested radioactive iodine. However, as we might expect from our chronicle, the survey found no cause for alarm because the cysts and lumps were too small to warrant further examination. The defense ministry also conducted an ultrasound examination of children from three other prefectures distant from Fukushima and found somewhat higher percentages of small cysts and lumps, adding to the argument that radiation was not the cause. But others point out that radiation effects would not be expected to be limited to what is designated as the contaminated area; that these cysts and lumps, signs of possible thyroid cancer, have appeared alarmingly soon after exposure; that they should be followed up, since it takes a few years for cancer to show up and thyroid cancer is rare in children; and that a control group far from Japan should be tested with the same ultrasound techniques.

The denial that Fukushima has any significant health impacts echoes the denials of the atomic bomb effects in 1945; the secrecy surrounding Windscale and Chelyabinsk; the studies suggesting that the fallout from Three Mile Island was, in fact, serious; and the multiple denials regarding Chernobyl (that it happened, that it was serious, and that it is still serious).

As of June 2013, according to a report in The Japan Times, 12 of 175,499 children tested had tested positive for possible thyroid cancer, and 15 more were deemed at high risk of developing the disease. For a disease that is rare, this is a high number. Meanwhile, the U.S. government is still trying to get us to ignore the bad seed. In June 2012, the U.S. Department of Energy granted $1.7 million to the Massachusetts Institute of Technology to address the “difficulties in gaining the broad social acceptance” of nuclear power.

Perrow, Charles. 2013. “Nuclear denial: From Hiroshima to Fukushima.” Bulletin of the Atomic Scientists 69(5):56-67.

Economic Dangers of ‘Peak Oil’ Addressed (Science Daily)

Oct. 16, 2013 — Researchers from the University of Maryland and a leading university in Spain demonstrate in a new study which sectors could put the entire U.S. economy at risk when global oil production peaks (“Peak Oil”). This multi-disciplinary team recommends immediate action by government, private and commercial sectors to reduce the vulnerability of these sectors.

The figure above shows sectors’ importance and vulnerability to Peak Oil. The bubbles represent sectors. The size of the bubbles visualizes the vulnerability of a particular sector to Peak Oil according to the expected price changes; the larger the size of the bubble, the more vulnerable the sector is considered to be. The X axis shows a sector’s importance according to its contribution to GDP and on the Y axis according to its structural role. Hence, the larger bubbles in the top right corner represent highly vulnerable and highly important sectors. In the case of Peak Oil induced supply disruptions, these sectors could cause severe imbalances for the entire U.S. economy. (Credit: Image courtesy of University of Maryland)

While critics of Peak Oil studies declare that the world has more than enough oil to maintain current national and global standards, these UMD-led researchers say Peak Oil is imminent, if not already here — and is a real threat to national and global economies. Their study is among the first to outline a way of assessing the vulnerabilities of specific economic sectors to this threat, and to identify focal points for action that could strengthen the U.S. economy and make it less vulnerable to disasters.

Their work, “Economic Vulnerability to Peak Oil,” appears in Global Environmental Change. The paper is co-authored by Christina Prell, of UMD’s Department of Sociology; Kuishuang Feng and Klaus Hubacek, of UMD’s Department of Geographical Sciences; and Christian Kerschner, of the Institut de Ciència i Tecnologia Ambientals, Universitat Autònoma de Barcelona.

Peak Oil is gaining increasing attention in both scientific and policy discourses, especially because of its apparent imminence and potential dangers. Until now, however, little has been known about how this phenomenon will impact economies. In their paper, the research team constructs a vulnerability map of the U.S. economy, combining two approaches for analyzing economic systems. Their approach reveals the relative importance of individual economic sectors, and how vulnerable these are to oil price shocks. This dual analysis helps identify which sectors could put the entire U.S. economy at risk from Peak Oil. For the United States, such sectors would include iron mills, chemical and plastic products manufacturing, fertilizer production and air transport.
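
The dual analysis described above can be sketched as a toy calculation. The sector names below are the ones the article lists; every number and the scoring formula itself are invented placeholders for illustration, not values or methods from the study.

```python
# Toy sketch of the study's dual analysis: combine an importance score
# (GDP contribution plus structural role) with a vulnerability score
# (sensitivity to oil price shocks). All numbers are invented
# placeholders; only the sector names come from the article.
sectors = {
    # name: (gdp_share, structural_role, price_shock_sensitivity)
    "iron mills":            (0.4, 0.9, 0.8),
    "chemicals & plastics":  (0.7, 0.8, 0.7),
    "fertilizer production": (0.2, 0.7, 0.9),
    "air transport":         (0.5, 0.6, 0.9),
}

def risk_score(gdp_share, structural_role, sensitivity):
    """Importance times vulnerability: sectors scoring high on both
    would sit in the top-right corner of the paper's bubble chart."""
    importance = (gdp_share + structural_role) / 2
    return importance * sensitivity

ranked = sorted(sectors, key=lambda s: risk_score(*sectors[s]), reverse=True)
for name in ranked:
    print(f"{name}: {risk_score(*sectors[name]):.2f}")
```

Sectors near the top of such a ranking would be the focal points for the policy action the authors recommend.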

“Our findings provide early warnings to these and related industries about potential trouble in their supply chain,” UMD Professor Hubacek said. “Our aim is to inform and engage government, public and private industry leaders, and to provide a tool for effective Peak Oil policy action planning.”

Although the team’s analysis is embedded in a Peak Oil narrative, it can be used more broadly to develop a climate roadmap for a low carbon economy.

“In this paper, we analyze the vulnerability of the U.S. economy, which is the biggest consumer of oil and oil-based products in the world, and thus provides a good example of an economic system with high resource dependence. However, the notable advantage of our approach is that it does not depend on the Peak-Oil-vulnerability narrative but is equally useful in a climate change context, for designing policies to reduce carbon dioxide emissions. In that case, one could easily include other fossil fuels such as coal in the model and results could help policy makers to identify which sectors can be controlled and/or managed for a maximum, low-carbon effect, without destabilizing the economy,” Professor Hubacek said.

One of the main ways a Peak Oil vulnerable industry can become less so, the authors say, is for that sector to reduce the structural and financial importance of oil. For example, Hubacek and colleagues note that one approach to reducing the importance of oil to agriculture could be to curb the strong dependence on artificial fertilizers by promoting organic farming techniques, and/or to reduce the overall distance travelled by people and goods by fostering local, decentralized food economies.

Peak Oil Background and Impact

The Peak Oil dialogue shifts attention away from discourses on “oil depletion” and “stocks” to focus on declining production rates (flows) of oil, and increasing costs of production. The maximum possible daily flow rate (with a given technology) is what eventually determines the peak; thus, the concept can also be useful in the context of other non-renewable resources.

Improvements in extraction and refining technologies can influence flows, but this tends to lead to steeper decline curves after the peak is eventually reached. Such steep decline curves have also been observed for shale gas wells.

“Shale developments are, so we believe, largely overrated, because of the huge amounts of financial resources that went into them (danger of bubble) and because of their apparent steep decline rates (shale wells tend to peak fast),” according to Dr. Kerschner.

“One important implication of this dialogue shift is that extraction peaks occur much earlier in time than the actual depletion of resources,” Professor Hubacek said. “In other words, Peak Oil is currently predicted within the next decade by many, whereas complete oil depletion will in fact never occur, given ever-increasing prices. This means that eventually petroleum products may be sold in liter bottles in pharmacies like in the old days.”

Journal Reference:

  1. Christian Kerschner, Christina Prell, Kuishuang Feng, Klaus Hubacek. Economic vulnerability to Peak Oil. Global Environmental Change, 2013; DOI: 10.1016/j.gloenvcha.2013.08.015

Transgenic mosquitoes in the skies of the sertão (Agência Pública)


10/10/2013 – 10h36

by the Agência Pública newsroom


The traps are devices installed in the homes of some residents in the experiment area. The ovitraps, as they are called, serve as breeding sites for the females. Photo: Coletivo Nigéria

With the promise of reducing dengue, a biofactory for transgenic insects has already released 18 million Aedes aegypti mosquitoes in the interior of Bahia. Read the story and watch the video.

Early on a Thursday evening in September, the bus station in Juazeiro, Bahia, was a picture of desolation. In the dimly lit hall there operated a stall specializing in beef broth, a snack bar with a long counter lined with savory pastries, biscuits and potato chips, and a single ticket window, with unsettling clouds of mosquitoes over the heads of those waiting to buy tickets to small towns or northeastern state capitals.

Settled on the banks of the São Francisco River, on the border between Pernambuco and Bahia, Juazeiro was once a city crossed by streams, tributaries of one of the country's largest rivers. Today it has more than 200,000 inhabitants, forms, together with Petrolina, the largest urban agglomeration in the northeastern semi-arid region (the two cities total half a million people), and is infested with muriçocas (or pernilongos, if you prefer). The watercourses that once drained small springs have become open-air sewers, vast breeding grounds for the insect, traditionally fought with insecticide and electric rackets, or with closed windows and air conditioning for the better-off.

But the residents of Juazeiro are not just swatting muriçocas this early spring. The city is the testing center for a new scientific technique that uses transgenic Aedes aegypti to fight dengue, a disease transmitted by the species. Developed by the British biotechnology company Oxitec, the method consists basically of inserting a lethal gene into male mosquitoes which, released into the environment in large numbers, mate with wild females and produce offspring programmed to die. If the experiment works, the premature death of the larvae progressively reduces the population of mosquitoes of this species.

The technique is the newest weapon against a disease that not only resists but outpaces the methods employed so far to control it. The World Health Organization estimates there may be 50 to 100 million dengue cases per year worldwide. In Brazil the disease is endemic, with annual epidemics in several cities, mainly the large capitals. In 2012, between January 1 and February 16 alone, more than 70,000 cases were recorded in the country. In 2013, over the same period, the number nearly tripled, rising to 204,000 cases. So far this year, 400 people have died of dengue in Brazil.

In Juazeiro, the British-patented method is being tested by the social organization Moscamed, which has been breeding and releasing the transgenic mosquitoes into the open air since 2011. At the biofactory set up in the municipality, with capacity to produce up to 4 million mosquitoes per week, the insect's entire production chain is carried out, except for the genetic modification itself, performed at Oxitec's laboratories in Oxford. Transgenic larvae were imported by Moscamed and have since been reproduced in the institution's laboratories.

The tests have been funded from the start by the Bahia State Health Secretariat, with institutional support from Juazeiro's own secretariat, and last July were extended to the municipality of Jacobina, at the northern tip of the Chapada Diamantina. In that mountain town of roughly 80,000 inhabitants, Moscamed is testing the technique's capacity to "suppress" (the word scientists use for exterminating the entire mosquito population) Aedes aegypti across a whole city, since in Juazeiro the strategy proved effective but has so far been limited to two neighborhoods.

"The 2011 and 2012 results showed that [the technique] really worked well. And at the invitation of, and with funding from, the Government of the State of Bahia, we decided to move forward and go to Jacobina. Now no longer as a pilot, but running a test to really eliminate the [mosquito] population," says Aldo Malavasi, retired professor at the Department of Genetics of the University of São Paulo's (USP) Institute of Biosciences and current president of Moscamed. USP is also part of the project.

Malavasi has worked in the region since 2006, when Moscamed was created to fight an agricultural pest, the fruit fly, with a similar approach: the Sterile Insect Technique (SIT). The logic is the same: produce sterile insects to mate with wild females and thus gradually reduce the population. The difference lies in how the insects are sterilized. Instead of genetic modification, radiation. SIT has been widely used since the 1970s, mainly against species considered threats to agriculture. The problem is that until now the technology was not suited to mosquitoes such as Aedes aegypti, which did not withstand radiation satisfactorily.

The communication plan

The first field releases of the transgenic Aedes took place in the Cayman Islands between late 2009 and 2010. The British territory in the Caribbean, made up of three islands south of Cuba, proved to be not only a tax haven (there are more companies registered on the islands than there are inhabitants, some 50,000) but also a convenient site for releasing the transgenic mosquitoes, owing to the absence of biosafety laws. The Cayman Islands are not signatories to the Cartagena Protocol, the main international document on the subject, nor are they covered by the Aarhus Convention (approved by the European Union and to which the United Kingdom is a party), which deals with access to information, participation and justice in environmental decision-making.

Instead of prior publication of, and public consultation on, the risks involved in the experiment, as the international agreements cited would require, the roughly 3 million mosquitoes released into the tropical climate of the Cayman Islands went out into the world without any process of debate or public consultation. Authorization was granted exclusively by the islands' Department of Agriculture. Oxitec's local partner in the tests, the Mosquito Research & Control Unit, posted a promotional video on the subject only in October 2010, and even then without mentioning the mosquitoes' transgenic nature. The video was released exactly one month before Oxitec itself presented the results of the experiments at the annual meeting of the American Society of Tropical Medicine and Hygiene, in the United States.

The scientific community was taken aback by the news that the world's first releases of genetically modified insects had already been carried out without the specialists in the field even knowing. The surprise extended to the result: according to Oxitec's data, the experiments had achieved an 80% reduction in the Aedes aegypti population in the Cayman Islands. For the company, the figure confirmed that the lab-created technique could indeed be effective. Since then, new field tests began to be arranged in other countries, notably underdeveloped or developing ones with tropical climates and long-standing dengue problems.

After postponing similar tests in 2006 following protests, Malaysia became the second country to release the transgenic mosquitoes, between December 2010 and January 2011. Six thousand mosquitoes were released in an uninhabited area of the country. That number, much smaller than in the Cayman Islands, is almost insignificant compared with the quantity of mosquitoes released in Juazeiro, Bahia, from February 2011 onward. The city, joined more recently by Jacobina, has since become the largest testing ground of its kind in the world, with more than 18 million mosquitoes already released, according to Moscamed's figures.

"Oxitec erred profoundly, both in Malaysia and in the Cayman Islands. Unlike what they did, we carried out extensive work in what we call public communication, with total transparency, with discussion with the community, with visits to every house. There was extraordinary work here," Aldo Malavasi says by way of comparison.

In a telephone interview, he was keen to stress Moscamed's independence from Oxitec and the different nature of the two institutions. Created in 2006, Moscamed is a social organization, and therefore nonprofit, which joined the transgenic Aedes aegypti tests in order to verify whether or not the technique is effective against dengue. According to Malavasi, they accepted no funding from Oxitec, precisely to guarantee impartiality in evaluating the technique. "We don't want their money, because our goal is to help the Brazilian government," he sums up.

In the interest of transparency, the program was named "Projeto Aedes Transgênico" (PAT, Transgenic Aedes Project), carrying the thorny word in its very name. Another semantic decision was to avoid the term "sterile," common in the British company's discourse but technically incorrect, since the mosquitoes do produce offspring; the offspring are simply programmed to die at the larval stage. A jingle put the complex system into popular language, to the rhythm of forró pé-de-serra. And the carnival bloco "Papa Mosquito" took to the streets of Juazeiro during the 2011 Carnival.

At the institutional level, besides the funding from the state Health Secretariat, the program also won the support of the Juazeiro Health Secretariat. "At first there was resistance, because people also didn't want traps placed in their homes, but later, over time, they understood the project and we had good popular acceptance," says public health nurse Mário Machado, the secretariat's director of Health Promotion and Surveillance.

The traps Machado mentions are simple devices installed in the homes of some residents in the experiment area. The ovitraps, as they are called, serve as breeding sites for the females. This makes it possible to collect the eggs and check whether they were fertilized by transgenic or wild males. That check is possible because the genetically modified mosquitoes carry, in addition to the lethal gene, a fragment of jellyfish DNA that gives them a fluorescent marker visible under the microscope.

In this way it was possible to verify that the reduction of the wild Aedes aegypti population reached, according to Moscamed, 96% in Mandacaru, an agricultural settlement a few kilometers from Juazeiro's commercial center which, owing to its geographic isolation and popular acceptance, became the ideal site for the releases. Despite the figure, Moscamed continues releasing mosquitoes in the neighborhood. Because of the mosquito's short life (the female lives roughly 35 days), the releases must continue in order to keep the wild population low. Currently, once a week a car leaves the organization's headquarters with 50,000 mosquitoes, distributed by the thousands in plastic pots to be opened in the streets of Mandacaru.

"Today the greatest acceptance is in Mandacaru. The receptiveness was such that Moscamed doesn't want to leave anymore," Mário Machado emphasizes.

The same did not happen in the Itaberaba neighborhood, the first to receive the mosquitoes in early 2011. Not even the historically high rate of Aedes aegypti infestation persuaded the peripheral Juazeiro neighborhood, next door to Moscamed's headquarters, to welcome the experiment. Mário Machado estimates at "around 20%" the share of the population that opposed the tests and put an end to the releases.

"However much we try to inform people, going house to house, bar to bar, some people just don't believe it: 'No, you're lying to us, this mosquito is biting us,'" he says with resignation.

After a year without releases, the mosquito seems to have left few memories there. Walking through the neighborhood, we could hardly find anyone who knew what we were talking about. Nonetheless, Itaberaba's name traveled the world when Oxitec announced that the first field experiment in Brazil had achieved an 80% reduction in the wild mosquito population.

Moscamed field supervisor and biologist Luiza Garziera was one of those who went house to house explaining the process, at times sidestepping scientific language to make herself understood. "I would say that we would be releasing these mosquitoes, that we released only the male, which doesn't bite. Only the female bites. And that when these males 'date' (because sometimes we can't say 'copulate'; people won't understand), when these males date the female, their little children end up dying."

This is one of the most important details of the novel technique. By releasing only males, at a rate of 10 transgenic mosquitoes to every wild one, Moscamed immerses people in a cloud of mosquitoes but guarantees that these do not bite them. That is because only the female feeds on human blood, the liquid that supplies the proteins needed for her ovulation.
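
The arithmetic behind suppression can be sketched with a toy, discrete-generation model. This is our illustration under stated assumptions (random mating, a constant 10:1 release ratio, a notional fivefold unchecked growth rate per generation), not a model published by Oxitec or Moscamed.

```python
# Minimal sketch of why flooding a wild population with lethal-gene males
# suppresses it. Assumptions (ours, for illustration): discrete
# generations, random mating, a fixed 10:1 transgenic-to-wild male ratio
# maintained each generation, and all offspring sired by transgenic
# males dying before adulthood.

def next_generation(wild_females, release_ratio=10, growth_factor=5):
    """Expected wild females in the next generation.

    A female mates with a wild male with probability 1 / (1 + release_ratio);
    only those matings yield surviving offspring. growth_factor is the
    per-female multiplication the population would achieve unchecked.
    """
    p_wild_mate = 1 / (1 + release_ratio)
    return wild_females * p_wild_mate * growth_factor

pop = 10_000.0
for gen in range(5):
    pop = next_generation(pop)
    print(f"generation {gen + 1}: {pop:.0f} wild females")
```

With these numbers each generation shrinks by a factor of 5/11, which is why the releases must be sustained: stop them and the same growth factor rebuilds the population within a few generations.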

The technology fits together convincingly, even didactically, with the possible exception of the "genetic modification" itself, which demands higher flights of imagination. Even so, ignorance about the subject still prevails among a considerable share of the residents interviewed for this report. At best, people know it has to do with exterminating the dengue mosquito, which is naturally a good thing. Beyond that, they have merely heard of it, or venture a guess involving the muriçoca, which truly is widely hated.

Assessing the risks

Despite Moscamed's communication campaign, the British NGO GeneWatch points to a series of problems in the Brazilian process. Chief among them: the risk assessment report on the experiment was not made available to the public before the releases began. On the contrary, at the request of those responsible for the Transgenic Aedes Program, the file submitted to the National Technical Biosafety Commission (CTNBio, the body charged with authorizing such experiments) was deemed confidential.

"We think Oxitec must have the fully informed consent of the local population, which means people need to agree to the experiment. But for that they also need to be informed about the risks, just as you would be if you were being used to test a new cancer drug or any other kind of treatment," said Helen Wallace, the NGO's executive director, in a Skype interview.

A specialist in the risks and ethics involved in this kind of experiment, Wallace published this year the report Genetically Modified Mosquitoes: Ongoing Concerns, which lists in 13 chapters what she considers potential risks not weighed before the release of the transgenic mosquitoes was authorized. The document also points to flaws in Oxitec's conduct of the experiments.

For example, two years after the Cayman Islands releases, only the results of one small test had appeared in a scientific publication. In early 2011 the company submitted the results of the largest experiment on the islands to the journal Science, but the article was not published. Only in September of last year did the text appear in another journal, Nature Biotechnology, published as "correspondence," which means it was not reviewed by other scientists, only checked by the publication's own editor.

For Helen Wallace, the absence of critical peer review puts Oxitec's experiment under suspicion. Even so, analysis of the article, according to the document, suggests the company had to increase the release ratio of transgenic mosquitoes and concentrate them in a small area to achieve the expected results. The same reportedly happened in Brazil, in Itaberaba. The results of the Brazilian test have likewise not yet been published by Moscamed. The project manager, Danilo Carvalho, said one article has already been submitted to a journal and another is in the final stages of writing.

Another risk the document points to lies in the common use of the antibiotic tetracycline. The drug reverses the lethal gene and guarantees, in the laboratory, the survival of the genetically modified mosquito, which otherwise would not reach adulthood. This is the vital difference between the fate of the mosquitoes bred in the lab and that of their offspring, generated in the environment from wild females: without the antibiotic, the offspring are condemned to premature death.

Tetracycline is commonly used in the livestock and aquaculture industries, which discharge large quantities of the substance into the environment through their effluents. The antibiotic is also widely used in human and veterinary medicine. In other words, genetically modified eggs and larvae could come into contact with the antibiotic even in uncontrolled environments and thus survive. Over time, the transgenic mosquitoes' resistance to the lethal gene could neutralize its effect and, in the end, we would have a new genetically modified species adapted to the environment.

The hypothesis is treated with skepticism by Oxitec, which downplays the possibility of this happening in the real world. However, a confidential document made public shows that the hypothesis proved, by chance, real in the tests of a research partner of the company. Puzzled by a 15% survival rate among larvae raised without tetracycline (far above the usual 3% found in the company's own experiments), Oxitec's scientists discovered that the cat food with which their partners were feeding the mosquitoes contained traces of the antibiotic, which is routinely used to treat chickens destined for animal feed.

The GeneWatch report draws attention to the antibiotic's common presence in human and animal waste, as well as in domestic sewage systems such as septic tanks. This would constitute a potential risk, since several studies have found that Aedes aegypti can breed in contaminated water, although that is still not the most common scenario, nor does it yet happen in Juazeiro, according to the municipal Health Secretariat.

There are also concerns about the release rate of transgenic females. The process of separating the pupae (the last stage before adult life) is done manually, with the help of a device that sorts the sexes by size (the female is slightly larger). Up to 3% of females can slip through this process, gaining their freedom and increasing the risks involved. Finally, the experiments have not yet verified whether the reduction in the mosquito population directly affects dengue transmission.

All the criticisms are rebutted by Oxitec and Moscamed, which say they maintain rigorous quality control, such as constant monitoring of the female release rate and of larval survival without tetracycline. That way, any sign of mutation in the mosquito would be detected in time to suspend the program. After roughly one month, all the released insects would be dead. According to the institutions responsible, the mosquitoes also do not pass on the modified genes even if some stray female bites a human being.

Transgenic mosquito for sale

Last July, after the success of the field tests in Juazeiro, Oxitec filed an application for a commercial license with the National Technical Biosafety Commission (CTNBio). Since the end of 2012 the British company has had a corporate tax registration (CNPJ) in Brazil and keeps an employee in São Paulo. More recently, with the promising results of the Juazeiro experiments, it rented a warehouse in Campinas and is building what will be its Brazilian headquarters. Brazil now represents its most likely and imminent market, which is why the company's global head of business development, Glen Slade, lives on a shuttle between Oxford and São Paulo.

"Oxitec has been working since 2009 in partnership with USP and Moscamed, who are good partners and gave us the opportunity to start projects in Brazil. But now we have just sent our commercial dossier to CTNBio and hope to obtain registration in the future, so we need to grow our team in the country. Clearly we are investing in Brazil. It is a very important country," Slade said in a Skype interview from Oxitec's headquarters in Oxford, England.

The biotechnology firm is a spin-out of the British university, which is to say Oxitec emerged from the laboratories of one of the most prestigious universities in the world. Founded in 2002, it has since been raising private investment and funds from nonprofit foundations, such as the Bill & Melinda Gates Foundation, to bankroll continued research. According to Slade, more than R$50 million has been spent over the past decade perfecting and testing the technology.

The executive expects the bureaucratic process of granting the commercial license to conclude as early as next year, when Oxitec's Brazilian headquarters will be ready, including a new biofactory. Already in contact with several Brazilian municipalities, he prefers not to name names. Nor the price of the service, which will probably be offered in annual mosquito-population-control packages, with the budget depending on the city's number of inhabitants.

"At this point it's hard to give a price. As with all new products, the production cost is higher when we start than we would like. I think the price will be very reasonable in relation to the benefits and to the other efforts to control the mosquito, but it's very hard to say today. Besides, the price will change with the scale of the project. Small projects are not very efficient, but if we have the opportunity to control the mosquitoes in all of Rio de Janeiro, we can work at large scale and the price will come down," he suggests.

The company also intends to set up new biofactories in cities that take on large projects, which will reduce costs in the long run, since the releases must be maintained indefinitely to prevent the wild mosquitoes' return. The reproduction speed of Aedes aegypti is a concern: if the project is halted, the species can rebuild its population within a few weeks.

"The company's plan is to secure repeated payments to release these mosquitoes every year. If their technology works and really reduces the incidence of dengue, you won't be able to suspend these releases and you'll be locked into the system. One of the biggest long-term concerns is that if things start to go wrong, or even become less effective, you could really end up with a worse situation over many years," Helen Wallace warns.

The risk would range from reduced human immunity to the disease to the dismantling of other public dengue-control policies, such as the teams of health agents. Although both Moscamed and Juazeiro's own Health Secretariat emphasize the complementary nature of the technique, which would not dispense with other control methods, conflicts over the allocation of resources for the area are plausible. Today, according to Mário Machado of the Health Secretariat, Juazeiro spends on average R$300,000 per month on the control of endemic diseases, of which dengue is the main one.

The secretariat is negotiating with Moscamed to expand the experiment to the whole municipality, or even to the entire metropolitan region formed by Juazeiro and Petrolina (a test that would cover half a million people), in order to assess effectiveness in large populations. In any case, and despite the progress of the experiments, neither the Brazilian social organization nor the British company has presented price estimates for a possible commercial release.

"Yesterday we were doing the first studies, to analyze what their price is and what ours is. Because they know how much their program costs, and it isn't cheap, but they don't disclose it," said Mário Machado.

In a report in the British newspaper The Observer last July, Oxitec estimated the cost of the technique at "less than" £6 per person per year. In a simple calculation, merely multiplying that figure by the British currency's current exchange rate against the real and disregarding the many other variables involved, the project in a city of 150,000 inhabitants would cost approximately R$3.2 million per year.
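
The back-of-the-envelope figure above can be checked directly. This is our sketch; the BRL/GBP rate of 3.55 is an assumption consistent with the late-2013 context of the article, not a number from the report.

```python
# Reproduce the article's rough annual-cost estimate for a city of
# 150,000 inhabitants at GBP 6 per person per year. The exchange rate
# is an assumed late-2013 value, not a figure from the article.
cost_per_person_gbp = 6
inhabitants = 150_000
gbp_to_brl = 3.55  # assumed exchange rate

annual_cost_brl = cost_per_person_gbp * inhabitants * gbp_to_brl
print(f"R$ {annual_cost_brl:,.0f} per year")  # roughly R$ 3.2 million
```

The result matches the approximately R$3.2 million the article arrives at, ignoring, as the article does, every other variable in such a contract.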

Se imaginarmos a quantidade de municípios de pequeno e médio porte brasileiros em que a dengue é endêmica, chega-se a pujança do mercado que se abre – mesmo desconsiderando por hora os grandes centros urbanos do país, que extrapolariam a capacidade atual da técnica. Contudo, este é apenas uma fatia do negócio. A Oxitec também possui uma série de outros insetos transgênicos, estes destinados ao controle de pragas agrícolas e que devem encontrar campo aberto no Brasil, um dos gigantes do agronegócio no mundo.

Aguardando autorização da CTNBio, a Moscamed já se preparara para testar a mosca-das-frutas transgênica, que segue a mesma lógica do Aedes aegypti. Além desta, a Oxitec tem outras 4 espécies geneticamente modificadas que poderão um dia serem testadas no Brasil, a começar por Juazeiro e o Vale do São Francisco. A região é uma das maiores produtoras de frutas frescas para exportação do país. 90% de toda uva e manga exportadas no Brasil saem daqui. Uma produção que requer o combate incessante às pragas. Nas principais avenidas de Juazeiro e Petrolina, as lojas de produtos agrícolas e agrotóxicos se sucedem, variando em seus totens as logos das multinacionais do ramo.

“Não temos planos concretos [além da mosca-das-frutas], mas, claro, gostaríamos muito de ter a oportunidade de fazer ensaios com esses produtos também. O Brasil tem uma indústria agrícola muito grande. Mas nesse momento nossa prioridade número 1 é o mosquito da dengue. Então uma vez que tivermos este projeto com recursos bastante, vamos tentar acrescentar projetos na agricultura.”, comentou Slade.

Ele e vários de seus colegas do primeiro escalão da empresa já trabalharam numa das gigantes do agronegócio, a Syngenta. O fato, segundo Helen Wallace, é um dos revelam a condição do Aedes aegypti transgênico de pioneiro de todo um novo mercado de mosquitos geneticamente modificados: “Nos achamos que a Syngenta está principalmente interessada nas pragas agrícolas. Um dos planos que conhecemos é a proposta de usar pragas agrícolas geneticamente modificadas junto com semestres transgênicas para assim aumentar a resistências destas culturas às pragas”.

“Não tem nenhum relacionamento entre Oxitec e Syngenta dessa forma. Talvez tenhamos possibilidade no futuro de trabalharmos juntos. Eu pessoalmente tenho o interesse de buscar projetos que possamos fazer com Syngenta, Basf ou outras empresas grandes da agricultura”, esclarece Glen Slade.

Em 2011, a indústria de agrotóxicos faturou R$14,1 bilhões no Brasil. Maior mercado do tipo no mundo, o país pode nos próximos anos inaugurar um novo estágio tecnológico no combate às pestes. Assim como na saúde coletiva, com o Aedes aegypti transgênico, que parece ter um futuro comercial promissor. Todavia, resta saber como a técnica conviverá com as vacinas contra o vírus da dengue, que estão em fase final de testes – uma desenvolvida por um laboratório francês, outra pelo Instituto Butantan, de São Paulo. As vacinas devem chegar ao público em 2015. O mosquito transgênico, talvez já próximo ano.

Among the lines of transgenic mosquitoes, a homegrown version may also emerge. As Professor Margareth de Lara Capurro-Guimarães, of USP's Department of Parasitology and coordinator of the Transgenic Aedes Program, confirmed, a transgenic muriçoca (the common Culex mosquito) is already under study at the São Paulo university. It is another possible technological fix for a public health problem in Juazeiro, Bahia, a city where, according to a 2011 survey by the National Sanitation Information System (SNIS), the sewage network serves only 67% of the urban population.

* Originally published on the Agência Pública website.

(Agência Pública)

Are We Prepared for the Pre-Salt and Shale Gas? (O Estado de São Paulo)

JC e-mail 4817, September 20, 2013.

In an article published in Estadão, Washington Novaes* reinforces the SBPC's warning about the risks of exploiting shale gas

It has been announced that in November Brazilian areas slated for shale gas exploration will go to auction, just as pre-salt areas are being auctioned for offshore oil exploration. We should be prudent in both directions. In the pre-salt, the possible consequences of drilling at great depths are not sufficiently understood. In the case of shale, several countries have already banned or restricted exploration because of what happens when the water and chemical inputs injected into the ground to “fracture” the gas-bearing rock layers return to the surface. But the financial incentives in both cases are very strong, and they are prevailing in many places, above all in the United States.

In Brazil, where rock-fracturing technology is only now about to be deployed, there is strong questioning from the Brazilian Society for the Advancement of Science (SBPC) and the Brazilian Academy of Sciences, which, in a letter to the president of the Republic (August 5), expressed concern about this auction of gas fields in sedimentary basins. In these basins, the letter says, U.S. agencies have reported that Brazil would hold reserves of 7.35 trillion cubic meters, in the Paraná, Parnaíba, Solimões, Amazonas, Recôncavo Baiano, and São Francisco basins. The National Petroleum Agency (ANP) estimates the reserves could be double that. But, according to the SBPC and the ANP, there is still no “knowledge of the petrographic, structural, and geomechanical characteristics” assumed in these calculations, knowledge that could bear “decisively on the economic viability of their exploration.”

It would also be necessary to consider the large volumes of water used in fracturing rock to release the gas, water “that returns to the surface polluted by hydrocarbons and other compounds,” by metals present in the rocks, and by “the chemical additives themselves, which demand very costly techniques for purification and for disposal of the final residues.” The water used would have to be weighed “against other uses considered preferential,” such as human supply. And it should be remembered that part of the reserves lies “just below the Guarani Aquifer”; exploration there should “be assessed with great caution, since there is a potential risk of contaminating the waters of this aquifer.”

Given all this, there should be no immediate auctions, which would be “excluding the scientific community and the country's own regulatory bodies from the possibility of accessing and discussing the information,” information that “could be obtained through studies carried out directly by universities and research institutes.” Beyond greater scientific knowledge of the deposits, such studies could reveal “environmental consequences of this activity that could far outweigh its eventual social gains.” It is a strong argument, and at the SBPC meeting in Recife (July 22-27) it led to a request that the November auction be suspended.

In many other places the controversy is raging, as Professor Luiz Fernando Scheibe, of USP, a doctor in Mining and Petrology, comments (September 12). In Great Britain, for instance, it is argued that fracturing technology, among many other problems, may even contribute to earthquakes. The release of methane in the process can also be highly problematic, since its harmful effects are equivalent to more than 20 times those of carbon dioxide, although it remains in the atmosphere for less time. That alone could cancel out the advantages of shale gas as a substitute for coal. The United Nations Environment Programme (UNEP) itself has argued that shale gas may in fact increase emissions of the pollutants that contribute to climate change.
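The arithmetic behind the methane worry can be made concrete. The sketch below, in Python, is a back-of-the-envelope check under loudly stated assumptions: a 100-year global warming potential of 25 for methane and illustrative, made-up emission factors for coal and gas. It estimates the leakage rate at which fugitive methane would erase shale gas's CO2 advantage over coal; every numeric factor here is a hypothetical placeholder, not data from the article.

```python
# Rough CO2-equivalent comparison: coal vs. shale gas with methane leakage.
# All factors below are illustrative assumptions, not measured data.

GWP_CH4 = 25.0            # 100-year global warming potential of methane (IPCC AR4)

# Assumed combustion emissions per unit of delivered energy (kg CO2 / kWh):
COAL_CO2_PER_KWH = 0.90   # hypothetical value for coal
GAS_CO2_PER_KWH = 0.45    # hypothetical value for gas (roughly half of coal)

def gas_co2e_per_kwh(leak_fraction, ch4_per_kwh=0.15):
    """CO2-equivalent of gas per kWh, given the fraction of produced
    methane that escapes unburned (ch4_per_kwh: kg CH4 used per kWh)."""
    leaked_ch4 = ch4_per_kwh * leak_fraction
    return GAS_CO2_PER_KWH + leaked_ch4 * GWP_CH4

# At what leakage rate does gas lose its advantage over coal?
leak = 0.0
while gas_co2e_per_kwh(leak) < COAL_CO2_PER_KWH:
    leak += 0.001
print(f"Gas matches coal at roughly {leak:.1%} leakage")  # ~12% with these numbers
```

With these assumed factors the advantage disappears around a 12% leak rate; the point of the exercise is only that the break-even leakage is finite and sensitive to the assumed factors, which is why the UNEP caveat in the paragraph above matters.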

In France the protests have been numerous (Le Monde, July 16) and have pushed the country toward strong restrictions, as in Bulgaria. Some U.S. states have banned the technology in their territories, but the U.S. government has endorsed it, mainly because shale gas is not only cheaper than coal but has substantially reduced the country's fossil fuel imports, even allowing it to export surplus coal. The International Energy Agency predicts that by 2035 shale gas will be extracted at more than 1 million sites worldwide. In the U.S., shale gas production this year will be around 250 billion cubic meters, facilitated by the government's decision to exempt the Environmental Protection Agency from examining possible risks in the process, and by the existence of an extensive pipeline network (Brazil has pipelines only in its eastern region; the gas consumed here comes from Bolivia).

China, too, would be a potential user of the gas, since 70% of its energy comes from 3 billion tons of coal a year (almost 50% of world consumption). Although it holds 30 trillion cubic meters of shale gas, more than the U.S., the problem is that the deposits lie in mountainous regions far from the centers of consumption, which would mean a cost to the user 50% higher than coal's. For that very reason, China is expected to increase its coal consumption in the coming decades (Michael Brooks in New Scientist, August 10).

And so we go, in yet another issue that epitomizes the dilemma discussed in this space before: financial logic versus the “environmental” logic of survival. Governments, companies, and individuals face the choice of renouncing certain technologies and the use of certain goods, because of problems of pollution, climate, and unsustainable consumption of resources, or of using them for the sake of immediate financial advantages, which can be very strong.

Increasingly, this will be at the center of the fiercest debates everywhere, including in Brazil, with broad political and social repercussions. Let us prepare ourselves.

*Washington Novaes is a journalist.

Global Networks Must Be Redesigned, Experts Urge (Science Daily)

May 1, 2013 — Our global networks have generated many benefits and new opportunities. However, they have also established highways for failure propagation, which can ultimately result in human-made disasters. For example, today’s quick spreading of emerging epidemics is largely a result of global air traffic, with serious impacts on global health, social welfare, and economic systems.

(Image credit: © Angie Lingnau / Fotolia)

Helbing’s publication illustrates how cascade effects and complex dynamics amplify the vulnerability of networked systems. For example, just a few long-distance connections can greatly reduce our ability to mitigate the threats posed by global pandemics. Initially beneficial trends, such as globalization, increasing network densities, higher complexity, and an acceleration of institutional decision processes, may ultimately push human-made or human-influenced systems towards systemic instability, Helbing finds. Systemic instability refers to a system that will get out of control sooner or later, even if everybody involved is well skilled, highly motivated, and behaving properly. Crowd disasters are shocking examples illustrating that many deaths may occur even when everybody tries hard not to hurt anyone.

Our Intuition of Systemic Risks Is Misleading

Networking system components that are well-behaved in separation may create counter-intuitive emergent system behaviors, which are not well-behaved at all. For example, cooperative behavior might unexpectedly break down as the connectivity of interaction partners grows. “Applying this to the global network of banks, this might actually have caused the financial meltdown in 2008,” believes Helbing.

Globally networked risks are difficult to identify, map and understand, since there are often no evident, unique cause-effect relationships. Failure rates may change depending on the random path taken by the system, with the consequence of increasing risks as cascade failures progress, thereby decreasing the capacity of the system to recover. “In certain cases, cascade effects might reach any size, and the damage might be practically unbounded,” says Helbing. “This is quite disturbing and hard to imagine.” All of these features make strongly coupled, complex systems difficult to predict and control, such that our attempts to manage them go astray.
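The role of long-distance connections described above can be illustrated with a toy model. The sketch below (Python, standard library only) is not Helbing's model; it is a minimal small-world-style illustration, assuming a ring network plus a handful of random shortcut edges, showing how just a few shortcuts sharply cut the time a deterministic failure/contagion process needs to reach every node.

```python
import random

def make_ring(n, shortcuts=0, rng=None):
    """Ring network (each node tied to its two neighbours), plus a few
    random long-distance 'shortcut' edges, as in small-world models."""
    rng = rng or random.Random(42)
    nbrs = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for _ in range(shortcuts):
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            nbrs[a].add(b)
            nbrs[b].add(a)
    return nbrs

def rounds_to_saturate(nbrs, seed=0):
    """Deterministic contact process: every round, each affected node
    passes the failure (or infection) to all of its neighbours."""
    affected = {seed}
    rounds = 0
    while len(affected) < len(nbrs):
        affected |= {m for node in affected for m in nbrs[node]}
        rounds += 1
    return rounds

n = 200
print(rounds_to_saturate(make_ring(n)))               # n/2 rounds on the pure ring
print(rounds_to_saturate(make_ring(n, shortcuts=20))) # far fewer with shortcuts
```

On the pure ring the failure front advances one node per side per round, so 200 nodes take 100 rounds; with only 20 random shortcuts the same process saturates in a small fraction of that. This is the structural point behind the air-traffic and pandemic examples: a few long links collapse the effective distance of the whole network.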

“Take the financial system,” says Helbing. “The financial crisis hit regulators by surprise.” But back in 2003, the legendary investor Warren Buffet warned of mega-catastrophic risks created by large-scale investments into financial derivatives. It took 5 years until the “investment time bomb” exploded, causing losses of trillions of dollars to our economy. “The financial architecture is not properly designed,” concludes Helbing. “The system lacks breaking points, as we have them in our electrical system.” This allows local problems to spread globally, thereby reaching catastrophic dimensions.

A Global Ticking Time Bomb?

Have we unintentionally created a global time bomb? If so, what kinds of global catastrophic scenarios might humans face in complex societies? A collapse of the world economy or of our information and communication systems? Global pandemics? Unsustainable growth or environmental change? A global food or energy crisis? A cultural clash or global-scale conflict? Or will we face a combination of these contagious phenomena — a scenario that the World Economic Forum calls the “perfect storm”?

“While analyzing such global risks,” says Helbing, “one must bear in mind that the propagation speed of destructive cascade effects might be slow, but nevertheless hard to stop. It is time to recognize that crowd disasters, conflicts, revolutions, wars, and financial crises are the undesired result of operating socio-economic systems in the wrong parameter range, where systems are unstable.” In the past, these social problems seemed to be puzzling, unrelated, and almost “God-given” phenomena one had to live with. Nowadays, thanks to new complexity science models and large-scale data sets (“Big Data”), one can analyze and understand the underlying mechanisms, which let complex systems get out of control.

Disasters should not be considered “bad luck.” They are a result of inappropriate interactions and institutional settings, caused by humans. Even worse, they are often the consequence of a flawed understanding of counter-intuitive system behaviors. “For example, it is surprising that we didn’t have sufficient precautions against a financial crisis and well-elaborated contingency plans,” states Helbing. “Perhaps, this is because there should not be any bubbles and crashes according to the predominant theoretical paradigm of efficient markets.” Conventional thinking can cause fateful decisions and the repetition of previous mistakes. “In other words: While we want to do the right thing, we often do wrong things,” concludes Helbing. This obviously calls for a paradigm shift in our thinking. “For example, we may try to promote innovation, but suffer economic decline, because innovation requires diversity more than homogenization.”

Global Networks Must Be Re-Designed

Helbing’s publication explores why today’s risk analysis falls short. “Predictability and controllability are design issues,” stresses Helbing. “And uncertainty, which means the impossibility to determine the likelihood and expected size of damage, is often man-made.” Many systems could be better managed with real-time data. These would allow one to avoid delayed response and to enhance the transparency, understanding, and adaptive control of systems. However, even all the data in the world cannot compensate for ill-designed systems such as the current financial system. Such systems will sooner or later get out of control, causing catastrophic human-made failure. Therefore, a re-design of such systems is urgently needed.

Helbing’s Nature paper on “Globally Networked Risks” also calls attention to strategies that make systems more resilient, i.e. able to recover from shocks. For example, setting up backup systems (e.g. a parallel financial system), limiting the system size and connectivity, building in breaking points to stop cascade effects, or reducing complexity may be used to improve resilience. In the case of financial systems, there is still much work to be done to fully incorporate these principles.

Contemporary information and communication technologies (ICT) are also far from being failure-proof. They are based on principles that are 30 or more years old and not designed for today’s use. The explosion of cyber risks is a logical consequence. This includes threats to individuals (such as privacy intrusion, identity theft, or manipulation through personalized information), to companies (such as cybercrime), and to societies (such as cyberwar or totalitarian control). To counter this, Helbing recommends an entirely new ICT architecture inspired by principles of decentralized self-organization as observed in immune systems, ecology, and social systems.

Coming Era of Social Innovation

A better understanding of the success principles of societies is urgently needed. “For example, when systems become too complex, they cannot be effectively managed top-down,” explains Helbing. “Guided self-organization is a promising alternative to manage complex dynamical systems bottom-up, in a decentralized way.” The underlying idea is to exploit, rather than fight, the inherent tendency of complex systems to self-organize and thereby create a robust, ordered state. For this, it is important to have the right kinds of interactions, adaptive feedback mechanisms, and institutional settings, i.e. to establish proper “rules of the game.” The paper offers the example of an intriguing “self-control” principle, where traffic lights are controlled bottom-up by the vehicle flows rather than top-down by a traffic center.
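The traffic-light example lends itself to a toy simulation. The sketch below is a hedged, minimal illustration, not the mechanism from Helbing's paper: it compares a fixed top-down signal cycle with a bottom-up rule that simply gives green to the approach with the longer queue, using made-up arrival rates.

```python
import random

def simulate(control, steps=500, arrival=(0.3, 0.15), rng_seed=1):
    """Toy intersection with two approach queues. `control` picks which
    queue gets green each step; one car leaves the green queue per step.
    Returns total accumulated waiting (sum of queue lengths over time).
    The fixed seed means every policy faces the same arrival stream."""
    rng = random.Random(rng_seed)
    queues = [0, 0]
    waiting = 0
    for t in range(steps):
        for i, p in enumerate(arrival):
            if rng.random() < p:
                queues[i] += 1
        green = control(queues, t)
        if queues[green] > 0:
            queues[green] -= 1
        waiting += sum(queues)
    return waiting

# Top-down: fixed cycle, green alternates every 10 steps regardless of demand.
fixed = simulate(lambda q, t: (t // 10) % 2)

# Bottom-up 'self-control': green follows the longer queue (local pressure).
self_organized = simulate(lambda q, t: 0 if q[0] >= q[1] else 1)

print(fixed, self_organized)  # the bottom-up rule accumulates less waiting
```

The fixed cycle wastes green time on empty approaches, while the pressure-driven rule serves demand where it actually builds up, which is the intuition behind letting vehicle flows, not a central schedule, drive the switching.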

Creating and Protecting Social Capital

“One man’s disaster is another man’s opportunity. Therefore, many problems can only be successfully addressed with transparency, accountability, awareness, and collective responsibility,” underlines Helbing. Moreover, social capital such as cooperativeness or trust is important for economic value generation, social well-being and societal resilience, but it may be damaged or exploited. “Humans must learn how to quantify and protect social capital. A warning example is the loss of trillions of dollars in the stock markets during the financial crisis.” This crisis was largely caused by a loss of trust. “It is important to stress that risk insurances today do not consider damage to social capital,” Helbing continues. However, it is known that large-scale disasters have a disproportionate public impact, in part because they destroy social capital. As we neglect social capital in risk assessments, we are taking excessive risks.

Journal Reference:

  1. Dirk Helbing. Globally networked risks and how to respond. Nature, 2013; 497 (7447): 51. DOI: 10.1038/nature12047

Politicians Found to Be More Risk-Tolerant Than the General Population (Science Daily)

Apr. 16, 2013 — According to a recent study, the popularly elected members of the German Bundestag are substantially more risk-tolerant than the broader population of Germany. Researchers in the Cluster of Excellence “Languages of Emotion” at Freie Universität Berlin and at DIW Berlin (German Institute for Economic Research) conducted a survey of Bundestag representatives and analyzed data on the general population from the German Socio-Economic Panel Study (SOEP). Results show that risk tolerance is even higher among Bundestag representatives than among self-employed people, who are themselves more risk-tolerant than salaried employees or civil servants. This was true for all areas of risk that were surveyed in the study: automobile driving, financial investments, sports and leisure activities, career, and health. The authors interpret this finding as positive.

The full results of the study were published in German in the SOEPpapers series of the German Institute for Economic Research (DIW Berlin).

The authors of the study, Moritz Hess (University of Mannheim), Prof. Dr. Christian von Scheve (Freie Universität Berlin and DIW Berlin), Prof. Dr. Jürgen Schupp (DIW Berlin and Freie Universität Berlin), and Prof. Dr. Gert G. Wagner (DIW Berlin and Technische Universität Berlin) view the above-average risk tolerance found among Bundestag representatives as positive. According to sociologist and lead author of the study Moritz Hess: “Otherwise, important societal decisions often wouldn’t be made due to the almost incalculable risks involved. This would lead to stagnation and social standstill.” The authors do not interpret the higher risk-tolerance found among politicians as a threat to democracy. “The results show a successful and sensible division of labor among citizens, voters, and politicians,” says economist Gert G. Wagner. Democratic structures and parliamentary processes, he argues, act as a brake on the individual risk propensity of elected representatives and politicians.

For their study, the research team distributed written questionnaires to all 620 members of the 17th German Bundestag in late 2011. Twenty-eight percent of Bundestag members responded. Comparisons with the statistical characteristics of all current Bundestag representatives showed that the respondents comprise a representative sample of Bundestag members. SOEP data were used to obtain a figure for the risk tolerance of the general population for comparison with the figures for Bundestag members.

The questions posed to Bundestag members were formulated analogously to the questions in the standard SOEP questionnaire. Politicians were asked to rate their own risk tolerance on a scale from zero (= not at all risk-tolerant) to ten (= very risk-tolerant). They rated both their general risk tolerance as well as their specific risk tolerance in the areas of driving, making financial investments, sports and leisure activities, career, health, and trust towards strangers. They also rated their risk tolerance in regard to political decisions. No questions on party affiliation were asked in order to exclude the possibility that results could be used for partisan political purposes.


Hess, M., von Scheve, C., Schupp, J., Wagner, G. G. (2013): Members of German Federal Parliament More Risk-Loving Than General Population, in: DIW Economic Bulletin, Vol. 3, No. 4, 2013, pp. 20-24.

Hess, M., von Scheve, C., Schupp, J., Wagner, G. G. (2013): Sind Politiker risikofreudiger als das Volk? Eine empirische Studie zu Mitgliedern des Deutschen Bundestags, SOEPpaper No. 545, DIW Berlin.

In Big Data, We Hope and Distrust (Huffington Post)

By Robert Hall

Posted: 04/03/2013 6:57 pm

“In God we trust. All others must bring data.” — W. Edwards Deming, statistician, quality guru

Big data helped reelect a president and find Osama bin Laden, and it contributed to the meltdown of our financial system. We are in the midst of a data revolution in which social media introduces new terms like Arab Spring, Facebook Depression, and Twitter anxiety that reflect a new reality: Big data is changing the social and relationship fabric of our culture.

We spend hours installing and learning how to use the latest versions of our ever-expanding technology while enduring a never-ending battle to protect our information. Then we labor to develop practices that rid us of technology: rules for turning devices off during meetings or movies, legislation to outlaw texting while driving, restrictions in classrooms to prevent cheating, and meals or family time scheduled with devices turned off. Information and technology: We love it, hate it, can’t live with it, can’t live without it, use it voraciously, and distrust it immensely. I am schizophrenic and so am I.

Big data is not only big but growing rapidly. According to IBM, we create 2.5 quintillion bytes of data a day, and “ninety percent of the data in the world has been created in the last two years.” Vast new computing capacity can analyze Web-browsing trails that track our every click, sensor signals from every conceivable device, GPS tracking, and social network traffic. It is now possible to measure and monitor people and machines to an astonishing degree. How exciting, how promising. And how scary.

This is not our first data rodeo. The early stages of the customer relationship management movement were filled with hope and with hype. Large data warehouses were going to provide the kind of information that would make companies masters of customer relationships. There were just two problems. First, getting the data out of the warehouse wasn’t nearly as hard as getting it into the person or device interacting with the customers in a way that added value, trust and expanded relationships. We seem to always underestimate the speed of technology and overestimate the speed at which we can absorb it and socialize around it.

Second, unfortunately the customers didn’t get the memo and mostly decided in their own rich wisdom they did not need or want “masters.” In fact as providers became masters of knowing all the details about our lives, consumers became more concerned. So while many organizations were trying to learn more about customer histories, behaviors and future needs — customers and even their governments were busy trying to protect privacy, security, and access. Anyone attempting to help an adult friend or family member with mental health issues has probably run into well-intentioned HIPAA rules (regulations that ensure privacy of medical records) that unfortunately also restrict the ways you can assist them. Big data gives and the fear of big data takes away.

Big data does not big relationships make. Over the last 20 years, as our data has kept getting stronger, our customer relationships have kept getting weaker. Eighty-six percent of consumers trust corporations less than they did five years ago. Customer retention across industries has fallen about 30 percent in recent years. Is it actually possible that we have unwittingly contributed to the undermining of our customer relationships? How could that be? For one thing, as companies keep getting better at targeting messages to specific groups, those groups keep getting better at blocking them. As usual, the power to resist trumps the power to exert.

No matter how powerful big data becomes, if it is to realize its potential, it must build trust on three levels. First, customers must trust our intentions. Data that can be used for us can also be used against us. There is growing fear that institutions will become part of a “surveillance state.” While organizations have gone to great lengths to promote protection of our data, the numbers reflect a fair amount of doubt. For example, according to MainStreet, “87 percent of Americans do not feel large banks are transparent and 68 percent do not feel their bank is on their side.”

Second, customers must trust our actions. Even if they trust our intentions, they might still fear that our actions put them at risk. Our private information can be hacked, then misused and disclosed in damaging and embarrassing ways. After the Sandy Hook tragedy, a New York newspaper published the names and addresses of over 33,000 licensed gun owners along with an interactive map that showed exactly where they lived. In response, the names and addresses of the newspaper’s editor and writers were published online, along with information about their children. No one, including retired judges, law enforcement officers, and FBI agents, expected their private information to be published in the midst of a very high-decibel controversy.

Third, customers must trust the outcome — that sharing data will benefit them. Even with positive intentions and constructive actions, the results may range from disappointing to damaging. Most of us have provided email addresses or other contact data, around a customer service issue or the like, and then started receiving email, phone, or online solicitations. I know a retired executive who helps hard-to-hire people. She spent one evening surfing the Internet to research expunging criminal records for released felons. Years later, Amazon still greets her with books targeted to the felon it believes she is. Even with opt-out options, she felt used. Or we provide specific information, only to have to repeat it in the next transaction or interaction, never getting the hoped-for benefit of saving our time.

It will be challenging to grow the trust at anywhere near the rate we grow the data. Information develops rapidly, competence and trust develop slowly. Investing heavily in big data and scrimping on trust will have the opposite effect desired. To quote Dolly Parton who knows a thing or two about big: “It costs a lot of money to look this cheap.”