Tag archive: Risk

Transgenic mosquito for dengue control approved by CTNBio (Portal do Meio Ambiente)

17 APRIL 2014

Brasília – CTNBio has approved the request for commercial release of a transgenic strain of Aedes aegypti (the mosquito that transmits the dengue virus and a newly arrived virus, chikungunya), developed by the British company Oxitec. The A. aegypti OX513A line carries a conditional lethality gene that is activated in the absence of tetracycline. The males, separated from the females while still at the pupal stage, can be mass-produced in a biofactory and then released into the environment. For details, see http://br.oxitec.com.

The roll-call vote in the plenary session resulted in 16 votes in favor (one of them conditional) and one against.

Before the vote, the review opinion on the application was read. The reporting member argued that the process should be returned for further review because of several flaws that, in his view, prevented a sound conclusion. His main argument was that a rapid and extensive elimination of A. aegypti would open space for recolonization of its niche by another mosquito, such as Aedes albopictus. His opinion was broadly rejected by the Commission.

Also before the vote, some members proposed a public hearing to gather further information, which was rejected by 11 votes to 4.

The discussion immediately before the vote dwelt less on the direct risks of the mosquito to human and animal health and to the environment and drifted toward the benefits of the technology. This shift reflected CTNBio's consensus on the safety of the product and on the urgency of new techniques for controlling the dengue vector. The discussion also reflected CTNBio's confidence in the technology's potential to reduce A. aegypti populations without the risk of a resurgence of other diseases, the appearance of new endemic diseases, or the replacement of the vector mosquito, in complete opposition to the isolated view of the member who requested the review. A detailed discussion of that member's position is available at http://goo.gl/7aJZuI.

With these results, CTNBio opens up to the country the possibility of using a transgenic mosquito to control dengue. The commercial release of this mosquito is also the first commercial release of a transgenic insect in the world. Brazil, applying efficient and rigorous legislation for the risk assessment of genetically modified organisms, sets an example of seriousness and maturity both for countries that already carry out GMO risk assessment and for those still hesitant to adopt this technology.

Source: GenPeace.

*   *   *

17/4/2014 – 12:13 pm

Transgenic mosquitoes approved, but researchers fear risks (Adital)

by Mateus Ramos, Adital

An important, and dangerous, step was taken last week by the National Technical Biosafety Commission (CTNBio), which approved the project to release genetically modified mosquitoes in Brazil. The transgenic mosquitoes will be used for research and to fight dengue in the country. The project, which allows the mosquitoes to be commercialized by the British company Oxitec, was deemed technically safe by CTNBio and now only needs registration with the National Health Surveillance Agency (Anvisa) to be effectively released.

For José Maria Ferraz, a professor at the Federal University of São Carlos (SP) and former member of CTNBio, speaking to Adital, the Commission's positive response to the project is a strong indication that Anvisa will do the same. "It will certainly be approved; the Ministry of Health's own representative was there and said that, in view of the dengue epidemics, he was in favor of approving the project."

Ferraz harshly criticizes the approval granted by CTNBio and the project itself. "There is no single policy for fighting dengue, but rather a set of actions. Beyond that, there is no guarantee that the released mosquitoes will not also carry the disease. In other words, millions of mosquitoes will be released across the country without a serious prior study of the project. What was done is utterly absurd. It is insanity; I have never seen so many things wrong in a single project."

Another major problem Ferraz points to is the risk of drastically altering the numbers of Aedes aegypti. A possible reduction, he argues, could increase the proliferation of another, even more harmful mosquito, Aedes albopictus, which he says transmits not only dengue but other diseases, such as malaria. He also warns that flaws in the project could lead to the release of non-sterile males and of females, making it harder to control the species. "The country is being used as a guinea pig in an experiment never done before anywhere in the world. We approved this project very quickly, irresponsibly."

The results promised by the project could be undermined, for example, if the mosquitoes come into contact with the antibiotic tetracycline, which is found in many cat and dog feeds. "It is enough for the mosquitoes to come into contact with the feces of animals fed rations containing this antibiotic for the whole experiment to fail," Ferraz says.

Understanding the project

According to Oxitec, the project's technique consists of introducing two new genes into male mosquitoes which, upon mating with wild females, produce larvae unable to reach adulthood; that is, they never reach the stage at which they can transmit the disease to humans. In addition, the offspring inherit a marker that makes them visible under a specific light, making them easier to monitor.
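
To make the mechanism concrete, here is a minimal sketch of a discrete-generation model in which released males father non-viable offspring. All parameter values (release ratio, mating competitiveness, growth rate, carrying capacity) are illustrative assumptions, not figures reported by Oxitec or Moscamed.

```python
# Minimal sketch (illustrative assumptions only): wild Aedes aegypti females per
# generation when transgenic males whose offspring die as larvae are released.

def suppress(females, release_ratio=10.0, competitiveness=0.5,
             growth_rate=3.0, capacity=10_000.0, generations=12):
    """Return the number of wild adult females in each generation."""
    history = [females]
    # Share of matings won by wild males: transgenic males outnumber them by
    # `release_ratio` but mate with reduced success `competitiveness`.
    wild_matings = 1.0 / (1.0 + competitiveness * release_ratio)
    for _ in range(generations):
        # Only offspring fathered by wild males reach adulthood; the logistic
        # factor caps growth at the local carrying capacity.
        females = growth_rate * females * wild_matings * max(0.0, 1.0 - females / capacity)
        history.append(females)
    return history

if __name__ == "__main__":
    for gen, n in enumerate(suppress(2000.0)):
        print(f"generation {gen}: ~{n:.0f} wild females")
```

With these made-up defaults the modelled wild population roughly halves each generation; raising the release ratio or the competitiveness of the released males speeds the decline, which is the qualitative behaviour the releases aim for.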

* Originally published on the Adital website.

How does radioactive waste interact with soil and sediments? (Science Daily)

Date: February 3, 2014

Source: Sandia National Laboratories

Summary: Scientists are developing computer models that show how radioactive waste interacts with soil and sediments, shedding light on waste disposal and how to keep contamination away from drinking water.

Sandia National Laboratories geoscientist Randall Cygan uses computers to build models showing how contaminants interact with clay minerals. Credit: Lloyd Wilson

Sandia National Laboratories is developing computer models that show how radioactive waste interacts with soil and sediments, shedding light on waste disposal and how to keep contamination away from drinking water.

“Very little is known about the fundamental chemistry and whether contaminants will stay in soil or rock or be pulled off those materials and get into the water that flows to communities,” said Sandia geoscientist Randall Cygan.

Researchers have studied the geochemistry of contaminants such as radioactive materials and toxic heavy metals, including lead, arsenic and cadmium. But laboratory testing of soils is difficult. “The tricky thing about soils is that the constituent minerals are hard to characterize by traditional methods,” Cygan said. “In microscopy there are limits on how much information can be extracted.”

He said soils are often dominated by clay minerals with ultra-fine grains less than 2 microns in diameter. “That’s pretty small,” he said. “We can’t slap these materials on a microscope or conventional spectrometer and see if contaminants are incorporated into them.”

Cygan and his colleagues turned to computers. “On a computer we can build conceptual models,” he said. “Such molecular models provide a valuable way of testing viable mechanisms for how contaminants interact with the mineral surface.”

He describes clay minerals as the original nanomaterial, the final product of the weathering process of deep-seated rocks. “Rocks weather chemically and physically into clay minerals,” he said. “They have a large surface area that can potentially adsorb many different types of contaminants.”

Clay minerals are made up of aluminosilicate layers held together by electrostatic forces. Water and ions can seep between the layers, causing them to swell, pull apart and adsorb contaminants. “That’s an efficient way to sequester radionuclides or heavy metals from ground waters,” Cygan said. “It’s very difficult to analyze what’s going on in the interlayers at the molecular level through traditional experimental methods.”

Molecular modeling describes the characteristics and interaction of the contaminants in and on the clay minerals. Sandia researchers are developing the simulation tools and the critical energy force field needed to make the tools as accurate and predictive as possible. “We’ve developed a foundational understanding of how the clay minerals interact with contaminants and their atomic components,” Cygan said. “That allows us to predict how much of a contaminant can be incorporated into the interlayer and onto external surfaces, and how strongly it binds to the clay.”
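
As a rough illustration of what such a classical model computes, the toy example below sums pairwise Coulomb and Lennard-Jones terms between a cation and a few fixed clay-surface sites and reports how the interaction energy changes with height above the surface. The charges, Lennard-Jones parameters and coordinates are invented for illustration; they are not taken from the Sandia force field described here.

```python
import math

# Toy force-field sketch: one cation above three fixed clay-surface oxygen sites.
# All parameters below are invented for illustration only.

COULOMB_K = 332.06  # kcal*Angstrom/(mol*e^2), converts q*q/r to kcal/mol

def pair_energy(q1, q2, r, epsilon, sigma):
    """Coulomb plus 12-6 Lennard-Jones energy for one atom pair (kcal/mol)."""
    lj = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return COULOMB_K * q1 * q2 / r + lj

# Hypothetical surface oxygen sites: (x, y, z) in Angstroms and a partial charge.
SURFACE_SITES = [((0.0, 0.0, 0.0), -1.05),
                 ((2.6, 0.0, 0.0), -1.05),
                 ((1.3, 2.3, 0.0), -1.05)]

def adsorption_energy(height, ion_charge=2.0, epsilon=0.1, sigma=3.2):
    """Total interaction energy of an ion held `height` Angstroms above the sites."""
    ion_position = (1.3, 0.8, height)
    total = 0.0
    for site_position, site_charge in SURFACE_SITES:
        r = math.dist(ion_position, site_position)
        total += pair_energy(ion_charge, site_charge, r, epsilon, sigma)
    return total

if __name__ == "__main__":
    for h in (2.5, 3.0, 4.0, 6.0):
        print(f"ion {h:.1f} A above the surface: {adsorption_energy(h):9.1f} kcal/mol")
```

Real simulations sum enormous numbers of such terms (plus bonded, polarization and long-range corrections) over thermally sampled configurations; the sketch only shows the kind of quantity a force field evaluates.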

The computer models quantify how well a waste repository might perform. “It allows us to develop performance assessment tools the Environmental Protection Agency and Nuclear Regulatory Commission need to technically and officially say, ‘Yes, let’s go ahead and put nuclear waste in these repositories,'” Cygan said.

Molecular modeling methods also are used by industry and government to determine the best types of waste treatment and mitigation. “We’re providing the fundamental science to improve performance assessment models to be as accurate as possible in understanding the surface chemistry of natural materials,” Cygan said. “This work helps provide quantification of how strongly or weakly uranium, for example, may adsorb to a clay surface, and whether one type of clay over another may provide a better barrier to radionuclide transport from a waste repository. Our molecular models provide a direct way of making this assessment to better guide the design and engineering of the waste site. How cool is that?”

Bill would annul the oil exploration auction for the Libra field (Agência Câmara)

JC e-mail 4883, January 29, 2014

SBPC and ABC call for more research on the potential environmental damage of shale gas exploitation

Draft Legislative Decree (PDC) 1289/13, by Congressman Chico Alencar (Psol-RJ), which would suspend the authorization for the auction of oil and gas exploration rights in the Libra field (RJ), held in October 2013, is making its way through the Chamber of Deputies.

The congressman wants to cancel the four rules that allowed the auction of the field where Brazil's pre-salt reserves will be exploited: resolutions 4/13 and 5/13 of the National Energy Policy Council; Ordinance 218/13 of the Ministry of Mines and Energy; and the bidding notice for the Libra field.

With a forecast output of 8 to 12 billion barrels of oil, the Libra field was auctioned amid protests and under heavy police protection. Despite the expectation that up to four consortia would take part, there was only one, formed by Petrobras, Shell, Total, CNPC and CNOOC. It won the auction with a bid to hand over to the federal government 41.65% of the surplus oil extracted, the minimum percentage set in the bidding notice.

Alencar opposes concessions for oil exploration because he considers that Petrobras can exploit the Brazilian fields on its own. He also argues that there are defects in the rules that allowed the auction. "The National Petroleum Agency published the final text of the bidding notice and of the contract for the Libra auction before the opinion of the Federal Court of Accounts (TCU)," he noted.

The congressman also stressed that the allegations of foreign espionage against Petrobras cast suspicion over the auction. "The illegal acquisition of strategic information from Petrobras benefits its competitors in the market and compromises the auction," he criticized.

Legislative process
The bill will be examined by the committees on Mines and Energy; Finance and Taxation; and Constitution, Justice and Citizenship. It must then be approved by the full Chamber.

Full text of the bill:

PDC-1289/2013

(Carol Siqueira / Agência Câmara)

Scientific community manifesto
SBPC and ABC call for more research on the potential environmental damage of shale gas exploitation – http://www.sbpcnet.org.br/site/artigos-e-manifestos/detalhe.php?p=2011

Climate Engineering: What Do the Public Think? (Science Daily)

Jan. 12, 2014 — Members of the public have a negative view of climate engineering, the deliberate large-scale manipulation of the environment to counteract climate change, according to a new study.

The results come from researchers at the University of Southampton and Massey University (New Zealand), who have undertaken the first systematic large-scale evaluation of the public reaction to climate engineering.

The work is published in Nature Climate Change this week (12 January 2014).

Some scientists think that climate engineering approaches will be required to combat the inexorable rise in atmospheric CO2 due to the burning of fossil fuels. Climate engineering could involve techniques that reduce the amount of CO2 in the atmosphere or approaches that slow temperature rise by reducing the amount of sunlight reaching the Earth’s surface.

Co-author Professor Damon Teagle of the University of Southampton said: "Because even the concept of climate engineering is highly controversial, there is a pressing need to consult the public and understand their concerns before policy decisions are made."

Lead author, Professor Malcolm Wright of Massey University, said: “Previous attempts to engage the public with climate engineering have been exploratory and small scale. In our study, we have drawn on commercial methods used to evaluate brands and new product concepts to develop a comparative approach for evaluating the public reaction to a variety of climate engineering concepts.”

The results show that the public has strong negative views towards climate engineering. Where there are positive reactions, they favour approaches that reduce carbon dioxide over those that reflect sunlight.

“It was a striking result and a very clear pattern,” said Professor Wright. “Interventions such as putting mirrors in space or fine particles into the stratosphere are not well received. More natural processes of cloud brightening or enhanced weathering are less likely to raise objections, but the public react best to creating biochar (making charcoal from vegetation to lock in CO2) or capturing carbon directly from the air.”

Nonetheless, even the best-regarded techniques still have a net negative perception.

The work consulted large representative samples in both Australia and New Zealand. Co-author Pam Feetham said: “The responses are remarkably consistent from both countries, with surprisingly few variations except for a slight tendency for older respondents to view climate engineering more favourably.”

Professor Wright noted that giving the public a voice so early in technological development was unusual, but increasingly necessary. “If these techniques are developed the public must be consulted. Our methods can be employed to evaluate the responses in other countries and reapplied in the future to measure how public opinion changes as these potential new technologies are discussed and developed,” he said.

Journal Reference:

  1. Malcolm J. Wright, Damon A. H. Teagle, Pamela M. Feetham. A quantitative evaluation of the public response to climate engineering. Nature Climate Change, 2014; DOI: 10.1038/nclimate2087

Our singularity future: should we hack the climate? (Singularity Hub)

Posted: 01/8/14 8:31 AM

Even the most adamant techno-optimists among us must admit that new technologies can introduce hidden dangers: Fire, as the adage goes, can cook the dinner, but it can also burn the village down.

The most powerful example of unforeseen disadvantages stemming from technology is climate change. Should we attempt to fix a problem caused by technology, using more novel technology to hack the climate? The question has spurred heated debate.

Those in favor point to failed efforts to curb carbon dioxide emissions and insist we need other options. What if a poorly understood climatic tipping point tips and the weather becomes dangerous overnight; how will slowing emissions help us then?

“If you look at the projections for how much the Earth’s air temperature is supposed to warm over the next century, it is frightening. We should at least know the options,” said Rob Wood, a University of Washington climatologist who edited a recent special issue of the journal Climatic Change devoted to geoengineering.

Wood’s view is gaining support, as the predictions about the effects of climate change continue to grow more dire, and the weather plays its part to a tee.

But big, important questions need answers before geoengineering projects take off. Critics point to science’s flimsy understanding of the complex systems that drive the weather. And even supporters lament the lack of any experimental framework to contain disparate experiments on how to affect it.

“Proposed projects have been protested or canceled, and calls for a governance framework abound,” Lisa Dilling and Rachel Hauser wrote in a paper that appears in the special issue. “Some have argued, even, that it is difficult if not impossible to answer some research questions in geoengineering at the necessary scale without actually implementing geoengineering itself.”

Most proposed methods of geoengineering derive from pretty basic science, but questions surround how to deploy them at a planetary scale and how to measure desired and undesired effects on complex weather and ocean cycles. Research projects that would shed light on those questions would be big enough themselves potentially to affect neighboring populations, raising ethical questions as well.

Earlier efforts to test fertilizing the ocean with iron to feed algae that would suck carbon dioxide from the air, and to spray the pollutant sulfur dioxide, which reflects solar radiation, into the atmosphere, were mired in controversy. A reputable UK project abandoned its plans to test its findings in the field.

But refinements on those earlier approaches are percolating. They include efforts both to remove previously emitted carbon dioxide from the atmosphere and to reduce the portion of the sun’s radiation that enters the atmosphere.

One method of carbon dioxide removal (or CDR) would expose large quantities of carbon-reactive minerals to the air and then store the resulting compounds underground; another would use large CO2 vacuums to suck the greenhouse gas directly from the air into underground storage.

Solar radiation management (or SRM) methods include everything from painting roofs white to seeding the clouds with salt crystals to make them more reflective and mimicking the climate-cooling effects of volcanic eruptions by spraying  sulfur compounds into the atmosphere.

The inevitable impact of geoengineering research on the wider population has led many scientists to compare geoengineering to genetic research. The comparison to genetic research also hints at the huge benefits geoengineering could have if it successfully wards off the most savage effects of climate change.

As with genetic research, principles have been developed to shape the ethics of the research. Still, the principles remain vague, according to a 2012 Nature editorial, and flawed, according to a philosophy-of-science take in the recent journal issue. Neither the U.S. government nor international treaties have addressed geoengineering per se, though many treaties would influence its testing and implementation.

The hottest research now explores how long climate-hacks would take to work, lining up their timelines with the slow easing of global warming that would result from dramatically lowered carbon dioxide emissions, and how to weigh the costs of geoengineering projects and accommodate public debate.

Proceeding with caution won’t get fast answers, but it seems a wise way to address an issue as thorny as readjusting the global thermostat.

On shale gas exploitation in Brazil

JC e-mail 4855, November 13, 2013

Scientists want to postpone shale gas exploitation

Environmentalists and researchers fear environmental damage. SBPC and ABC set out their position in a letter

Exploiting shale gas in Brazil's river basins, particularly in the Amazon region, runs counter to European countries such as France and Germany, and to some parts of the United States, such as New York State, which have been banning the activity for fear of environmental damage even though it is economically viable. The damage arises because, to extract the gas, the rocks referred to as "xisto" (shale) are broken up by hydraulic pumping or by a series of chemical additives.

While the National Petroleum Agency (ANP) maintains its decision to hold the auctions of shale gas blocks on November 28 and 29, authorities in New York, a pioneer in exploiting this resource since 2007, are beginning to review their internal policies. Going further, France recently ratified its ban on hydraulic fracturing of shale rock even before starting to extract the resource, according to specialists.

Scientifically dubbed "folhelho" (shale) gas, shale gas is also known as "unconventional" gas. Although it has the same origin and uses as conventional natural gas, shale gas differs in how it is extracted: the gas cannot leave the rock naturally, unlike conventional natural gas, which migrates on its own out of the rock layers. To extract gas from shale, that is, to complete the production process, artificial mechanisms are used, such as fracturing the rock by hydraulic pumping or with various chemical additives.

In confirming the auctions, the ANP states, through its press office, that the initiative complies with CNPE Resolution No. 6 (of June 23 of this year), published in the Official Gazette. A total of 240 onshore exploration blocks with natural gas potential will be offered in seven sedimentary basins, located in the states of Amazonas, Acre, Tocantins, Alagoas, Sergipe, Piauí, Mato Grosso, Goiás, Bahia, Maranhão, Paraná and São Paulo, totaling 168,348.42 km².

Destination

The shale gas extracted from these basins will have the same destination as oil, that is, it will be sold as an energy source. In Brazil, shale gas could mainly supply Rio Grande do Sul, Santa Catarina and Paraná, where demand for natural gas, which these states import from Bolivia, is growing.

Despite the economic potential, chemist Jailson Bitencourt de Andrade, a board member of the Brazilian Society for the Advancement of Science (SBPC), reiterates his position on the importance of postponing the ANP auctions and expanding research on the negative impacts of shale gas extraction, in order to avoid harm to the environment. "This needs a great deal of attention," warns the researcher, who is also a member of the Brazilian Chemical Society (SBQ) and of the Brazilian Academy of Sciences (ABC). "Even in the United States, where there is a good logistics chain capable of reducing the cost of shale gas exploitation, and even though its cost-benefit ratio is very high, some states are already reviewing their policies and creating barriers to the exploitation of this resource."

The position of SBPC and ABC

In a letter (available at http://www.sbpcnet.org.br/site/artigos-e-manifestos/detalhe.php?p=2011), released in August, SBPC and ABC express their concern about the ANP's decision to include shale gas, obtained by fracturing the rock, in the next licensing round. One reason is that the extraction technology relies on processes that are "invasive of the gas-bearing geological layer, through the technique of hydraulic fracturing, with the injection of water and chemical substances, which may cause leaks and contamination of the freshwater aquifers that lie above the shale."

Given this scenario, Andrade again defends the need for Brazil to invest more in scientific knowledge about the basins slated for exploitation, "if only to have a picture of the current state of the rocks, so that possible impacts on these basins can be compared in the future." He added that the government, through the Ministry of Science, Technology and Innovation (MCTI) and the Brazilian Innovation Agency (Finep), is setting up a research network to study the impacts of shale gas.

An advocate of studying every alternative for producing gas to replace oil in the future, researcher Hernani Aquini Fernandes Chaves, deputy coordinator of the National Institute of Oil and Gas (INOG), stresses, on the other hand, that despite possible damage from fracturing the shale, using this gas "is environmentally more sound" than oil itself. "It emits less gas," he maintains. "We need to know all the production possibilities because, besides irrigating the economy, oil is a finite good that will run out one day. The country is large, so we have to look at ways of bringing progress to every area." He is referring to the interior of Maranhão, one of the poorest regions of Brazil and one with potential for shale gas exploitation.

Without wanting to compare the shale gas production potential of the United States with Brazil's, Chaves considers the estimates made for Brazil by the U.S. Energy Information Administration, of reserves on the order of 7.35 trillion m³, "very optimistic." According to Chaves, INOG has not yet produced estimates for shale gas production in Brazilian territory. The gas-producing basins, he said, have not yet been proven. On an experimental basis, however, shale gas is already produced by Petrobras at the São Mateus do Sul plant.

Speaking about the environmental damage caused by shale gas extraction, Chaves acknowledges that this is "a controversial point." For now, he notes that in Europe, especially in France and Germany, shale gas extraction is not allowed because the process consumes large amounts of water and harms aquifers. In New York, where production had begun, exploitation has also come under question. "Environmentalists are not happy with the production of this gas," he acknowledges. "In France, for example, they did not allow the rock to be drilled, even knowing the shale gas production estimates."

Clarifications from the ANP

According to the statement from the ANP press office, the areas offered in the agency's bidding rounds are analyzed in advance for environmental feasibility by the Brazilian Institute of the Environment and Renewable Natural Resources (Ibama) and by the competent state environmental agencies. "The purpose of this joint work is, where appropriate, to exclude areas because of environmental restrictions arising from overlap with conservation units or other sensitive areas, where oil and natural gas exploration and production (E&P) activities are not possible or advisable."

For all the blocks offered in the 12th bidding round, according to the statement, there was the "due favorable manifestation of the competent state environmental agency." "The ANP, although it does not regulate environmental matters, is attentive to the facts related to this issue as regards oil and natural gas production in Brazil. Accordingly, the best practices used in the oil and natural gas industry around the world are constantly monitored and adopted by the ANP," the document states.

The ANP adds: "As the regulatory process is dynamic, the ANP will take the necessary measures to adapt its rules, whenever pertinent, to the issues that arise in the coming years, in order to guarantee the safety of operations."

(Viviane Monteiro / Jornal da Ciência)

* * *

JC e-mail 4856, November 14, 2013

Seminar debates the environmental impacts of shale gas exploitation

With the participation of Jailson de Andrade, SBPC board member, the meeting also discussed whether the Brazilian energy sector needs this energy source

The Brazilian Institute of Social and Economic Analyses (Ibase), Greenpeace, ISA, Fase and CTI held a seminar, open to the public, yesterday (the 13th) in São Paulo on the socio-environmental impacts of shale gas exploitation in Brazil. With the participation of Jailson de Andrade, SBPC board member, the meeting fostered debate on the environmental issues involved in this type of mineral exploitation and discussed its feasibility. It also discussed whether the Brazilian energy sector needs this energy source, focusing on the basins of Acre and Mato Grosso and on the Guarani aquifer. Ibase researcher Carlos Bittencourt warned that the auction must be postponed so that the necessary studies can be carried out before exploitation is authorized.

The bidding process for conventional and unconventional natural gas areas is due to take place at the end of this month. The National Petroleum Agency (ANP) will offer 240 onshore exploration blocks distributed across 12 Brazilian states. Shale gas, an unconventional gas used by power plants and industry, is an energy source that, although long known, remained unexploited for many years for lack of technology capable of making its extraction viable.

Ricardo Baitelo (Greenpeace), Bianca Dieile (FAPP-BG), Conrado Octavio (CTI) and Angel Matsés (Comunidad Nativa Matsés) also took part, with moderation by Father Nelito (CNBB). The seminar was supported by Norwegian Church Aid.

(With information from Ibase)

Fukushima Forever (Huff Post)

Charles Perrow

Posted: 09/20/2013 2:49 pm

Recent disclosures of tons of radioactive water from the damaged Fukushima reactors spilling into the ocean are just the latest evidence of the continuing incompetence of the Japanese utility, TEPCO. The announcement that the Japanese government will step in is also not reassuring, since it was the Japanese government that failed to regulate the utility for decades. But, bad as it is, the current contamination of the ocean should be the least of our worries. The radioactive poisons are expected to form a plume that will be carried by currents to the coast of North America. But the effects will be small, adding an unfortunate bit to our background radiation. Fish swimming through the plume will be affected, but we can avoid eating them.

Much more serious is the danger that the spent fuel rod pool at the top of reactor building number four will collapse in a storm or an earthquake, or in a failed attempt to carefully remove each of the 1,535 rods and safely transport them to the common storage pool 50 meters away. Conditions in the unit 4 pool, 100 feet above the ground, are perilous, and if any two of the rods touch it could cause a nuclear reaction that would be uncontrollable. The radiation emitted from all these rods, if they are not continually cooled and kept separate, would require the evacuation of surrounding areas including Tokyo. Because of the radiation at the site, the 6,375 rods in the common storage pool could not be continuously cooled; they would fission and all of humanity would be threatened, for thousands of years.

Fukushima is just the latest episode in a dangerous dance with radiation that has been going on for 68 years. Since the atomic bombing of Nagasaki and Hiroshima in 1945 we have repeatedly let loose plutonium and other radioactive substances on our planet, and authorities have repeatedly denied or trivialized their dangers. The authorities include national governments (the U.S., Japan, the Soviet Union/ Russia, England, France and Germany); the worldwide nuclear power industry; and some scientists both in and outside of these governments and the nuclear power industry. Denials and trivialization have continued with Fukushima. (Documentation of the following observations can be found in my piece in the Bulletin of the Atomic Scientists, upon which this article is based.) (Perrow 2013)

In 1945, shortly after the bombing of two Japanese cities, the New York Times headline read: “Survey Rules Out Nagasaki Dangers”; soon after the 2011 Fukushima disaster it read “Experts Foresee No Detectable Health Impact from Fukushima Radiation.” In between these two we had experts reassuring us about the nuclear bomb tests, plutonium plant disasters at Windscale in northern England and Chelyabinsk in the Ural Mountains, and the nuclear power plant accidents at Three Mile Island in the United States and Chernobyl in what is now Ukraine, as well as the normal operation of nuclear power plants.

Initially the U.S. Government denied that low-level radiation experienced by thousands of Japanese people in and near the two cities was dangerous. In 1953, the newly formed Atomic Energy Commission insisted that low-level exposure to radiation "can be continued indefinitely without any detectable bodily change." Biologists and other scientists took exception to this, and a 1956 report by the National Academy of Sciences, examining data from Japan and from residents of the Marshall Islands exposed to nuclear test fallout, successfully established that all radiation was harmful. The Atomic Energy Commission then promoted a statistical or population approach that minimized the danger: the damage would be so small that it would hardly be detectable in a large population and could be due to any number of other causes. Nevertheless, the Radiation Research Foundation detected it in 1,900 excess deaths among the Japanese exposed to the two bombs. (The Department of Homeland Security estimated only 430 cancer deaths.)

Besides the uproar about the worldwide fallout from testing nuclear weapons, another problem with nuclear fission soon emerged: a fire in a British plant making plutonium for nuclear weapons sent radioactive material over a large area of Cumbria, resulting in an estimated 240 premature cancer deaths, though the link is still disputed. The event was not made public and no evacuations were ordered. Also kept secret, for over 25 years, was a much larger explosion and fire, also in 1957, at the Chelyabinsk nuclear weapons processing plant in the eastern Ural Mountains of the Soviet Union. One estimate is that 272,000 people were irradiated; lakes and streams were contaminated; 7,500 people were evacuated; and some areas still are uninhabitable. The CIA knew of it immediately, but they too kept it secret. If a plutonium plant could do that much damage it would be a powerful argument for not building nuclear weapons.

Powerful arguments were needed, due to the fallout from the fallout from bombs and tests. Peaceful use became the mantra. Project Plowshare, initiated in 1958, conducted 27 "peaceful nuclear explosions" from 1961 until the costs, as well as public pressure from unforeseen consequences, ended the program in 1975. The Chairman of the Atomic Energy Commission indicated Plowshare's close relationship to the increasing opposition to nuclear weapons, saying that peaceful applications of nuclear explosives would "create a climate of world opinion that is more favorable to weapons development and tests" (emphasis supplied). A Pentagon official was equally blunt, saying in 1953, "The atomic bomb will be accepted far more readily if at the same time atomic energy is being used for constructive ends." The minutes of a National Security Council meeting in 1953 spoke of destroying the taboo associated with nuclear weapons and "dissipating" the feeling that we could not use an A-bomb.

More useful than peaceful nuclear explosions were nuclear power plants, which would produce the plutonium necessary for atomic weapons as well as legitimating them. Nuclear power, the daughter of the weapons program (actually its "bad seed"), was born and soon saw first fruit with the 1979 Three Mile Island accident. Increases in cancer were found, but the Columbia University study declared that the level of radiation from TMI was too low to have caused them, and the "stress" hypothesis made its first appearance as the explanation for rises in cancer. Another university study disputed this, arguing that radiation caused the increase, and since a victim suit was involved, it went to a Federal judge who ruled in favor of stress. A third, larger study found "slight" increases in cancer mortality and increased risk of breast and other cancers, but found "no consistent evidence" of a "significant impact." Indeed, it would be hard to find such an impact when so many other things can cause cancer, and it is so widespread. And since stress can cause it, there is ample ambiguity that can be mobilized to defend nuclear power plants.

Ambiguity was mobilized by the Soviet Union after the 1986 Chernobyl disaster. Medical studies by Russian scientists were suppressed, and doctors were told not to use the designation of leukemia in health reports. Only after a few years had elapsed did any serious studies acknowledge that the radiation was serious. The Soviet Union forcefully argued that the large drops in life expectancy in the affected areas were due not just to stress, but to lifestyle changes. The International Atomic Energy Agency (IAEA), charged with both promoting nuclear power and helping make it safe, agreed, and mentioned such things as obesity, smoking, and even unprotected sex, arguing that the affected population should not be treated as "victims" but as "survivors." The count of premature deaths has varied widely: UN agencies put it at 4,000 in the contaminated areas of Ukraine, Belarus and Russia, while Greenpeace puts it at 200,000. We also have the controversial worldwide estimate of 985,000 from Russian scientists with access to thousands of publications from the affected regions.

Even when nuclear power plants are running normally they are expected to release some radiation, but so little as to be harmless. Numerous studies have now challenged that. When eight nuclear plants in the U.S. were closed in 1987 they provided the opportunity for a field test. Two years later strontium-90 levels in local milk had declined sharply, as had birth defects and death rates of infants within 40 miles of the plants. A 2007 study of all German nuclear power plants found that childhood leukemia more than doubled among children living less than 3 miles from the plants, but the researchers held that the plants could not have caused it because their radiation levels were so low. Similar results were found in a French study, with a similar conclusion: it could not be low-level radiation, though they had no other explanation. A meta-study published in 2007 of 136 reactor sites in seven countries, extended to include children up to age 9, found childhood leukemia increases of 14 percent to 21 percent.

Epidemiological studies of children and adults living near the Fukushima Daiichi nuclear plant will face the same obstacles as earlier studies. About 40 percent of the aging population of Japan will die of some form of cancer; how can one be sure it was not caused by one of the multiple other causes? It took decades for the effects of the atomic bombs and Chernobyl to clearly emblazon the word “CANCER” on these events. Almost all scientists finally agree that the dose effects are linear, that is, any radiation added to natural background radiation, even low-levels of radiation, is harmful. But how harmful?
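
As a back-of-the-envelope illustration of what a linear, no-threshold dose response implies (the slope below is the order of magnitude commonly used in radiation protection, not a figure taken from this article), the excess risk scales directly with dose:

```latex
% Linear no-threshold sketch: excess lifetime cancer risk proportional to dose D.
\[
  R_{\text{excess}} \approx \beta \, D ,
  \qquad \beta \sim 5\% \ \text{per sievert (order of magnitude only)} .
\]
% Example: an extra 10 mSv per person gives R ~ 0.05 x 0.01 = 0.0005, roughly
% 5 additional cancers per 10,000 people, a signal easily lost against the
% ~40 percent baseline cancer mortality mentioned above.
```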

University professors have declared that the health effects of Fukushima are "negligible," will cause "close to no deaths," and that much of the damage was "really psychological." Extensive and expensive follow-up on citizens from the Fukushima area, the experts say, is not worth it. There is doubt a direct link will ever be definitively made, one expert said. The head of the U.S. National Council on Radiation Protection and Measurements said: "There's no opportunity for conducting epidemiological studies that have any chance of success….The doses are just too low." We have heard this in 1945, at TMI, at Chernobyl, and for normally running power plants. It is surprising that respected scientists refuse to make another test of such an important null hypothesis: that there are no discernible effects of low-level radiation.

Not surprisingly, a nuclear power trade group announced shortly after the March 2011 meltdown at Fukushima (the meltdown started with the earthquake, well before the tsunami hit) that "no health effects are expected" as a result of the events. UN agencies agree with them, as does the U.S. Council. The leading UN organization on the effects of radiation concluded: "Radiation exposure following the nuclear accident at Fukushima-Daiichi did not cause any immediate health effects. It is unlikely to be able to attribute any health effects in the future among the general public and the vast majority of workers." The World Health Organization stated that while people in the United States receive about 6.5 millisieverts per year from sources including background radiation and medical procedures, only two Japanese communities had effective dose rates of 10 to 50 millisieverts, a bit more than normal.

However, other data contradict the WHO and other UN agencies. The Japanese science and technology ministry (MEXT) indicated that a child in one community would have an exposure 100 times the natural background radiation in Japan, rather than a bit more than normal. A hospital reported that more than half of the 527 children examined six months after the disaster had internal exposure to cesium-137, an isotope that poses great risk to human health. A French radiological institute found ambient dose rates 20 to 40 times background radiation, and in the most contaminated areas the rates were 10 times even those elevated levels. The Institute predicts an excess cancer rate of 2 percent in the first year alone. Experts not associated with the nuclear industry or the UN agencies currently estimate from 1,000 to 3,000 cancer deaths. Nearly two years after the disaster the WHO was still declaring that any increase in human disease "is likely to remain below detectable levels." (It is worth noting that the WHO still only releases reports on radiation impacts in consultation with the International Atomic Energy Agency.)

In March 2013, the Fukushima Prefecture Health Management Survey reported examining 133,000 children using new, highly sensitive ultrasound equipment. The survey found that 41 percent of the children examined had cysts of up to 2 centimeters in size and lumps measuring up to 5 millimeters on their thyroid glands, presumably from inhaled and ingested radioactive iodine. However, as we might expect from our chronicle, the survey found no cause for alarm because the cysts and lumps were too small to warrant further examination. The defense ministry also conducted an ultrasound examination of children from three other prefectures distant from Fukushima and found somewhat higher percentages of small cysts and lumps, adding to the argument that radiation was not the cause. But others point out that radiation effects would not be expected to be limited to what is designated as the contaminated area; that these cysts and lumps, signs of possible thyroid cancer, have appeared alarmingly soon after exposure; that they should be followed up, since it takes a few years for cancer to show up and thyroid cancer is rare in children; and that a control group far from Japan should be tested with the same ultrasound techniques.

The denial that Fukushima has any significant health impacts echoes the denials of the atomic bomb effects in 1945; the secrecy surrounding Windscale and Chelyabinsk; the studies suggesting that the fallout from Three Mile Island was, in fact, serious; and the multiple denials regarding Chernobyl (that it happened, that it was serious, and that it is still serious).

As of June 2013, according to a report in The Japan Times, 12 of 175,499 children tested had tested positive for possible thyroid cancer, and 15 more were deemed at high risk of developing the disease. For a disease that is rare, this is a high number. Meanwhile, the U.S. government is still trying to get us to ignore the bad seed. In June 2012, the U.S. Department of Energy granted $1.7 million to the Massachusetts Institute of Technology to address the "difficulties in gaining the broad social acceptance" of nuclear power.

Perrow, Charles. 2013. "Nuclear denial: From Hiroshima to Fukushima." Bulletin of the Atomic Scientists 69(5): 56-67.

Economic Dangers of ‘Peak Oil’ Addressed (Science Daily)

Oct. 16, 2013 — Researchers from the University of Maryland and a leading university in Spain demonstrate in a new study which sectors could put the entire U.S. economy at risk when global oil production peaks (“Peak Oil”). This multi-disciplinary team recommends immediate action by government, private and commercial sectors to reduce the vulnerability of these sectors.

The figure above shows sectors' importance and vulnerability to Peak Oil. The bubbles represent sectors. The size of a bubble visualizes the vulnerability of a particular sector to Peak Oil according to the expected price changes; the larger the bubble, the more vulnerable the sector is considered to be. The X axis shows a sector's importance according to its contribution to GDP, and the Y axis its importance according to its structural role. Hence, the larger bubbles in the top right corner represent highly vulnerable and highly important sectors. In the case of Peak Oil induced supply disruptions, these sectors could cause severe imbalances for the entire U.S. economy. (Credit: Image courtesy of University of Maryland)

While critics of Peak Oil studies declare that the world has more than enough oil to maintain current national and global standards, these UMD-led researchers say Peak Oil is imminent, if not already here — and is a real threat to national and global economies. Their study is among the first to outline a way of assessing the vulnerabilities of specific economic sectors to this threat, and to identify focal points for action that could strengthen the U.S. economy and make it less vulnerable to disasters.

Their work, “Economic Vulnerability to Peak Oil,” appears in Global Environmental Change. The paper is co-authored by Christina Prell, UMD’s Department of Sociology; Kuishuang Feng and Klaus Hubacek, UMD’s Department of Geographical Sciences; and Christian Kerschner, Institut de Ciència i Tecnologia Ambientals, Universitat Autònoma de Barcelona.

A focus on Peak Oil is increasingly gaining attention in both scientific and policy discourses, especially due to its apparent imminence and potential dangers. However, until now, little has been known about how this phenomenon will impact economies. In their paper, the research team constructs a vulnerability map of the U.S. economy, combining two approaches for analyzing economic systems. Their approach reveals the relative importance of individual economic sectors, and how vulnerable these are to oil price shocks. This dual-analysis helps identify which sectors could put the entire U.S. economy at risk from Peak Oil. For the United States, such sectors would include iron mills, chemical and plastic products manufacturing, fertilizer production and air transport.
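
The sketch below gives a minimal, simplified version of this kind of dual screening: a made-up three-sector input-output table, with importance proxied by GDP share plus a Leontief backward-linkage measure, and vulnerability by each sector's direct-plus-supply-chain oil cost share. The sector names and every number are invented; the paper's actual indicators and data are richer.

```python
import numpy as np

# Toy input-output screening for Peak Oil exposure; all figures are invented.
sectors = ["chemicals", "air transport", "services"]
A = np.array([[0.20, 0.05, 0.02],   # technical coefficients: inputs per unit of output
              [0.03, 0.10, 0.01],
              [0.10, 0.15, 0.25]])
gdp_share = np.array([0.10, 0.05, 0.85])        # assumed contribution to GDP
oil_cost_share = np.array([0.30, 0.45, 0.04])   # assumed direct oil cost per unit output

# Leontief inverse captures how final demand propagates through supply chains.
L = np.linalg.inv(np.eye(len(sectors)) - A)

# Structural importance: normalized backward linkages (column sums of L).
structural_role = L.sum(axis=0) / L.sum(axis=0).mean()

# Vulnerability: direct plus indirect oil cost embodied in each sector's output.
oil_vulnerability = L.T @ oil_cost_share

for name, g, s, v in zip(sectors, gdp_share, structural_role, oil_vulnerability):
    print(f"{name:13s} GDP share {g:.2f}  structural role {s:.2f}  oil exposure {v:.2f}")
```

Sectors scoring high on both importance measures and on oil exposure are the analogue of the large bubbles in the top right corner of the figure described above.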

“Our findings provide early warnings to these and related industries about potential trouble in their supply chain,” UMD Professor Hubacek said. “Our aim is to inform and engage government, public and private industry leaders, and to provide a tool for effective Peak Oil policy action planning.”

Although the team’s analysis is embedded in a Peak Oil narrative, it can be used more broadly to develop a climate roadmap for a low carbon economy.

“In this paper, we analyze the vulnerability of the U.S. economy, which is the biggest consumer of oil and oil-based products in the world, and thus provides a good example of an economic system with high resource dependence. However, the notable advantage of our approach is that it does not depend on the Peak-Oil-vulnerability narrative but is equally useful in a climate change context, for designing policies to reduce carbon dioxide emissions. In that case, one could easily include other fossil fuels such as coal in the model and results could help policy makers to identify which sectors can be controlled and/or managed for a maximum, low-carbon effect, without destabilizing the economy,” Professor Hubacek said.

One of the main ways a Peak Oil vulnerable industry can become less so, the authors say, is for that sector to reduce the structural and financial importance of oil. For example, Hubacek and colleagues note that one approach to reducing the importance of oil to agriculture could be to curb the strong dependence on artificial fertilizers by promoting organic farming techniques and/or to reduce the overall distance travelled by people and goods by fostering local, decentralized food economies.

Peak Oil Background and Impact

The Peak Oil dialogue shifts attention away from discourses on “oil depletion” and “stocks” to focus on declining production rates (flows) of oil, and increasing costs of production. The maximum possible daily flow rate (with a given technology) is what eventually determines the peak; thus, the concept can also be useful in the context of other renewable resources.

Improvements in extraction and refining technologies can influence flows, but this tends to lead to steeper decline curves after the peak is eventually reached. Such steep decline curves have also been observed for shale gas wells.
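
One standard way to formalize this flow-rate framing (an illustrative textbook model, not one attributed to the authors) is a logistic production profile:

```latex
% Logistic (Hubbert-style) sketch: cumulative extraction Q(t) and flow rate P(t).
\[
  Q(t) = \frac{Q_{\max}}{1 + e^{-k\,(t - t_{\mathrm{peak}})}},
  \qquad
  P(t) = \frac{dQ}{dt} = k \, Q(t)\left(1 - \frac{Q(t)}{Q_{\max}}\right),
\]
% P(t) peaks at t_peak, when half of the ultimately recoverable resource Q_max
% has been produced; raising k (better extraction technology) lifts the peak
% flow but also steepens the post-peak decline, as noted above for shale wells.
```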

“Shale developments are, so we believe, largely overrated, because of the huge amounts of financial resources that went into them (danger of bubble) and because of their apparent steep decline rates (shale wells tend to peak fast),” according to Dr. Kerschner.

“One important implication of this dialogue shift is that extraction peaks occur much earlier in time than the actual depletion of resources,” Professor Hubacek said. “In other words, Peak Oil is currently predicted by many within the next decade, whereas complete oil depletion will in fact never occur, given increasing prices. This means that eventually petroleum products may be sold in liter bottles in pharmacies like in the old days.”

Journal Reference:

  1. Christian Kerschner, Christina Prell, Kuishuang Feng, Klaus Hubacek. Economic vulnerability to Peak Oil. Global Environmental Change, 2013; DOI: 10.1016/j.gloenvcha.2013.08.015

Transgenic mosquitoes in the skies of the sertão (Agência Pública)

Health

10/10/2013 – 10:36 am

by the Agência Pública newsroom

The traps are devices installed in the homes of some residents in the experiment area. The ovitraps, as they are called, serve as egg-laying sites for the females. Photo: Coletivo Nigéria

With the promise of reducing dengue, a biofactory of transgenic insects has already released 18 million Aedes aegypti mosquitoes in the interior of Bahia. Read the story and watch the video.

Early on a Thursday evening in September, the bus station in Juazeiro, Bahia, was a picture of desolation. In the dimly lit hall there were a stall whose specialty is beef broth, a snack bar with a long counter lined with savory pastries, cookies and potato chips, and a single ticket window, with disturbing clouds of mosquitoes over the heads of those waiting to buy tickets to small towns or northeastern capitals.

Sitting on the banks of the São Francisco river, on the border between Pernambuco and Bahia, Juazeiro was once a city crossed by streams, tributaries of one of the largest rivers in the country. Today it has more than 200,000 inhabitants, forms the largest urban agglomeration in the northeastern semi-arid region together with Petrolina (the two cities add up to half a million people), and is infested with muriçocas (or pernilongos, as the common house mosquitoes are also known). The watercourses that once drained small springs have become open-air sewers, extensive breeding grounds for the insect, traditionally fought with insecticide and electric rackets, or with closed windows and air conditioning for the better-off.

But the residents of Juazeiro are not swatting only muriçocas this early spring. The city is the test center for a new scientific technique that uses transgenic Aedes aegypti to fight dengue, the disease the species transmits. Developed by the British biotechnology company Oxitec, the method consists basically of inserting a lethal gene into male mosquitoes which, released in large numbers into the environment, mate with wild females and produce offspring programmed to die. If the experiment works, the premature death of the larvae progressively reduces the population of this mosquito species.

The technique is the newest weapon against a disease that not only resists but advances in spite of the methods employed so far to control it. The World Health Organization estimates that there may be 50 to 100 million dengue cases per year worldwide. In Brazil the disease is endemic, with annual epidemics in several cities, especially the large capitals. In 2012, between January 1 and February 16 alone, more than 70,000 cases were recorded in the country. In 2013, over the same period, the number nearly tripled, to 204,000 cases. So far this year, 400 people have died of dengue in Brazil.

In Juazeiro, the British-patented method is being tested by the social organization Moscamed, which has been breeding and releasing the transgenic mosquitoes into the open air since 2011. In the biofactory set up in the municipality, which can produce up to 4 million mosquitoes per week, the entire production chain of the transgenic insect is carried out, except for the genetic modification itself, performed in Oxitec's laboratories in Oxford. Transgenic larvae were imported by Moscamed and are now bred in the institution's laboratories.

The tests have been funded from the start by the Bahia State Health Department, with institutional support from Juazeiro's municipal health department, and last July they were extended to the municipality of Jacobina, at the northern tip of the Chapada Diamantina. In this highland town of roughly 80,000 inhabitants, Moscamed is testing the technique's ability to "suppress" (the word scientists use for wiping out the entire mosquito population) Aedes aegypti across a whole city, since in Juazeiro the strategy proved effective but has so far been limited to two neighborhoods.

"The results from 2011 and 2012 showed that [the technique] really worked well. And at the invitation of, and with funding from, the Bahia state government, we decided to move forward and go to Jacobina. Now no longer as a pilot, but as a test to really eliminate the [mosquito] population," says Aldo Malavasi, a retired professor from the Department of Genetics of the University of São Paulo's Institute of Biosciences (USP) and current president of Moscamed. USP is also part of the project.

Malavasi has been working in the region since 2006, when Moscamed was created to fight an agricultural pest, the fruit fly, with a similar approach, the Sterile Insect Technique (SIT). The logic is the same: produce sterile insects to mate with the wild females and thus gradually reduce the population. The difference lies in how the insects are sterilized: radiation instead of genetic modification. SIT has been used widely since the 1970s, mainly against species considered threats to agriculture. The problem is that, until now, the technology was not well suited to mosquitoes such as Aedes aegypti, which did not withstand the radiation well.

The communication plan

The first field releases of the transgenic Aedes took place in the Cayman Islands between late 2009 and 2010. The British territory in the Caribbean, made up of three islands south of Cuba, proved to be not only a tax haven (there are more companies registered in the islands than there are inhabitants, who number 50,000) but also a convenient setting for releasing the transgenic mosquitoes, given the absence of biosafety laws. The Cayman Islands are not a signatory to the Cartagena Protocol, the main international document on the subject, nor are they covered by the Aarhus Convention, approved by the European Union and to which the United Kingdom is a party, which deals with access to information, public participation and access to justice in environmental decision-making.

Instead of the prior publication and public consultation on the risks involved in the experiment that the international agreements mentioned above would require, the roughly 3 million mosquitoes released into the tropical climate of the Cayman Islands went out into the world without any process of debate or public consultation. Authorization was granted solely by the islands' Department of Agriculture. The Mosquito Research & Control Unit, Oxitec's local partner in the tests, posted a promotional video on the subject only in October 2010, and even then without mentioning the transgenic nature of the mosquitoes. The video was released exactly one month before Oxitec itself presented the results of the experiments at the annual meeting of the American Society of Tropical Medicine and Hygiene, in the United States.

A comunidade científica se surpreendeu com a notícia de que as primeiras liberações no mundo de insetos modificados geneticamente já haviam sido realizadas, sem que os próprios especialistas no assunto tivessem conhecimento. A surpresa se estendeu ao resultado: segundo os dados da Oxitec, os experimentos haviam atingido 80% de redução na população de Aedes aegypti nas Ilhas Cayman. O número confirmava para a empresa que a técnica criada em laboratório poderia ser de fato eficiente. Desde então, novos testes de campo passaram a ser articulados em outros países – notadamente subdesenvolvidos ou em desenvolvimento, com clima tropical e problemas históricos com a dengue.

Depois de adiar testes semelhantes em 2006, após protestos, a Malásia se tornou o segundo país a liberar os mosquitos transgênicos, entre dezembro de 2010 e janeiro de 2011. Seis mil mosquitos foram soltos numa área inabitada do país. O número, bem menor em comparação ao das Ilhas Cayman, é quase insignificante diante da quantidade de mosquitos que passou a ser liberada em Juazeiro da Bahia a partir de fevereiro de 2011. A cidade, junto com Jacobina mais recentemente, se tornou desde então o maior campo de testes do tipo no mundo, com mais de 18 milhões de mosquitos já liberados, segundo números da Moscamed.

“A Oxitec errou profundamente, tanto na Malásia quanto nas Ilhas Cayman. Ao contrário do que eles fizeram, nós tivemos um extenso trabalho do que a gente chama de comunicação pública, com total transparência, com discussão com a comunidade, com visita a todas as casas. Houve um trabalho extraordinário aqui”, compara Aldo Malavasi.

Em entrevista por telefone, ele fez questão de demarcar a independência da Moscamed diante da Oxitec e ressaltou a natureza diferente das duas instituições. Criada em 2006, a Moscamed é uma organização social, portanto sem fins lucrativos, que se engajou nos testes do Aedes aegypti transgênico com o objetivo de verificar a eficácia ou não da técnica no combate à dengue. Segundo Malavasi, nenhum financiamento da Oxitec foi aceito por eles, justamente para garantir a isenção na avaliação da técnica. “Nós não queremos dinheiro deles, porque o nosso objetivo é ajudar o governo brasileiro”, resume.

Em favor da transparência, o programa foi intitulado “Projeto Aedes Transgênico” (PAT), para trazer já no nome a palavra espinhosa. Outra determinação de ordem semântica foi o não uso do termo “estéril”, corrente no discurso da empresa britânica, mas tecnicamente incorreto, já que os mosquitos não são estéreis: geram prole, só que programada para morrer no estágio larval. Um jingle pôs o complexo sistema em linguagem popular e em ritmo de forró pé-de-serra. E o bloco de carnaval “Papa Mosquito” saiu às ruas de Juazeiro no Carnaval de 2011.

No âmbito institucional, além do custeio pela Secretaria de Saúde estadual, o programa também ganhou o apoio da Secretaria de Saúde de Juazeiro da Bahia. “De início teve resistência, porque as pessoas também não queriam deixar armadilhas em suas casas, mas depois, com o tempo, elas entenderam o projeto e a gente teve uma boa aceitação popular”, conta o enfermeiro sanitarista Mário Machado, diretor de Promoção e Vigilância à Saúde da secretaria.

As armadilhas, das quais fala Machado, são simples instrumentos instalados nas casas de alguns moradores da área do experimento. As ovitrampas, como são chamadas, fazem as vezes de criadouros para as fêmeas. Assim é possível colher os ovos e verificar se eles foram fecundados por machos transgênicos ou selvagens. Isso também é possível porque os mosquitos geneticamente modificados carregam, além do gene letal, o fragmento do DNA de uma água-viva que lhes confere uma marcação fluorescente, visível em microscópios.

Desta forma, foi possível verificar que a redução da população de Aedes aegypti selvagem atingiu, segundo a Moscamed, 96% em Mandacaru – um assentamento agrícola distante poucos quilômetros do centro comercial de Juazeiro que, pelo isolamento geográfico e aceitação popular, se transformou no local ideal para as liberações. Apesar do número, a Moscamed continua com liberações no bairro. Devido à breve vida do mosquito (a fêmea vive aproximadamente 35 dias), a soltura dos insetos precisa continuar para manter o nível da população selvagem baixo. Atualmente, uma vez por semana um carro deixa a sede da organização com 50 mil mosquitos distribuídos aos milhares em potes plásticos que serão abertos nas ruas de Mandacaru.

“Hoje a maior aceitação é no Mandacaru. A receptividade foi tamanha que a Moscamed não quer sair mais de lá”, enfatiza Mário Machado.

O mesmo não aconteceu com o bairro de Itaberaba, o primeiro a receber os mosquitos no começo de 2011. Nem mesmo o histórico alto índice de infecção pelo Aedes aegypti fez com que o bairro periférico juazeirense, vizinho à sede da Moscamed, aceitasse de bom grado o experimento. Mário Machado estima “em torno de 20%” a parcela da população que se opôs aos testes e pôs fim às liberações.

“Por mais que a gente tente informar, ir de casa em casa, de bar em bar, algumas pessoas desacreditam: ‘Não, vocês estão mentindo pra gente, esse mosquito tá picando a gente’”, resigna-se.

Depois de um ano sem liberações, o mosquito parece não ter deixado muitas lembranças por ali. Em uma caminhada pelo bairro, quase não conseguimos encontrar alguém que soubesse do que estávamos falando. Não obstante, o nome de Itaberaba correu o mundo ao ser divulgado pela Oxitec que o primeiro experimento de campo no Brasil havia atingido 80% de redução na população de mosquitos selvagens.

Supervisora de campo da Moscamed, a bióloga Luiza Garziera foi uma das que foram de casa em casa explicando o processo, por vezes contornando o discurso científico para se fazer entender. “Eu falava que a gente estaria liberando esses mosquitos, que a gente liberava somente o macho, que não pica. Só quem pica é a fêmea. E que esses machos quando ‘namoram’ – porque a gente não pode falar às vezes de ‘cópula’ porque as pessoas não vão entender. Então quando esses machos namoram com a fêmea, os seus filhinhos acabam morrendo”.

Este é um dos detalhes mais importantes sobre a técnica inédita. Ao liberar apenas machos, numa taxa de 10 transgênicos para 1 selvagem, a Moscamed mergulha as pessoas numa nuvem de mosquitos, mas garante que os insetos liberados não piquem os moradores. Isto acontece porque só a fêmea se alimenta de sangue humano, líquido que fornece as proteínas necessárias para sua ovulação.
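
Para dar concretude a essa lógica, segue um esboço mínimo, em Python, de um modelo simplificado de supressão populacional. Ele não é o modelo usado pela Oxitec ou pela Moscamed: assume gerações discretas, acasalamento aleatório e parâmetros hipotéticos (população inicial e fator de crescimento), mantendo apenas a razão de 10 machos transgênicos para 1 selvagem citada na reportagem.

```python
# Esboço ilustrativo (não é o modelo da Oxitec/Moscamed): trajetória da
# população selvagem de Aedes aegypti sob liberação contínua de machos
# transgênicos. Todos os parâmetros numéricos são suposições.

def simular(geracoes=10, n0=10_000, r0=5.0, razao_liberacao=10.0):
    """Retorna a população selvagem estimada a cada geração.

    n0              -- população selvagem inicial (hipotética)
    r0              -- fator líquido de crescimento por geração sem intervenção (hipotético)
    razao_liberacao -- machos transgênicos liberados por macho selvagem (10:1 na reportagem)
    """
    serie = [float(n0)]
    n = float(n0)
    for _ in range(geracoes):
        # Com acasalamento aleatório, a fração de fêmeas que acasala com
        # machos selvagens é 1 / (1 + razão de liberação).
        fracao_fertil = 1.0 / (1.0 + razao_liberacao)
        # A prole dos acasalamentos com transgênicos morre na fase larval,
        # então só essa fração contribui para a geração seguinte.
        n = n * r0 * fracao_fertil
        serie.append(n)
    return serie

if __name__ == "__main__":
    for g, n in enumerate(simular()):
        print(f"geração {g}: ~{n:,.0f} mosquitos selvagens")
```

Com esses números hipotéticos, a população selvagem encolhe a menos da metade a cada geração – uma queda que, acumulada ao longo de poucas gerações, fica na mesma ordem de grandeza das reduções relatadas nos bairros testados. A dinâmica real, com migração e gerações sobrepostas, é naturalmente mais complicada.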

A explicação da tecnologia se encaixa de forma convincente e até didática – talvez com exceção da “modificação genética”, que requer voos mais altos da imaginação. No entanto, a ignorância sobre o assunto ainda campeia entre considerável parcela dos moradores ouvidos para esta reportagem. Quando muito, sabe-se que se trata do extermínio do mosquito da dengue, o que é naturalmente algo positivo. No mais, apenas se ouviu falar do assunto, ou arrisca-se uma hipótese que inclua a muriçoca – esta sim, largamente odiada.

A avaliação dos riscos

Apesar da campanha de comunicação da Moscamed, a ONG britânica GeneWatch aponta uma série de problemas no processo brasileiro. O principal deles: o fato de o relatório de avaliação de riscos sobre o experimento não ter sido disponibilizado ao público antes do início das liberações. Pelo contrário, a pedido dos responsáveis pelo Programa Aedes Transgênico, o processo encaminhado à Comissão Técnica Nacional de Biossegurança (CTNBio, órgão encarregado de autorizar ou não tais experimentos) foi considerado confidencial.

“Nós achamos que a Oxitec deve ter o consentimento plenamente informado da população local, isso significa que as pessoas precisam concordar com o experimento. Mas para isso elas precisam também ser informadas sobre os riscos, assim como você seria se estivesse sendo usado para testar um novo medicamento contra o câncer ou qualquer outro tipo de tratamento”, comentou, em entrevista por Skype, Helen Wallace, diretora executiva da organização não governamental.

Especialista nos riscos e na ética envolvida nesse tipo de experimento, Helen publicou este ano o relatório Genetically Modified Mosquitoes: Ongoing Concerns (“Mosquitos Geneticamente Modificados: atuais preocupações”), que elenca em 13 capítulos o que considera riscos potenciais não considerados antes de se autorizar a liberação dos mosquitos transgênicos. O documento também aponta falhas na condução dos experimentos pela Oxitec.

Por exemplo, após dois anos das liberações nas Ilhas Cayman, apenas os resultados de um pequeno teste haviam aparecido numa publicação científica. No começo de 2011, a empresa submeteu os resultados do maior experimento nas Ilhas à revista Science, mas o artigo não foi publicado. Apenas em setembro do ano passado o texto apareceu em outra revista, a Nature Biotechnology, publicado como “correspondência” – o que significa que não passou pela revisão de outros cientistas, apenas pela checagem do próprio editor da publicação.

Para Helen Wallace, a ausência de revisão crítica dos pares científicos põe o experimento da Oxitec sob suspeita. Mesmo assim, a análise do artigo, segundo o documento, sugere que a empresa precisou aumentar a proporção de liberação de mosquitos transgênicos e concentrá-los em uma pequena área para que atingisse os resultados esperados. O mesmo teria acontecido no Brasil, em Itaberaba. Os resultados do teste no Brasil também ainda não foram publicados pela Moscamed. O gerente do projeto, Danilo Carvalho, informou que um dos artigos já foi submetido a uma publicação e outro está em fase final de escrita.

Outro dos riscos apontados pelo documento está no uso comum do antibiótico tetraciclina. O medicamento é responsável por reverter o gene letal e garantir em laboratório a sobrevivência do mosquito geneticamente modificado, que do contrário não chegaria à fase adulta. Esta é a diferença vital entre a sorte dos mosquitos reproduzidos em laboratório e a de suas crias, geradas no meio ambiente a partir de fêmeas selvagens – sem o antibiótico, estão condenados à morte prematura.

A tetraciclina é comumente empregada nas indústrias da pecuária e da aquicultura, que despejam no meio ambiente grandes quantidades da substância através de seus efluentes. O antibiótico também é largamente usado na medicina e na veterinária. Ou seja, ovos e larvas geneticamente modificados poderiam entrar em contato com o antibiótico mesmo em ambientes não controlados e assim sobreviverem. Ao longo do tempo, a resistência dos mosquitos transgênicos ao gene letal poderia neutralizar seu efeito e, por fim, teríamos uma nova espécie geneticamente modificada adaptada ao meio ambiente.

A hipótese é tratada com ceticismo pela Oxitec, que minimiza a possibilidade de isto acontecer no mundo real. No entanto, documento confidencial tornado público mostra que a hipótese se revelou, por acaso, real nos testes de um pesquisador parceiro da empresa. Ao estranhar uma taxa de sobrevivência das larvas sem tetraciclina de 15% – bem maior que os usuais 3% constatados pelos experimentos da empresa –, os cientistas da Oxitec descobriram que a ração de gato com a qual seus parceiros estavam alimentando os mosquitos guardava resquícios do antibiótico, que é rotineiramente usado para tratar galinhas destinadas à ração animal.

O relatório da GeneWatch chama atenção para a presença comum do antibiótico em dejetos humanos e animais, assim como em sistemas de esgotamento doméstico, a exemplo de fossas sépticas. Isto caracterizaria um risco potencial, já que vários estudos constataram a capacidade do Aedes aegypti se reproduzir em águas contaminadas – apesar de isso ainda não ser o mais comum, nem acontecer ainda em Juazeiro, segundo a Secretaria de Saúde do município.

Além disso, há preocupações quanto à taxa de liberação de fêmeas transgênicas. O processo de separação das pupas (último estágio antes da vida adulta) é feito de forma manual, com a ajuda de um aparelho que reparte os gêneros pelo tamanho (a fêmea é ligeiramente maior). Até 3% das fêmeas podem escapar nesse processo, ganhando a liberdade e aumentando os riscos envolvidos. Por último, os experimentos ainda não verificaram se a redução na população de mosquitos incide diretamente na transmissão da dengue.

Todas as críticas são rebatidas pela Oxitec e pela Moscamed, que dizem manter um rigoroso controle de qualidade – como o monitoramento constante da taxa de liberação de fêmeas e da taxa de sobrevivência das larvas sem tetraciclina. Desta forma, qualquer sinal de mutação do mosquito seria detectado a tempo de se suspender o programa. Ao final de aproximadamente um mês, todos os insetos liberados estariam mortos. Os mosquitos, segundo as instituições responsáveis, também não transmitem os genes modificados, mesmo que alguma fêmea desgarrada pique um ser humano.

Mosquito transgênico à venda

Em julho passado, depois do êxito dos testes de campo em Juazeiro, a Oxitec protocolou a solicitação de licença comercial na Comissão Técnica Nacional de Biossegurança (CTNBio). Desde o final de 2012, a empresa britânica possui CNPJ no país e mantém um funcionário em São Paulo. Mais recentemente, com os resultados promissores dos experimentos em Juazeiro, alugou um galpão em Campinas e está construindo o que será sua sede brasileira. O país representa hoje seu mais provável e iminente mercado, o que faz com que o diretor global de desenvolvimento de negócios da empresa, Glen Slade, viva hoje numa ponte aérea entre Oxford e São Paulo.

“A Oxitec está trabalhando desde 2009 em parceria com a USP e Moscamed, que são parceiros bons e que nos deram a oportunidade de começar projetos no Brasil. Mas agora acabamos de enviar nosso dossiê comercial à CTNBio e esperamos obter um registro no futuro, então precisamos aumentar nossa equipe no país. Claramente estamos investindo no Brasil. É um país muito importante”, disse Slade numa entrevista por Skype da sede na Oxitec, em Oxford, na Inglaterra.

A empresa de biotecnologia é uma spin-out da universidade britânica, o que significa dizer que a Oxitec surgiu dos laboratórios de uma das mais prestigiadas universidades do mundo. Fundada em 2002, desde então vem captando investimentos privados e de fundações sem fins lucrativos, como a Fundação Bill & Melinda Gates, para bancar o prosseguimento das pesquisas. Segundo Slade, mais de R$ 50 milhões foram gastos nesta última década no aperfeiçoamento e teste da tecnologia.

O executivo espera que a conclusão do trâmite burocrático para a concessão da licença comercial aconteça ainda no próximo ano, quando a sede brasileira da Oxitec estará pronta, incluindo uma nova biofábrica. Já em contato com vários municípios do país, o executivo prefere não adiantar nomes. Nem o preço do serviço, que provavelmente será oferecido em pacotes anuais de controle da população de mosquitos, com o orçamento a depender do número de habitantes da cidade.

“Nesse momento é difícil dar um preço. Como todos os produtos novos, o custo de produção é mais alto quando a gente começa do que a gente gostaria. Acho que o preço vai ser um preço muito razoável em relação aos benefícios e aos outros experimentos para controlar o mosquito, mas muito difícil de dizer hoje. Além disso, o preço vai mudar segundo a escala do projeto. Projetos pequenos não são muito eficientes, mas se tivermos a oportunidade de controlar os mosquitos no Rio de Janeiro todo, podemos trabalhar em grande escala e o preço vai baixar”, sugere.

A empresa pretende também instalar novas biofábricas nas cidades que receberem grandes projetos, o que reduzirá o custo a longo prazo, já que as liberações precisam ser mantidas indefinidamente para evitar o retorno dos mosquitos selvagens. A velocidade de reprodução do Aedes aegypti é uma preocupação. Caso seja cessado o projeto, a espécie pode recompor a população em poucas semanas.

“O plano da empresa é conseguir pagamentos repetidos para a liberação desses mosquitos todo ano. Se a tecnologia deles funcionar e realmente reduzir a incidência de dengue, você não poderá suspender estas liberações e ficará preso dentro desse sistema. Uma das maiores preocupações a longo prazo é que se as coisas começarem a dar errado, ou mesmo se tornarem menos eficientes, você realmente pode ter uma situação pior ao longo de muitos anos”, critica Helen Wallace.

O risco iria desde a redução da imunidade das pessoas à doença até o desmantelamento de outras políticas públicas de combate à dengue, como as equipes de agentes de saúde. Apesar de tanto a Moscamed quanto a própria secretaria de Saúde de Juazeiro enfatizarem a natureza complementar da técnica, que não dispensaria os outros métodos de controle, é plausível que haja conflitos na alocação de recursos para a área. Hoje, segundo Mário Machado, da secretaria de Saúde, Juazeiro gasta em média R$ 300 mil por mês no controle de endemias, das quais a dengue é a principal.

A secretaria negocia com a Moscamed a ampliação do experimento para todo o município ou mesmo para toda a região metropolitana formada por Juazeiro e Petrolina – um teste que cobriria meio milhão de pessoas –, para assim avaliar a eficácia em grandes contingentes populacionais. De qualquer forma, e apesar do avanço das experiências, nem a organização social brasileira nem a empresa britânica apresentaram estimativas de preço para uma possível liberação comercial.

“Ontem nós estávamos fazendo os primeiros estudos, pra analisar qual é o preço deles, qual o nosso. Porque eles sabem quanto custa o programa deles, que não é barato, mas não divulgam”, disse Mário Machado.

Em reportagem do jornal britânico The Observer de julho do ano passado, a Oxitec estimou o custo da técnica em “menos de” 6 libras esterlinas por pessoa por ano. Num cálculo simples, apenas multiplicando o número pela cotação atual da moeda britânica frente ao real e desconsiderando as inúmeras outras variáveis dessa conta, o projeto em uma cidade de 150 mil habitantes custaria aproximadamente R$ 3,2 milhões por ano.
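
A conta pode ser conferida num cálculo rápido; o câmbio usado abaixo é uma suposição aproximada da cotação da época, e os demais números vêm da própria reportagem.

```python
# Conferência da estimativa citada acima. O câmbio é uma suposição ilustrativa.
custo_por_pessoa_gbp = 6.0     # "menos de" 6 libras por pessoa por ano, segundo a Oxitec
habitantes = 150_000           # cidade hipotética de médio porte usada no exemplo
cambio_brl_por_gbp = 3.6       # cotação aproximada libra/real à época (suposição)

custo_anual_brl = custo_por_pessoa_gbp * habitantes * cambio_brl_por_gbp
print(f"Custo anual estimado: R$ {custo_anual_brl:,.0f}")  # ~R$ 3,2 milhões
```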

Se imaginarmos a quantidade de municípios brasileiros de pequeno e médio porte em que a dengue é endêmica, chega-se à pujança do mercado que se abre – mesmo desconsiderando por ora os grandes centros urbanos do país, que extrapolariam a capacidade atual da técnica. Contudo, esta é apenas uma fatia do negócio. A Oxitec também possui uma série de outros insetos transgênicos, estes destinados ao controle de pragas agrícolas e que devem encontrar campo aberto no Brasil, um dos gigantes do agronegócio no mundo.

Aguardando autorização da CTNBio, a Moscamed já se prepara para testar a mosca-das-frutas transgênica, que segue a mesma lógica do Aedes aegypti. Além desta, a Oxitec tem outras 4 espécies geneticamente modificadas que poderão um dia ser testadas no Brasil, a começar por Juazeiro e o Vale do São Francisco. A região é uma das maiores produtoras de frutas frescas para exportação do país: 90% de toda a uva e manga exportadas pelo Brasil saem daqui. Uma produção que requer o combate incessante às pragas. Nas principais avenidas de Juazeiro e Petrolina, as lojas de produtos agrícolas e agrotóxicos se sucedem, variando em seus totens as logos das multinacionais do ramo.

“Não temos planos concretos [além da mosca-das-frutas], mas, claro, gostaríamos muito de ter a oportunidade de fazer ensaios com esses produtos também. O Brasil tem uma indústria agrícola muito grande. Mas nesse momento nossa prioridade número 1 é o mosquito da dengue. Então uma vez que tivermos este projeto com recursos bastante, vamos tentar acrescentar projetos na agricultura”, comentou Slade.

Ele e vários de seus colegas do primeiro escalão da empresa já trabalharam numa das gigantes do agronegócio, a Syngenta. O fato, segundo Helen Wallace, é um dos que revelam a condição do Aedes aegypti transgênico de pioneiro de todo um novo mercado de mosquitos geneticamente modificados: “Nós achamos que a Syngenta está principalmente interessada nas pragas agrícolas. Um dos planos que conhecemos é a proposta de usar pragas agrícolas geneticamente modificadas junto com sementes transgênicas para assim aumentar a resistência destas culturas às pragas”.

“Não tem nenhum relacionamento entre Oxitec e Syngenta dessa forma. Talvez tenhamos possibilidade no futuro de trabalharmos juntos. Eu pessoalmente tenho o interesse de buscar projetos que possamos fazer com Syngenta, Basf ou outras empresas grandes da agricultura”, esclarece Glen Slade.

Em 2011, a indústria de agrotóxicos faturou R$ 14,1 bilhões no Brasil. Maior mercado do tipo no mundo, o país pode nos próximos anos inaugurar um novo estágio tecnológico no combate às pragas. Assim como na saúde coletiva, com o Aedes aegypti transgênico, que parece ter um futuro comercial promissor. Todavia, resta saber como a técnica conviverá com as vacinas contra o vírus da dengue, que estão em fase final de testes – uma desenvolvida por um laboratório francês, outra pelo Instituto Butantan, de São Paulo. As vacinas devem chegar ao público em 2015. O mosquito transgênico, talvez já no próximo ano.

Dentre as linhagens de mosquitos transgênicos, pode surgir também uma versão nacional. Como confirmou a professora Margareth de Lara Capurro-Guimarães, do Departamento de Parasitologia da USP e coordenadora do Programa Aedes Transgênico, já está sob estudo na universidade paulista a muriçoca transgênica. Outra possível solução tecnológica para um problema de saúde pública em Juazeiro da Bahia – uma cidade na qual, segundo levantamento do Sistema Nacional de Informações sobre Saneamento (SNIS) de 2011, a rede de esgoto só atende 67% da população urbana.

* Publicado originalmente no site Agência Pública.

(Agência Pública)

Estamos preparados para o pré-sal e o gás de xisto? (O Estado de São Paulo)

JC e-mail 4817, de 20 de Setembro de 2013.

Em artigo publicado no Estadão, Washington Novaes* reforça o alerta da SBPC sobre os riscos da exploração do gás xisto

Anuncia-se que em novembro vão a leilão áreas brasileiras onde se pretende explorar o gás de xisto, da mesma forma que estão sendo leiloadas áreas do pré-sal para exploração de petróleo no mar. Deveríamos ser prudentes nas duas direções. No pré-sal, não se conhecem suficientemente possíveis consequências de exploração em áreas profundas. No caso do xisto, em vários países já há proibições de exploração ou restrições, por causa das consequências, na sua volta à superfície, da água e de insumos químicos injetados no solo para “fraturar” as camadas de rocha onde se encontra o gás a ser liberado. Mas as razões financeiras, em ambos os casos, são muito fortes e estão prevalecendo em vários lugares, principalmente nos Estados Unidos.

No Brasil, onde a tecnologia para o fraturamento de rochas ainda vai começar a ser utilizada, há um questionamento forte da Sociedade Brasileira para o Progresso da Ciência (SBPC) e da Academia Brasileira de Ciências, que, em carta à presidente da República (5/8), manifestaram sua preocupação com esse leilão para campos de gás em bacias sedimentares. Nestas, diz a carta, agências dos EUA divulgaram que o Brasil teria reservas de 7,35 trilhões de metros cúbicos em bacias no Paraná, no Parnaíba, no Solimões, no Amazonas, no Recôncavo Baiano e no São Francisco. A Agência Nacional de Petróleo (ANP) estima que as reservas podem ser o dobro disso. Mas, segundo a SBPC e a ANP, falta “conhecimento das características petrográficas, estruturais e geomecânicas” consideradas nesses cálculos, que poderão influir “decisivamente na economicidade de sua exploração”.

E ainda seria preciso considerar os altos volumes de água no processo de fratura de rochas para liberar gás, “que retornam à superfície poluídos por hidrocarbonetos e por outros compostos”, além de metais presentes nas rochas e “dos próprios aditivos químicos utilizados, que exigem caríssimas técnicas de purificação e de descarte dos resíduos finais”. A água utilizada precisaria ser confrontada “com outros usos considerados preferenciais”, como o abastecimento humano. E lembrar ainda que parte das reservas está “logo abaixo do Aquífero Guarani”; a exploração deveria “ser avaliada com muita cautela, já que há um potencial risco de contaminação das águas deste aquífero”.

Diante disso, não deveria haver licitações imediatas, “excluindo a comunidade científica e os próprios órgãos reguladores do país da possibilidade de acesso e discussão das informações”, que “poderão ser obtidas por meio de estudos realizados diretamente pelas universidades e institutos de pesquisa”. Além do maior conhecimento científico das jazidas, os estudos poderão mostrar “consequências ambientais dessa atividade, que poderão superar amplamente seus eventuais ganhos sociais”. É uma argumentação forte, que, em reunião da SBPC no Recife (22 a 27/7), levou a um pedido de que seja sustada a licitação de novembro.

Em muitos outros lugares a polêmica está acesa – como comenta o professor Luiz Fernando Scheibe, da USP, doutor em Mineração e Petrologia (12/9). Como na Grã-Bretanha, onde se argumenta que a tecnologia de fratura, entre muitos outros problemas, pode contribuir até para terremotos. A liberação de metano no processo também pode ser altamente problemática, já que tem efeitos danosos equivalentes a mais de 20 vezes os do dióxido de carbono, embora permaneça menos tempo na atmosfera. E com isso anularia as vantagens do gás de xisto para substituir o uso de carvão mineral. O próprio Programa das Nações Unidas para o Meio Ambiente (Pnuma) tem argumentado que o gás de xisto pode, na verdade, aumentar as emissões de poluentes que contribuem para mudanças do clima.

Na França os protestos têm sido muitos (Le Monde, 16/7) e levado o país a restrições fortes, assim como na Bulgária. Alguns Estados norte-americanos proibiram a tecnologia em seus territórios, mas o governo dos EUA a tem aprovado, principalmente porque o gás de xisto não só é mais barato que o carvão, como reduziu substancialmente as importações de combustíveis fósseis do país, até lhe permitindo exportar carvão excedente. E a Agência Internacional de Energia está prevendo que até 2035 haverá exploração do gás de xisto em mais de 1 milhão de pontos no mundo. Nos EUA, este ano, a produção de gás de xisto estará em cerca de 250 bilhões de metros cúbicos – facilitada pela decisão governamental de liberar a Agência de Proteção Ambiental de examinar possíveis riscos no processo e pela existência de extensa rede de gasodutos (o Brasil só os tem na região leste; gás consumido aqui vem da Bolívia).

Também a China seria potencial usuária do gás, pois 70% de sua energia vem de 3 bilhões de toneladas anuais de carvão (quase 50% do consumo no mundo). Embora tenha 30 trilhões de metros cúbicos de gás de xisto – mais que os EUA –, o problema é que as jazidas se situam em região de montanhas, muito distante dos centros de consumo – o que implicaria um aumento de 50% no custo para o usuário, comparado com o carvão. Por isso mesmo, a China deverá aumentar o consumo do carvão nas próximas décadas (Michael Brooks na New Scientist, 10/8).

E assim vamos, em mais uma questão que sintetiza o dilema algumas vezes já comentado neste espaço: lógica financeira versus lógica “ambiental”, da sobrevivência. Com governos, empresas, pessoas diante da opção de renunciar a certas tecnologias e ao uso de certos bens – por causa dos problemas de poluição, clima, consumo insustentável de recursos, etc. -, ou usá-los por causa das vantagens financeiras imediatas, que podem ser muito fortes.

Cada vez mais, será esse o centro das discussões mais fortes em toda parte, inclusive no Brasil – com repercussões amplas nos campos político e social. Preparemo-nos.

*Washington Novaes é jornalista.

Global Networks Must Be Redesigned, Experts Urge (Science Daily)

May 1, 2013 — Our global networks have generated many benefits and new opportunities. However, they have also established highways for failure propagation, which can ultimately result in human-made disasters. For example, today’s quick spreading of emerging epidemics is largely a result of global air traffic, with serious impacts on global health, social welfare, and economic systems.


Helbing’s publication illustrates how cascade effects and complex dynamics amplify the vulnerability of networked systems. For example, just a few long-distance connections can largely decrease our ability to mitigate the threats posed by global pandemics. Initially beneficial trends, such as globalization, increasing network densities, higher complexity, and an acceleration of institutional decision processes may ultimately push human-made or human-influenced systems towards systemic instability, Helbing finds. Systemic instability refers to a system, which will get out of control sooner or later, even if everybody involved is well skilled, highly motivated and behaving properly. Crowd disasters are shocking examples illustrating that many deaths may occur even when everybody tries hard not to hurt anyone.

Our Intuition of Systemic Risks Is Misleading

Networking system components that are well-behaved in separation may create counter-intuitive emergent system behaviors, which are not well-behaved at all. For example, cooperative behavior might unexpectedly break down as the connectivity of interaction partners grows. “Applying this to the global network of banks, this might actually have caused the financial meltdown in 2008,” believes Helbing.

Globally networked risks are difficult to identify, map and understand, since there are often no evident, unique cause-effect relationships. Failure rates may change depending on the random path taken by the system, with the consequence of increasing risks as cascade failures progress, thereby decreasing the capacity of the system to recover. “In certain cases, cascade effects might reach any size, and the damage might be practically unbounded,” says Helbing. “This is quite disturbing and hard to imagine.” All of these features make strongly coupled, complex systems difficult to predict and control, such that our attempts to manage them go astray.
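
As a rough illustration of why cascade size is so sensitive to connectivity, the sketch below runs a toy threshold-failure model on a random network. It is not Helbing's model; the graph, thresholds and seed failures are arbitrary assumptions chosen only to show how the same small shock can stay local or sweep through much of the system.

```python
# Toy cascade model (illustrative only, not Helbing's): a node fails once the
# fraction of its failed neighbours reaches a threshold. All parameters are
# arbitrary assumptions.
import random

def cascade_size(n=1000, avg_degree=6.0, threshold=0.3, seed_failures=5, seed=42):
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    # Build a simple Erdos-Renyi random graph as adjacency lists.
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                neighbors[i].append(j)
                neighbors[j].append(i)
    failed = set(rng.sample(range(n), seed_failures))
    changed = True
    while changed:
        changed = False
        for node in range(n):
            if node in failed or not neighbors[node]:
                continue
            frac_failed = sum(nb in failed for nb in neighbors[node]) / len(neighbors[node])
            if frac_failed >= threshold:
                failed.add(node)
                changed = True
    return len(failed)

if __name__ == "__main__":
    for k in (2.0, 4.0, 6.0, 8.0):
        print(f"average degree {k}: {cascade_size(avg_degree=k)} of 1000 nodes failed")
```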

“Take the financial system,” says Helbing. “The financial crisis hit regulators by surprise.” But back in 2003, the legendary investor Warren Buffet warned of mega-catastrophic risks created by large-scale investments into financial derivatives. It took 5 years until the “investment time bomb” exploded, causing losses of trillions of dollars to our economy. “The financial architecture is not properly designed,” concludes Helbing. “The system lacks breaking points, as we have them in our electrical system.” This allows local problems to spread globally, thereby reaching catastrophic dimensions.

A Global Ticking Time Bomb?

Have we unintentionally created a global time bomb? If so, what kinds of global catastrophic scenarios might humans face in complex societies? A collapse of the world economy or of our information and communication systems? Global pandemics? Unsustainable growth or environmental change? A global food or energy crisis? A cultural clash or global-scale conflict? Or will we face a combination of these contagious phenomena — a scenario that the World Economic Forum calls the “perfect storm”?

“While analyzing such global risks,” says Helbing, “one must bear in mind that the propagation speed of destructive cascade effects might be slow, but nevertheless hard to stop. It is time to recognize that crowd disasters, conflicts, revolutions, wars, and financial crises are the undesired result of operating socio-economic systems in the wrong parameter range, where systems are unstable.” In the past, these social problems seemed to be puzzling, unrelated, and almost “God-given” phenomena one had to live with. Nowadays, thanks to new complexity science models and large-scale data sets (“Big Data”), one can analyze and understand the underlying mechanisms, which let complex systems get out of control.

Disasters should not be considered “bad luck.” They are a result of inappropriate interactions and institutional settings, caused by humans. Even worse, they are often the consequence of a flawed understanding of counter-intuitive system behaviors. “For example, it is surprising that we didn’t have sufficient precautions against a financial crisis and well-elaborated contingency plans,” states Helbing. “Perhaps, this is because there should not be any bubbles and crashes according to the predominant theoretical paradigm of efficient markets.” Conventional thinking can cause fateful decisions and the repetition of previous mistakes. “In other words: While we want to do the right thing, we often do wrong things,” concludes Helbing. This obviously calls for a paradigm shift in our thinking. “For example, we may try to promote innovation, but suffer economic decline, because innovation requires diversity more than homogenization.”

Global Networks Must Be Re-Designed

Helbing’s publication explores why today’s risk analysis falls short. “Predictability and controllability are design issues,” stresses Helbing. “And uncertainty, which means the impossibility to determine the likelihood and expected size of damage, is often man-made.” Many systems could be better managed with real-time data. These would allow one to avoid delayed response and to enhance the transparency, understanding, and adaptive control of systems. However, even all the data in the world cannot compensate for ill-designed systems such as the current financial system. Such systems will sooner or later get out of control, causing catastrophic human-made failure. Therefore, a re-design of such systems is urgently needed.

Helbing’s Nature paper on “Globally Networked Risks” also calls attention to strategies that make systems more resilient, i.e. able to recover from shocks. For example, setting up backup systems (e.g. a parallel financial system), limiting the system size and connectivity, building in breaking points to stop cascade effects, or reducing complexity may be used to improve resilience. In the case of financial systems, there is still much work to be done to fully incorporate these principles.

Contemporary information and communication technologies (ICT) are also far from being failure-proof. They are based on principles that are 30 or more years old and not designed for today’s use. The explosion of cyber risks is a logical consequence. This includes threats to individuals (such as privacy intrusion, identity theft, or manipulation through personalized information), to companies (such as cybercrime), and to societies (such as cyberwar or totalitarian control). To counter this, Helbing recommends an entirely new ICT architecture inspired by principles of decentralized self-organization as observed in immune systems, ecology, and social systems.

Coming Era of Social Innovation

A better understanding of the success principles of societies is urgently needed. “For example, when systems become too complex, they cannot be effectively managed top-down,” explains Helbing. “Guided self-organization is a promising alternative to manage complex dynamical systems bottom-up, in a decentralized way.” The underlying idea is to exploit, rather than fight, the inherent tendency of complex systems to self-organize and thereby create a robust, ordered state. For this, it is important to have the right kinds of interactions, adaptive feedback mechanisms, and institutional settings, i.e. to establish proper “rules of the game.” The paper offers the example of an intriguing “self-control” principle, where traffic lights are controlled bottom-up by the vehicle flows rather than top-down by a traffic center.

Creating and Protecting Social Capital

“One man’s disaster is another man’s opportunity. Therefore, many problems can only be successfully addressed with transparency, accountability, awareness, and collective responsibility,” underlines Helbing. Moreover, social capital such as cooperativeness or trust is important for economic value generation, social well-being and societal resilience, but it may be damaged or exploited. “Humans must learn how to quantify and protect social capital. A warning example is the loss of trillions of dollars in the stock markets during the financial crisis.” This crisis was largely caused by a loss of trust. “It is important to stress that risk insurances today do not consider damage to social capital,” Helbing continues. However, it is known that large-scale disasters have a disproportionate public impact, in part because they destroy social capital. As we neglect social capital in risk assessments, we are taking excessive risks.

Journal Reference:

  1. Dirk Helbing. Globally networked risks and how to respond. Nature, 2013; 497 (7447): 51. DOI: 10.1038/nature12047

Politicians Found to Be More Risk-Tolerant Than the General Population (Science Daily)

Apr. 16, 2013 — According to a recent study, the popularly elected members of the German Bundestag are substantially more risk-tolerant than the broader population of Germany. Researchers in the Cluster of Excellence “Languages of Emotion” at Freie Universität Berlin and at DIW Berlin (German Institute for Economic Research) conducted a survey of Bundestag representatives and analyzed data on the general population from the German Socio-Economic Panel Study (SOEP). Results show that risk tolerance is even higher among Bundestag representatives than among self-employed people, who are themselves more risk-tolerant than salaried employees or civil servants. This was true for all areas of risk that were surveyed in the study: automobile driving, financial investments, sports and leisure activities, career, and health. The authors interpret this finding as positive.

The full results of the study were published in German in the SOEPpapers series of the German Institute for Economic Research (DIW Berlin).

The authors of the study, Moritz Hess (University of Mannheim), Prof. Dr. Christian von Scheve (Freie Universität Berlin and DIW Berlin), Prof. Dr. Jürgen Schupp (DIW Berlin and Freie Universität Berlin), and Prof. Dr. Gert G. Wagner (DIW Berlin and Technische Universität Berlin) view the above-average risk tolerance found among Bundestag representatives as positive. According to sociologist and lead author of the study Moritz Hess: “Otherwise, important societal decisions often wouldn’t be made due to the almost incalculable risks involved. This would lead to stagnation and social standstill.” The authors do not interpret the higher risk-tolerance found among politicians as a threat to democracy. “The results show a successful and sensible division of labor among citizens, voters, and politicians,” says economist Gert G. Wagner. Democratic structures and parliamentary processes, he argues, act as a brake on the individual risk propensity of elected representatives and politicians.

For their study, the research team distributed written questionnaires to all 620 members of the 17th German Bundestag in late 2011. Twenty-eight percent of Bundestag members responded. Comparisons with the statistical characteristics of all current Bundestag representatives showed that the respondents comprise a representative sample of Bundestag members. SOEP data were used to obtain a figure for the risk tolerance of the general population for comparison with the figures for Bundestag members.

The questions posed to Bundestag members were formulated analogously to the questions in the standard SOEP questionnaire. Politicians were asked to rate their own risk tolerance on a scale from zero (= not at all risk-tolerant) to ten (= very risk-tolerant). They rated both their general risk tolerance as well as their specific risk tolerance in the areas of driving, making financial investments, sports and leisure activities, career, health, and trust towards strangers. They also rated their risk tolerance in regard to political decisions. No questions on party affiliation were asked in order to exclude the possibility that results could be used for partisan political purposes.

References:

Hess, M., von Scheve, C., Schupp, J., Wagner. G. G. (2013): Members of German Federal Parliament More Risk-Loving Than General Population, in: DIW Economic Bulletin, Vol. 3, No. 4, 2013, pp. 20-24.

Hess, M., von Scheve, C., Schupp, J., Wagner. G. G. (2013): Sind Politiker risikofreudiger als das Volk? Eine empirische Studie zu Mitgliedern des Deutschen Bundestags, SOEPpaper No. 545, DIW Berlin.

In Big Data, We Hope and Distrust (Huffington Post)

By Robert Hall

Posted: 04/03/2013 6:57 pm

“In God we trust. All others must bring data.” — W. Edwards Deming, statistician, quality guru

Big data helped reelect a president, find Osama bin Laden, and contributed to the meltdown of our financial system. We are in the midst of a data revolution where social media introduces new terms like Arab Spring, Facebook Depression and Twitter anxiety that reflect a new reality: Big data is changing the social and relationship fabric of our culture.

We spend hours installing and learning how to use the latest versions of our ever-expanding technology while enduring a never-ending battle to protect our information. Then we labor while developing practices to rid ourselves of technology — rules for turning devices off during meetings or movies, legislation to outlaw texting while driving, restrictions in classrooms to prevent cheating, and scheduling meals or family time where devices are turned off. Information and technology: We love it, hate it, can’t live with it, can’t live without it, use it voraciously, and distrust it immensely. I am schizophrenic and so am I.

Big data is not only big but growing rapidly. According to IBM, we create 2.5 quintillion bytes of data a day, and “ninety percent of the data in the world has been created in the last two years.” Vast new computing capacity can analyze Web-browsing trails that track our every click, sensor signals from every conceivable device, GPS tracking and social network traffic. It is now possible to measure and monitor people and machines to an astonishing degree. How exciting, how promising. And how scary.

This is not our first data rodeo. The early stages of the customer relationship management movement were filled with hope and with hype. Large data warehouses were going to provide the kind of information that would make companies masters of customer relationships. There were just two problems. First, getting the data out of the warehouse wasn’t nearly as hard as getting it into the person or device interacting with the customers in a way that added value, trust and expanded relationships. We seem to always underestimate the speed of technology and overestimate the speed at which we can absorb it and socialize around it.

Second, unfortunately the customers didn’t get the memo and mostly decided in their own rich wisdom they did not need or want “masters.” In fact as providers became masters of knowing all the details about our lives, consumers became more concerned. So while many organizations were trying to learn more about customer histories, behaviors and future needs — customers and even their governments were busy trying to protect privacy, security, and access. Anyone attempting to help an adult friend or family member with mental health issues has probably run into well-intentioned HIPAA rules (regulations that ensure privacy of medical records) that unfortunately also restrict the ways you can assist them. Big data gives and the fear of big data takes away.

Big data does not big relationships make. Over the last 20 years, as our data keeps getting stronger, our customer relationships keep getting weaker. Eighty-six percent of consumers trust corporations less than they did five years ago. Customer retention across industries has fallen about 30 percent in recent years. Is it actually possible that we have unwittingly contributed to the undermining of our customer relationships? How could that be? For one thing, as companies keep getting better at targeting messages to specific groups, those groups keep getting better at blocking those messages. As usual, the power to resist trumps the power to exert.

No matter how powerful big data becomes, if it is to realize its potential, it must build trust on three levels. First, customers must trust our intentions. Data that can be used for us can also be used against us. There is growing fear institutions will become a part of a “surveillance state.” While organizations have gone to great lengths to promote protection of our data, the numbers reflect a fair amount of doubt. For example, according to MainStreet, “87 percent of Americans do not feel large banks are transparent and 68 percent do not feel their bank is on their side.”

Second, customers must trust our actions. Even if they trust our intentions, they might still fear that our actions put them at risk. Our private information can be hacked, then misused and disclosed in damaging and embarrassing ways. After the Sandy Hook tragedy a New York newspaper published the names and addresses of over 33,000 licensed gun owners along with an interactive map that showed exactly where they lived. In response names and addresses of the newspaper editor and writers were published on-line along with information about their children. No one, including retired judges, law enforcement officers and FBI agents expected their private information to be published in the midst of a very high decibel controversy.

Third, customers must trust the outcome — that sharing data will benefit them. Even with positive intentions and constructive actions, the results may range from disappointing to damaging. Most of us have provided email addresses or other contact data — around a customer service issue or such — and then started receiving email, phone or online solicitations. I know a retired executive who helps hard-to-hire people. She spent one evening surfing the Internet to research expunging criminal records for released felons. Years later, Amazon greets her with books targeted to the felon it believes she is. Even with opt-out options, we feel used. Or we provide specific information, only to have to repeat it in the next transaction or interaction — not getting the hoped-for benefit of saving our time.

It will be challenging to grow the trust at anywhere near the rate we grow the data. Information develops rapidly, competence and trust develop slowly. Investing heavily in big data and scrimping on trust will have the opposite effect desired. To quote Dolly Parton who knows a thing or two about big: “It costs a lot of money to look this cheap.”

How Big Could a Man-Made Earthquake Get? (Popular Mechanics)

Scientists have found evidence that wastewater injection induced a record-setting quake in Oklahoma two years ago. How big can a man-made earthquake get, and will we see more of them in the future?

By Sarah Fecht – April 2, 2013 5:00 PM

In November 2011, a magnitude-5.7 earthquake rattled Prague, Okla., and was felt in 16 nearby states. It flattened 14 homes and many other buildings, injured two people, and set the record as the state’s largest recorded earthquake. And according to a new study in the journal Geology, the event can also claim the title of Largest Earthquake That’s Ever Been Induced by Fluid Injection.

In the paper, a team of geologists pinpoints the quake’s starting point at less than 200 meters (about 650 feet) from an injection well where wastewater from oil drilling was being pumped into the ground at high pressures. At 5.7 magnitude, the Prague earthquake was about 10 times stronger than the previous record holder: a magnitude-4.8 Rocky Mountain Arsenal earthquake in Colorado in 1967, caused by the U.S. Army injecting a deep well with 148,000 gallons per day of fluid wastes from chemical-weapons testing. So how big can these man-made earthquakes get?
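
For scale, the standard magnitude relations put numbers on that comparison: a 0.9-unit magnitude gap corresponds to roughly an eightfold difference in shaking amplitude and about a twentyfold difference in radiated energy, so "about 10 times stronger" is closest to the amplitude reading. The quick check below uses only those textbook relations, not data from the Geology study.

```python
# Back-of-the-envelope comparison using standard magnitude scaling relations.
m_prague, m_arsenal = 5.7, 4.8
dm = m_prague - m_arsenal
amplitude_ratio = 10 ** dm         # ground-motion amplitude scales as 10**dM
energy_ratio = 10 ** (1.5 * dm)    # radiated energy scales as 10**(1.5*dM)
print(f"amplitude: ~{amplitude_ratio:.1f}x, energy: ~{energy_ratio:.0f}x")
```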

The short answer is that scientists don’t really know yet, but it’s possible that fluid injection could cause some big ones on very rare occasions. “We don’t see any reason that there should be any upper limit for an earthquake that is induced,” says Bill Ellsworth, a geophysicist with the U.S. Geological Survey, who wasn’t involved in the new study.

As with natural earthquakes, most man-made earthquakes have been small to moderate in size, and most are felt only by seismometers. Larger quakes are orders of magnitude rarer than small quakes. For example, for every 1000 magnitude-1.0 earthquakes that occur, expect to see 100 magnitude-2.0s, 10 magnitude-3.0s, just 1 magnitude-4.0, and so on. And just as with natural earthquakes, the strength of the induced earthquake depends on the size of the nearby fault and the amount of stress acting on it. Some faults just don’t have the capacity to cause big earthquakes, whether natural or induced.
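
That 1000 : 100 : 10 : 1 progression is the Gutenberg-Richter frequency-magnitude law with a b-value of 1, which the snippet below spells out; the counts are illustrative, not a forecast for any particular region.

```python
# Gutenberg-Richter scaling: expected counts fall by a factor of 10**b per
# magnitude unit (b = 1 here, matching the 1000:100:10:1 example above).
b = 1.0
base_count = 1000        # e.g. 1000 magnitude-1.0 quakes in some region and period
base_magnitude = 1.0
for m in (1.0, 2.0, 3.0, 4.0, 5.0):
    expected = base_count * 10 ** (-b * (m - base_magnitude))
    print(f"magnitude {m:.1f}: ~{expected:g} expected events")
```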

How Do Humans Trigger Earthquakes?

Faults have two major kinds of stressors: shear stress, which makes two plates slide past each other along the fault line, and normal stress, which pushes the two plates together. Usually the normal stress keeps the fault from moving sideways. But when a fluid is injected into the ground, as in Prague, that can reduce the normal stress and make it easier for the fault to slip sideways. It’s as if you have a tall stack of books on a table, Ellsworth says: If you take half the books away, it’s easier to slide the stack across the table.

“Water increases the fluid pressure in pores of rocks, which acts against the pressure across the fault,” says Geoffrey Abers, a Columbia University geologist and one of the new study’s authors. “By increasing the fluid pressure, you’re decreasing the strength of the fault.”
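
The mechanism Abers describes is the textbook Coulomb failure criterion with effective normal stress; the sketch below encodes it with made-up numbers (the friction coefficient, stresses and pressures are assumptions, not values from the study) just to show how raising pore pressure can tip a stable fault into slipping.

```python
# Coulomb failure with pore pressure (textbook relation; all numbers invented).
def fault_slips(shear_stress, normal_stress, pore_pressure, friction=0.6, cohesion=0.0):
    """The fault slips when shear stress exceeds frictional resistance,
    which depends on the effective normal stress (normal minus pore pressure)."""
    effective_normal = normal_stress - pore_pressure
    return shear_stress >= cohesion + friction * effective_normal

# Same fault, same tectonic loading (arbitrary units, e.g. MPa):
print(fault_slips(shear_stress=30, normal_stress=60, pore_pressure=0))   # False: fault holds
print(fault_slips(shear_stress=30, normal_stress=60, pore_pressure=15))  # True: injection tips it
```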

A similar mechanism may be behind earthquakes induced by large water reservoirs. In those instances, the artificial lake behind a dam causes water to seep into the pore spaces in the ground. In 1967, India’s Koyna Dam caused a 6.5 earthquake that killed 177 people, injured more than 2000, and left 50,000 homeless. Unprecedented seasonal fluctuations in water level behind a dam in Oroville, Calif., are believed to be behind the magnitude-6.1 earthquake that occurred there in 1975.

Extracting a fluid from the ground can also contribute to triggering a quake. “Think about filling a balloon with water and burying it at the beach,” Ellsworth says. “If you let the water out, the sand will collapse inward.” Similarly, when humans remove large amounts of oil and natural gas from the ground, it can put additional stress on a fault line. “In this case it may be the shear stresses that are being increased, rather than normal stresses,” Ellsworth says.

Take the example of the Gazli gas field in Uzbekistan, thought to be located in a seismically inactive area when drilling began in 1962. As drillers removed the natural gas, the pressure in the gas field dropped from 1030 psi in 1962 to 515 psi in 1976, then down to 218 psi in 1985. Meanwhile, three large magnitude-7.0 earthquakes struck: two in 1976 and one in 1984. Each quake had an epicenter within 12 miles of Gazli and caused a surface uplift of some 31 inches. Because the quakes occurred in Soviet-era Uzbekistan, information about the exact locations, magnitudes, and causes are not available. However, a report by the National Research Council concludes that “observations of crustal uplift and the proximity of these large earthquakes to the Gazli gas field in a previously seismically quiet region strongly suggest that they were induced by hydrocarbon extraction.” Extraction of oil is believed to have caused at least three big earthquakes in California, with magnitudes of 5.9, 6.1, and 6.5.

Some people worry that hydraulic fracturing, or fracking – wherein high-pressure fluids are used to crack through rock layers to extract oil and natural gas – will lead to an increased risk of earthquakes. However, the National Research Council report points out that there are tens of thousands of hydrofracking wells in existence today, and there has only been one case in which a “felt” tremor was linked to fracking. That was a 2.3 earthquake in Blackpool, England, in 2011, which didn’t cause any significant damage. Although scientists have known since the 1920s that humans trigger earthquakes, experts caution that it’s not always easy to determine whether a specific event was induced.

Are Human Activities Making Quakes More Common?

Human activities have been linked to increased earthquake frequencies in certain areas. For instance, researchers have shown a strong correlation between the volume of fluid injected into the Rocky Mountain Arsenal well and the frequency of earthquakes in that area.

Geothermal-energy sites can also induce many earthquakes, possibly due to pressure, heat, and volume changes. The Geysers in California is the largest geothermal field in the U.S., generating 725 megawatts of electricity using steam from deep within the earth. Before The Geysers began operating in 1960, seismic activity was low in the area. Now the area experiences hundreds of earthquakes per year. Researchers have found correlations between the volume of steam production and the number of earthquakes in the region. In addition, as the area of the steam wells increased over the years, so did the spatial distribution of earthquakes.

Whether or not human activity is increasing the magnitude of earthquakes, however, is more of a gray area. When it comes to injection wells, evidence suggests that earthquake magnitudes rise along with the volume of injected wastewater, and possibly injection pressure and rate of injection as well, according to a statement from the Department of Interior.

The vast majority of earthquakes caused by The Geysers are considered to be microseismic events—too small for humans to feel. However, researchers from Lawrence Berkeley National Laboratory note that magnitude-4.0 earthquakes, which can cause minor damage, seem to be increasing in frequency.

The new study says that though earthquakes with a magnitude of 5.0 or greater are rare east of the Rockies, scientists have observed an 11-fold increase between 2008 and 2011, compared with 1976 through 2007. But the increase hasn’t been tied to human activity. “We do not really know what is causing this increase, but it is remarkable,” Abers says. “It is reasonable that at least some may be natural.”

Chemicals, Risk And The Public (Chicago Tribune)

April 29, 1989|By Earon S. Davis

The public is increasingly uncomfortable with both the processes and the results of government and industry decision-making about chemical hazards.

Decisions that expose people to uncertain and potentially catastrophic risks from chemicals seem to be made without adequate scientific information and without an appreciation of what makes a risk acceptable to the public.

The history of environmental and occupational health provides myriad examples in which entire industries have acted in complete disregard of public health risks and in which government failed to act until well after disasters were apparent.

It is not necessary to name each chemical, each debacle, in which the public was once told the risks were insignificant, but these include DDT, asbestos, Kepone, tobacco smoke, dioxin, PCBs, vinyl chloride, flame retardants in children’s sleepwear, Chlordane, Alar and urea formaldehyde foam. These chemicals were banned or severely restricted, and virtually no chemical has been found to be safer than originally claimed by industry and government.

It is no wonder that government and industry efforts to characterize so many uncertain risks as “insignificant” are met with great skepticism. In a pluralistic, democratic society, acceptance of uncertainty is a complex matter that requires far more than statistical models. Depending upon cultural and ethical factors, some risks are simply more acceptable than others.

When it comes to chemical risks to human health, many factors combine to place a relatively higher burden on government and industry to show social benefits. Not the least of these is the unsatisfactory track record of industry and its regulatory agencies.

Equally important are the tremendous gaps in scientific knowledge about chemically induced health effects, as well as the specific characteristics of these risks.

Chemical risks differ from many other kinds because, not only are the victims struck largely at random, but there is usually no way to know which illnesses are eventually caused by a chemical. There are so many poorly understood illnesses and so many chemical exposures which take many years to develop that most chemical victims will not even be identified, let alone properly compensated.

To the public, this difference is significant, but to industry it poses few problems. Rather, it presents the opportunity to create risks and yet remain free of liability for the bulk of the costs imposed on society, except in the rare instance where a chemical produces a disease which does not otherwise appear in humans.

Statutes of limitations, corporate litigiousness, inability or unwillingness of physicians to testify on causation and the sheer passage of time pose major obstacles to chemical victims attempting to receive compensation.

The delayed effects of chemical exposures also make it impossible to fully document the risks until decades after the Pandora’s box has been opened. The public is increasingly afraid that regulators are using the lack of immediately identified victims as evidence of chemical safety, which it simply is not.

Chemical risks are different because they strike people who have given no consent, who may be completely unaware of danger and who may not even have been born at the time of the decision that led to their exposure. They are unusual, too, because we don’t know enough about the causes of cancer, birth defects and neurological and immunologic disorders to understand the real risks posed by most chemicals.

The National Academy of Sciences has found that most chemicals in commerce have not even been tested for many of these potential health effects. In fact, there are growing concerns of new neurologic and chemical sensitivity disorders of which almost nothing is known.

We are exposed to so many chemicals that there is literally no way of estimating the cumulative risks. Many chemicals also present synergistic effects in which exposure to two or more substances produces risks many times greater than the simple sum of the risks. Society has begun to see that the thousands of acceptable risks could add up to one unacceptable generic chemical danger.
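
To make the arithmetic behind that concern concrete, here is a minimal sketch of how many nominally acceptable independent risks combine. The one-in-a-million figure and exposure counts are purely illustrative, and real exposures are rarely independent, so synergistic effects can make the true combined risk larger still.

# Minimal sketch of how many small, independently "acceptable" risks add up.
# For n independent exposures each with probability p of causing harm, the
# chance of at least one harm is 1 - (1 - p)**n. All numbers are illustrative.

def combined_risk(p_single: float, n_exposures: int) -> float:
    """Probability of at least one adverse outcome across independent exposures."""
    return 1.0 - (1.0 - p_single) ** n_exposures

p = 1e-6  # a "one-in-a-million" individual risk
for n in (1, 100, 10_000, 1_000_000):
    print(f"{n:>9} exposures -> cumulative risk {combined_risk(p, n):.6f}")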

The major justification for chemical risks, given all of the unknowns and uncertainties, is an overriding benefit to society. One might justify taking a one-in-a-million risk for a product that would make the nation more economically competitive or prevent many serious cases of illness. But such a risk may not be acceptable if it is to make plastic seats last a little longer, to make laundry 5 percent brighter or lawns a bit greener, or to allow apples to ripen more uniformly.

These are some of the reasons the public is unwilling to accept many of the risks being forced upon it by government and industry. There is no “mass hysteria” or “chemophobia.” There is growing awareness of the preciousness of human life, the banal nature of much of what industry is producing and the gross inadequacy of efforts to protect the public from long-term chemical hazards.

If the public is to regain confidence in the risk management process, industry and government must open up their own decision-making to public inquiry and input. The specific hazards and benefits of any chemical product or byproduct should be explained in plain language. Uncertainties that cannot be quantified must also be explained and given full consideration. And the process must include ethical and moral considerations such as those addressed above. These are issues to be decided by the public, not bureaucrats or corporate interests.

For industry and government to regain public support, they must stop blaming “ignorance” and overzealous public interest groups for the concern of the public and the media.

Rather, they should begin by better appreciating the tremendous responsibility they bear to our current and future generations, and by paying more attention to the real bottom line in our democracy: the honest, rational concerns of the average American taxpayer.

Emerging Ethical Dilemmas in Science and Technology (Science Daily)

Dec. 17, 2012 — As a new year approaches, the University of Notre Dame’s John J. Reilly Center for Science, Technology and Values has announced its inaugural list of emerging ethical dilemmas and policy issues in science and technology for 2013.

The Reilly Center explores conceptual, ethical and policy issues where science and technology intersect with society from different disciplinary perspectives. Its goal is to promote the advancement of science and technology for the common good.

The center generated its inaugural list with the help of Reilly fellows, other Notre Dame experts and friends of the center.

The center aimed to present a list of items for scientists and laypeople alike to consider in the coming months and years as new technologies develop. It will feature one of these issues on its website each month in 2013, giving readers more information, questions to ask and resources to consult.

The ethical dilemmas and policy issues are:

Personalized genetic tests/personalized medicine

Within the last 10 years, the creation of fast, low-cost genetic sequencing has given the public direct access to genome sequencing and analysis, with little or no guidance from physicians or genetic counselors on how to process the information. What are the potential privacy issues, and how do we protect this very personal and private information? Are we headed toward a new era of therapeutic intervention to increase quality of life, or a new era of eugenics?

Hacking into medical devices

Implanted medical devices, such as pacemakers, are susceptible to hackers. Barnaby Jack, of security vendor IOActive, recently demonstrated the vulnerability of a pacemaker by breaching the security of the wireless device from his laptop and reprogramming it to deliver an 830-volt shock. How do we make sure these devices are secure?

Driverless Zipcars

In three states — Nevada, Florida, and California — it is now legal for Google to operate its driverless cars. Google’s goal is to create a fully automated vehicle that is safer and more effective than a human-operated vehicle, and the company plans to marry this idea with the concept of the Zipcar. The ethics of automation and equality of access for people of different income levels are just a taste of the difficult ethical, legal and policy questions that will need to be addressed.

3-D printing

Scientists are attempting to use 3-D printing to create everything from architectural models to human organs, but we could be looking at a future in which we can print personalized pharmaceuticals or home-printed guns and explosives. For now, 3-D printing is largely the realm of artists and designers, but we can easily envision a future in which 3-D printers are affordable and patterns abound for products both benign and malicious, and that cut out the manufacturing sector completely.

Adaptation to climate change

The differential susceptibility of people around the world to climate change warrants an ethical discussion. We need to identify effective and safe ways to help people deal with the effects of climate change, as well as learn to manage and manipulate wild species and nature in order to preserve biodiversity. Some of these adaptation strategies might be highly technical (e.g., building sea walls to hold back sea-level rise), but others are social and cultural (e.g., changing agricultural practices).

Low-quality and counterfeit pharmaceuticals

Until recently, detecting low-quality and counterfeit pharmaceuticals required access to complex testing equipment, often unavailable in developing countries where these problems abound. The enormous volume of trade in pharmaceutical intermediates and active ingredients raises a number of issues, from the technical (improvement in manufacturing practices and analytical capabilities) to the ethical and legal (for example, India ruled in favor of manufacturing life-saving drugs even when doing so violates U.S. patent law).

Autonomous systems

Machines (both for peaceful purposes and for war fighting) are increasingly evolving from human-controlled, to automated, to autonomous, with the ability to act on their own without human input. As these systems operate without human control and are designed to function and make decisions on their own, the ethical, legal, social and policy implications have grown exponentially. Who is responsible for the actions undertaken by autonomous systems? If robotic technology can potentially reduce the number of human fatalities, is it the responsibility of scientists to design these systems?

Human-animal hybrids (chimeras)

So far scientists have kept human-animal hybrids on the cellular level. According to some, even more modest experiments involving animal embryos and human stem cells violate human dignity and blur the line between species. Is interspecies research the next frontier in understanding humanity and curing disease, or a slippery slope, rife with ethical dilemmas, toward creating new species?

Ensuring access to wireless and spectrum

Mobile wireless connectivity is having a profound effect on society in both developed and developing countries. These technologies are completely transforming how we communicate, conduct business, learn, form relationships, navigate and entertain ourselves. At the same time, government agencies increasingly rely on the radio spectrum for their critical missions. This confluence of wireless technology developments and societal needs presents numerous challenges and opportunities for making the most effective use of the radio spectrum. We now need to have a policy conversation about how to make the most effective use of the precious radio spectrum, and to close the digital access divide for underserved (rural, low-income, developing areas) populations.

Data collection and privacy

How often do we consider the massive amounts of data we give to commercial entities when we use social media, store discount cards or order goods via the Internet? Now that microprocessors and permanent memory are inexpensive, we need to think about the kinds of information that should be collected and retained. Should we create a diabetic insulin implant that could notify your doctor or insurance company when you make poor diet choices, and should that decision make you ineligible for certain types of medical treatment? Should cars be equipped to monitor speed and other measures of good driving, and should this data be subpoenaed by authorities following a crash? These issues require appropriate policy discussions in order to bridge the gap between data collection and meaningful outcomes.

Human enhancements

Pharmaceutical, surgical, mechanical and neurological enhancements are already available for therapeutic purposes. But these same enhancements can be used to magnify human biological function beyond the societal norm. Where do we draw the line between therapy and enhancement? How do we justify enhancing human bodies when so many individuals still lack access to basic therapeutic medicine?

Government, Industry Can Better Manage Risks of Very Rare Catastrophic Events, Experts Say (Science Daily)

ScienceDaily (Nov. 15, 2012) — Several potentially preventable disasters have occurred during the past decade, including the recent outbreak of rare fungal meningitis linked to steroid shots given to 13,000 patients to relieve back pain. Before that, the 9/11 terrorist attacks in 2001, the Space Shuttle Columbia accident in 2003, the financial crisis that started in 2008, the Deepwater Horizon accident in the Gulf of Mexico in 2010, and the Fukushima tsunami and ensuing nuclear accident in 2011 were among rare and unexpected disasters that were considered extremely unlikely or even unthinkable.

A Stanford University engineer and risk management expert has analyzed the phenomenon of government and industry waiting for rare catastrophes to happen before taking risk management steps. She concluded that a different approach to these events would go far towards anticipating them, preventing them or limiting the losses.

To examine the risk management failures discernible in several major catastrophes, the research draws upon the combination of systems analysis and probability as used, for example, in engineering risk analysis. When relevant statistics are not available, it discusses the powerful alternative of systemic risk analysis to try to anticipate and manage the risks of highly uncertain, rare events. The paper by Stanford University researcher Professor Elisabeth Paté-Cornell recommends “a systematic risk analysis anchored in history and fundamental knowledge,” as opposed to industry and regulators sometimes waiting until after a disaster occurs to take safety measures, as was the case, for example, with the Deepwater Horizon accident in 2010. Her paper, “On ‘Black Swans’ and ‘Perfect Storms’: Risk Analysis and Management When Statistics Are Not Enough,” appears in the November 2012 issue of Risk Analysis, published by the Society for Risk Analysis.

Paté-Cornell’s paper draws upon two commonly cited images representing different types of uncertainty — “black swans” and “perfect storms” — that are used both to describe extremely unlikely but high-consequence events and often to justify inaction until after the fact. The uncertainty in “perfect storms” derives mainly from the randomness of rare but known events occurring together. The uncertainty in “black swans” stems from the limits of fundamental understanding of a phenomenon, including in extreme cases, a complete lack of knowledge about its very existence.

Given these two extreme types of uncertainties, Paté-Cornell asks what has been learned about rare events in engineering risk analysis that can be incorporated in other fields such as finance or medicine. She notes that risk management often requires “an in-depth analysis of the system, its functions, and the probabilities of its failure modes.” The discipline confronts uncertainties by systematic identification of failure “scenarios,” including rare ones, using “reasoned imagination,” signals (new intelligence information, medical alerts, near-misses and accident precursors) and a set of analytical tools to assess the chances of events that have not happened yet. A main emphasis of systemic risk analysis is on dependencies (of failures, human errors, etc.) and on the role of external factors, such as earthquakes and tsunamis that become common causes of failure.
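
A toy example may help show why dependencies and common causes dominate such analyses. The sketch below, with made-up probabilities, compares a naive independence assumption for two redundant safety systems against a model that includes a single external common-cause event.

# Illustrative sketch (probabilities are invented, not from the paper) of why
# common causes matter: two nominally redundant safety systems look extremely
# reliable under an independence assumption, but a single external event that
# disables both can dominate the overall failure probability.

p_a = 1e-3       # annual failure probability of subsystem A on its own
p_b = 1e-3       # annual failure probability of subsystem B on its own
p_common = 1e-4  # annual probability of a common-cause event (e.g., a tsunami)

p_independent = p_a * p_b                              # naive estimate
p_with_common = p_common + (1 - p_common) * p_a * p_b  # common cause included

print(f"assuming independence:  {p_independent:.2e} per year")
print(f"with common-cause term: {p_with_common:.2e} per year")

Even a small common-cause probability overwhelms the product of the individual failure probabilities, which is why the analysis puts so much weight on external factors such as earthquakes and tsunamis.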

The “risk of no risk analysis” is illustrated by the case of the 14-meter Fukushima tsunami resulting from a magnitude-9 earthquake. Historical records showed that large tsunamis had occurred at least twice before in the same area. The first was the Sanriku earthquake of the year 869, estimated at magnitude 8.6, whose tsunami penetrated 4 kilometers inland. The second was the Sanriku earthquake of 1611, estimated at magnitude 8.1, which caused a tsunami with an estimated maximum wave height of about 20 meters. Yet those previous events were not factored into the design of the Fukushima Dai-ichi nuclear reactor, which was built for a maximum wave height of 5.7 meters, based simply on the tidal wave caused in that area by the 1960 earthquake in Chile. Similar failures to capture historical data and various “signals” occurred in the cases of the 9/11 attacks, the Columbia Space Shuttle accident and other examples analyzed in the paper.

The risks of truly unimaginable events that have never been seen before (such as the AIDS epidemic) cannot be assessed a priori, but careful and systematic monitoring, signal observation and a concerted response are key to limiting the losses. Other rare events that place heavy pressure on human or technical systems are the result of convergences of known events (“perfect storms”) that can and should be anticipated. Their probabilities can be assessed using a set of analytical tools that capture dependencies and dynamics in scenario analysis. Given the results of such models, there should be no excuse for failing to take measures against rare but predictable events that have damaging consequences, and to react to signals, even imperfect ones, that something new may be unfolding.

Journal Reference:

  1. Elisabeth Paté-Cornell. On “Black Swans” and “Perfect Storms”: Risk Analysis and Management When Statistics Are Not Enough. Risk Analysis, 2012; DOI: 10.1111/j.1539-6924.2011.01787.x

Risk (Fractal Ontology)

http://fractalontology.wordpress.com/2012/11/01/risk/

Joseph Weissman | Thursday, November 1, 2012

Paul Klee, “Insula Dulcamara” (1938); Oil on newsprint, mounted on burlap

I began writing this before disaster struck very close to home; and so I finish it without finishing it. A disaster never really ends; it strikes and strikes continuously — and so even silence is insufficient. But yet there is also no expression of concern, no response which could address comprehensively the immense and widespread suffering of bodies and minds and spirits. I would want to emphasize my plea below upon the responsibility of thinkers and artists and writers to create new ways of thinking the disaster; if only to mitigate the possibility of their recurrence. (Is it not the case that the disaster increasingly has the characteristics of the accident; that the Earth and global techno-science are increasingly co-extensive Powers?) And yet despite these necessary new ways of thinking and feeling, I fear it will remain the case that nothing can be said about a disaster, if only because nothing can ultimately be thought about the disaster. But it cannot be simply passed over in silence; if nothing can be said, then perhaps everything may be said.

Inherent to the notion of risk is the multiple, or multiplicity. The distance between the many and the multiple is nearly infinite; every problem of the one and the many resolves to the perspective of the one, while multiplicity always singularizes, takes a line of pure variation or difference to its highest power. A multiplicity is already a life, the sea, time: a cosmos or style in terms of powers and forces; a melody or refrain in its fractured infinity.

The multiple is clear in its “being” only transitorily — as the survey of a fleet or swarm or network; the thought which grasps it climbs mountains, ascends vertiginously towards that infinite height which would finally reveal the substrate of the plane, the “truth” of its shadowy depths, the mysterious origins of its nomadic populations.

No telescopic lens could be large enough to approach this distance; and yet it is traversed instantaneously when the tragic arc of a becoming terminates in disaster; when a line of flight turns into a line of death, when one-or-several lines of organization and development reach a point beyond which avoiding self-destruction is impossible.

Chaos, boundless furnace of becoming! Fulminating entropy which compels even the cosmos itself upon a tragic arc of time; are birth and death not one in chaos or superfusion?

Schizophrenia is perhaps this harrowing recognition that there are only machines machining machines, without limit, bottomless.

In chaos, there is no longer disaster; but there are no longer subjects or situations or signifiers. Every subject, signifier and situation approaches its inevitable as the Disaster which would rend their very being from them; hence the nihilism of the sign, the emptiness of the subject, the void of the situation. Existence is farce — if loss is (permitted to become) tragedy, omnipresent, cosmic, deified.

There is an infinite tragedy at the heart of the disaster; a trauma which makes the truth of our fate impossible-to-utter; on the one hand because imbued with infinite meaning, because singular — and on the other, in turn, meaningless, because essentially nullified, without-reason. That the disaster is never simply pure incidental chaos, a purely an-historical interruption, is perhaps the key point: we start and end with a disaster that prevents us from establishing either end or beginning — a disaster which swiftly looms to cosmic and even ontological proportions…

Perhaps there is only a life after the crisis, after a breakthrough or breakdown; after an encounter with the outside. A life as strategy or risk, which is perhaps to say a multiplicity: a life, or the breakthrough of — and, perhaps inevitably, breakdown before — white walls, mediation, determinacy.

A life in any case is always-already a voice, a cosmos, a thought: it is light or free movement whose origin and destination cannot be identified as stable sites or moments, whose comings and goings are curiously intertwined and undetermined.

We cannot know the limits of a life’s power; but we know disaster. We know that multiplicities, surging flocks of actions and passions, are continually at risk.

The world presents itself unto a life as an inescapable gravity, monstrous fate, the contagion of space, time, organization. A life expresses itself as an openness which is lacerated by the Open.

A life is a cosmos within a cosmos — and so a life opens up closed systems; it struggles and learns not in spite of entropy but on account of it, through a kind of critical strategy, even a perversely recursive or fractal strategy; through the micro-cosmogenetic sieve of organic life, entropy perversely becomes a hyper-organizational principle.

A life enters into a perpetual and weightless ballet — in a defiance-which-is-not-a-defiance of stasis; a stasis which yet presents a grave and continuous danger to a life.

What is a life, apart from infinite movement or disaster? Time, a dream, the sea: but a life moves beyond rivers of time, or seas of dreaming, or the outer spaces of radical forgetting (and alien memories…)

A life is a silence which may become wise. A life — or that perverse machine which works only by breaking down — or through…

A life is intimacy through parasitism, already a desiring-machine-factory or a tensor-calculus of the unconscious.

A life lives in taut suspension from one or several lines of becoming, of flight or death — lines whose ultimate trajectories may not be known through any safe or even sure method.

A life is the torsion between dying and rebirth.

Superfusion between all potentialities, a life is infinite-becoming of the subjectless-subject. Superject.

Journeying and returning, without moving, from the infinity and chaos of the outside/inside. A stationary voyage in a non-dimensional cosmos, where everything flows, heats, grinds.

Phenomenology is a geology of the unconscious, a problem of the crystalline apparatus of time. Could there be at long last a technology of time which would abandon strip-mining the subsconscious?

A chrono-technics which ethico-aesthetically creates and transforms virtual and actual worlds, traces possibilities of living, thinking; diagnoses psychic, social and physical ecosystems simultaneously.

A communications-strategy, but one that could point beyond the vicious binary of coercion and conflation — but so therefore would not-communicate.

There is a recursive problem surrounding the silence and darkness at the heart of a life; it is perhaps impossible to exhaust (at least clinically) the infinitely-deferred origin of those crystalline temporal dynamisms which in turn structure any-moment-whatsoever.

Is there a silence which would constitute that very singular machinic ‘sheaf’, the venerated crystalline paradise of the moved-unmoving?

Silence, wisdom.

The impossibility of this origin is also the interminability of the analysis; also the infinite movement attending any moment whatsoever. It is the history of disaster, of the devil.

There is only thinking when a thought becomes critically or clinically engaged with a world, a cosmos. This engagement discovers its bottomlessness in a disaster for thought itself. A disaster for life, thought, the world; but also perhaps their infinitely-deferred origins…

What happens in the physical, economic, social and psychic collapse of a world, a thought, a life? Is it only in this collapse, commensurate with the collision, interference of one cosmos with another…?

Collapse is never a final state. There is no closed system of causes but a kind of original fracture. The schizophrenic coexistence of many separate worlds in a kind of meta-stable superfusion.

A thought, a cosmos, a world, a life can have no other origin than the radical corruption and novel genesis of a pure substance of thinking, living, “worlding,” “cosmosing.” A becoming refracts within its own infinite history the history of a life, a world, a thought.

Although things doubtless seem discouraging, at any moment whatsoever a philosophy can be made possible. At any time and place, this cyclonic involution of the library of Babel can be reactivated, this golden ball propelled by comet-fire and dancing towards the future can be captured in a moment’s reflection…

The breakdown of the world, of thought, of life — the experience of absolute collapse, of the horror of the vacuum, is already close to the infinite zero-point reached immediately and effortlessly by schizophrenia. Even in a joyous mode when it recognizes the properly affirmative component of the revelation of cosmos as production, production as multiplicity, multiplicity as it opens onto the infinite or the future. (Only the infinity of the future can become-equal to a life.)

That spirit which fixes a beginning in space and time, fixes it without fixing itself; it exemplifies the possibility of atemporality and the heresy of the asignifying, even while founding the possibility of piety and dogma.

The disaster presents thought and language with their cosmic doubles; thought encounters a disaster in the way a subject encounters a radical outside, a death.

Only selection answers to chaos, to the infinite horizon of a life — virtually mapping infinite potential planes of organization onto a singular line of development. Only selection, only the possibility of philosophy, points beyond the inevitability of disaster.

The disaster and its aversion is the basic orientation of critical thought; thinking the disaster: this impossible task is the critical cultural aim of art and writing. Speaking the truth of the disaster is perhaps impossible. A life encounters disaster as the annihilating of the code itself; not merely a decoding but the alienation from the essence of matter or speech or language. The means to thinking the disaster lie in poetic imagination, the possibility of the temporal retrojection of narrative elements; the disaster can be thought only through “unthinking” it: in the capacity of critical or poetic imagination to explore the means by which a disaster was retroactively averted. The counterfactual acquires a new and radical dimension: not the theological dimension of salvation, but a clinical dimension — the power to think the transformation of the conditions of the disaster.

Embrapa sends maize and rice seeds to the Svalbard seed vault in Norway (O Globo)

JC e-mail 4577, September 5, 2012.

The Nordic vault is the most secure in the world, built to withstand climate catastrophes and a nuclear explosion.

This week Embrapa is sending 264 representative samples of maize seed and 541 of rice to the Svalbard Global Seed Vault in Norway, as part of the agreement signed with the country’s Royal Ministry of Agriculture and Food in 2008. The Norwegian gene bank will receive the core collections of rice and maize, that is, a limited set of accessions derived from a plant collection, chosen to represent the genetic variability of the whole collection. Traditionally, core collections are established at around 10% of the accessions of the original collection and capture approximately 70% of its genetic diversity.

The choice of these crops follows one of the Svalbard vault’s recommendations concerning relevance to food security and sustainable agriculture. Although neither crop originated in Brazil, both have been cultivated in the country for centuries and are hardy and well adapted to national conditions. The next crop to be sent to the Norwegian bank will be beans, which should happen by the end of 2012.

Sending samples to Svalbard provides an additional layer of security, since the Nordic vault is the most secure in the world, built to withstand climate catastrophes and even a nuclear explosion. The vault has capacity for 4.5 million seed samples. The complex comprises three maximum-security chambers at the end of a 125-meter tunnel inside a mountain on a small island of the Svalbard archipelago, at about latitude 78° N, near the North Pole.

The seeds are stored at 20°C below zero in hermetically sealed packages, kept in boxes on shelves. The facility is surrounded by the glacial Arctic climate, which maintains the low temperatures even if the electricity supply fails. The low temperature and humidity keep metabolic activity low, preserving seed viability for a millennium or more.

Earthquake Hazards Map Study Finds Deadly Flaws (Science Daily)

ScienceDaily (Aug. 31, 2012) — Three of the largest and deadliest earthquakes in recent history occurred where earthquake hazard maps didn’t predict massive quakes. A University of Missouri scientist and his colleagues recently studied the reasons for the maps’ failure to forecast these quakes. They also explored ways to improve the maps. Developing better hazard maps and alerting people to their limitations could potentially save lives and money in areas such as the New Madrid, Missouri fault zone.

“Forecasting earthquakes involves many uncertainties, so we should inform the public of these uncertainties,” said Mian Liu, of MU’s department of geological sciences. “The public is accustomed to the uncertainties of weather forecasting, but foreseeing where and when earthquakes may strike is far more difficult. Too much reliance on earthquake hazard maps can have serious consequences. Two suggestions may improve this situation. First, we recommend a better communication of the uncertainties, which would allow citizens to make more informed decisions about how to best use their resources. Second, seismic hazard maps must be empirically tested to find out how reliable they are and thus improve them.”

Liu and his colleagues suggest testing maps against what is called a null hypothesis, the possibility that the likelihood of an earthquake in a given area — like Japan — is uniform. Testing would show which mapping approaches were better at forecasting earthquakes and subsequently improve the maps.
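
One simple way to picture such a test, sketched below with invented zone rates and counts, is to score the map’s forecast and a spatially uniform null model by the Poisson log-likelihood of the earthquakes actually observed in each zone. This is only an illustration of the general idea, not the specific procedure the authors propose.

import math

# Toy illustration of scoring a hazard map against a uniform null model:
# each zone's forecast is treated as a Poisson expected count, and the model
# with the higher log-likelihood of the observed counts explains them better.
# Zone rates and observed counts below are invented for the example.

observed = [0, 3, 1, 0, 7]             # earthquakes observed per zone
map_rates = [0.1, 2.5, 0.8, 0.2, 5.0]  # map's forecast expected counts per zone
uniform = [sum(map_rates) / len(map_rates)] * len(map_rates)  # uniform null

def poisson_loglik(expected, counts):
    """Log-likelihood of observed counts under independent Poisson rates."""
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1)
               for lam, k in zip(expected, counts))

print("hazard map:  ", round(poisson_loglik(map_rates, observed), 2))
print("uniform null:", round(poisson_loglik(uniform, observed), 2))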

Liu and his colleagues at Northwestern University and the University of Tokyo detailed how hazard maps had failed in three major quakes that struck within a decade of each other. The researchers interpreted the shortcomings of hazard maps as the result of bad assumptions, bad data, bad physics and bad luck.

Wenchuan, China — In 2008, a quake struck China’s Sichuan Province and cost more than 69,000 lives. Locals blamed the government and contractors for not making buildings in the area earthquake-proof, according to Liu, who says that hazard maps bear some of the blame as well since the maps, based on bad assumptions, had designated the zone as an area of relatively low earthquake hazard.

Léogâne, Haiti — The 2010 earthquake that devastated Port-au-Prince and killed an estimated 316,000 people occurred along a fault that had not caused a major quake in hundreds of years. Using only the short history of earthquakes recorded since seismometers were invented, approximately one hundred years ago, yielded hazard maps that didn’t indicate the danger there.

Tōhoku, Japan — Scientists previously thought the faults off the northeast coast of Japan weren’t capable of causing massive quakes and thus giant tsunamis like the one that destroyed the Fukushima nuclear reactor. This bad understanding of particular faults’ capabilities led to a lack of adequate preparation. The area had been prepared for smaller quakes and the resulting tsunamis, but the Tōhoku quake overwhelmed the defenses.

“If we limit our attention to the earthquake records in the past, we will be unprepared for the future,” Liu said. “Hazard maps tend to underestimate the likelihood of quakes in areas where they haven’t occurred previously. In most places, including the central and eastern U.S., seismologists don’t have a long enough record of earthquake history to make predictions based on historical patterns. Although bad luck can mean that quakes occur in places with a genuinely low probability, what we see are too many ‘black swans,’ or too many exceptions to the presumed patterns.”

“We’re playing a complicated game against nature,” said the study’s first author, Seth Stein of Northwestern University. “It’s a very high stakes game. We don’t really understand all the rules very well. As a result, our ability to assess earthquake hazards often isn’t very good, and the policies that we make to mitigate earthquake hazards sometimes aren’t well thought out. For example, the billions of dollars the Japanese spent on tsunami defenses were largely wasted.

“We need to very carefully try to formulate the best strategies we can, given the limits of our knowledge,” Stein said. “Understanding the uncertainties in earthquake hazard maps, testing them, and improving them is important if we want to do better than we’ve done so far.”

The study, “Why earthquake hazard maps often fail and what to do about it,” was published by the journal Tectonophysics. First author of the study was Seth Stein of Northwestern University. Robert Geller of the University of Tokyo was co-author. Mian Liu is William H. Byler Distinguished Chair in Geological Sciences in the College of Arts and Science at the University of Missouri.

Cloud Brightening to Control Global Warming? Geoengineers Propose an Experiment (Science Daily)

A conceptualized image of an unmanned, wind-powered, remotely controlled ship that could be used to implement cloud brightening. (Credit: John McNeill)

ScienceDaily (Aug. 20, 2012) — Even though it sounds like science fiction, researchers are taking a second look at a controversial idea that uses futuristic ships to shoot salt water high into the sky over the oceans, creating clouds that reflect sunlight and thus counter global warming.

University of Washington atmospheric physicist Rob Wood describes a possible way to run an experiment to test the concept on a small scale in a comprehensive paper published this month in the journal Philosophical Transactions of the Royal Society.

The point of the paper — which includes updates on the latest study into what kind of ship would be best to spray the salt water into the sky, how large the water droplets should be and the potential climatological impacts — is to encourage more scientists to consider the idea of marine cloud brightening and even poke holes in it. In the paper, he and a colleague detail an experiment to test the concept.

“What we’re trying to do is make the case that this is a beneficial experiment to do,” Wood said. With enough interest in cloud brightening from the scientific community, funding for an experiment may become possible, he said.

The theory behind so-called marine cloud brightening is that adding particles, in this case sea salt, to the sky over the ocean would form large, long-lived clouds. Clouds form when water vapor condenses around particles. Since there is a limited amount of water in the air, adding more particles creates more, but smaller, droplets.

“It turns out that a greater number of smaller drops has a greater surface area, so it means the clouds reflect a greater amount of light back into space,” Wood said. That creates a cooling effect on Earth.
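
The geometry behind that statement can be checked with a short calculation: for a fixed amount of condensed water, splitting it into more, smaller droplets increases the total droplet surface area in proportion to the cube root of the droplet number. The sketch below uses illustrative values for liquid water content and droplet concentrations.

import math

# Illustrative check of the droplet geometry: for a fixed volume of condensed
# water per cubic meter of air, splitting it into n equal spherical droplets
# gives a total surface area that grows as n**(1/3). Values are illustrative.

water_volume = 1.0e-6  # m^3 of liquid water per m^3 of air (illustrative)

def total_droplet_area(n_droplets: float, total_volume: float) -> float:
    """Total surface area when total_volume is split into n equal spheres."""
    radius = (3.0 * total_volume / (4.0 * math.pi * n_droplets)) ** (1.0 / 3.0)
    return n_droplets * 4.0 * math.pi * radius ** 2

for n in (1e8, 1e9, 1e10):  # droplet number concentrations per m^3
    print(f"{n:.0e} droplets -> total area {total_droplet_area(n, water_volume):.3f} m^2")

At fixed water content, a tenfold increase in droplet number raises the total surface area by a factor of about 2.2 (the cube root of ten), which is the sense in which more, smaller drops reflect more light.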

Marine cloud brightening is part of a broader concept known as geoengineering which encompasses efforts to use technology to manipulate the environment. Brightening, like other geoengineering proposals, is controversial for its ethical and political ramifications and the uncertainty around its impact. But those aren’t reasons not to study it, Wood said.

“I would rather that responsible scientists test the idea than groups that might have a vested interest in proving its success,” he said. The danger with private organizations experimenting with geoengineering is that “there is an assumption that it’s got to work,” he said.

Wood and his colleagues propose trying a small-scale experiment to test feasibility and begin to study effects. The test should start by deploying sprayers on a ship or barge to ensure that they can inject enough particles of the targeted size to the appropriate elevation, Wood and a colleague wrote in the report. An airplane equipped with sensors would study the physical and chemical characteristics of the particles and how they disperse.

The next step would be to use additional airplanes to study how the cloud develops and how long it remains. The final phase of the experiment would send out five to 10 ships spread out across a 100 kilometer, or 62 mile, stretch. The resulting clouds would be large enough so that scientists could use satellites to examine them and their ability to reflect light.

Wood said there is very little chance of long-term effects from such an experiment. Based on studies of pollutants, which emit particles that cause a similar reaction in clouds, scientists know that the impact of adding particles to clouds lasts only a few days.

Still, such an experiment would be unusual in the world of climate science, where scientists observe rather than actually try to change the atmosphere.

Wood notes that running the experiment would advance knowledge around how particles like pollutants impact the climate, although the main reason to do it would be to test the geoengineering idea.

A phenomenon that inspired marine cloud brightening is ship trails: clouds that form behind the paths of ships crossing the ocean, similar to the trails that airplanes leave across the sky. Ship trails form around particles released from burning fuel.

But in some cases ship trails make clouds darker. “We don’t really know why that is,” Wood said.

Despite increasing interest from scientists like Wood, there is still strong resistance to cloud brightening.

“It’s a quick-fix idea when really what we need to do is move toward a low-carbon emission economy, which is turning out to be a long process,” Wood said. “I think we ought to know about the possibilities, just in case.”

The authors of the paper are treading cautiously.

“We stress that there would be no justification for deployment of [marine cloud brightening] unless it was clearly established that no significant adverse consequences would result. There would also need to be an international agreement firmly in favor of such action,” they wrote in the paper’s summary.

There are 25 authors on the paper, including scientists from University of Leeds, University of Edinburgh and the Pacific Northwest National Laboratory. The lead author is John Latham of the National Center for Atmospheric Research and the University of Manchester, who pioneered the idea of marine cloud brightening.

Wood’s research was supported by the UW College of the Environment Institute.

Journal Reference:

J. Latham, K. Bower, T. Choularton, H. Coe, P. Connolly, G. Cooper, T. Craft, J. Foster, A. Gadian, L. Galbraith, H. Iacovides, D. Johnston, B. Launder, B. Leslie, J. Meyer, A. Neukermans, B. Ormond, B. Parkes, P. Rasch, J. Rush, S. Salter, T. Stevenson, H. Wang, Q. Wang, R. Wood. Marine cloud brightening. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2012; 370 (1974): 4217 DOI: 10.1098/rsta.2012.0086

Doctors Often Don’t Disclose All Possible Risks to Patients Before Treatment (Science Daily)

ScienceDaily (Aug. 7, 2012) — Most informed consent disputes involve disagreements about who said what and when, not stand-offs over whether a particular risk ought to have been disclosed. But doctors may “routinely underestimate the importance of a small set of risks that vex patients” according to international experts writing in this week’s PLoS Medicine.

Increasingly, doctors are expected to advise and empower patients to make rational choices by sharing information that may affect treatment decisions, including risks of adverse outcomes. However, authors from Australia and the US led by David Studdert from the University of Melbourne argue that doctors, especially surgeons, are often unsure which clinical risks they should disclose and discuss with patients before treatment.

To understand more about the clinical circumstances in which disputes arise between doctors and patients in this area, the authors analyzed 481 malpractice claims and patient complaints from Australia involving allegations of deficiencies in the process of obtaining informed consent.

The authors found that 45 (9%) of the cases studied were disputed duty cases — that is, they involved head-to-head disagreements over whether a particular risk ought to have been disclosed before treatment. Two-thirds of these disputed duty cases involved surgical procedures, and the majority (38/45) of them related to five specific outcomes that had quality of life implications for patients, including chronic pain and the need for re-operation.

The authors found that the most common justifications doctors gave for not telling patients about particular risks before treatment were that they considered such risks too rare to warrant discussion or the specific risk was covered by a more general risk that was discussed.

However, nine in ten of the disputes studied centered on factual disagreements — arguments over who said what, and when. The authors say: “Documenting consent discussions in the lead-up to surgical procedures is particularly important, as most informed consent claims and complaints involved factual disagreements over the disclosure of operative risks.”

The authors say: “Our findings suggest that doctors may systematically underestimate the premium patients place on understanding certain risks in advance of treatment.”

They conclude: “Improved understanding of these situations helps to spotlight gaps between what patients want to hear and what doctors perceive patients want — or should want — to hear. It may also be useful information for doctors eager to avoid medico-legal disputes.”

Teen Survival Expectations Predict Later Risk-Taking Behavior (Science Daily)

ScienceDaily (Aug. 1, 2012) — Some young people’s expectations that they will not live long, healthy lives may actually foreshadow such outcomes.

New research published August 1 in the open access journal PLOS ONE reports that, for American teens, the expectation of death before the age of 35 predicted increased risk behaviors, including substance abuse and suicide attempts, later in life and a doubling to tripling of mortality rates in young adulthood.

The researchers, led by Quynh Nguyen of Northeastern University in Boston, found that one in seven participants in grades 7 to 12 reported perceiving a 50-50 chance or less of surviving to age 35. Upon follow-up interviews over a decade later, the researchers found that low expectations of longevity at young ages predicted increased suicide attempts and suicidal thoughts as well as heavy drinking, smoking, and use of illicit substances later in life relative to their peers who were almost certain they would live to age 35.
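
A “doubling to tripling of mortality rates” is usually expressed as a relative risk: the death rate in the low-expectation group divided by the rate among peers who were almost certain they would live to 35. The sketch below illustrates that calculation with invented counts, not the Add Health data.

# Sketch, with invented counts (not Add Health data), of how a "doubling to
# tripling of mortality rates" is expressed as a relative risk.

def mortality_rate(deaths: int, group_size: int) -> float:
    return deaths / group_size

low_expectation = mortality_rate(deaths=57, group_size=2_700)     # hypothetical
high_expectation = mortality_rate(deaths=130, group_size=16_300)  # hypothetical

print(f"low-expectation group:  {low_expectation:.4f}")
print(f"high-expectation group: {high_expectation:.4f}")
print(f"relative risk:          {low_expectation / high_expectation:.1f}")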

“The association between early survival expectations and detrimental outcomes suggests that monitoring survival expectations may be useful for identifying at-risk youth,” the authors state.

The study compared data collected from 19,000 adolescents in 1994-1995 to follow-up data collected from the same respondents 13-14 years later. The cohort was part of the National Longitudinal Study of Adolescent Health (Add Health), conducted by the Carolina Population Center and funded by the National Institutes of Health and 23 other federal agencies and foundations.

Journal Reference:

Quynh C. Nguyen, Andres Villaveces, Stephen W. Marshall, Jon M. Hussey, Carolyn T. Halpern, Charles Poole. Adolescent Expectations of Early Death Predict Adult Risk Behaviors. PLoS ONE, 2012; 7 (8): e41905 DOI: 10.1371/journal.pone.0041905

Severe Nuclear Reactor Accidents Likely Every 10 to 20 Years, European Study Suggests (Science Daily)

ScienceDaily (May 22, 2012) — Western Europe has the highest risk in the world of radioactive contamination caused by major reactor accidents.

Global risk of radioactive contamination. The map shows the annual probability in percent of radioactive contamination by more than 40 kilobecquerels per square meter. In Western Europe the risk is around two percent per year. (Credit: Daniel Kunkel, MPI for Chemistry, 2011)

Catastrophic nuclear accidents such as the core meltdowns in Chernobyl and Fukushima are more likely to happen than previously assumed. Based on the operating hours of all civil nuclear reactors and the number of nuclear meltdowns that have occurred, scientists at the Max Planck Institute for Chemistry in Mainz have calculated that such events may occur once every 10 to 20 years (based on the current number of reactors) — some 200 times more often than estimated in the past. The researchers also determined that, in the event of such a major accident, half of the radioactive caesium-137 would be spread farther than 1,000 kilometres from the nuclear reactor. Their results show that Western Europe is likely to be contaminated about once in 50 years by more than 40 kilobecquerels of caesium-137 per square meter. According to the International Atomic Energy Agency, an area is defined as contaminated from this level onwards. In view of their findings, the researchers call for an in-depth analysis and reassessment of the risks associated with nuclear power plants.

The reactor accident in Fukushima has fuelled the discussion about nuclear energy and triggered Germany’s exit from its nuclear power program. The global risk of such a catastrophe appears to be higher than previously thought, according to a study carried out by a research team led by Jos Lelieveld, Director of the Max Planck Institute for Chemistry in Mainz: “After Fukushima, the prospect of such an incident occurring again came into question, and whether we can actually calculate the radioactive fallout using our atmospheric models.” According to the results of the study, a nuclear meltdown in one of the reactors in operation worldwide is likely to occur once in 10 to 20 years. Currently, there are 440 nuclear reactors in operation, and 60 more are planned.

To determine the likelihood of a nuclear meltdown, the researchers applied a simple calculation. They divided the operating hours of all civilian nuclear reactors in the world, from the commissioning of the first up to the present, by the number of reactor meltdowns that have actually occurred. The total operating time amounts to 14,500 reactor-years; the number of reactor meltdowns comes to four — one in Chernobyl and three in Fukushima. This translates into one major accident, as defined by the International Nuclear Event Scale (INES), every 3,625 reactor-years. Even if this result is conservatively rounded to one major accident every 5,000 reactor-years, the risk is 200 times higher than the estimate for catastrophic, non-contained core meltdowns made by the U.S. Nuclear Regulatory Commission in 1990. The Mainz researchers did not distinguish ages and types of reactors, or whether they are located in regions of enhanced risk, for example from earthquakes. After all, nobody had anticipated the reactor catastrophe in Japan.
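
The article’s own figures make this a one-line calculation; the sketch below simply reproduces it and converts the rounded frequency into an expected interval between accidents for the current fleet of 440 reactors.

# Worked version of the calculation described above, using the article's own
# figures: cumulative civilian reactor experience divided by the number of
# core meltdowns, then converted into an expected interval for the fleet.

reactor_years = 14_500  # total operating experience of civil reactors
meltdowns = 4           # Chernobyl (1) plus Fukushima (3)

per_accident = reactor_years / meltdowns
print(f"one major accident per {per_accident:,.0f} reactor-years")

rounded = 5_000          # the study's conservative rounding, reactor-years per accident
current_reactors = 440
print(f"with {current_reactors} reactors in operation: roughly one accident "
      f"every {rounded / current_reactors:.0f} years")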

25 percent of the radioactive particles are transported further than 2,000 kilometres

Subsequently, the researchers determined the geographic distribution of radioactive gases and particles around a possible accident site using a computer model that describes Earth’s atmosphere. The model calculates meteorological conditions and flows, and also accounts for chemical reactions in the atmosphere. The model can compute the global distribution of trace gases, for example, and can also simulate the spreading of radioactive gases and particles. To approximate the radioactive contamination, the researchers calculated how the particles of radioactive caesium-137 (137Cs) disperse in the atmosphere, where they deposit on Earth’s surface and in what quantities. The 137Cs isotope is a product of the nuclear fission of uranium. It has a half-life of 30 years and was one of the key elements in the radioactive contamination following the disasters of Chernobyl and Fukushima.

The computer simulations revealed that, on average, only eight percent of the 137Cs particles are expected to deposit within an area of 50 kilometres around the nuclear accident site. Around 50 percent of the particles would be deposited outside a radius of 1,000 kilometres, and around 25 percent would spread even further than 2,000 kilometres. These results underscore that reactor accidents are likely to cause radioactive contamination well beyond national borders.

The results of the dispersion calculations were combined with the likelihood of a nuclear meltdown and the actual density of reactors worldwide to calculate the current risk of radioactive contamination around the world. According to the International Atomic Energy Agency (IAEA), an area with more than 40 kilobecquerels of radioactivity per square meter is defined as contaminated.

The team in Mainz found that in Western Europe, where the density of reactors is particularly high, contamination by more than 40 kilobecquerels per square meter is expected to occur about once every 50 years. Citizens in the densely populated southwestern part of Germany appear to run the highest risk of radioactive contamination in the world, owing to the numerous nuclear power plants situated near the borders between France, Belgium and Germany, and the prevailing westerly winds.

If a single nuclear meltdown were to occur in Western Europe, around 28 million people on average would be affected by contamination of more than 40 kilobecquerels per square meter. This figure is even higher in southern Asia, due to the dense populations. A major nuclear accident there would affect around 34 million people, while in the eastern USA and in East Asia this would be 14 to 21 million people.

“Germany’s exit from the nuclear energy program will reduce the national risk of radioactive contamination. However, an even stronger reduction would result if Germany’s neighbours were to switch off their reactors,” says Jos Lelieveld. “Not only do we need an in-depth and public analysis of the actual risks of nuclear accidents. In light of our findings I believe an internationally coordinated phasing out of nuclear energy should also be considered,” adds the atmospheric chemist.