Tag archive: Risco

Economic Dangers of ‘Peak Oil’ Addressed (Science Daily)

Oct. 16, 2013 — Researchers from the University of Maryland and a leading university in Spain demonstrate in a new study which sectors could put the entire U.S. economy at risk when global oil production peaks (“Peak Oil”). This multi-disciplinary team recommends immediate action by government, private and commercial sectors to reduce the vulnerability of these sectors.

The figure above shows sectors’ importance and vulnerability to Peak Oil. The bubbles represent sectors. The size of a bubble visualizes the vulnerability of a particular sector to Peak Oil according to the expected price changes; the larger the bubble, the more vulnerable the sector is considered to be. The X axis shows a sector’s importance according to its contribution to GDP, and the Y axis its importance according to its structural role. Hence, the larger bubbles in the top right corner represent highly vulnerable and highly important sectors. In the case of Peak Oil induced supply disruptions, these sectors could cause severe imbalances for the entire U.S. economy. (Credit: Image courtesy of University of Maryland)
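For readers who want to see how such a vulnerability map is put together, here is a minimal matplotlib sketch. Every number in it is invented for illustration; the sector names echo those mentioned later in the article, plus a made-up low-vulnerability contrast, and none of the values come from the study.

```python
# Illustrative bubble chart in the spirit of the figure described above.
# All values are hypothetical, not data from the study.
import matplotlib.pyplot as plt

sectors = ["Iron mills", "Chemicals & plastics", "Fertilizers", "Air transport", "Retail"]
gdp_contribution = [0.6, 0.8, 0.4, 0.5, 0.9]   # X: importance via GDP share (normalized)
structural_role = [0.7, 0.9, 0.6, 0.5, 0.3]    # Y: importance via position in supply chains
vulnerability = [0.8, 0.9, 0.85, 0.7, 0.2]     # bubble size: exposure to oil price changes

plt.scatter(gdp_contribution, structural_role,
            s=[v * 2000 for v in vulnerability], alpha=0.5)
for x, y, name in zip(gdp_contribution, structural_role, sectors):
    plt.annotate(name, (x, y), ha="center", va="center", fontsize=8)
plt.xlabel("Importance: contribution to GDP")
plt.ylabel("Importance: structural role")
plt.title("Sector vulnerability to Peak Oil (illustrative data)")
plt.show()
```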

While critics of Peak Oil studies declare that the world has more than enough oil to maintain current national and global standards, these UMD-led researchers say Peak Oil is imminent, if not already here — and is a real threat to national and global economies. Their study is among the first to outline a way of assessing the vulnerabilities of specific economic sectors to this threat, and to identify focal points for action that could strengthen the U.S. economy and make it less vulnerable to disasters.

Their work, “Economic Vulnerability to Peak Oil,” appears in Global Environmental Change. The paper is co-authored by Christina Prell, UMD’s Department of Sociology; Kuishuang Feng and Klaus Hubacek, UMD’s Department of Geographical Sciences; and Christian Kerschner, Institut de Ciència i Tecnologia Ambientals, Universitat Autònoma de Barcelona.

Peak Oil is gaining increasing attention in both scientific and policy discourses, especially due to its apparent imminence and potential dangers. However, until now, little has been known about how this phenomenon will impact economies. In their paper, the research team constructs a vulnerability map of the U.S. economy, combining two approaches for analyzing economic systems. Their approach reveals the relative importance of individual economic sectors and how vulnerable these are to oil price shocks. This dual analysis helps identify which sectors could put the entire U.S. economy at risk from Peak Oil. For the United States, such sectors would include iron mills, chemical and plastic products manufacturing, fertilizer production and air transport.

“Our findings provide early warnings to these and related industries about potential trouble in their supply chain,” UMD Professor Hubacek said. “Our aim is to inform and engage government, public and private industry leaders, and to provide a tool for effective Peak Oil policy action planning.”

Although the team’s analysis is embedded in a Peak Oil narrative, it can be used more broadly to develop a climate roadmap for a low carbon economy.

“In this paper, we analyze the vulnerability of the U.S. economy, which is the biggest consumer of oil and oil-based products in the world, and thus provides a good example of an economic system with high resource dependence. However, the notable advantage of our approach is that it does not depend on the Peak-Oil-vulnerability narrative but is equally useful in a climate change context, for designing policies to reduce carbon dioxide emissions. In that case, one could easily include other fossil fuels such as coal in the model and results could help policy makers to identify which sectors can be controlled and/or managed for a maximum, low-carbon effect, without destabilizing the economy,” Professor Hubacek said.

One of the main ways a Peak Oil vulnerable industry can become less so, the authors say, is for that sector to reduce the structural and financial importance of oil. For example, Hubacek and colleagues note that one approach to reducing the importance of oil to agriculture could be to curb the strong dependence on artificial fertilizers by promoting organic farming techniques and/or to reduce the overall distance travelled by people and goods by fostering local, decentralized food economies.

Peak Oil Background and Impact

The Peak Oil dialogue shifts attention away from discourses on “oil depletion” and “stocks” to focus on declining production rates (flows) of oil, and increasing costs of production. The maximum possible daily flow rate (with a given technology) is what eventually determines the peak; thus, the concept can also be useful in the context of renewable resources.

Improvements in extraction and refining technologies can influence flows, but this tends to lead to steeper decline curves after the peak is eventually reached. Such steep decline curves have also been observed for shale gas wells.
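Such peak-and-decline behavior is often pictured with a stylized Hubbert curve (the derivative of a logistic cumulative-production curve). The sketch below plots one with arbitrary parameters purely as an illustration; it is not the model used in the paper, and, as noted above, real post-peak declines can be steeper than this symmetric shape.

```python
# Stylized Hubbert-type production curve. Parameters are arbitrary, not from the study.
import numpy as np
import matplotlib.pyplot as plt

URR = 2000.0    # hypothetical ultimately recoverable resource (arbitrary units)
k = 0.08        # growth-rate parameter of the logistic curve
t_peak = 2010   # assumed peak year

t = np.arange(1950, 2101)
# Production rate = derivative of a logistic cumulative-production curve
production = URR * k * np.exp(-k * (t - t_peak)) / (1 + np.exp(-k * (t - t_peak))) ** 2

plt.plot(t, production)
plt.axvline(t_peak, linestyle="--")
plt.xlabel("Year")
plt.ylabel("Production rate (flow)")
plt.title("Stylized Hubbert curve: the peak occurs long before depletion")
plt.show()
```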

“Shale developments are, so we believe, largely overrated, because of the huge amounts of financial resources that went into them (danger of bubble) and because of their apparent steep decline rates (shale wells tend to peak fast),” according to Dr. Kerschner.

“One important implication of this dialogue shift is that extraction peaks occur much earlier in time than the actual depletion of resources,” Professor Hubacek said. “In other words, Peak Oil is currently predicted within the next decade by many, whereas complete oil depletion will in fact never occur, given increasing prices. This means that eventually petroleum products may be sold in liter bottles in pharmacies like in the old days.”

Journal Reference:

  1. Christian Kerschner, Christina Prell, Kuishuang Feng, Klaus Hubacek. Economic vulnerability to Peak Oil. Global Environmental Change, 2013; DOI: 10.1016/j.gloenvcha.2013.08.015

Transgenic mosquitoes in the skies of the sertão (Agência Pública)

Health

10/10/2013 – 10:36 am

By the newsroom of Agência Pública


The traps are devices installed in the homes of some residents of the experiment area. The “ovitraps,” as they are called, serve as breeding sites for the females. Photo: Coletivo Nigéria

With the promise of reducing dengue, a biofactory of transgenic insects has already released 18 million Aedes aegypti mosquitoes in the interior of Bahia. Read the story and watch the video.

Early on a Thursday evening in September, the bus station in Juazeiro, Bahia, was a picture of desolation. In the dimly lit hall, there was a stall specializing in beef broth, a snack bar with a long counter lined with savory pastries, cookies and potato chips, and a single ticket window – with disturbing clouds of mosquitoes hovering over the heads of those waiting to buy tickets to small towns or Northeastern capitals.

Sitting on the banks of the São Francisco River, on the border between Pernambuco and Bahia, Juazeiro was once a city crossed by streams, tributaries of one of the country’s largest rivers. Today it has more than 200,000 inhabitants, forms the largest urban agglomeration of the Northeastern semi-arid region together with Petrolina – the two cities add up to half a million people – and is infested with muriçocas (or pernilongos – common house mosquitoes). The watercourses that once drained small springs have become open-air sewers, extensive breeding grounds for the insect, traditionally fought with insecticide and electric rackets, or with closed windows and air conditioning for those with more money.

But the residents of Juazeiro are not swatting only house mosquitoes this early spring. The city is the testing ground for a new scientific technique that uses transgenic Aedes aegypti to fight dengue, the disease transmitted by the species. Developed by the British biotechnology company Oxitec, the method consists essentially of inserting a lethal gene into male mosquitoes which, released into the environment in large numbers, mate with wild females and produce offspring programmed to die. If the experiment works, the premature death of the larvae progressively reduces the population of mosquitoes of this species.

The technique is the newest weapon against a disease that not only resists but advances despite the methods used so far to control it. The World Health Organization estimates that there may be 50 to 100 million dengue cases per year worldwide. In Brazil, the disease is endemic, with annual epidemics in several cities, especially the large capitals. In 2012, between January 1 and February 16 alone, more than 70,000 cases were recorded in the country. In 2013, over the same period, the number nearly tripled, rising to 204,000 cases. So far this year, 400 people have died of dengue in Brazil.

In Juazeiro, the British-patented method is being tested by the social organization Moscamed, which has been breeding and releasing the transgenic mosquitoes into the open air since 2011. At the biofactory set up in the municipality, which can produce up to 4 million mosquitoes per week, the entire production chain of the transgenic insect is carried out – except for the genetic modification itself, performed in Oxitec’s laboratories in Oxford. Transgenic larvae were imported by Moscamed and have since been bred in the institution’s laboratories.

From the start, the tests have been funded by the Bahia State Health Department – with institutional support from the Juazeiro health department – and last July they were extended to the municipality of Jacobina, at the northern edge of the Chapada Diamantina. In that mountain town of roughly 80,000 inhabitants, Moscamed is testing the technique’s capacity to “suppress” (the word scientists use for exterminating the entire mosquito population) Aedes aegypti across a whole city, since in Juazeiro the strategy proved effective but has so far been limited to two neighborhoods.

“The 2011 and 2012 results showed that [the technique] really worked well. And, at the invitation of and with funding from the Bahia state government, we decided to move forward and go to Jacobina. Now no longer as a pilot, but running a test to really eliminate the [mosquito] population,” says Aldo Malavasi, a retired professor from the Department of Genetics of the Institute of Biosciences of the University of São Paulo (USP) and current president of Moscamed. USP is also part of the project.

Malavasi has worked in the region since 2006, when Moscamed was created to fight an agricultural pest, the fruit fly, with a similar approach – the Sterile Insect Technique (SIT). The logic is the same: produce sterile insects to mate with wild females and thus gradually reduce the population. The difference lies in how the insects are sterilized: radiation instead of genetic modification. SIT has been widely used since the 1970s, mainly against species regarded as threats to agriculture. The problem is that, until now, the technology was not well suited to mosquitoes such as Aedes aegypti, which did not withstand the radiation well.

The communication plan

The first field releases of the transgenic Aedes took place in the Cayman Islands between late 2009 and 2010. The British territory in the Caribbean, made up of three islands south of Cuba, proved to be not only a tax haven (there are more companies registered on the islands than its 50,000 inhabitants) but also a convenient place to release transgenic mosquitoes, owing to the absence of biosafety laws. The Cayman Islands are not a signatory to the Cartagena Protocol, the main international document on the subject, nor are they covered by the Aarhus Convention – adopted by the European Union and of which the United Kingdom is a party – which deals with access to information, public participation and justice in environmental decision-making.

Instead of prior publication and public consultation on the risks involved in the experiment, as the international agreements above would require, the roughly 3 million mosquitoes released into the tropical climate of the Cayman Islands went out into the world without any debate or public consultation. Authorization was granted solely by the islands’ Department of Agriculture. Oxitec’s local partner in the tests, the Mosquito Research & Control Unit, posted a promotional video on the subject only in October 2010, and even then without mentioning the transgenic nature of the mosquitoes. The video was released exactly one month before Oxitec itself presented the results of the experiments at the annual meeting of the American Society of Tropical Medicine and Hygiene, in the United States.

The scientific community was taken aback by the news that the world’s first releases of genetically modified insects had already been carried out without the specialists in the field even knowing about them. The surprise extended to the result: according to Oxitec’s data, the experiments had achieved an 80% reduction in the Aedes aegypti population in the Cayman Islands. For the company, the number confirmed that the technique created in the laboratory could indeed be effective. Since then, new field tests have been arranged in other countries – notably underdeveloped or developing ones, with tropical climates and long-standing dengue problems.

After postponing similar tests in 2006 following protests, Malaysia became the second country to release the transgenic mosquitoes, between December 2010 and January 2011. Six thousand mosquitoes were released in an uninhabited area of the country. That number, much smaller than in the Cayman Islands, is almost insignificant compared with the quantity of mosquitoes released in Juazeiro, Bahia, starting in February 2011. The city, together with Jacobina more recently, has since become the largest testing ground of its kind in the world, with more than 18 million mosquitoes already released, according to Moscamed’s figures.

“Oxitec made serious mistakes, both in Malaysia and in the Cayman Islands. Unlike what they did, we carried out extensive work on what we call public communication, with full transparency, discussion with the community, and visits to every house. Extraordinary work was done here,” Aldo Malavasi says by way of comparison.

In a telephone interview, he was keen to stress Moscamed’s independence from Oxitec and the different nature of the two institutions. Created in 2006, Moscamed is a social organization – non-profit, therefore – that joined the tests of the transgenic Aedes aegypti in order to verify whether or not the technique is effective against dengue. According to Malavasi, they accepted no funding from Oxitec precisely to guarantee impartiality in evaluating the technique. “We don’t want their money, because our goal is to help the Brazilian government,” he sums up.

In the name of transparency, the program was named “Transgenic Aedes Project” (PAT, in the Portuguese acronym), putting the thorny word right in the title. Another decision of a semantic nature was not to use the term “sterile,” common in the British company’s discourse but technically incorrect, since the mosquitoes do reproduce – they simply generate offspring programmed to die at the larval stage. A jingle put the complex system into popular language, to the rhythm of forró pé-de-serra. And the carnival block “Papa Mosquito” took to the streets of Juazeiro during the 2011 Carnival.

On the institutional front, besides the funding from the state Health Department, the program also gained the support of the Juazeiro Municipal Health Department. “At first there was resistance, because people didn’t want to keep traps in their homes either, but over time they came to understand the project and we had good popular acceptance,” says public health nurse Mário Machado, the department’s director of Health Promotion and Surveillance.

The traps Machado refers to are simple devices installed in the homes of some residents of the experiment area. The ovitraps, as they are called, serve as breeding sites for the females. This makes it possible to collect the eggs and check whether they were fertilized by transgenic or wild males. That, in turn, is possible because the genetically modified mosquitoes carry, in addition to the lethal gene, a fragment of jellyfish DNA that gives them a fluorescent marker, visible under a microscope.

In this way, it was possible to verify that the reduction of the wild Aedes aegypti population reached, according to Moscamed, 96% in Mandacaru – an agricultural settlement a few kilometers from Juazeiro’s commercial center which, thanks to its geographic isolation and popular acceptance, became the ideal site for the releases. Despite that figure, Moscamed continues releasing mosquitoes in the neighborhood. Because of the mosquito’s short life (the female lives roughly 35 days), the releases must continue in order to keep the wild population low. Currently, once a week a car leaves the organization’s headquarters with 50,000 mosquitoes distributed by the thousands in plastic pots to be opened in the streets of Mandacaru.

“Today the greatest acceptance is in Mandacaru. The reception was such that Moscamed doesn’t want to leave anymore,” Mário Machado emphasizes.

The same did not happen in the Itaberaba neighborhood, the first to receive the mosquitoes in early 2011. Not even its historically high rate of Aedes aegypti infestation persuaded the peripheral Juazeiro neighborhood, next door to Moscamed’s headquarters, to welcome the experiment. Mário Machado estimates at “around 20%” the share of the population that opposed the tests and put an end to the releases there.

“However hard we try to inform people, going from house to house, from bar to bar, some people just don’t believe it: ‘No, you’re lying to us, this mosquito is biting us,’” he says with resignation.

After a year without releases, the mosquito seems to have left few memories there. Walking through the neighborhood, we could hardly find anyone who knew what we were talking about. Even so, the name of Itaberaba traveled the world when Oxitec announced that the first field experiment in Brazil had achieved an 80% reduction in the wild mosquito population.

Moscamed field supervisor and biologist Luiza Garziera was one of those who went from house to house explaining the process, at times sidestepping scientific language to make herself understood. “I would say that we would be releasing these mosquitoes, that we released only the male, which doesn’t bite. Only the female bites. And that when these males ‘date’ – because sometimes we can’t say ‘copulate,’ people won’t understand – so when these males date the female, their little offspring end up dying.”

This is one of the most important details of the novel technique. By releasing only males, at a rate of 10 transgenic mosquitoes for every wild one, Moscamed immerses people in a cloud of mosquitoes, but guarantees that these do not bite them. That is because only the female feeds on human blood, the liquid that provides the proteins she needs to produce eggs.
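To make the release logic concrete, here is a toy calculation of how a constant 10:1 release ratio can push a wild population down generation by generation. It is a deliberate oversimplification with invented parameters, not the population model used by Moscamed or Oxitec.

```python
# Toy model: females that mate a transgenic male leave no surviving offspring,
# so only matings with wild males contribute to the next generation.
def simulate(wild_pop=10_000, release_ratio=10, growth_rate=5.0, generations=10):
    """Track a wild Aedes population under constant releases of transgenic males."""
    history = [wild_pop]
    for _ in range(generations):
        wild_males = wild_pop / 2                     # assume a 1:1 sex ratio
        released = release_ratio * wild_males         # 10 transgenic males per wild male
        p_wild_mating = wild_males / (wild_males + released)
        # offspring come only from wild x wild matings; growth_rate folds in
        # fecundity and larval survival in this simplified model
        wild_pop = wild_pop * p_wild_mating * growth_rate
        history.append(round(wild_pop))
    return history

# In this toy model the population shrinks whenever growth_rate < 1 + release_ratio.
print(simulate())
```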

The technology fits together in a convincing, even didactic way – except perhaps for the “genetic modification,” which requires greater leaps of imagination. Even so, ignorance about the subject is still widespread among a considerable share of the residents interviewed for this report. At best, people know it has to do with exterminating the dengue mosquito, which is naturally seen as a good thing. Beyond that, they have only heard about it in passing, or venture a guess involving the muriçoca – which is, indeed, widely hated.

The risk assessment

Despite Moscamed’s communication campaign, the British NGO GeneWatch points to a series of problems in the Brazilian process. The main one: the risk assessment report on the experiment was not made available to the public before the releases began. On the contrary, at the request of those responsible for the Transgenic Aedes Project, the file submitted to the National Technical Biosafety Commission (CTNBio, the body charged with authorizing such experiments) was classified as confidential.

“We think Oxitec should have the fully informed consent of the local population, which means people need to agree to the experiment. But for that they also need to be informed about the risks, just as you would be if you were being used to test a new cancer drug or any other kind of treatment,” said Helen Wallace, the NGO’s executive director, in a Skype interview.

A specialist in the risks and ethics involved in this type of experiment, Wallace published this year the report Genetically Modified Mosquitoes: Ongoing Concerns, which lists in 13 chapters what it considers potential risks that were not weighed before the release of the transgenic mosquitoes was authorized. The document also points to flaws in Oxitec’s conduct of the experiments.

For example, two years after the releases in the Cayman Islands, only the results of a small test had appeared in a scientific publication. In early 2011 the company submitted the results of the larger experiment on the islands to the journal Science, but the article was not published. Only in September of last year did the text appear in another journal, Nature Biotechnology, published as “correspondence” – meaning it was not peer-reviewed by other scientists, only checked by the publication’s own editor.

For Helen Wallace, the absence of critical peer review puts Oxitec’s experiment under suspicion. Even so, the analysis of the article, according to the report, suggests that the company had to increase the release ratio of transgenic mosquitoes and concentrate them in a small area in order to achieve the expected results. The same is said to have happened in Brazil, in Itaberaba. The results of the Brazilian test have not yet been published by Moscamed either. The project manager, Danilo Carvalho, said that one of the papers has already been submitted to a journal and another is in the final stage of writing.

Another risk raised by the report concerns the widespread use of the antibiotic tetracycline. The drug reverses the lethal gene and guarantees, in the laboratory, the survival of the genetically modified mosquito, which would otherwise not reach adulthood. This is the vital difference between the fate of the mosquitoes bred in the laboratory and that of their offspring, generated in the environment from wild females – without the antibiotic, they are condemned to premature death.

Tetracycline is commonly used in the livestock and aquaculture industries, which discharge large quantities of the substance into the environment through their effluents. The antibiotic is also widely used in human and veterinary medicine. In other words, genetically modified eggs and larvae could come into contact with the antibiotic even in uncontrolled environments and thus survive. Over time, the transgenic mosquitoes’ resistance to the lethal gene could neutralize its effect, ultimately leaving us with a new genetically modified species adapted to the environment.

The hypothesis is treated with skepticism by Oxitec, which plays down the chance of it happening in the real world. However, a confidential document that was made public shows that the hypothesis turned out, by accident, to be real in the tests of a research partner of the company. Puzzled by a 15% survival rate of larvae raised without tetracycline – much higher than the usual 3% found in the company’s experiments – Oxitec’s scientists discovered that the cat food their partners were feeding the mosquitoes contained traces of the antibiotic, which is routinely used to treat chickens destined for animal feed.

The GeneWatch report draws attention to the common presence of the antibiotic in human and animal waste, as well as in domestic sewage systems such as septic tanks. This would constitute a potential risk, since several studies have found that Aedes aegypti is capable of breeding in contaminated water – although that is still not the most common situation, nor does it yet occur in Juazeiro, according to the municipal Health Department.

In addition, there are concerns about the rate at which transgenic females are released. The separation of the pupae (the last stage before adult life) is done manually, with the help of a device that sorts the sexes by size (the female is slightly larger). Up to 3% of females can slip through this process, gaining their freedom and increasing the risks involved. Finally, the experiments have not yet verified whether the reduction in the mosquito population translates directly into reduced dengue transmission.

All the criticisms are rebutted by Oxitec and Moscamed, which say they maintain rigorous quality control – such as constant monitoring of the female release rate and of the survival rate of larvae without tetracycline. In this way, any sign of mutation in the mosquito would be detected in time to suspend the program. After roughly a month, all the released insects would be dead. According to the institutions responsible, the mosquitoes also do not pass on the modified genes even if some stray female bites a human being.

Transgenic mosquito for sale

Last July, after the success of the field tests in Juazeiro, Oxitec filed an application for a commercial license with the National Technical Biosafety Commission (CTNBio). Since the end of 2012, the British company has had a corporate tax registration (CNPJ) in the country and keeps an employee in São Paulo. More recently, with the promising results of the Juazeiro experiments, it rented a warehouse in Campinas and is building what will be its Brazilian headquarters. Brazil is today its most likely and most imminent market, which is why the company’s global director of business development, Glen Slade, now lives on a shuttle between Oxford and São Paulo.

“Oxitec has been working since 2009 in partnership with USP and Moscamed, which are good partners and gave us the opportunity to start projects in Brazil. But now we have just sent our commercial dossier to CTNBio and hope to obtain a registration in the future, so we need to expand our team in the country. We are clearly investing in Brazil. It is a very important country,” Slade said in a Skype interview from Oxitec’s headquarters in Oxford, England.

The biotechnology company is a spin-out of the British university, which is to say Oxitec emerged from the laboratories of one of the most prestigious universities in the world. Founded in 2002, it has since been raising private investment and funds from non-profit foundations, such as the Bill & Melinda Gates Foundation, to finance its continuing research. According to Slade, more than R$50 million has been spent over the past decade refining and testing the technology.

The executive expects the bureaucratic process for granting the commercial license to be concluded as early as next year, when Oxitec’s Brazilian headquarters will be ready, including a new biofactory. Already in contact with several municipalities around the country, he prefers not to name names. Nor the price of the service, which will probably be offered in annual mosquito-control packages, with the budget depending on the number of inhabitants of the city.

“At this point it’s hard to give a price. As with all new products, the production cost is higher when you start than you would like. I think the price will be very reasonable in relation to the benefits and to the other approaches to controlling the mosquito, but it’s very hard to say today. Besides, the price will change with the scale of the project. Small projects are not very efficient, but if we have the opportunity to control mosquitoes across the whole of Rio de Janeiro, we can work at large scale and the price will come down,” he suggests.

The company also intends to set up new biofactories in cities that take on large projects, which will reduce costs in the long run, since the releases must be maintained indefinitely to prevent the return of the wild mosquitoes. The reproduction speed of Aedes aegypti is a concern: if the project is halted, the species can rebuild its population within a few weeks.

“The company’s plan is to secure repeated payments for releasing these mosquitoes every year. If their technology works and really reduces the incidence of dengue, you won’t be able to suspend these releases and you’ll be locked into this system. One of the biggest long-term concerns is that if things start to go wrong, or even just become less effective, you could actually end up with a worse situation over many years,” Helen Wallace warns.

The risk would range from reduced immunity to the disease among the population to the dismantling of other public dengue-control policies, such as the teams of health agents. Although both Moscamed and the Juazeiro Health Department itself emphasize the complementary nature of the technique, which would not replace other control methods, conflicts over the allocation of resources in the area are plausible. Today, according to Mário Machado of the Health Department, Juazeiro spends on average R$300,000 per month on the control of endemic diseases, of which dengue is the main one.

The department is negotiating with Moscamed to expand the experiment to the whole municipality, or even to the entire metropolitan region formed by Juazeiro and Petrolina – a test that would cover half a million people – in order to assess the technique’s effectiveness in large populations. In any case, and despite the progress of the experiments, neither the Brazilian social organization nor the British company has presented price estimates for a possible commercial release.

“Yesterday we were doing the first studies to analyze what their price is and what ours is. Because they know how much their program costs, which is not cheap, but they don’t disclose it,” said Mário Machado.

In a report in the British newspaper The Observer in July of last year, Oxitec estimated the cost of the technique at “less than” £6 per person per year. By a simple calculation – just multiplying that figure by the current exchange rate of the British currency against the real, and ignoring the many other variables involved – the project in a city of 150,000 inhabitants would cost roughly R$3.2 million per year.
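That back-of-envelope figure can be reproduced as follows; the exchange rate used is an assumption consistent with the article’s result, not a number given in the story.

```python
# Back-of-envelope reproduction of the article's estimate (exchange rate is assumed).
cost_per_person_gbp = 6.0   # Oxitec's "less than £6" per person per year
population = 150_000        # hypothetical city size used in the article
gbp_to_brl = 3.55           # assumed 2013-era exchange rate, not from the article

annual_cost_brl = cost_per_person_gbp * population * gbp_to_brl
print(f"≈ R$ {annual_cost_brl:,.0f} per year")   # ≈ R$ 3,195,000
```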

If we consider the number of small and medium-sized Brazilian municipalities where dengue is endemic, the size of the market that is opening up becomes clear – even setting aside, for now, the country’s large urban centers, which would exceed the technique’s current capacity. Yet this is only one slice of the business. Oxitec also has a series of other transgenic insects, these aimed at controlling agricultural pests, which should find an open field in Brazil, one of the giants of world agribusiness.

While awaiting authorization from CTNBio, Moscamed is already preparing to test the transgenic fruit fly, which follows the same logic as the Aedes aegypti. Beyond that, Oxitec has four other genetically modified species that may one day be tested in Brazil, starting with Juazeiro and the São Francisco Valley. The region is one of the country’s largest producers of fresh fruit for export – 90% of all the grapes and mangoes Brazil exports leave from here – a production that requires relentless pest control. Along the main avenues of Juazeiro and Petrolina, stores selling agricultural supplies and pesticides follow one after another, their signs displaying the logos of the sector’s multinationals.

“We have no concrete plans [beyond the fruit fly], but of course we would very much like the opportunity to run trials with those products as well. Brazil has a very large agricultural industry. But right now our number one priority is the dengue mosquito. So once this project has sufficient resources, we will try to add projects in agriculture,” Slade commented.

He and several of his colleagues in the company’s senior management have worked at one of the agribusiness giants, Syngenta. For Helen Wallace, this is one of the facts that reveal the transgenic Aedes aegypti’s role as the pioneer of a whole new market for genetically modified mosquitoes: “We think Syngenta is mainly interested in agricultural pests. One of the plans we know of is the proposal to use genetically modified agricultural pests together with transgenic seeds in order to increase the resistance of these crops to pests.”

“There is no relationship between Oxitec and Syngenta of that kind. Perhaps in the future we may have the chance to work together. I personally am interested in pursuing projects we could do with Syngenta, Basf or other large agricultural companies,” Glen Slade clarifies.

In 2011, the pesticide industry took in R$14.1 billion in Brazil. The largest market of its kind in the world, the country may in the coming years inaugurate a new technological stage in the fight against pests – just as in public health, with the transgenic Aedes aegypti, which seems to have a promising commercial future. It remains to be seen, however, how the technique will coexist with the vaccines against the dengue virus now in the final phase of testing – one developed by a French laboratory, the other by the Butantan Institute in São Paulo. The vaccines are expected to reach the public in 2015; the transgenic mosquito, perhaps as early as next year.

Among the lineages of transgenic mosquitoes, a national version may also emerge. As Professor Margareth de Lara Capurro-Guimarães, of USP’s Department of Parasitology and coordinator of the Transgenic Aedes Project, confirmed, a transgenic muriçoca (house mosquito) is already under study at the São Paulo university. Yet another possible technological solution to a public health problem in Juazeiro, Bahia – a city in which, according to a 2011 survey by the National Sanitation Information System (SNIS), the sewage network serves only 67% of the urban population.

* Originally published on the Agência Pública website.

(Agência Pública)

Are we ready for the pre-salt and shale gas? (O Estado de São Paulo)

JC e-mail 4817, September 20, 2013.

In an article published in Estadão, Washington Novaes* reinforces the SBPC’s warning about the risks of shale gas exploration

It has been announced that in November Brazilian areas earmarked for shale gas exploration will go to auction, just as pre-salt areas are being auctioned for offshore oil exploration. We should be prudent in both directions. In the pre-salt, the possible consequences of drilling in deep areas are not sufficiently understood. In the case of shale, several countries have already banned or restricted exploration because of what happens when the water and chemical inputs injected into the ground to “fracture” the rock layers holding the gas return to the surface. But the financial motives, in both cases, are very strong and are prevailing in many places, especially in the United States.

In Brazil, where the rock-fracturing technology has yet to be used, there is strong questioning from the Brazilian Society for the Advancement of Science (SBPC) and the Brazilian Academy of Sciences, which, in a letter to the president of the Republic (August 5), expressed their concern about this auction of gas fields in sedimentary basins. In these basins, the letter says, US agencies have reported that Brazil would hold reserves of 7.35 trillion cubic meters, in the Paraná, Parnaíba, Solimões, Amazonas, Recôncavo Baiano and São Francisco basins. The National Petroleum Agency (ANP) estimates the reserves could be double that. But, according to the SBPC and the ANP, there is a lack of “knowledge of the petrographic, structural and geomechanical characteristics” underlying these calculations, which could “decisively” affect “the economic viability of their exploration.”

It would also be necessary to consider the large volumes of water used in fracturing rocks to release gas, “which return to the surface polluted by hydrocarbons and other compounds,” as well as by metals present in the rocks and “by the chemical additives themselves, which require extremely expensive techniques for purification and for disposal of the final residues.” The water used would have to be weighed “against other uses considered higher priorities,” such as human supply. And it should be remembered that part of the reserves lies “just below the Guarani Aquifer”; exploration should “be assessed with great caution, since there is a potential risk of contaminating the waters of this aquifer.”

In view of this, there should be no immediate bidding rounds, “excluding the scientific community and the country’s own regulatory bodies from the possibility of accessing and discussing the information,” which “could be obtained through studies carried out directly by universities and research institutes.” Besides greater scientific knowledge of the deposits, the studies could show “environmental consequences of this activity that could far outweigh its possible social gains.” It is a strong argument, which, at the SBPC meeting in Recife (July 22-27), led to a request that the November auction be suspended.

In many other places the controversy is raging – as Professor Luiz Fernando Scheibe, of USP, PhD in Mining and Petrology, notes (September 12). In Great Britain, for instance, it is argued that fracturing technology, among many other problems, may even contribute to earthquakes. The release of methane in the process can also be highly problematic, since its harmful effects are equivalent to more than 20 times those of carbon dioxide, although it remains in the atmosphere for less time. That would cancel out the advantages of shale gas as a substitute for coal. The United Nations Environment Programme (UNEP) itself has argued that shale gas may in fact increase emissions of the pollutants that contribute to climate change.

In France, protests have been numerous (Le Monde, July 16) and have led the country to impose strong restrictions, as has Bulgaria. Some US states have banned the technology in their territories, but the US government has endorsed it, mainly because shale gas is not only cheaper than coal but has substantially reduced the country’s imports of fossil fuels, even allowing it to export surplus coal. And the International Energy Agency projects that by 2035 shale gas will be exploited at more than 1 million sites around the world. In the US this year, shale gas production will reach about 250 billion cubic meters – facilitated by the government’s decision to exempt the Environmental Protection Agency from examining possible risks in the process and by the existence of an extensive network of gas pipelines (Brazil has them only in the eastern region; the gas consumed here comes from Bolivia).

China would also be a potential user of the gas, since 70% of its energy comes from 3 billion tonnes of coal per year (almost 50% of world consumption). Although it has 30 trillion cubic meters of shale gas – more than the US – the problem is that the deposits lie in mountainous regions, far from the centers of consumption, which would mean a 50% increase in the cost to the user compared with coal. For that very reason, China is expected to increase its coal consumption in the coming decades (Michael Brooks in New Scientist, August 10).

And so we go, in yet another issue that sums up the dilemma discussed in this space more than once: financial logic versus the “environmental” logic of survival. Governments, companies and people face the choice of renouncing certain technologies and the use of certain goods – because of problems of pollution, climate, unsustainable consumption of resources, and so on – or using them because of the immediate financial advantages, which can be very strong.

More and more, this will be the center of the fiercest debates everywhere, including in Brazil – with broad repercussions in the political and social spheres. Let us prepare ourselves.

*Washington Novaes is a journalist.

Global Networks Must Be Redesigned, Experts Urge (Science Daily)

May 1, 2013 — Our global networks have generated many benefits and new opportunities. However, they have also established highways for failure propagation, which can ultimately result in human-made disasters. For example, today’s quick spreading of emerging epidemics is largely a result of global air traffic, with serious impacts on global health, social welfare, and economic systems.

(Credit: © Angie Lingnau / Fotolia)

Dirk Helbing’s publication illustrates how cascade effects and complex dynamics amplify the vulnerability of networked systems. For example, just a few long-distance connections can largely decrease our ability to mitigate the threats posed by global pandemics. Initially beneficial trends, such as globalization, increasing network densities, higher complexity, and an acceleration of institutional decision processes may ultimately push human-made or human-influenced systems towards systemic instability, Helbing finds. Systemic instability refers to a system that will get out of control sooner or later, even if everybody involved is well skilled, highly motivated and behaving properly. Crowd disasters are shocking examples illustrating that many deaths may occur even when everybody tries hard not to hurt anyone.

Our Intuition of Systemic Risks Is Misleading

Networking system components that are well-behaved in separation may create counter-intuitive emergent system behaviors, which are not well-behaved at all. For example, cooperative behavior might unexpectedly break down as the connectivity of interaction partners grows. “Applying this to the global network of banks, this might actually have caused the financial meltdown in 2008,” believes Helbing.

Globally networked risks are difficult to identify, map and understand, since there are often no evident, unique cause-effect relationships. Failure rates may change depending on the random path taken by the system, with the consequence of increasing risks as cascade failures progress, thereby decreasing the capacity of the system to recover. “In certain cases, cascade effects might reach any size, and the damage might be practically unbounded,” says Helbing. “This is quite disturbing and hard to imagine.” All of these features make strongly coupled, complex systems difficult to predict and control, such that our attempts to manage them go astray.
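As a toy illustration of such cascade dynamics (a generic simplified model, not the one in Helbing’s paper), the sketch below removes a single node from a random network and lets failures propagate whenever a surviving node has lost a given fraction of its neighbors.

```python
# Toy cascade model on a random graph: a node fails once it loses a given
# fraction of its original neighbors. Simplified illustration only.
import networkx as nx

def cascade(graph, seed_node, threshold=0.4):
    """Fail seed_node, then iteratively fail nodes that lost >= threshold of their neighbors."""
    original_degree = dict(graph.degree())
    failed = {seed_node}
    changed = True
    while changed:
        changed = False
        for node in graph.nodes():
            if node in failed or original_degree[node] == 0:
                continue
            lost = sum(1 for nbr in graph.neighbors(node) if nbr in failed)
            if lost / original_degree[node] >= threshold:
                failed.add(node)
                changed = True
    return failed

g = nx.erdos_renyi_graph(n=200, p=0.03, seed=0)
failures = cascade(g, seed_node=0)
print(f"{len(failures)} of {g.number_of_nodes()} nodes failed after one initial failure")
```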

“Take the financial system,” says Helbing. “The financial crisis hit regulators by surprise.” But back in 2003, the legendary investor Warren Buffett warned of mega-catastrophic risks created by large-scale investments into financial derivatives. It took 5 years until the “investment time bomb” exploded, causing losses of trillions of dollars to our economy. “The financial architecture is not properly designed,” concludes Helbing. “The system lacks breaking points, as we have them in our electrical system.” This allows local problems to spread globally, thereby reaching catastrophic dimensions.

A Global Ticking Time Bomb?

Have we unintentionally created a global time bomb? If so, what kinds of global catastrophic scenarios might humans face in complex societies? A collapse of the world economy or of our information and communication systems? Global pandemics? Unsustainable growth or environmental change? A global food or energy crisis? A cultural clash or global-scale conflict? Or will we face a combination of these contagious phenomena — a scenario that the World Economic Forum calls the “perfect storm”?

“While analyzing such global risks,” says Helbing, “one must bear in mind that the propagation speed of destructive cascade effects might be slow, but nevertheless hard to stop. It is time to recognize that crowd disasters, conflicts, revolutions, wars, and financial crises are the undesired result of operating socio-economic systems in the wrong parameter range, where systems are unstable.” In the past, these social problems seemed to be puzzling, unrelated, and almost “God-given” phenomena one had to live with. Nowadays, thanks to new complexity science models and large-scale data sets (“Big Data”), one can analyze and understand the underlying mechanisms, which let complex systems get out of control.

Disasters should not be considered “bad luck.” They are a result of inappropriate interactions and institutional settings, caused by humans. Even worse, they are often the consequence of a flawed understanding of counter-intuitive system behaviors. “For example, it is surprising that we didn’t have sufficient precautions against a financial crisis and well-elaborated contingency plans,” states Helbing. “Perhaps, this is because there should not be any bubbles and crashes according to the predominant theoretical paradigm of efficient markets.” Conventional thinking can cause fateful decisions and the repetition of previous mistakes. “In other words: While we want to do the right thing, we often do wrong things,” concludes Helbing. This obviously calls for a paradigm shift in our thinking. “For example, we may try to promote innovation, but suffer economic decline, because innovation requires diversity more than homogenization.”

Global Networks Must Be Re-Designed

Helbing’s publication explores why today’s risk analysis falls short. “Predictability and controllability are design issues,” stresses Helbing. “And uncertainty, which means the impossibility to determine the likelihood and expected size of damage, is often man-made.” Many systems could be better managed with real-time data. These would allow one to avoid delayed response and to enhance the transparency, understanding, and adaptive control of systems. However, even all the data in the world cannot compensate for ill-designed systems such as the current financial system. Such systems will sooner or later get out of control, causing catastrophic human-made failure. Therefore, a re-design of such systems is urgently needed.

Helbing’s Nature paper on “Globally Networked Risks” also calls attention to strategies that make systems more resilient, i.e. able to recover from shocks. For example, setting up backup systems (e.g. a parallel financial system), limiting the system size and connectivity, building in breaking points to stop cascade effects, or reducing complexity may be used to improve resilience. In the case of financial systems, there is still much work to be done to fully incorporate these principles.

Contemporary information and communication technologies (ICT) are also far from being failure-proof. They are based on principles that are 30 or more years old and not designed for today’s use. The explosion of cyber risks is a logical consequence. This includes threats to individuals (such as privacy intrusion, identity theft, or manipulation through personalized information), to companies (such as cybercrime), and to societies (such as cyberwar or totalitarian control). To counter this, Helbing recommends an entirely new ICT architecture inspired by principles of decentralized self-organization as observed in immune systems, ecology, and social systems.

Coming Era of Social Innovation

A better understanding of the success principles of societies is urgently needed. “For example, when systems become too complex, they cannot be effectively managed top-down,” explains Helbing. “Guided self-organization is a promising alternative to manage complex dynamical systems bottom-up, in a decentralized way.” The underlying idea is to exploit, rather than fight, the inherent tendency of complex systems to self-organize and thereby create a robust, ordered state. For this, it is important to have the right kinds of interactions, adaptive feedback mechanisms, and institutional settings, i.e. to establish proper “rules of the game.” The paper offers the example of an intriguing “self-control” principle, where traffic lights are controlled bottom-up by the vehicle flows rather than top-down by a traffic center.
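A crude sketch of that bottom-up idea, under strong simplifying assumptions (a single two-approach intersection and a queue-comparison rule invented for illustration, not the actual mechanism from the paper):

```python
# Minimal illustration of flow-driven (bottom-up) signal switching:
# serve the approach with the longer queue, but keep a minimum green time.
import random

def simulate(steps=100, arrival_rate=0.4, service_rate=1.0, min_green=3):
    queues = {"north_south": 0.0, "east_west": 0.0}
    green = "north_south"
    green_age = 0
    random.seed(1)
    for _ in range(steps):
        # random vehicle arrivals on both approaches
        for approach in queues:
            queues[approach] += random.random() < arrival_rate
        # vehicles leave on the green approach
        queues[green] = max(0.0, queues[green] - service_rate)
        green_age += 1
        # bottom-up rule: switch when the red approach's queue is longer,
        # but only after a minimum green time has elapsed
        red = "east_west" if green == "north_south" else "north_south"
        if green_age >= min_green and queues[red] > queues[green]:
            green, green_age = red, 0
    return queues

print(simulate())   # remaining queue lengths after the simulated period
```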

Creating and Protecting Social Capital

“One man’s disaster is another man’s opportunity. Therefore, many problems can only be successfully addressed with transparency, accountability, awareness, and collective responsibility,” underlines Helbing. Moreover, social capital such as cooperativeness or trust is important for economic value generation, social well-being and societal resilience, but it may be damaged or exploited. “Humans must learn how to quantify and protect social capital. A warning example is the loss of trillions of dollars in the stock markets during the financial crisis.” This crisis was largely caused by a loss of trust. “It is important to stress that risk insurances today do not consider damage to social capital,” Helbing continues. However, it is known that large-scale disasters have a disproportionate public impact, in part because they destroy social capital. As we neglect social capital in risk assessments, we are taking excessive risks.

Journal Reference:

  1. Dirk Helbing. Globally networked risks and how to respond. Nature, 2013; 497 (7447): 51. DOI: 10.1038/nature12047

Politicians Found to Be More Risk-Tolerant Than the General Population (Science Daily)

Apr. 16, 2013 — According to a recent study, the popularly elected members of the German Bundestag are substantially more risk-tolerant than the broader population of Germany. Researchers in the Cluster of Excellence “Languages of Emotion” at Freie Universität Berlin and at DIW Berlin (German Institute for Economic Research) conducted a survey of Bundestag representatives and analyzed data on the general population from the German Socio-Economic Panel Study (SOEP). Results show that risk tolerance is even higher among Bundestag representatives than among self-employed people, who are themselves more risk-tolerant than salaried employees or civil servants. This was true for all areas of risk that were surveyed in the study: automobile driving, financial investments, sports and leisure activities, career, and health. The authors interpret this finding as positive.

The full results of the study were published in German in the SOEPpapers series of the German Institute for Economic Research (DIW Berlin).

The authors of the study, Moritz Hess (University of Mannheim), Prof. Dr. Christian von Scheve (Freie Universität Berlin and DIW Berlin), Prof. Dr. Jürgen Schupp (DIW Berlin and Freie Universität Berlin), and Prof. Dr. Gert G. Wagner (DIW Berlin and Technische Universität Berlin) view the above-average risk tolerance found among Bundestag representatives as positive. According to sociologist and lead author of the study Moritz Hess: “Otherwise, important societal decisions often wouldn’t be made due to the almost incalculable risks involved. This would lead to stagnation and social standstill.” The authors do not interpret the higher risk-tolerance found among politicians as a threat to democracy. “The results show a successful and sensible division of labor among citizens, voters, and politicians,” says economist Gert G. Wagner. Democratic structures and parliamentary processes, he argues, act as a brake on the individual risk propensity of elected representatives and politicians.

For their study, the research team distributed written questionnaires to all 620 members of the 17th German Bundestag in late 2011. Twenty-eight percent of Bundestag members responded. Comparisons with the statistical characteristics of all current Bundestag representatives showed that the respondents comprise a representative sample of Bundestag members. SOEP data were used to obtain a figure for the risk tolerance of the general population for comparison with the figures for Bundestag members.

The questions posed to Bundestag members were formulated analogously to the questions in the standard SOEP questionnaire. Politicians were asked to rate their own risk tolerance on a scale from zero (= not at all risk-tolerant) to ten (= very risk-tolerant). They rated both their general risk tolerance as well as their specific risk tolerance in the areas of driving, making financial investments, sports and leisure activities, career, health, and trust towards strangers. They also rated their risk tolerance in regard to political decisions. No questions on party affiliation were asked in order to exclude the possibility that results could be used for partisan political purposes.

References:

Hess, M., von Scheve, C., Schupp, J., Wagner, G. G. (2013): Members of German Federal Parliament More Risk-Loving Than General Population, in: DIW Economic Bulletin, Vol. 3, No. 4, 2013, pp. 20-24.

Hess, M., von Scheve, C., Schupp, J., Wagner, G. G. (2013): Sind Politiker risikofreudiger als das Volk? Eine empirische Studie zu Mitgliedern des Deutschen Bundestags, SOEPpaper No. 545, DIW Berlin.

In Big Data, We Hope and Distrust (Huffington Post)

By Robert Hall

Posted: 04/03/2013 6:57 pm

“In God we trust. All others must bring data.” — W. Edwards Deming, statistician, quality guru

Big data helped reelect a president, helped find Osama bin Laden, and contributed to the meltdown of our financial system. We are in the midst of a data revolution where social media introduces new terms like Arab Spring, Facebook Depression and Twitter anxiety that reflect a new reality: Big data is changing the social and relationship fabric of our culture.

We spend hours installing and learning how to use the latest versions of our ever-expanding technology while enduring a never-ending battle to protect our information. Then we labor while developing practices to rid ourselves of technology — rules for turning devices off during meetings or movies, legislation to outlaw texting while driving, restrictions in classrooms to prevent cheating, and scheduling meals or family time where devices are turned off. Information and technology: We love it, hate it, can’t live with it, can’t live without it, use it voraciously, and distrust it immensely. I am schizophrenic and so am I.

Big data is not only big but growing rapidly. According to IBM, we create 2.5 quintillion bytes a day and that “ninety percent of the data in the world has been created in the last two years.” Vast new computing capacity can analyze Web-browsing trails that track our every click, sensor signals from every conceivable device, GPS tracking and social network traffic. It is now possible to measure and monitor people and machines to an astonishing degree. How exciting, how promising. And how scary.
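Taken at face value, that claim implies the world’s stock of data grew roughly tenfold in two years. A throwaway calculation of the implied growth rate (an illustration of the arithmetic, not an IBM figure):

```python
# Rough implication of "90% of the world's data was created in the last two years".
share_recent = 0.90                                # fraction of all data under two years old
growth_over_two_years = 1 / (1 - share_recent)     # total stock grew ~10x in two years
annual_growth = growth_over_two_years ** 0.5       # ~3.2x per year if growth were steady

bytes_per_day = 2.5e18                             # "2.5 quintillion bytes a day"
print(f"Implied growth: {growth_over_two_years:.0f}x over two years, "
      f"≈{annual_growth:.1f}x per year; {bytes_per_day:.1e} bytes/day ≈ "
      f"{bytes_per_day * 365 / 1e21:.1f} zettabytes per year")
```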

This is not our first data rodeo. The early stages of the customer relationship management movement were filled with hope and with hype. Large data warehouses were going to provide the kind of information that would make companies masters of customer relationships. There were just two problems. First, getting the data out of the warehouse wasn’t nearly as hard as getting it into the person or device interacting with the customers in a way that added value, trust and expanded relationships. We seem to always underestimate the speed of technology and overestimate the speed at which we can absorb it and socialize around it.

Second, unfortunately the customers didn’t get the memo and mostly decided in their own rich wisdom they did not need or want “masters.” In fact as providers became masters of knowing all the details about our lives, consumers became more concerned. So while many organizations were trying to learn more about customer histories, behaviors and future needs — customers and even their governments were busy trying to protect privacy, security, and access. Anyone attempting to help an adult friend or family member with mental health issues has probably run into well-intentioned HIPAA rules (regulations that ensure privacy of medical records) that unfortunately also restrict the ways you can assist them. Big data gives and the fear of big data takes away.

Big data does not big relationships make. Over the last 20 years, as our data keeps getting stronger, our customer relationships keep getting weaker. Eighty-six percent of consumers trust corporations less than they did five years ago. Customer retention across industries has fallen about 30 percent in recent years. Is it actually possible that we have unwittingly contributed to the undermining of our customer relationships? How could that be? For one thing, companies keep getting better at targeting messages to specific groups, and those groups keep getting better at blocking those messages. As usual, the power to resist trumps the power to exert.

No matter how powerful big data becomes, if it is to realize its potential, it must build trust on three levels. First, customers must trust our intentions. Data that can be used for us can also be used against us. There is growing fear that institutions will become part of a “surveillance state.” While organizations have gone to great lengths to promote protection of our data, the numbers reflect a fair amount of doubt. For example, according to MainStreet, “87 percent of Americans do not feel large banks are transparent and 68 percent do not feel their bank is on their side.”

Second, customers must trust our actions. Even if they trust our intentions, they might still fear that our actions put them at risk. Our private information can be hacked, then misused and disclosed in damaging and embarrassing ways. After the Sandy Hook tragedy, a New York newspaper published the names and addresses of over 33,000 licensed gun owners along with an interactive map that showed exactly where they lived. In response, the names and addresses of the newspaper’s editor and writers were published online along with information about their children. No one, including retired judges, law enforcement officers and FBI agents, expected their private information to be published in the midst of a very high-decibel controversy.

Third, customers must trust the outcome — that sharing data will benefit them. Even with positive intentions and constructive actions, the results may range from disappointing to damaging. Most of us have provided email addresses or other contact data — around a customer service issue or such — and then started receiving email, phone or online solicitations. I know a retired executive who helps hard-to-hire people. She spent one evening surfing the Internet to research expunging criminal records for released felons. Years later, Amazon still greets her with books targeted to the felon it believes she is. Even with opt-out options, we feel used. Or we provide specific information, only to have to repeat it in the next transaction or interaction — never getting the hoped-for benefit of saving our time.

It will be challenging to grow trust at anywhere near the rate we grow the data. Information develops rapidly; competence and trust develop slowly. Investing heavily in big data while scrimping on trust will have the opposite of the desired effect. To quote Dolly Parton, who knows a thing or two about big: “It costs a lot of money to look this cheap.”

How Big Could a Man-Made Earthquake Get? (Popular Mechanics)

Scientists have found evidence that wastewater injection induced a record-setting quake in Oklahoma two years ago. How big can a man-made earthquake get, and will we see more of them in the future?

By Sarah Fecht – April 2, 2013 5:00 PM

hydraulic fracking drilling illustration

Hydraulic fracking drilling illustration. Brandon Laufenberg/Getty Images

In November 2011, a magnitude-5.7 earthquake rattled Prague, Okla., and was felt in 16 other states. It flattened 14 homes and many other buildings, injured two people, and set the record as the state’s largest recorded earthquake. And according to a new study in the journal Geology, the event can also claim the title of “Largest Earthquake That’s Ever Been Induced by Fluid Injection.”

In the paper, a team of geologists pinpoints the quake’s starting point at less than 200 meters (about 650 feet) from an injection well where wastewater from oil drilling was being pumped into the ground at high pressures. At 5.7 magnitude, the Prague earthquake was about 10 times stronger than the previous record holder: a magnitude-4.8 Rocky Mountain Arsenal earthquake in Colorado in 1967, caused by the U.S. Army injecting a deep well with 148,000 gallons per day of fluid wastes from chemical-weapons testing. So how big can these man-made earthquakes get?
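As a rough check on that comparison (a back-of-the-envelope calculation, not one from the article): earthquake magnitude is logarithmic, so the 0.9-unit gap between the two events corresponds to roughly an eightfold difference in ground-motion amplitude and about a twentyfold difference in radiated energy, broadly consistent with “about 10 times stronger” if the comparison refers to shaking amplitude.

```latex
% Amplitude and energy ratios for M 5.7 (Prague) versus M 4.8 (Rocky Mountain Arsenal)
\[
\frac{A_{5.7}}{A_{4.8}} = 10^{\,5.7 - 4.8} \approx 7.9,
\qquad
\frac{E_{5.7}}{E_{4.8}} \approx 10^{\,1.5\,(5.7 - 4.8)} \approx 22
\]
```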

The short answer is that scientists don’t really know yet, but it’s possible that fluid injection could cause some big ones on very rare occasions. “We don’t see any reason that there should be any upper limit for an earthquake that is induced,” says Bill Ellsworth, a geophysicist with the U.S. Geological Survey, who wasn’t involved in the new study.

As with natural earthquakes, most man-made earthquakes have been small to moderate in size, and most are felt only by seismometers. Larger quakes are orders of magnitude rarer than small quakes. For example, for every 1000 magnitude-1.0 earthquakes that occur, expect to see 100 magnitude-2.0s, 10 magnitude-3.0s, just 1 magnitude-4.0, and so on. And just as with natural earthquakes, the strength of the induced earthquake depends on the size of the nearby fault and the amount of stress acting on it. Some faults just don’t have the capacity to cause big earthquakes, whether natural or induced.
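The tenfold drop in frequency for each unit of magnitude described above is the empirical Gutenberg-Richter relation. The sketch below illustrates that scaling; the b-value of 1.0 is the textbook figure implied by the article's example, the baseline of 1,000 magnitude-1.0 events comes from the article, and the function name is illustrative.

```python
# Minimal sketch of the Gutenberg-Richter frequency scaling described above.
# Assumes a b-value of 1.0 (a tenfold drop in count per unit of magnitude);
# the baseline of 1000 magnitude-1.0 events is the article's example.

def expected_count(magnitude, baseline_count=1000.0, baseline_magnitude=1.0, b_value=1.0):
    """Expected number of events at `magnitude`, scaled from a baseline magnitude."""
    return baseline_count * 10 ** (-b_value * (magnitude - baseline_magnitude))

if __name__ == "__main__":
    for m in (1.0, 2.0, 3.0, 4.0, 5.0):
        print(f"magnitude {m:.1f}: ~{expected_count(m):g} expected events")
    # Prints 1000, 100, 10, 1 and 0.1, matching the article's progression.
```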

How do Humans Trigger Earthquakes?

Faults have two major kinds of stressors: shear stress, which makes two plates slide past each other along the fault line, and normal stress, which pushes the two plates together. Usually the normal stress keeps the fault from moving sideways. But when a fluid is injected into the ground, as in Prague, that can reduce the normal stress and make it easier for the fault to slip sideways. It’s as if you have a tall stack of books on a table, Ellsworth says: If you take half the books away, it’s easier to slide the stack across the table.

“Water increases the fluid pressure in pores of rocks, which acts against the pressure across the fault,” says Geoffrey Abers, a Columbia University geologist and one of the new study’s authors. “By increasing the fluid pressure, you’re decreasing the strength of the fault.”
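Abers’s description matches the standard Coulomb failure criterion with pore pressure, a textbook relation rather than an equation given in the article: the fault slips when shear stress overcomes cohesion plus friction acting on the effective normal stress, and injected fluid raises the pore pressure, shrinking that effective normal stress.

```latex
% Coulomb failure criterion with pore-fluid pressure p
% (tau = shear stress, C = cohesion, mu = friction coefficient, sigma_n = normal stress)
\[
\tau \;\geq\; C + \mu\,(\sigma_n - p)
\]
% Raising p lowers the effective normal stress (sigma_n - p), so slip can occur
% at a shear stress the fault previously withstood.
```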

A similar mechanism may be behind earthquakes induced by large water reservoirs. In those instances, the artificial lake behind a dam causes water to seep into the pore spaces in the ground. In 1967, India’s Koyna Dam caused a 6.5 earthquake that killed 177 people, injured more than 2000, and left 50,000 homeless. Unprecedented seasonal fluctuations in water level behind a dam in Oroville, Calif., are believed to be behind the magnitude-6.1 earthquake that occurred there in 1975.

Extracting a fluid from the ground can also contribute to triggering a quake. “Think about filling a balloon with water and burying it at the beach,” Ellsworth says. “If you let the water out, the sand will collapse inward.” Similarly, when humans remove large amounts of oil and natural gas from the ground, it can put additional stress on a fault line. “In this case it may be the shear stresses that are being increased, rather than normal stresses,” Ellsworth says.

Take the example of the Gazli gas field in Uzbekistan, thought to be located in a seismically inactive area when drilling began in 1962. As drillers removed the natural gas, the pressure in the gas field dropped from 1030 psi in 1962 to 515 psi in 1976, then down to 218 psi in 1985. Meanwhile, three large magnitude-7.0 earthquakes struck: two in 1976 and one in 1984. Each quake had an epicenter within 12 miles of Gazli and caused a surface uplift of some 31 inches. Because the quakes occurred in Soviet-era Uzbekistan, information about the exact locations, magnitudes, and causes is not available. However, a report by the National Research Council concludes that “observations of crustal uplift and the proximity of these large earthquakes to the Gazli gas field in a previously seismically quiet region strongly suggest that they were induced by hydrocarbon extraction.” Extraction of oil is believed to have caused at least three big earthquakes in California, with magnitudes of 5.9, 6.1, and 6.5.

Some people worry that hydraulic fracturing, or fracking, wherein high-pressure fluids are used to crack through rock layers to extract oil and natural gas, will lead to an increased risk of earthquakes. However, the National Research Council report points out that there are tens of thousands of hydrofracking wells in existence today, and there has only been one case in which a “felt” tremor was linked to fracking. That was a 2.3 earthquake in Blackpool, England, in 2011, which didn’t cause any significant damage. Although scientists have known since the 1920s that humans trigger earthquakes, experts caution that it’s not always easy to determine whether a specific event was induced.

Are Human Activities Making Quakes More Common?

Human activities have been linked to increased earthquake frequencies in certain areas. For instance, researchers have shown a strong correlation between the volume of fluid injected into the Rocky Mountain Arsenal well and the frequency of earthquakes in that area.

Geothermal-energy sites can also induce many earthquakes, possibly due to pressure, heat, and volume changes. The Geysers in California is the largest geothermal field in the U.S., generating 725 megawatts of electricity using steam from deep within the earth. Before The Geysers began operating in 1960, seismic activity was low in the area. Now the area experiences hundreds of earthquakes per year. Researchers have found correlations between the volume of steam production and the number of earthquakes in the region. In addition, as the area of the steam wells increased over the years, so did the spatial distribution of earthquakes.

Whether or not human activity is increasing the magnitude of earthquakes, however, is more of a gray area. When it comes to injection wells, evidence suggests that earthquake magnitudes rise along with the volume of injected wastewater, and possibly injection pressure and rate of injection as well, according to a statement from the Department of the Interior.

The vast majority of earthquakes caused by The Geysers are considered to be microseismic events—too small for humans to feel. However, researchers from Lawrence Berkeley National Laboratory note that magnitude-4.0 earthquakes, which can cause minor damage, seem to be increasing in frequency.

The new study says that though earthquakes with a magnitude of 5.0 or greater are rare east of the Rockies, scientists have observed an 11-fold increase between 2008 and 2011, compared with 1976 through 2007. But the increase hasn’t been tied to human activity. “We do not really know what is causing this increase, but it is remarkable,” Abers says. “It is reasonable that at least some may be natural.”

Chemicals, Risk And The Public (Chicago Tribune)

April 29, 1989|By Earon S. Davis

The public is increasingly uncomfortable with both the processes and the results of government and industry decision-making about chemical hazards.

Decisions that expose people to uncertain and potentially catastrophic risks from chemicals seem to be made without adequate scientific information and without an appreciation of what makes a risk acceptable to the public.

The history of environmental and occupational health provides myriad examples in which entire industries have acted in complete disregard of public health risks and in which government failed to act until well after disasters were apparent.

It is not necessary to name each chemical, each debacle, in which the public was once told the risks were insignificant, but these include DDT, asbestos, Kepone, tobacco smoke, dioxin, PCBs, vinyl chloride, flame retardants in children’s sleepwear, Chlordane, Alar and urea formaldehyde foam. These chemicals were banned or severely restricted, and virtually no chemical has been found to be safer than originally claimed by industry and government.

It is no wonder that government and industry efforts to characterize so many uncertain risks as “insignificant” are met with great skepticism. In a pluralistic, democratic society, acceptance of uncertainty is a complex matter that requires far more than statistical models. Depending upon cultural and ethical factors, some risks are simply more acceptable than others.

When it comes to chemical risks to human health, many factors combine to place a relatively higher burden on government and industry to show social benefits. Not the least of these is the unsatisfactory track record of industry and its regulatory agencies.

Equally important are the tremendous gaps in scientific knowledge about chemically induced health effects, as well as the specific characteristics of these risks.

Chemical risks differ from many other kinds because not only are the victims struck largely at random, but there is usually no way to know which illnesses are eventually caused by a chemical. There are so many poorly understood illnesses, and so many chemical exposures whose effects take years to develop, that most chemical victims will not even be identified, let alone properly compensated.

To the public, this difference is significant, but to industry it poses few problems. Rather, it presents the opportunity to create risks and yet remain free of liability for the bulk of the costs imposed on society, except in the rare instance where a chemical produces a disease which does not otherwise appear in humans.

Statutes of limitations, corporate litigiousness, inability or unwillingness of physicians to testify on causation and the sheer passage of time pose major obstacles to chemical victims attempting to receive compensation.

The delayed effects of chemical exposures also make it impossible to fully document the risks until decades after the Pandora’s box has been opened. The public is increasingly afraid that regulators are using the lack of immediately identified victims as evidence of chemical safety, which it simply is not.

Chemical risks are different because they strike people who have given no consent, who may be completely unaware of danger and who may not even have been born at the time of the decision that led to their exposure. They are unusual, too, because we don’t know enough about the causes of cancer, birth defects and neurological and immunologic disorders to understand the real risks posed by most chemicals.

The National Academy of Sciences has found that most chemicals in commerce have not even been tested for many of these potential health effects. In fact, there are growing concerns of new neurologic and chemical sensitivity disorders of which almost nothing is known.

We are exposed to so many chemicals that there is literally no way of estimating the cumulative risks. Many chemicals also present synergistic effects in which exposure to two or more substances produces risks many times greater than the simple sum of the risks. Society has begun to see that the thousands of acceptable risks could add up to one unacceptable generic chemical danger.

The major justification for chemical risks, given all of the unknowns and uncertainties, is an overriding benefit to society. One might justify taking a one-in-a-million risk for a product that would make the nation more economically competitive or prevent many serious cases of illness. But such a risk may not be acceptable if it is to make plastic seats last a little longer, to make laundry 5 percent brighter or lawns a bit greener, or to allow apples to ripen more uniformly.

These are some of the reasons the public is unwilling to accept many of the risks being forced upon it by government and industry. There is no “mass hysteria” or “chemophobia.” There is growing awareness of the preciousness of human life, the banal nature of much of what industry is producing and the gross inadequacy of efforts to protect the public from long-term chemical hazards.

If the public is to regain confidence in the risk management process, industry and government must open up their own decision-making to public inquiry and input. The specific hazards and benefits of any chemical product or byproduct should be explained in plain language. Uncertainties that cannot be quantified must also be explained and given full consideration. And the process must include ethical and moral considerations such as those addressed above. These are issues to be decided by the public, not bureaucrats or corporate interests.

For industry and government to regain public support, they must stop blaming “ignorance” and overzealous public interest groups for the concern of the public and the media.

Rather, they should begin by better appreciating the tremendous responsibility they bear to our current and future generations, and by paying more attention to the real bottom line in our democracy: the honest, rational concerns of the average American taxpayer.

Emerging Ethical Dilemmas in Science and Technology (Science Daily)

Dec. 17, 2012 — As a new year approaches, the University of Notre Dame’s John J. Reilly Center for Science, Technology and Values has announced its inaugural list of emerging ethical dilemmas and policy issues in science and technology for 2013.

The Reilly Center explores conceptual, ethical and policy issues where science and technology intersect with society from different disciplinary perspectives. Its goal is to promote the advancement of science and technology for the common good.

The center generated its inaugural list with the help of Reilly fellows, other Notre Dame experts and friends of the center.

The center aimed to present a list of items for scientists and laypeople alike to consider in the coming months and years as new technologies develop. It will feature one of these issues on its website each month in 2013, giving readers more information, questions to ask and resources to consult.

The ethical dilemmas and policy issues are:

Personalized genetic tests/personalized medicine

Within the last 10 years, the creation of fast, low-cost genetic sequencing has given the public direct access to genome sequencing and analysis, with little or no guidance from physicians or genetic counselors on how to process the information. What are the potential privacy issues, and how do we protect this very personal and private information? Are we headed toward a new era of therapeutic intervention to increase quality of life, or a new era of eugenics?

Hacking into medical devices

Implanted medical devices, such as pacemakers, are susceptible to hackers. Barnaby Jack, of security vendor IOActive, recently demonstrated the vulnerability of a pacemaker by breaching the security of the wireless device from his laptop and reprogramming it to deliver an 830-volt shock. How do we make sure these devices are secure?

Driverless Zipcars

In three states — Nevada, Florida, and California — it is now legal for Google to operate its driverless cars. Google’s goal is to create a fully automated vehicle that is safer and more effective than a human-operated vehicle, and the company plans to marry this idea with the concept of the Zipcar. The ethics of automation and equality of access for people of different income levels are just a taste of the difficult ethical, legal and policy questions that will need to be addressed.

3-D printing

Scientists are attempting to use 3-D printing to create everything from architectural models to human organs, but we could be looking at a future in which we can print personalized pharmaceuticals or home-printed guns and explosives. For now, 3-D printing is largely the realm of artists and designers, but we can easily envision a future in which 3-D printers are affordable and patterns abound for products both benign and malicious, and that cut out the manufacturing sector completely.

Adaptation to climate change

The differential susceptibility of people around the world to climate change warrants an ethical discussion. We need to identify effective and safe ways to help people deal with the effects of climate change, as well as learn to manage and manipulate wild species and nature in order to preserve biodiversity. Some of these adaptation strategies might be highly technical (e.g., building sea walls to hold back sea-level rise), but others are social and cultural (e.g., changing agricultural practices).

Low-quality and counterfeit pharmaceuticals

Until recently, detecting low-quality and counterfeit pharmaceuticals required access to complex testing equipment, often unavailable in developing countries where these problems abound. The enormous amount of trade in pharmaceutical intermediates and active ingredients raises a number of issues, from the technical (improvement in manufacturing practices and analytical capabilities) to the ethical and legal (for example, India ruled in favor of manufacturing life-saving drugs, even if doing so violates U.S. patent law).

Autonomous systems

Machines (both for peaceful purposes and for war fighting) are increasingly evolving from human-controlled, to automated, to autonomous, with the ability to act on their own without human input. As these systems operate without human control and are designed to function and make decisions on their own, the ethical, legal, social and policy implications have grown exponentially. Who is responsible for the actions undertaken by autonomous systems? If robotic technology can potentially reduce the number of human fatalities, is it the responsibility of scientists to design these systems?

Human-animal hybrids (chimeras)

So far scientists have kept human-animal hybrids on the cellular level. According to some, even more modest experiments involving animal embryos and human stem cells violate human dignity and blur the line between species. Is interspecies research the next frontier in understanding humanity and curing disease, or a slippery slope, rife with ethical dilemmas, toward creating new species?

Ensuring access to wireless and spectrum

Mobile wireless connectivity is having a profound effect on society in both developed and developing countries. These technologies are completely transforming how we communicate, conduct business, learn, form relationships, navigate and entertain ourselves. At the same time, government agencies increasingly rely on the radio spectrum for their critical missions. This confluence of wireless technology developments and societal needs presents numerous challenges and opportunities for making the most effective use of the radio spectrum. We now need to have a policy conversation about how to make the most effective use of the precious radio spectrum, and to close the digital access divide for underserved (rural, low-income, developing areas) populations.

Data collection and privacy

How often do we consider the massive amounts of data we give to commercial entities when we use social media, store discount cards or order goods via the Internet? Now that microprocessors and permanent memory are inexpensive technology, we need to think about the kinds of information that should be collected and retained. Should we create a diabetic insulin implant that could notify your doctor or insurance company when you make poor diet choices, and should that decision make you ineligible for certain types of medical treatment? Should cars be equipped to monitor speed and other measures of good driving, and should this data be subpoenaed by authorities following a crash? These issues require appropriate policy discussions in order to bridge the gap between data collection and meaningful outcomes.

Human enhancements

Pharmaceutical, surgical, mechanical and neurological enhancements are already available for therapeutic purposes. But these same enhancements can be used to magnify human biological function beyond the societal norm. Where do we draw the line between therapy and enhancement? How do we justify enhancing human bodies when so many individuals still lack access to basic therapeutic medicine?

Government, Industry Can Better Manage Risks of Very Rare Catastrophic Events, Experts Say (Science Daily)

ScienceDaily (Nov. 15, 2012) — Several potentially preventable disasters have occurred during the past decade, including the recent outbreak of rare fungal meningitis linked to steroid shots given to 13,000 patients to relieve back pain. Before that, the 9/11 terrorist attacks in 2001, the Space Shuttle Columbia explosion in 2003, the financial crisis that started in 2008, the Deepwater Horizon accident in the Gulf of Mexico in 2010, and the Fukushima tsunami and ensuing nuclear accident in 2011 were among rare and unexpected disasters that were considered extremely unlikely or even unthinkable.

A Stanford University engineer and risk management expert has analyzed the phenomenon of government and industry waiting for rare catastrophes to happen before taking risk management steps. She concluded that a different approach to these events would go far towards anticipating them, preventing them or limiting the losses.

To examine the risk management failures discernible in several major catastrophes, the research draws upon the combination of systems analysis and probability as used, for example, in engineering risk analysis. When relevant statistics are not available, it discusses the powerful alternative of systemic risk analysis to try to anticipate and manage the risks of highly uncertain, rare events. The paper by Stanford University researcher Professor Elisabeth Paté-Cornell recommends “a systematic risk analysis anchored in history and fundamental knowledge” as opposed to industry and regulators waiting until after a disaster occurs to take safety measures, as was the case, for example, with the Deepwater Horizon accident in 2010. Her paper, “On ‘Black Swans’ and ‘Perfect Storms’: Risk Analysis and Management When Statistics Are Not Enough,” appears in the November 2012 issue of Risk Analysis, published by the Society for Risk Analysis.

Paté-Cornell’s paper draws upon two commonly cited images representing different types of uncertainty — “black swans” and “perfect storms” — that are used both to describe extremely unlikely but high-consequence events and often to justify inaction until after the fact. The uncertainty in “perfect storms” derives mainly from the randomness of rare but known events occurring together. The uncertainty in “black swans” stems from the limits of fundamental understanding of a phenomenon, including in extreme cases, a complete lack of knowledge about its very existence.

Given these two extreme types of uncertainties, Paté-Cornell asks what has been learned about rare events in engineering risk analysis that can be incorporated in other fields such as finance or medicine. She notes that risk management often requires “an in-depth analysis of the system, its functions, and the probabilities of its failure modes.” The discipline confronts uncertainties by systematic identification of failure “scenarios,” including rare ones, using “reasoned imagination,” signals (new intelligence information, medical alerts, near-misses and accident precursors) and a set of analytical tools to assess the chances of events that have not happened yet. A main emphasis of systemic risk analysis is on dependencies (of failures, human errors, etc.) and on the role of external factors, such as earthquakes and tsunamis that become common causes of failure.

The “risk of no risk analysis” is illustrated by the case of the 14 meter Fukushima tsunami resulting from a magnitude 9 earthquake. Historical records showed that large tsunamis had occurred at least twice before in the same area. The first time was the Sanriku earthquake in the year 869, which was estimated at magnitude 8.6 with a tsunami that penetrated 4 kilometers inland. The second was the Sanriku earthquake of 1611, estimated at magnitude 8.1 that caused a tsunami with an estimated maximum wave height of about 20 meters. Yet, those previous events were not factored into the design of the Fukushima Dai-ichi nuclear reactor, which was built for a maximum wave height of 5.7 meters, simply based on the tidal wave caused in that area by the 1960 earthquake in Chile. Similar failures to capture historical data and various “signals” occurred in the cases of the 9/11 attacks, the Columbia Space Shuttle explosion and other examples analyzed in the paper.

The risks of truly unimaginable events that have never been seen before (such as the AIDS epidemic) cannot be assessed a priori, but careful and systematic monitoring, signals observation and a concerted response are keys to limiting the losses. Other rare events that place heavy pressure on human or technical systems are the result of convergences of known events (“perfect storms”) that can and should be anticipated. Their probabilities can be assessed using a set of analytical tools that capture dependencies and dynamics in scenario analysis. Given the results of such models, there should be no excuse for failing to take measures against rare but predictable events that have damaging consequences, and to react to signals, even imperfect ones, that something new may be unfolding.

Journal Reference:

  1. Elisabeth Paté-Cornell. On “Black Swans” and “Perfect Storms”: Risk Analysis and Management When Statistics Are Not Enough. Risk Analysis, 2012; DOI: 10.1111/j.1539-6924.2011.01787.x

Risk (Fractal Ontology)

http://fractalontology.wordpress.com/2012/11/01/risk/

Joseph Weissman | Thursday, November 1, 2012

Paul Klee, “Insula Dulcamara” (1938); Oil on newsprint, mounted on burlap

I began writing this before disaster struck very close to home; and so I finish it without finishing it. A disaster never really ends; it strikes and strikes continuously — and so even silence is insufficient. But yet there is also no expression of concern, no response which could address comprehensively the immense and widespread suffering of bodies and minds and spirits. I would want to emphasize my plea below upon the responsibility of thinkers and artists and writers to create new ways of thinking the disaster; if only to mitigate the possibility of their recurrence. (Is it not the case that the disaster increasingly has the characteristics of the accident; that the Earth and global techno-science are increasingly co-extensive Powers?) And yet despite these necessary new ways of thinking and feeling, I fear it will remain the case that nothing can be said about a disaster, if only because nothing can ultimately be thought about the disaster. But it cannot be simply passed over in silence; if nothing can be said, then perhaps everything may be said.

Inherent to the notion of risk is the multiple, or multiplicity. The distance between the many and the multiple is nearly infinite; every problem of the one and the many resolves to the perspective of the one, while multiplicity always singularizes, takes a line of pure variation or difference to its highest power. A multiplicity is already a life, the sea, time: a cosmos or style in terms of powers and forces; a melody or refrain in its fractured infinity.

The multiple is clear in its “being” only transitorily — as the survey of a fleet or swarm or network; the thought which grasps it climbs mountains, ascends vertiginously towards that infinite height which would finally reveal the substrate of the plane, the “truth” of its shadowy depths, the mysterious origins of its nomadic populations.

No telescopic lens could be large enough to approach this distance; and yet it is traversed instantaneously when the tragic arc of a becoming terminates in disaster; when a line of flight turns into a line of death, when one-or-several lines of organization and development reach a point beyond which avoiding self-destruction is impossible.

Chaos, boundless furnace of becoming! Fulminating entropy which compels even the cosmos itself upon a tragic arc of time; are birth and death not one in chaos or superfusion?

Schizophrenia is perhaps this harrowing recognition that there are only machines machining machines, without limit, bottomless.

In chaos, there is no longer disaster; but there are no longer subjects or situations or signifiers. Every subject, signifier and situation approaches its inevitable as the Disaster which would rend their very being from them; hence the nihilism of the sign, the emptiness of the subject, the void of the situation. Existence is farce — if loss is (permitted to become) tragedy, omnipresent, cosmic, deified.

There is an infinite tragedy at the heart of the disaster; a trauma which makes the truth of our fate impossible-to-utter; on the one hand because imbued with infinite meaning, because singular — and on the other, in turn, meaningless, because essentially nullified, without-reason. That the disaster is never simply pure incidental chaos, a purely an-historical interruption, is perhaps the key point: we start and end with a disaster that prevents us from establishing either end or beginning — a disaster which swiftly looms to cosmic and even ontological proportions…

Perhaps there is only a life after the crisis, after a breakthrough or breakdown; after an encounter with the outside. A life as strategy or risk, which is perhaps to say a multiplicity: a life, or the breakthrough of — and, perhaps inevitably, breakdown before — white walls, mediation, determinacy.

A life in any case is always-already a voice, a cosmos, a thought: it is light or free movement whose origin and destination cannot be identified as stable sites or moments, whose comings and goings are curiously intertwined and undetermined.

We cannot know the limits of a life’s power; but we know disaster. We know that multiplicities, surging flocks of actions and passions, are continually at risk.

The world presents itself unto a life as an inescapable gravity, monstrous fate, the contagion of space, time, organization. A life expresses itself as an openness which is lacerated by the Open.

A life is a cosmos within a cosmos — and so a life opens up closed systems; it struggles and learns not in spite of entropy but on account of it, through a kind of critical strategy, even a perversely recursive or fractal strategy; through the micro-cosmogenetic sieve of organic life, entropy perversely becomes a hyper-organizational principle.

A life enters into a perpetual and weightless ballet — in a defiance-which-is-not-a-defiance of stasis; a stasis which yet presents a grave and continuous danger to a life.

What is a life, apart from infinite movement or disaster? Time, a dream, the sea: but a life moves beyond rivers of time, or seas of dreaming, or the outer spaces of radical forgetting (and alien memories…)

A life is a silence which may become wise. A life — or that perverse machine which works only by breaking down — or through…

A life is intimacy through parasitism, already a desiring-machine-factory or a tensor-calculus of the unconscious.

A life lives in taut suspension from one or several lines of becoming, of flight or death — lines whose ultimate trajectories may not be known through any safe or even sure method.

A life is the torsion between dying and rebirth.

Superfusion between all potentialities, a life is infinite-becoming of the subjectless-subject. Superject.

Journeying and returning, without moving, from the infinity and chaos of the outside/inside. A stationary voyage in a non-dimensional cosmos, where everything flows, heats, grinds.

Phenomenology is a geology of the unconscious, a problem of the crystalline apparatus of time. Could there be at long last a technology of time which would abandon strip-mining the subconscious?

A chrono-technics which ethico-aesthetically creates and transforms virtual and actual worlds, traces possibilities of living and thinking; diagnoses psychic, social and physical ecosystems simultaneously.

A communications-strategy, but one that could point beyond the vicious binary of coercion and conflation — but so therefore would not-communicate.

There is a recursive problem surrounding the silence and darkness at the heart of a life; it is perhaps impossible to exhaust (at least clinically) the infinitely-deferred origin of those crystalline temporal dynamisms which in turn structure any-moment-whatsoever.

Is there a silence which would constitute that very singular machinic ‘sheaf’, the venerated crystalline paradise of the moved-unmoving?

Silence, wisdom.

The impossibility of this origin is also the interminability of the analysis; also the infinite movement attending any moment whatsoever. It is the history of disaster, of the devil.

There is only thinking when a thought becomes critically or clinically engaged with a world, a cosmos. This engagement discovers its bottomlessness in a disaster for thought itself. A disaster for life, thought, the world; but also perhaps their infinitely-deferred origins…

What happens in the physical, economic, social and psychic collapse of a world, a thought, a life? Is it only in this collapse, commensurate with the collision, interference of one cosmos with another…?

Collapse is never a final state. There is no closed system of causes but a kind of original fracture. The schizophrenic coexistence of many separate worlds in a kind of meta-stable superfusion.

A thought, a cosmos, a world, a life can have no other origin than the radical corruption and novel genesis of a pure substance of thinking, living, “worlding,” “cosmosing.” A becoming refracts within its own infinite history the history of a life, a world, a thought.

Although things doubtless seem discouraging, at any moment whatsoever a philosophy can be made possible. At any time and place, this cyclonic involution of the library of Babel can be reactivated, this golden ball propelled by comet-fire and dancing towards the future can be captured in a moment’s reflection…

The breakdown of the world, of thought, of life — the experience of absolute collapse, of the horror of the vacuum, is already close to the infinite zero-point reached immediately and effortlessly by schizophrenia. Even in a joyous mode when it recognizes the properly affirmative component of the revelation of cosmos as production, production as multiplicity, multiplicity as it opens onto the infinite or the future. (Only the infinity of the future can become-equal to a life.)

That spirit which fixes a beginning in space and time, fixes it without fixing itself; it exemplifies the possibility of atemporality and the heresy of the asignifying, even while founding the possibility of piety and dogma.

The disaster presents thought and language with their cosmic doubles; thought encounters a disaster in the way a subject encounters a radical outside, a death.

Only selection answers to chaos, to the infinite horizon of a life — virtually mapping infinite potential planes of organization onto a singular line of development. Only selection, only the possibility of philosophy, points beyond the inevitability of disaster.

The disaster and its aversion is the basic orientation of critical thought; thinking the disaster: this impossible task is the critical cultural aim of art and writing. Speaking the truth of the disaster is perhaps impossible. A life encounters disaster as the annihilating of the code itself; not merely a decoding but the alienation from the essence of matter or speech or language. The means to thinking the disaster lie in poetic imagination, the possibility of the temporal retrojection of narrative elements; the disaster can be thought only through “unthinking” it: in the capacity of critical or poetic imagination to explore the means by which a disaster was retroactively averted. The counterfactual acquires a new and radical dimension: not the theological dimension of salvation, but a clinical dimension — the power to think the transformation of the conditions of the disaster.

Embrapa Sends Corn and Rice Seeds to the Svalbard Vault in Norway (O Globo)

JC e-mail 4577, September 5, 2012.

The Nordic vault is the most secure in the world, built to withstand climate catastrophes and a nuclear explosion.

This week Embrapa is sending 264 representative samples of corn seed and 541 of rice to the Svalbard Global Seed Vault in Norway, as part of the agreement signed with the country's Royal Ministry of Agriculture and Food in 2008. The Norwegian gene bank will receive the core collections of rice and corn, that is, a limited group of accessions derived from a plant collection, chosen to represent the genetic variability of the entire collection. Traditionally, core collections are established at a size of around 10% of the accessions of the whole original collection and include approximately 70% of the gene pool.

The choice of these crops follows one of the Svalbard vault's recommendations regarding relevance to food security and sustainable agriculture. Although they are not crops native to Brazil, they have been cultivated in the country for centuries and are hardy and well adapted to national conditions. The next crop to be sent to the Norwegian vault will be beans, which should happen by the end of 2012.

Sending samples to Svalbard is an additional safeguard, since the Nordic vault is the most secure in the world, built to withstand climate catastrophes and even a nuclear explosion. The vault has capacity for 4.5 million seed samples. The complex comprises three maximum-security chambers located at the end of a 125-meter tunnel inside a mountain on a small island of the Svalbard archipelago, at latitude 78° N, near the North Pole.

The seeds are stored at 20°C below zero in hermetically sealed packages, kept in boxes stored on shelves. The facility is surrounded by the glacial Arctic climate, which ensures low temperatures even if the electricity supply fails. The low temperature and humidity keep metabolic activity low, preserving seed viability for a millennium or more.

Earthquake Hazards Map Study Finds Deadly Flaws (Science Daily)

ScienceDaily (Aug. 31, 2012) — Three of the largest and deadliest earthquakes in recent history occurred where earthquake hazard maps didn’t predict massive quakes. A University of Missouri scientist and his colleagues recently studied the reasons for the maps’ failure to forecast these quakes. They also explored ways to improve the maps. Developing better hazard maps and alerting people to their limitations could potentially save lives and money in areas such as the New Madrid, Missouri fault zone.

“Forecasting earthquakes involves many uncertainties, so we should inform the public of these uncertainties,” said Mian Liu, of MU’s department of geological sciences. “The public is accustomed to the uncertainties of weather forecasting, but foreseeing where and when earthquakes may strike is far more difficult. Too much reliance on earthquake hazard maps can have serious consequences. Two suggestions may improve this situation. First, we recommend a better communication of the uncertainties, which would allow citizens to make more informed decisions about how to best use their resources. Second, seismic hazard maps must be empirically tested to find out how reliable they are and thus improve them.”

Liu and his colleagues suggest testing maps against what is called a null hypothesis, the possibility that the likelihood of an earthquake in a given area — like Japan — is uniform. Testing would show which mapping approaches were better at forecasting earthquakes and subsequently improve the maps.
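A minimal sketch of the kind of retrospective test being proposed, assuming a simple Poisson comparison; the zones, forecast rates and observed counts below are invented for illustration and are not data from the study.

```python
# Illustrative sketch of testing a hazard map against a uniform null hypothesis.
# All zone names, forecast rates and observed counts are made up for illustration.
import math

def poisson_log_likelihood(observed, expected):
    """Log-likelihood of an observed event count under a Poisson model with the given mean."""
    return observed * math.log(expected) - expected - math.lgamma(observed + 1)

# Hypothetical zones: (map's forecast events per decade, observed events per decade)
zones = {"zone A": (5.0, 1), "zone B": (0.5, 3), "zone C": (2.0, 2)}

total_forecast = sum(forecast for forecast, _ in zones.values())
uniform_rate = total_forecast / len(zones)  # null hypothesis: same expected rate everywhere

map_score = sum(poisson_log_likelihood(obs, forecast) for forecast, obs in zones.values())
null_score = sum(poisson_log_likelihood(obs, uniform_rate) for _, obs in zones.values())

print(f"map log-likelihood:  {map_score:.2f}")
print(f"null log-likelihood: {null_score:.2f}")
# A map that cannot beat the uniform null adds no forecasting skill.
```

In this invented example the uniform null actually scores higher than the map, mirroring the kind of failure the researchers describe.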

Liu and his colleagues at Northwestern University and the University of Tokyo detailed how hazard maps had failed in three major quakes that struck within a decade of each other. The researchers interpreted the shortcomings of hazard maps as the result of bad assumptions, bad data, bad physics and bad luck.

Wenchuan, China — In 2008, a quake struck China’s Sichuan Province and cost more than 69,000 lives. Locals blamed the government and contractors for not making buildings in the area earthquake-proof, according to Liu, who says that hazard maps bear some of the blame as well since the maps, based on bad assumptions, had designated the zone as an area of relatively low earthquake hazard.

Léogâne, Haiti — The 2010 earthquake that devastated Port-au-Prince and killed an estimated 316,000 people occurred along a fault that had not caused a major quake in hundreds of years. Using only the short history of earthquakes since seismometers were invented approximately one hundred years ago yielded hazard maps that didn’t indicate the danger there.

Tōhoku, Japan — Scientists previously thought the faults off the northeast coast of Japan weren’t capable of causing massive quakes and thus giant tsunamis like the one that destroyed the Fukushima nuclear reactor. This bad understanding of particular faults’ capabilities led to a lack of adequate preparation. The area had been prepared for smaller quakes and the resulting tsunamis, but the Tōhoku quake overwhelmed the defenses.

“If we limit our attention to the earthquake records in the past, we will be unprepared for the future,” Liu said. “Hazard maps tend to underestimate the likelihood of quakes in areas where they haven’t occurred previously. In most places, including the central and eastern U.S., seismologists don’t have a long enough record of earthquake history to make predictions based on historical patterns. Although bad luck can mean that quakes occur in places with a genuinely low probability, what we see are too many ‘black swans,’ or too many exceptions to the presumed patterns.”

“We’re playing a complicated game against nature,” said the study’s first author, Seth Stein of Northwestern University. “It’s a very high stakes game. We don’t really understand all the rules very well. As a result, our ability to assess earthquake hazards often isn’t very good, and the policies that we make to mitigate earthquake hazards sometimes aren’t well thought out. For example, the billions of dollars the Japanese spent on tsunami defenses were largely wasted.

“We need to very carefully try to formulate the best strategies we can, given the limits of our knowledge,” Stein said. “Understanding the uncertainties in earthquake hazard maps, testing them, and improving them is important if we want to do better than we’ve done so far.”

The study, “Why earthquake hazard maps often fail and what to do about it,” was published by the journal Tectonophysics. First author of the study was Seth Stein of Northwestern University. Robert Geller of the University of Tokyo was co-author. Mian Liu is William H. Byler Distinguished Chair in Geological Sciences in the College of Arts and Science at the University of Missouri.

Cloud Brightening to Control Global Warming? Geoengineers Propose an Experiment (Science Daily)

A conceptualized image of an unmanned, wind-powered, remotely controlled ship that could be used to implement cloud brightening. (Credit: John McNeill)

ScienceDaily (Aug. 20, 2012) — Even though it sounds like science fiction, researchers are taking a second look at a controversial idea that uses futuristic ships to shoot salt water high into the sky over the oceans, creating clouds that reflect sunlight and thus counter global warming.

University of Washington atmospheric physicist Rob Wood describes a possible way to run an experiment to test the concept on a small scale in a comprehensive paper published this month in the journal Philosophical Transactions of the Royal Society.

The point of the paper — which includes updates on the latest study into what kind of ship would be best to spray the salt water into the sky, how large the water droplets should be and the potential climatological impacts — is to encourage more scientists to consider the idea of marine cloud brightening and even poke holes in it. In the paper, he and a colleague detail an experiment to test the concept.

“What we’re trying to do is make the case that this is a beneficial experiment to do,” Wood said. With enough interest in cloud brightening from the scientific community, funding for an experiment may become possible, he said.

The theory behind so-called marine cloud brightening is that adding particles, in this case sea salt, to the sky over the ocean would form large, long-lived clouds. Clouds appear when water forms around particles. Since there is a limited amount of water in the air, adding more particles creates more, but smaller, droplets.

“It turns out that a greater number of smaller drops has a greater surface area, so it means the clouds reflect a greater amount of light back into space,” Wood said. That creates a cooling effect on Earth.
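Wood’s surface-area point can be made concrete with a back-of-the-envelope relation that is not taken from the paper: if a fixed volume of cloud water is divided among N droplets, each droplet’s radius shrinks as N^(-1/3), so the total droplet surface area, and with it the reflected light, grows as N^(1/3); doubling the droplet number raises the surface area by about 26 percent.

```latex
% A fixed liquid water volume V shared among N droplets of radius r
\[
r = \left(\frac{3V}{4\pi N}\right)^{1/3},
\qquad
A_{\text{total}} = N \cdot 4\pi r^{2} = (4\pi)^{1/3}\,(3V)^{2/3}\,N^{1/3}
\]
% Total droplet surface area grows as N^(1/3): more, smaller droplets reflect more light.
```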

Marine cloud brightening is part of a broader concept known as geoengineering which encompasses efforts to use technology to manipulate the environment. Brightening, like other geoengineering proposals, is controversial for its ethical and political ramifications and the uncertainty around its impact. But those aren’t reasons not to study it, Wood said.

“I would rather that responsible scientists test the idea than groups that might have a vested interest in proving its success,” he said. The danger with private organizations experimenting with geoengineering is that “there is an assumption that it’s got to work,” he said.

Wood and his colleagues propose trying a small-scale experiment to test feasibility and begin to study effects. The test should start by deploying sprayers on a ship or barge to ensure that they can inject enough particles of the targeted size to the appropriate elevation, Wood and a colleague wrote in the report. An airplane equipped with sensors would study the physical and chemical characteristics of the particles and how they disperse.

The next step would be to use additional airplanes to study how the cloud develops and how long it remains. The final phase of the experiment would send out five to 10 ships spread out across a 100 kilometer, or 62 mile, stretch. The resulting clouds would be large enough so that scientists could use satellites to examine them and their ability to reflect light.

Wood said there is very little chance of long-term effects from such an experiment. Based on studies of pollutants, which emit particles that cause a similar reaction in clouds, scientists know that the impact of adding particles to clouds lasts only a few days.

Still, such an experiment would be unusual in the world of climate science, where scientists observe rather than actually try to change the atmosphere.

Wood notes that running the experiment would advance knowledge around how particles like pollutants impact the climate, although the main reason to do it would be to test the geoengineering idea.

A phenomenon that inspired marine cloud brightening is ship trails: clouds that form behind the paths of ships crossing the ocean, similar to the trails that airplanes leave across the sky. Ship trails form around particles released from burning fuel.

But in some cases ship trails make clouds darker. “We don’t really know why that is,” Wood said.

Despite increasing interest from scientists like Wood, there is still strong resistance to cloud brightening.

“It’s a quick-fix idea when really what we need to do is move toward a low-carbon emission economy, which is turning out to be a long process,” Wood said. “I think we ought to know about the possibilities, just in case.”

The authors of the paper are treading cautiously.

“We stress that there would be no justification for deployment of [marine cloud brightening] unless it was clearly established that no significant adverse consequences would result. There would also need to be an international agreement firmly in favor of such action,” they wrote in the paper’s summary.

There are 25 authors on the paper, including scientists from University of Leeds, University of Edinburgh and the Pacific Northwest National Laboratory. The lead author is John Latham of the National Center for Atmospheric Research and the University of Manchester, who pioneered the idea of marine cloud brightening.

Wood’s research was supported by the UW College of the Environment Institute.

Journal Reference:

J. Latham, K. Bower, T. Choularton, H. Coe, P. Connolly, G. Cooper, T. Craft, J. Foster, A. Gadian, L. Galbraith, H. Iacovides, D. Johnston, B. Launder, B. Leslie, J. Meyer, A. Neukermans, B. Ormond, B. Parkes, P. Rasch, J. Rush, S. Salter, T. Stevenson, H. Wang, Q. Wang, R. Wood. Marine cloud brightening. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2012; 370 (1974): 4217 DOI: 10.1098/rsta.2012.0086

Doctors Often Don’t Disclose All Possible Risks to Patients Before Treatment (Science Daily)

ScienceDaily (Aug. 7, 2012) — Most informed consent disputes involve disagreements about who said what and when, not stand-offs over whether a particular risk ought to have been disclosed. But doctors may “routinely underestimate the importance of a small set of risks that vex patients” according to international experts writing in this week’s PLoS Medicine.

Increasingly, doctors are expected to advise and empower patients to make rational choices by sharing information that may affect treatment decisions, including risks of adverse outcomes. However, authors from Australia and the US led by David Studdert from the University of Melbourne argue that doctors, especially surgeons, are often unsure which clinical risks they should disclose and discuss with patients before treatment.

To understand more about the clinical circumstances in which disputes arise between doctors and patients in this area, the authors analyzed 481 malpractice claims and patient complaints from Australia involving allegations of deficiencies in the process of obtaining informed consent.

The authors found that 45 (9%) of the cases studied were disputed duty cases — that is, they involved head-to-head disagreements over whether a particular risk ought to have been disclosed before treatment. Two-thirds of these disputed duty cases involved surgical procedures, and the majority (38/45) of them related to five specific outcomes that had quality of life implications for patients, including chronic pain and the need for re-operation.

The authors found that the most common justifications doctors gave for not telling patients about particular risks before treatment were that they considered such risks too rare to warrant discussion or the specific risk was covered by a more general risk that was discussed.

However, nine in ten of the disputes studied centered on factual disagreements — arguments over who said what, and when. The authors say: “Documenting consent discussions in the lead-up to surgical procedures is particularly important, as most informed consent claims and complaints involved factual disagreements over the disclosure of operative risks.”

The authors say: “Our findings suggest that doctors may systematically underestimate the premium patients place on understanding certain risks in advance of treatment.”

They conclude: “Improved understanding of these situations helps to spotlight gaps between what patients want to hear and what doctors perceive patients want — or should want — to hear. It may also be useful information for doctors eager to avoid medico-legal disputes.”

Teen Survival Expectations Predict Later Risk-Taking Behavior (Science Daily)

ScienceDaily (Aug. 1, 2012) — Some young people’s expectations that they will not live long, healthy lives may actually foreshadow such outcomes.

New research published August 1 in the open access journal PLOS ONE reports that, for American teens, the expectation of death before the age of 35 predicted increased risk behaviors including substance abuse and suicide attempts later in life and a doubling to tripling of mortality rates in young adulthood.

The researchers, led by Quynh Nguyen of Northeastern University in Boston, found that one in seven participants in grades 7 to 12 reported perceiving a 50-50 chance or less of surviving to age 35. Upon follow-up interviews over a decade later, the researchers found that low expectations of longevity at young ages predicted increased suicide attempts and suicidal thoughts as well as heavy drinking, smoking, and use of illicit substances later in life relative to their peers who were almost certain they would live to age 35.

“The association between early survival expectations and detrimental outcomes suggests that monitoring survival expectations may be useful for identifying at-risk youth,” the authors state.

The study compared data collected from 19,000 adolescents in 1994-1995 to follow-up data collected from the same respondents 13-14 years later. The cohort was part of the National Longitudinal Study of Adolescent Health (Add Health), conducted by the Carolina Population Center and funded by the National Institutes of Health and 23 other federal agencies and foundations.

Journal Reference:

Quynh C. Nguyen, Andres Villaveces, Stephen W. Marshall, Jon M. Hussey, Carolyn T. Halpern, Charles Poole. Adolescent Expectations of Early Death Predict Adult Risk Behaviors. PLoS ONE, 2012; 7(8): e41905. DOI: 10.1371/journal.pone.0041905

Severe Nuclear Reactor Accidents Likely Every 10 to 20 Years, European Study Suggests (Science Daily)

ScienceDaily (May 22, 2012) — Western Europe has the worldwide highest risk of radioactive contamination caused by major reactor accidents.

Global risk of radioactive contamination. The map shows the annual probability in percent of radioactive contamination by more than 40 kilobecquerels per square meter. In Western Europe the risk is around two percent per year. (Credit: Daniel Kunkel, MPI for Chemistry, 2011)

Catastrophic nuclear accidents such as the core meltdowns in Chernobyl and Fukushima are more likely to happen than previously assumed. Based on the operating hours of all civil nuclear reactors and the number of nuclear meltdowns that have occurred, scientists at the Max Planck Institute for Chemistry in Mainz have calculated that such events may occur once every 10 to 20 years (based on the current number of reactors) — some 200 times more often than estimated in the past. The researchers also determined that, in the event of such a major accident, half of the radioactive caesium-137 would be deposited more than 1,000 kilometres away from the nuclear reactor. Their results show that Western Europe is likely to be contaminated with more than 40 kilobecquerels of caesium-137 per square meter about once every 50 years. According to the International Atomic Energy Agency, an area is considered contaminated from this level onwards. In view of their findings, the researchers call for an in-depth analysis and reassessment of the risks associated with nuclear power plants.

The reactor accident in Fukushima has fuelled the discussion about nuclear energy and triggered Germany’s exit from its nuclear power program. It appears that the global risk of such a catastrophe is higher than previously thought, according to a study carried out by a research team led by Jos Lelieveld, Director of the Max Planck Institute for Chemistry in Mainz: “After Fukushima, the prospect of such an incident occurring again came into question, as did whether we can actually calculate the radioactive fallout using our atmospheric models.” According to the results of the study, a nuclear meltdown in one of the reactors in operation worldwide is likely to occur once in 10 to 20 years. Currently, there are 440 nuclear reactors in operation, and 60 more are planned.

To determine the likelihood of a nuclear meltdown, the researchers applied a simple calculation. They divided the operating hours of all civilian nuclear reactors in the world, from the commissioning of the first up to the present, by the number of reactor meltdowns that have actually occurred. The total operating time comes to 14,500 reactor-years, and the number of reactor meltdowns comes to four — one in Chernobyl and three in Fukushima. This translates into one major accident, as defined by the International Nuclear Event Scale (INES), every 3,625 reactor-years. Even if this result is conservatively rounded to one major accident every 5,000 reactor-years, the risk is 200 times higher than the estimate for catastrophic, non-contained core meltdowns made by the U.S. Nuclear Regulatory Commission in 1990. The Mainz researchers did not distinguish between the ages and types of reactors, or consider whether they are located in regions of enhanced risk, from earthquakes for example. After all, nobody had anticipated the reactor catastrophe in Japan.
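The quoted figures can be reproduced with back-of-the-envelope arithmetic. The sketch below is not the Mainz group's own code; it simply restates the numbers given above (14,500 reactor-years, four meltdowns, 440 operating reactors), and the exact 10-to-20-year range in the press release presumably reflects additional rounding or assumptions not spelled out here.

    # Back-of-the-envelope reproduction of the quoted figures (not the study's own code)
    OPERATING_REACTOR_YEARS = 14_500   # cumulative civilian reactor operating time
    MELTDOWNS = 4                      # one at Chernobyl, three at Fukushima
    REACTORS_IN_OPERATION = 440        # current worldwide fleet

    reactor_years_per_meltdown = OPERATING_REACTOR_YEARS / MELTDOWNS   # 3,625
    conservative_estimate = 5_000                                      # rounded-up value used in the text

    # Expected interval between major accidents somewhere in the world,
    # given today's fleet size (on the order of a decade):
    interval_low = reactor_years_per_meltdown / REACTORS_IN_OPERATION  # ~8 years
    interval_high = conservative_estimate / REACTORS_IN_OPERATION      # ~11 years

    print(f"One core meltdown per {reactor_years_per_meltdown:,.0f} reactor-years")
    print(f"Expected global interval: roughly {interval_low:.0f} to {interval_high:.0f} years")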

25 percent of the radioactive particles are transported further than 2,000 kilometres

Subsequently, the researchers determined the geographic distribution of radioactive gases and particles around a possible accident site using a computer model that describes Earth’s atmosphere. The model calculates meteorological conditions and flows, and also accounts for chemical reactions in the atmosphere. The model can compute the global distribution of trace gases, for example, and can also simulate the spreading of radioactive gases and particles. To approximate the radioactive contamination, the researchers calculated how the particles of radioactive caesium-137 (137Cs) disperse in the atmosphere, where they deposit on Earth’s surface and in what quantities. The 137Cs isotope is a product of the nuclear fission of uranium. It has a half-life of 30 years and was one of the key elements in the radioactive contamination following the disasters of Chernobyl and Fukushima.

The computer simulations revealed that, on average, only eight percent of the 137Cs particles are expected to deposit within an area of 50 kilometres around the nuclear accident site. Around 50 percent of the particles would be deposited outside a radius of 1,000 kilometres, and around 25 percent would spread even further than 2,000 kilometres. These results underscore that reactor accidents are likely to cause radioactive contamination well beyond national borders.

The results of the dispersion calculations were combined with the likelihood of a nuclear meltdown and the actual density of reactors worldwide to calculate the current risk of radioactive contamination around the world. According to the International Atomic Energy Agency (IAEA), an area with more than 40 kilobecquerels of radioactivity per square meter is defined as contaminated.

The team in Mainz found that in Western Europe, where the density of reactors is particularly high, contamination by more than 40 kilobecquerels per square meter is expected to occur about once every 50 years. It appears that citizens in the densely populated southwestern part of Germany run the world’s highest risk of radioactive contamination, owing to the numerous nuclear power plants situated near the borders between France, Belgium and Germany, and the dominant westerly wind direction.
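That once-in-roughly-50-years figure is simply the inverse of the annual probability shown in the map caption above; assuming the annual risk is roughly constant (an assumption of this note, not a claim of the study),

\[ P_{\text{annual}} \approx \frac{1}{T_{\text{return}}} = \frac{1}{50\ \text{yr}} = 0.02 = 2\,\%\ \text{per year}. \]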

If a single nuclear meltdown were to occur in Western Europe, around 28 million people on average would be affected by contamination of more than 40 kilobecquerels per square meter. This figure is even higher in southern Asia, due to the dense populations. A major nuclear accident there would affect around 34 million people, while in the eastern USA and in East Asia this would be 14 to 21 million people.

“Germany’s exit from the nuclear energy program will reduce the national risk of radioactive contamination. However, an even stronger reduction would result if Germany’s neighbours were to switch off their reactors,” says Jos Lelieveld. “Not only do we need an in-depth and public analysis of the actual risks of nuclear accidents. In light of our findings I believe an internationally coordinated phasing out of nuclear energy should also be considered,” adds the atmospheric chemist.

Is there a technological solution to global warming? (The New Yorker)

ANNALS OF SCIENCE

THE CLIMATE FIXERS

by Michael Specter, MAY 14, 2012

Geoengineering holds out the promise of artificially reversing recent climate trends, but it entails enormous risks.


Late in the afternoon on April 2, 1991, Mt. Pinatubo, a volcano on the Philippine island of Luzon, began to rumble with a series of the powerful steam explosions that typically precede an eruption. Pinatubo had been dormant for more than four centuries, and in the volcanological world the mountain had become little more than a footnote. The tremors continued in a steady crescendo for the next two months, until June 15th, when the mountain exploded with enough force to expel molten lava at the speed of six hundred miles an hour. The lava flooded a two-hundred-and-fifty-square-mile area, requiring the evacuation of two hundred thousand people.

Within hours, the plume of gas and ash had penetrated the stratosphere, eventually reaching an altitude of twenty-one miles. Three weeks later, an aerosol cloud had encircled the earth, and it remained for nearly two years. Twenty million metric tons of sulfur dioxide mixed with droplets of water, creating a kind of gaseous mirror, which reflected solar rays back into the sky. Throughout 1992 and 1993, the amount of sunlight that reached the surface of the earth was reduced by more than ten per cent.

The heavy industrial activity of the previous hundred years had caused the earth’s climate to warm by roughly three-quarters of a degree Celsius, helping to make the twentieth century the hottest in at least a thousand years. The eruption of Mt. Pinatubo, however, reduced global temperatures by nearly that much in a single year. It also disrupted patterns of precipitation throughout the planet. It is believed to have influenced events as varied as floods along the Mississippi River in 1993 and, later that year, the drought that devastated the African Sahel. Most people considered the eruption a calamity.

For geophysical scientists, though, Mt. Pinatubo provided the best model in at least a century to help us understand what might happen if humans attempted to ameliorate global warming by deliberately altering the climate of the earth.

For years, even to entertain the possibility of human intervention on such a scale—geoengineering, as the practice is known—has been denounced as hubris. Predicting long-term climatic behavior by using computer models has proved difficult, and the notion of fiddling with the planet’s climate based on the results generated by those models worries even scientists who are fully engaged in the research. “There will be no easy victories, but at some point we are going to have to take the facts seriously,’’ David Keith, a professor of engineering and public policy at Harvard and one of geoengineering’s most thoughtful supporters, told me. “Nonetheless,’’ he added, “it is hyperbolic to say this, but no less true: when you start to reflect light away from the planet, you can easily imagine a chain of events that would extinguish life on earth.”

There is only one reason to consider deploying a scheme with even a tiny chance of causing such a catastrophe: if the risks of not deploying it were clearly higher. No one is yet prepared to make such a calculation, but researchers are moving in that direction. To offer guidance, the Intergovernmental Panel on Climate Change (I.P.C.C.) has developed a series of scenarios on global warming. The cheeriest assessment predicts that by the end of the century the earth’s average temperature will rise between 1.1 and 2.9 degrees Celsius. A more pessimistic projection envisages a rise of between 2.4 and 6.4 degrees—far higher than at any time in recorded history. (There are nearly two degrees Fahrenheit in one degree Celsius. A rise of 2.4 to 6.4 degrees Celsius would equal 4.3 to 11.5 degrees Fahrenheit.) Until recently, climate scientists believed that a six-degree rise, the effects of which would be an undeniable disaster, was unlikely. But new data have changed the minds of many. Late last year, Fatih Birol, the chief economist for the International Energy Agency, said that current levels of consumption “put the world perfectly on track for a six-degree Celsius rise in temperature. . . . Everybody, even schoolchildren, knows this will have catastrophic implications for all of us.”
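For reference, the conversion in the parenthetical follows from the standard relation between the two temperature scales; for a temperature difference the 32-degree offset cancels, leaving

\[ \Delta T_{\mathrm{F}} = \tfrac{9}{5}\,\Delta T_{\mathrm{C}}, \qquad 2.4\ ^{\circ}\mathrm{C} \times \tfrac{9}{5} \approx 4.3\ ^{\circ}\mathrm{F}, \qquad 6.4\ ^{\circ}\mathrm{C} \times \tfrac{9}{5} \approx 11.5\ ^{\circ}\mathrm{F}. \]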

Tens of thousands of wildfires have already been attributed to warming, as have melting glaciers and rising seas. (The warming of the oceans is particularly worrisome; as Arctic ice melts, water that was below the surface becomes exposed to the sun and absorbs more solar energy, which leads to warmer oceans—a loop that could rapidly spin out of control.) Even a two-degree climb in average global temperatures could cause crop failures in parts of the world that can least afford to lose the nourishment. The size of deserts would increase, along with the frequency and intensity of wildfires. Deliberately modifying the earth’s atmosphere would be a desperate gamble with significant risks. Yet the more likely climate change is to cause devastation, the more attractive even the most perilous attempts to mitigate those changes will become.

“We don’t know how bad this is going to be, and we don’t know when it is going to get bad,’’ Ken Caldeira, a climate scientist with the Carnegie Institution, told me. In 2007, Caldeira was a principal contributor to an I.P.C.C. team that won a Nobel Peace Prize. “There are wide variations within the models,’’ he said. “But we had better get ready, because we are running rapidly toward a minefield. We just don’t know where the minefield starts, or how long it will be before we find ourselves in the middle of it.”

The Maldives, a string of islands off the coast of India whose highest point above sea level is eight feet, may be the first nation to drown. In Alaska, entire towns have begun to shift in the loosening permafrost. The Florida economy is highly dependent upon coastal weather patterns; the tide station at Miami Beach has registered an increase of seven inches since 1935, according to the National Oceanic and Atmospheric Administration. One Australian study, published this year in the journal Nature Climate Change, found that a two-degree Celsius rise in the earth’s temperature would be accompanied by a significant spike in the number of lives lost just in Brisbane. Many climate scientists say their biggest fear is that warming could melt the Arctic permafrost—which stretches for thousands of miles across Alaska, Canada, and Siberia. There is twice as much CO2 locked beneath the tundra as there is in the earth’s atmosphere. Melting would release enormous stores of methane, a greenhouse gas nearly thirty times more potent than carbon dioxide. If that happens, as the hydrologist Jane C. S. Long told me when we met recently in her office at the Lawrence Livermore National Laboratory, “it’s game over.”

The Stratospheric Particle Injection for Climate Engineering project, or SPICE, is a British academic consortium that seeks to mimic the actions of volcanoes like Pinatubo by pumping particles of sulfur dioxide, or similar reflective chemicals, into the stratosphere through a twelve-mile-long pipe held aloft by a balloon at one end and tethered, at the other, to a boat anchored at sea.

The consortium consists of three groups. At Bristol University, researchers led by Matt Watson, a professor of geophysics, are trying to determine which particles would have the maximum desired impact with the smallest likelihood of unwanted side effects. Sulfur dioxide produces sulfuric acid, which destroys the ozone layer of the atmosphere; there are similar compounds that might work while proving less environmentally toxic—including synthetic particles that could be created specifically for this purpose. At Cambridge, Hugh Hunt and his team are trying to determine the best way to get those particles into the stratosphere. A third group, at Oxford, has been focussing on the effect such an intervention would likely have on the earth’s climate.

Hunt and I spoke in Cambridge, at Trinity College, where he is a professor of engineering and the Keeper of the Trinity College clock, a renowned timepiece that gains or loses less than a second a month. In his office, dozens of boomerangs dangle from the wall. When I asked about them, he grabbed one and hurled it at my head. “I teach three-dimensional dynamics,’’ he said, flicking his hand in the air to grab it as it returned. Hunt has devoted his intellectual life to the study of mechanical vibration. His Web page is filled with instructive videos about gyroscopes, rings wobbling down rods, and boomerangs.

“I like to demonstrate the way things spin,’’ he said, as he put the boomerang down and picked up an inflated pink balloon attached to a string. “The principle is pretty simple.” Holding the string, Hunt began to bobble the balloon as if it were being tossed by foul weather. “Everything is fine if it is sitting still,’’ he continued, holding the balloon steady. Then he began to wave his arm erratically. “One of the problems is that nothing is going to be still up there. It is going to be moving around. And the question we’ve got is . . . this pipe”—the industrial hose that will convey the particles into the sky—“is going to be under huge stressors.’’ He snapped the string connected to the balloon. “How do you know it’s not going to break? We are really pushing things to the limit in terms of their strength, so it is essential that we get the dynamics of motion right.’’

Most scientists, even those with no interest in personal publicity, are vigorous advocates for their own work. Not this group. “I don’t know how many times I have said this, but the last thing I would ever want is for the project I have been working on to be implemented,’’ Hunt said. “If we have to use these tools, it means something on this planet has gone seriously wrong.’’

Last fall, the SPICE team decided to conduct a brief and uncontroversial pilot study. At least they thought it would be uncontroversial. To demonstrate how they would disperse the sulfur dioxide, they had planned to float a balloon over Norfolk, at an altitude of a kilometre, and send a hundred and fifty litres of water into the air through a hose. After the date and time of the test was announced, in the middle of September, more than fifty organizations signed a petition objecting to the experiment, in part because they fear that even to consider engineering the climate would provide politicians with an excuse for avoiding tough decisions on reducing greenhouse-gas emissions. Opponents of the water test pointed out the many uncertainties in the research (which is precisely why the team wanted to do the experiment). The British government decided to put it off for at least six months.

“When people say we shouldn’t even explore this issue, it scares me,’’ Hunt said. He pointed out that carbon emissions are heavy, and finding a place to deposit them will not be easy. “Roughly speaking, the CO2 we generate weighs three or four times as much as the fuel it comes from.” That means that a short round-trip journey—say, eight hundred miles—by car, using two tanks of gas, produces three hundred kilograms of CO2. “This is ten heavy suitcases from one short trip,’’ Hunt said. “And you have to store it where it can’t evaporate.

“So I have three questions, Where are you going to put it? Who are you going to ask to dispose of this for you? And how much are you reasonably willing to pay them to do it?” he continued. “There is nobody on this planet who can answer any of those questions. There is no established place or technique, and nobody has any idea what it would cost. And we need the answers now.”
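Hunt's ratio and his 300-kilogram figure can be checked with simple stoichiometry. The sketch below treats gasoline as octane and assumes a tank size of about 55 litres and a fuel density of 0.74 kilograms per litre; those assumptions are mine, not Hunt's, so the result lands near, rather than exactly on, his rounded numbers.

    # Rough check of Hunt's CO2-to-fuel ratio (assumed values, not his calculation)
    MOLAR_MASS_OCTANE = 114.2   # g/mol, gasoline approximated as C8H18
    MOLAR_MASS_CO2 = 44.0       # g/mol; complete combustion yields 8 CO2 per octane

    co2_per_kg_fuel = 8 * MOLAR_MASS_CO2 / MOLAR_MASS_OCTANE   # ~3.1 kg CO2 per kg of fuel

    TANK_LITRES = 55            # assumed tank size
    FUEL_DENSITY = 0.74         # kg per litre, typical for gasoline
    fuel_kg = 2 * TANK_LITRES * FUEL_DENSITY                   # two tanks, ~81 kg of fuel

    co2_kg = fuel_kg * co2_per_kg_fuel                         # ~250 kg of CO2
    print(f"CO2/fuel mass ratio: {co2_per_kg_fuel:.1f} (Hunt: 'three or four times')")
    print(f"CO2 from two tanks: ~{co2_kg:.0f} kg (Hunt rounds to ~300 kg, ten 30-kg suitcases)")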

Hunt stood up, walked slowly to the window, and gazed at the manicured Trinity College green. “I know this is all unpleasant,’’ he said. “Nobody wants it, but nobody wants to put high doses of poisonous chemicals into their body, either. That is what chemotherapy is, though, and for people suffering from cancer those poisons are often their only hope. Every day, tens of thousands of people take them willingly—because they are very sick or dying. This is how I prefer to look at the possibility of engineering the climate. It isn’t a cure for anything. But it could very well turn out to be the least bad option we are going to have.’’

The notion of modifying the weather dates back at least to the eighteen-thirties, when the American meteorologist James Pollard Espy became known as the Storm King, for his (prescient but widely ridiculed) proposals to stimulate rain by selectively burning forests. More recently, the U.S. government project Stormfury attempted for decades to lessen the force of hurricanes by seeding them with silver iodide. And in 2008 Chinese soldiers fired more than a thousand rockets filled with chemicals at clouds over Beijing to prevent them from raining on the Olympics. The relationship between carbon emissions and the earth’s temperature has been clear for more than a century: in 1908, the Swedish scientist Svante Arrhenius suggested that burning fossil fuels might help prevent the coming ice age. In 1965, President Lyndon Johnson received a report from his Science Advisory Committee, titled “Restoring the Quality of Our Environment,” that noted for the first time the potential need to balance increased greenhouse-gas emissions by “raising the albedo, or the reflectivity, of the earth.” The report suggested that such a change could be achieved by spreading small reflective particles over large parts of the ocean.

While such tactics could clearly fail, perhaps the greater concern is what might happen if they succeeded in ways nobody had envisioned. Injecting sulfur dioxide, or particles that perform a similar function, would rapidly lower the temperature of the earth, at relatively little expense—most estimates put the cost at less than ten billion dollars a year. But it would do nothing to halt ocean acidification, which threatens to destroy coral reefs and wipe out an enormous number of aquatic species. The risks of reducing the amount of sunlight that reaches the atmosphere on that scale would be as obvious—and immediate—as the benefits. If such a program were suddenly to fall apart, the earth would be subjected to extremely rapid warming, with nothing to stop it. And while such an effort would cool the globe, it might do so in ways that disrupt the behavior of the Asian and African monsoons, which provide the water that billions of people need to drink and to grow their food.

“Geoengineering” actually refers to two distinct ideas about how to cool the planet. The first, solar-radiation management, focusses on reducing the impact of the sun. Whether by seeding clouds, spreading giant mirrors in the desert, or injecting sulfates into the stratosphere, most such plans seek to replicate the effects of eruptions like Mt. Pinatubo’s. The other approach is less risky, and involves removing carbon directly from the atmosphere and burying it in vast ocean storage beds or deep inside the earth. But without a significant technological advance such projects will be expensive and may take many years to have any significant effect.

There are dozens of versions of each scheme, and they range from plausible to absurd. There have been proposals to send mirrors, sunshades, and parasols into space. Recently, the scientific entrepreneur Nathan Myhrvold, whose company Intellectual Ventures has invested in several geoengineering ideas, said that we could cool the earth by stirring the seas. He has proposed deploying a million plastic tubes, each about a hundred metres long, to roil the water, which would help it trap more CO2. “The ocean is this giant heat sink,’’ he told me. “But it is very cold. The bottom is nearly freezing. If you just stirred the ocean more, you could absorb the excess CO2 and keep the planet cold.” (This is not as crazy as it sounds. In the center of the ocean, wind-driven currents bring fresh water to the surface, so stirring the ocean could transform it into a well-organized storage depot. The new water would absorb more carbon while the old water carried the carbon it has already captured into the deep.)

The Harvard physicist Russell Seitz wants to create what amounts to a giant oceanic bubble bath: bubbles trap air, which brightens them enough to reflect sunlight away from the surface of the earth. Another tactic would require maintaining a fine spray of seawater—the world’s biggest fountain—which would mix with salt to help clouds block sunlight.

The best solution, nearly all scientists agree, would be the simplest: stop burning fossil fuels, which would reduce the amount of carbon we dump into the atmosphere. That fact has been emphasized in virtually every study that addresses the potential effect of climate change on the earth—and there have been many—but none have had a discernible impact on human behavior or government policy. Some climate scientists believe we can accommodate an atmosphere with concentrations of carbon dioxide that are twice the levels of the preindustrial era—about five hundred and fifty parts per million. Others have long claimed that global warming would become dangerous when atmospheric concentrations of carbon rose above three hundred and fifty parts per million. We passed that number years ago. After a decline in 2009, which coincided with the harsh global recession, carbon emissions soared by six per cent in 2010—the largest increase ever recorded. On average, in the past decade, fossil-fuel emissions grew at about three times the rate of growth in the nineteen-nineties.

Although the I.P.C.C., along with scores of other scientific bodies, has declared that the warming of the earth is unequivocal, few countries have demonstrated the political will required to act—perhaps least of all the United States, which consumes more energy than any nation other than China, and, last year, more than it ever had before. The Obama Administration has failed to pass any meaningful climate legislation. Mitt Romney, the presumptive Republican nominee, has yet to settle on a clear position. Last year, he said he believed the world was getting warmer—and humans were a cause. By October, he had retreated. “My view is that we don’t know what is causing climate change on this planet,” he said, adding that spending huge sums to try to reduce CO2 emissions “is not the right course for us.” China, which became the world’s largest emitter of greenhouse gases several years ago, constructs a new coal-burning power plant nearly every week. With each passing year, goals become exponentially harder to reach, and global reductions along the lines suggested by the I.P.C.C. seem more like a “pious wish,” to use the words of the Dutch chemist Paul Crutzen, who in 1995 received a Nobel Prize for his work on ozone depletion.

“Most nations now recognize the need to shift to a low-carbon economy, and nothing should divert us from the main priority of reducing global greenhouse gas emissions,’’ Lord Rees of Ludlow wrote in his 2009 foreword to a highly influential report on geoengineering released by the Royal Society, Britain’s national academy of sciences. “But if such reductions achieve too little, too late, there will surely be pressure to consider a ‘plan B’—to seek ways to counteract climatic effects of greenhouse gas emissions.’’

While that pressure is building rapidly, some climate activists oppose even holding discussions about a possible Plan B, arguing, as the Norfolk protesters did in September, that it would be perceived as indirect permission to abandon serious efforts to cut emissions. Many people see geoengineering as a false solution to an existential crisis—akin to encouraging a heart-attack patient to avoid exercise and continue to gobble fatty food while simply doubling his dose of Lipitor. “The scientist’s focus on tinkering with our entire planetary system is not a dynamic new technological and scientific frontier, but an expression of political despair,” Doug Parr, the chief scientist at Greenpeace UK, has written.

During the 1974 Mideast oil crisis, the American engineer Hewitt Crane, then working at S.R.I. International, realized that standard measurements for sources of energy—barrels of oil, tons of coal, gallons of gas, British thermal units—were nearly impossible to compare. At a time when these commodities were being rationed, Crane wondered how people could conserve resources if they couldn’t even measure them. The world was burning through twenty-three thousand gallons of oil every second. It was an astonishing figure, but one that Crane had trouble placing into any useful context.

Crane devised a new measure of energy consumption: a three-dimensional unit he called a cubic mile of oil. One cubic mile of oil would fill a pool that was a mile long, a mile wide, and a mile deep. Today, it takes three cubic miles’ worth of fossil fuels to power the world for a year. That’s a trillion gallons of gas. To replace just one of those cubic miles with a source of energy that will not add carbon dioxide to the atmosphere—nuclear power, for instance—would require the construction of a new atomic plant every week for fifty years; to switch to wind power would mean erecting thousands of windmills each month. It is hard to conceive of a way to replace that much energy with less dramatic alternatives. It is also impossible to talk seriously about climate change without talking about economic development. Climate experts have argued that we ought to stop emitting greenhouse gases within fifty years, but by then the demand for energy could easily be three times what it is today: nine cubic miles of oil.
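Crane's unit is easy to verify with standard conversion factors (mine, not Crane's): one cubic mile works out to roughly 1.1 trillion US gallons, and the 23,000-gallons-per-second rate quoted above corresponds to a bit under one cubic mile of oil per year.

    # Sanity check on the cubic-mile-of-oil unit (standard conversion factors)
    FEET_PER_MILE = 5280
    GALLONS_PER_CUBIC_FOOT = 7.48      # US gallons

    gallons_per_cubic_mile = FEET_PER_MILE ** 3 * GALLONS_PER_CUBIC_FOOT
    print(f"One cubic mile ~ {gallons_per_cubic_mile:.2e} US gallons")   # ~1.1e12

    # Rate implied by "twenty-three thousand gallons of oil every second":
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    gallons_per_year = 23_000 * SECONDS_PER_YEAR
    print(f"That rate is ~{gallons_per_year / gallons_per_cubic_mile:.1f} cubic miles of oil per year")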

The planet is getting richer as well as more crowded, and the pressure to produce more energy will become acute long before the end of the century. Predilections of the rich world—constant travel, industrial activity, increasing reliance on meat for protein—require enormous physical resources. Yet many people still hope to solve the problem of climate change just by eliminating greenhouse-gas emissions. “When people talk about bringing emissions to zero, they are talking about something that will never happen,’’ Ken Caldeira told me. “Because that would require a complete alteration in the way humans are built.”

Caldeira began researching geoengineering almost by accident. For much of his career, he has focussed on the implications of ocean acidification. During the nineteen-nineties, he spent a year in the Soviet Union, at the Leningrad lab of Mikhail Budyko, who is considered the founder of physical climatology. It was Budyko, in the nineteen-sixties, who first suggested cooling the earth by putting sulfur particles in the sky.

“In the nineteen-nineties, when I was working at Livermore, we had a meeting in Aspen to discuss the scale of the energy-system transformation needed in order to address the climate problem,’’ Caldeira said. “Among the people who attended was Lowell Wood, a protégé of Edward Teller. Wood is a brilliant but sometimes erratic man . . . lots of ideas, some better than others.” At Aspen, Wood delivered a talk on geoengineering. In the presentation, he explained, as he has many times since, that shielding the earth properly could deflect one or two per cent of the sunlight that reaches the atmosphere. That, he said, would be all it would take to counter the worst effects of warming.

David Keith was in the audience with Caldeira that day in Aspen. Keith now splits his time between Harvard and Calgary, where he runs Carbon Engineering, a company that is developing new technology to capture CO2 from the atmosphere—at a cost that he believes would make it sensible to do so. At the time, though, both men considered Wood’s idea ridiculous. “We said this will never happen,’’ Caldeira recalled. “We were so certain Wood was nuts, because we assumed you can change the global mean temperature, but you will still get seasonal and regional patterns you can’t correct. We were in the back of the room, and neither of us could believe it.”

Caldeira decided to prove his point by running a computer simulation of Wood’s approach. Scenarios for future climate change are almost always developed using powerful three-dimensional models of the earth and its atmosphere. They tend to be most accurate when estimating large numbers, like average global temperatures. Local and regional weather patterns are more difficult to predict, as anyone who has relied on a five-day weather forecast can understand. Still, in 1998 Caldeira tested the idea, and, “much to my surprise, it seemed to work and work well,” he told me. It turned out that reducing sunlight offset the effect of CO2 both regionally and seasonally. Since then, his results have been confirmed by several other groups.

Recently, Caldeira and colleagues at Carnegie and Stanford set out to examine whether the techniques of solar-radiation management would disrupt the sensitive agricultural balance on which the earth depends. Using two models, they simulated climates with carbon-dioxide levels similar to those which exist today. They then doubled those concentrations to reflect levels that would be likely in several decades if current trends continue unabated. Finally, in a third set of simulations, they doubled the CO2 in the atmosphere, but added a layer of sulfate aerosols to the stratosphere, which would deflect about two per cent of incoming sunlight from the earth. The data were then applied to crop models that are commonly used to project future yields. Again, the results were unexpected.

Farm productivity, on average, went up. The models suggested that precipitation would increase in the northern and middle latitudes, and crop yields would grow. In the tropics, though, the results were significantly different. There heat stress would increase, and yields would decline. “Climate change is not so much a reduction in productivity as a redistribution,’’ Caldeira said. “And it is one in which the poorest people on earth get hit the hardest and the rich world benefits”—a phenomenon, he added, that is not new.

“I have two perspectives on what this might mean,’’ he said. “One says: humans are like rats or cockroaches. We are already living from the equator to the Arctic Circle. The weather has already become .7 degrees warmer, and barely anyone has noticed or cares. And, yes, the coral reefs might become extinct, and people from the Seychelles might go hungry. But they have gone hungry in the past, and nobody cared. So basically we will live in our gated communities, and we will have our TV shows and Chicken McNuggets, and we will be O.K. The people who would suffer are the people who always suffer.

“There is another way to look at this, though,’’ he said. “And that is to compare it to the subprime-mortgage crisis, where you saw that a few million bad mortgages led to a five-per-cent drop in gross domestic product throughout the world. Something that was a relatively small knock to the financial system led to a global crisis. And that could certainly be the case with climate change. But five per cent is an interesting figure, because in the Stern Report’’—an often cited review led by the British economist Nicholas Stern, which signalled the alarm about greenhouse-gas emissions by focussing on economics—“they estimated climate change would cost the world five per cent of its G.D.P. Most economists say that solving this problem is one or two per cent of G.D.P. The Clean Water and Clean Air Acts each cost about one per cent of G.D.P.,” Caldeira continued. “We just had a much worse shock to our banking system. And it didn’t even get us to reform the economy in any significant way. So why is the threat of a five-per-cent hit from climate change going to get us to transform the energy system?”

Solar-radiation management, which most reports have agreed is technologically feasible, would provide, at best, a temporary solution to rapid warming—a treatment but not a cure. There are only two ways to genuinely solve the problem: by drastically reducing emissions or by removing the CO2 from the atmosphere. Trees do that every day. They “capture” carbon dioxide in their leaves, metabolize it in the branch system, and store it in their roots. But to do so on a global scale would require turning trillions of tons of greenhouse-gas emissions into a substance that could be stored cheaply and easily underground or in ocean beds.

Until recently, the costs of removing carbon from the atmosphere on that scale have been regarded by economists as prohibitive. CO2 needs to be heated in order to be separated out; using current technology, the expense would rival that of creating an entirely new energy system. Typically, power plants release CO2 into the atmosphere through exhaust systems referred to as flues. The most efficient way we have now to capture CO2 is to remove it from flue gas as the emissions escape. Over the past five years, several research groups—one of which includes David Keith’s company, Carbon Engineering, in Calgary—have developed new techniques to extract carbon from the atmosphere, at costs that may make it economically feasible on a larger scale.

Early this winter, I visited a demonstration project on the campus of S.R.I. International, the Menlo Park institution that is a combination think tank and technological incubator. The project, built by Global Thermostat, looked like a very high-tech elevator or an awfully expensive math problem. “When I called chemical engineers and said I want to do this on a planetary scale, they laughed,’’ Peter Eisenberger, Global Thermostat’s president, told me. In 1996, Eisenberger was appointed the founding director of the Earth Institute, at Columbia University, where he remains a professor of earth and environmental sciences. Before that, he spent a decade running the materials research institute at Princeton University, and nearly as much time at Exxon, in charge of research and development. He believes he has developed a system to capture CO2 from the atmosphere at low heat and potentially at low cost.

The trial project is essentially a five-story brick edifice specially constructed to function like a honeycomb. Global Thermostat coats the bricks with chemicals called amines to draw CO2 from the air and bind with it. The carbon dioxide is then separated with a proprietary method that uses low-temperature heat—something readily available for free, since it is a waste product of many power plants. “Using low-temperature heat changes the equation,’’ Eisenberger said. He is an excitable man with the enthusiasm of a graduate student and the manic gestures of an orchestra conductor. He went on to explain that the amine coating on the bricks binds the CO2 at the molecular level, and the amount it can capture depends on the surface area; honeycombs provide the most surface space possible per square metre.

There are two groups of honeycombs that sit on top of each other. As Eisenberger pointed out, “You can only absorb so much CO2 at once, so when the honeycomb is full it drops into a lower section.” Steam heats and releases the CO2—and the honeycomb rises again. (Currently, carbon dioxide is used commercially in carbonated beverages, brewing, and pneumatic drying systems for packaged food. It is also used in welding. Eisenberger argues that, ideally, carbon waste would be recycled to create an industrial form of photosynthesis, which would help reduce our dependence on fossil fuels.)

Unlike some other scientists engaged in geoengineering, Eisenberger is not bothered by the notion of tinkering with nature. “We have devised a system that introduces no additional threats into the environment,’’ he told me. “And the idea of interfering with benign nature is ridiculous. The Bambi view of nature is totally false. Nature is violent, amoral, and nihilistic. If you look at the history of this planet, you will see cycles of creation and destruction that would offend our morality as human beings. But somehow, because it’s ‘nature,’ it’s supposed to be fine.’’ Eisenberger founded and runs Global Thermostat with Graciela Chichilnisky, an Argentine economist who wrote the plan, adopted in 2005, for the international carbon market that emerged from the Kyoto Climate talks. Edgar Bronfman, Jr., an heir to the Seagram fortune, is Global Thermostat’s biggest investor. (The company is one of the finalists for Richard Branson’s Virgin Earth Challenge prize. In 2007, Branson offered a cash prize of twenty-five million dollars to anyone who could devise a process that would drain large quantities of greenhouse gases from the atmosphere.)

“What is fascinating for me is the way the innovation process has changed,’’ Eisenberger said. “In the past, somebody would make a discovery in a laboratory and say, ‘What can I do with this?’ And now we ask, ‘What do we want to design?,’ because we believe there is powerful enough knowledge to do it. That is what my partner and I did.” The pilot, which began running last year, works on a very small scale, capturing about seven hundred tons of CO2 a year. (By comparison, an automobile puts out about six tons a year.) Eisenberger says that it is important to remember that it took more than a century to assemble the current energy system: coal and gas plants, factories, and the worldwide transportation network that has been responsible for depositing trillions of tons of CO2 into the atmosphere. “We are not going to get it all out of the atmosphere in twenty years,’’ he said. “It will take at least thirty years to do this, but if we start now that is plenty of time. You would just need a source of low-temperature heat—factories anywhere in the world are ideal.” He envisions a network of twenty thousand such devices scattered across the planet. Each would cost about a hundred million dollars—a two-trillion-dollar investment spread out over three decades.
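The scale of that vision can be put in perspective with the figures Eisenberger gives: the pilot's seven hundred tons a year offsets the emissions of roughly a hundred or so cars, and twenty thousand devices at about a hundred million dollars each is where the two-trillion-dollar total comes from. A minimal check, using only the numbers quoted above:

    # Arithmetic on the Global Thermostat figures quoted above (illustrative only)
    PILOT_CAPTURE_TONS_PER_YEAR = 700     # pilot plant capture rate
    CAR_EMISSIONS_TONS_PER_YEAR = 6       # per-car figure given in the article
    UNITS_ENVISIONED = 20_000
    COST_PER_UNIT_USD = 100e6             # "about a hundred million dollars" each

    cars_offset = PILOT_CAPTURE_TONS_PER_YEAR / CAR_EMISSIONS_TONS_PER_YEAR
    total_cost = UNITS_ENVISIONED * COST_PER_UNIT_USD

    print(f"Pilot capture equals the annual emissions of about {cars_offset:.0f} cars")
    print(f"20,000 units at $100 million each: ${total_cost / 1e12:.0f} trillion")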

“There is a strong history of the system refusing to accept something new,” Eisenberger said. “People say I am nuts. But it would be surprising if people didn’t call me crazy. Look at the history of innovation! If people don’t call you nuts, then you are doing something wrong.”

After leaving Eisenberger’s demonstration project, I spoke with Curtis Carlson, who, for more than a decade, has been the chairman and chief executive officer of S.R.I. and a leading voice on the future of American innovation. “These geoengineering methods will not be implemented for decades—or ever,” he said. Nonetheless, scientists worry that if methane emissions from the Arctic increase as rapidly as some of the data now suggest, climate intervention isn’t going to be an option. It’s going to be a requirement. “When and where do we have the serious discussion about how to intervene?” Carlson asked. “There are no agreed-upon rules or criteria. There isn’t even a body that could create the rules.”

Over the past three years, a series of increasingly urgent reports—from the Royal Society, in the U.K., the Washington-based Bipartisan Policy Center, and the Government Accountability Office, among other places—have practically begged decision-makers to begin planning for a world in which geoengineering might be their only recourse. As one recent study from the Wilson International Center for Scholars concluded, “At the very least, we need to learn what approaches to avoid even if desperate.”

The most environmentally sound approach to geoengineering is the least palatable politically. “If it becomes necessary to ring the planet with sulfates, why would you do that all at once?’’ Ken Caldeira asked. “If the total amount of climate change that occurs could be neutralized by one Mt. Pinatubo, then doesn’t it make sense to add one per cent this year, two per cent next year, and three per cent the year after that?’’ he said. “Ramp it up slowly, throughout the century, and that way we can monitor what is happening. If we see something at one per cent that seems dangerous, we can easily dial it back. But who is going to do that when we don’t have a visible crisis? Which politician in which country?’’

Unfortunately, the least risky approach politically is also the most dangerous: do nothing until the world is faced with a cataclysm and then slip into a frenzied crisis mode. The political implications of any such action would be impossible to overstate. What would happen, for example, if one country decided to embark on such a program without the agreement of other countries? Or if industrialized nations agreed to inject sulfur particles into the stratosphere and accidentally set off a climate emergency that caused drought in China, India, or Africa?

“Let’s say the Chinese government decides their monsoon strength, upon which hundreds of millions of people rely for sustenance, is weakening,” Caldeira said. “They have reason to believe that making clouds right near the ocean might help, and they started to do that, and the Indians found out and believed—justifiably or not—that it would make their monsoon worse. What happens then? Where do we go to discuss that? We have no mechanism to settle that dispute.”

Most estimates suggest that it could cost a few billion dollars a year to scatter enough sulfur particles in the atmosphere to change the weather patterns of the planet. At that price, any country, most groups, and even some individuals could afford to do it. The technology is open and available—and that makes it more like the Internet than like a national weapons program. The basic principles are widely published; the intellectual property behind nearly every technique lies in the public domain. If the Maldives wanted to send airplanes into the stratosphere to scatter sulfates, who could stop them?

“The odd thing here is that this is a democratizing technology,’’ Nathan Myhrvold told me. “Rich, powerful countries might have invented much of it, but it will be there for anyone to use. People get themselves all balled up into knots over whether this can be done unilaterally or by one group or one nation. Well, guess what. We decide to do much worse than this every day, and we decide unilaterally. We are polluting the earth unilaterally. Whether it’s life-taking decisions, like wars, or something like a trade embargo, the world is about people taking action, not agreeing to take action. And, frankly, the Maldives could say, ‘Fuck you all—we want to stay alive.’ Would you blame them? Wouldn’t any reasonable country do the same?” ♦

ILLUSTRATION: NISHANT CHOKSI

Read more: http://www.newyorker.com/reporting/2012/05/14/120514fa_fact_specter

Risk Is Serious Business (JC)

JC e-mail 4364, October 14, 2011.

Article by Francisco G. Nóbrega, sent to JC Email by the author.

Modern society is awash in communication. Since “good news is no news,” the human psychological lens always registers a scenario worse than reality. The usual perception is that risks of every kind are increasing day by day. The global decline in violence, for example, is the subject of a recent book by the Harvard University psychologist Steven Pinker (http://www.samharris.org/blog/item/qa-with-steven-pinker). Against the grain of common sense, he demonstrates, objectively, that we are making progress on this front.

But our minds never rest in their keen capacity to detect other sources of risk. We have a few ratings champions: nuclear power for electricity, genetically modified foods, and catastrophic anthropogenic global warming. Objectively, the potential harm of these three threats has not materialized at all, although the third, according to its proponents, is due to arrive in the future. People are enchanted by the automobile and its ever more attractive accessories. Nobody thinks of banning it, even though it results in roughly 40,000 deaths and countless disabling injuries every year in Brazil alone. David Ropeik, of the Harvard Center for Risk Analysis, explains how easily the real danger of a situation can be distorted. The further removed from everyday experience (as with radiation and genetically modified plants), the more easily risks are manipulated, out of ignorance or other interests, terrifying ordinary citizens. Ropeik explains how this senseless fear becomes a stress factor and an objective health risk in its own right, and should therefore be avoided.

Within this universe, the concerns of Dr. Ferraz (“O feijão nosso de cada dia,” Jornal da Ciência, 6/10/2011) are understandable. He is a member of CTNBio, serves on its plant/environmental sectoral committee, and his area of concentration is agroecology, which explains, at least in part, his doubts. These concerns, however, do not have the substance the author suggests, and the CTNBio analysis that resulted in the approval of this bean is reliable.

The commission is always guided by the directives of the legislation, which are broad in order to cover every possible risk to consumers and the environment. The technical body exists precisely to act selectively and deliberately, examining each case on its own merits. The tests are reviewed with the rigor that the modification introduced into the plant requires for full safety. If the modifications are judged to pose no significant risk, the tests are evaluated in light of that fact.

Tests with large numbers of animals, highly reliable statistically, would be required by the commission if a transgenic plant were to produce, for example, a non-protein pesticide molecule, which would be in every respect similar to a drug produced by the pharmaceutical industry. This may well happen at some point, since plants are capable of producing the most varied natural pesticides to defend themselves in the wild. Such a substance would be absorbed in the intestine and spread through organs and tissues, possibly exerting systemic and localized effects that demand evaluation. This has already happened, unintentionally, with a potato produced by conventional breeding in the United States. Its consumption caused illness and it was hastily recalled: it carried high levels of glycoalkaloids toxic to humans, which explained its excellent resistance to crop pests.

In the case of the Embrapa bean, no new non-protein molecule is produced, and the small RNA that interferes with viral replication, should anyone eat the leaves and stems, is just one among the hundreds or thousands of RNAs we ingest daily with any plant product. The introduced RNA was not, in any case, detected in the cooked bean grain, even using extremely powerful techniques.

The variations that were detected, even if statistically significant (the concentration of vitamin B2 or cysteine, for example), pose no risk whatsoever. The classic technique of tissue culture, used to generate quality varieties in horticulture and in tree propagation, is known to produce natural variation that introduces some desirable modifications and some undesirable ones, which the breeder then sorts out. This is somaclonal variation, which also affects genetically modified clones during their selection phase.

It is therefore naive, to say the least, to claim that the Embrapa 5.1 bean “should be identical” to the parent variety, since the manipulations required to generate the transgenic line result in certain alterations which, if irrelevant, are ignored and, if harmful, are discarded by the scientists. If we ran the same analyses, whose results worry some people, on the many conventional varieties consumed in this country, the differences would be striking and irrelevant to the question of safety.

As has already been noted, there is no factual basis (biochemical or genetic) for imagining that the Embrapa bean poses a greater risk than an ordinary bean, or one improved through chemical or physical mutagenesis, which, incidentally, receives no nutritional or molecular oversight before commercialization. Without a biological rationale, the tests become superfluous formalities, and experimental noise, especially with small samples, will almost inevitably produce results that are irrelevant unless the number of animals is greatly increased (for both control and transgenic groups), and it would also be prudent to include animals fed other conventional beans, to get a realistic sense of what the detected variations mean. Imagine the cost of this ghost hunt, triggered simply by a poorly informed application of the precautionary principle. The baseless concerns raised at every turn by those who fear the technology would apply with greater logic to conventional products.

Were that to happen, the planet's agricultural production would become unviable overnight. Why not run Rhizobium and nodulation studies on every bean sold commercially? Why not conduct long-term nutritional studies of the conventional foods derived from mutagenesis? What logical reason exempts conventional plants from these concerns? Or is the reason metaphysical? Is the introduced alteration “against nature,” something like original sin, which in many interpretations consisted merely of eating the fruit of the “tree of knowledge”? Recently, 41 Swedish plant scientists issued a manifesto against the over-regulation of modern genetics in Europe (reproduced on the GenPeace blog: genpeace.blogspot.com). The authors observe that, drawing a parallel with the requirements for pharmaceutical products, the “logic of the current legislation suggests that only drugs produced through genetic engineering should be evaluated for undesirable effects.”

Instilling fear on the basis of suppositions does nothing to protect the public or the environment. Marie Curie is said to have remarked that “Nothing in life is to be feared. It is only to be understood.” I consider it irresponsible to use the “precautionary principle” the way some do. Even the WHO has fallen into this trap, classifying cell phones in group 2B for cancer risk. The radiation from these devices is roughly a million times below the energy required to produce free radicals and damage DNA. Class 2B also covers the cancer risk associated with coffee, residues from the burning of fossil fuels, and the wearing of dentures. What the WHO has irresponsibly kept alive is the justification for doubt, which will legitimize expensive and irrelevant studies whose results will be inconclusive, like the earlier mega-study. Far more dangerous, incredibly, is using a cell phone while driving.

Francisco G. da Nóbrega is a professor at the Universidade de São Paulo (USP).

DO YOU KNOW WHO YOU’RE TALKING TO? (TRIP)

Roberto da Matta reflects on how limits are both our greatest achievements and our greatest risks

TRIP 196 – 14.02.2011 | Text by Roberto da Matta
Photos: http://www.flickr.com/commons

It is curious, to say the least, that the living being most conscious of its own death, the animal most certain that its only certainty is a final and definitive limit, death, is also the creature that most invents and questions limits. Its own limits and those of others. More those of others than its own.

Reflection on limits, on what is enough or sufficient for each of us (and consequently for others), is the result of more equality, liberty, opportunity, purchasing power, and of what we call “modernity”: of markets, electoral competition, and democracy. Of the consistent operation of a system that has at its center the individual citizen, free and equal before the law. Every society that has undergone a sharp transformation toward greater equality, coupled with a keener consciousness of liberty, lives an apparent paradox. How do we enjoy liberty and equality without offending others and, beyond that, without driving the system into an anarchy and a chaos in which some can do anything, the other does not exist and, as a consequence, whoever holds important positions, above all in government and the State, ends up becoming a petty tyrant, so that instead of equality and limits we have just the opposite: a hierarchy, and the enrichment of the powerful by means of the very thing that is the clearest test of limits and equality, the electoral system that elected them?

— II —


At this moment, when Brazil is consolidating its democracy and becoming a global actor, it is crucial to discuss the balance between what we aspire to build as a more just and humane collectivity and the laws and norms that, acting on all of us and governing, so to speak, the democratic game that has now been played for a considerable time, given our republican history, limit our movements by indicating what it is correct and ethical to do.

It does not seem an easy task to reconcile desires (which are generally unlimited and hate controls) with the fundamental matter of following rules, obeying laws, and building safe, egalitarian public spaces valid for everyone, in a society that also has a clearly aristocratic and hierarchical side. A system that loves democracy but also likes to use the “Do you know who you’re talking to?”, which is precisely the proof of that side, as I argued in Carnavais, malandros e heróis, a book published, imagine, in 1979!

There I discovered our simultaneous love of equality and, alongside it, our affection for familism and partisanship, governed by that ethic of condescension we know so well, which says: we are different and we have a biography; for friends, everything; for enemies (and strangers, those we do not know), the law!

Nothing shows our aversion to limits more clearly than this refusal to obey the law, the public office to which we were elected, or the traffic light. A person who, as I say in the essay cited above, was not raised to think about limits, because we all are (or were) mama's little darlings, brought up in environments where we knew perfectly well who was superior and who was subordinate, who gave orders and who obeyed, cannot function as an equal in the street, where nobody belongs to anybody and nobody knows who anyone else is.


The difficulty of comfortably deploying the "Do you know who you're talking to?" stems from the massification of Brazilian society, which, with rising incomes and mechanisms designed to expand consumption among the poorest strata, makes everyone much more alike and, in a way, forces the millionaire son of a traditional family and the bricklayer, the baker, the waiter, the student, the factory worker, and the domestic employee alike to stand in line. And, in that line, to think that we really are all equal in certain public situations, because the other person's limit guarantees my own.

The result of this stance, basic in any democracy, is simple but often ignored among us: my theoretically unlimited freedom has to adjust to yours, and the two end up producing a voluntary conformity with limits, with civic boundaries that cannot be crossed, such as cutting in line or pulling rank by flashing one's credentials.

In its simplicity, the line is one of the best examples, if not the best, of how limits operate in a democracy. Its principles are simple and revealing: whoever arrives first is served first. In a line, therefore, nothing hidden counts. Either we have a clear row of people, one behind the other, or everything falls apart. When I was a boy, I remember well how impossible it was to have a line in Brazil. Old ladies and important people (above all politicians) would not accept its rules and would invoke, as an argument for being served ahead of everyone else, their age, their office, their acquaintance with the person doing the serving, or some family tie. After all, who is going to leave grandma waiting and then get scolded at home? Today, we know that the elderly and the disabled do not have to wait in line. But we are equally alert to the fact that an office or a friendship does not turn someone into a super-citizen with unlimited powers over those who have been suffering in line for hours. The first-come, first-served principle reveals another dimension of democracy and of limits that deserves to be discussed as well.

I refer to the fact that the line moves (or should move!). It is built, like everything governed by simple rules known to all, on the principle of rotation. If "the line moves", it makes the last person end up first, and whoever was at the front must leave once they have been served. More than that: if he (or she) wants to come back, it is to the "end of the line". Now, is this not a fine example of the limits that make everyone equal, making them first or last and thereby making first and last relative? At one time and in one place I am first; at another I am just an ordinary person who simply follows the general norms of citizenship. But I know, and this is a capital point, that whether in first place or in last, I have limits and tolerances, rights no doubt, but also a pile of duties. Once served, I give way to someone else, who does the same for the next person, and so, my friends, the line of democracy moves!

Just as in a soccer match or in a liberal, competitive political contest, the line requires conformity with the rules, with the limits imposed by the contest, as well as a minimum of honor before them. If I join the line, I expect everyone to honor my place and their own. That is: my sense of limits is awakened by the sense of limits of others. If, in a political contest, a party does not follow the rules and buys politicians and votes, then the system of competition is shaken or ceases to exist. Every player wants to win, every striker wants the winning goal, but he cannot win by breaking his opponents' legs.


In the same way and by the same logic, no one can always be first in line (or last), just as no one can be champion forever. If that happens, that is, if a champion team changes the rules in order to remain champion forever, then soccer goes to hell; it simply ends the game as a contest. In a contest, the adversary is not an enemy, just as, in a line, the person ahead of you is not a superior. Unlimited power, frozen or fixed in people or parties, as happens in dictatorships, destroys democracy precisely because it usurps the limits on which the line is based, because it ends the contest and the banal but basic hope that the line moves and that tomorrow we may be champions! The end of the rotation of power, which compels respect for everyone's limits, is the root of the authoritarianisms that are unthinkable in Brazil today. Without that rotation, the opposition and the left would not be in power, honoring and helping to prove that, where there is a contest, someone will lose or win.

— III —

I close with a story that is, in fact, a parable, one that speaks as much of democracy as of capitalism, with its power to arouse envy and to aristocratize through money.

The story goes that, at a gathering in the mansion of an American billionaire, the writer Kurt Vonnegut Jr. (author of, among other books, the astonishing Slaughterhouse-Five) asked his colleague Joseph Heller (author of the no less unsettling Catch-22): "Joe, doesn't it bother you to know that this guy makes more in one day than you have ever earned from the sales of Catch-22 worldwide?" Heller answered: "No, because I have something that guy doesn't have." Vonnegut looked hard at him and said: "And what could you possibly have that this fellow doesn't already have?" Heller's answer: "I know the meaning of the word enough."

Now, it is precisely this enough that makes us resistant to the power of money as an end and absolute value, capable of suspending limits in a society of equals, and that keeps us attentive to a very important dimension of life. It is what allows us to value what we are and what we have, the way we live, our pleasures and our choices. It is this reflection on what is enough for us that lets us see with the naked eye that no one can have (or has) everything. And, if no one can have everything, then we all have something.


The idea of sufficiency and of limits, therefore, brings back an important and nonconformist human dimension: the dimension that ensures, by crooked lines to be sure, that no human being can be beautiful, rich, healthy, and happy all at the same time. The setbacks of life, which put us sometimes at the end and sometimes at the front of the line, which give us the impression of impotence or of omnipotence, have a great deal to do with this reflection, which we rarely undertake in Brazil. Namely: what do we want from our country and from this world? What do we need, and in what quantity or on what scale? Could it be that, being who I am, I do not have more than the richest of the rich or the most powerful of the powerful? After all, equality in difference is an alternative among styles of being. One cannot deny the value of money, but one cannot accept that money is everything and that love, compassion, honesty, honor, and the joy of living in harmony with oneself are inferior to wealth or power. After all, what would life be without those small great pleasures and delights that are, in fact, its salt and pepper? Is it worth being unhappy with a large bank account, or happy with a modest one? Or, who knows, living without ever going to the bank at all?

Because, after all, the limit is not only in external things; it is in all of us, complex mortals destined for pleasure and for suffering in this marvelous and singular vale of tears, in this endless line which, by moving, obliges us to converse with our limits and with the unlimited side in each one of us.

*Anthropologist, writer, and professor at PUC-RJ. Author of numerous essays on tribal societies and on Brazil, including Um Mundo Dividido; Carnavais, malandros e heróis; O que faz o Brasil, Brasil; and Relativizando: uma introdução à antropologia social, all published by Rocco. His most recent book, Fé em Deus e pé na tábua, is an essay on traffic in Brazil. DaMatta writes a weekly column in the newspapers O Estado de São Paulo, O Globo, and Diário de Fortaleza.